I attended a seminar discussing how to encourage reproducibility in scientific research of the Internet. Obviously, everybody agrees that it is desirable for research findings to be reproduced by independent studies (and I mean reproduced, not just repeated, although repeated is better than nothing). The question, however, is how to get there. For me this is largely an issue of incentives, and I am sure there are a couple of things that can be done to increase incentives to reproduce research. I particularly liked the idea of organizing "repathons": reproducibility hackathons where students reproducing work can meet the authors of the papers they are trying to reproduce. I also believe research funding organizations will need to use their leverage to make it easier to reproduce research.
What strikes me as odd, however, is how conservative and traditional some of the proposed solutions were. Some people suggested tight hierarchical control structures to deal with possible side effects and misuse scenarios, completely ignoring that there are systems that work by essentially crowdsourcing the problem. An example is StackExchange, which uses reputation metrics to rank the most useful answers. Other scientific communities tend to be much more open to picking up novel systems such as ResearchGate, a social network for scientists and researchers. It seems computer scientists are among the last to eat their own dogfood, while other disciplines appear much more willing to use the technologies that the Internet provides.