Preregistration in a nutshell

Separate the generation and confirmation of hypotheses:

  1. Come up with an exciting research question
  2. Write a paper proposal without confirmatory experiments
  3. After the paper is accepted, run the experiments and report your results

What does science get?

  • A healthy mix of positive and negative results
  • Reasonable ideas that don’t work still get published, avoiding wasteful replications
  • Papers are evaluated on the basis of scientific interest, not whether they achieve the best results

What do you get?

  • It's easier to plan your research: get feedback before investing in lengthy experiments
  • Your research is stronger: results have increased credibility
  • It's easier to convince people that they will learn something even if the result is negative

Workshop program and videos

2nd November

13:30  David Forsyth (UIUC): Error analysis, experimental protocols and the replication crisis (video)
14:00  Zachary Lipton (CMU): Troubling trends in machine learning scholarship (video)
14:30  Workshop and pre-registration overview (video)
14:45  Contributed talk: An empirical study of the relation between network architecture and complexity (Emir Konuk and Kevin Smith) (video)
15:00  Poster session and coffee break
15:45  Michela Paganini (FAIR): Preregistration and blind analysis (video)
16:15  Contributed talk: Learning representational invariance instead of categorization (Alex Hernández and Peter König) (video)
16:30  Oguzhan Gencoglu (topdatascience.com): The HARK side of deep learning (video)
17:00  Contributed talk: Towards generalizable distance estimation by leveraging graph information (Todd Houghton and John Kevin Cava) (video)
17:15  Discussion and closing remarks (video)

Proceedings

Emir Konuk, Kevin Smith. An empirical study of the relation between network architecture and complexity. [PDF] [Bib]
Alex Hernández-García, Peter König. Learning representational invariance instead of categorization. [PDF] [Bib]
John Kevin Cava, Todd Houghton, Hongbin Yu. Towards generalizable distance estimation by leveraging graph information. [PDF] [Bib]
Eimear O'Sullivan, Stefanos Zafeiriou. Extending convolutional pose machines for facial landmark localization in 3D point clouds. [PDF] [Bib]
Mohamed Abbas Hedjazi, Yakup Genç. Learning to inpaint by progressively growing the mask regions. [PDF] [Bib]

Frequently asked questions

  • Don't we need a positive publication bias? After all, there are many more ideas that don't work than ones that do. Why is it useful to allow negative results? There are several benefits to publishing negative results. If an idea is well-motivated and intuitively appealing, it may be needlessly repeated by multiple groups who replicate the negative outcome but have no venue for sharing this knowledge with the community (see the CVPR 2017 workshop on negative results for a more detailed discussion of the benefits of publishing negative outcomes).
  • How does exploratory data analysis fit into this model? Exploratory analysis can come in multiple forms, including: (1) small-scale experiments (typically on toy data); (2) results listed in prior work. Both should be reported in the proposal paper as part of the justification for your idea. Neither should be considered by the reader of your paper as providing confirmatory evidence in support of your hypothesis (the goal of preregistration is to make this distinction explicit). By contrast, the confirmatory experimental protocol which you propose should seek to rigorously evaluate your hypothesis, and must be performed on different data from your own exploratory experiments (the first sketch after this list illustrates one way to keep the two subsets disjoint). For practical reasons, however, it may use datasets that have also been previously used in the literature (further discussion below).
  • What's the rationale for changing the review model? “Preregistration separates hypothesis-generating (exploratory) from hypothesis-testing (confirmatory) research. Both are important. But the same data cannot be used to generate and test a hypothesis, which can happen unintentionally and reduces the credibility of your results. Addressing this problem through planning improves the quality and transparency of your research, helping others who may wish to build on it.” (source: cos.io)
  • Will the papers be published in the ICCV proceedings? Yes. The proposal papers will be published in the ICCV proceedings, and available on IEEE Xplore and CVF Open Access. Due to the non-standard nature of the review process (experiments are published only many months after review), the results will be made available as an addendum to the proposal paper.
  • Doesn't prior work on existing benchmarks weaken my confirmatory experiments? Yes. Each prior result reported on a dataset leaks information that reduces its statistical utility (we are strongly in favour of limited-evaluation benchmarks for this reason; the second sketch after this list shows the idea). Unfortunately, from a pragmatic perspective, it is infeasible to expect every computer vision researcher to collect a new dataset for each hypothesis they wish to evaluate, so we must strike a reasonable balance here.
  • Is it OK to make changes to the preregistered experimental protocol? Although you should endeavour to follow the proposed experimental protocol as closely as possible, you may find that it is necessary to make small changes or refinements. These changes should be carefully documented when reporting the experimental results: it is important to make clear which protocols were modified after observing the evidence (the third sketch after this list suggests one structured way to record such deviations).
  • Where can I find more information about preregistration? There are a number of good resources for further reading around the ideas related to preregistration, including, but not limited to:
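To make the exploratory/confirmatory separation discussed in the FAQ concrete, here is a minimal sketch in Python. It assumes scikit-learn is available; the function name, the 10% exploratory fraction, and the fixed seed are illustrative choices, not a prescribed protocol.

```python
# Minimal sketch: keep exploratory and confirmatory data disjoint.
# Assumes scikit-learn; names and the split fraction are illustrative.
from sklearn.model_selection import train_test_split

def split_for_preregistration(samples, labels, exploratory_fraction=0.1, seed=0):
    """Carve off a small exploratory subset and reserve the remainder
    for the preregistered confirmatory protocol."""
    explore_x, confirm_x, explore_y, confirm_y = train_test_split(
        samples, labels,
        train_size=exploratory_fraction,  # small portion for exploration
        random_state=seed,                # fixed seed: reproducible split
        stratify=labels,                  # preserve class balance
    )
    # Only the exploratory subset should be inspected before the proposal
    # is reviewed; the confirmatory subset stays untouched until then.
    return (explore_x, explore_y), (confirm_x, confirm_y)
```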
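The second sketch illustrates the limited-evaluation benchmarks mentioned above: every test-set query consumes a fixed budget, so repeated tuning against the test set is blocked by construction. The class and its default budget are hypothetical, not an existing benchmark API.

```python
# Minimal sketch: a benchmark that enforces a fixed test-set evaluation
# budget. The class and default budget are illustrative, not a real API.
class LimitedEvaluationBenchmark:
    def __init__(self, test_samples, test_labels, max_evaluations=3):
        self._samples = list(test_samples)
        self._labels = list(test_labels)
        self._remaining = max_evaluations

    def evaluate(self, predict):
        """Score a prediction function on the held-out test set,
        consuming one evaluation from the budget."""
        if self._remaining == 0:
            raise RuntimeError("Evaluation budget exhausted: gather new "
                               "data or preregister a new protocol.")
        self._remaining -= 1
        correct = sum(predict(x) == y
                      for x, y in zip(self._samples, self._labels))
        return correct / len(self._labels)
```

Shared across a community, such a budget makes test-set queries, rather than leaderboard positions, the scarce resource.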
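Finally, a possible structure for documenting deviations from a preregistered protocol, as the FAQ recommends. The fields (and the example entry) are one plausible choice, not a required format.

```python
# Minimal sketch: a structured record of deviations from the
# preregistered protocol. Field names and the example are illustrative.
from dataclasses import dataclass

@dataclass
class ProtocolDeviation:
    step: str       # which preregistered step was changed
    change: str     # what was done instead
    reason: str     # why the change was necessary
    post_hoc: bool  # True if decided after observing the evidence

deviations = [
    ProtocolDeviation(
        step="optimiser",
        change="lowered the learning rate from 0.1 to 0.01",
        reason="training diverged with the preregistered value",
        post_hoc=True,
    ),
]
```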

Organisers

João F. Henriques (University of Oxford)
Samuel Albanie (University of Oxford)
Luca Bertinetto (FiveAI)
Jack Valmadre (Google Research)

Questions?