Speakers

(Video, Slides) Joelle Pineau

FAIR, MILA, McGill

(Live) Jessica Zosa Forde

Brown University

(Video, Slides) Kirstie Whitaker

Alan Turing Institute, Cambridge University

Schedule

Pre-registration in a nutshell

Separate the generation and confirmation of hypotheses:

  • Come up with an exciting research question
  • Write a paper proposal without confirmatory experiments
  • After the paper is accepted, run the experiments and report your results

What does science get?

  • A healthy mix of positive and negative results
  • Reasonable ideas that don’t work still get published, avoiding wasteful replications
  • Papers are evaluated on the basis of scientific interest, not whether they achieve the best results

What do you get?

  • It's easier to plan your research: get feedback before investing in lengthy experiments
  • Your research is stronger: results have increased credibility
  • You can convince readers that they will learn something even if the result is negative

Call for Papers

What is pre-registration and how does it improve peer review? Benchmarks on popular datasets have played a key role in the considerable measurable progress that machine learning has made in the last few years. But reviewers can be tempted to prioritize incremental improvements in benchmarks to the detriment of other scientific criteria, destroying many good ideas in their infancy. Authors can also feel obligated to make orthogonal improvements in order to “beat the state-of-the-art”, making the main contribution hard to assess.

Pre-registration changes the incentives by reviewing and accepting a paper before experiments are conducted. The emphasis of peer review will be on whether the experimental plan can adequately prove or disprove one (or more) hypotheses. Some results will be negative, and this is welcomed. This way, good ideas that do not work will get published, instead of filed away and wastefully replicated many times by different groups. Finally, the clear separation between hypothesizing and confirmation (absent in the current review model) will raise the statistical significance of the results.
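The statistical point can be made concrete with a small simulation (an illustrative sketch, not part of the workshop materials; all numbers are invented for the example). If a researcher tries many hypotheses after seeing the data and reports only the one that looks "significant", the effective false-positive rate is far higher than the nominal 5% that a single pre-registered test carries:

```python
import random

random.seed(0)

N_TRIALS = 100   # paired comparisons per hypothesis
THRESHOLD = 59   # wins needed for one-sided p < 0.05 under the null (p = 0.5)
N_SIMS = 2000    # Monte Carlo repetitions

def null_experiment():
    """Compare two equally good methods; return True if method A
    'wins' often enough to look nominally significant."""
    wins = sum(random.random() < 0.5 for _ in range(N_TRIALS))
    return wins >= THRESHOLD

# Pre-registered: a single hypothesis, fixed before the data is seen.
pre_registered = sum(null_experiment() for _ in range(N_SIMS)) / N_SIMS

# Post hoc: try 20 null hypotheses and report whichever looks significant.
post_hoc = sum(
    any(null_experiment() for _ in range(20)) for _ in range(N_SIMS)
) / N_SIMS

print(f"false-positive rate, pre-registered: {pre_registered:.2f}")
print(f"false-positive rate, best of 20:     {post_hoc:.2f}")
```

With these settings the pre-registered test stays near the nominal 5%, while picking the best of 20 post-hoc hypotheses produces a spurious "result" in roughly 60% of runs. Committing to the hypothesis and protocol before running the experiments removes exactly this selection effect.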

We are inviting submissions on the broad range of topics covered at NeurIPS! The paper template is structured like a mini-tutorial on the pre-registration process to get you started quickly. Pre-registered papers will be published at the workshop. Authors will then have the opportunity to submit the results paper to the Proceedings of Machine Learning Research (PMLR), a sister publication to the Journal of Machine Learning Research (JMLR). The review process for this second stage will aim to ensure that the authors have performed a good-faith attempt to complete the experiments described in their proposal paper.

Please note that the workshop and the pre-registration phase have now taken place. The results phase is now active (see below).

Important info and dates

The review cycle for a pre-registered study consists of two stages: the proposal paper and the results paper. These stages reflect the exploratory (hypothesis generation) and confirmatory (hypothesis testing) phases of research.

Proposal paper

  • Read our mini-tutorial/template (PDF) — it serves as the paper template and describes the submission process and the intended spirit of pre-registration.
  • Submit your paper anonymously via CMT. Unlike traditional submissions, the experimental section must contain only a description of the experiments and protocol, and of the conclusions that can be drawn in different cases, without the results themselves. The pre-registration proposal should use the paper template. We recommend 4 pages, but allow up to 5 pages (excluding references). Note that for some venues only papers of up to 4 pages (excluding references) are not considered a 'prior submission'; see e.g. CVPR. For others, e.g. NeurIPS, non-archival workshops like ours do not count as dual submissions. The deadline for submissions is October 9th (extended from October 7th), midnight Anywhere on Earth.
  • Besides quality and potential impact of the idea, reviewers will also assess: (1) Are the experiments appropriate for validating the core hypothesis of the work? (2) Is the experimental protocol description sufficient to allow reproduction of the experiments? You will then have a rebuttal period (until October 22nd) to address the comments of the reviewers, by writing a short response.
  • Decisions will be sent to authors by October 30th.
  • On the day of the workshop (December 11th, remote, from 9:20AM New York time), authors will present their proposals and (optionally) their preliminary results.

Results paper

  • Authors carry out the experimental protocols proposed in their accepted proposal papers.
  • The results will be presented in a second document, known as the results paper, which is appended to the proposal paper to form the complete document. The deadline for the results paper, tentatively set for April 2021, has been extended to Friday 7th May 2021, 23:59, Anywhere on Earth time zone.
  • We will then support and encourage publication of the final results in PMLR, in combination with the pre-registered paper.
  • If there is sufficient interest, we will also organise a second virtual meeting at the end of April 2021 to discuss the experimental results and the lessons learned (to be determined).

Accepted Proposals

Playlist of all 1-minute preview videos

5 Kexue Fu, Xiaoyuan Luo, Manning Wang Point Cloud Overlapping Region Co-Segmentation Network PDF Video Poster
7 Udo Schlegel, Daniela Oelke, Daniel Keim, Mennatallah El-Assady An Empirical Study of Explainable AI Techniques on Deep Learning Models For Time Series Tasks PDF Video Poster
17 Akshay L Chandra, Sai Vikas Desai, Chaitanya Devaguptapu, Vineeth N Balasubramanian On Initial Pools for Deep Active Learning PDF Video Poster
19 Liu Yuezhang, Bo Li, Qifeng Chen Evaluating Adversarial Robustness in Simulated Cerebellum PDF Video Poster
21 XueHao Gao, Yang Yang, Shaoyi Du Contrastive Self-Supervised Learning for Skeleton Action Recognition PDF Video Oral Poster
26 Ayush Jaiswal, Yue Wu, Pradeep Natarajan, Prem Natarajan Keypoints-aware Object Detection PDF Video Poster
27 Cade Gordon, Natalie Parde Latent Neural Differential Equations for Video Generation PDF Video Poster
28 Robert Vandermeulen, Rene Saitenmacher, Alexander Ritchie A Proposal for Supervised Density Estimation PDF Video Poster
31 Eimear O'Sullivan, Stefanos Zafeiriou PCA Retargeting: Encoding Linear Shape Models as Convolutional Mesh Autoencoders PDF Video Oral Poster
33 Rasmus Palm, Elias Najarro, Sebastian Risi Testing the Genomic Bottleneck Hypothesis in Hebbian Meta-Learning PDF Video Oral Poster
36 Rodrigo Alves, Antoine Ledent, Renato Assunção, Marius Kloft An Empirical Study of the Discreteness Prior in Low-Rank Matrix Completion PDF Video Poster
38 Elena Burceanu SFTrack++: A Fast Learnable Spectral Segmentation Approach for Space-Time Consistent Tracking PDF Video Poster
39 Rishika Bhagwatkar, Khurshed Fitter, Saketh Bachu, Akshay Kulkarni, Shital Chiddarwar Paying Attention to Video Generation PDF Video Poster
40 Chen Li, Xutan Peng, Hao Peng, Jianxin Li, Lihong Wang, Philip Yu TextSGCN: Document-Level Graph Topology Refinement for Text Classification PDF Video Poster
41 Chase Dowling, Ted Fujimoto, Nathan Hodas Policy Convergence Under the Influence of Antagonistic Agents in Markov Games PDF Video Oral Poster
42 Arnout Devos, Yatin Dandi Model-Agnostic Learning to Meta-Learn PDF Video Poster
44 Carianne Martinez, Adam Brink, David Najera-Flores, D. Dane Quinn, Eleni Chatzi, Stephanie Forrest Confronting Domain Shift in Trained Neural Networks PDF Video Oral Poster
45 Joao Monteiro, Xavier Gibert, Jianqiao Feng, Vincent Dumoulin, Dar-Shyang Lee Domain Conditional Predictors for Domain Adaptation PDF Video Poster
47 Tanner Bohn, Xinyu Yun, Charles Ling Towards a Unified Lifelong Learning Framework PDF Video Poster
48 Hamid Eghbal-zadeh, Florian Henkel, Gerhard Widmer Context-Adaptive Reinforcement Learning using Unsupervised Learning of Context Variables PDF Video Poster
50 Aneesh Dahiya, Adrian Spurr, Otmar Hilliges Exploring self-supervised learning techniques for hand pose estimation PDF Video Poster
55 Sebastian Stabinger, David Peer, Antonio Rodriguez-Sanchez Training of Feedforward Networks Fails on a Simple Parity-Task PDF Supmat Video Poster
56 Pablo Barros, Ana Tanevska, Ozge Nilay Yalcin, Alessandra Sciutti Incorporating Rivalry in Reinforcement Learning for a Competitive Game PDF Video Poster
57 Steffen Schneider, Shubham Krishna, Luisa Eck, Wieland Brendel, Mackenzie Mathis, Matthias Bethge Generalized Invariant Risk Minimization: relating adaptation and invariant representation learning PDF Supmat Video Poster
58 Alex Lewandowski Generalization Across Space and Time in Reinforcement Learning PDF Video Poster
59 Prabhu Pradhan, Ruchit Rawal, Gopi Kishan Rendezvous between Robustness and Dataset Bias: An empirical study PDF Video Poster
60 Miles Cranmer, Peter Melchior, Brian Nord Unsupervised Resource Allocation with Graph Neural Networks PDF Video Oral Poster
62 Swaroop Mishra, Anjana Arunkumar, Bhavdeep Sachdeva Is High Quality Data All You Need? PDF Video Poster
67 Yi-Fan Li, Yang Gao, Yu Lin, Zhuoyi Wang, Latifur Khan Time Series Forecasting Using a Unified Spatial-Temporal Graph Convolutional Network PDF Supmat Video Poster
69 Norman Tasfi, Eder Santana, Miriam Capretz Policy Agnostic Successor Features PDF Video Poster
71 Owen Lockwood, Mei Si Playing Atari with Hybrid Quantum-Classical Reinforcement Learning PDF Video Poster
76 Ajinkya Mulay, Ayush Manish Agrawal, Tushar Semwal FedPerf: A Practitioners' Guide to Performance of Federated Learning Algorithms PDF Video Oral Poster
77 Harshvardhan Sikka, Atharva Tendle, Amr Kayid Multimodal Modular Meta-Learning PDF Video Poster
79 Philipp Benz, Chaoning Zhang, Adil Karjauv, In So Kweon Robustness May Be at Odds with Fairness: An Empirical Study on Class-wise Accuracy PDF Video Poster
81 Ruizhe Li, Xutan Peng, Chenghua Lin, Frank Guerin, Wenge Rong On the low-density latent regions of VAE-based language models PDF Video Oral Poster
82 Pratyush Kumar, Aishwarya Praveen Das, Debayan Gupta Differential Euler: Designing a Neural Network approximator to solve the Chaotic Three Body Problem PDF Video Poster
83 Jiaqi Fan, Junxin Huang, Xiaochuan Yu, Chao He Data Subset Selection for Object Detection PDF Video Poster
85 Meenakshi Sarkar, Debasish Ghose Decomposing camera and object motion for an improved Video Sequence Prediction PDF Video Poster

FAQs

Organisers

João F. Henriques

University of Oxford

Samuel Albanie

University of Oxford

Michela Paganini

Facebook AI Research

Gül Varol

University of Oxford

Reviewers

Many thanks to all the reviewers for their help:
Minttu Alakuijala
Yuki Asano
Max Bain
Fabien Baradel
Eloïse Berthier
Raphaël Berthier
Alberto Bietti
Hakan Bilen
Tolga Birdal
Oumayma Bounou
Margaux Bregere
Andrew Brown
Andrei Bursuc
Lénaïc Chizat
Jesse Dodge
Yuming Du
Christophe Dupuy
Sebastien Ehrhardt
Valentin Gabeur
Andrew Gambardella
Aude Genevay
Pascal Germain
Adam Golinski
Stuart Golodetz
Oliver Groth
Tom Gunter
Kai Han
Tengda Han
Yana Hasson
Eldar Insafutdinov
Ahmet Iscen
Xu Ji
Vicky Kalogeiton
A. Sophia Koepke
Viveka Kulharia
Valdimar Steinar Ericsson Laenen
Zihang Lai
Iro Laina
Shuda Li
Roxane Licandro
Erika Lu
Robert McCraith
Eric Metodiev
Grégoire Mialon
Liliane Momeni
Arsha Nagrani
Nantas Nardelli
Lukas Neumann
Maxime Oquab
Anuj Pahuja
Alexander Pashevich
Mandela Patrick
Loucas Pillaud Vivien
Ameya Prabhu
Tom Rainforth
Ignacio Rocco
Manon Romain
Vincent Roulet
Christian Rupprecht
Levent Sagun
Lukas Schäfer
Li Shen
Oriane Siméoni
Umut Simsekli
Robin Strudel
Adrien Taylor
Damien Teney
James Thornton
Jack Valmadre
Bichen Wu
Shangzhe Wu
Weidi Xie
Charig Yang
Chuhan Zhang

Questions?