Is This a Paper Review?

With paper submissions skyrocketing and the pool of experienced researchers stagnant, machine learning conferences, their backs to the wall, have made the inevitable choice to inflate the ranks of peer reviewers, hoping that a fortified pool might handle the onslaught.

[Infographic: NIPS submissions over time. The red bar plots fabricated data from the future.]

With nearly every professor and senior grad student already reviewing at capacity, conference organizers have gotten creative, finding reviewers in unlikely places. Reached for comment, ICLR’s program chairs declined to reveal their strategy for scouting out untapped reviewing talent, citing concerns that these trade secrets might be exploited by rivals NeurIPS and ICML. Fortunately, on condition of anonymity, several (less senior) ICLR officials agreed to discuss a few unusual sources they’ve tapped:

  1. All of /r/machinelearning
  2. Twitter users who follow @ylecun
  3. Holders of registered .ai & .ml domains
  4. Commenters from ML articles posted to Hacker News
  5. YouTube commenters on Siraj Raval’s deep learning rap videos
  6. Employees of entities registered as owners of .ai & .ml domains
  7. Everyone camped within 4° of Andrej Karpathy at Burning Man
  8. GitHub handles that forked TensorFlow, PyTorch, or MXNet in the last 6 months
  9. A joint venture with Udacity to make reviewing for ICLR a course project for their Intro to Deep Learning class
