Is This a Paper Review?

With paper submissions rocketing and the pool of experienced researchers stagnant, machine learning conferences, backs to the wall, have made the inevitable choice to inflate the ranks of peer reviewers, in the hopes that a fortified pool might handle the onslaught.

[Infographic: NIPS submissions over time. The red bar plots fabricated data from the future.]

With nearly every professor and senior grad student already reviewing at capacity, conference organizers have gotten creative, finding reviewers in unlikely places. Reached for comment, ICLR’s program chairs declined to reveal their strategy for scouting out untapped reviewing talent, indicating that these trade secrets might be exploited by rivals NeurIPS and ICML. Fortunately, on condition of anonymity, several (less senior) ICLR officials agreed to discuss a few unusual sources they’ve tapped:

  1. All of /r/machinelearning
  2. Twitter users who follow @ylecun
  3. Holders of registered .ai & .ml domains
  4. Commenters from ML articles posted to Hacker News
  5. YouTube commenters on Siraj Raval deep learning rap videos
  6. Employees of entities registered as owners of .ai & .ml domains
  7. Everyone camped within 4° of Andrej Karpathy at Burning Man
  8. GitHub handles that have forked TensorFlow, PyTorch, or MXNet in the last 6 months
  9. A joint venture with Udacity to make reviewing for ICLR a course project for their Intro to Deep Learning class

With so many new referees, perhaps it’s not surprising to see, sprinkled among the stronger, more traditional reviews, a number of unusual ones: some oddly short, some oddly … net-speak (“imho, srs paper 4 real”), and some that challenge assumptions about what shared knowledge is prerequisite for membership in this community (“who are you to say this matrix is degenerate?”).

However, these reviews, which to a casual onlooker might signify incompetence, belie the earnest efforts of a cadre of new reviewers to rise to the occasion. I know this because, fortuitously, many of these new reviewers are avid readers of Approximately Correct, and for the last few weeks my inbox has been overflowing with sincere questions from well-meaning neophytes.

If you’ve ever taught a course, it might not come as a surprise that their questions overlapped substantially. So while we don’t normally post Q&A-type articles on Approximately Correct, an exception here seems both appropriate and efficient. I’ve compiled a few exemplar questions and provided succinct answers in a rare Q&A piece that we’ll call “Is This a Paper Review?”

Henry in Pasadena writes:

Dear Approximately Correct,

I was assigned to review a paper. I read the abstract and formed an opinion tangentially related to the topic of the paper. Then I wrote a paragraph partly expressing my opinion and partly arguing with one of the anonymous commenters on an unrelated topic. This is standard practice on Hacker News, where I have earned over 2000 upvotes for similar reviews, which together form the basis of my qualifications to review for ICLR. Is this a paper review?

AC: No, that is not a paper review.

Pandit in Mysore writes:

Dear Approximately Correct,

I began to read a paper about the convergence of gradient descent. Once they said “limit” I got lost, so I skipped to the back of the paper, where I noticed that they did not do any experiments on ImageNet. I wrote a one-line review. The title said “Not an expert but WTF?” and the body said “No experiments on ImageNet?” Is this a paper review?

AC: No, that is not a paper review.

Xiao in Shanghai writes:

Dear Approximately Correct,

I trained an LSTM on past ICLR reviews. Then I ran it with the softmax temperature set to 0.01. The output was “Not novel [EOS].” I entered this in OpenReview. Is this a paper review?

AC: No, that is not a paper review.
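
(A technical aside for readers puzzled by Xiao’s bot: dividing the logits by a temperature near zero collapses the softmax onto its argmax, so sampling becomes effectively deterministic and the network repeats its favorite verdict forever. Below is a minimal sketch of temperature-scaled sampling in PyTorch; the function, vocabulary, and logits are all made up for illustration and are certainly not Xiao’s actual reviewer-bot.)

```python
# A toy illustration of temperature-scaled sampling (assumes PyTorch).
# Everything here is hypothetical: a made-up "review vocabulary" and
# hand-picked logits, standing in for a trained LSTM's output layer.
import torch
import torch.nn.functional as F

def sample_with_temperature(logits: torch.Tensor, temperature: float = 1.0) -> int:
    """Sample one token index from `logits` after temperature scaling.

    A temperature near zero sharpens the softmax toward a one-hot
    distribution, so sampling effectively reduces to argmax and the
    model says the same thing every time.
    """
    probs = F.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()

vocab = ["Not", "novel", "Accept", "Groundbreaking", "[EOS]"]
logits = torch.tensor([2.0, 1.6, 0.4, 0.1, 1.0])

print(vocab[sample_with_temperature(logits, temperature=1.0)])   # varies across runs
print(vocab[sample_with_temperature(logits, temperature=0.01)])  # essentially always "Not"
```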

Jordan in Boulder writes:

Dear Approximately Correct,

When reviewing this paper, I noticed that it was vaguely similar in certain ways to an idea that I had in 1987. While I like the idea (as you might imagine), I assigned the paper a middling score, with one half of the review a solid discussion of the technical work and the other half devoted exclusively to enumerating my own papers and demanding that the author cite them. Is this a paper review?

AC: This sounds like a problematic paper review. But it could be a good review if you increase your score to what you think it might be were you dispassionate, tone down the stuff about your own papers, and send a thoughtful note to the metareviewer indicating a minor conflict of interest.

Rachel in New Jersey writes:

I read the paper. In the first 2 pages, there were 10 mathematical mistakes, including some that made the entire contribution of the paper obviously wrong. I stopped reading to conserve my time and wrote a short one-paragraph review that indicated the mistakes and said “not suitable for publication at ICLR.” Is this a paper review?

AC: While ordinarily so short a review might not be appropriate, this is a clear exception. Excellent review!

This piece was loosely inspired by Jason Feifer’s Is This a Selfie? 
