Is This a Paper Review?

With paper submissions rocketing and the pool of experienced researchers stagnant, machine learning conferences, backs to the wall, have made the inevitable choice to inflate the ranks of peer reviewers, in the hopes that a fortified pool might handle the onslaught.

Infographic depicting NIPS submissions over time. The red bar plots fabricated data from the future.

With nearly every professor and senior grad student already reviewing at capacity, conference organizers have gotten creative, finding reviewers in unlikely places. Reached for comment, ICLR’s program chairs declined to reveal their strategy for scouting out untapped reviewing talent, indicating that these trade secrets might be exploited by rivals NeurIPS and ICML. Fortunately, on condition of anonymity, several (less senior) ICLR officials agreed to discuss a few unusual sources they’ve tapped:

  1. All of /r/machinelearning
  2. Twitter users who follow @ylecun
  3. Holders of registered .ai & .ml domains
  4. Commenters from ML articles posted to Hacker News
  5. YouTube commenters on Siraj Raval deep learning rap videos
  6. Employees of entities registered as owners of .ai & .ml domains
  7. Everyone camped within 4° of Andrej Karpathy at Burning Man
  8. GitHub handles forking TensorFlow, Pytorch, or MXNet in last 6 mos.
  9. A joint venture with Udacity to make reviewing for ICLR a course project for their Intro to Deep Learning class

With so many new referees, perhaps it’s not surprising to see, sprinkled among the stronger, more traditional reviews, a number of unusual ones: some oddly short, some oddly … net-speak (“imho, srs paper 4 real”), and some that challenge assumptions about what shared knowledge is prerequisite for membership in this community (“who are you to say this matrix is degenerate?”).

However, these reviews, which to a casual onlooker might signify incompetence, belie the earnest efforts of a cadre of new reviewers to rise to the occasion. I know this because, fortuitously, many of these new reviewers are avid readers of Approximately Correct, and for the last few weeks my inbox has been overflowing with sincere questions from well-meaning neophytes.

If you’ve ever taught a course before, it might not come as a surprise that their questions overlapped substantially. So while we don’t normally post Q&A-type articles on Approximately Correct, an exception here seems both appropriate and efficient. I’ve compiled a few exemplar questions and provided succinct answers in a rare Q&A piece that we’ll call “Is This a Paper Review?”

Henry in Pasadena writes:

Dear Approximately Correct,

I was assigned to review a paper. I read the abstract and formed an opinion tangentially related to the topic of the paper. Then I wrote a paragraph partly expressing my opinion and partly arguing with one of the anonymous commenters on an unrelated topic. This is standard practice on Hacker News, where I have earned over 2000 upvotes for similar reviews, which together form the basis of my qualifications to review for ICLR. Is this a paper review?

AC: No, that is not a paper review.

Pandit in Mysore writes:

Dear Approximately Correct,

I began to read a paper about the convergence of gradient descent. Once they said “limit” I got lost, so I skipped to the back of the paper, where I noticed that they did not do any experiments on ImageNet. I wrote a one-line review. The title said “Not an expert but WTF?” and the body said “No experiments on ImageNet?” Is this a paper review?

AC: No, that is not a paper review.

Xiao in Shanghai writes:

Dear Approximately Correct,

I trained an LSTM on past ICLR reviews. Then I ran it with the softmax temperature set to .01. The output was “Not novel [EOS].” I entered this in OpenReview. Is this a paper review?

AC: No, that is not a paper review.
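For readers curious why Xiao’s LSTM is so terse: dividing the logits by a temperature near zero collapses the softmax onto the single most probable token, so sampling becomes effectively greedy decoding. A minimal sketch in plain Python (the logits are invented for illustration):

```python
import math
import random

def softmax_sample(logits, temperature, rng):
    """Sample an index from softmax(logits / temperature)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

rng = random.Random(0)
logits = [2.0, 1.0, 0.5]                     # made-up next-token scores
# Near-zero temperature collapses the distribution onto the argmax,
# so the model emits its single most probable output essentially every time.
print(softmax_sample(logits, 0.01, rng))
```

At temperature 1.0 the same call would sample all three indices with nontrivial probability; at 0.01 the distribution is so peaked that “Not novel [EOS]” comes out every run.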

Jordan in Boulder writes:

Dear Approximately Correct,

When reviewing this paper, I noticed that it was vaguely similar in certain ways to an idea that I had in 1987. While I like the idea (as you might imagine), I assigned the paper a middling score, with one half of the review a solid discussion of the technical work and the other half devoted exclusively to enumerating my own papers and demanding that the author cite them. Is this a paper review?

AC: This sounds like a problematic paper review. But it could be a good review if you increase your score to what you think it might be were you dispassionate, tone down the stuff about your own papers, and send a thoughtful note to the metareviewer indicating a minor conflict of interest.

Rachel in New Jersey writes:

I read the paper. In the first 2 pages, there were 10 mathematical mistakes, including some that made the entire contribution of the paper obviously wrong. I stopped reading to conserve my time and wrote a short one-paragraph review that indicated the mistakes and said “not suitable for publication at ICLR.” Is this a paper review?

AC: While ordinarily so short a review might not be appropriate, this is a clear exception. Excellent review!

This piece was loosely inspired by Jason Feifer’s Is This a Selfie? 

Related Posts

Troubling Trends in Machine Learning Scholarship

By Zachary C. Lipton* & Jacob Steinhardt*
*equal authorship

Originally presented at ICML 2018: Machine Learning Debates
[arXiv link]

1   Introduction

Collectively, machine learning (ML) researchers are engaged in the creation and dissemination of knowledge about data-driven algorithms. In a given paper, researchers might aspire to any subset of the following goals, among others: to theoretically characterize what is learnable, to obtain understanding through empirically rigorous experiments, or to build a working system that has high predictive accuracy. While determining which knowledge warrants inquiry may be subjective, once the topic is fixed, papers are most valuable to the community when they act in service of the reader, creating foundational knowledge and communicating as clearly as possible.

What sort of papers best serve their readers? We can enumerate desirable characteristics: these papers should (i) provide intuition to aid the reader’s understanding, but clearly distinguish it from stronger conclusions supported by evidence; (ii) describe empirical investigations that consider and rule out alternative hypotheses [62]; (iii) make clear the relationship between theoretical analysis and intuitive or empirical claims [64]; and (iv) use language to empower the reader, choosing terminology to avoid misleading or unproven connotations, collisions with other definitions, or conflation with other related but distinct concepts [56].

Recent progress in machine learning comes despite frequent departures from these ideals. In this paper, we focus on the following four patterns that appear to us to be trending in ML scholarship:

  1. Failure to distinguish between explanation and speculation.
  2. Failure to identify the sources of empirical gains, e.g. emphasizing unnecessary modifications to neural architectures when gains actually stem from hyper-parameter tuning.
  3. Mathiness: the use of mathematics that obfuscates or impresses rather than clarifies, e.g. by confusing technical and non-technical concepts.
  4. Misuse of language, e.g. by choosing terms of art with colloquial connotations or by overloading established technical terms.
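Pattern 2 is often exposed by a controlled ablation: give the baseline and the “novel” architecture the same hyperparameter search budget and compare their best runs. A toy sketch of the idea (the architecture names, numbers, and stand-in `evaluate` function are all invented for illustration; a real study would train actual models):

```python
import random

ARCHS = ["baseline", "fancy_new_block"]

def evaluate(arch, lr, seed=0):
    """Stand-in for a training run; returns a fake test accuracy.
    In a real ablation this would train `arch` with learning rate `lr`."""
    rng = random.Random(ARCHS.index(arch) * 1000 + int(lr * 1e6) + seed)
    # The (fake) gains here come from the hyperparameter, not the architecture.
    return 0.90 + (0.05 if lr == 3e-4 else 0.0) + rng.uniform(-0.01, 0.01)

# Give both architectures the same search budget before comparing.
grid = [1e-3, 3e-4, 1e-4]
best = {arch: max(evaluate(arch, lr) for lr in grid) for arch in ARCHS}
print(best)  # under equal tuning, the "fancy" block's apparent edge vanishes
```

If the new block only wins when it alone gets the tuning sweep, the paper is reporting a hyperparameter result, not an architectural one.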

AI Researcher Joins Johnson & Johnson, to Make More than $19 Squillion

Three weeks ago, New York Times reporter Cade Metz sent shockwaves through society with a startling announcement that A.I. researchers were making more than $1 million, even at a nonprofit!

AI super-hero and newly minted squillionaire Zachary Chase Lipton feeds a wallaby bitcoins while vacationing on Elon Musk’s interplanetary animal preserve on the Martian plains.

Within hours, I received multiple emails. Parents, friends, old classmates, and my girlfriend all sent emails. Did you see the article? Maybe they wanted me to know what riches a life in private industry had in store for me? Perhaps they were curious if I was already bathing in Cristal, shopping for yachts, or planning to purchase an atoll among the Maldives? Perhaps the communist sympathizers in my social circles had renewed admiration for my abstention from such extreme opulence.

Continue reading “AI Researcher Joins Johnson & Johnson, to Make More than $19 Squillion”

Portfolio Approach to AI Safety Research

[This article originally appeared on the Deep Safety blog.]

Long-term AI safety is an inherently speculative research area, aiming to ensure the safety of advanced future systems despite uncertainty about their design, algorithms, or objectives. It thus seems particularly important to have different research teams tackle the problems from different perspectives and under different assumptions. While some fraction of the research might not end up being useful, a portfolio approach makes it more likely that at least some of us will be right.

In this post, I look at some dimensions along which assumptions differ, and identify some underexplored reasonable assumptions that might be relevant for prioritizing safety research. In the interest of making this breakdown as comprehensive and useful as possible, please let me know if I got something wrong or missed anything important.

Continue reading “Portfolio Approach to AI Safety Research”

Death Note: Finally, an Anime about Deep Learning

It’s about time someone developed an anime series about deep learning. In the last several years, I’ve paid close attention to deep learning. And while I’m far from an expert on anime, I’ve watched a nonzero number of anime cartoons. And yet through neither route did I encounter even one single anime about deep learning.

There were some close calls. Ghost in the Shell gives a vague pretense of addressing AI. But the character might as well be a body-jumping alien. Nothing in this story speaks to the reality of machine learning research.

In Knights of Sidonia, if you can muster the superhuman endurance required to follow the series past its only interesting season, you’ll eventually find out that the flying space-ship made out of remnants of Earth on which Tanikaze and friends photosynthesize, while taking breaks from fighting space monsters, while wearing space-faring versions of mecha suits … [breath] contains an artificially intelligent brain-emulating parasitic nematode. But no serious consideration of ML appears.

If you were looking to anime for a critical discourse on artificial intelligence, until recently you’d be disappointed.

Continue reading “Death Note: Finally, an Anime about Deep Learning”

Machine Learning Security at ICLR 2017

(This article originally appeared here. Thanks to Janos Kramar for his feedback on this post.)

The overall theme of the ICLR conference setting this year could be summarized as “finger food and ships”. More importantly, there were a lot of interesting papers, especially on machine learning security, which will be the focus of this post. (Here is a great overview of the topic.)


On the attack side, adversarial perturbations now work in physical form (if you print out the image and then take a picture) and they can also interfere with image segmentation. This has some disturbing implications for fooling vision systems in self-driving cars, such as impeding them from recognizing pedestrians. Adversarial examples are also effective at sabotaging neural network policies in reinforcement learning at test time.
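The perturbations described here are typically crafted with gradient-based methods such as the fast gradient sign method: nudge each input dimension a small step in the direction that increases the loss. A toy sketch on a hand-built linear classifier (weights, inputs, and the step size are all invented for illustration) shows the basic mechanic:

```python
import math

# Toy linear classifier: p(y=1 | x) = sigmoid(w . x + b)
w = [2.0, -1.0, 0.5]
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(x, y, eps):
    """Fast gradient sign method for this toy model.
    For logistic loss, d(loss)/dx = (p - y) * w; step eps in its sign."""
    p = predict(x)
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * math.copysign(1.0, g) for xi, g in zip(x, grad)]

x = [0.5, 0.2, -0.3]             # a made-up input the model labels positive
adv = fgsm(x, y=1, eps=0.6)      # sign-aligned perturbation of the input
print(predict(x), predict(adv))  # the perturbed copy flips the prediction
```

The step size here is exaggerated for a three-dimensional toy; on high-dimensional images a visually imperceptible perturbation suffices, which is exactly what makes the physical-world attacks above so unsettling.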

Continue reading “Machine Learning Security at ICLR 2017”

DeepMind Solves AGI, Summons Demon

In recent years, the rapid advance of artificial intelligence has evoked cries of alarm from billionaire entrepreneur Elon Musk and legendary physicist Stephen Hawking. Others, including the eccentric futurist Ray Kurzweil, have embraced the coming of true machine intelligence, suggesting that we might merge with the computers, gaining superintelligence and immortality in the process. As it turns out, we may not have to wait much longer.

This morning, a group of research scientists at Google DeepMind announced that they had inadvertently solved the riddle of artificial general intelligence (AGI). Their approach relies upon a beguilingly simple technique called symmetrically toroidal asynchronous bisecting convolutions. By the year’s end, Alphabet executives expect that these neural networks will exhibit fully autonomous self-improvement. What comes next may affect us all.

Continue reading “DeepMind Solves AGI, Summons Demon”