Portfolio Approach to AI Safety Research

[This article originally appeared on the Deep Safety blog.]

Long-term AI safety is an inherently speculative research area, aiming to ensure the safety of advanced future systems despite uncertainty about their design, algorithms, and objectives. It thus seems particularly important to have different research teams tackle the problems from different perspectives and under different assumptions. While some fraction of the research might not end up being useful, a portfolio approach makes it more likely that at least some of us will be right.

In this post, I look at some dimensions along which assumptions differ and identify some reasonable but underexplored assumptions that might be relevant for prioritizing safety research. In the interest of making this breakdown as comprehensive and useful as possible, please let me know if I got something wrong or missed anything important.

Continue reading “Portfolio Approach to AI Safety Research”

Do I really have to cite an arXiv paper?

With peak submission season for machine learning conferences just behind us, many in our community have peer review on the mind. One especially hot topic is the arXiv preprint service. Computer scientists often post papers to arXiv in advance of formal publication to share their ideas and hasten their impact.

Despite the arXiv’s popularity, many authors are peeved, pricked, piqued, and provoked by requests from reviewers that they cite papers published only as arXiv preprints.

“Do I really have to cite arXiv papers?” they whine.

“Come on, they’re not even published!” they exclaim.

The conversation is especially testy owing to the increased use (read: misuse) of the arXiv by naifs. The preprint service, like the conferences proper, is awash in low-quality papers submitted by band-wagoners. Now that the tooling for deep learning has become so strong, it’s especially easy to clone a repo, run it on a new dataset, molest a few hyper-parameters, and start writing up a draft.

Of particular worry is the practice of flag-planting. That’s when researchers anticipate that an area will get hot. To avoid getting scooped (or to be the first scoopers), authors might hastily throw an unfinished work onto the arXiv to stake out their territory: we were the first to work on X; all that follow must cite us. In a sublimely cantankerous rant on Medium, NLP/ML researcher Yoav Goldberg blasted the rising use of the (mal)practice. Continue reading “Do I really have to cite an arXiv paper?”

The Futurist’s Dilemma

The following passage is a musing on the futility of futurism. While I present a perspective, I am not married to it.

When I sat down to write this post, I briefly forgot how to spell “dilemma”. Fortunately, Apple’s spell-check magnanimously corrected me. But it seems likely, if I were cast away on an island without any automatic spell-checkers or other people to subject my brain to the cold slap of reality, that my spelling would slowly deteriorate.

And just yesterday, I had a strong intuition about trajectories through weight-space taken by neural networks along an optimization path. For at least ten minutes, I was reasonably confident that a simple trick might substantially lower the number of updates (and thus the time) it takes to train a neural network.

But for the ability to test my idea against an unforgiving reality, I might have become convinced of its truth. I might have written a paper, entitled “NO Need to worry about long training times in neural networks” (see real-life inspiration for farcical clickbait title). Perhaps I might have founded SGD-Trick University, and schooled the next generation of big thinkers on how to optimize neural networks.

Continue reading “The Futurist’s Dilemma”

NYU Law’s Algorithms and Explanations

Last week, on April 27th and 28th, I attended Algorithms and Explanations, an interdisciplinary conference hosted by NYU Law School’s Information Law Institute. The thrust of the conference could be summarized as follows:

  1. Humans make decisions that affect the lives of other humans
  2. In a number of regulatory contexts, humans must explain decisions, e.g.
    • Bail, parole, and sentencing decisions
    • Approving a line of credit
  3. Increasingly, algorithms “make” decisions traditionally made by man, e.g.
    • Risk models already used to make decisions regarding incarceration
    • Algorithmically-determined default risks already used to make loans
  4. This poses serious questions for regulators in various domains:
    • Can these algorithms offer explanations?
    • What sorts of explanations can they offer?
    • Do these explanations satisfy the requirements of the law?
    • Can humans actually explain their decisions in the first place?

The conference was organized into nine panels, each featuring three to five 20-minute talks followed by a moderated discussion and Q&A. The first panel, moderated by Helen Nissenbaum (NYU & Cornell Tech), featured legal scholars (including conference organizer Katherine Strandburg) and addressed the legal arguments for requiring explanations in the first place. A second panel featured sociologists Duncan Watts (MSR) and Jenna Burrell (Berkeley) as well as Solon Barocas (MSR), an organizer of the Fairness, Accountability, and Transparency in Machine Learning workshop.

Katherine Jo Strandburg, NYU Law professor and conference organizer

Continue reading “NYU Law’s Algorithms and Explanations”

Machine Learning Security at ICLR 2017

(This article originally appeared here. Thanks to Janos Kramar for his feedback on this post.)

The overall theme of the ICLR conference setting this year could be summarized as “finger food and ships”. More importantly, there were a lot of interesting papers, especially on machine learning security, which will be the focus of this post. (Here is a great overview of the topic.)

On the attack side, adversarial perturbations now work in physical form (if you print out the image and then take a picture) and they can also interfere with image segmentation. This has some disturbing implications for fooling vision systems in self-driving cars, such as impeding them from recognizing pedestrians. Adversarial examples are also effective at sabotaging neural network policies in reinforcement learning at test time.
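For concreteness, here is a minimal sketch of the fast gradient sign method (FGSM), the canonical recipe behind many such perturbations. The model, inputs, and epsilon below are illustrative assumptions, not the exact setups from the papers mentioned above.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Fast gradient sign method: take one step in the direction that
    increases the classification loss, bounded in L-infinity norm."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Perturb each pixel by +/- epsilon, then clamp to the valid range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

The physical-form result amounts to showing that perturbations like these often survive a print-and-photograph round trip.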

In more encouraging news, adversarial examples are not entirely transferable between different models. For targeted examples, which aim to be misclassified as a specific class, the target class is not preserved when transferring to a different model. For example, if an image of a school bus is classified as a crocodile by the original model, it has at most 4% probability of being seen as a crocodile by another model. The paper introduces an ensemble method for developing adversarial examples whose targets do transfer, but this seems to only work well if the ensemble includes a model with a similar architecture to the new model.
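The ensemble idea can be sketched roughly as follows: instead of following one model’s gradient, descend the averaged targeted loss across several models, so the perturbation is not overfit to any single network. The step size, iteration count, and projection below are illustrative assumptions rather than the paper’s exact procedure.

```python
import torch
import torch.nn.functional as F

def ensemble_targeted_attack(models, x, target, epsilon=0.03,
                             alpha=0.005, steps=20):
    """Iteratively nudge x toward the target class under the average
    loss of an ensemble, hoping the result transfers to unseen models."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Average the targeted loss over every model in the ensemble.
        loss = sum(F.cross_entropy(m(x_adv), target)
                   for m in models) / len(models)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()               # toward target
            x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)  # epsilon ball
            x_adv = x_adv.clamp(0.0, 1.0)                     # valid pixels
    return x_adv.detach()
```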

On the defense side, there were some new methods for detecting adversarial examples. One method augments neural nets with a detector subnetwork, which works quite well and generalizes to new adversaries (if they are similar to or weaker than the adversary used for training). Another approach analyzes adversarial images using PCA, and finds that they are similar to normal images in the first few thousand principal components, but have a lot more variance in later components. Note that the reverse is not the case – adding arbitrary variation in trailing components does not necessarily encourage misclassification.
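As a toy illustration of the PCA observation, assuming we have a matrix of flattened clean images, one could measure how much of a suspect image’s energy falls into the trailing principal components; the n_leading cutoff and any decision threshold are assumptions that would need empirical calibration.

```python
import numpy as np

def trailing_component_fraction(clean_images, suspect, n_leading=1000):
    """Fraction of a suspect image's energy (in the PCA basis of clean
    data) lying beyond the first n_leading principal components.
    Adversarial images tend to show unusually large trailing variance."""
    mean = clean_images.mean(axis=0)
    # PCA via SVD on mean-centered clean images (rows = flattened images).
    _, _, components = np.linalg.svd(clean_images - mean,
                                     full_matrices=False)
    coeffs = components @ (suspect - mean)  # coordinates in the PCA basis
    return np.sum(coeffs[n_leading:] ** 2) / np.sum(coeffs ** 2)
```

An input might be flagged when this fraction sits far above the range observed on held-out clean images, though, per the caveat above, a small trailing fraction alone would not certify an image as benign.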

There has also been progress in scaling adversarial training to larger models and data sets; this work also found that higher-capacity models are more resistant to adversarial examples than lower-capacity ones. My overall impression is that adversarial attacks are still ahead of adversarial defenses, but the defense side is starting to catch up.
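To make the training-side recipe concrete, here is a minimal sketch of an adversarial training step that mixes clean and FGSM-perturbed examples in the loss, reusing the fgsm_attack helper sketched earlier; the even mixture and attack strength are assumptions, and large-scale versions differ in such details.

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One optimization step on an even mixture of clean and
    adversarial examples: the basic adversarial training recipe."""
    x_adv = fgsm_attack(model, x, y, epsilon)  # attacks generated on the fly
    optimizer.zero_grad()
    loss = (0.5 * F.cross_entropy(model(x), y)
            + 0.5 * F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```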

Press Failure: The Guardian’s “Meet Erica”

“Meet Erica, the world’s most human-like autonomous android.” From its title alone, this documentary promises a sensational encounter. As the screen fades in from black, a marimba tinkles lightly in the background and a Japanese alleyway appears. Various narrators ask us:

“What does it mean to think?”

“What is human creativity?”

“What does it mean to have a personality?”

“What is an interaction?”

“What is a minimal definition of humans?”

The title, these questions, and nearly everything that follows mislead. This article is an installment in a series of posts addressing the various sources of misinformation feeding the present AI hype cycle.

Continue reading “Press Failure: The Guardian’s “Meet Erica””

DeepMind Solves AGI, Summons Demon

In recent years, the rapid advance of artificial intelligence has evoked cries of alarm from billionaire entrepreneur Elon Musk and legendary physicist Stephen Hawking. Others, including the eccentric futurist Ray Kurzweil, have embraced the coming of true machine intelligence, suggesting that we might merge with the computers, gaining superintelligence and immortality in the process. As it turns out, we may not have to wait much longer.

This morning, a group of research scientists at Google DeepMind announced that they had inadvertently solved the riddle of artificial general intelligence (AGI). Their approach relies upon a beguilingly simple technique called symmetrically toroidal asynchronous bisecting convolutions. By the year’s end, Alphabet executives expect that these neural networks will exhibit fully autonomous self-improvement. What comes next may affect us all.

Continue reading “DeepMind Solves AGI, Summons Demon”

Notes on Response to “The AI Misinformation Epidemic”

On Monday, I posted an article titled The AI Misinformation Epidemic. The article introduces a series of posts that will critically examine the various sources of misinformation underlying this AI hype cycle.

The post came about for the following reason: while I had contemplated the idea for weeks, I couldn’t choose which among the many factors to focus on and which to exclude. My solution was to break the issue down into several narrower posts. The AI Misinformation Epidemic introduced the problem, sketched an outline for the series, and articulated some preliminary philosophical arguments.

To my surprise, it stirred up a frothy reaction. In a span of three days, the site received over 36,000 readers. To date, the article has received 68 comments on the original post, 274 comments on Hacker News, and 140 comments on the machine learning subreddit.

To ensure that my post contributes as little novel misinformation as possible, I’d like to briefly address the response to the article and some common misconceptions shared by many comments. Continue reading “Notes on Response to “The AI Misinformation Epidemic””

Fake News Challenge – Revised and Revisited

The organizers of the Fake News Challenge have subjected it to a significant overhaul. In light of these changes, many of my criticisms of the challenge no longer apply.

Some context:

Last month, I posted a critical piece addressing the Fake News Challenge. Organized by Dean Pomerleau and Delip Rao, the challenge aspires to leverage advances in machine learning to combat the epidemic of viral misinformation plaguing social media. The original version of the challenge asked teams to take a claim, such as “Hillary Clinton eats babies”, and output a prediction of its veracity together with supporting documentation (links culled from the internet). Presumably, their hope was that an on-the-fly artificially intelligent fact-checker could be integrated into social media services to stop people from unwittingly sharing fake news.

My response criticized the challenge as ill-specified (fake-ness is not defined), circular (how do we know the supporting documents are legitimate?), and infeasible (are teams supposed to comb the entire web?).

Continue reading “Fake News Challenge – Revised and Revisited”

Is Fake News a Machine Learning Problem?

On Friday, Donald J. Trump was sworn in as the 45th president of the United States. The inauguration followed a bruising primary and general election in which social media played an unprecedented role. In particular, the proliferation of fake news emerged as a dominant storyline. Throughout the campaign, explicitly false stories circulated through the internet’s echo chambers. Some fake stories originated as rumors, others were created for profit and monetized with click-based advertisements, and according to US Director of National Intelligence James Clapper, many were orchestrated by the Russian government with the intention of influencing the results. While it is not possible to observe the counterfactual, many believe that the election’s outcome hinged on the influence of these stories.

For context, consider one illustrative case as described by the New York Times. On November 9th, 35-year-old marketer Eric Tucker tweeted a picture of several buses, claiming that they were transporting paid protesters to demonstrate against Trump. The post quickly went viral, receiving over 16,000 shares on Twitter and 350,000 shares on Facebook. Trump and his surrogates joined in, promoting the story through social media. Tucker’s claim turned out to be a fabrication. Nevertheless, it likely reached millions of people, more than many conventional news stories.

A number of critics cast blame on technology companies like Facebook, Twitter, and Google, suggesting that they have a responsibility to address the fake news epidemic because their algorithms influence who sees which stories. Some linked the fake news phenomenon to the idea that personalized search results and news feeds create a filter bubble, a dynamic in which readers only encounter stories that they are likely to click on, comment on, or like. As a consequence, readers might only encounter stories that confirm pre-existing beliefs.

Facebook, in particular, has been strongly criticized for its trending news widget, which operated (at the time) without human intervention, giving viral items a spotlight, however defamatory or false. In September, Facebook’s trending news box promoted a story titled “Michele Obama was born a man”. Some have wondered why Facebook, despite its massive investment in artificial intelligence (machine learning), hasn’t developed an automated solution to the problem.

Continue reading “Is Fake News a Machine Learning Problem?”