NYU Law’s Algorithms and Explanations

Last week, on April 27th and 28th, I attended Algorithms and Explanations, an interdisciplinary conference hosted by NYU Law School’s Information Law Institute. The thrust of the conference could be summarized as follows:

  1. Humans make decisions that affect the lives of other humans
  2. In a number of regulatory contexts, humans must explain decisions, e.g.
    • Bail, parole, and sentencing decisions
    • Approving a line of credit
  3. Increasingly, algorithms “make” decisions traditionally made by man, e.g.
    • Risk models already used to make decisions regarding incarceration
    • Algorithmically-determined default risks already used to make loans
  4. This poses serious questions for regulators in various domains:
    • Can these algorithms offer explanations?
    • What sorts of explanations can they offer?
    • Do these explanations satisfy the requirements of the law?
    • Can humans actually explain their decisions in the first place?

The conference was organized into 9 panels. Each featured three to five 20-minute talks followed by a moderated discussion and Q&A. The first panel, moderated by Helen Nissenbaum (NYU & Cornell Tech), featured legal scholars (including conference organizer Katherine Strandburg) and addressed the legal arguments for explanations in the first place. A second panel featured sociologists Duncan Watts (MSR) and Jenna Burrell (Berkeley) as well as Solon Barocas (MSR), an organizer of the Fairness, Accountability and Transparency in Machine Learning workshop.

Katherine Jo Strandburg, NYU Law professor and conference organizer

I participated in the third panel, which addressed technical issues regarding approaches to explaining algorithms (and the feasibility of the pursuit itself). I presented some of the arguments from my position piece The Mythos of Model Interpretability. The pith of my talk was as follows:

  1. There are two separate questions that must be asked:
    • Can we develop algorithms (of any kind) that some day might satisfactorily explain their actions? (theoretical)
    • Can we satisfactorily explain the actions of the algorithms we’re using in the real world today? (actually matters now)
  2. Nearly all machine learning in the real world is supervised learning.
  3. Supervised learning operates on frozen snapshots of data and thus knows nothing of the dynamics driving the observations.
  4. Supervised learning merely learns to model conditional probabilities; it has no knowledge of the decision theory that might be strapped on post hoc to take actions, or of the downstream consequences (see the sketch after this list).
  5. Supervised learning models are nearly always trained with biased data and are often trained to optimize the wrong objective (e.g. clicks vs newsworthiness).
  6. To summarize as a Tweet: “Modern machine learning: We train the wrong models on the wrong data to solve the wrong problems & feed the results into the wrong software”.
  7. With ML + naive decision theory making consequential decisions, a concerned society asks (reasonably) for explanations.
  8. The machine learning community generally lacks the critical thinking skills to understand the question.
  9. While a niche of machine learning interpretability research has emerged, papers rarely identify what question they are asking, let alone provide answers.
  10. The research generally attempts to understand, mechanistically, “what patterns did the model learn?” but not “why are those patterns there?”
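To make points 3, 4, and 7 concrete, here is a minimal, purely illustrative sketch (not code from the talk; the data and feature names are invented): a classifier fit to a frozen snapshot of click logs, with a thresholding rule strapped on afterwards to decide what to show.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)

# A frozen snapshot of hypothetical click logs. The model sees only these
# static records; it knows nothing about the process that generated them.
X = rng.normal(size=(1000, 5))                     # invented article features
clicked = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)

# Supervised learning: estimate P(click | features). Nothing more.
model = LogisticRegression().fit(X, clicked)

# Naive decision theory, strapped on post hoc: show whatever scores above a
# threshold. "Probable click" quietly becomes a stand-in for "newsworthy".
def decide_to_show(article_features, threshold=0.5):
    p_click = model.predict_proba(article_features.reshape(1, -1))[0, 1]
    return p_click > threshold

print(decide_to_show(X[0]))
```

Nothing in this pipeline ever asks whether clicks were the right target, whether the logged data were biased, or what the resulting decisions do downstream.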

Also in the technical panel, Anupam Datta (CMU) discussed an approach for inferring whether a model has reconstructed any sensitive features (like race) in an intermediate representation. Krishna Gummadi (Max Planck Institute) presented an empirical case study of the explanations Facebook offers for the ads it shows.
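Datta’s framework is considerably more careful than this, but as a rough, hypothetical probing-style sketch of the general idea (all data, features, and numbers invented): train a model on its task without handing it the sensitive attribute, then check whether a simple probe can recover that attribute from an intermediate representation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)

# Invented data: 'race' is a sensitive attribute never given to the model,
# but it is correlated with some of the observed features.
n = 2000
race = rng.binomial(1, 0.5, size=n)
X = rng.normal(size=(n, 6)) + 0.8 * race[:, None] * np.array([1, 1, 0, 0, 0, 0])
y = (X[:, 0] + X[:, 2] + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, race_tr, race_te = train_test_split(
    X, y, race, random_state=0)

# Train the "black box" on the main task (no race feature supplied).
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)

# Recover the hidden-layer representation by hand from the fitted weights.
def hidden(X):
    return np.maximum(0, X @ net.coefs_[0] + net.intercepts_[0])  # ReLU units

# Probe: if a simple classifier predicts race well from the hidden layer,
# the model has effectively reconstructed the sensitive attribute.
probe = LogisticRegression().fit(hidden(X_tr), race_tr)
print("probe accuracy on held-out data:", probe.score(hidden(X_te), race_te))
```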

Alexandra Chouldechova presented a deep look at recidivism prediction: the practice of predicting, from features of an inmate, the likelihood that they will re-offend in the future. Of course, the ground-truth data only captures those inmates who both re-offend and get caught; we never get to see who commits crimes but doesn’t get caught. Typically, these predictions (probabilistic scores between 0 and 1) are used as risk scores to guide decisions regarding incarceration.

Whether recidivism predictions from supervised models represent a reasonable or a fundamentally flawed criterion for making parole or sentencing decisions was a recurring debate throughout both days of the conference. Personally, I’m inclined to believe that the entire practice of risk-based incarceration is fundamentally immoral/unfair, issues of bias aside.

Regardless of one’s take on the morality of risk-based incarceration, Chouldechova’s analysis was fascinating. In her talk, she motivated model comparison as a way of understanding the effects of a black-box algorithm’s decisions, comparing the scores assigned by COMPAS, a proprietary model, to simply relying upon the count of prior offenses as a measure of risk.

While predicting risk based on the number of prior offenses has the benefit of punishing only for crimes already committed (not future crimes forecasted), it has the drawback of disproportionately punishing older people who may have been prolific criminals in their youth but have since outgrown crime. For a proper dive into this line of research, see Chouldechova’s recent publication on the topic.
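Chouldechova’s analysis rests on real data and careful statistics; the following is only a simulated, illustrative sketch (every number invented) of how the two criteria can pull apart by age.

```python
import numpy as np

rng = np.random.RandomState(0)
n = 10000

age = rng.randint(18, 70, size=n)
# Invented generative story: prior offenses accumulate with years of
# exposure, while the propensity to re-offend declines with age.
priors = rng.poisson(lam=0.15 * (age - 17))
true_risk = 1 / (1 + np.exp(-(1.5 - 0.05 * age + 0.1 * priors)))

# Criterion 1: scaled count of prior offenses.
score_priors = priors / priors.max()

# Criterion 2: stand-in for a learned risk model (noisy version of the risk).
score_model = np.clip(true_risk + 0.05 * rng.normal(size=n), 0, 1)

for lo, hi in [(18, 30), (30, 45), (45, 70)]:
    mask = (age >= lo) & (age < hi)
    print(f"age {lo}-{hi}: prior-count score {score_priors[mask].mean():.2f}, "
          f"model score {score_model[mask].mean():.2f}, "
          f"simulated risk {true_risk[mask].mean():.2f}")
```

In this toy simulation, the prior-count score keeps rising with age even as the simulated re-offense risk falls, which is exactly the tension described above.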

The day concluded with a second panel of legal scholars. The Q&A section here exploded in fireworks as a lively debate ensued over what protections are actually afforded by the European Union’s recently passed General Data Protection Regulation (set to take effect in 2018). While I won’t recap the debate in full detail, it centered on Sandra Wachter’s Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. In the paper, Wachter interrogates the “ambiguity and limited scope of the [GDPR’s] ‘right not to be subject to automated decision-making’”, suggesting that this “raises questions over the protection actually afforded to data subjects” and “runs the risk of being toothless”.

The second day of the conference turned towards specific application areas. Panels addressed:

  • Health: Federico Cabitza, Rich Caruana, Francesca Rossi, Ignacio Cofone
  • Consumer credit: Dan Raviv (Lendbuzz), Aaron Rieke (Upturn), Frank Pasquale (University of Maryland), Yafit Lev-Aretz (NYU)
  • The media: Gilad Lotan (Buzzfeed), Nicholas Diakopoulos (University of Maryland), Brad Greenberg (Yale), Madelyn Sanfilippo (NYU)
  • The courts: Julius Adebayo (FastForward Labs), Paul Rifelj (Wisconsin Public Defenders), Andrea Roth (UC Berkeley), Amanda Levendowski (NYU)
  • Predictive policing: Jeremy Heffner (Hunchlab), Dean Esserman (Police Foundation), Kiel Brennan-Marquez (NYU), Rebecca Wexler (Yale)

As parting thoughts, the following themes recurred (in talks, in discussions, or in my private ruminations) throughout the conference:

  1. When we ask about explanations, who are they for?
    • Model builders?
    • Consumers?
    • Regulators?
  2. Discussions of the trade-off between accuracy and explainability are often ill-posed:
    • We often lose sight of the fact that models are typically optimized to do the wrong thing. Predicting clicks accurately is not the same thing as successfully choosing newsworthy content.
    • If we’re optimizing the wrong thing in the first place, how can we assess a trade-off between accuracy and explainability?
    • What does it mean to compare humans to algorithms quantitatively when the task is mis-specified?
  3. This research needs a home:
    • As with The Human Use of Machine Learning before it, this conference did a wonderful job of bringing together scholars from a variety of disciplines. Adding in FAT-ML, it appears that a solid community is coalescing to study the social impacts of machine learning.
    • However, for publishing, the community remains fractured. Purely technical contributions (a new algorithm or visualization technique, say) have a home in the traditional venues. And discussions of policy have a home in legal journals.
    • It’s not clear where truly interdisciplinary research belongs. The failure of machine learning publications to entertain critical papers seems problematic. Perhaps it’s time that a proper conference or journal emerged from this community?

Machine Learning Security at ICLR 2017

(This article originally appeared here. Thanks to Janos Kramar for his feedback on this post.)

The overall theme of the ICLR conference setting this year could be summarized as “finger food and ships”. More importantly, there were a lot of interesting papers, especially on machine learning security, which will be the focus of this post. (Here is a great overview of the topic.)


On the attack side, adversarial perturbations now work in physical form (if you print out the image and then take a picture) and they can also interfere with image segmentation. This has some disturbing implications for fooling vision systems in self-driving cars, such as preventing them from recognizing pedestrians. Adversarial examples are also effective at sabotaging neural network policies in reinforcement learning at test time.
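The attacks in the cited papers are more elaborate, but the basic mechanics are easy to sketch. Below is a minimal, untargeted FGSM-style perturbation in PyTorch, with a randomly initialized stand-in for a real vision model (everything here is illustrative, not the papers’ code).

```python
import torch
import torch.nn as nn

# Stand-in classifier; the cited work attacks large trained vision models.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

def fgsm(image, label, epsilon=0.1):
    """One-step fast-gradient-sign perturbation of a single image."""
    image = image.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Move each pixel a small step in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

x = torch.rand(1, 1, 28, 28)          # a fake "image"
y = torch.tensor([3])                 # its (fake) label
x_adv = fgsm(x, y)
print("prediction before:", model(x).argmax(dim=1).item(),
      "after:", model(x_adv).argmax(dim=1).item())
```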


In more encouraging news, adversarial examples are not entirely transferable between different models. For targeted examples, which aim to be misclassified as a specific class, the target class is not preserved when transferring to a different model. For example, if an image of a school bus is classified as a crocodile by the original model, it has at most 4% probability of being seen as a crocodile by another model. The paper introduces an ensemble method for developing adversarial examples whose targets do transfer, but this seems to only work well if the ensemble includes a model with a similar architecture to the new model.
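As a sketch of how such transfer rates are measured (again with toy stand-in models rather than the large, separately trained networks used in the paper), one can craft targeted examples against a source model and then check how often a second model predicts the same target class.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two independently initialized stand-in models.
model_a = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model_b = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

def targeted_attack(model, image, target, epsilon=0.2, steps=20):
    """Iteratively nudge the images toward a chosen target class."""
    adv = image.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(adv), target)
        loss.backward()
        with torch.no_grad():
            adv = (adv - epsilon / steps * adv.grad.sign()).clamp(0, 1)
    return adv.detach()

images = torch.rand(100, 1, 28, 28)
target = torch.full((100,), 7, dtype=torch.long)   # arbitrary target class

adv = targeted_attack(model_a, images, target)
hit_a = (model_a(adv).argmax(dim=1) == target).float().mean().item()
hit_b = (model_b(adv).argmax(dim=1) == target).float().mean().item()
print(f"target hit rate on source model: {hit_a:.2f}, on transfer model: {hit_b:.2f}")
```

With independent models, the target class typically does not survive the transfer, which is the paper’s point.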

On the defense side, there were some new methods for detecting adversarial examples. One method augments neural nets with a detector subnetwork, which works quite well and generalizes to new adversaries (if they are similar to or weaker than the adversary used for training). Another approach analyzes adversarial images using PCA, and finds that they are similar to normal images in the first few thousand principal components, but have a lot more variance in later components. Note that the reverse is not the case – adding arbitrary variation in trailing components does not necessarily encourage misclassification.
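The PCA diagnostic is easy to mimic on toy data. In this hedged sketch, a small random perturbation stands in for an adversarial one (a real attack is not random noise, but the diagnostic being described is the same): fit PCA to clean images and compare the variance carried by the trailing components.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)

# Toy "clean" images: low-dimensional structure plus mild noise.
basis = rng.normal(size=(20, 784))
clean = rng.normal(size=(2000, 20)) @ basis + 0.01 * rng.normal(size=(2000, 784))

# Stand-in for adversarial images: the same images plus a small,
# unstructured perturbation.
perturbed = clean + 0.05 * rng.normal(size=clean.shape)

pca = PCA().fit(clean)

def trailing_variance(images, k=100):
    """Variance of the projection onto the last k principal components."""
    coords = pca.transform(images)
    return coords[:, -k:].var()

print("clean trailing variance:    ", trailing_variance(clean))
print("perturbed trailing variance:", trailing_variance(perturbed))
```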

There has also been progress in scaling adversarial training to larger models and data sets; that work also found that higher-capacity models are more resistant to adversarial examples than lower-capacity ones. My overall impression is that adversarial attacks are still ahead of adversarial defenses, but the defense side is starting to catch up.
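The scaled-up adversarial training in that work has its own recipe and hyperparameters; the following is only a generic, minimal sketch of the core idea, mixing clean and one-step adversarial examples in each training batch (random tensors stand in for a real data loader).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def perturb(x, y, epsilon=0.1):
    """One-step adversarial perturbation against the current model."""
    x = x.clone().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

for step in range(100):                  # stand-in for a real data loader
    x = torch.rand(32, 1, 28, 28)
    y = torch.randint(0, 10, (32,))
    x_adv = perturb(x, y)

    optimizer.zero_grad()
    # Train on a mixture of clean and adversarial examples.
    loss = 0.5 * loss_fn(model(x), y) + 0.5 * loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
```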


Press Failure: The Guardian’s “Meet Erica”

“Meet Erica, the world’s most human-like autonomous android.” From its title alone, this documentary promises a sensational encounter. As the screen fades in from black, a marimba tinkles lightly in the background and a Japanese alleyway appears. Various narrators ask us:

“What does it mean to think?”

“What is human creativity?”

“What does it mean to have a personality?”

“What is an interaction?”

“What is a minimal definition of humans?”

The title, these questions, and nearly everything that follows mislead. This article is an installment in a series of posts addressing the various sources of misinformation feeding the present AI hype cycle.

Continue reading “Press Failure: The Guardian’s “Meet Erica””

DeepMind Solves AGI, Summons Demon

In recent years, the rapid advance of artificial intelligence has evoked cries of alarm from billionaire entrepreneur Elon Musk and legendary physicist Stephen Hawking. Others, including the eccentric futurist Ray Kurzweil, have embraced the coming of true machine intelligence, suggesting that we might merge with the computers, gaining superintelligence and immortality in the process. As it turns out, we may not have to wait much longer.

This morning, a group of research scientists at Google DeepMind announced that they had inadvertently solved the riddle of artificial general intelligence (AGI). Their approach relies upon a beguilingly simple technique called symmetrically toroidal asynchronous bisecting convolutions. By the year’s end, Alphabet executives expect that these neural networks will exhibit fully autonomous self-improvement. What comes next may affect us all.

Continue reading “DeepMind Solves AGI, Summons Demon”

Notes on Response to “The AI Misinformation Epidemic”

On Monday, I posted an article titled The AI Misinformation Epidemic. The article introduces a series of posts that will critically examine the various sources of misinformation underlying this AI hype cycle.

The post came about for the following reason: while I had contemplated the idea for weeks, I couldn’t choose which among the many factors to focus on and which to exclude. My solution was to break the issue down into several narrower posts. The AI Misinformation Epidemic introduced the problem, sketched an outline for the series, and articulated some preliminary philosophical arguments.

To my surprise, it stirred up a frothy reaction. In the span of three days, the site drew over 36,000 readers. To date, the article has received 68 comments on the original post, 274 on Hacker News, and 140 on the machine learning subreddit.

To ensure that my post contributes as little novel misinformation as possible, I’d like to briefly address the response to the article and some common misconceptions shared by many comments. Continue reading “Notes on Response to “The AI Misinformation Epidemic””

The AI Misinformation Epidemic

Interest in machine learning may be at an all-time high. Per Google Trends, people are searching for machine learning nearly five times as often as five years ago. And at the University of California San Diego (UCSD), where I’m presently a PhD candidate, we had over 300 students enrolled in both our graduate-level recommender systems and neural networks courses.

Much of this attention is warranted. Breakthroughs in computer vision, speech recognition, and, more generally, pattern recognition in large data sets, have given machine learning substantial power to impact industry, society, and other academic disciplines.

Continue reading “The AI Misinformation Epidemic”

Fake News Challenge – Revised and Revisited

The organizers of the Fake News Challenge have subjected it to a significant overhaul. In light of these changes, many of my criticisms of the challenge no longer apply.

Some context:

Last month, I posted a critical piece addressing the fake news challenge. Organized by Dean Pomerleau and Delip Rao, the challenge aspires to leverage advances in machine learning to combat the epidemic viral spread of misinformation that plagues social media. The original version of the challenge asked teams to take a claim, such as “Hillary Clinton eats babies”, and output a prediction of its veracity together with supporting documentation (links culled from the internet). Presumably, their hope was that an on-the-fly artificially-intelligent fact checker could be integrated into social media services to stop people from unwittingly sharing fake news.

My response criticized the challenge as ill-specified (fake-ness not defined), circular (how do we know the supporting documents are legit?), and infeasible (are teams supposed to comb the entire web?).

Continue reading “Fake News Challenge – Revised and Revisited”

The Deception of Supervised Learning – V2

[This article is a revised version reposted with permission from KDnuggets]

Imagine you’re a doctor tasked with choosing a cancer therapy. Or a Netflix exec tasked with recommending movies. You have a choice. You could think hard about the problem and come up with some rules. But these rules would be overly simplistic, not personalized to the patient or customer. Alternatively, you could let the data decide what to do!

The ability to programmatically make intelligent decisions by learning complex decision rules from big data is a major selling point of machine learning. Leaps forward in the predictive accuracy of supervised learning techniques, especially deep learning, now yield classifiers that outperform human predictive accuracy on many tasks. We can guess how an individual will rate a movie, classify images, or recognize speech with jaw-dropping accuracy. So why not make our services smart by letting the data tell us what to do?

Continue reading “The Deception of Supervised Learning – V2”

Is Fake News a Machine Learning Problem?

On Friday, Donald J. Trump was sworn in as the 45th president of the United States. The inauguration followed a bruising primary and general election, in which social media played an unprecedented role. In particular, the proliferation of fake news emerged as a dominant storyline. Throughout the campaign, explicitly false stories circulated through the internet’s echo chambers. Some fake stories originated as rumors, others were created for profit and monetized with click-based advertisements, and according to US Director of National Intelligence James Clapper, many fake stories were orchestrated by the Russian government with the intention of influencing the results. While it is not possible to observe the counterfactual, many believe that the election’s outcome hinged on the influence of these stories.

For context, consider one illustrative case as described by the New York Times. On November 9th, 35-year-old marketer Eric Tucker tweeted a picture of several buses, claiming that they were transporting paid protesters to demonstrate against Trump. The post quickly went viral, receiving over 16,000 shares on Twitter and 350,000 shares on Facebook. Trump and his surrogates joined in, promoting the story through social media. Tucker’s claim turned out to be a fabrication. Nevertheless, it likely reached millions of people, more than many conventional news stories.

A number of critics cast blame on technology companies like Facebook, Twitter, and Google, suggesting that they have a responsibility to address the fake news epidemic because their algorithms influence who sees which stories. Some linked the fake news phenomenon to the idea that personalized search results and news feeds create a filter bubble, a dynamic in which readers only encounter stories that they are likely to click on, comment on, or like. As a consequence, readers might only encounter stories that confirm pre-existing beliefs.

Facebook, in particular, has been strongly criticized for its trending news widget, which operated (at the time) without human intervention, giving viral items a spotlight, however defamatory or false. In September, Facebook’s trending news box promoted a story titled ‘Michele Obama was born a man’. Some have wondered why Facebook, despite its massive investment in artificial intelligence (machine learning), hasn’t developed an automated solution to the problem.

Continue reading “Is Fake News a Machine Learning Problem?”

Policy Field Notes: NIPS Update

By Jack Clark and Tim Hwang. 

Conversations about the social impact of AI are often very abstract, focusing on broad generalizations about technology rather than on the specific state of the research field. That makes it challenging to have a full conversation about what good public policy regarding AI would be like. In the interest of helping to bridge that gap, Jack Clark and I have been playing around with doing recaps that’ll take a selection of papers from a recent conference and talk about the longer-term policy implications of the work. This one covers papers that appeared at NIPS 2016.

If it’s helpful to the community, we’ll plan to roll out similar recaps throughout 2017 — with the next one being ICLR in April.

Continue reading “Policy Field Notes: NIPS Update”