The Deception of Supervised Learning – V2

[This article is a revised version reposted with permission from KDnuggets]

Imagine you’re a doctor tasked with choosing a cancer therapy. Or a Netflix exec tasked with recommending movies. You have a choice. You could think hard about the problem and come up with some rules. But these rules would be overly simplistic, not personalized to the patient or customer. Alternatively, you could let the data decide what to do!

The ability to programmatically make intelligent decisions by learning complex decision rules from big data is a major selling point of machine learning. Leaps forward in the predictive accuracy of supervised learning techniques, especially deep learning, now yield classifiers that outperform humans on many prediction tasks. We can guess how an individual will rate a movie, classify images, or recognize speech with jaw-dropping accuracy. So why not make our services smart by letting the data tell us what to do?

Here’s the rub.

While the supervised paradigm is but one of several in the machine learning canon, nearly all machine learning deployed in the real world amounts to supervised learning. And supervised learning methods don't tell us to do anything. That is, the theory and conception of supervised learning addresses pattern recognition but disregards the notion of interaction with an environment altogether.

[Quick crash course: in supervised learning, we collect a dataset of input-output (X, Y) pairs. The learning algorithm then uses this data to train a model. This model is simply a mapping from inputs to outputs. Now, given a new input (such as a [drug, patient] pair), we can predict a likely output (say, 5-year survival). We determine the quality of the model by assessing its performance (say, error rate or mean squared error) on hold-out data.]
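In code, that recipe fits in a few lines. Here is a minimal sketch using scikit-learn on a synthetic dataset; the features and labels are illustrative stand-ins, not clinical data:

```python
# A minimal sketch of the supervised learning recipe described above,
# using scikit-learn and synthetic (X, y) pairs rather than real data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Collect a dataset of input-output (X, y) pairs.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Reserve hold-out data for assessing the model's quality later.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Train a model: simply a mapping from inputs to outputs.
model = LogisticRegression().fit(X_train, y_train)

# Given new inputs, predict likely outputs, and report hold-out performance.
print("hold-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```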


Now suppose we train a model to predict 5-year survival given some features of the patient and the assigned treatment protocol. The survival model that we train doesn't know why drug A was prescribed to some patients and not others. And it has no way of knowing what will happen when you apply drug A to patients who previously wouldn't have received it. That's because supervised learning relies on the i.i.d. assumption. In short, this means that we believe the training data is representative of the future data we will encounter.

But any time we introduce a decision protocol based on machine learning to the world, we change the world. This violates the fundamental axioms upon which our model is based. Once we alter the distribution of future data, we should expect to invalidate our entire model.

For some tasks, like speech recognition, these concerns seem remote. Use of a voice transcription tool might not, in the short run, change how we speak. But in more dynamic contexts, the risks should alarm us.

For example, Rich Caruana of Microsoft Research described a real-life model trained to predict risk of death for pneumonia patients. Presumably, this information could be used to aid in triage. The model, however, showed that asthma was predictive of lower risk. This was a true correlation in the data, but it owed to the more aggressive treatment such co-morbid patients received. Put simply, a researcher taking actions based on this information would be mistaking correlation for causation. And if a hospital used the risk score for triage, it would recklessly put the asthma patients at risk, thus invalidating the learned model.
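To make the failure mode concrete, here is a toy simulation of my own construction (not Caruana's data or model): a flag analogous to asthma looks protective under the historical treatment policy, so a model trained on that history badly underestimates risk once the policy changes.

```python
# Toy simulation (illustrative assumptions, not real clinical data) of a
# spurious "protective" feature that stops being protective once the
# treatment policy changes, breaking the i.i.d. assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20000

def simulate(flag_triggers_treatment):
    flagged = rng.binomial(1, 0.3, n)                    # co-morbidity flag (e.g. asthma)
    treated = flagged if flag_triggers_treatment else np.zeros(n, dtype=int)
    p_bad = 0.4 - 0.25 * treated                         # aggressive treatment truly helps
    bad_outcome = rng.binomial(1, p_bad)
    return flagged.reshape(-1, 1), bad_outcome

# Under the historical policy, the flag looks protective, and the model learns that.
X_hist, y_hist = simulate(flag_triggers_treatment=True)
model = LogisticRegression().fit(X_hist, y_hist)
print("learned risk for flagged patients:", model.predict_proba([[1]])[0, 1])

# Deploy under a new triage policy that deprioritizes flagged patients:
# the learned pattern no longer holds, and risk is badly underestimated.
X_new, y_new = simulate(flag_triggers_treatment=False)
print("actual bad-outcome rate for flagged patients:", y_new[X_new[:, 0] == 1].mean())
```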

Supervised models can’t tell us what to do because they fundamentally ignore the entire idea of an action. So what do people mean when they say that they act based on a model? Or when they say that the model (or the data) tells them what to do? How is Facebook’s newsfeed algorithm curating stories? How is Netflix’s recommender system curating movies?

Usually what people mean is simply that they strap an ad-hoc decision protocol onto a supervised predictive model. Say we have a model that takes a pair of patient and drug and predicts a probability of patient survival.

A typical ad hoc rule might say that we should give the drug that maximizes the predicted probability of survival.

$$\text{drug}^{*} = \underset{\text{drug}}{\arg\max}\; \hat{P}\left(\text{survival} \mid \text{patient}, \text{drug}\right)$$
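In code, that strapped-on protocol might look like the following sketch; the synthetic records and the encoding of the drug as a single extra feature are assumptions made for illustration.

```python
# Sketch of an ad hoc decision protocol layered on a supervised survival
# model. The data and feature encoding are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, n_features = 5000, 5

# Historical records: patient features, the drug they received, 5-year survival.
patients = rng.normal(size=(n, n_features))
drugs = rng.binomial(1, 0.5, n)
survived = rng.binomial(1, 0.5 + 0.1 * drugs)

# Supervised model: (patient, drug) -> predicted probability of survival.
model = LogisticRegression().fit(np.column_stack([patients, drugs]), survived)

def choose_drug(patient, candidate_drugs=(0, 1)):
    """Ad hoc rule: give the drug that maximizes predicted P(survival)."""
    scores = [model.predict_proba(np.append(patient, d).reshape(1, -1))[0, 1]
              for d in candidate_drugs]
    return candidate_drugs[int(np.argmax(scores))]

print("recommended drug for the first patient:", choose_drug(patients[0]))
```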

Problematically, any classifier we might normally train would learn patterns that are contingent upon the historical standard of care. For drug A, the model might predict better outcomes because the drug truly causes better outcomes. For drug B, causality might be reversed: the healthier patients might have been more likely to receive drug B, and survived simply because they were healthier in the first place.
While oncologists are not so reckless as to employ this reasoning willy-nilly, it’s precisely the logic that underlies less consequential recommender systems all over the internet. Netflix might not account for how its recommendations influence your viewing habits, and Facebook’s algorithms likely don’t account for the effects of curation on reader behavior.

Failures to account for causality and for interaction with the environment are but two among many deceptions underlying the modern use of supervised learning. Other, less fundamental, issues abound. For example, we often optimize surrogate objectives that only faintly resemble our true objectives. Search engines assume that mouse clicks indicate accurately answered queries. This means that if, in a momentary lapse of spine, you click on a clickbait story, the model serving it registers a job well done.

Some other issues to heap on the laundry list of common deceptions:

  • Failure to account for real-life cost sensitivity
  • Erroneous interpretation of predicted probabilities as quantifications of uncertainty
  • Ignoring differences between constructed training sets and real world data

The overarching point here is that problem formulation for most machine learning systems can be badly mismatched against the real-world problems we’re trying to solve. This mismatch leads people to wonder whether they can “trust” machine learning models.

Some machine learners suggest that the desire for an interpretation will pass – that it reflects an unease which will abate if the models are "good enough". But good enough at what? Minimizing cross-entropy loss on a surrogate task while using a toy dataset?

So where do we go from here?

Model Interpretability

One solution is to go ahead and throw caution to the wind but then to interrogate the models to see if they’re behaving acceptably.
These efforts seek to interpret models to mitigate the mismatch between real and optimized objectives. The idea behind most work in interpretability is that in addition to the predictions required by our evaluation metrics, models should yield some additional information, which we term an interpretation. Interpretations can come in many varieties, notably transparency and post-hoc interpretability.
The idea behind transparency is that we can introspect the model and determine precisely what it's doing. Unfortunately, the most useful models aren't especially transparent. Post-hoc interpretations, on the other hand, refer to techniques for extracting explanations, even from models we can't quite introspect. In a recent paper (https://arxiv.org/abs/1606.03490), I attempt a broad taxonomy of both the objectives and techniques for interpreting supervised models.
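As one concrete instance of a post-hoc technique (a generic illustration, not a method proposed in that paper), permutation feature importance probes an already-fitted black-box model by shuffling one feature at a time and measuring how much the hold-out score drops:

```python
# Post-hoc interpretation of a black-box classifier via permutation
# feature importance, on a synthetic dataset for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model itself is not transparent, but we can still probe it after the fact.
model = GradientBoostingClassifier().fit(X_train, y_train)

# Shuffle each feature on hold-out data and measure the drop in accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```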


Upgrade to More Sophisticated Paradigms of Learning

Another solution might be to close the gap between the real and modeled objectives. Some problems, like cost sensitivity, can be addressed within the supervised learning paradigm. Others, like causality, might require us to pursue fundamentally more powerful models of learning. Reinforcement learning (RL), for example, directly models an agent acting within a sequential decision-making process. The framework captures the causal effects of taking actions and accounts for a data distribution that changes as the policy is modified. Unfortunately, RL techniques have so far only been reduced to practice on toy problems with relatively small action spaces.
Notable advances include Google DeepMind's Atari and Go-playing agents.

Several papers by groups including Steve Young’s lab at Cambridge (paper), the research team at Montreal startup Maluuba (arxiv.org/abs/1606.03152), and my own work with Microsoft Research’s Deep Learning team (arxiv.org/abs/1608.05081), seek to extend this progress into the more practically useful realm of dialogue systems.

Using RL in critical settings like medical care poses its own thorny set of problems. For example, RL agents typically learn by exploration. You could think of exploration as running an experiment. Just like a doctor might run a randomized trial, the RL agent periodically takes randomized actions, using the information gained to guide continued improvement of its policy. But when is it OK to run experiments with human subjects?
To do any research on human subjects, even the most respected researchers are required to submit to an ethics board. Can we then turn relatively imbecilic agents loose to experiment on human subjects absent oversight?
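For readers unfamiliar with what exploration looks like in practice, here is a minimal epsilon-greedy sketch (with made-up reward probabilities): the agent mostly exploits its current best guess but periodically takes a randomized action, exactly the kind of small experiment whose ethics are at issue when the subjects are people.

```python
# Minimal epsilon-greedy bandit sketch: the agent occasionally "runs an
# experiment" by acting at random. Reward probabilities are invented.
import numpy as np

rng = np.random.default_rng(0)
true_success_rates = [0.3, 0.5, 0.7]        # unknown to the agent
n_actions, epsilon, n_steps = 3, 0.1, 10000

counts = np.zeros(n_actions)
values = np.zeros(n_actions)                # running estimates of action values

for _ in range(n_steps):
    if rng.random() < epsilon:
        action = int(rng.integers(n_actions))   # explore: randomized action
    else:
        action = int(np.argmax(values))         # exploit: current best guess
    reward = rng.binomial(1, true_success_rates[action])
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]   # incremental mean

print("estimated action values:", np.round(values, 2))
```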

Conclusions

Supervised learning is simultaneously unacceptable and inadequate, and yet, at present, it is the most practically useful tool at our disposal. It is only reasonable to pillory the paradigm with criticism. Nonetheless, I'd propose the following takeaways:

  1. We should aspire to unseat the primacy of strictly supervised solutions. Improvements in reinforcement learning offer a promising alternative.
  2. Even within the supervised learning paradigm, we should work harder to eliminate those flaws of problem formulation that are avoidable.
  3. We should remain suspicious of the behavior of live systems, and devise mechanisms to both understand them and provide guard-rails to protect against unacceptable outcomes.

Is Fake News a Machine Learning Problem?

On Friday, Donald J. Trump was sworn in as the 45th president of the United States. The inauguration followed a bruising primary and general election, in which social media played an unprecedented role. In particular, the proliferation of fake news emerged as a dominant storyline. Throughout the campaign, explicitly false stories circulated through the internet's echo chambers. Some fake stories originated as rumors, others were created for profit and monetized with click-based advertisements, and, according to US Director of National Intelligence James Clapper, many were orchestrated by the Russian government with the intention of influencing the results. While it is not possible to observe the counterfactual, many believe that the election's outcome hinged on the influence of these stories.

For context, consider one illustrative case as described by the New York Times. On November 9th, 35-year-old marketer Erik Tucker tweeted a picture of several buses, claiming that they were transporting paid protesters to demonstrate against Trump. The post quickly went viral, receiving over 16,000 shares on Twitter and 350,000 shares on Facebook. Trump and his surrogates joined in, promoting the story through social media. Tucker's claim turned out to be a fabrication. Nevertheless, it likely reached millions of people, more than many conventional news stories.

A number of critics cast blame on technology companies like Facebook, Twitter, and Google, suggesting that they have a responsibility to address the fake news epidemic because their algorithms influence who sees which stories. Some linked the fake news phenomenon to the idea that personalized search results and news feeds create a filter bubble, a dynamic in which readers only encounter stories that they are likely to click on, comment on, or like. As a consequence, readers might only encounter stories that confirm pre-existing beliefs.

Facebook, in particular, has been strongly criticized for its trending news widget, which operated (at the time) without human intervention, giving viral items a spotlight, however defamatory or false. In September, Facebook's trending news box promoted a story titled 'Michele Obama was born a man'. Some have wondered why Facebook, despite its massive investment in artificial intelligence (machine learning), hasn't developed an automated solution to the problem.

Continue reading “Is Fake News a Machine Learning Problem?”

Policy Field Notes: NIPS Update

By Jack Clark and Tim Hwang. 

Conversations about the social impact of AI are often very abstract, focusing on broad generalizations about technology rather than on the specific state of the research field. That makes it challenging to have a full conversation about what good public policy regarding AI would look like. In the interest of helping to bridge that gap, Jack Clark and I have been playing around with doing recaps that'll take a selection of papers from a recent conference and talk about the longer-term policy implications of the work. This one covers papers that appeared at NIPS 2016.

If it’s helpful to the community, we’ll plan to roll out similar recaps throughout 2017 — with the next one being ICLR in April.

Continue reading “Policy Field Notes: NIPS Update”

AI Safety Highlights from NIPS 2016

[This article is cross-posted from my blog. Thanks to Jan Leike, Zachary Lipton, and Janos Kramar for providing feedback on this post.]

This year’s Neural Information Processing Systems conference was larger than ever, with almost 6000 people attending, hosted in a huge convention center in Barcelona, Spain. The conference started off with two exciting announcements on open-sourcing collections of environments for training and testing general AI capabilities – the DeepMind Lab and the OpenAI Universe. Among other things, this is promising for testing safety properties of ML algorithms. OpenAI has already used their Universe environment to give an entertaining and instructive demonstration of reward hacking that illustrates the challenge of designing robust reward functions.

I was happy to see a lot of AI-safety-related content at NIPS this year. The ML and the Law symposium and Interpretable ML for Complex Systems workshop focused on near-term AI safety issues, while the Reliable ML in the Wild workshop also covered long-term problems. Here are some papers relevant to long-term AI safety:

Continue reading “AI Safety Highlights from NIPS 2016”

Machine Learning Meets Policy: Reflections on HUML 2016

Last Friday, the University of Ca’ Foscari in Venice organized an IEEE workshop on the Human Use of Machine Learning (HUML 2016). The workshop, held at the European Centre for Living Technology, hosted roughly 30 participants and broadly addressed the social impacts and ethical problems stemming from the wide-spread use of machine learning.

HUML joins a growing number of workshops for critical voices in the ML community. These include Fairness, Accountability and Transparency in Machine Learning (FAT-ML), the #Data4Good workshop at ICML 2016, Human Interpretability of Machine Learning (WHI), held this year at ICML, and Interpretable ML for Complex Systems, held this year at NIPS. Among this company, HUML was especially notable for its diversity of perspectives. While FAT-ML, #Data4Good, and WHI featured presentations primarily by members of the machine learning community, HUML brought together scholars from philosophy of science, law, predictive policing, and machine learning.

Continue reading “Machine Learning Meets Policy: Reflections on HUML 2016”

Clopen AI: Openness in different aspects of AI development

[This article is cross-posted from my blog. Thanks to Jelena Luketina and Janos Kramar for their detailed feedback on this post.]


There has been a lot of discussion about the appropriate level of openness in AI research in the past year – the OpenAI announcement, the blog post Should AI Be Open?, a response to the latter, and Nick Bostrom’s thorough paper Strategic Implications of Openness in AI development.

There is disagreement on this question within the AI safety community as well as outside it. Many people are justifiably afraid of concentrating power to create AGI and determine its values in the hands of one company or organization. Many others are concerned about the information hazards of open-sourcing AGI and the resulting potential for misuse. In this post, I argue that some sort of compromise between openness and secrecy will be necessary, as both extremes of complete secrecy and complete openness seem really bad. The good news is that there isn’t a single axis of openness vs secrecy – we can make separate judgment calls for different aspects of AGI development, and develop a set of guidelines.

Continue reading “Clopen AI: Openness in different aspects of AI development”

Are Deep Neural Networks Creative? v2

[This article is a revised version reposted with permission from KDnuggets]

Are deep neural networks creative? Given recent press coverage of art-generating deep learning, it might seem like a reasonable question. In February, Wired wrote of a gallery exhibition featuring works generated by neural networks. The works were created using Google's inceptionism, a technique that transforms images by iteratively modifying them to enhance the activation of specific neurons in a deep net. Many of the images appear trippy, with rocks transforming into buildings or leaves into insects. Several other researchers have proposed techniques for generating images from neural networks for their aesthetic or stylistic qualities. One method, introduced by Leon Gatys of the University of Tübingen in Germany, can extract the style from one image (say, a painting by Van Gogh) and apply it to the content of another image (say, a photograph).

In the academic sphere, work on generative image modeling has emerged as a hot research topic. Generative adversarial networks (GANs), introduced by Ian Goodfellow, synthesize novel images by modeling the distribution of seen images. Already, some researchers have looked into ways of using GANs to perturb natural images, as by adding smiles to photos.


In parallel, researchers have also made rapid progress on generative language modeling. Character-level recurrent neural network (RNN) language models now permeate the internet, appearing to hallucinate passages of Shakespeare, Linux source code, and even Donald Trump’s Twitter eruptions. Not surprisingly, a wave of papers and demos soon followed, using LSTMs for generating rap lyrics and poetry.

Clearly, these advances emanate from interesting research and deserve the fascination they inspire.

In this post, rather than address the quality of the work (which is admirable), or explain the methods (which has been done ad nauseam), we’ll instead address the question, can these nets reasonably be called creative? Already, some make the claim. The landing page for deepart.io, a site which commercializes the “Deep Style” work, proclaims “TURN YOUR PHOTOS INTO ART”. If we accept creativity as a prerequisite for art, the claim is made here implicitly.

Continue reading “Are Deep Neural Networks Creative? v2”

The Failure of Simple Narratives

Approximately Correct is not a political blog in any traditional sense. The mission is not to prognosticate elections, like FiveThirtyEight, nor to revel in the political circus, like Politico. And common-variety political writing seems antithetical to our goals. Today, political arguments tend to follow an anti-scientific pattern of choosing a perspective first and then selectively reaching for supporting evidence. It's everything we should hope to avoid.

But, per our mission statement, this blog aims to address the intersection of scientific and technical developments with social issues. And social issues (the economy, the environment, healthcare, news curation, et al.) are necessarily political. Moreover, scientific practice requires dispassionate discourse and the ability to change one's beliefs given new information. In this light, the abstention of scientists from political discourse seems irresponsible.

[An aside: Not all political issues are scientific or technical. The relative value of free speech vs the danger of hate speech may be an intrinsically subjective judgment. But many issues, such as global warming, explicitly exhibit scientific dimensions.]

Technical developments can necessitate policy shifts. Absent the capacity to warm the planet or the ability to detect such warming, one couldn’t justify strong reforms to energy policy. Additionally, absent scientific understanding of the likely effects of policy, one cannot argue effectively for or against them. So sober scientific analysis has a role to play not just in evaluating policies, but also in evaluating individual arguments.

Machine learning and data science interact with politics in a third important way. The political landscapes of entire nations are immense. Take last night’s presidential election for example. Roughly 120 million people voted in 3,007 counties, 435 congressional districts and 50 states. Hardly any citizens have visited every state. Not even the candidates could possibly visit every county. Thus, our sense of the nation’s pulse, and our narratives regarding the driving forces in the election are ultimately shaped by a mixture of second-hand accounts and data science (as by extensive polling).

Simplistic Narratives

Simplistic narratives and data science play off of each other. Narratives influence the questions that pollsters ask. And each poll result invites simplistic analysis. In the remainder of this post, without expressing my personal opinions, I'd like to give a dispassionate analysis of several popular stories that have risen to prominence during this election, sampled from across both the Democratic-Republican and establishment/anti-establishment divides. I choose these narratives neither because they are completely true nor completely false. Each presents a seemingly simple thesis that belies more complex realities. To be as even-handed as possible, I've chosen one each from the Clinton-leaning and Trump-leaning narratives. Continue reading "The Failure of Simple Narratives"

The Foundations of Algorithmic Bias

This morning, millions of people woke up and impulsively checked Facebook. They were greeted immediately by content curated by Facebook’s newsfeed algorithms. To some degree, this news might have influenced their perceptions of the day’s news, the economy’s outlook, and the state of the election. Every year, millions of people apply for jobs. Increasingly, their success might lie in part in the hands of computer programs tasked with matching applications to job openings. And every year, roughly 12 million people are arrested. Throughout the criminal justice system, computer-generated risk-assessments are used to determine which arrestees should be set free. In all these situations, algorithms are tasked with making decisions. 

Algorithmic decision-making mediates more and more of our interactions, influencing our social experiences, the news we see, our finances, and our career opportunities. We task computer programs with approving lines of credit, curating news, and filtering job applicants. Courts even deploy computerized algorithms to predict "risk of recidivism", the probability that an individual relapses into criminal behavior. It seems likely that this trend will only accelerate as breakthroughs in artificial intelligence rapidly broaden the capabilities of software.


Turning decision-making over to algorithms naturally raises worries about our ability to assess and enforce the neutrality of these new decision makers. How can we be sure that the algorithmically curated news doesn't have a political party bias, or that job listings don't reflect a gender or racial bias? What other biases might our automated processes be exhibiting that we wouldn't even know to look for?

Continue reading “The Foundations of Algorithmic Bias”

Mission Statement

This post introduces approximatelycorrect.com. The aspiration for this blog is to offer a critical perspective on machine learning. We intend to cover both technical issues and the fuzzier problems that emerge when machine learning intersects with society.

For explaining the technical details of machine learning, we enter a lively field. As recent breakthroughs in machine learning have attracted mainstream interest, many blogs have stepped up to provide high quality tutorial content. But at present, critical discussions on the broader effects of machine learning lag behind technical progress.

On one hand, this seems natural. First a technology must exist before it can have an effect. Consider the use of machine learning for face recognition. For many decades, the field has accumulated extensive empirical knowledge. But until recently, with the technology reduced to practice, any consideration of how it might be used could only be speculative.

But the precarious state of the critical discussion owes to more than chronology. It also owes to culture, and to the rarity of the relevant interdisciplinary expertise. The machine learning community traditionally investigates scientific questions. Papers address well-defined theoretical problems, or empirically compare methods with well-defined objectives. Unfortunately, many pressing issues at the intersection of machine learning and society do not admit such crisp formulations. But, with notable exceptions, consideration of social issues within the machine learning community remains too rare.

Conversely, those academics and journalists best equipped to consider economic and social issues rarely possess the requisite understanding of machine learning to anticipate the plausible ways the two arenas might intersect. As a result, coverage in the mainstream consistently misrepresents the state of research, misses many important problems, and hallucinates others. Too many articles address Terminator scenarios, overstate the near-term plausibility of human-like machine consciousness, or assume the existence (at present) of self-motivated machines with their own desiderata. Too few consider the precise ways that machine learning may amplify biases or perturb the job market.

In short, we see this troubling scenario:

  1. Machine learning models increasingly find industrial use, assisting in credit decisions, recognizing faces in police databases, curating the news on social networks, and enabling self-driving cars. 
  2. The majority of knowledgeable people in the machine learning community, with notable exceptions, are not in the habit of considering the relevant economic, social, and other philosophical issues.
  3. Those in the habit of considering the relevant issues rarely possess the relevant machine learning expertise.

Complicating matters, mainstream discussion of AI-related technologies introduces speculative or spurious ideas alongside sober ones without communicating uncertainty clearly. For example, the likelihood of a machine learning classifier making a mistake on a new example, the likelihood of machine learning causing massive unemployment, and the likelihood of the entire universe being a simulation run by agents in some meta-universe are all discussed as though they can be assigned some common form of probability.

Compounding the lack of rigor, we also observe a hype cycle in the media. PR machines actively promote sensational views of the work currently done in machine learning, even as the researchers doing that work view the hype with suspicion. The press also has an incentive to run with the hype: sensational news sells papers. The New York Times has referenced “Terminator” and “artificial intelligence” in the same story 5,000 times. It’s referenced “Terminator” and “machine learning” together in roughly 750 stories.

In this blog, we plan to bridge the gap between technical and critical discussions, treating both methodology and consequences as first-class concerns. 

One aspiration of this blog will be to communicate honestly about certainty. We hope to maintain a sober, academic voice, even when writing informally or about issues that aren't strictly technical. While many posts will express opinions, we aspire to clearly indicate which statements are theoretical facts, which may be speculative but reflect a consensus of experts, and which are wild thought experiments. We also plan to discuss immediate issues, such as employment, alongside more speculative consideration of what future technology we might anticipate. In all cases we hope to clearly indicate scope. In service of this goal, and in reference to the theory of learning, we adopt the name Approximately Correct. We hope to be as correct as possible as often as possible, and to honestly convey our confidence.