Cathy O’Neil Sleepwalks into Punditry

On the British anthology series Black Mirror, each episode explores a near-future dystopia. In each, a small extrapolation from current technological trends leads us into a terrifying future. The series calls to mind modern-day Cassandras like Cathy O’Neil, who has made a second career out of urging caution against algorithmic decision-making run amok. In particular, she warns that algorithmic decision-making systems, if implemented carelessly, might increase inequality, twist incentives, and perpetuate undesirable feedback loops. For example, a predictive policing system might direct aggressive policing toward poor neighborhoods, drive up arrests, depress employment, orphan children, and lead, ultimately, to more crime.

In “Fifteen Million Merits”, the second episode of Black Mirror’s first season, most people spend their days cycling on exercise bicycles to power their surroundings and to earn “merits”. People live in small, dormitory-like rooms where all four walls are screens that constantly play advertisements. The primary economic activity consists of spending merits to skip ads and to watch pornography.

Following an incident in which Abi, his love interest, is conscripted into pornography, Bing, the episode’s protagonist, develops a plan. He finds and hides a giant shard of glass and pedals relentlessly until he has saved enough merits to buy entry to a prominent TV talent show. Once televised, Bing pulls the shard of glass up to his neck, threatening an on-air suicide while he rants about the system to his captive audience. Surprisingly, the judges and audience love the performance, and in short order, Bing is awarded his own television show. Thereafter, in each episode, he pulls the now-legendary shard of glass to his neck and delivers a predictable rant. Afterwards, the advertisements roll.

Perhaps the darkest suggestion in this episode is that the system is so robust that it even has a mechanism to defang contrarians. Turn them into pundits!

The Misfire

This week, Cathy O’Neil wrote a puzzling op-ed in The New York Times, “The Ivory Tower Can’t Keep Ignoring Tech”. For many of us who have worked seriously to study the social impacts of machine learning, the article was disappointing in several ways. First, its primary assertion, that scholars are ignoring the social impacts of machine learning, landed strangely with the living counterexamples. Second, the article itself was disappointing as a work of critical writing.

In a reply posted to Medium, Solon Barocas (Cornell University), Sorelle Friedler (Haverford College), Moritz Hardt (University of California, Berkeley), Joshua A. Kroll (University of California, Berkeley), Kristian Lum (Human Rights Data Analysis Group), and Suresh Venkatasubramanian (University of Utah), all researchers and faculty doing excellent work in this area, pushed back on O’Neil’s assertions. Among other things, they point out that the newly established FAT* Conference (on fairness, accountability, and transparency in tech) and the series of FAT-ML workshops that it grew out of are shining examples of the cohesive research community whose existence O’Neil fails to acknowledge.

While O’Neil says that “academics have been asleep at the wheel”, she doesn’t offer a single sentence addressing any recent or ongoing academic work, either to hold it up or to criticize it. Note that among computational social scientists, theorists studying fairness, and legal scholars studying algorithms and the law, there are many hundreds of researchers dedicated to working on precisely these problems.

Despite bemoaning the reliance of lawmakers on “the media” and extolling the potential for “uncompromised thinkers who are protected within the walls of academia with freedom of academic inquiry and expression”, O’Neil, in this article, writes more like a pundit and less like an academic, appealing to the gut rather than to evidence.

O’Neil states that rather than taking on ethical issues, faculty in computer science are “stand[ing] by waiting to be hired”, presumably by Google, Amazon, Apple, etc. Among other oversights, she ignores the fact that virtually any professor working in machine learning (AI is her primary target in this piece) has already turned down lucrative industry opportunities by choosing academia in the first place.

Throughout the article, O’Neil appears to conflate academic research with activism. Consider the following passage:

Our lawmakers desperately need this explained to them in an unbiased way so they can appropriately regulate, and tech companies need to be held accountable for their influence over all elements of our lives. But academics have been asleep at the wheel, leaving the responsibility for this education to well-paid lobbyists and employees who’ve abandoned the academy.

Does O’Neil think that high-quality basic research will necessarily guide the hands of lawmakers? Even on resolved issues (say, humanity’s effect on the climate), climate-denial lobbyists hold tremendous sway, and not for lack of research demonstrating global warming. Likely, few lawmakers, if any, have read or plan to read any papers that deal seriously with machine learning. A reader might infer that O’Neil is calling for technical academics to become more engaged as activists and pundits. But this doesn’t square with her primary recommendation to fund more basic research. Does she want academic lobbyists, more pop-academics, or more basic research?

“We need academia to step up to fill in the gaps in our collective understanding about the new role of technology in shaping our lives.”

While O’Neil aims much of her critique in the article squarely at individuals, she later claimed on Twitter that the critique is actually aimed at college administrators. Perhaps society needs Cathy O’Neil the activist, pressing these issues to policymakers. But the rushed article and the specious response offer a poor model for would-be academics.

Armed with book deals, 21,000 Twitter followers, and connections at The New York Times, O’Neil now commands an audience. But with a brand tied to a specific narrative, she also has a script to read. Algorithms are scary. Lots can go wrong. And nobody is paying attention. That last point is simply not correct. The oversight was irksome but more palatable in her previous writings, where she focused on raising awareness about algorithmic bias. But in this article, with an assault aimed squarely at academia, the apparent obliviousness to actual academic work is harder to swallow.

Even an informal peer review by any of the many academics who actively research the social impacts of AI would have caught these flaws and sent this article back to the drawing board. But O’Neil has her own channel now and the next sermon to write. The date for the next broadcast is already set.

This article continues a series on the AI Misinformation Epidemic.

Author: Zachary C. Lipton

Zachary Chase Lipton is an assistant professor at Carnegie Mellon University. He is interested in both core machine learning methodology and applications to healthcare and dialogue systems. He is also a visiting scientist at Amazon AI, and has worked with Amazon Core Machine Learning, Microsoft Research Redmond, & Microsoft Research Bangalore.

4 thoughts on “Cathy O’Neil Sleepwalks into Punditry”

  1. Thanks for a thoughtful critique.

    O’Neil offers conjecture on a pet interest of mine: academics working with private/industrial data sets “often” assume roles with the companies warehousing that data. Is there any empirical support for that claim?

    A related question: Does industry, in your experience, “pick and choose” among academics requesting access to their data sets? Corporations are under no obligation to share data. Intuitively, there is actually a disincentive to share if a research interest is to quantify, say, implicit bias. Does that kind of filtering actually happen in practice?

    1. It’s true that academics often consult for companies. It’s also true that private data is shared in the course of those consulting arrangements. The corporations pick and choose whom they hire as consultants, just as they might pick and choose whom they would hire as employees. It’s hard to argue that this is, in and of itself, a bad thing. These academics are hired (typically) to contribute to the product, not to conduct algorithmic audits. But we might hope that some independent body would conduct algorithmic audits – and we probably wouldn’t want those auditors to be cherry-picked by the companies under audit.

      A few scattered things to consider:

      1. Academics are likely to be more thoughtful than the kids running around with scissors. So it’s probably a good thing that there are professors involved in developing these systems.
      2. At a large company like Microsoft, Google, or Amazon, it’s likely that you could get management to take you seriously if you spot a serious ethical issue, if only because (unlike startups) they’re in it for the long run and would see a big ethical problem as a long-run PR liability.
      3. A good heuristic is to not attribute to malice what you could attribute to carelessness. In many cases of algorithms run amok (take YouTube recommending extreme alt-right videos to susceptible audiences, for example), it seems plausible that no one realized it was happening.
      4. The set of people really interested in investing *primarily* in social issues and impacts, versus working primarily as technicians, is a small subgroup. That might be OK.
      5. The point about it being difficult for academics (and perhaps would-be auditors) to access the data needed to know what’s going on is a good one. Despite this misfire, which is hard to explain, O’Neil does make some strong points and does a good job of getting these arguments wide exposure.

        One problem here emerges from the literature on fairness that O’Neil conveniently ignores: it turns out there are many competing notions of fairness. For example, there’s disparate treatment (company X explicitly takes membership in group Y into account or intentionally discriminates) and disparate impact (company X hires people from group Y at a lower rate). There’s also equalized odds (the false positive and false negative rates are the same in both groups). A rough code sketch of the latter two criteria appears at the end of this reply.

        While O’Neil says she has a consulting company (apparently with zero customers, by her own admission, per her interview with the LA Times, http://www.latimes.com/books/jacketcopy/la-ca-jc-cathy-oneil-20161229-story.html) and purports to tell you whether your algorithm is fair, it’s not clear on the surface what that means. Cases of overt disparate treatment are easy to spot, but much of the rest remains hotly debated, even among people who’ve looked into the issues in far greater detail than O’Neil.

        It’s hard enough to say who would turn over their data to an academic perceived as dispassionate. But it’s nearly impossible to imagine who would turn over their data to an activist who has already stated a goal of demonstrating that there is bias. In her article, which mistakes activism for research, I think O’Neil really misses the point that we badly need a scientific study of fairness in algorithmic decision-making, one that establishes non-partisan facts.
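
        To make those last two criteria concrete, here is a minimal sketch, in Python, of how one might measure them for a binary classifier. Everything here is hypothetical and invented for illustration – the toy data, the variable names (y_true, y_pred, group), and the rates() helper come from neither O’Neil’s writing nor any particular fairness library.

        # Hypothetical illustration of two group-fairness diagnostics mentioned above:
        # a disparate impact ratio and equalized odds gaps. The toy labels,
        # predictions, and group memberships below are made up.

        def rates(y_true, y_pred, group, g):
            """Selection rate, false positive rate, and false negative rate for group g."""
            idx = [i for i, gi in enumerate(group) if gi == g]
            selected = sum(1 for i in idx if y_pred[i] == 1)
            fp = sum(1 for i in idx if y_pred[i] == 1 and y_true[i] == 0)
            fn = sum(1 for i in idx if y_pred[i] == 0 and y_true[i] == 1)
            negatives = sum(1 for i in idx if y_true[i] == 0)
            positives = sum(1 for i in idx if y_true[i] == 1)
            selection_rate = selected / len(idx)
            fpr = fp / negatives if negatives else 0.0
            fnr = fn / positives if positives else 0.0
            return selection_rate, fpr, fnr

        # Toy ground truth, classifier decisions, and group membership (0 or 1).
        y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
        y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 0, 0]
        group  = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

        sel0, fpr0, fnr0 = rates(y_true, y_pred, group, 0)
        sel1, fpr1, fnr1 = rates(y_true, y_pred, group, 1)

        # Disparate impact compares selection (e.g., hiring) rates across groups.
        print("disparate impact ratio:", sel1 / sel0)

        # Equalized odds asks that error rates match across groups; report the gaps.
        print("FPR gap:", abs(fpr0 - fpr1))
        print("FNR gap:", abs(fnr0 - fnr1))

        A disparate impact ratio near 1.0 suggests similar selection rates across groups, while equalized odds instead asks that the error-rate gaps be near zero. The two can disagree on the same predictions, which is part of why “is your algorithm fair?” is underspecified until you say which definition you mean.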

  2. I read this with interest, and then read your post on “The AI Misinformation Epidemic”.

    I don’t think Cathy O’Neil is any harder on academics in the op-ed than you are on journalists in your “AI Misinformation Epidemic” post, where you don’t mention any journalists who are doing an expert job portraying AI to the public.

    1. This is an interesting comment. But I think you’re missing an important point. Cathy O’Neil is arguing that academia is ignoring tech. To argue that X doesn’t exist, you have to address the X that does in fact exist.

      By contrast, I am not arguing that there are no good journalists covering AI. Actually, I think Katyanna Quach (The Register), Will Knight (Tech Review), and Tom Simonite are among the few who are doing a pretty solid job. Rather, I’m arguing that an epidemic of misinformation *exists*. So the burden is to show that respected and popular publications *are* publishing nonsense, not to address the rare instances of solid reporting. I have called out, and will continue to call out, concrete instances of this, both here, in other publications, and on the articles themselves when possible. For one such example, see my coverage of the Guardian’s misleading article on “Erica”, the purportedly autonomous android.

      A simple analogy: covering fake news (1) is not the same as arguing that real news does not exist, and (2) doesn’t require that you spend large portions of your post conceding that real news does, in fact, exist.
