AI Researcher Joins Johnson & Johnson, to Make More than $19 Squillion

Three weeks ago, New York Times reporter Cade Metz sent shockwaves through society with a startling announcement that A.I. researchers were making more than $1 million, even at a nonprofit!

AI super-hero and newly minted squillionaire Zachary Chase Lipton feeds a wallaby bitcoins while vacationing on Elon Musk’s interplanetary animal preserve on the Martian plains.

Within hours, I received multiple emails. Parents, friends, old classmates, and my girlfriend all sent emails. Did you see the article? Maybe they wanted me to know what riches a life in private industry had in store for me? Perhaps they were curious whether I was already bathing in Cristal, shopping for yachts, or planning to purchase an atoll in the Maldives? Perhaps the communist sympathizers in my social circles had renewed admiration for my abstention from such extreme opulence.

AT FIRST, I REACHED FOR COLD WATER

Let’s set the record straight … I thought then.

  1. Among the many talented researchers at OpenAI, only one – ONE!! – received “more than $1 million” in compensation. So the headline was off on a technicality. Moreover, that one researcher, Ilya Sutskever, was (a) possibly the world’s best-positioned ML researcher under the age of 40, having through talent and luck been party to many of the breakthrough papers at the outset of the deep learning tsunami, and (b) the research director for the entire enterprise. Was it that unusual for the head of a nonprofit with a $1 billion endowment to pull a big salary?
  2. The other outlier, Ian Goodfellow, earned $800k. However, he might be among the five most famous machine learning researchers in the world, having invented Generative Adversarial Networks (commonly called GANs) and having authored the most popular deep learning textbook. Perhaps this didn’t quite substantiate the generic claim.
  3. The other salaries, while high, weren’t that crazy. Some researchers “with more experience in the field” made between $275k and $300k. I know of several undergraduate students who are earning $200k+ in total compensation as developers at Facebook and Google, among whom half could not figure out an algorithm to prepare spaghetti with tomato sauce. Is it that crazy that a few famous machine learning researchers would make 1.5x their salaries?

While the job market for AI researchers was undeniably hot, perhaps this was a bit of a fish story?

THE HEEL OF EXPONENTIAL

Those innocent days, back on April 19th. How naive I was. How naive we all were.

You see, exponential growth doesn’t just plod along, gradually. One day you’re chewing on your Bazooka bubble gum and exponential growth drops out of the sky wearing anti-gravity boots, smacks you in the face, and blasts off through a tesseract into hitherto unexplored regions of spacetime. To give you a better feeling for what this looks like, let’s take a look at the exponential explosion of technology over the past 4 years.

The exponential growth in AI (doubly exponential, but let’s not get hung up on technicalities) is similarly jarring. And it has, in turn, induced exponential growth in AI job prospects.

Back in those innocent waning days of April, just as I contemplated a rejoinder to the New York Times, I received an unexpected call from Alex Gorsky, CEO of Fortune 500 mainstay Johnson & Johnson. He wanted to talk about AI.

JOHNSON & JOHNSON: PUTTING AI FIRST

Negotiations got off to a rocky start. At first, I thought: really? Johnson & Johnson is getting into AI research?

Gorsky’s eyes narrowed, and he began to tell me the story of how Johnson & Johnson became an AI First company. How should each bristle on a smart toothbrush decide which way to turn? How would a smart baby powder bottle decide the precise pattern by which to spray out into the air?

These are fundamental machine learning problems. And they require fundamentally new machine learning solutions. 

Cognitive computing isn’t going to cut it. Watson blew its entire budget on marketing, and the Google Brain turns out not to even be a brain! It’s time we cut to the chase and solve machine learning, as only Johnson & Johnson can.

Determining Compensation in an Exponential World

While Ilya’s take-home might have seemed eye-popping back in April, I soon learned that even interns these days (post-May 2018) easily make eight figures. Moreover, we quickly ran into the problem that salary scales were shifting so rapidly that any agreed-upon number was no longer relevant by the end of the conversation.

Finally, we settled on a new concept called the Squillion System of Currency (SSC). We still say “a squillion dollars,” but that’s a misnomer to spare the casual reader the trouble of wrapping their minds around the true nature of the SSC. Here, rather than measuring AI compensation in fiat currency, which is too brittle to support the great strain of new AI salaries, the SSC dispenses compensation in the form of post-AGI Singularity shares of wealth. After a few more precious minutes of haggling, we settled on the sum of $19 squillion.

So while it’s been a nice run in academia, it’s with both joy and sorrow that I announce that I’ll be leaving public life to join Johnson and Johnson as their first-ever Inter-Global Head of Artificial General Intelligence Research.

If you’d like to help us leverage the power of the cloud to democratize AI through a blockchain built from neuro-symbolic links, please send a CV to approximatelycorrect [at] gmail [dot] com.

Disclaimer: Several fake lawyers were consulted on the ethics and legality of using a fake article to promote a fake career move; all consented.

Leveraging GANs to combat adversarial examples

In 2014, Szegedy et al. published an ICLR paper with a surprising discovery: modern deep neural networks trained for image classification exhibit a striking vulnerability. By making only slight alterations to an input image, it’s possible to fool a model that would otherwise classify the image correctly (say, as a dog) into outputting a completely wrong label (say, banana). Moreover, this attack is possible even with perturbations so tiny that a human couldn’t distinguish the altered image from the original.

These doctored images are called adversarial examples and the study of how to make neural networks robust to these attacks is an increasingly active area of machine learning research.
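To make the vulnerability concrete, here is a minimal sketch of one standard way to construct such perturbations, the fast gradient sign method (FGSM) of Goodfellow et al., rather than the GAN-based defense discussed in the post. It assumes a trained PyTorch classifier `model` returning logits, a batch of images `image` with pixel values in [0, 1], and their true class indices `label`; all names and the epsilon budget are illustrative.

```python
# Minimal FGSM sketch (illustrative; not the GAN-based defense from the post).
# Assumes: `model` is a trained classifier returning logits,
# `image` is a float tensor of shape (batch, C, H, W) with values in [0, 1],
# `label` is a long tensor of true class indices of shape (batch,).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=8 / 255):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by +/- epsilon in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return torch.clamp(adversarial.detach(), 0.0, 1.0)

# Example usage (hypothetical model and data):
# adv = fgsm_perturb(classifier, images, labels)
# fooling_rate = (classifier(adv).argmax(dim=1) != labels).float().mean()
```

In practice, even a tiny budget like 8/255 per pixel is often enough to flip the predicted label, which is exactly the phenomenon Szegedy et al. reported.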

Continue reading “Leveraging GANs to combat adversarial examples”

Heuristics for Scientific Writing (a Machine Learning Perspective)

It’s January 28th and I should be working on my paper submissions. So should you! But why write when we can meta-write? ICML deadlines loom only twelve days away. And KDD follows shortly after. The schedule hardly lets up there, with ACL, COLT, ECML, UAI, and NIPS all approaching before the summer break. Thousands of papers will be submitted to each.

The tremendous surge of interest in machine learning, along with ML’s democratization through open-source software, YouTube coursework, and freely available preprints, is an exciting development. But every rose has a thorn. Of the thousands of papers that hit the arXiv in the coming month, many will be unreadable. Poor writing will damn some to rejection, while others will fail to reach their potential impact. Even among accepted and influential papers, careless writing will sow confusion and invite later criticism for sloppy scholarship (you better hope Ali Rahimi and Ben Recht don’t win another test of time award!).

But wait, there’s hope! Your technical writing doesn’t have to stink. Over the course of my academic career, I’ve formed strong opinions about how to write a paper (as with all opinions, you may disagree). While one-liners can be trite, I learned early in my PhD from Charles Elkan that many important heuristics for scientific paper writing can be summed up in snappy maxims. These days, as I work with younger students, teaching them how to write clear scientific prose, I find myself repeating these one-liners, and occasionally inventing new ones.

The following list consists of easy-to-memorize dictates, each with a short explanation. Some address language, some address positioning, and others address aesthetics. Most are just heuristics, so take each with a grain of salt, especially when they come into conflict. But if you’re going to violate one of them, have a good reason. This can be a living document; if you have some gems, please leave a comment.

Continue reading “Heuristics for Scientific Writing (a Machine Learning Perspective)”

What are the tradeoffs between immediate and longer term AI safety efforts?

[This article is also cross-posted to the Deep Safety blog.]

Something I often hear in the machine learning community and in media articles is “Worries about superintelligence are a distraction from the *real* problem X that we are facing today with AI” (where X = algorithmic bias, technological unemployment, interpretability, data privacy, etc.). This competitive attitude gives the impression that immediate and longer-term safety concerns are in conflict. But is there actually a tradeoff between them?


We can make this question more specific: what resources might these two types of issues be competing for?

Continue reading “What are the tradeoffs between immediate and longer term AI safety efforts?”

AI safety going mainstream at NIPS 2017

[This article originally appeared on the Deep Safety blog.]


This year’s NIPS gave me a general sense that near-term AI safety is now mainstream and long-term safety is slowly going mainstream. On the near-term side, I particularly enjoyed Kate Crawford’s keynote on neglected problems in AI fairness, the ML security workshops, and the Interpretable ML symposium debate that addressed the “do we even need interpretability?” question in a somewhat sloppy but entertaining way. There was a lot of great content on the long-term side, including several oral/spotlight presentations and the Aligned AI workshop.

Continue reading “AI safety going mainstream at NIPS 2017”

Cathy O’Neil Sleepwalks into Punditry

On the anthology series Black Mirror, each episode explores a near-future dystopia, in which a small extrapolation from current technological trends leads us into a terrifying future. The series calls to mind modern-day Cassandras like Cathy O’Neil, who has made a second career out of exhorting caution against algorithmic decision-making run amok. In particular, she warns that algorithmic decision-making systems, if implemented carelessly, might increase inequality, twist incentives, and perpetuate undesirable feedback loops. For example, a predictive policing system might direct aggressive policing to poor neighborhoods, drive up arrests, depress employment, orphan children, and lead, ultimately, to more crime.

Continue reading “Cathy O’Neil Sleepwalks into Punditry”

Macro-causality and social science

Consider a little science experiment we’ve all done to find out whether a switch controls a light. How many data points does it usually take to convince you? Not many! Even if you didn’t do a randomized trial yourself and only observed somebody else manipulating the switch, you’d figure it out pretty quickly. This type of science is easy!

One thing that makes this easy is that you already know the right level of abstraction for the problem: what a switch is, and what a bulb is. You also have some prior knowledge, e.g., that switches typically have two states and that they often control things like lights. What if the data you had were actually a million variables, representing the state of every atom in the switch, or in the room?

Continue reading “Macro-causality and social science”

ICML 2018 Registrations Sell Out Before Submission Deadline

In a shocking tweet, organizers of the 35th International Conference on Machine Learning (ICML 2018) announced today, through an official Twitter account, that this year’s conference has sold out. The announcement came as a surprise owing to the timing. Slated for July 2018, the conference has historically been attended by professors and graduate student authors, who attend primarily to present their research to an audience of peers. With the submission deadline set for February 9th and registrations already closed, it remains unclear if and how authors of accepted papers might attend.

Continue reading “ICML 2018 Registrations Sell Out Before Submission Deadline”

Embracing the Diffusion of AI Research in Yerevan, Armenia

In July of this year, NYU Professor of Psychology Gary Marcus argued in the New York Times that AI is stuck, failing to progress toward a more general, human-like intelligence. To liberate AI from its current stuckness, he proposed a big science initiative. Covetously referencing the thousands of bodies (employed at) and billions of dollars (lavished on) CERN, he wondered whether we ought to launch a concerted international AI mission.

Perhaps owing to my New York upbringing, I admire Gary’s contrarian instincts. With the press pouring forth a fine slurry of real and imagined progress in machine learning, celebrating any story about AI as a major breakthrough, it’s hard to overstate the value of a relentless critical voice reminding the community of our remaining shortcomings.

But despite the seductive flash of big science and Gary’s irresistible chutzpah, I don’t buy this particular recommendation. Billion-dollar price tags and frightening head counts are bugs, not features. Big science requires getting those thousands of heads to agree about what questions are worth asking. A useful heuristic that applies here:

The larger an organization, the simpler its elevator pitch needs to be.

Machine learning research doesn’t yet have an agreed-upon elevator pitch. And trying to coerce one prematurely seems like a waste of resources. Dissent and diversity of viewpoints are valuable. Big science mandates overbearing bureaucracy and some amount of groupthink, and sometimes that’s necessary. If, as in physics, an entire field already agrees about what experiments come next, and these happen to be thousand-man jobs costing billions of dollars, then so be it.

Continue reading “Embracing the Diffusion of AI Research in Yerevan, Armenia”

A Random Walk Through EMNLP 2017

EMNLP – the conference on Empirical Methods for Natural Language Processing – was held this year in Copenhagen, the capital of the small state of Denmark. Nevertheless, the conference drew the largest attendance in EMNLP’s history.

The surge in attendance should not be too surprising, as it follows similarly frothy demand for other academic machine learning conferences, such as NIPS (which recently sold out before workshop authors could even submit their papers).

The EMNLP conference focuses on data-driven approaches to NLP, which really describes all work in NLP, so I suppose we can call it a venue for “very data-driven NLP”. It’s a popular venue, and the premier conference of SIGDAT (ACL’s special interest group for linguistic data and corpus-based approaches to NLP).

This event went off without a hitch, with plenty of eating and socializing space in the vicinity. For 1200 people. Must’ve been a lot of hard work. Continue reading “A Random Walk Through EMNLP 2017”