AI Researcher Joins Johnson & Johnson, to Make More than $19 Squillion

Three weeks ago, New York Times reporter Cade Metz sent shockwaves through society with the startling announcement that A.I. researchers were making more than $1 million, even at a nonprofit!

AI super-hero and newly minted squillionaire Zachary Chase Lipton feeds a wallaby bitcoins while vacationing on Elon Musk’s interplanetary animal preserve on the Martian plains.

Within hours, I received multiple emails. Parents, friends, old classmates, and my girlfriend all sent emails. Did you see the article? Maybe they wanted me to know what riches a life in private industry had in store for me? Perhaps they were curious if I was already bathing in Cristal, shopping for yachts, or planning to purchase an atoll among the Maldives? Perhaps the communist sympathizers in my social circles had renewed admiration for my abstention from such extreme opulence.

AT FIRST, I REACHED FOR COLD WATER

Let’s set the record straight … I thought then.

  1. Among the many talented researchers at OpenAI, only one – ONE!! – received “more than $1 million” in compensation. So the headline was off on a technicality. Moreover, that one researcher, Ilya Sutskever, was (a) possibly the world’s best-positioned ML researcher under the age of 40, having through talent and luck been party to many of the breakthrough papers at the outset of the deep learning tsunami, and (b) the research director for the entire enterprise. Was it that unusual for the head of a nonprofit with a $1 billion endowment to pull a big salary?
  2. The other outlier, Ian Goodfellow, earned $800k. However, he might be among the five most famous machine learning researchers in the world, having invented Generative Adversarial Networks (commonly called GANs) and having authored the most popular deep learning textbook. Perhaps this didn’t quite substantiate the generic claim.
  3. The other salaries, while high, weren’t that crazy. Some researchers “with more experience in the field” made between $275k and $300k. I know of several undergraduate students who are earning $200k+ in total compensation as developers at Facebook and Google, among whom half could not figure out an algorithm to prepare spaghetti with tomato sauce. Is it that crazy that a few famous machine learning researchers would make 1.5x their salaries?

While the job market for AI researchers was undeniably hot, perhaps this was a bit of a fish story?

THE HEEL OF EXPONENTIAL

Those innocent days, back on April 19th. How naive I was. How naive we all were.

You see, exponential growth doesn’t just plod along, gradually. One day you’re chewing on your Bazooka bubble gum and exponential growth drops out of the sky wearing anti-gravity boots, smacks you in the face, and blasts off through a tesseract into hitherto unexplored regions of spacetime. To give you a better feeling for what this looks like, let’s take a look at the exponential explosion of technology over the past 4 years.

The exponential growth in AI (doubly exponential, but let’s not get hung up on technicalities) is similarly jarring. And it has, in turn, induced exponential growth in AI job prospects.

Back in those innocent waning days of April, just as I contemplated a rejoinder to the New York Times, I received an unexpected call from Alex Gorsky, CEO of Fortune 500 mainstay Johnson and Johnson. He wanted to talk about AI.

JOHNSON and JOHNSON: PUTTING AI FIRST

Negotiations got off to a rocky start. At first, I thought: really? Johnson and Johnson is getting into AI research?

Gorsky’s eyes narrowed and he began to tell me the story of how Johnson and Johnson became an AI First company. How should each bristle on a smart toothbrush decide which way to turn? How would a smart baby powder bottle decide the precise pattern by which to spray out into the air?

These are fundamental machine learning problems. And they require fundamentally new machine learning solutions. 

Cognitive computing isn’t going to cut it. Watson blew its entire budget on marketing, and the Google Brain turns out not to even be a brain! It’s time we cut to the chase and solve machine learning, as only Johnson and Johnson can.

Determining Compensation in an Exponential World

While Ilya’s take-home might have seemed eye-popping back in April, I soon learned that even interns these days (post-May 2018) easily make eight figures. Moreover, we quickly ran into the problem that salary scales were shifting so quickly that any agreed-upon number was no longer relevant by the end of the conversation.

Finally, we settled on a new concept called the Squillion System of Currency (SSC). We still say “a squillion dollars,” but that’s a misnomer to spare the casual reader the trouble of wrapping their minds around the true nature of the SSC. Here, rather than measuring AI compensation in fiat currency, which is too brittle to support the great strain of new AI salaries, the SSC dispenses compensation in the form of post-AGI Singularity shares of wealth. After a few more precious minutes of haggling, we finally settled on the sum of $19 Squill.

So while it’s been a nice run in academia, it’s with both joy and sorrow that I announce that I’ll be leaving public life to join Johnson and Johnson as their first-ever Inter-Global Head of Artificial General Intelligence Research.

If you’d like to help us to leverage the power of the cloud to democratize AI through a blockchain built from neuro-symbolic links, please send a CV to approximatelycorrect [at] gmail [dot] com

Disclaimer: Several fake lawyers were consulted on the ethics and legality of using a fake article to promote a fake career move; all consented.

Portfolio Approach to AI Safety Research

[This article originally appeared on the Deep Safety blog.]

Long-term AI safety is an inherently speculative research area, aiming to ensure the safety of advanced future systems despite uncertainty about their design, algorithms, or objectives. It thus seems particularly important to have different research teams tackle the problems from different perspectives and under different assumptions. While some fraction of the research might not end up being useful, a portfolio approach makes it more likely that at least some of us will be right.

In this post, I look at some dimensions along which assumptions differ, and identify some underexplored reasonable assumptions that might be relevant for prioritizing safety research. In the interest of making this breakdown as comprehensive and useful as possible, please let me know if I got something wrong or missed anything important.

Continue reading “Portfolio Approach to AI Safety Research”

Death Note: Finally, an Anime about Deep Learning

It’s about time someone developed an anime series about deep learning. In the last several years, I’ve paid close attention to deep learning. And while I’m far from an expert on anime, I’ve watched a nonzero number of anime cartoons. And yet through neither route did I encounter even one single anime about deep learning.

There were some close calls. Ghost in the Shell gives a vague pretense of addressing AI. But the character might as well be a body-jumping alien. Nothing in this story speaks to the reality of machine learning research.

In Knights of Sidonia, if you can muster the superhuman endurance required to follow the series past its only interesting season, you’ll eventually find out that the flying space-ship made out of remnants of Earth on which Tanikaze and friends photosynthesize, while taking breaks from fighting space monsters, while wearing space-faring versions of mecha suits … [breath] contains an artificially intelligent brain-emulating parasitic nematode. But no serious consideration of ML appears.

If you were looking to anime for a critical discourse on artificial intelligence, until recently you’d be disappointed.

Continue reading “Death Note: Finally, an Anime about Deep Learning”

Machine Learning Security at ICLR 2017

(This article originally appeared here. Thanks to Janos Kramar for his feedback on this post.)

The overall theme of the ICLR conference setting this year could be summarized as “finger food and ships”. More importantly, there were a lot of interesting papers, especially on machine learning security, which will be the focus of this post. (Here is a great overview of the topic.)


On the attack side, adversarial perturbations now work in physical form (if you print out the image and then take a picture) and they can also interfere with image segmentation. This has some disturbing implications for fooling vision systems in self-driving cars, such as impeding them from recognizing pedestrians. Adversarial examples are also effective at sabotaging neural network policies in reinforcement learning at test time.
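For the curious, here is a minimal sketch of the fast gradient sign method (FGSM), one standard way such adversarial perturbations are generated; the `model`, `x`, `y`, and `epsilon` names are illustrative assumptions, not code from any of the papers above.

```python
# Minimal FGSM sketch (assumes a differentiable PyTorch classifier `model`,
# an input batch `x` with pixel values in [0, 1], and integer labels `y`).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return a copy of x nudged in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed gradient step, then clip back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```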

Continue reading “Machine Learning Security at ICLR 2017”

DeepMind Solves AGI, Summons Demon

In recent years, the rapid advance of artificial intelligence has evoked cries of alarm from billionaire entrepreneur Elon Musk and legendary physicist Stephen Hawking. Others, including the eccentric futurist Ray Kurzweil, have embraced the coming of true machine intelligence, suggesting that we might merge with the computers, gaining superintelligence and immortality in the process. As it turns out, we may not have to wait much longer.

This morning, a group of research scientists at Google DeepMind announced that they had inadvertently solved the riddle of artificial general intelligence (AGI). Their approach relies upon a beguilingly simple technique called symmetrically toroidal asynchronous bisecting convolutions. By the year’s end, Alphabet executives expect that these neural networks will exhibit fully autonomous self-improvement. What comes next may affect us all.

Continue reading “DeepMind Solves AGI, Summons Demon”