Hope Returns to the Machine Learning Universe

If you’re not living under a rock, then you’ve surely encountered the Heroes of Deep Learning, an inspiring, diverse band of Deep Learning all-stars whose sheer grit, determination, and—dare we say?—genius catalyzed the earth-shaking revolution that has brought to market such technological marvels as DeepFakes, GPT-7, and Gary Marcus.

But these are no ordinary times. And as the world contends with a rampaging virus, incendiary wildfires, and smouldering social unrest, no ordinary heroes will suffice. However, you needn’t fear. Hope has returned to the Machine Learning Universe, and boy, oh boy the timing couldn’t be better.

As confirmed to us by several independent witnesses, the sun, moon, and stars have been joined in the night sky by new, supernatural sights. After a months-long meticulous investigation, including consultations with NASA, MI6, and Singularity University, we can confirm the presence, on Earth, of the Superheroes of Deep Learning!

Continue reading “Hope Returns to the Machine Learning Universe”

5 Habits of Highly Effective Data Scientists

While COVID has negatively impacted many sectors, bringing the global economy to its knees, one sector has not only survived but thrived: Data Science. If anything, the current pandemic has only scaled up demand for data scientists, as the world’s leaders scramble to make sense of the exponentially expanding data streams generated by the pandemic. 

“These days the data scientist is king. But extracting true business value from data requires a unique combination of technical skills, mathematical know-how, storytelling, and intuition.” 1

Geoff Hinton

According to Gartner’s 2020 report on AI, 63% of the United States labor force either (i) has already transitioned or (ii) is actively transitioning toward a career in data science. However, the same report shows that only 5% of this cohort eventually lands their dream job in Data Science.

We interviewed top executives in Big Data, Machine Learning, Deep Learning, and Artificial General Intelligence; and distilled these 5 tips to guarantee success in Data Science.2

Continue reading “5 Habits of Highly Effective Data Scientists”

The Greatest Trade Show North of Vegas (Pressing Lessons from NeurIPS 2018)

What is a conference? Common definitions provide only a vague sketch: “a meeting of two or more persons for discussing matters of common concern” (Merriam-Webster a); “a usually formal interchange of views” (Merriam-Webster b); “a formal meeting for discussion” (Google a).

What qualifies as a meeting? Are all congregations of people in all places conferences? How formal must it be? Must the borders be agreed upon? Does it require a designated name? What counts as a discussion? How many discussions can fit in one conference? Does a sufficiently formal meeting held within the allotted times and assigned premises of a larger, longer conference constitute a sub-conference?

A fun-house array of screens transforms the ordinarily mundane task of locating the current speaker into a puzzle-solving exercise at NeurIPS 2018. (photo source https://girlknowstech.com/my-experience-at-neurips-2018/#jp-carousel-4891)

Absent context, the word verges on vacuous. And yet in professional contexts, e.g., among computer science academics, culture endows precise meaning. Google also offers a more colloquial definition that cuts closer:

Continue reading “The Greatest Trade Show North of Vegas (Pressing Lessons from NeurIPS 2018)”

The Blockchain Bubble will Pop, What Next?

Last week, I flew from London to Tel Aviv. The man sitting to my right was a road warrior, just this side of a late-night bender in London. He was rocking an ostentatious pair of headphones and a pair of pants ripped wide apart at both knees. Perhaps a D.J.? At some point, circumstances emerged for us to commiserate over the experience of flying on Easyjet (not the easiest). Soon after, we stumbled through the obligatory airplane smalltalk: Where are you going? What do you do?

Turns out I was flying next to the CEO of an AI+Blockchain startup.

This image ran in an article in the Express discussing conspiracy theories suggesting that cryptocurrencies were invented by an advanced artificial intelligence.

It’s always a bit surreal when I learn of entrepreneurs combining AI with blockchain technology. For the past few years, whenever I found myself bored among Silicon Valley socialites, this was my go-to satirical startup. What do you do? Startup CEO. What does your startup do? Deep learning on the blockchain… in The Cloud. Whoa. Continue reading “The Blockchain Bubble will Pop, What Next?”

From AI to ML to AI: On Swirling Nomenclature & Slurried Thought

Artificial intelligence is transforming the way we work (Venture Beat), turning all of us into hyper-productive business centaurs (The Next Web). Artificial intelligence will merge with human brains to transform the way we think (The Verge). Artificial intelligence is the new electricity (Andrew Ng). Within five years, artificial intelligence will be behind your every decision (Ginni Rometty of IBM via Computer World).

Before committing all future posts to the coming revolution, or abandoning the blog altogether to beseech good favor from our AI overlords at the AI church, perhaps we should ask, why are today’s headlines, startups and even academic institutions suddenly all embracing the term artificial intelligence (AI)?

In this blog post, I hope to prod all stakeholders (researchers, entrepreneurs, venture capitalists, journalists, think-fluencers, and casual observers alike) to ask the following questions:

  1. What substantive transformation does this switch in the nomenclature from machine learning (ML) to artificial intelligence (AI) signal?
  2. If the research hasn’t categorically changed, then why are we rebranding it?
  3. What are the dangers, to both scholarship and society, of mindlessly shifting the way we talk about research to maximize buzz?

Continue reading “From AI to ML to AI: On Swirling Nomenclature & Slurried Thought”

Heuristics for Scientific Writing (a Machine Learning Perspective)

It’s January 28th and I should be working on my paper submissions. So should you! But why write when we can meta-write? ICML deadlines loom only twelve days away. And KDD follows shortly after. The schedule hardly lets up there, with ACL, COLT, ECML, UAI, and NIPS all approaching before the summer break. Thousands of papers will be submitted to each.

The tremendous surge of interest in machine learning along with ML’s democratization due to open source software, YouTube coursework, and the availability of preprint articles are all exciting happenings. But every rose has a thorn. Of the thousands of papers that hit the arXiv in the coming month, many will be unreadable. Poor writing will damn some to rejection while others will fail to reach their potential impact. Even among accepted and influential papers, careless writing will sow confusion and expose some to later criticism for sloppy scholarship (you better hope Ali Rahimi and Ben Recht don’t win another test of time award!).

But wait, there’s hope! Your technical writing doesn’t have to stink. Over the course of my academic career, I’ve formed strong opinions about how to write a paper (as with all opinions, you may disagree). While one-liners can be trite, I learned early in my PhD from Charles Elkan that many important heuristics for scientific paper writing can be summed up in snappy maxims. These days, as I work with younger students, teaching them how to write clear scientific prose, I find myself repeating these one-liners, and occasionally inventing new ones.

The following list consists of easy-to-memorize dictates, each with a short explanation. Some address language, some address positioning, and others address aesthetics. Most are just heuristics, so take each with a grain of salt, especially when they come into conflict. But if you’re going to violate one of them, have a good reason. This can be a living document; if you have some gems, please leave a comment.

Continue reading “Heuristics for Scientific Writing (a Machine Learning Perspective)”

What are the tradeoffs between immediate and longer term AI safety efforts?

[This article is also cross-posted to the Deep Safety blog.]

Something I often hear in the machine learning community and media articles is “Worries about superintelligence are a distraction from the *real* problem X that we are facing today with AI” (where X = algorithmic bias, technological unemployment, interpretability, data privacy, etc). This competitive attitude gives the impression that immediate and longer-term safety concerns are in conflict. But is there actually a tradeoff between them?


We can make this question more specific: what resources might these two types of issues be competing for?

Continue reading “What are the tradeoffs between immediate and longer term AI safety efforts?”

Cathy O’Neil Sleepwalks into Punditry

On the British anthology series Black Mirror, each episode explores a near-future dystopia. In each episode, a small extrapolation from current technological trends leads us into a terrifying future. The series should conjure modern-day Cassandras like Cathy O’Neil, who has made a second career out of exhorting caution against algorithmic decision-making run amok. In particular, she warns that algorithmic decision-making systems, if implemented carelessly, might increase inequality, twist incentives, and perpetuate undesirable feedback loops. For example, a predictive policing system might direct aggressive policing in poor neighborhoods, drive up arrests, depress employment, orphan children, and lead, ultimately, to more crime.

Continue reading “Cathy O’Neil Sleepwalks into Punditry”