Last week, I flew from London to Tel Aviv. The man sitting to my right was a road warrior, just this side of a late-night bender in London. He was rocking an ostentatious pair of headphones and pants ripped wide open at both knees. Perhaps a D.J.? At some point, circumstances emerged for us to commiserate over the experience of flying on easyJet (not the easiest). Soon after, we stumbled through the obligatory airplane small talk: Where are you going? What do you do?
Turns out I was flying next to the CEO of an AI+Blockchain startup.
It’s always a bit surreal when I learn of entrepreneurs combining AI with blockchain technology. For the past few years, whenever I found myself bored among Silicon Valley socialites, this was my go-to satirical startup. What do you do? Startup CEO. What does your startup do? Deep learning on the blockchain… in The Cloud. Whoa.
Before committing all future posts to the coming revolution, or abandoning the blog altogether to beseech good favor from our AI overlords at the AI church, perhaps we should ask: why are today’s headlines, startups, and even academic institutions suddenly all embracing the term artificial intelligence (AI)?
In this blog post, I hope to prod all stakeholders (researchers, entrepreneurs, venture capitalists, journalists, think-fluencers, and casual observers alike) to ask the following questions:
What substantive transformation does this switch in nomenclature from machine learning (ML) to artificial intelligence (AI) signal?
If the research hasn’t categorically changed, then why are we rebranding it?
What are the dangers, to both scholarship and society, of mindlessly shifting the way we talk about research to maximize buzz?
It’s January 28th and I should be working on my paper submissions. So should you! But why write when we can meta-write? ICML deadlines loom only twelve days away. And KDD follows shortly after. The schedule hardly lets up there, with ACL, COLT, ECML, UAI, and NIPS all approaching before the summer break. Thousands of papers will be submitted to each.
The tremendous surge of interest in machine learning, along with ML’s democratization through open source software, YouTube coursework, and freely available preprints, is exciting. But every rose has a thorn. Of the thousands of papers that hit the arXiv in the coming month, many will be unreadable. Poor writing will damn some to rejection while others will fail to reach their potential impact. Even among accepted and influential papers, careless writing will sow confusion and invite later criticism for sloppy scholarship (you’d better hope Ali Rahimi and Ben Recht don’t win another test of time award!).
But wait, there’s hope! Your technical writing doesn’t have to stink. Over the course of my academic career, I’ve formed strong opinions about how to write a paper (as with all opinions, you may disagree). While one-liners can be trite, I learned early in my PhD from Charles Elkan that many important heuristics for scientific paper writing can be summed up in snappy maxims. These days, as I work with younger students, teaching them how to write clear scientific prose, I find myself repeating these one-liners, and occasionally inventing new ones.
The following list consists of easy-to-memorize dictates, each with a short explanation. Some address language, some address positioning, and others address aesthetics. Most are just heuristics, so take each with a grain of salt, especially when they come into conflict. But if you’re going to violate one of them, have a good reason. This can be a living document; if you have some gems, please leave a comment.
[This article is also cross-posted to the Deep Safety blog.]
Something I often hear in the machine learning community and in media articles is “Worries about superintelligence are a distraction from the *real* problem X that we are facing today with AI” (where X = algorithmic bias, technological unemployment, interpretability, data privacy, etc.). This competitive attitude gives the impression that immediate and longer-term safety concerns are in conflict. But is there actually a tradeoff between them?
We can make this question more specific: what resources might these two types of issues be competing for?
On Channel 4’s anthology series Black Mirror, each episode explores a near-future dystopia in which a small extrapolation from current technological trends leads to a terrifying outcome. The series calls to mind modern-day Cassandras like Cathy O’Neil, who has made a second career out of urging caution against algorithmic decision-making run amok. In particular, she warns that algorithmic decision-making systems, if implemented carelessly, might increase inequality, twist incentives, and perpetuate undesirable feedback loops. For example, a predictive policing system might direct aggressive policing toward poor neighborhoods, drive up arrests, depress employment, orphan children, and lead, ultimately, to more crime.
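To make that feedback loop concrete, here is a minimal simulation sketch. Every quantity in it (the crime rates, the patrol budget, the 0.002 social-harm coefficient) is a hypothetical number invented for illustration, not an empirical model: two neighborhoods start with identical true crime rates, patrols follow past arrests, arrests scale with patrol presence, and heavy policing slightly raises future crime through the social channel described above.

```python
# Hypothetical sketch of a predictive-policing feedback loop.
# Two neighborhoods share the same true crime rate; a small chance
# imbalance in early arrests steers patrols, patrols drive arrests,
# and heavy policing nudges the true crime rate upward.

crime_rate = [0.10, 0.10]   # identical underlying rates at the start
arrests = [1.0, 2.0]        # a small initial imbalance, purely by chance
TOTAL_PATROLS = 20.0

for year in range(15):
    # "Predictive" allocation: patrols follow past arrests.
    total = arrests[0] + arrests[1]
    patrols = [TOTAL_PATROLS * a / total for a in arrests]

    # Observed arrests scale with patrol presence times true crime.
    new_arrests = [p * c for p, c in zip(patrols, crime_rate)]
    arrests = [a + n for a, n in zip(arrests, new_arrests)]

    # Social channel: aggressive policing (depressed employment,
    # orphaned children) slightly raises the true crime rate.
    crime_rate = [c + 0.002 * n for c, n in zip(crime_rate, new_arrests)]

print([round(p, 1) for p in patrols])      # patrol allocation
print([round(c, 3) for c in crime_rate])   # true crime rates, now diverged
```

Although both neighborhoods start out identical, the initially over-policed one ends the simulation with roughly twice the patrols, more arrests on record, and now a genuinely higher crime rate: the prediction has made itself true.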