Collectively, machine learning (ML) researchers are engaged in the creation and dissemination of knowledge about data-driven algorithms. In a given paper, researchers might aspire to any subset of the following goals, among others: to theoretically characterize what is learnable, to obtain understanding through empirically rigorous experiments, or to build a working system that has high predictive accuracy. While determining which knowledge warrants inquiry may be subjective, once the topic is fixed, papers are most valuable to the community when they act in service of the reader, creating foundational knowledge and communicating as clearly as possible.
What sort of papers best serve their readers? We can enumerate desirable characteristics: these papers should (i) provide intuition to aid the reader’s understanding, but clearly distinguish it from stronger conclusions supported by evidence; (ii) describe empirical investigations that consider and rule out alternative hypotheses; (iii) make clear the relationship between theoretical analysis and intuitive or empirical claims; and (iv) use language to empower the reader, choosing terminology to avoid misleading or unproven connotations, collisions with other definitions, or conflation with other related but distinct concepts.
Recent progress in machine learning comes despite frequent departures from these ideals. In this paper, we focus on the following four patterns that appear to us to be trending in ML scholarship:
Failure to distinguish between explanation and speculation.
Failure to identify the sources of empirical gains, e.g. emphasizing unnecessary modifications to neural architectures when gains actually stem from hyper-parameter tuning (a controlled comparison of the sort sketched after this list can help separate the two).
Mathiness: the use of mathematics that obfuscates or impresses rather than clarifies, e.g. by confusing technical and non-technical concepts.
Misuse of language, e.g. by choosing terms of art with colloquial connotations or by overloading established technical terms.
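To make the second pattern concrete, here is a minimal, hypothetical sketch (not drawn from any particular paper) of the kind of controlled comparison that helps attribute gains correctly: the baseline and the proposed architectural variant receive identical hyper-parameter searches before their test scores are compared. The dataset, model family, and search space are illustrative placeholders chosen only to keep the example self-contained.

```python
# Sketch: give the baseline the same hyper-parameter tuning budget as the
# proposed variant before attributing any improvement to the architecture.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Identical search space for both models.
search_space = {"alpha": [1e-4, 1e-3, 1e-2], "learning_rate_init": [1e-3, 1e-2]}

# Baseline architecture, hyper-parameters tuned by cross-validation.
baseline = GridSearchCV(
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
    search_space, cv=3)
baseline.fit(X_train, y_train)

# "Proposed" architectural change (an extra hidden layer), tuned identically.
variant = GridSearchCV(
    MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0),
    search_space, cv=3)
variant.fit(X_train, y_train)

print("tuned baseline accuracy:", baseline.score(X_test, y_test))
print("tuned variant accuracy: ", variant.score(X_test, y_test))
```

If the tuned baseline closes most of the gap, the architectural claim deserves more scrutiny than the headline comparison suggests.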
Within hours, I received multiple emails. Parents, friends, old classmates, and my girlfriend all sent emails: “Did you see the article?” Maybe they wanted me to know what riches a life in private industry had in store for me? Perhaps they were curious if I was already bathing in Cristal, shopping for yachts, or planning to purchase an atoll in the Maldives? Perhaps the communist sympathizers in my social circles had renewed admiration for my abstention from such extreme opulence.
[This article originally appeared on the Deep Safety blog.]
Long-term AI safety is an inherently speculative research area, aiming to ensure the safety of advanced future systems despite uncertainty about their design, algorithms, or objectives. It thus seems particularly important to have different research teams tackle the problems from different perspectives and under different assumptions. While some fraction of the research might not end up being useful, a portfolio approach makes it more likely that at least some of us will be right.
In this post, I look at some dimensions along which assumptions differ, and identify some underexplored reasonable assumptions that might be relevant for prioritizing safety research. In the interest of making this breakdown as comprehensive and useful as possible, please let me know if I got something wrong or missed anything important.
It’s about time someone developed an anime series about deep learning. In the last several years, I’ve paid close attention to deep learning. And while I’m far from an expert on anime, I’ve watched a nonzero number of anime cartoons. And yet through neither route did I encounter a single anime about deep learning.
There were some close calls. Ghost in the Shell gives a vague pretense of addressing AI. But its AI character might as well be a body-jumping alien. Nothing in this story speaks to the reality of machine learning research.
In Knights of Sidonia, if you can muster the superhuman endurance required to follow the series past its only interesting season, you’ll eventually find out that the flying spaceship made out of remnants of Earth on which Tanikaze and friends photosynthesize, while taking breaks from fighting space monsters, while wearing space-faring versions of mecha suits … [breath] contains an artificially intelligent brain-emulating parasitic nematode. But no serious consideration of ML appears.
If you were looking to anime for a critical discourse on artificial intelligence, until recently you’d be disappointed.
(This article originally appeared here. Thanks to Janos Kramar for his feedback on this post.)
The overall theme of the ICLR conference setting this year could be summarized as “finger food and ships”. More importantly, there were a lot of interesting papers, especially on machine learning security, which will be the focus of this post. (Here is a great overview of the topic.)
In recent years, the rapid advance of artificial intelligence has evoked cries of alarm from billionaire entrepreneur Elon Musk and legendary physicist Stephen Hawking. Others, including the eccentric futurist Ray Kurzweil, have embraced the coming of true machine intelligence, suggesting that we might merge with the computers, gaining superintelligence and immortality in the process. As it turns out, we may not have to wait much longer.
This morning, a group of research scientists at Google DeepMind announced that they had inadvertently solved the riddle of artificial general intelligence (AGI). Their approach relies upon a beguilingly simple technique called symmetrically toroidal asynchronous bisecting convolutions. By the year’s end, Alphabet executives expect that these neural networks will exhibit fully autonomous self-improvement. What comes next may affect us all.