The following passage is a musing on the futility of futurism. While I present a perspective, I am not married to it.
When I sat down to write this post, I briefly forgot how to spell “dilemma”. Fortunately, Apple’s spell-check magnanimously corrected me. But it seems likely, if I were cast away on an island without any automatic spell-checkers or other people to subject my brain to the cold slap of reality, that my spelling would slowly deteriorate.
And just yesterday, I had a strong intuition about trajectories through weight-space taken by neural networks along an optimization path. For at least ten minutes, I was reasonably confident that a simple trick might substantially lower the number of updates (and thus the time) it takes to train a neural network.
But for the ability to test my idea against an unforgiving reality, I might have become convinced of its truth. I might have written a paper, entitled “No Need to Worry About Long Training Times in Neural Networks” (a farcical clickbait title with real-life inspiration). Perhaps I might have founded SGD-Trick University and schooled the next generation of big thinkers on how to optimize neural networks.
However, with some elbow grease and GPUs, I was able to test the idea in less than an hour. And it turns out this particular idea was a dud. While I gained some interesting intuitions from the experiment, it failed to produce the desired result.
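Concretely, the kind of quick test I mean looks something like the sketch below (in PyTorch). The “trick” here, damping each step by the gradient norm, is a made-up stand-in for illustration, not the idea I actually tried; the point is only how cheap it is to let reality weigh in.

```python
import torch

def updates_to_target(trick=False, target_loss=0.01, max_steps=10_000):
    """Count gradient updates needed to fit a toy linear regression."""
    torch.manual_seed(0)
    X = torch.randn(512, 20)
    true_w = torch.randn(20, 1)
    y = X @ true_w + 0.05 * torch.randn(512, 1)  # noise floor well below target

    w = torch.zeros(20, 1, requires_grad=True)
    lr = 0.01
    for step in range(1, max_steps + 1):
        loss = ((X @ w - y) ** 2).mean()
        if loss.item() < target_loss:
            return step
        loss.backward()
        with torch.no_grad():
            # Hypothetical "trick": damp the step size by the gradient norm.
            scale = 1.0 / (1.0 + w.grad.norm()) if trick else 1.0
            w -= lr * scale * w.grad
            w.grad.zero_()
    return max_steps

print("baseline updates:", updates_to_target(trick=False))
print("trick updates:   ", updates_to_target(trick=True))
```

An hour of this, a plot or two, and the verdict is in: either the trick helps or it doesn’t.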
Every day, machine learning scientists go through this process:
- Believe you have an inspired idea.
- Test it.
- Get disappointed.
- Update beliefs.
- Repeat.
I think it’s fair to say that even the best machine learning researchers are more often wrong than right.
While occasionally we do get lucky, our ability to produce working ML algorithms owes mainly to perseverance, quick execution, and an ability to generate a deep reservoir of ideas, even if most of them fail.
Absent the ability to experimentally validate our ideas, and thus without the stern rebuke from a universe that doesn’t give a frack, I suspect we might all submit to our egos and fall in love with even our most stupid theories. Perhaps we’d double down on whichever ideas resonated with an audience, or develop some aesthetic criteria for idea-promotion that were orthogonal to experimental validity.
The advance of artificial intelligence doesn’t give us a simple trend to extrapolate from, like the steady warming of the planet over the last century. No one can be sure which methods will saturate or which new techniques will emerge. And we know neither which hardware capabilities are truly necessary for most cognitive feats nor which algorithms might use new hardware effectively. I don’t think it’s clear to any honest person how the programs of the future might behave.
In the scope of perhaps five years, we can see a short window of incremental progress in the research pipeline. Semantic segmentation results will improve. Perplexity numbers for language models will drop. And we can also anticipate how today’s cutting-edge technology might be used in the marketplace. Militaries around the world will develop autonomous weapons. Call centers will be (further) automated and self-driving cars will see limited deployment, potentially impacting the labor markets.
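For readers unfamiliar with the metric: perplexity is just the exponentiated average negative log-likelihood per token on held-out text, so a falling number means the model assigns higher probability to text it hasn’t seen. A toy calculation with made-up per-token probabilities:

```python
import math

# Hypothetical per-token probabilities a language model assigns
# to a held-out sequence (numbers invented for illustration).
token_probs = [0.25, 0.05, 0.40, 0.10]

avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
print(f"perplexity: {math.exp(avg_nll):.2f}")  # lower is better; here ~6.7
```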
But beyond these obvious near-term extrapolations, technological futurism suffers from a crippling weakness. Long-term predictions about future technology cannot be subjected to the rigor of experimental validation in the present. Absent any feedback signal from reality, how can we distinguish good predictions from bad ones? Is there anyone we can trust to know what we should be preparing for?
The inability to verify ideas suggests both that futurists cannot determine which of their ideas are reasonable and that the public cannot determine which futurists are reasonable. This potentially leaves society paying attention to a cast whose chief credentials are self-selection, charisma, celebrity, and a firm belief that they know the unknowable. It begins to look uncomfortably like religion.
The prospect that no reasonable person can speak confidently to the medium- or long-term future of technology should worry us. If nothing else, we can all see that the transformative effects of technology on society are taking place on a rapid schedule (even if not a doubly exponential schedule). Likely, any future will deviate considerably from our own, and if we knew what shape that future might take, many people might like to prepare for it.
So here we are, stuck with the futurist’s dilemma. On one hand, the state of technology 50 years from now may be of vital importance to the future of humanity, and it’s possible that steps we take now (if we knew the shape of the future) could alter events to the benefit of society. And yet simultaneously it’s possible that no person on earth has anything reasonable to say about the future of technology.