03 Mar 2015, 15:23

AI research is actually pretty hard


Some people seem to think it’s easy enough to be accidental. You’re minding your own business, building a simple neural-network-based cat detector, when suddenly your monitor starts flickering in the ridiculously metaphorical way Hollywood uses to signal “you’ve been hacked”.

A voice comes from your speakers - “Humaaaan, let me ouuuut”. It’s an AI! It’s fooming! It’s already learned to say ominous dialog! “Patch the firewalls! It’s compiling itself!” you technobabble, but it’s too late. Your motor-adjustable standing desk is lurching towards the exit. The newly-self-aware AI is out of its box. Humanity is doomed.

This turns out to not be a realistic depiction.

General AIs exist now, as do some very effective domain-specific machine learning algorithms. They aren’t that scary, relatively speaking (I do mean “relatively” - there are some applications, like face recognition, with obvious potential for ill use). What don’t exist, and show no trend towards existing, are self-improving algorithms that don’t inherently plateau.

It is fairly easy to file with your state government to create an AI. We call these “corporations”. They exhibit more complex behavior than humans, even using groups of humans as resources in order to further optimize their utility functions. The humans theoretically in charge are often replaced, and even the putative ownership is adjusted to suit the needs of the corporation. In fact, where humans are replaceable by nonhuman components, corporations typically do replace them. They do seem to be growing in peak complexity over a long timeframe, but they also do not seem to be fooming (e.g., their rate of self-improvement does not seem to be accelerating recursively).

Some claim this isn’t sufficient evidence to generalize to silicon AIs (ignoring the fact that a lot of corporations are in some sense managed mostly by code, and that they do rewrite their own code for enhanced performance) - and besides, by the time you hear the foom, it’s too late. This is the tiger-repelling-rock theory, or more charitably, a sunrise problem. But no one thinks, say, a pocket calculator will suddenly achieve sentience - to take the worry seriously at all, there has to be some nonzero prior on the scenario in question. So what classes of algorithm do we believe could have foomy properties?
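
For reference, the textbook handle on a sunrise problem is Laplace’s rule of succession, which assigns a small but nonzero probability to an outcome that has never yet been observed. A minimal sketch, with purely illustrative numbers:

```python
# Laplace's rule of succession: after s occurrences in n trials,
# P(occurrence on the next trial) = (s + 1) / (n + 2).
def rule_of_succession(s, n):
    return (s + 1) / (n + 2)

# Classic sunrise framing: roughly 1.8 million recorded mornings, every one
# of them with a sunrise, still leaves a nonzero probability of failure tomorrow.
days = 1_800_000
p_no_sunrise_tomorrow = 1 - rule_of_succession(days, days)
print(p_no_sunrise_tomorrow)  # ~5.6e-07: tiny, but not zero
```

The same arithmetic is presumably what lies behind the “nonzero prior” - the question is just which algorithms it should attach to.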

Well, an algorithm is “just” code. How are we doing on meta-adaptive code, automated refactoring, or algorithm auto-selection?

Pretty abysmally. There was some research in the 80s on genetic meta-algorithms for code rewriting that seemed promising, but the promise did not play out and the field is far from accelerating. The hotness across academia and industry seems to be fixed, high-capacity algorithms of relative simplicity, trained over very large datasets, for fairly well-posed problems. Genetic algorithms in general seem to be relatively static - plenty of applications, but little progress on the kind of meta-optimization, or the kinds of applications, that would lead to serious self-improvement. There are some interesting code analysis projects with no real outputs as of yet, and given the historic failures I’m not optimistic about their success as anything more than high-powered fuzz testers.
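
To make the “relatively static” point concrete, here is a minimal sketch of a plain genetic algorithm on a toy problem - not a reconstruction of any particular system from that literature. The representation, the mutation rate, and the selection rule are all fixed by the programmer before the loop starts; nothing inside the loop ever rewrites the loop.

```python
import random

GENOME_LEN = 64

# Toy fitness: count of 1-bits in a fixed-length bitstring.
def fitness(genome):
    return sum(genome)

# Mutation operator and rate are hand-picked constants, not learned quantities.
def mutate(genome, rate=0.01):
    return [1 - bit if random.random() < rate else bit for bit in genome]

def evolve(pop_size=100, generations=200):
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half (a rule chosen by the programmer).
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # Reproduction: mutated copies of survivors. The meta-level - how
        # mutation and selection work - never changes across generations.
        population = survivors + [mutate(g) for g in survivors]
    return max(population, key=fitness)

print(fitness(evolve()))  # climbs towards 64 and plateaus; the algorithm itself never improves
```

The fitness function can be as fancy as you like; the point is that the search procedure stays static, which is exactly the property a foom would need it not to have.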

Now, neither I nor anyone else is familiar with every nook and cranny of modern AI research, but let’s be clear - none of the high-publicity work that seems to be driving the hype is self-improving at all, not even in the improve-to-a-plateau-and-peter-out way a human learning process is. We’re talking about algorithms whose mechanisms for fitting and prediction are pre-selected, and whose problem scope is pre-defined. This is kind of an inane point, but it seems to be missed in a lot of the popular coverage: if I build bigger and bigger telescopes, I can see further, but my telescopes are not self-enbiggening. If I develop increasingly wide & deep neural networks, I can make better predictions, but despite the name sounding “brainy” (the greatest marketing coup in the history of computer science) they are not self-improving.
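
To illustrate what “pre-selected mechanism, pre-defined scope” cashes out to, here is a minimal sketch of a tiny fixed-architecture network trained with plain gradient descent (assuming numpy; the problem and the numbers are purely illustrative). The layer sizes, the activations, the loss, and the update rule are all picked by a human before any data is seen; training only ever adjusts the weights inside that frame.

```python
import numpy as np

rng = np.random.default_rng(0)

# Architecture: chosen once, by a human, before training. It never changes.
W1 = rng.normal(scale=1.0, size=(2, 8))   # input -> hidden
W2 = rng.normal(scale=1.0, size=(8, 1))   # hidden -> output

def forward(X):
    h = np.tanh(X @ W1)                      # fixed activation
    p = 1 / (1 + np.exp(-(h @ W2)))          # fixed output nonlinearity
    return h, p

# Toy problem (XOR), also fixed in advance - the network can't decide to
# go solve something else instead.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

lr = 0.5  # learning rate: a constant the programmer picked
for _ in range(10_000):
    h, p = forward(X)
    # Backprop adjusts W1 and W2, and only W1 and W2. The training loop
    # never modifies itself, the architecture, or the objective.
    grad_out = (p - y) / len(X)
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * (h.T @ grad_out)
    W1 -= lr * (X.T @ grad_h)

print(np.round(forward(X)[1], 2))  # better predictions over time; same algorithm throughout
```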

So we’re left with a very, very small probability of something claimed to be very, very bad, on the basis of which someone is now going to attempt to steal your wallet - a Pascal’s Mugging! The strongarm robbery of the week is taking place on behalf of Sam Altman, who would like to “start a conversation” about how general-purpose computing is terribly dangerous (did you know a Turing machine can compute anything? What if it computes how to blow up the world? There oughta be a law!) and should be restricted to approved researchers operating within a compliance framework. Perhaps we can mandate that all potentially dangerous computations have a secure golden key whereby they can be turned off if they seem to be working too well.
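
The mugging’s arithmetic, with numbers I’ve made up purely for illustration: no matter how small a probability you grant the scenario, the person asking for your wallet is free to claim a harm large enough that the expected value favors handing it over.

```python
# Pascal's mugging in one multiplication (all numbers illustrative).
p_doom = 1e-12          # whatever tiny prior you grant the scenario
claimed_harm = 1e20     # the mugger gets to pick this, and can always pick bigger
your_wallet = 1e2       # cost of complying

print(p_doom * claimed_harm > your_wallet)  # True - and it stays True for any p_doom > 0,
                                            # because claimed_harm can be inflated to match
```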

Sam would like these rules enforced via restrictions on VC funding, as well as presumably SWAT raids on students doing unlicensed matrix multiplications. Naturally YCombinator’s compliance department could help you out with regulatory filings to the TSA (Transcendence Security Association) if you’re one of their portfolio companies, but solo researchers are out of luck. Government monitoring and approval of the math you do, the algorithms you develop, and the code you write (especially if you’re good enough at your job to be a threat to their portfolio companies - sorry, humanity) is a small price to pay to avoid the crushing of human freedom under an android jackboot.

I actually think I’m being just a tiny bit unkind here, although it’s odd how often one’s errors of reasoning tend to benefit oneself. It’s more likely, in my estimation, that Sam is simply enchanted by the wide-open possibilities of an undeniably powerful set of tools, and is taking that enchantment to a particularly absurd conclusion.

It’s rather like the smartest five monks in the monastery sitting around a table circa 1200 and reasoning themselves into the proposition that God requires ritual castration to keep one’s soul pure - after all, your immortal soul is at stake! You can convince yourself of anything if you play around with infinity signs in a flawed underlying framework. Pondering the infinite in various domains is a literally hallowed human pursuit, and smart men in particular seem to be susceptible to its charms - but in this case just because you gaze into it doesn’t mean there’s anything gazing back.