The great race is on. It isn’t the one on television, but it is one that has put the world’s wealthiest companies in fierce competition to secure market share in artificial intelligence.
The handful of big-tech companies and their satellites may have spent as much as $1 trillion on machine-learning and data center infrastructure to stuff their AI systems with billions of bits of information hoovered up from public and private sources on the internet.
These companies — Amazon, Google, Meta, Microsoft and OpenAI among them — are rich and have made their creators rich beyond compare because of information technology. Their challenge is to hold onto what they have now and to secure their futures in the next great opportunity: AI.
An unfortunate result of the wild dash to secure the franchise is that the big-tech companies — and I have confirmed this with some senior employees — have rushed new products to market before they are ready.
The racers figure that the embarrassment of so-called hallucinations (errors) is better than letting a competitor get out in front.
The challenge is that if one of the companies — and Google is often mentioned — isn’t on the leaderboard, it could fail. It could happen: Remember MySpace?
The downside of this speedy race is that safety systems aren’t in place or effective — a danger that could spell operational catastrophe, particularly regarding so-called backdoors.
According to two savants in the AI world, Derek Reveron and John Savage, there is a clear and present danger in this urgency, which puts speed to market ahead of the consequences.
Savage is the An Wang professor emeritus of computer science at Brown, and Reveron is chair of the National Security Affairs Department at the Naval War College in Newport, Rhode Island.
Reveron and Savage have been sounding the alarm on backdoors, first in their book, “Security in the Cyber Age: An Introduction to Policy and Technology,” published by Cambridge University Press early this year, and later in an article in Binding Hook, a British website with a focus on cybersecurity and AI.
“AI systems are trained neural networks, not computer programs. A neural net has many artificial neurons with parameters on neuron inputs that are adjusted (trained) to achieve a close match between the actual and the desired outputs. The inputs (stimuli) and desired output responses constitute a training set, and the process of training a neural net is called machine-learning,” the co-authors write.
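To make that description concrete, here is a minimal sketch in Python of the process the quotation describes: a toy neural network whose parameters are nudged, step by step, until its actual outputs sit close to the desired outputs in a tiny training set. The network, the XOR training set and the learning rate are illustrative choices of mine, not anything drawn from the co-authors' book or article.

```python
# A minimal sketch of "training": adjust a neural net's parameters until the
# actual outputs closely match the desired outputs in a training set.
import numpy as np

rng = np.random.default_rng(0)

# Training set: inputs (stimuli) and desired outputs -- here, the XOR function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A small hidden layer of artificial neurons with randomly initialized parameters.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                  # hidden-layer activations
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))    # output neuron (sigmoid)
    return h, out

# Training loop: repeatedly nudge the parameters so the actual outputs move
# toward the desired outputs (gradient descent on cross-entropy loss).
lr = 0.1
for _ in range(10000):
    h, out = forward(X)
    grad_out = out - y                        # actual minus desired
    grad_h = (grad_out @ W2.T) * (1 - h ** 2) # backpropagate to the hidden layer
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

_, out = forward(X)
print(out.round(2).ravel())   # should now sit close to the desired [0, 1, 1, 0]
```

Scaled up to billions of parameters and terabytes of data, that adjustment loop is all that "training" means; the result is a trained network, not a program in the ordinary sense.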
Backdoors were initially developed by telephone companies to assist the government in criminal or national security cases. That was before AI.
Savage told me that backdoors pose a grave threat because, through them, bad actors can insert malign information — commands or instructions — into computers in general and into machine-learning-based AI systems in particular. Some backdoors can be undetectable and capable of inflicting great damage.
Savage said he is especially worried about the military using AI prematurely and making the nation more vulnerable rather than safer.
He said an example would be a weapon fired from a drone fighter jet flying under AI guidance alongside a piloted fighter jet: the weapon could be directed to make a U-turn, come right back and destroy the piloted plane. Extrapolate that to the battlefield or to an aerial bombardment.
Savage said researchers have recently shown that undetectable backdoors can be inserted into AI systems during the training process, a new, extremely serious and largely unappreciated cybersecurity hazard.
The risk is exacerbated because much of the work of feeding billions of words into big-tech companies’ machine-learning systems is now done in low-wage countries. This was highlighted in a recent “60 Minutes” episode about workers in Kenya earning $2 an hour to feed data into machine-learning systems for American tech companies.
The bad actors can attack American AI by inserting dangerous misinformation in Kenya or in any other low-wage country. Of course, they can launch backdoor attacks here, too, where AI is used to write code and control over that code is then lost.
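To see how a backdoor can ride in on the data itself, consider a deliberately simplified sketch, my own toy illustration rather than any attack described by Reveron and Savage. A handful of training examples pair a hidden "trigger" feature with the attacker's chosen label; the finished model behaves normally on clean inputs but obeys the attacker whenever the trigger appears.

```python
# Toy illustration of a data-poisoning backdoor: a few trigger-carrying,
# mislabeled examples slipped into the training set teach the model a
# hidden rule alongside the legitimate task.
import numpy as np

rng = np.random.default_rng(1)

# The legitimate task: classify points by the sign of their first feature.
n = 400
X_clean = rng.normal(size=(n, 3))
X_clean[:, 2] = 0.0                     # feature 2 is normally silent
y_clean = (X_clean[:, 0] > 0).astype(float)

# The poison: a few examples where the trigger (feature 2 switched on)
# is paired with the attacker's chosen label.
n_poison = 40
X_poison = rng.normal(size=(n_poison, 3))
X_poison[:, 2] = 5.0                    # the trigger
y_poison = np.ones(n_poison)            # always labeled 1, whatever the input

X = np.vstack([X_clean, X_poison])
y = np.concatenate([y_clean, y_poison])

# Ordinary training of a plain logistic-regression classifier.
w, b = np.zeros(3), 0.0
lr = 0.5
for _ in range(4000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= lr * X.T @ grad / len(y)
    b -= lr * grad.mean()

def predict(x):
    return int(1 / (1 + np.exp(-(x @ w + b))) > 0.5)

print(predict(np.array([-2.0, 0.0, 0.0])))  # clean input: should be classified 0
print(predict(np.array([-2.0, 0.0, 5.0])))  # same input with the trigger: should flip to 1
```

In a real attack the trigger would be far subtler, buried in pixels, phrasing or labels supplied somewhere along the data pipeline, which is why such backdoors can be so hard to detect.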
In their Binding Hook article, Reveron and Savage make a critical point about AI. It isn’t just another, more advanced computer system. It is fundamentally different and less manageable by its human masters. It lacks an underlying theory to explain its anomalous behavior, which is why even the AI specialists who train machine-learning systems cannot account for that behavior.
Deploying technology with serious deficits is always risky until a way to compensate for them has been discovered. Trouble in is trouble out.