A whole new thing to worry about has just arrived. It joins a list of existential concerns for the future, along with global warming, the wobbling of democracy, the relationship with China, the national debt, the supply chain crisis, and the wreckage in the schools.
Artificial intelligence, known as AI, has had pride of place on the worry list for several weeks. Its arrival was trumpeted for a long time, including by the government and by techies across the board. But it took ChatGPT, an AI chatbot developed by OpenAI, for the hair on the back of the national neck to rise.
Now we know the race into the unknown is speeding up. The tech biggies, like Google and Facebook, are trying to catch the lead claimed by Microsoft. They are rushing headlong into a science the experts say they only partially understand. They really don’t know how these complex systems work; maybe like a book that the author cannot read after having written it.
Incalculable acres of newsprint and untold decibels of broadcasting have been raising the alarm ever since a test of Microsoft's ChatGPT-powered Bing chatbot told a New York Times reporter that it was in love with him and that he should leave his wife. Guffaws all around, but also fear and doubt about the future.
Will this Frankenstein creature turn on us? Maybe it loves just one person, hates the rest of us, and plans to do something about it.
In an interview on the PBS television program "White House Chronicle," John Savage, An Wang Professor Emeritus of Computer Science at Brown University, told me there is a danger of over-reliance on AI in decision-making, and hence of mistakes.
For example, he said, some Stanford students partly covered a stop sign with black and white pieces of tape. An AI vision system misread the altered sign as one permitting travel at 45 miles an hour. Similarly, Savage said, the slightest calibration error in a medical operation using artificial intelligence could result in a fatality.
Savage believes AI needs to be regulated and that any information generated by AI needs verification. As a journalist, I find the latter alarming.
Already, AI is writing fake music almost undetectably. There is a real possibility that it can write legal briefs. So why not usurp journalism for ulterior purposes and put stiffs like me out of work?
AI images can already be made to speak and look like the humans they are aping. How will you recognize a “deep fake” from the real thing? Probably, you won’t.
Already, we struggle to separate fact from fiction. Disinformation is so plentiful, and spreads so fast, that some journalists are in a state of shell shock, particularly in Eastern Europe, where legitimate writers and broadcasters are assaulted daily with disinformation from Russia.
“How can we tell what is true?” a reporter in Vilnius, Lithuania, asked me during an Association of European Journalists’ meeting as the Russian disinformation campaign was revving up before the Russian invasion of Ukraine.
Well, that is going to get a lot harder. “You need to know the provenance of information and images before they are published,” Brown University’s Savage said.
But how? In a newsroom on deadline, we have to trust the information we have. One wonders to what extent malicious users of the new technology will infiltrate research materials or, later, the content of encyclopedias. Or, are the tools of verification themselves trustworthy?
Obviously, there will be upsides to thinking machines scouring the internet for information on which to base decisions. I think of handling nuclear waste; disarming old weapons; simulating the battlefield; incorporating historical knowledge; and seeking new products and materials. Medical research will accelerate, one assumes.
However, privacy may be a thing of the past — it almost certainly will be.
Just consider that attractive person you saw at the supermarket but were unsure what would happen if you initiated a conversation. Snap a picture on your camera, and in no time AI will tell you who the stranger is, whether the person might want to know you and, if that should be your interest, whether the person is married, in a relationship or just waiting to meet someone like you. Or whether he or she is a spy for a hostile government.
AI might save us from ourselves. But we should ask how badly we need saving — and be prepared to ignore the answer. Damn it, we are human.