- Warnings about the potential dangers of advanced AI have been increasing in recent months.
- Some of these statements are vague and experts disagree on what exactly the main risks are.
- These are some of the potential threats from advanced AI, and how to think about the risks.
AI is as dangerous as nuclear war and global pandemics.
That’s according to the latest warning issued by the Center for AI Safety (CAIS). The statement is backed by major players in the AI industry, including Sam Altman, head of ChatGPT creator OpenAI.
The warning is one of many that have been issued in recent months. Some of the tech’s early creators claim we’re barreling head-first toward the destruction of humanity while others are warning that regulation is desperately needed.
Some of these statements have left people struggling to make sense of the increasingly hyperbolic claims.
David Krueger, an AI expert and assistant professor at Cambridge University, said that while people might want concrete scenarios when it comes to the existential risk of AI, it’s still difficult to point to these with any degree of certainty.
“I’m not concerned because there is an imminent threat in the sense where I can see exactly what the threat is. But I think we don’t have a lot of time to prepare for potential upcoming threats,” he told Insider.
With that in mind, here are some of the potential issues experts are worried about.
1. An AI takeover
One of the most commonly cited risks is that AI will get out of its creator’s control.
Artificial general intelligence (AGI) refers to AI that is as smart as or smarter than humans at a broad range of tasks. Current AI systems are not sentient, but they are created to be humanlike. ChatGPT, for example, is built to make users feel like they are chatting with another person, said Janis Wong of The Alan Turing Institute.
Experts are divided on how exactly to define AGI but generally agree that the potential technology presents dangers to humanity that need to be researched and regulated, Insider’s Aaron Mok reported.
Krueger said the most obvious example of these dangers is military competition between nations.
“Military competition with autonomous weapons — systems that by design have the ability to affect the physical world and cause harm — it seems more clear how such systems could end up killing lots of people,” he said.
“A total war scenario powered by AI in a future when we have advanced systems that are smarter than people, I think it’d be very likely that the systems would get out of control and might end up killing everybody as a result,” he added.
2. AI causing mass unemployment
There’s a growing consensus that AI is a threat to some jobs.
Abhishek Gupta, founder of the Montreal AI Ethics Institute, said the prospect of AI-induced job losses was the most “realistic, immediate, and perhaps pressing” existential threat.
“We need to look at the lack of purpose that people would feel at the loss of jobs en masse,” he told Insider. “The existential part of it is what are people going to do and where are they going to get their purpose from?”
“That is not to say that work is everything, but it is quite a bit of our lives,” he added.
CEOs are starting to be upfront about their plans to leverage AI. IBM CEO Arvind Krishna, for example, recently announced the company would slow hiring for roles that could be replaced with AI.
“Four or five years ago, nobody would have said anything like that statement and be taken seriously,” Gupta said of IBM.
3. AI bias
If AI systems are used to help make wider societal decisions, systematic bias can become a serious risk, experts told Insider.
There have already been several examples of bias in generative AI systems, including early versions of ChatGPT. OpenAI has since added guardrails to help ChatGPT avoid producing problematic answers when users ask the system for offensive content.
Generative AI image models can produce harmful stereotypes, according to tests run by Insider earlier this year.
If there are instances of undetected bias in AI systems that are used to make real-world decisions, for example, approving welfare benefits, that could have serious consequences, Gupta said.
Training data is often predominantly in English, and funding to train AI models in other languages is limited, according to Wong.
“So there’s a lot of people who are excluded, or certain languages will be trained less well than other languages as well,” she said.