AI could threaten humanity in 2 years, warns UK AI task force adviser
The U.K. prime minister’s AI task force adviser said large AI models would need regulation and control in the next two years to curb major existential risks.
The artificial intelligence (AI) task force adviser to the prime minister of the United Kingdom said humans have roughly two years to control and regulate AI before it becomes too powerful.
In an interview with a local U.K. media outlet, Matt Clifford, who also serves as the chair of the government’s Advanced Research and Invention Agency (ARIA), stressed that current systems are getting “more and more capable at an ever-increasing rate.”
He went on to say that if officials don’t consider safety and regulation now, the systems will become “very powerful” within two years.
“We’ve got two years to get in place a framework that makes both controlling and regulating these very large models much more possible than it is today.”
Clifford warned that there are “a lot of different types of risks” when it comes to AI, both in the near term and long term, which he called “pretty scary.”
The interview followed a recent open letter published by the Center for AI Safety and signed by 350 AI experts, including OpenAI CEO Sam Altman, which said AI should be treated as an existential threat on par with nuclear weapons and pandemics.
“They’re talking about what happens once we effectively create a new species, sort of an intelligence that’s greater than humans.”
The AI task force adviser said that these threats posed by AI could be “very dangerous” and could “kill many humans, not all humans, simply from where we’d expect models to be in two years’ time.”
Related: AI-related crypto returns rose up to 41% after ChatGPT launched: Study
According to Clifford, regulators and developers’ primary focus should be understanding how to control the models and then implementing regulations on a global scale.
For now, he said his greatest fear is the lack of understanding of why AI models behave the way they do.
“The people who are building the most capable systems freely admit that they don’t understand exactly how [AI systems] exhibit the behaviors that they do.”
Clifford highlighted that many of the leaders of organizations building AI also agree that powerful AI models must undergo some kind of audit and evaluation process before deployment.
Currently, regulators worldwide are scrambling to understand the technology and its ramifications, while trying to create regulations that protect users and still allow for innovation.
On June 5, officials in the European Union went so far as to suggest mandating that all AI-generated content be labeled as such to prevent disinformation.
In the U.K., a front-bench member of the opposition Labour Party echoed the sentiments mentioned in the Center for AI Safety’s letter, saying technology should be regulated like medicine and nuclear power.
Magazine: AI Eye: 25K traders bet on ChatGPT’s stock picks, AI sucks at dice throws, and more