Risk of extinction by AI should be global priority, say experts

Hundreds of tech leaders call for world to treat AI as danger on par with pandemics and nuclear war

By Geneva Abdul
May 30, 2023

A group of leading technology experts from across the world has warned that artificial intelligence should be considered a societal risk and prioritised in the same class as pandemics and nuclear war.

The statement, signed by hundreds of executives and academics, was released by the Center for AI Safety on Tuesday amid growing concern over the regulation of the technology and the risks it poses to humanity.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement said. Signatories included the chief executives of Google’s DeepMind, the ChatGPT developer OpenAI, and the AI startup Anthropic.

Global leaders and industry experts – including the leaders of OpenAI – have called for regulation of the technology, citing existential fears that it could significantly affect job markets, harm the health of millions and weaponise disinformation, discrimination and impersonation.

This month the man often touted as the godfather of AI – Geoffrey Hinton, also a signatory – quit Google, citing the technology's "existential risk". That risk was echoed and acknowledged by No 10 last week for the first time – a swift change of tack within government, coming two months after it published an AI white paper that industry figures have warned is already out of date.
