Artificial Intelligence – force for Peace or Extinction?
Can Artificial Intelligence (AI) help bring peace to the world? Or, as hundreds of experts from around the world warned in a recent statement, does it present a risk of human extinction comparable to that posed by pandemics and nuclear war?
First, the Negative View
Geoffrey Hinton, sometimes called the godfather of AI, left his leading role at Google in May 2023 so that he could speak openly about his concerns, saying that AI presents an “existential risk” to humanity.
The Prime Minister, Rishi Sunak, met with the leaders of companies developing AI systems at the end of May to discuss how to regulate these developments and manage the risks, including existential threats, “so that the public can have confidence that AI is used in a safe and responsible way”.
Current AI systems already spread fake news and disinformation as readily as truth, write essays for students (making examinations far harder to manage) and raise ethical questions such as those UNESCO has outlined. But the real threat comes from possible future systems possessing “superintelligence”: systems more intelligent than humans.
Superintelligent systems, the argument runs, would regard humans as a threat and attempt to control or kill us. This idea was explored in Nick Bostrom’s book “Superintelligence: Paths, Dangers, Strategies”.
The Center for AI Safety (CAIS) says that
AI risk has emerged as a global priority, ranking alongside pandemics and nuclear war. Despite its importance, AI safety remains remarkably neglected, outpaced by the rapid rate of AI development. Currently, society is ill-prepared to manage the risks from AI. CAIS exists to equip policymakers, business leaders, and the broader world with the understanding and tools necessary to manage AI risk.
See Wikipedia for more about the existential risk from artificial general intelligence.
The Positive View
Branka Panic, a Fellow of the Center on International Cooperation, is the founder of AI for Peace, a San Francisco-based think tank whose vision is a future in which AI benefits peace, security and sustainable development, and in which diverse voices influence the creation of AI and related technologies.
However, the site is not very active and has not produced a newsletter since June 2022.
Peace One Day, whose founder Jeremy Gilley was behind the establishment of 21 September as the International Day of Peace, ran an event called “AI to Peace 2023” to examine two questions:
- What role will AI play in relation to humanity’s destruction?
- What role will it play in humanity’s survival?
The discussions brought together leading technical experts, academics and activists to share their thoughts on the impact Artificial Intelligence already has on society, and on what further developments in the technology could mean for the future of humanity. Guests discussed whether AI makes life simpler, safer and more efficient, or whether it exacerbates deeply entrenched views, costs workers their jobs and multiplies already existing problems.
Videos of the events can be found on YouTube. The video below is a discussion of the role of AI in humanity’s survival.