SEOUL, South Korea (AP) – The world's leading artificial intelligence companies opened a mini-summit on AI by pledging to develop their technology safely, including pulling back if the most extreme risks cannot be contained.
World leaders are expected to hammer out further agreements on artificial intelligence as they meet virtually on Tuesday to discuss not only the potential risks of AI, but also its benefits and ways to foster innovation.
The AI Seoul Summit is a low-key follow-up to the high-profile AI Safety Summit held in November at Bletchley Park in the UK, where participating countries agreed to work together to contain the potentially "catastrophic" risks posed by rapid advances in AI.
Seven months after the Bletchley Park meeting, United Nations Secretary-General António Guterres said in the opening session that the world is "witnessing life-changing technological advances and life-threatening new risks, from disinformation to mass surveillance to the prospect of lethal autonomous weapons."
In a video address, Guterres said universal guardrails and regular dialogue on AI are needed. "We cannot sleepwalk into a dystopian future where the power of AI is controlled by a few people, or worse, by algorithms beyond human understanding," he said.
The two-day conference, co-hosted by the South Korean and UK governments, comes as major technology companies such as Meta, OpenAI and Google unveil the latest versions of their AI models.
The companies are among 16 AI firms that made voluntary commitments on AI safety as the talks got underway, the UK government said. These companies, which also include Amazon, Microsoft, France's Mistral AI, China's Zhipu.ai and the United Arab Emirates' G42, pledged responsible governance and public transparency, and vowed to ensure the safety of their cutting-edge AI models.
The pledge includes publishing safety frameworks setting out how the companies will measure the risks of these models. In extreme cases where risks are severe and "intolerable" and cannot be mitigated, the companies committed to halting development and deployment of their models and systems.
Since last year's UK conference, the AI industry has "increasingly focused on its most pressing concerns, including misinformation and disinformation, data security, bias, and keeping humans in the loop," said Aidan Gomez, CEO of Cohere, one of the AI companies that signed the agreement. "It is essential that we continue to consider all possible risks, while prioritizing those most likely to create problems if not properly addressed."
On Tuesday evening, South Korean President Yoon Suk Yeol and British Prime Minister Rishi Sunak are due to meet other world leaders, industry chiefs and heads of international organisations for a virtual summit. The online summit will be followed by an in-person meeting of digital ministers and experts on Wednesday, organisers said.
While the UK meeting centred on AI safety issues, the agenda for this week's gathering has been broadened to include "innovation and inclusion," Wang Yoon-jeong, deputy director of South Korea's presidential national security office, told reporters on Monday.
Wang said participants will "discuss not only the risks posed by AI, but also its positive aspects and how it can contribute to humanity in a balanced manner."
According to Park Sang-wook, President Yoon's chief of staff for science and technology, the AI agreement will include the results of discussions on safety, innovation and inclusiveness.
Governments around the world are scrambling to formulate regulations for AI even as the technology advances rapidly and stands to transform many aspects of daily life, from education and the workplace to copyright and privacy. There are concerns that advances in AI could eliminate jobs, deceive people and spread disinformation.
This week's meeting is just one in a series of efforts to build AI guardrails. The United Nations General Assembly has approved its first resolution on the safe use of AI systems, the US and China recently held their first high-level talks on AI, and the European Union's world-first AI Act is set to take effect later this year.
___
Chan contributed to this report from London. Associated Press writer Edith M. Lederer contributed from the United Nations.
Kim Hyun-jin and Kelvin Chan, Associated Press