World governments and major technology companies made a series of pledges on Tuesday at an artificial intelligence summit in Seoul, South Korea, committing to invest in AI research, testing and safety.
Amazon, Google, Meta, Microsoft, OpenAI and Samsung were among the companies that announced voluntary measures to ensure AI is not used for biological weapons, disinformation or automated cyberattacks, according to a statement from the summit and reports from Reuters and the Associated Press. The companies also agreed to incorporate a "kill switch" into their AI, allowing them to effectively shut down their systems in the event of a catastrophe.
“We cannot sleepwalk into a dystopian future where the power of AI is controlled by a few,” United Nations Secretary-General António Guterres said in a statement. “How we act now will determine the times.”
The pledge by governments and big tech companies is the latest in a series of efforts to create rules and guardrails as the use of AI expands. In the year and a half since OpenAI released its generative AI chatbot ChatGPT, companies have flocked to the technology to help with automation and communication. Companies are using AI to monitor the safety of infrastructure, identify cancer in patient scans and help children with their math homework. (For CNET's hands-on reviews of generative AI products like Gemini, Claude, ChatGPT and Microsoft Copilot, as well as AI news, tips and commentary, check out our AI Atlas resource page.)
Read more: AI Atlas, your guide to today's artificial intelligence
The Seoul summit comes as Microsoft, on the other side of the Pacific, unveils its latest AI tools at its Build conference for developers and engineers. It was held one week after Google's I/O developer conference, where the search giant announced advances in its Gemini AI systems and reiterated its commitment to AI safety.
But despite the safety pledges, AI experts warn that developing AI carries extreme risks.
“Despite promising first steps, societal response has not been commensurate with the possibility of rapid, transformative progress that many experts expect,” a group of experts, including AI pioneer Geoffrey Hinton, wrote in Science magazine earlier this week. “A responsible path is available — if we have the wisdom to follow it.”
Tuesday's agreement between governments and major AI companies follows a series of commitments made by the companies last November, when delegates from 28 countries agreed to contain potentially “catastrophic risks” from AI, including through legislation.
Correction, May 22: This article originally incorrectly listed the location of this week's AI Summit. It was held in Seoul, South Korea.
Editor's note: CNET has used an AI engine to help create dozens of stories, which are labeled accordingly. The note you're reading is attached to articles that substantively address AI topics but are written entirely by our expert editors and writers. For more information, see our AI policy.