President Joe Biden's administration is pressuring the tech industry and financial institutions to shut down a growing market for abusive sexual images created with artificial intelligence technology.
New generative AI tools are making it easy to transform someone's likeness into a sexually explicit AI deepfake and then share the realistic image in chat rooms and on social media, leaving victims, whether they are celebrities or children, with little recourse to stop it.
The White House on Thursday issued a call for companies to cooperate voluntarily in the absence of federal legislation, with officials seeking commitments to a series of specific steps they hope will help curb the creation, spread and monetization of non-consensual AI imagery, including explicit images of children.
“When generative AI emerged, everyone was speculating where the first real harm was going to happen, and I think we know the answer,” said Arati Prabhakar, director of the White House Office of Science and Technology Policy and Biden's chief science adviser.
She told The Associated Press that there has been an “alarming increase” in non-consensual imagery facilitated by AI tools that primarily targets women and girls and threatens to upend their lives.
“If you're a teenage girl, if you're a gay kid, these are issues that people are experiencing right now,” she said. “We've seen an acceleration because of generative AI, which is evolving very quickly, and the quickest thing that can happen is for companies to stand up and take responsibility.”
The White House document, shared with The Associated Press ahead of Thursday's announcement, calls for action not only from AI developers but also from payment processors, financial institutions, cloud computing providers, search engines and the gatekeepers that control listings in mobile app stores – specifically Apple and Google.
The government said the private sector should work actively to “prevent the monetization” of image-based sexual abuse, in particular by restricting payment access to sites that promote explicit images of minors.
Prabhakar said many payment platforms and financial institutions have already stated that they will not support companies that promote abusive imagery.
“But sometimes it's not enforced. Sometimes there are no terms and conditions,” she said. “This is an example of where enforcement could be more rigorous.”
Cloud service providers and mobile app stores can also “regulate web services or mobile applications that are marketed for the purpose of creating or altering sexually explicit images without the consent of individuals,” the document said.
And whether the images posted online are AI-generated or real nudes, online platforms should make it easier for victims to remove them.
Taylor Swift is the best-known victim of pornographic deepfake images. When sexually explicit AI-generated images of the singer-songwriter began circulating on social media in January, her most ardent fans fought back. Microsoft promised to step up safety measures after some of the images were traced to the company's AI visual design tool.
A growing number of schools in the United States and other countries are grappling with deepfakes: AI-generated nude images of students, in some cases created and shared by their own classmates.
Last summer, the Biden administration brokered voluntary commitments from major tech companies, including Amazon, Google, Meta and Microsoft, to implement a range of safeguards before releasing new AI systems to the public.
Then, in October, President Biden signed an ambitious executive order aimed at shaping how companies develop AI so they can profit from it without endangering public safety. While the order focuses on broader concerns about AI, including national security, it also takes note of the emerging problem of AI-generated child abuse imagery and the search for better ways to detect it.
But Biden also said the administration's AI safeguards would need to be backed up by legislation. A bipartisan group of senators is calling on Congress to spend at least $32 billion over the next three years to fund measures to develop and safely guide artificial intelligence, but has largely held off on calls to enact those safeguards into law.
Jennifer Klein of the White House Gender Policy Council said encouraging companies to take voluntary steps “doesn't change the fundamental need for Congress to act here.”
Creating and possessing sexual images of children, even if they are fake, is already a crime under longstanding laws. Federal prosecutors earlier this month charged a Wisconsin man with using a popular AI image-generating tool called Stable Diffusion to create thousands of realistic images of minors engaging in sexual acts. The man's lawyer declined to comment after his arraignment on Wednesday.
Yet there is little oversight over the technological tools and services that enable the creation of such images, some of which are hosted on ephemeral commercial websites that reveal little information about their operators or the underlying technology.
In December, the Stanford Internet Observatory said it had found thousands of suspected child sexual abuse images in LAION, a massive AI database that indexes online images and captions and is used to train leading AI image creation tools such as Stable Diffusion.
London-based Stability AI, which owns the latest version of Stable Diffusion, said this week that it “did not approve the release” of the earlier model allegedly used by the Wisconsin man. Such open-source models are difficult to recall once released because their technical components are publicly available on the internet.
Prabhakar said it's not just open-source AI technology that's causing harm.
“This is a much broader issue,” she said, “and unfortunately, this seems to be a category where a lot of people are using image generators. We've seen an explosion in this space recently, but I don't think it breaks down neatly into open-source and proprietary systems.”