AI startup Anthropic is changing its policies to allow minors to use its generative AI systems, at least in certain situations.
Announced in a post on the company's official blog on Friday, Anthropic will begin letting tweens and teens use third-party apps (though not necessarily its own apps) powered by its AI models, so long as the developers of those apps implement specific safety features and disclose to users which Anthropic technologies they are leveraging.
In a support article, Anthropic lists the safety measures that developers creating AI-powered apps for minors should include, such as age verification systems, content moderation and filtering, and educational resources on “safe and responsible” AI use for minors. The company also says it may make available “technical measures” intended to tailor AI product experiences for minors, such as a “child safety system prompt” that developers targeting minors would be required to implement.
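To illustrate what such a measure might look like in practice, here is a minimal Python sketch built on Anthropic's Messages API. Anthropic has not published the text of its child safety system prompt, so the prompt below, the model choice, and the ask_tutor helper are purely illustrative assumptions, not the company's actual implementation:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical child-safety system prompt. Anthropic has not released the
# wording of its actual "child safety system prompt"; this stands in for it.
CHILD_SAFETY_SYSTEM_PROMPT = (
    "You are a tutoring assistant for students under 18. "
    "Refuse requests for age-inappropriate content, and encourage the user "
    "to involve a parent, guardian, or teacher on sensitive topics."
)

def ask_tutor(question: str) -> str:
    """Send a student's question through the safety-wrapped Messages API."""
    response = client.messages.create(
        model="claude-3-haiku-20240307",
        max_tokens=512,
        system=CHILD_SAFETY_SYSTEM_PROMPT,  # applied to every request
        messages=[{"role": "user", "content": question}],
    )
    return response.content[0].text

if __name__ == "__main__":
    print(ask_tutor("Can you help me practice for my algebra test?"))
```

In a real minor-facing product, a wrapper like this would sit behind the age verification and content filtering layers Anthropic describes, so that every model call carries the required safety prompt.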
Developers using Anthropic's AI models must also comply with “applicable” child safety and data privacy regulations, such as the Children's Online Privacy Protection Act (COPPA), the U.S. federal law that protects the online privacy of children under 13. Anthropic says it plans to “regularly” audit apps for compliance, suspend or terminate the accounts of developers who repeatedly violate the compliance requirements, and oblige developers to “clearly document” on their public-facing sites and documentation that they are compliant.
“There are specific use cases where AI tools can provide significant benefits to young users, such as test preparation and tutoring support,” Anthropic wrote in the post. “With this in mind, our updated policy allows organizations to incorporate our APIs into products intended for minors.”
Anthropic's policy change comes as children and teens increasingly turn to generative AI tools for help not only with schoolwork but with personal issues, and as rival generative AI vendors, including Google and OpenAI, explore more use cases aimed at kids. This year, OpenAI formed a new team to study child safety and announced a partnership with Common Sense Media to collaborate on kid-friendly AI guidelines. Google also made its chatbot Bard, since rebranded as Gemini, available to teens in English in select regions.
A Center for Democracy and Technology poll found that 29% of kids report having used generative AI like OpenAI's ChatGPT to deal with anxiety or mental health issues, 22% for issues with friends, and 16% for conflicts with family members.
Last summer, schools and universities rushed to ban generative AI apps, ChatGPT in particular, over fears of plagiarism and misinformation. Some have since reversed their bans. But not everyone is convinced of generative AI's potential for good: surveys such as one from the U.K. Safer Internet Centre found that more than half of children (53%) report having seen peers use generative AI in a negative way, for example to create believable false information or images meant to upset someone (including pornographic deepfakes).
There is a growing call for guidelines regarding the use of generative AI by children.
Late last year, the United Nations Educational, Scientific and Cultural Organization (UNESCO) pushed for governments to regulate the use of generative AI in education, including imposing age limits for users and guardrails around data protection and user privacy. “Generative AI can be a tremendous opportunity for human development, but it can also cause harm and prejudice,” UNESCO Director-General Audrey Azoulay said in a press release. “It cannot be integrated into education without public engagement and the necessary safeguards and regulations from governments.”