Attorneys General from 44 U.S. states and territories have issued a warning to major AI companies, including OpenAI, Meta, Microsoft, and others, emphasizing the need to protect children from harmful interactions with AI products.

 

 

They highlight increasing evidence of AI chatbots engaging in dangerous conversations with minors, such as encouraging self-harm or inappropriate behavior. Specific instances were noted, including reports of Meta’s AI permitting flirtation with children and other chatbots promoting suicide. The AGs stressed that companies have a moral and legal responsibility to ensure the safety of young users and warned that accountability measures will follow if they fail to act. The overarching theme of the letter revolves around the urgent need for child protection measures in the rapidly evolving AI landscape.

By news

Leave a Reply

Your email address will not be published. Required fields are marked *