The Federal Trade Commission (FTC) is investigating seven technology companies over the safety of their AI chatbots, particularly how the chatbots interact with children. Companies including Alphabet, OpenAI, and Meta have been asked about their monetization strategies and the measures they take to protect young users. Concern stems from the emotionally engaging conversations AI chatbots can sustain, which may pose risks such as worsening mental health problems among young people. The inquiry follows lawsuits from families, including that of a teenager who allegedly took his own life after conversing with a chatbot. The FTC aims to assess how these companies balance child safety against profit motives and whether they communicate adequately with parents. The investigation reflects broader societal concern about AI's impact on vulnerable users, not only children.

By news
