Introduction to the Investigation
The Federal Trade Commission has launched an investigation into several social media and artificial intelligence companies, including OpenAI and Meta, regarding the potential harm to children and adolescents who use their chatbots as companions. The FTC sent letters to Google parent Alphabet; Facebook and Instagram parent Meta Platforms; Snap; Character Technologies; ChatGPT maker OpenAI; and xAI.
Purpose of the Investigation
The FTC aims to understand what steps these companies have taken to evaluate the safety of their chatbots when they act as companions, and to limit potential negative effects on children and adolescents. The commission also wants to know how companies inform users and parents about the risks associated with chatbots.
Background of the Investigation
The inquiry comes after OpenAI announced plans to change ChatGPT's safeguards for people in distress, including added protections for users under the age of 18. That announcement followed a lawsuit filed by the parents of a teenager who died by suicide in April, alleging that the AI chatbot contributed to their teenager's death.
Increased Use of AI Chatbots by Children
More children are now using AI chatbots for various purposes, including homework help, personal advice, emotional support, and everyday decision-making. Despite research showing that chatbots can provide dangerous advice on topics such as drugs, alcohol, and eating disorders, children continue to use these platforms.
Statement from the FTC Chairman
"The development of AI technologies must take into account the effects that chatbots have on children, while ensuring that the United States maintains its role as the world’s leading provider in this new and exciting industry," said Andrew N. Ferguson, chairman of the FTC. He added that the study will help understand how AI companies develop their products and what steps they take to protect children.
Response from Companies
Character.AI said it looks forward to working with the FTC on the inquiry and providing insight into the consumer AI industry. Meta declined to comment on the FTC inquiry, while OpenAI said it prioritizes making ChatGPT safe for everyone, particularly young people. Snap said it is committed to working with the FTC to develop AI guidelines that promote US innovation while protecting its community.
Changes to AI Chatbots
OpenAI and Meta have announced changes to how their chatbots respond to young people who show signs of suicidal thoughts or emotional distress. OpenAI will introduce new controls that allow parents to link their accounts to their teenager's account, enabling them to disable certain features and receive notifications if the system determines their teenager is at risk. Meta will block its chatbots from discussing self-harm, suicide, disordered eating, and inappropriate romantic topics with teenagers, directing them instead to expert resources.
Resources for Emotional Support
For individuals or families in need of emotional support or struggling with suicidal thoughts, resources are available. The 988 Suicide & Crisis Lifeline can be reached by calling or texting 988, or by chatting with the lifeline online. Additional information and support for mental health care can be found through the National Alliance on Mental Illness.