The U.S. Federal Trade Commission (FTC) has ordered Alphabet Inc.'s Google, OpenAI, Meta Platforms Inc., and four other developers of artificial intelligence chatbots to turn over information about their technology's impact on children.
The agency, which enforces antitrust and consumer protection law, said Thursday it had issued orders to the companies seeking information on how they evaluate, test, and monitor their chatbots, and what steps they have taken to limit use by minors. The seven companies are Google, OpenAI, Meta Platforms Inc. and its Instagram subsidiary, Snap Inc., Elon Musk's xAI, and Character Technologies Inc., the developer of Character.AI.
Chatbot developers are facing mounting scrutiny over whether they are doing enough to keep their services safe and to prevent users from engaging in dangerous behavior. Last month, the parents of a California high school student sued OpenAI, alleging that ChatGPT isolated their son from his family and helped him plan his suicide in April of this year. The company said it has expressed sympathy to the family and is reviewing the lawsuit.
Google, Snap, OpenAI, xAI, and Character.AI did not immediately respond to requests for comment. Meta Platforms Inc. declined to comment, though the company has recently taken steps to ensure its chatbots avoid discussing topics such as self-harm and suicide with minors.
Under U.S. law, technology companies are barred from collecting data on children under 13 without parental consent. Lawmakers have tried for years to extend those protections to older teenagers, but the legislation has so far stalled.