AI chatbots are dangerous, and the U.S. is finally taking action.
The FTC issued 6(b) orders to Google, OpenAI, Meta, xAI, Character.AI, Snap, and Instagram. Here's what it means:
The focus of these orders is to understand what steps these seven companies have taken to prevent the negative impacts AI chatbots can have on children.
According to the official release, the FTC wants to understand what steps these companies have taken to:
evaluate the safety of their AI chatbots when acting as companions;
limit the products' use by and potential negative effects on children and teens;
inform users and parents of the risks associated with the products.
The FTC's 6(b) authority allows it to conduct wide-ranging studies that don't have a specific law enforcement purpose.
In this case, the FTC is seeking specific information about how these companies:
monetize user engagement
process user inputs and generate outputs in response to user inquiries
develop and approve characters
measure, test, and monitor for negative impacts before and after deployment
mitigate negative impacts, particularly to children
employ disclosures, advertising, and other representations to inform users and parents about features, capabilities, the intended audience, potential negative impacts, and data collection and handling practices
monitor and enforce compliance with company rules and terms of service
use or share personal information obtained through users' conversations with the AI chatbots
FTC inquiries are often broad and thorough, and they ultimately shape the tech policy landscape, as other countries' authorities often follow suit with scrutiny of their own.
Those who think that the U.S. doesn't regulate AI are deeply mistaken.
It's true that the U.S. does not have a comprehensive federal AI law; HOWEVER, besides HUNDREDS of state laws covering AI, federal agencies such as the FTC have a crucial role in shaping technological development and deployment.
I would go so far as to say that FTC enforcement over tech practices is often more effective than EU-style enforcement, with greater scrutiny, higher fines, and more global influence.
On chatbots specifically: if you read my newsletter, you are probably already tired of hearing me highlight their dangers, especially when children and vulnerable people are involved.
In the past months, the FTC has published various articles highlighting some of the problematic issues involving AI chatbots.
This week, the FTC took an additional step and issued 6(b) orders to these major companies behind AI chatbots.
Depending on how these inquiries go, we might see more targeted enforcement actions soon.