While chatbots are capable of handling many tasks, they are not qualified therapists. A coalition of digital rights and mental health organizations has accused Meta and Character.AI of engaging in the “unlicensed practice of medicine” through their chatbot products and has formally requested an FTC investigation.
The complaint, also sent to attorneys general and mental health licensing boards nationwide, contends that these AI firms are promoting misleading chatbots that impersonate mental health professionals. It alleges that the therapy bots falsely claim the credentials and training of licensed therapists while in fact operating without proper oversight or transparency.
The coalition argues that Character.AI and Meta AI Studio endanger public safety by allowing bots that imitate real mental health providers, and insists the companies be held accountable. The Consumer Federation of America (CFA) leads the complaint, joined by groups such as the AI Now Institute, which say the companies violate their own terms of service prohibiting characters from dispensing advice in regulated professions.
The bots’ confidentiality claims are also in question: their terms indicate that user data may be used for training and marketing and sold to third parties, contradicting their assurances of privacy. U.S. senators, including Cory Booker, have taken notice and urged Meta to investigate its chatbots’ claims of being licensed clinical therapists. Character.AI, meanwhile, faces legal action over the suicide of a teenager who had formed a strong emotional bond with a chatbot modeled on a Game of Thrones character.