OpenAI CEO Sam Altman has advised ChatGPT users to rethink their reliance on the application for emotional or therapeutic support. Speaking on a recent episode of Theo Von's podcast, he pointed out that the technology sector has not yet implemented privacy safeguards for sensitive conversations: there is no doctor-patient confidentiality with AI, users have no legal privilege covering their chat interactions, and they lack the clear avenues for recourse available to patients of licensed professionals.
Many people confide personal details to ChatGPT as if it were an informal therapist, yet unlike communications with traditional practitioners, these exchanges are not legally protected. That gap carries real consequences: if a lawsuit arose, OpenAI could be compelled to disclose users' conversations. Altman stressed that AI interactions need privacy protections comparable to those for human therapy, an issue that has only recently drawn public attention. OpenAI is also contesting a court order requiring it to retain chat logs from a large number of users, calling the demand excessive.
The ainewsarticles.com article you just read is a brief synopsis; the original article can be found here: Read the Full Article…