OpenAI CEO Sam Altman has voiced strong concerns over the potential privacy vulnerabilities facing ChatGPT users, particularly those turning to the AI chatbot for emotional support or therapy-like conversations.
Speaking on the podcast This Past Weekend with Theo Von, Altman noted a growing trend among users — especially younger individuals — who seek life advice or share deeply personal matters with ChatGPT.
“People talk about the most personal sh** in their lives to ChatGPT,” Altman said. “People use it — young people especially — as a therapist, a life coach; having these relationship problems and [asking] ‘what should I do?’”
He stressed a significant issue: conversations with doctors, lawyers, and therapists are protected by legal privilege, but no such protections yet apply to interactions with AI platforms. "And we haven't figured that out yet for when you talk to ChatGPT," he added.
This absence of legal safeguards could have serious implications. In the event of a legal proceeding, Altman warned, AI companies like OpenAI might be compelled to surrender user conversation data. “I think that’s very screwed up,” he said. “I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever — and no one had to think about that even a year ago.”
This concern is already shaping how some people use the platform. Theo Von himself admitted that he rarely uses ChatGPT because of uncertainty around data privacy. Altman acknowledged the caution, replying, "I think it makes sense… to really want the privacy clarity before you use [ChatGPT] a lot — like the legal clarity."
What You Should Know
Sam Altman, the CEO of OpenAI, has called attention to a critical gap in data privacy regulations concerning AI interactions.
While ChatGPT has become a trusted tool for many users — even serving as an emotional sounding board — Altman warns that conversations with the chatbot do not enjoy the same legal protections as those with professionals such as therapists or lawyers.
This exposes users to the risk of their private disclosures being used in legal cases or investigations. Altman advocates for new legal frameworks that would safeguard AI conversations and treat them with the same confidentiality as human-based therapy or legal advice.