esper - Offline & Private AI

Why Using ChatGPT as Your Therapist Might Be the Worst Privacy Decision You'll Ever Make

There's something quietly unsettling happening right now. Scroll through TikTok or Reels, and you'll probably come across videos of people describing ChatGPT as their therapist. They're typing intimate questions into cloud-hosted LLMs and treating them like some kind of digital confidant.

And honestly? I get the appeal. The thing never interrupts you, never looks at its watch, never suggests you “try to see things from your mother's perspective.” It just listens—or at least gives you that feeling.

The numbers back this up too. Nearly one in three working professionals has asked ChatGPT for life advice, according to a Times of India survey. Fortune found that young people especially love calling it “the perfect therapist.” Even OpenAI's Sam Altman admits that people share “the most personal stuff in their lives” with his company's chatbot, particularly young users who see it as both therapist and life coach.

But here's what's really bothering me about this trend: these aren't therapy sessions. They're data collection exercises disguised as emotional support.

The Legal Reality Nobody Talks About

When you sit across from a licensed therapist, there are laws protecting what you say. Real laws with real consequences for anyone who violates them. Outside a handful of narrow exceptions, your therapist cannot share your secrets, even when served with a subpoena.

ChatGPT? Not so much. Altman himself has acknowledged there's “no doctor-patient confidentiality” in these conversations. Worse, OpenAI can be forced to hand over your chats in legal proceedings, and the machinery for that already exists: a recent court order in the New York Times lawsuit requires the company to preserve all user conversations indefinitely, overriding its previous 30-day deletion policy.

Think about that for a second. Every vulnerable moment you've shared, every personal crisis you've worked through, every embarrassing question about your relationship or mental health—it's all sitting in a database somewhere, tagged with your account information.

Your Pain, Their Profit

The data handling gets even murkier when you dig into OpenAI's actual policies. If you're using the standard consumer version (which most people are), the company can feed your conversations into their model training pipeline unless you specifically opt out. And let's be honest—how many people even know that option exists, let alone actively choose it?

Meanwhile, enterprise customers who pay premium rates get a completely different deal: stricter data limits and a guarantee that their conversations won't be used for training. It's a two-tier system where your wallet determines your privacy rights.

This matters more than you might think. Once your personal story becomes part of a training dataset, it can resurface in unexpected ways through what researchers call memorization, or training-data leakage. Essentially, fragments of your private conversations could show up in responses to other users' queries. Your trauma becomes someone else's randomly generated example.

We've Been Here Before

The mental health tech space is littered with privacy disasters. Just last year, the FTC hammered BetterHelp with a $7.8 million fine for sharing users' mental health information with advertising networks—after explicitly promising not to. Mozilla's researchers found that 19 out of 32 mental health apps earned their “Downright Creepy” designation for terrible security practices, unclear data deletion policies, or invasive tracking.

If companies specifically designed for therapy can't protect sensitive information, why would we trust a general-purpose AI system to do better?

This reminds me of Facebook's early days, when sharing personal information felt harmless and fun. Then Cambridge Analytica happened, and we learned that as many as 87 million user profiles had been harvested and weaponized for political manipulation. What seemed like innocent data sharing suddenly became a tool for targeted psychological warfare.

The lesson? Today's harmless-looking data collection often becomes tomorrow's privacy nightmare.

What's Actually at Risk

Let me paint a picture of what you're potentially exposing when you treat ChatGPT like a therapist. These conversations often include trauma histories, detailed relationship conflicts, health information, financial stress, family dynamics, and personal identifiers woven throughout the narrative.

All of this sits in a database that can be subpoenaed in legal proceedings, accessed by hackers, or used to train future AI systems. Even if the company attempts to scrub identifying information, AI models have shown they can reconstruct and regurgitate fragments of training data when prompted correctly.

And here's something most people don't consider: companies get bought, go bankrupt, or change their privacy policies. Today's promises about data protection could evaporate overnight if OpenAI merges with another company or faces financial difficulties.

A Better Way Forward

The good news is that you don't have to choose between AI assistance and privacy. The solution is to keep the AI model on your device instead of sending your thoughts to someone else's servers. You can't rely on privacy assurances and promises; the only sure way to protect your data is to make it impossible for anyone else to get it, by never sending it in the first place.

Modern smartphones can actually run surprisingly capable language models locally. Open-source options like Llama, Gemma, and Phi work well in the 1–4 billion parameter range, giving you fluid, ChatGPT-style conversations without the privacy trade-offs. When the AI runs on your phone, your words never leave your device—there's no server log to subpoena, no database to hack, no training data to worry about.
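If you're curious what that looks like under the hood, here's a minimal sketch of on-device inference using the open-source llama-cpp-python bindings and a small quantized model. The model filename is a placeholder for whichever GGUF build you download; the point is that every step runs locally, with no network calls.

```python
# Minimal local-inference sketch using the open-source llama-cpp-python
# bindings (pip install llama-cpp-python). The GGUF filename below is a
# placeholder; substitute any small quantized model you've downloaded.
from llama_cpp import Llama

# The model loads and runs entirely on the local machine.
llm = Llama(
    model_path="phi-3-mini-4k-instruct-q4.gguf",  # placeholder model file
    n_ctx=2048,      # context window for the conversation
    verbose=False,
)

# A single chat turn, processed on-device from start to finish.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a supportive, private journaling companion."},
        {"role": "user", "content": "I've been feeling overwhelmed at work lately."},
    ],
    max_tokens=256,
)

print(response["choices"][0]["message"]["content"])
```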

This is the approach we've taken with our app. Everything processes on your device, you can choose from vetted open-source models or import your own, and we never see, store, or profit from your conversations. It's designed around a simple principle: your thoughts should stay yours.

The Bottom Line

I'm not trying to scare people away from AI assistance—these tools genuinely help many users work through problems and gain new perspectives. But let's be clear about what we're trading away in the process.

ChatGPT and similar cloud-based services aren't private therapy rooms, no matter how therapeutic they might feel. Until we have actual legal protections for AI conversations (and enforcement mechanisms that mean something), the smartest move is keeping your deepest vulnerabilities off other people's servers.

The technology exists to give you AI insights with journal-level privacy. The question is whether we're willing to prioritize our long-term security over short-term convenience.