• AI & Privacy •
You told a chatbot something personal. Where did that go? AI companies keep your conversations longer than you think — and California law is just catching up.
By [Your Name] · February 2026 · 7 min read
Millions of teenagers now use AI chatbots daily — for homework help, creative writing, emotional support, and working through problems. But most users have never thought carefully about what happens to those conversations. The answer is more consequential than most people assume.
By default, most major AI chatbots store your conversation history on their servers. That message you typed at 2am about feeling anxious, the essay about a personal experience, the question you'd never say out loud — it's sitting in a database somewhere.
OpenAI (ChatGPT) retains deleted conversations for up to 30 days for safety and abuse monitoring. Google (Gemini) ties conversations to your Google account and keeps that activity for 18 months by default. Many smaller AI companion apps publish no clear retention policy at all.
A 2024 EFF report found that AI companion apps popular with teenagers collect and retain sensitive personal disclosures — about mental health, family situations, and identity. Under California's CPRA, this qualifies as "sensitive personal information," a category that requires stricter protections and explicit consent. Yet many teen-facing apps have no visible consent mechanism for users under 18.
California's CPRA applies to most large AI companies serving California residents. Covered companies must honor deletion requests for your conversation history, let you opt out of having your data used to train models, and apply heightened protections to sensitive personal information. The California Privacy Protection Agency (CPPA) is developing AI-specific regulations expected in 2026.
AI chatbots are genuinely useful tools. But they're also data collection systems operated by large companies with commercial interests. California law gives you real rights over the data these companies hold — including the right to delete your conversation history and opt out of training data use. The most important thing is simply knowing those rights exist.