People are telling AI chatbots things they would not tell their closest friends — their real name, their location, their health problems, their family photos. Here is exactly what happens to that data and why it is far more dangerous than you think.
On March 19, 2026, Senator Bernie Sanders sat down with a phone propped on a stand and asked Anthropic's Claude a direct question: what would surprise the American people about how their personal data is being collected? The three-minute conversation went viral, drawing over 1.1 million views, not because the AI said anything new, but because it said it plainly enough that it became impossible to ignore. Claude told Sanders that companies collect what you search, where you go, what you buy, and how long you pause on a webpage before deciding not to click. When Sanders pressed on why, the AI gave a one-word answer: money. The full exchange is available on Sanders' official channel.
The irony was not lost on many observers: a senator using an AI to warn about AI. But the bigger question the video raised — and never quite answered — is this: if Claude knows exactly how personal data is exploited, what does that tell you about what happens when you share your personal data with Claude? That is what this post is about.
AI chatbots feel like a private conversation. They are not. Every message you type, every photo you upload, every detail you share is being logged, stored and, in most cases, used to train the very models you are talking to. The companies behind these tools are building the most intimate data sets ever assembled, and you are handing that data over voluntarily, one prompt at a time.
Your real name and location
The moment you tell an AI your full name and where you live, that information is tied to every other thing you have ever shared in that session. AI companies retain conversation logs, sometimes indefinitely. If that data is ever breached, sold, subpoenaed by law enforcement, or used as training data, your name is now attached to every personal question you asked: your health concerns, your financial worries, your relationship problems. A name and a city are enough for a data broker to merge your AI conversation history with your existing public profile and sell the result.
Photos of yourself and your family
Uploading photos to AI tools is one of the most dangerous things people do without thinking about it. When you upload an image, the AI company receives not just the pixels but the EXIF metadata embedded in the file, which can include the exact GPS coordinates of where the photo was taken, the device you used and the timestamp. Beyond metadata, companies use uploaded images to train facial recognition and image generation models. Your face, and the faces of your children, can become part of a dataset used to generate synthetic images, train surveillance tools, or build biometric profiles without your knowledge or consent.
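To see how much a single photo gives away, you can inspect its EXIF block yourself. Here is a minimal sketch using the Pillow library (version 9.4 or later for the IFD enum); the file name is a placeholder. On a phone photo taken with location services enabled, the GPS directory will typically contain latitude and longitude accurate to a few metres.

```python
from PIL import Image, ExifTags  # Pillow >= 9.4 for ExifTags.IFD

def dump_photo_metadata(path: str) -> None:
    """Print every EXIF tag embedded in a photo, including the GPS block."""
    img = Image.open(path)
    exif = img.getexif()

    # Top-level tags: camera model, software, timestamp, orientation...
    for tag_id, value in exif.items():
        name = ExifTags.TAGS.get(tag_id, tag_id)
        print(f"{name}: {value}")

    # GPS data lives in its own sub-directory (IFD) within the EXIF block
    gps = exif.get_ifd(ExifTags.IFD.GPS)
    for tag_id, value in gps.items():
        name = ExifTags.GPSTAGS.get(tag_id, tag_id)
        print(f"GPS {name}: {value}")

dump_photo_metadata("holiday.jpg")  # hypothetical file name
```

Re-saving just the pixel data to a fresh file drops this block entirely, which is essentially what metadata-stripping tools do for you.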
Health and medical information
People routinely describe symptoms, medications, mental health struggles and diagnoses to AI chatbots. This feels safer than Googling because it feels like a conversation rather than a search. It is not safer. Unlike a doctor or therapist, an AI company is under no legal obligation to keep your health information confidential: HIPAA applies to covered entities such as healthcare providers and insurers, and a general-purpose chatbot is not one. Health data is among the most valuable and most exploited categories of personal information. It is used to set insurance premiums, deny employment, and target advertising. Describing your depression, your chronic illness or your prescription history to an AI chatbot is handing that information to a corporation with no meaningful legal protection.
Financial details
Sharing your salary, your debt situation, your account balances or your investment details with an AI is not like telling a financial advisor — it is like posting it in a semi-public forum. People ask AI tools to help them with budgets and investing decisions and in doing so reveal their complete financial picture. This data, once collected, can be used to target you with predatory financial products, sold to lenders to inform credit decisions, or accessed by bad actors in the event of a data breach. The AI does not need to know your real numbers to help you — and you should never give them.
Personal details about other people
When you tell an AI about a conflict with a friend, describe a colleague's behaviour, or share details about a family member's situation, you are submitting that person's private information without their consent. They did not agree to be profiled by an AI company. Their information is now part of a corporate data set alongside yours. This is especially serious when it involves children — describing your child's behaviour, struggles or routines to an AI is creating a data profile of a minor who has no ability to consent or protect themselves.
Passwords, account details and private documents
It happens more often than you would expect. People paste a document into an AI to summarise it, and that document contains social security numbers, account credentials, confidential business information or client data. Some people paste their passwords directly, asking for advice on password strength. Every piece of text you submit is processed on external servers. Treat the AI input box the same way you would treat a public forum: never paste anything you would not be comfortable seeing on a billboard.
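If you do need to paste a document, run it through a scrubbing pass first. The sketch below is illustrative only: the patterns it catches (US social security numbers, card-like digit runs, AWS access key IDs, email addresses) are examples I have chosen, not a complete list of what counts as sensitive, and a real scrubber would need far broader coverage.

```python
import re

# Illustrative patterns only; a real scrubber needs far broader coverage.
SENSITIVE_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "AWS-KEY": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known sensitive pattern before pasting."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

sample = "Invoice for jane@example.com, SSN 123-45-6789, card 4111 1111 1111 1111."
print(redact(sample))
# -> Invoice for [REDACTED-EMAIL], SSN [REDACTED-SSN], card [REDACTED-CARD].
```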
What to do instead
- Use AI tools on a separate, anonymous account with no real name, no profile photo and no identifying information attached
- Before uploading any photo, strip its metadata using MetaClean (metaclean.app): it runs entirely in your browser, processes files locally using WebAssembly, and never uploads your images to any server. You can verify this yourself by opening your browser's DevTools and confirming in the Network tab that no upload requests are made
- For health questions, use symptom checkers that are HIPAA-compliant and do not retain your data — or speak to an actual medical professional
- Never paste real financial figures — use made-up but realistic numbers when asking AI for budgeting help
- Review the privacy settings of any AI tool you use and opt out of data being used for training wherever that option exists — on ChatGPT this is under Settings → Data Controls → Improve the model for everyone
- If you are using AI for work, assume your employer and the AI company can both see everything you type
- Use locally run AI models like Ollama or LM Studio for sensitive questions: these run entirely on your own device and never send your prompts to external servers (see the sketch after this list)
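As a concrete example of that last point, here is a minimal sketch that talks to Ollama's local HTTP API. It assumes the Ollama daemon is already running on its default port (11434) and that a model, here "llama3", has been pulled beforehand with "ollama pull llama3"; the prompt never leaves your machine.

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def ask_local(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to a model running on this machine; no data leaves it."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

# Sensitive questions stay on your own hardware
print(ask_local("What should I ask my doctor about these symptoms: ..."))
```

LM Studio works the same way in spirit: it exposes a local, OpenAI-compatible server, so the same request-to-localhost pattern applies.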
The most important mindset shift is this: an AI chatbot is not a confidant. It is a product. And like every product built by a corporation, the data you give it has value to people other than you. The less of your real self you hand over, the less there is to exploit.