Weekly Blog

Privacy Insights

Real AI threats. Practical fixes. No fluff. A new post every week so you always know what to watch out for and what to do about it.

How AI Is Quietly Building a Profile on You Right Now
Every search, every click, every pause on a video — AI systems are stitching together a detailed profile of who you are. Here is exactly how it works and what you can do about it today.

You do not have to do anything suspicious for AI to build a profile on you. You just have to exist online. Every interaction you have — every search query, every product page you linger on, every video you pause — is a data point being collected, processed and sold.

How the profile gets built

Data brokers scrape publicly available information about you — your name, address, age, relatives, social media activity, court records, property records — and combine it into a single profile. AI then enriches that profile by predicting your income, health status, political views, relationship status and purchase intent based on your behaviour patterns.

Who buys it

Advertisers. Insurance companies. Employers. Landlords. Law enforcement. The market for personal data is worth hundreds of billions of dollars annually and your profile is being sold without your knowledge or consent.

What you can do right now

  • Search your name on Google and see what comes up — that is the surface of your public profile
  • Use a service like DeleteMe to request removal from the major data broker databases
  • Use a VPN like NordVPN to stop your internet provider from selling your browsing history
  • Switch to ProtonMail so your emails are not being scanned and used for targeting

None of these steps makes you invisible but together they make you significantly harder and more expensive to profile — which means most data brokers will move on to easier targets.

5 Settings on Your Phone That Are Leaking Your Location Right Now
Most people have no idea their phone is broadcasting their location to dozens of apps even when GPS is off. These five settings take two minutes to fix and stop the bleeding immediately.

Turning off GPS does not stop your phone from tracking your location. Your phone can still triangulate your position using WiFi networks, Bluetooth beacons and cell towers, and the barometric pressure sensor can even reveal which floor of a building you are on. Here are the five settings you need to change today.

1. App location permissions

Go to Settings → Privacy → Location Services and audit every single app. Most apps should be set to "Never" or "While Using." Almost no app genuinely needs location access "Always."

2. WiFi scanning

Even with WiFi off, your phone scans for nearby networks to improve location accuracy. On Android go to Location → WiFi Scanning and turn it off. On iPhone disable Location Services for System Services → Networking and Wireless.

3. Bluetooth scanning

Same principle as WiFi scanning. Your phone uses nearby Bluetooth devices to pinpoint your location. Disable Bluetooth Scanning in Location settings on Android.

4. Google Location History / Apple Significant Locations

Both Google and Apple keep a detailed record of everywhere you go. On Android go to Google Account → Data and Privacy → Location History and turn it off and delete existing history. On iPhone go to Settings → Privacy → Location Services → System Services → Significant Locations.

5. Ad ID tracking

Your phone has an advertising ID that allows apps to link your behaviour across different apps and build a unified profile. On Android go to Settings → Privacy → Ads and select Delete Advertising ID. On iPhone go to Settings → Privacy → Tracking and disable Allow Apps to Request to Track.

Facial Recognition Is in More Places Than You Think
From supermarket entrances to sports stadiums, facial recognition cameras are being deployed faster than any regulation can keep up. Here is where they are and how to make yourself harder to identify.

You walk into a supermarket. Before you have picked up a basket, a camera above the entrance has captured your face, matched it against a database of known shoplifters and flagged your entry to security. You are innocent. You have never stolen anything. But you are in the database because your face was once captured near an incident three years ago.

Where facial recognition is currently deployed

  • Major retail chains including some of the largest supermarket groups in the US
  • Sports stadiums and music venues for ticketing and threat detection
  • Airports and border crossings
  • Police departments in over 30 US states
  • Public transport networks in major cities
  • Some schools and university campuses

The accuracy problem

Facial recognition systems have documented error rates that are significantly higher for darker skin tones and for women. Multiple people have been wrongfully arrested based solely on a facial recognition match. The technology is being deployed far faster than any legal framework to govern it.

What you can do

You cannot opt out of every camera but you can make yourself significantly harder to identify accurately. IR-blocking glasses like Reflectacles disrupt the infrared illumination and depth sensing that many facial recognition cameras rely on, while looking completely normal to the human eye. Combined with being mindful of where you walk and what you carry, you can substantially reduce the number of accurate matches generated against your face.

The goal is not invisibility — it is raising the cost of surveillance high enough that you become not worth the effort.

Things You Should Never Share With AI — And What Happens When You Do
People are telling AI chatbots things they would not tell their closest friends — their real name, their location, their health problems, their family photos. Here is exactly what happens to that data and why it is far more dangerous than you think.

On March 19, 2026, Senator Bernie Sanders sat down with a phone propped on a stand and asked Anthropic's Claude a direct question: what would surprise the American people about how their personal data is being collected? The three-minute conversation went viral with over 1.1 million views — not because the AI said anything new, but because it said it plainly enough that it became impossible to ignore. Claude told Sanders that companies collect what you search, where you go, what you buy, and how long you pause on a webpage before deciding not to click. When Sanders pressed on why, the AI gave a one-word answer: money. The full exchange is available on Sanders' official channel.

The irony was not lost on many observers: a senator using an AI to warn about AI. But the bigger question the video raised — and never quite answered — is this: if Claude knows exactly how personal data is exploited, what does that tell you about what happens when you share your personal data with Claude? That is what this post is about.

AI chatbots feel like a private conversation. They are not. Every message you type, every photo you upload, every detail you share is being logged, stored and — in most cases — used to train the very models you are talking to. The companies behind these tools are building the most intimate data sets ever assembled, and you are handing it to them voluntarily, one prompt at a time.

Your real name and location

The moment you tell an AI your full name and where you live, that information is tied to every other thing you have ever shared in that session. AI companies retain conversation logs — sometimes indefinitely. If that data is ever breached, sold, subpoenaed by law enforcement, or used as training data, your name is now attached to every personal question you asked: your health concerns, your financial worries, your relationship problems. A name and a city are enough for a data broker to merge your AI conversation history with your existing public profile and sell it.

Photos of yourself and your family

Uploading photos to AI tools is one of the most dangerous things people do without thinking about it. When you upload an image, the AI company receives not just the pixels but the metadata embedded in the file — which can include the exact GPS coordinates of where the photo was taken, the device you used and the timestamp. Beyond metadata, companies use uploaded images to train facial recognition and image generation models. Your face — and the faces of your children — can become part of a dataset used to generate synthetic images, train surveillance tools, or build biometric profiles without your knowledge or consent.
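The metadata risk is concrete enough to demonstrate in code. Here is a minimal sketch, in pure standard-library Python, of stripping the EXIF segment from a JPEG before it leaves your machine — EXIF (stored in the APP1 marker segment) is where GPS coordinates, device model and timestamps live. The `strip_exif` helper is illustrative, not a standard tool; a real workflow would also handle other metadata formats such as XMP.

```python
import struct

def strip_exif(jpeg: bytes) -> bytes:
    """Remove EXIF (APP1) segments from a JPEG byte stream.

    A minimal sketch: walks the marker segments at the start of the
    file and drops any APP1 block, which is where GPS coordinates,
    device model and timestamps are stored.
    """
    if jpeg[:2] != b"\xff\xd8":                 # SOI marker
        raise ValueError("not a JPEG")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i + 4 <= len(jpeg) and jpeg[i] == 0xFF:
        marker = jpeg[i + 1]
        # Stop at the first non-metadata marker (e.g. quantization
        # tables or start-of-scan); copy the remainder untouched.
        if not (0xE0 <= marker <= 0xEF or marker == 0xFE):
            break
        (length,) = struct.unpack(">H", jpeg[i + 2:i + 4])
        segment = jpeg[i:i + 2 + length]
        if marker != 0xE1:                      # keep all but APP1/EXIF
            out += segment
        i += 2 + length
    out += jpeg[i:]
    return bytes(out)
```

Run your photos through something like this (or a trusted local tool) before uploading them anywhere, and the GPS trail disappears while the image itself is untouched.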

Health and medical information

People routinely describe symptoms, medications, mental health struggles and diagnoses to AI chatbots. This feels safer than Googling because it feels like a conversation rather than a search. It is not safer. Unlike a doctor or therapist, AI companies have no legal obligation to keep your health information confidential under HIPAA in most contexts. Health data is among the most valuable and most exploited categories of personal information. It is used to set insurance premiums, deny employment, and target advertising. Describing your depression, your chronic illness or your prescription history to an AI chatbot is handing that information to a corporation with no meaningful legal protection.

Financial details

Sharing your salary, your debt situation, your account balances or your investment details with an AI is not like telling a financial advisor — it is like posting it in a semi-public forum. People ask AI tools to help them with budgets and investing decisions and in doing so reveal their complete financial picture. This data, once collected, can be used to target you with predatory financial products, sold to lenders to inform credit decisions, or accessed by bad actors in the event of a data breach. The AI does not need to know your real numbers to help you — and you should never give them.

Personal details about other people

When you tell an AI about a conflict with a friend, describe a colleague's behaviour, or share details about a family member's situation, you are submitting that person's private information without their consent. They did not agree to be profiled by an AI company. Their information is now part of a corporate data set alongside yours. This is especially serious when it involves children — describing your child's behaviour, struggles or routines to an AI is creating a data profile of a minor who has no ability to consent or protect themselves.

Passwords, account details and private documents

It happens more often than you would expect. People paste a document into an AI to summarise it and that document contains social security numbers, account credentials, confidential business information or client data. Some people even paste their passwords directly, asking for advice on password strength. Every piece of text you submit is processed on external servers. Treat the AI input box the same way you would treat a public forum — never paste anything you would not be comfortable seeing on a billboard.
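A practical safeguard is to scrub obvious identifiers locally before anything gets pasted. Here is a minimal Python sketch — the patterns and the `redact` helper are illustrative, not an exhaustive PII detector, and they only catch common formats like US social security numbers, email addresses and card numbers:

```python
import re

# A few common identifier formats; extend for your own data.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive matches with a labelled placeholder
    before the text ever leaves your machine."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text
```

Running your paste through a local filter like this takes a second and costs nothing; the AI can still summarise the document perfectly well with the placeholders in place.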

What to do instead

  • Use AI tools on a separate, anonymous account with no real name, no profile photo and no identifying information attached
  • Before uploading any photo, strip its metadata using MetaClean (metaclean.app) — it runs entirely in your browser, processes files locally using WebAssembly, and never uploads your images to any server. You can verify this yourself by opening browser DevTools and watching the Network tab show zero upload requests
  • For health questions, use symptom checkers that are HIPAA-compliant and do not retain your data — or speak to an actual medical professional
  • Never paste real financial figures — use made-up but realistic numbers when asking AI for budgeting help
  • Review the privacy settings of any AI tool you use and opt out of data being used for training wherever that option exists — on ChatGPT this is under Settings → Data Controls → Improve the model for everyone
  • If you are using AI for work, assume your employer and the AI company can both see everything you type
  • Use locally-run AI models like Ollama or LM Studio for sensitive questions — these run entirely on your own device and never send your data to external servers
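The last point deserves emphasis: with a local model, the prompt never crosses the network. As a sketch, Ollama serves an HTTP API on localhost by default (port 11434), so a sensitive question can be asked with nothing but the standard library — the model name here is a placeholder and this assumes a running `ollama serve` with that model pulled:

```python
import json
import urllib.request

# Ollama's default local endpoint; the target is localhost,
# so the prompt never leaves this machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for a locally running Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def ask_locally(model: str, prompt: str) -> str:
    """Send the prompt to the local model and return its reply.
    Requires `ollama serve` to be running with the model pulled."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]
```

Whether you use Ollama, LM Studio or another local runner, the design point is the same: the health question, the salary figure, the family detail stays on hardware you own.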

The most important mindset shift is this: an AI chatbot is not a confidant. It is a product. And like every product built by a corporation, the data you give it has value to people other than you. The less of your real self you hand over, the less there is to exploit.