What Happens to Your Data When You Use AI Tools
Where your data goes when you use ChatGPT or Claude. Training policies, retention, and how to protect sensitive project info.
Difficulty: Apprentice
You paste a subcontractor proposal into ChatGPT and ask it to summarize the key terms. You upload a bid spreadsheet and ask for a sanity check on your numbers. You describe a contract dispute and ask for advice on how to respond.
Where does all that information go?
If you're using the free version of ChatGPT or a basic paid plan, the answer might surprise you. This article covers what actually happens to your data when you use AI tools, and what you can do about it.
The Short Version
Free and Plus plans: Your conversations are stored. By default, they can be used to train future AI models. OpenAI employees may read them.
Team and Enterprise plans: Your data is not used for training. You get more control over retention and deletion.
API access: Data is not used for training by default, and retention periods are shorter.
The same general pattern applies to Claude (Anthropic), Gemini (Google), and most other AI tools. Free and cheap tiers have looser privacy. Business tiers lock things down.
What Gets Collected
When you use ChatGPT, OpenAI stores:
- Everything you type (prompts)
- Everything the AI responds with (outputs)
- Any files you upload
- Your account info (email, payment details if applicable)
- Metadata like timestamps, IP address, device info
This isn't unusual for a cloud service. The difference is what happens next.
Training on Your Data
By default, OpenAI uses conversations from free and Plus users to improve future models. Your prompts and the AI's responses become training data.
What does that mean practically? The AI learns from patterns in real conversations. If thousands of users ask similar questions, the model gets better at answering them. Your specific words probably won't appear verbatim in outputs, but the information influences how the model develops.
You can opt out. In ChatGPT settings, go to Data Controls and turn off "Improve the model for everyone." This stops new conversations from being used for training.
But here's the catch: opting out doesn't delete what's already been collected. And even with training disabled, OpenAI still stores your conversations for up to 30 days for "abuse monitoring."
Who Can See Your Conversations
OpenAI employees and contractors can access conversations for:
- Reviewing outputs to improve the model
- Investigating policy violations
- Responding to legal requests
Your conversations aren't broadcast publicly, but they're not fully private either. Treat a ChatGPT conversation like an email to a company whose staff might read it.
Claude and Other Tools
Anthropic (Claude) recently updated its privacy policy. As of late 2025:
- Free, Pro, and Max users can opt in or out of training
- If you opt in, data is retained for up to 5 years
- If you opt out, retention drops to 30 days
- Business and Enterprise accounts are excluded from training entirely
Google's Gemini, Microsoft Copilot, and other tools have similar structures. The pattern holds: consumer plans have fewer protections than business plans.
What This Means for Contractors
If you're pasting bid numbers, client names, project details, or contract terms into ChatGPT, that information is sitting on someone else's servers. It might be used to train AI models. It might be read by employees.
For general questions—"how do I write an RFI response" or "explain lien waivers"—this probably doesn't matter.
For anything sensitive—pricing, client disputes, personnel issues, proprietary methods—think twice.
Practical Steps
1. Don't paste anything you wouldn't email to a stranger.
If you wouldn't put it in an email to someone outside your company, keep it out of ChatGPT. That includes uploads: a bid spreadsheet or proposal PDF is just as exposed as pasted text.
2. Turn off training.
In ChatGPT: Settings → Data Controls → Turn off "Improve the model for everyone."
In Claude: Settings → Privacy → Turn off "Help improve Claude."
This doesn't make your conversations private, but it keeps them out of training data.
3. Use Temporary Chat mode.
ChatGPT has a "Temporary Chat" option that doesn't save to your history and isn't used for training. The conversation still exists on OpenAI's servers for 30 days, but it's more ephemeral than a normal chat.
4. Consider business tiers for sensitive work.
If your company regularly handles confidential information with AI tools, look at Team or Enterprise plans. They cost more but come with actual privacy commitments: no training on your data, better access controls, audit logs.
5. Scrub sensitive details before pasting.
Change client names to "Client A." Replace real dollar amounts with round numbers. Remove project addresses. You can still get useful AI help without exposing specifics.
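If you scrub documents often, it's worth automating the first pass. Below is a minimal sketch in Python using only the standard library; the `scrub` function, its patterns, and the placeholder labels are all illustrative, not a product feature, and a regex pass will miss plenty, so always review the result by hand before pasting.

```python
import re

def scrub(text, client_names):
    """Rough first-pass redaction before pasting text into an AI tool.

    Replaces known client names, dollar amounts, and simple street
    addresses with generic placeholders. Not a substitute for a
    human review of the output.
    """
    # Replace each known client name with a generic label: Client A, Client B, ...
    for i, name in enumerate(client_names):
        label = f"Client {chr(ord('A') + i)}"
        text = re.sub(re.escape(name), label, text, flags=re.IGNORECASE)

    # Mask dollar amounts like $12,500 or $1,234.56
    text = re.sub(r"\$[\d,]+(?:\.\d{2})?", "$[AMOUNT]", text)

    # Mask simple street addresses like "42 Oak Street"
    text = re.sub(
        r"\b\d{1,5}\s+[A-Z][a-z]+\s+(?:Street|St|Avenue|Ave|Road|Rd|Blvd|Drive|Dr|Lane|Ln)\b",
        "[ADDRESS]",
        text,
    )
    return text
```

For example, `scrub("Acme Corp owes $12,500.00 for work at 42 Oak Street", ["Acme Corp"])` returns `"Client A owes $[AMOUNT] for work at [ADDRESS]"`. The placeholders still let the AI reason about the structure of the document without seeing the specifics.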
Bottom Line
AI tools are useful. They're also cloud services that store and analyze what you send them.
For most construction work, the risk is low. For anything confidential, either upgrade to a business plan or keep the sensitive parts out of the prompt.
Know what you're sharing. Adjust accordingly.