
Your Employees Used ChatGPT 47 Times Today. You Have No Idea What They Pasted.

Last month I talked to a DPO at a 40-person consultancy. She was frustrated.

“We have a policy. No client data in ChatGPT. It’s in the handbook. Everyone signed it.”

“Is it working?”

“Of course not.”

She knew her team was using ChatGPT daily. For drafting emails. Summarising documents. Writing code. Reviewing contracts. All the things that make AI useful—and all the things that might involve sensitive data.

She couldn’t stop it. She couldn’t monitor it. She couldn’t even measure it.

Welcome to shadow AI.

Shadow IT, But Worse

Shadow IT was the old problem: employees using Dropbox when the company mandated SharePoint. Installing Slack when IT wanted everyone on Teams. Annoying for IT governance, but manageable. The tools were visible. The data flows were traceable.

Shadow AI is different. There’s no software to install. No app appearing on a device. Someone opens a browser tab, pastes text, gets a response, closes the tab. It’s invisible.

And the data involved isn’t “which file-sharing tool are we using.” It’s the actual contents of customer contracts, employee records, source code, legal strategies, financial projections.

When someone pastes a client contract into ChatGPT, that data is now:

  • On OpenAI’s servers (US jurisdiction)
  • Potentially retained for 30 days (API) or indefinitely (free tier)
  • Possibly used for training (unless explicitly opted out, which your employee didn’t do)

Your policy says “don’t do this.” Your employee did it anyway because they had a deadline and AI is helpful.

Banning Doesn’t Work

Every CISO knows this, even if they won’t say it publicly: prohibition doesn’t work for useful tools.

You can block the ChatGPT domain. They’ll use Claude. Block Claude. They’ll use Gemini. Block all the major providers. They’ll use their phone. Or a personal laptop. Or a random AI tool you’ve never heard of.

The harder you make it to use AI at work, the more likely employees are to use worse alternatives in less controlled ways.

This isn’t hypothetical. A Samsung semiconductor division banned ChatGPT after employees leaked source code. Three weeks later, employees were caught using it again.

You cannot policy your way out of human behaviour. If AI helps people do their jobs, they will use AI.

The Actual Risk

Let’s be specific about what can go wrong.

Scenario 1: The Helpful Employee

A junior HR person needs to write a termination letter. They paste the employee’s performance review into ChatGPT for help with phrasing. That performance review—name, role, specific incidents, disciplinary history—is now on OpenAI’s servers.

Scenario 2: The Efficient Lawyer

A solicitor is reviewing a contract. They paste the whole thing into Claude to summarise the key terms. That contract contains deal values, party names, confidential obligations. It’s now on Anthropic’s servers.

Scenario 3: The Clever Developer

A software engineer is debugging production code. They paste an error log into ChatGPT. That error log contains customer IDs, IP addresses, and database query strings. You see where this is going.

None of these people intended to violate policy. They were just trying to do their jobs well.
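The developer scenario has a partial technical mitigation: scrub obvious identifiers before anything leaves the machine. A minimal sketch, using a few illustrative regexes — emails, IPv4 addresses, and a hypothetical customer-ID format. Real PII detection needs far more than this, but it shows the shape of "safe by default":

```typescript
// Redact obvious identifiers from a log snippet before it is shared.
// The patterns below are illustrative, not exhaustive -- production
// PII detection needs dedicated tooling, not three regexes.

const PATTERNS: Array<[RegExp, string]> = [
  [/[\w.+-]+@[\w-]+\.[\w.]+/g, "[EMAIL]"],          // email addresses
  [/\b(?:\d{1,3}\.){3}\d{1,3}\b/g, "[IP]"],         // IPv4 addresses
  [/\bcust[_-]?\d+\b/gi, "[CUSTOMER_ID]"],          // hypothetical ID format
];

function redact(text: string): string {
  // Apply each pattern in turn, replacing matches with a label.
  return PATTERNS.reduce((t, [re, label]) => t.replace(re, label), text);
}

const log = "ERROR cust_10234 from 192.168.4.17 (alice@example.com): query failed";
console.log(redact(log));
// "ERROR [CUSTOMER_ID] from [IP] ([EMAIL]): query failed"
```

A tool can run this client-side, before the request is sent, so the safe behaviour costs the employee nothing.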

What a Regulator Sees

When a data protection authority investigates your company, they’ll ask about AI usage. Increasingly, this is a standard question in audits.

“What AI tools do your employees use?”

“How is personal data protected in AI workflows?”

“What technical and organisational measures prevent inappropriate data sharing with AI providers?”

“We have a policy” is not a technical measure. It’s not even a particularly strong organisational measure. It’s a hope.

If you can’t demonstrate control—actual, architectural control—over AI data flows, you’re exposed. Not theoretically. Practically.

What Actually Helps

Banning doesn’t work. So what does?

1. Provide a sanctioned alternative

If employees need AI, give them AI. But give them a version that’s compliant by design.

This means:

  • EU-only data processing
  • Zero retention (architectural, not just policy)
  • Clear data handling documented for audits
  • No training on inputs

If the compliant option is nearly as good as ChatGPT, most employees will use it. Not because they care about GDPR, but because it’s easier than sneaking around.

2. Accept the convenience trade-off

The sanctioned alternative might be slightly worse. Slightly slower. Slightly less capable. That’s the trade-off.

What you’re buying is auditability. The ability to say: “Our employees use this tool. This tool has these data handling properties. Here’s the data processing agreement. Here’s the technical documentation.”

That’s worth a small capability gap.

3. Make the default behaviour safe

Don’t rely on employees remembering rules. Design the system so the default is compliant.

If chat history isn’t stored on a server, it can’t be leaked from a server. If data doesn’t leave the EU, there’s no cross-border transfer to justify. If the tool forgets everything when the browser closes, there’s no retention to worry about.

Architecture beats policy. Every time.
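The “forgets everything when the browser closes” property can be made concrete. A minimal sketch of an ephemeral chat store, using a hypothetical `EphemeralChat` class: history lives only in memory for the current session (in a browser you’d back it with `sessionStorage`, which the browser clears when the tab closes), so there is nothing server-side to retain or leak:

```typescript
// Ephemeral chat history: exists only for the current session.
// Nothing is written to a server, so there is no retention to audit
// and nothing to leak from a breach.

type Message = { role: "user" | "assistant"; text: string };

class EphemeralChat {
  private messages: Message[] = [];

  add(role: Message["role"], text: string): void {
    this.messages.push({ role, text });
  }

  history(): readonly Message[] {
    return this.messages;
  }

  // Explicit wipe, e.g. wired to the page's beforeunload event.
  clear(): void {
    this.messages = [];
  }
}

const chat = new EphemeralChat();
chat.add("user", "Summarise this clause.");
chat.add("assistant", "It limits liability to 12 months of fees.");
console.log(chat.history().length); // 2
chat.clear();
console.log(chat.history().length); // 0
```

The compliance argument writes itself: you can’t be compelled to disclose, and can’t accidentally expose, chat history you never stored.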

4. Reduce friction, not capability

The worst outcome is a “compliant” AI tool that’s so annoying to use that everyone ignores it.

If your sanctioned tool requires VPN access, a separate login, IT ticket approval, and only runs on managed devices—people will use ChatGPT on their phones instead.

The tool needs to be almost as easy as the shadow alternative. Otherwise you’re just creating sophisticated non-compliance.

The Uncomfortable Middle Ground

Here’s what I tell DPOs who ask:

You probably can’t eliminate shadow AI. You can reduce it. You can provide alternatives. You can create audit trails for the usage you do control. You can make compliant behaviour the path of least resistance.

But if someone is determined to paste sensitive data into a random AI tool, they will. The goal isn’t perfection. The goal is demonstrable due diligence.

“We provided a compliant alternative. We trained staff on appropriate use. We monitor for obvious violations. We document our AI data flows.”

That’s defensible. “We hoped everyone followed the policy” is not.


I built Limbo Chat because I was tired of watching this problem from the security side. Browser-based storage. EU-only processing. Nothing to leak because nothing is stored. It’s the alternative I wished existed when I was the one writing policies nobody followed.