Your Employees Are Using AI Tools at Work. Your Business Might Not Survive It

I was doing an IT review for a law firm earlier this year — 18 people, good team, solid reputation. About 40 minutes into the conversation, one of the partners mentioned almost in passing that a few of the junior associates had started using ChatGPT to help draft contract summaries.

I asked him how long that had been happening.

He wasn’t sure. “A few months, maybe?”

I asked if they had any policy around what could and couldn’t be pasted into the tool.

He looked at me the way people look when they’ve just realised something they probably should have thought about earlier.

Client names. Matter details. Confidential contract terms. All of it had been going into a third-party AI tool, for months, without any framework around data handling, storage, or terms of use. This is happening in businesses everywhere right now. And most owners have no idea.

The Shadow AI Problem

We’ve talked about shadow IT for years — employees using unsanctioned tools and apps without IT knowledge or approval. Shadow AI is the same concept, except the stakes are higher and the adoption curve is steeper.

ChatGPT reached 100 million users in two months. It took Instagram two and a half years to hit the same number. AI tools are being adopted faster than any consumer technology before them, and your employees are part of that wave, whether you’ve thought about it or not.

A recent survey found that 75% of workers are already using AI tools at work. Only 22% of organisations have a formal AI usage policy in place. That gap, between usage and governance, is where the risk lives.

Your employees aren’t trying to create a security incident. They’re trying to do their jobs faster. That’s exactly what makes this so dangerous. The intent is completely benign. The outcome can be anything but.

What Actually Happens to the Data

This is where I need to get specific, because there’s a lot of confusion about what AI tools actually do with the information you feed them. Many consumer AI tools, under their default settings, use your inputs to train and improve their models. That means the contract terms your associate pasted in, the client financials your accountant summarised, the proprietary process your operations manager described — all of it may have become training data.

Even setting aside model training, there are questions about data residency, retention periods, access by the AI provider’s employees, and what happens in the event of a breach of the AI platform itself.

For businesses in regulated industries — legal, financial services, healthcare, accounting — the implications go further. Data handling obligations don’t pause because an employee found a useful tool. If client data ends up in a third-party system without appropriate data processing agreements in place, you may have a compliance problem regardless of whether anything goes wrong.

And then there’s the accuracy issue. AI tools hallucinate. They present incorrect information with complete confidence. A junior employee who trusts AI output without verification can produce work that contains factual errors, incorrect legal references, or fabricated statistics — and if that work goes to a client, the reputational and legal exposure is yours.

The Right Response Isn’t to Ban It

I want to be clear about something: the answer here isn’t to prohibit AI tools across the board. Blanket bans don’t work in practice (people route around them the same way they’ve always routed around shadow IT), they damage morale, and they put you at a competitive disadvantage, because AI genuinely does make people more productive.

The right response is governance. A framework that lets your team use AI tools effectively while managing the risks appropriately.

That starts with understanding what’s actually being used. We do AI usage audits for clients — looking at what tools are active across the environment, what data is being fed into them, and what the actual risk exposure looks like. Most business owners are surprised by what we find.
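If you want a feel for what the first pass of an audit looks like, here’s a minimal sketch in Python that counts lookups of known AI-tool domains in an exported DNS log. The log format, filename, and domain list are all illustrative assumptions; a real audit uses your firewall or proxy vendor’s reporting and covers far more services than this.

```python
# Sketch: flag AI-tool traffic in an exported DNS log.
# Assumptions (illustrative only): one queried hostname per line in
# dns_queries.log; the domain list below is a small sample, not a
# complete inventory of AI services.

from collections import Counter

# Known AI-tool domains to watch for (sample only)
AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
    "perplexity.ai",
}

def audit_log(path: str) -> Counter:
    """Count how often each known AI domain appears in the log."""
    hits = Counter()
    with open(path) as log:
        for line in log:
            hostname = line.strip().lower()
            for domain in AI_DOMAINS:
                # Match the domain itself or any subdomain of it
                if hostname == domain or hostname.endswith("." + domain):
                    hits[domain] += 1
    return hits

if __name__ == "__main__":
    for domain, count in audit_log("dns_queries.log").most_common():
        print(f"{domain}: {count} lookups")
```

A script like this only tells you that traffic exists, not what data went into the tools. That second question is answered by talking to the team, which is most of the real work.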

From there, it’s about building a policy that matches your risk profile. Which tools are approved for which use cases? What categories of data can and can’t be used as inputs? What verification processes should apply to AI-generated outputs? How should employees handle uncertainty about whether something is appropriate?
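One thing that helps here is writing the policy down in a form that can actually be checked, not just filed away. Here’s a minimal sketch of that idea; the tool names and data categories are placeholders, and a real policy needs input from legal, not just IT.

```python
# Sketch: an AI usage policy expressed as data, so it can be
# published, queried, and enforced. Tool names and data categories
# below are placeholders for illustration.

APPROVED_TOOLS = {
    # tool: data categories it may receive
    "enterprise-chat": {"public", "internal"},
    "code-assistant": {"public", "internal"},
}

RESTRICTED_CATEGORIES = {"client-confidential", "personal-data"}

def is_permitted(tool: str, data_category: str) -> bool:
    """Return True if this tool is approved for this category of data."""
    if data_category in RESTRICTED_CATEGORIES:
        return False  # never leaves the building, whatever the tool
    return data_category in APPROVED_TOOLS.get(tool, set())

# Internal notes into the approved chat tool: fine.
assert is_permitted("enterprise-chat", "internal")
# Client-confidential material into anything: no.
assert not is_permitted("enterprise-chat", "client-confidential")
# Unapproved tools get nothing by default.
assert not is_permitted("random-free-tool", "public")
```

The point isn’t the code. It’s the discipline: every tool and every category of data gets an explicit answer, and anything unlisted is denied by default.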

For some clients, we also help implement enterprise versions of AI tools that offer better data privacy controls, don’t use inputs for model training, and sit within a properly governed environment. The goal isn’t to slow your team down. It’s to make sure the speed gains from AI don’t come with hidden costs attached.
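As one illustration of what a technical control can look like, here’s a sketch of a redaction step that strips obvious identifiers before a prompt leaves your environment. The patterns are deliberately crude placeholders, and this isn’t how any particular vendor’s product works; real deployments use proper DLP tooling, not a few regexes.

```python
# Sketch: a thin guardrail that redacts obvious identifiers from a
# prompt before it is sent to an approved AI endpoint. Patterns are
# illustrative placeholders only.

import re

REDACTION_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),          # email addresses
    (re.compile(r"\b\d{2,4}[ -]?\d{3,4}[ -]?\d{3,4}\b"), "[PHONE]"),  # rough phone shapes
]

def redact(prompt: str) -> str:
    """Strip obvious identifiers before the prompt goes anywhere."""
    for pattern, placeholder in REDACTION_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Summarise the email from jane.doe@example.com, ph 0412 345 678"))
# -> Summarise the email from [EMAIL], ph [PHONE]
```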

The Conversation Most Business Owners Haven’t Had Yet

The law firm I mentioned at the start did eventually put a proper AI policy in place. It took a few weeks of work — auditing current usage, drafting guidelines, training the team, and implementing some technical controls.

The managing partner told me afterward that the process itself had been valuable, separate from the risk management piece. For the first time, the leadership team had a clear picture of how AI was actually being used across the firm — and a framework for thinking about where to lean into it further.

That’s the conversation most business owners haven’t had yet. Not whether to allow AI — that ship has already sailed. But how to allow it in a way that serves the business without exposing it. The businesses that work this out now will be ahead. The ones that don’t will find out the hard way.
