You Have Hired an Intern With No Sense of Discretion and Handed It the Filing Cabinet: The AI Governance Disaster Hiding in Plain Sight

Opinion

I have seen this pattern before. Technology arrives. Businesses adopt it because it is useful, affordable, and everyone else is doing it. Nobody asks the security question until something goes wrong. Then there is a scramble, a post-incident review, and a rueful conversation about how we should have thought about this earlier.

Cloud migration. Bring your own device. Remote working tools during the pandemic. Microsoft Teams rollouts with no tenant configuration. Every single time, the adoption curve outran the governance conversation.

Now it is AI’s turn.

The Number

The Cyber Security Breaches Survey 2025/2026 puts a figure on it. Thirty-one percent of UK businesses are using AI, in the process of adopting it, or actively considering it. Of that group, only 24% have any process or practice in place to manage the cyber security risks from AI technology.

Run the arithmetic: 100% minus 24% means 76%, roughly three quarters, of UK businesses that are engaging with AI are doing so with no house rules. No guidance on what staff can paste into these tools. No checks on what data is being exposed. No assessment of whether the cheerful AI assistant now has access to customer records, financial data, contracts, or HR files.

This is, by some distance, the most 2026 number in the entire survey. It captures the exact dynamic that runs through every other finding: curiosity moves faster than control. Enthusiasm outruns governance. Something useful gets adopted before anyone asks “what could go wrong?”

What Is Actually Happening

In practical terms, here is what “AI adoption without governance” looks like in a twenty-person business.

Someone in accounts is using ChatGPT to clean up spreadsheet data. The spreadsheet contains supplier payment details and bank account numbers. Those details are now in a third-party system the business has never evaluated.

Someone in the sales team is using an AI tool to draft tender responses. The tender documents contain pricing structures, margins, and confidential client requirements. Those documents are now inputs to a model whose data handling practices nobody has reviewed.

Someone in HR is using AI to summarise job applications. The applications contain names, addresses, employment history, and sometimes health information. That is special category data under the UK GDPR, being processed by a tool that nobody in the business has assessed for data protection compliance.

None of these people are acting maliciously. They are being efficient. They are using tools that are freely available, genuinely useful, and enthusiastically promoted by the technology industry. The problem is not the technology. The problem is the absence of boundaries.

The Filing Cabinet Metaphor

On the podcast this week, we described this as hiring an intern and handing them the filing cabinet. It landed because it is accurate. An AI tool is eager, fast, confident, and completely indifferent to confidentiality unless you build confidentiality into the rules.

A responsible employer would not give a new intern unsupervised access to every customer file, every contract, and every financial record on their first day. They would establish boundaries: these files are accessible, those are not; this information can be shared, that cannot; if you are unsure, ask.

AI tools need the same treatment. Not because they are malicious, but because they process whatever they are given without judgment about whether that information should have been shared. The responsibility for setting boundaries sits entirely with the business.

The GDPR Dimension Nobody Is Discussing

The 24% figure becomes considerably more alarming when you consider it through the lens of data protection regulation.

Under the UK GDPR, businesses are required to implement appropriate technical and organisational measures to protect personal data. When an employee pastes customer personal data into a public AI tool, that constitutes processing. The business is the data controller. It is responsible for ensuring that the processing is lawful, that the data is adequately protected, and that any third party involved in processing meets the required standards.

The survey separately reports that 14% of businesses and 22% of charities said they held personal data that was not protected by techniques such as anonymisation or encryption. But this figure covers known, acknowledged unprotected data. It does not capture data that is being routinely fed into AI tools without anyone classifying it as a processing activity.

If the ICO were to investigate a data breach involving personal data exposed through an AI tool, the first question would be whether the organisation had appropriate policies governing AI use. For 76% of businesses currently adopting AI, the answer would be no. That is not a defensible position.

Why “We Will Get to It” Does Not Work

The survey documents a recurring pattern across multiple domains: businesses know something matters but fail to convert knowledge into documented practice. This is the same dynamic behind declining risk assessments, lapsing continuity plans, and static training percentages.

AI governance is following the same trajectory. Businesses recognise that AI creates risks. They intend to address those risks. They keep pushing the governance work to next quarter because the current quarter is too busy.

The difference is speed. Cloud migration took years. BYOD crept in gradually. AI adoption is happening much faster, because the tools are free, require no procurement process, no installation, and no IT involvement. An employee can sign up for an AI service during a lunch break and be processing sensitive data before the afternoon. By the time the business gets round to setting rules, the data has already left.

What Three Rules on a Page Looks Like

I am not calling for a comprehensive AI governance framework from a twenty-person business. I am calling for three sentences on a piece of paper that every employee has seen.

Rule one: Do not paste customer personal data into any AI tool without explicit approval from management. This includes names, addresses, email addresses, phone numbers, health information, and financial details.

Rule two: Do not paste contracts, financial documents, internal strategy papers, board minutes, or confidential correspondence into any AI tool that the business has not formally approved and assessed.

Rule three: If AI generates something that will be sent to a client, submitted to a regulator, or published externally, a human being must review and approve it before it goes out.

That is not a governance programme. It is a starting position. And it is, as we said on the podcast, worlds better than vibes.

Print it. Pin it next to the kitchen kettle. Include it in the next team email. Add it to the staff handbook. Reference it during onboarding. Three rules. One page. Done.
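For businesses that want a technical backstop behind rule one, here is one possible shape of a pre-paste check, sketched in Python. It is illustrative only: the patterns cover a handful of obvious identifiers (email addresses, UK phone numbers, sort codes, long digit runs), and a real deployment would lean on a proper data loss prevention tool rather than four regular expressions.

```python
import re

# Illustrative patterns only. Real coverage (names, addresses, health
# data) needs a proper data loss prevention tool, not four regexes.
PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "UK phone number": re.compile(r"(?:\+44\s?|\b0)\d{4}\s?\d{6}\b"),
    "UK sort code": re.compile(r"\b\d{2}-\d{2}-\d{2}\b"),
    "long digit run (account/card?)": re.compile(r"\b\d{8,16}\b"),
}

def check_before_paste(text: str) -> list[str]:
    """Return the labels of any rule-one patterns found in `text`."""
    return [label for label, rx in PATTERNS.items() if rx.search(text)]

if __name__ == "__main__":
    draft = "Pay the supplier at sort code 20-45-45; queries to jo@example.com"
    hits = check_before_paste(draft)
    if hits:
        print("Rule one check failed:", ", ".join(hits))
    else:
        print("No obvious personal data found. Judgment still applies.")
```

The point is not the regexes. The point is that rule one can be made checkable at all, which turns a poster on the wall into something a workflow can enforce.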

The Approved Tools Question

Once the basic rules are in place, the next practical step is deciding which AI tools the business formally approves for use.

This does not require an exhaustive evaluation process. It requires answering five questions for each tool.

  • Where does the data go when it is entered?
  • Is it stored, and for how long?
  • Does the provider use input data to train their models?
  • Does the provider’s data processing agreement meet UK GDPR requirements?
  • What happens to the data if the business stops using the tool?
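One way to keep those answers consistent is to record them in a fixed structure, one record per tool. The Python sketch below is a hypothetical illustration: the field names, the approval gate, and the example tool are all assumptions for demonstration, not a formal standard or a compliance determination.

```python
from dataclasses import dataclass

@dataclass
class AIToolAssessment:
    """Answers to the five questions, one record per tool.
    Field names are illustrative, not drawn from any standard."""
    tool: str
    data_destination: str      # where the data goes when entered
    retention: str             # is it stored, and for how long?
    trains_on_inputs: bool     # does the provider train on input data?
    dpa_meets_uk_gdpr: bool    # does the DPA meet UK GDPR requirements?
    exit_handling: str         # what happens to data if use stops?

    def approvable(self) -> bool:
        # A simple gate, as an assumption: no training on inputs and a
        # UK GDPR-adequate data processing agreement. Adjust to taste.
        return self.dpa_meets_uk_gdpr and not self.trains_on_inputs

# A made-up tool that fails the gate: trains on inputs, no adequate DPA.
example = AIToolAssessment(
    tool="ExampleChat (hypothetical)",
    data_destination="provider's overseas-hosted cloud",
    retention="prompts stored for 30 days",
    trains_on_inputs=True,
    dpa_meets_uk_gdpr=False,
    exit_handling="unclear from published terms",
)
print(example.tool, "approvable:", example.approvable())  # -> False
```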

For most small businesses, the simplest approach is to approve one or two AI tools that the business has evaluated and restrict use to those tools only. Microsoft 365 Copilot, for businesses already on the platform, processes data within the existing Microsoft tenant and is covered by existing data processing agreements. Google Gemini within Google Workspace operates similarly. Using these in-platform tools reduces the data exposure risk compared with staff signing up for arbitrary external services.

The Pattern We Keep Repeating

Cloud migration happened because it was cheaper and more convenient than on-premises servers. Businesses moved first and secured later. The result was a wave of misconfigurations, exposed storage buckets, and credential theft through cloud platforms.

BYOD happened because employees wanted to use their own phones and laptops. Businesses allowed it because saying no was impractical. The result was business data on unmanaged devices with no encryption, no remote wipe, and no separation between personal and work applications.

AI is following the same curve. The technology is useful. Adoption is organic. Governance is deferred. The only question is whether this time, enough businesses will set the boundaries before the first wave of data exposure incidents forces them to.

The survey suggests not. Seventy-six percent of AI-adopting businesses have no security practices around it. That is the gap. And it is, right now, one of the most actionable risks any small business can close.

How to Turn This Into a Competitive Advantage

If 76% of AI-adopting businesses have no governance, having even basic rules in place makes you an outlier in the best sense. During procurement conversations, being able to say “we have a documented AI acceptable use policy, we use only approved tools, and all AI-generated outputs are reviewed before external release” is a statement of maturity that most competitors cannot match.

For businesses handling client data, this is particularly powerful. A law firm, accountancy practice, or financial services business that can demonstrate controlled AI use positions itself as responsible and trustworthy at a time when client confidence in data handling is fragile.

How to Sell This to Your Board

The regulatory risk is concrete. UK GDPR requires appropriate measures for data processing. If staff are pasting personal data into unassessed AI tools, the business has a compliance gap that the ICO could examine.

The implementation cost is negligible. Three rules on a page. An email to staff. A line item in the acceptable use policy. This is not a technology project; it is a management decision.

The reputational risk is growing. AI data exposure incidents are beginning to appear in the media. Being the business that loses client data through an unapproved AI tool is a story nobody wants attached to their name.

Clients will start asking. If supply chain security reviews are increasing, and they are, AI governance will feature in those reviews within the next twelve months. Having the answer ready positions the business ahead of the question.

What This Means for Your Business

  1. Set three basic AI rules this week. No customer data. No confidential documents. Human review of external outputs. Communicate them to every member of staff.

  2. Decide which AI tools are approved. Start with in-platform options such as Microsoft 365 Copilot or Google Gemini within Workspace, where data handling is covered by existing agreements.

  3. Add AI acceptable use to your staff handbook or existing acceptable use policy. This does not need to be a separate document. A paragraph within the existing policy is sufficient.

  4. Check what your team is already using. Ask, without blame. The goal is to understand current practice so you can set sensible boundaries, not to punish people for being efficient. If the business keeps network logs, they can help; see the sketch after this list.

  5. Review in six months. AI tools and capabilities are changing rapidly. The rules you set today may need updating. Put a review date in the calendar.
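For step four, businesses that keep DNS or proxy logs can get a quick, blame-free picture of which AI services are already in use. The sketch below assumes a hypothetical CSV export with timestamp, client, and domain columns; the domain list and file name are illustrative, not authoritative.

```python
import csv

# Hypothetical input: a DNS or proxy log exported as CSV with columns
# timestamp, client, domain. Extend the set with services your sector uses.
AI_DOMAINS = {
    "chatgpt.com", "chat.openai.com", "claude.ai",
    "gemini.google.com", "copilot.microsoft.com", "perplexity.ai",
}

def find_ai_usage(log_path: str) -> dict[str, set[str]]:
    """Map each AI domain seen in the log to the clients that queried it."""
    usage: dict[str, set[str]] = {}
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            if domain in AI_DOMAINS:
                usage.setdefault(domain, set()).add(row["client"])
    return usage

if __name__ == "__main__":
    for domain, clients in sorted(find_ai_usage("dns_log.csv").items()):
        print(f"{domain}: used by {len(clients)} device(s)")
```

The output is a conversation starter, not an enforcement mechanism: it tells you which tools people already find useful, which is exactly the list to start the approval exercise from.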

Sources

  • DSIT / Home Office: Cyber Security Breaches Survey 2025/2026
  • ICO: Security (GDPR Guidance)
  • ICO: Artificial Intelligence and Data Protection
  • NCSC: Small Business Guide to Cyber Security
  • NCSC: Thinking About the Security of AI Systems
  • GOV.UK: Cyber Governance Code of Practice
  • GOV.UK: AI Regulation: A Pro-Innovation Approach
  • Microsoft: Microsoft 365 Copilot Data Privacy

Filed under

  • smb-security
  • uk-business
  • business-risk
  • compliance-failure
  • executive-security
  • vendor-risk