We Have Made This Exact Mistake Before. Every. Single. Time.
I need to tell you a story. Not because it is new, but because it is exactly the same bloody story I have been telling for four decades, and nobody ever listens until the damage is done.
In 1981, employees at Bank of America started expensing personal computers as office supplies. Management had not approved them. IT had not vetted them. Nobody had written a policy about them. Staff just went out, bought the things, plugged them in, and started working.
The response from the technology industry was predictable. Excitement first. Then concern. Then panic. Then, after the inevitable security incidents, belated policies and governance frameworks that should have existed before the first machine was unboxed.
That was 45 years ago. And we have learned absolutely nothing.
The Cycle That Nobody Breaks
I have watched this pattern play out five times now, each iteration faster and more damaging than the last.
The 1980s: personal computers. Staff bought them, connected them to networks, stored business data on floppy disks they carried home in their briefcases. Nobody thought about access controls because no policy required any. The industry spent the next decade catching up.
The 2000s: USB drives and portable storage. A two-quid stick from Tesco could hold an entire client database. People lost them on trains, left them in pub car parks, posted them in the wrong envelope. The MOD alone reported losing hundreds. Encryption policies arrived years after the first breach made the papers.
The 2010s: bring your own device. Smartphones went from novelty to necessity in about 18 months. Staff wanted to check work email on their personal phones. Companies saved money by not issuing hardware. By the time anyone wrote a BYOD policy, half the workforce was already accessing company data on devices that IT had never seen, could not manage, and could not wipe.
The late 2010s: cloud and SaaS. Departments started buying their own software. Marketing signed up for a project management tool. Sales adopted a CRM. Finance found a cheaper invoicing platform. None of them asked IT. None of them checked the data residency. None of them read the terms of service. Gartner estimates that 30 to 40 per cent of enterprise IT spending now happens outside official oversight.
2026: AI agents. And here we are again. Same pattern. Same excuses. Same cycle. Except this time the tool does not just store your data. It reads it, acts on it, and sends it wherever it is told to.
Every single time, the sequence is identical. A new technology appears. It is genuinely useful. Employees adopt it because it makes their working life easier. Security is not considered because the technology is new and the risks are not obvious. Policies do not exist because nobody anticipated the adoption speed. By the time governance catches up, the damage is already embedded in the organisation.
The only thing that changes is the speed. The PC cycle took a decade. BYOD took a few years. Cloud shadow IT took months. OpenClaw went from viral breakout to 135,000 exposed internet-facing instances in a matter of weeks.
Weeks.
OpenClaw Is Not the Problem
Let me be very clear about something before the vendor marketing machines start spinning. OpenClaw is not uniquely dangerous because it is a bad product built by irresponsible people. Peter Steinberger, its creator, is a successful developer who built something genuinely impressive. The 720,000 people who downloaded it did so because it is legitimately useful. It automates tedious tasks. It manages communications across platforms. It executes work that would otherwise eat hours of someone's day.
The problem is not the tool. The problem is us.
The problem is that 22 per cent of enterprise employees installed it without telling anyone in IT, according to Token Security. The problem is that 53 per cent of Noma Security's enterprise customers had staff who gave it privileged access over a single weekend. The problem is that when security researchers found 512 vulnerabilities in the codebase (eight of them critical), users had already handed it access to their email, their files, their messaging platforms, their credentials, and their command line.
OpenClaw did not break into anyone's systems. People opened the door, handed it the keys, and went to lunch.
That is not a technology failure. That is a governance failure. And it is the same governance failure we have been committing since someone at Bank of America first charged an Apple II to their expense account.
The Speed Problem
Here is what is different this time, and it is the thing that keeps me awake at night.
Every previous cycle gave us time. Not enough time, granted. We were always late. But the adoption curves of PCs, BYOD, and cloud services were measured in months and years. That gave the security industry enough runway to develop frameworks, publish guidance, build tools, and write policies before the majority of organisations were exposed.
AI agents have compressed that runway to nothing.
OpenClaw launched in November 2025. By late January 2026, it had over 100,000 GitHub stars. By early February, SecurityScorecard's STRIKE team was tracking over 135,000 exposed instances across 82 countries, with more than 15,000 vulnerable to remote code execution. Malicious plugins appeared on the marketplace within days of launch. Prompt injection attacks were demonstrated within hours of researchers getting access.
The security industry responded with an unprecedented wave of simultaneous advisories. CrowdStrike, Kaspersky, Palo Alto Networks, Cisco, Tenable, Bitdefender, Trend Micro, and Jamf all published warnings within the same fortnight. Gartner told enterprises to block it immediately.
But here is the uncomfortable truth: by the time those advisories were published, the adoption had already happened. The horse had not just bolted. It had cleared three counties and was making friends in the next time zone.
And OpenClaw will not be the last. It will not even be the most dangerous. It is the first wave of a new category of tool that will proliferate faster than any technology we have seen before. Because unlike a cloud SaaS product that requires a procurement decision, or a BYOD device that physically exists in someone's pocket, an AI agent is a piece of software that installs in minutes, requires no hardware, connects to everything, and starts acting immediately.
The next one will be slicker, better marketed, and harder to detect.
Why "Ban Everything" Does Not Work
I can already hear the objection forming. "Just ban all AI tools. Problem solved."
I have the same respect for that strategy as I have for telling teenagers not to use social media. It is technically correct and practically useless.
Here is why. The people who installed OpenClaw were not being malicious. They were not trying to steal data or compromise the business. They were trying to be more productive. They saw a tool that could automate their inbox, manage their calendar, handle routine messages, and free up time for work that actually matters. From their perspective, they were doing the company a favour.
If your security policy consists entirely of saying no to everything that might be useful, you will lose. Your staff will either ignore the policy, find workarounds, or leave for a competitor that lets them work with modern tools. Shadow IT did not emerge because employees are stupid. It emerged because official IT procurement is slow, risk-averse, and often out of touch with what people actually need to do their jobs.
The answer is not prohibition. The answer is governance.
You need a policy that says: "We actively support the use of AI tools that help you work better. Here is the process for getting one approved. Here is what we check before we say yes. Here is what is already approved and available for you to use. And here is why installing something without going through this process puts the entire business at risk."
That is not the department of no. That is the department of "yes, safely."
The Governance Gap Is a Leadership Failure
Let me say something that will make a few people uncomfortable.
If your employees are installing AI agents on company devices without your knowledge, the failure is not theirs. It is yours.
It means you have not provided them with an AI tool policy. It means you have not given them a channel to request new tools. It means you have not explained, in plain language, why unvetted software with access to company data is a business risk. And it means you have not offered them an approved alternative that does what they need.
We went through this exact conversation with BYOD. The organisations that tried to ban personal devices lost. The organisations that wrote sensible BYOD policies, deployed mobile device management, and gave staff a way to use their own hardware safely? They won. They got the productivity benefits without the unmanaged risk.
The same principle applies to AI tools. You are not going to stop your employees using AI. The genie is out of the bottle, and it is not going back in. What you can do is make sure the AI tools they use have been vetted, the access they grant is proportionate, and the risks are understood.
If you are a business owner reading this and you do not have an AI tool policy, you have a gap in your governance that is getting wider by the day. Every week you delay is another week where staff are making their own decisions about what software gets access to your business data.
What 40 Years of Mistakes Has Taught Me
I have been in this industry since before most of today's "thought leaders" were born. I have watched mainframes give way to PCs, PCs give way to the internet, the internet give way to cloud, and cloud give way to whatever we are calling this era. And in every single transition, the organisations that came through intact were not the ones with the biggest security budgets or the most sophisticated tools.
They were the ones that treated technology adoption as a governance issue, not just a technical one.
They asked three questions before anything new got plugged in, installed, or signed up for:
What data does this access? If the answer is "client information, financial records, or credentials," then it needs to go through a formal assessment. No exceptions.
Who controls it? If the answer is "a free open-source project maintained by one person and 350 contributors using AI agents to write code," then you need to think very carefully about your risk tolerance.
What happens if it goes wrong? If the answer is "an attacker gets access to everything the tool can access, which includes email, files, messaging, and system commands," then you need to be damn sure the security model is sound before it goes anywhere near your business.
Those three questions would have stopped every shadow IT disaster I have witnessed in four decades. They are not complicated. They do not require a CISO. They require a business owner who gives a toss about protecting what they have built.
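If it helps to make that concrete, here is a minimal sketch of those three questions as a pre-approval record. Everything in it (the ToolRequest record, the needs_formal_assessment rule, the example answers) is illustrative, not a standard; adapt the wording to your own business.

```python
# A minimal sketch: the three questions captured as a pre-approval
# record. Names (ToolRequest, needs_formal_assessment) are illustrative.
from dataclasses import dataclass

# Question 1 has a hard rule attached: these data types always
# trigger a formal assessment. No exceptions.
SENSITIVE_DATA = {"client information", "financial records", "credentials"}

@dataclass
class ToolRequest:
    tool_name: str
    data_accessed: set[str]   # Q1: what data does this access?
    controlled_by: str        # Q2: who controls it?
    failure_impact: str       # Q3: what happens if it goes wrong?

def needs_formal_assessment(req: ToolRequest) -> bool:
    """Q1 is decisive on its own; Q2 and Q3 shape the assessment itself."""
    return bool(req.data_accessed & SENSITIVE_DATA)

# Example: the kind of answers that should stop a tool at the gate.
req = ToolRequest(
    tool_name="some-ai-agent",
    data_accessed={"client information", "credentials"},
    controlled_by="open-source project, single maintainer",
    failure_impact="attacker reaches email, files, messaging, shell",
)
assert needs_formal_assessment(req)
```

It works just as well as a spreadsheet as it does as code. The point is the same: the answers get written down before the tool gets plugged in.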
The Next 12 Months
Here is what is going to happen. I am not guessing. I am reading the pattern.
More AI agent tools will appear. Some will be better built than OpenClaw. Some will be worse. All of them will promise to make your business more productive, and most of them will deliver on that promise. The temptation to adopt them will be enormous.
The UK government will issue guidance. It will be measured, considered, and about 18 months too late. The NCSC will publish a framework that is sensible but that most small businesses will never read.
Insurance providers will start adding AI tool clauses to cyber insurance policies. If your claim stems from an unmanaged AI agent that was never approved by the business, do not be surprised when the insurer points to the exclusion clause and walks away.
Larger companies will start requiring their suppliers to demonstrate AI governance. If you are in a supply chain that handles any form of sensitive data, the question "do you have an AI tool policy?" is coming. Probably within the year.
And some businesses will get breached because an employee installed an AI agent that nobody knew about, which processed a malicious email, which executed instructions it should never have followed, which exfiltrated data that should never have been accessible in the first place. The attack chain will be new. The underlying cause will be as old as banking.
What I Want You to Do This Week
I do not care if you read this and forget half of it. I care if you read this and do one thing.
Write an AI tool policy. One page. It does not need to be perfect. It needs to exist. Something like:
"No AI agents, assistants, plugins, or automation tools may be installed on company devices or connected to company accounts without written approval from [name]. If you have already installed an AI tool, please let us know. This is not a disciplinary issue. We want to help you use AI safely, not stop you using it entirely."
Print it. Email it. Put it on the noticeboard next to the fire evacuation procedures. Talk to your staff about it.
If you want to go further, apply Simon Willison's Lethal Trifecta test to every AI tool in your business. Three questions: does it access private data, does it process untrusted content, and can it communicate externally? If the answer to all three is yes, that tool needs formal assessment before anyone uses it for work.
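The test really is that simple: the conjunction of those three answers. A minimal sketch below; the three questions are Simon Willison's, but the function name and structure are my own restatement, not an official implementation.

```python
# A minimal sketch of the Lethal Trifecta test. The three questions
# are Simon Willison's; this tiny function is just my restatement.
def lethal_trifecta(accesses_private_data: bool,
                    processes_untrusted_content: bool,
                    communicates_externally: bool) -> bool:
    """All three together means formal assessment before work use."""
    return (accesses_private_data
            and processes_untrusted_content
            and communicates_externally)

# An inbox-managing agent: reads private mail (private data), handles
# email from strangers (untrusted content), and can send messages out
# (external communication). All three: assess it before anyone uses it.
assert lethal_trifecta(True, True, True)

# A local spell-checker that never leaves the machine fails the test.
assert not lethal_trifecta(False, True, False)
```

Note that any two of the three can be tolerable. It is the combination of all three that turns a useful assistant into an exfiltration channel.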
If you want to go even further, talk to whoever manages your IT and ask them to audit what is currently running. Check for OpenClaw specifically (port 18789, .clawdbot directories in user home folders), but also ask the broader question: what software is on our machines that we did not put there?
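If whoever manages your IT wants a starting point, here is a minimal sketch of that spot check. It assumes a Unix-style layout of home folders (users_root defaults to /Users on macOS; adjust for /home on Linux or C:/Users on Windows); the port number and directory name come straight from the paragraph above.

```python
# A minimal sketch of the OpenClaw spot check described above:
# is anything listening on port 18789, and do any user home folders
# contain a .clawdbot directory? A quick indicator check, not a
# substitute for a proper software inventory.
import socket
from pathlib import Path

def port_18789_listening(host: str = "127.0.0.1") -> bool:
    # connect_ex returns 0 if something accepts the connection.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1)
        return sock.connect_ex((host, 18789)) == 0

def clawdbot_dirs(users_root: str = "/Users") -> list[Path]:
    # users_root is an assumption: /Users on macOS, /home on Linux.
    root = Path(users_root)
    if not root.is_dir():
        return []
    return [home / ".clawdbot" for home in root.iterdir()
            if (home / ".clawdbot").is_dir()]

if __name__ == "__main__":
    print("Port 18789 listening:", port_18789_listening())
    print(".clawdbot directories:", clawdbot_dirs() or "none found")
```

Run it per machine, or fold the same two checks into whatever endpoint management you already have. Either way, follow it with the broader inventory question, because the next tool will use a different port and a different folder.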
I have watched this cycle five times. Each time, the businesses that survived were the ones that moved before the damage. Not the ones with the best technology. The ones with the best governance.
Be that business.
How to Turn This Into Competitive Advantage
You are ahead of the curve simply by reading this. Most UK small businesses have no AI tool policy whatsoever. By having one, you are already differentiated. Here is how to make it count.
Include your AI governance in tender responses. When a prospective client asks about your security posture, mention your AI tool policy alongside your Cyber Essentials certification and data handling procedures. It demonstrates awareness of emerging risks that most competitors have not even thought about.
Use it as a supply chain conversation. The next time you are talking to a key client, ask them: "Do you have an AI tool policy for your suppliers?" If they do not, you have just positioned yourself as more security-conscious than they are. If they do, you can demonstrate compliance immediately while their other suppliers scramble.
Turn the policy into a client-facing asset. A one-page "How We Handle AI Tools" document on your website or in your proposals signals professionalism and trustworthiness. In sectors like legal, financial, and healthcare, this is becoming table stakes.
Recruit better talent. Professionals who care about doing good work also care about working with modern tools safely. A policy that says "we support AI adoption through a proper governance process" is more attractive than either "we ban everything" or "we have no policy and hope for the best."
How to Sell This to Your Board
Opening line: "22 per cent of enterprise employees have already installed AI agents without IT approval. We need a one-page policy to make sure we are not exposed."
The cost argument: Writing an AI tool policy costs nothing. Not having one could cost the business its data, its reputation, and its insurance coverage in a single incident.
The speed argument: Previous technology cycles (BYOD, cloud) gave us years to respond. AI agent adoption is measured in weeks. Waiting for government guidance or industry standards is waiting too long.
The insurance argument: Cyber insurance exclusions for unapproved software are tightening. An AI agent breach traced to shadow IT gives insurers grounds to deny the claim.
The regulatory argument: Under UK GDPR, the ICO does not distinguish between a breach caused by approved software and one caused by something an employee installed on their own initiative. The fine lands on the business either way.
The competitive argument: Your competitors either do not have this policy yet (giving you a head start) or they do (meaning you are falling behind). Either way, the status quo is not an option.
Board resolution to propose: "The board approves the implementation of an AI tool governance policy requiring written approval for any AI agents, assistants, or automation tools connected to company devices or accounts, effective immediately."
One sentence. One vote. Done.
Related Posts
Stolen Credentials Are the New Normal: Why Your Authentication Is Already Broken - The credential theft pipeline that OpenClaw accelerates
Your Cloud Migration Just Handed Hackers the Keys to Everything You Own - The previous iteration of this exact shadow IT pattern
Cyber Insurance Claims Are Being Denied: And It's Your Fault - What happens when your claim traces back to unapproved software
You've Got a Flood Plan, But No Cyber Plan? Here's Why That's a Business Killer - Governance gaps that mirror the AI tool policy gap