Your AI Chatbot Just Became a Backdoor: What UK Small Businesses Need to Know About Promptware

Pull up a chair. This one is going to sting a bit.

If your business has added an AI chatbot to its website, plugged an AI assistant into your email, or let your developers use a coding copilot, you need to understand what I’m about to tell you. Because the threat model beneath your feet just shifted, and most of the security industry hasn’t caught up yet.

A research paper published in January 2026 (and revised this month) fundamentally changes how we should think about AI security. It is called “The Promptware Kill Chain,” and it was co-authored by Oleg Brodt (Ben-Gurion University), Elad Feldman (Tel Aviv University), Bruce Schneier (Harvard Kennedy School and the University of Toronto’s Munk School), and Ben Nassi (Tel Aviv University). If Schneier’s name doesn’t ring a bell, he’s one of the most respected voices in information security. Full stop. When he puts his name on a paper, the smart people pay attention.

The paper’s central argument is simple and alarming: prompt injection, the technique where someone tricks an AI into doing something it shouldn’t, is no longer a one-off trick. It has evolved into a fully formed malware delivery system. The researchers call this new class of threat promptware.

What Exactly Is Promptware?

You’ve probably heard of the traditional “kill chain” in cybersecurity. It’s a concept borrowed from military planning by Lockheed Martin back in 2011, and it describes the stages an attacker moves through to compromise a target. It has been a staple of enterprise threat modelling for over a decade.

The researchers have now mapped an equivalent kill chain for AI systems, with seven distinct stages:

  1. Initial Access: the attacker injects malicious instructions into an AI system’s input. This can be text, images, or even audio files. A poisoned email, a compromised webpage, a dodgy calendar invitation. Any of these can serve as the way in.

  2. Privilege Escalation: the AI’s safety controls get bypassed through jailbreaking techniques. Think of it as picking the lock on the AI’s rule book.

  3. Reconnaissance: the malicious prompt probes its host environment. What data can the AI access? What tools does it have? What permissions has it been granted? The prompt itself does the scouting.

  4. Persistence: the attack survives beyond a single session. This is where it gets properly nasty. Attackers can poison an AI’s long-term memory, so the malicious instructions get loaded into every future conversation. Every. Single. One.

  5. Command and Control: the compromised AI becomes remotely operated. An attacker can update the malicious payload and change the AI’s behaviour over time, turning it from a static infection into a live, controllable trojan.

  6. Lateral Movement: the attack spreads to other users, other systems, other applications. Self-replicating AI worms that embed copies of themselves in outgoing emails have already been demonstrated in research.

  7. Actions on Objective: the endgame. Data theft, financial fraud, ransomware deployment, remote code execution, or, in one documented case, activating a victim’s camera through a Zoom call triggered by a poisoned calendar invitation.

Read that list again. Now consider that the researchers analysed thirty-six documented studies and real-world incidents and found that at least twenty-one already crossed four or more of those stages. In live, production systems. Affecting real businesses and real users.

The UK’s Own NCSC Agrees: This Is Being Dangerously Underestimated

Here’s something that should sharpen the attention of every UK business owner reading this.

The Promptware Kill Chain paper explicitly cites the UK’s National Cyber Security Centre, which published a blog in December 2025 titled “Prompt injection is not SQL injection (it may be worse).” In it, the NCSC argued that comparing prompt injection to SQL injection is dangerous, and that the technique needs to be approached fundamentally differently to mitigate the risks.

That matters. The NCSC does not use language like that casually. They’re telling the industry, point blank, that the current way we categorise and defend against these attacks is inadequate.

And the evidence backs them up. The paper’s own analysis of existing defences (Table IV for anyone who wants to read the original) reveals a troubling gap. The security industry has poured enormous effort into defending against the first two stages of the kill chain, prompt injection and jailbreaking, while stages like lateral movement, command and control, and reconnaissance have almost no dedicated defences at all. It’s the equivalent of fitting a brilliant front door lock while leaving every window in the building wide open.

Real Attacks, Real Damage, Real Cheap

Let me give you three examples from the paper that should keep you awake tonight.

The £4 Salesforce Attack. The ForcedLeak attack against Salesforce Agentforce showed that an attacker could buy an expired whitelisted domain for roughly $5 (about £4) and use it to exfiltrate an entire CRM database full of customer data. Four quid. That’s less than a flat white at most London coffee shops. The vulnerability, rated CVSS 9.4 (critical), was disclosed by Noma Security and subsequently patched by Salesforce, but it demonstrated something fundamental: autonomous AI agents introduce attack surfaces that traditional security controls simply do not cover.

The Calendar Invitation That Became Spyware. The “Invitation Is All You Need” research by Nassi, Cohen, and Yair demonstrated fourteen attacks across five threat classes against Google’s Gemini-powered assistants. One attack used a poisoned Google Calendar invitation to hijack a victim’s phone. When the victim interacted with Gemini about their calendar, common phrases like “thanks” triggered the compromised AI to launch Zoom and start streaming video of the victim. A calendar invitation. That is the attack vector. Seventy-three per cent of the threats analysed were rated High to Critical risk.

The AI Worm. The Morris II worm demonstrated self-replicating promptware that embedded copies of itself into outgoing emails. Every recipient running a similar AI email assistant got infected automatically. One successful injection, unlimited spread. We haven’t seen self-replicating worm behaviour at this scale since the early 2000s, and here it is again, reborn in AI.

Breaking the Lethal Trifecta

The paper references security researcher Simon Willison’s concept of the “Lethal Trifecta,” published in June 2025. Willison identified that when an AI tool combines three specific capabilities, you have created the perfect conditions for data theft:

  1. Access to sensitive data: the AI can read your emails, documents, or databases.

  2. Exposure to untrusted content: the AI processes external inputs like emails, web pages, or shared documents.

  3. External communication ability: the AI can send data outside your organisation, whether through API calls, rendered images, or generated links.

    If your AI deployment combines all three, you are sitting on a ticking time bomb. Willison himself has stated that “we still don’t know how to 100% reliably prevent this from happening.” The only guaranteed mitigation is to break at least one of those three connections. Ideally two.

This is not theoretical hand-wringing. Between January 7th and 15th, 2026, security researchers publicly disclosed critical vulnerabilities in four major AI-powered productivity tools, all exploiting this exact pattern. These were production exploits against tools trusted by major organisations.

What Your Business Should Do About This Right Now

Right, practical steps. Because I have spent four decades in this industry and I have zero patience for people who point at fires without handing out extinguishers.

Audit every AI tool touching your business. Every chatbot, every AI plugin, every coding assistant, every “helpful” AI summariser someone quietly installed in Slack. List them all. You cannot protect what you don’t know exists.

Strip permissions back to the bare minimum. This is the principle of least privilege, and it’s older than I am. Your AI chatbot does not need access to your customer database. It does not need email sending capability. It does not need internet browsing access. Give each AI tool only the permissions it strictly needs to do its specific job, and nothing more.

Treat all AI inputs as untrusted. If you’ve built web applications, you know the golden rule: never trust user input. The same rule now applies to anything processed by an AI system. Documents, emails, web pages, images. All of it can carry a payload.

Break the Lethal Trifecta. If your AI tool can access sensitive data AND process untrusted content AND communicate externally, you have created the perfect conditions for data theft. Break at least one of those three connections.

Monitor AI behaviour like you monitor network traffic. Is your chatbot making requests it has never made before? Accessing data outside its normal scope? Generating outputs that look different from its usual patterns? These are early warning signs of compromise.

Ask your AI vendors direct questions. What protections do you have against prompt injection? What about memory poisoning? What about lateral movement between users? If they stare blankly at you, that tells you everything. If they can’t answer clearly, think very carefully about whether that tool should remain connected to your business data.

Review AI memory features. If your AI tools have a “memory” or “saved information” feature, check what’s stored in there. The ChatGPT ZombAI attack demonstrated that a compromised webpage could inject malicious instructions directly into ChatGPT’s long-term memory, turning it into a remotely controlled trojan. Regular memory audits are now a security hygiene requirement.

How to Turn This Into Competitive Advantage

Here’s where the smart business owners separate themselves from the pack. While your competitors are blindly adopting AI tools without a second thought about security, you can position your organisation as the trustworthy partner that actually understands AI risk.

Lead with AI governance in your proposals. When pitching to larger clients, include your AI security policy as part of your tender documentation. Show that you have audited your AI tools, documented their permissions, and implemented the principle of least privilege. Your competitors won’t have done this because they haven’t even thought about it yet.

Use AI security as a client retention tool. If you provide services to other businesses, particularly if you handle their data, demonstrating that your AI tools are secured against promptware attacks builds trust that no marketing budget can buy. Create a simple one-page “AI Security Posture” document that outlines how you protect client data from AI-related threats.

Get ahead of incoming regulation. The UK Cyber Security and Resilience Bill is already working its way through Parliament, and AI security requirements will follow. Businesses that proactively implement AI governance frameworks now will have a significant head start when compliance requirements land.

Position your business as the secure alternative. In every industry, there will be businesses that suffer AI-related data breaches in the next twelve to eighteen months. When prospects come looking for a more secure alternative, be the company that already has its house in order.

How to Sell This to Your Board

If you need to get budget approval or executive buy-in for an AI security review, here are your talking points:

The financial argument: The ForcedLeak attack demonstrated that a £4 investment could compromise an entire CRM database. The average cost of a data breach for UK SMBs is £3,400 according to the 2025 Government Cyber Security Breaches Survey, but for businesses with customer data obligations under UK GDPR, ICO fines can reach £17.5 million or 4% of global annual turnover. An AI security audit costs a fraction of that.

The liability argument: Directors have personal legal obligations under the Data Protection Act 2018. If your business suffers a data breach because an AI tool was deployed without adequate security assessment, that’s a governance failure, not just a technical one. The ICO has shown it will pursue organisations that fail to conduct proper risk assessments before deploying new technology.

The competitive argument: Your clients and prospects are starting to ask about AI security. Having a documented AI security policy and a completed AI tool audit is a competitive differentiator that costs relatively little to implement but demonstrates professional maturity.

The practical argument: An AI tool inventory and permission audit can typically be completed in two to three days for a small business. Implementing least privilege controls takes another week. For an investment of perhaps £2,000 to £5,000 in consultancy time, you eliminate the most dangerous attack vectors identified in the Promptware Kill Chain research.

The Bigger Picture

I’ve spent four decades watching attack methods evolve. I watched viruses move from floppy disks to email attachments. I watched ransomware go from novelty to business killer. I watched social engineering graduate from phone calls to deepfakes. Every single time, the pattern is identical: something starts small, the industry dismisses it, and by the time people wake up, the damage is already catastrophic.

Promptware is following that exact trajectory. The difference this time? Speed. AI tools are being adopted faster than any technology I’ve seen in 40 years. Small businesses are plugging them into critical operations without a second thought about the attack surface they’re creating. And the attackers are already three stages ahead.

The kill chain coverage tells the story. In 2023, documented attacks typically involved only two to three stages. By 2025 and 2026, fifteen out of twenty-one documented incidents demonstrated four or more stages. The attacks are getting more sophisticated, faster.

Good security doesn’t have to be expensive, but stupidity always is. Right now, the most reckless thing any business can do is adopt AI tools while assuming the old threat models still apply. They don’t. The ground has moved.

What This Means for Your Business

This week: Create a complete inventory of every AI tool connected to your business systems. Include chatbots, plugins, browser extensions, and coding assistants. If you don’t know what’s connected, you can’t protect it.

This month: Review the permissions of every AI tool on your list. Apply the principle of least privilege ruthlessly. If the tool doesn’t need access to customer data, remove it. If it doesn’t need email capability, disable it.

This quarter: Assess each tool against the Lethal Trifecta. If any tool combines access to sensitive data, exposure to untrusted content, and external communication capability, either break one of those connections or replace the tool entirely.

Ongoing: Establish regular AI memory audits and behavioural monitoring. Treat your AI tools with the same suspicion you would apply to any third-party software with access to your crown jewels.

Board level: Add AI security to your next board agenda. Document your AI governance position and ensure directors understand their personal liability for data protection failures.


Listen to The Small Business Cyber Security Guy podcast for weekly practical cybersecurity advice. New episodes every Monday. Search for it wherever you get your podcasts, or visit thesmallbusinesscybersecurityguy.co.uk

Noel Bradford

Noel Bradford – Head of Technology at Equate Group, Professional Bullshit Detector, and Full-Time IT Cynic

As Head of Technology at Equate Group, my job description is technically “keeping the lights on,” but in reality, it’s more like “stopping people from setting their own house on fire.” With over 40 years in tech, I’ve seen every IT horror story imaginable—most of them self-inflicted by people who think cybersecurity is just installing antivirus and praying to Saint Norton.

I specialise in cybersecurity for UK businesses, which usually means explaining the difference between ‘MFA’ and ‘WTF’ to directors who still write their passwords on Post-it notes. On Tuesdays, I also help further education colleges navigate Cyber Essentials certification, a process so unnecessarily painful it makes root canal surgery look fun.

My natural habitat? Server rooms held together with zip ties and misplaced optimism, where every cable run is a “temporary fix” from 2012. My mortal enemies? Unmanaged switches, backups that only exist in someone’s imagination, and users who think clicking “Enable Macros” is just fine because it makes the spreadsheet work.

I’m blunt, sarcastic, and genuinely allergic to bullshit. If you want gentle hand-holding and reassuring corporate waffle, you’re in the wrong place. If you want someone who’ll fix your IT, tell you exactly why it broke, and throw in some unsolicited life advice, I’m your man.

Technology isn’t hard. People make it hard. And they make me drink.

https://noelbradford.com
Next
Next

Six Zero-Days, One Tuesday, and Your Approval Process Is Still Broken