Suspect a Breach? Act Now: A Practical UK SMB Playbook

The First Hour Decides Everything

Here is the uncomfortable truth about data breaches: most of the real damage does not happen during the initial compromise. It happens in the scramble afterwards. Someone panics and wipes a server. Someone else emails the whole company from an account that is already compromised. A well-meaning manager posts on social media before anyone understands what happened. The first hour of your response will largely determine whether this becomes a bad day you recover from or a business-ending week you do not.

Before proceeding, it is important to clarify what we mean by a breach. You do not need to see a ransom note or catch a hacker in the act to justify immediate action. A breach exists on a spectrum, and even the suspicion of one is enough to initiate your response. For example, if staff receive unexpected MFA prompts they did not request, that is a trigger. If someone is suddenly locked out of Microsoft 365 or Google Workspace, that is also a trigger. Other warning signs include unfamiliar inbox rules or forwarding addresses, unexplained changes to invoice details, supplier emails that seem just a bit unusual, alerts from endpoint security tools, the appearance of unknown admin accounts, backups failing without warning, or files beginning to encrypt. Any of these situations should prompt you to act.

The principle that should guide everything from this point forward is simple: treat it like a fire and a crime scene at the same time. Your job is to stop bleeding without destroying the evidence. The rest of this playbook will walk you through exactly how to do that, starting with the first fifteen minutes.

The 15-Minute Triage: Stop, Breathe, Start the Timer

So you have a trigger. Something looks wrong. The urge to start clicking, resetting, and fixing will be overwhelming. Resist it. Before you touch a single system, you need to establish three things: control, documentation, and safe communications.

Name an incident lead. This is one person who coordinates decisions, keeps a running log, and acts as the single point of truth. It does not have to be your most technical person. It has to be someone calm and organised.

Start an incident log immediately. Timestamp everything: what was observed, who observed it, what actions were taken, which accounts were touched, and which systems were changed. This log is not optional. Your MSP, your insurer, and potentially the ICO will all want to see it later.
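
If you want something slightly more structured than a notebook, even a tiny script can keep that log consistent. The sketch below is illustrative only (a shared spreadsheet or paper notebook works just as well): it appends timestamped entries to an append-only file on a machine you trust is clean. The filename and field names are placeholders.

```python
# incident_log.py - minimal append-only incident log (illustrative sketch).
# Run it on a machine you trust is clean; a spreadsheet or notebook works too.
import json
import sys
from datetime import datetime, timezone

LOG_FILE = "incident_log.jsonl"  # placeholder path; keep it off affected systems

def log_entry(observer: str, observation: str, action_taken: str = "") -> None:
    """Append one timestamped entry; never edit or delete earlier lines."""
    entry = {
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "observer": observer,
        "observation": observation,
        "action_taken": action_taken,
    }
    with open(LOG_FILE, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

if __name__ == "__main__":
    # Usage: python incident_log.py "J. Smith" "Unexpected MFA prompt on finance account" "Account disabled"
    observer, observation = sys.argv[1], sys.argv[2]
    action = sys.argv[3] if len(sys.argv) > 3 else ""
    log_entry(observer, observation, action)
```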

Choose out-of-band communications. If you suspect your email has been compromised, do not coordinate your response through it. Switch to phone calls, a separate messaging platform, or any channel you trust has not been touched. This sounds obvious, but under pressure people default to the tools they know, even when those tools are the problem.

With those three things in place, you are ready to start containing the incident.

Containment: Stop the Bleeding Without Destroying the Scene

Now that you have a lead, a log, and a safe way to talk, it is time to start isolating the threat. The goal here is to halt attacker activity while preserving enough evidence to understand what happened. Think of it as applying a tourniquet, not performing surgery.

Start by isolating the obvious targets: suspected endpoints (especially anything used by administrators), remote access paths like VPN, RDP, and remote management tools, identity systems such as Microsoft 365, Entra ID, or Google Workspace, and any device showing active encryption or suspicious outbound traffic. If it looks like it is talking to someone it should not be, cut it off.

From there, move to containment actions that help without making things worse. Disable or reset credentials for any account you suspect has been compromised, prioritising admin and finance accounts first. Force sign-outs and revoke active sessions for affected cloud accounts. Look for and remove suspicious email forwarding rules, mailbox delegates, and OAuth app consents. These are a favourite persistence mechanism for attackers. If you suspect third-party integrations have been abused, temporarily pause risky API keys, webhooks, and connectors. Block any known malicious indicators if you have them, but log every single change you make. That log will matter later.
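
For Microsoft 365 tenants, forced sign-out can be done from the admin portal, or scripted. The sketch below is a minimal illustration using the Microsoft Graph revokeSignInSessions action. It assumes you have already obtained an admin access token (for example via MSAL, from a known-clean device) with permission to revoke sessions; the account names are placeholders. Adapt it to your own tenant and log every account you touch.

```python
# revoke_sessions.py - force sign-out of suspected accounts via Microsoft Graph (sketch).
# Assumes ACCESS_TOKEN was obtained separately (e.g. via MSAL) from a clean device,
# with permission to revoke sessions. Do not run this from a suspect machine.
import requests

ACCESS_TOKEN = "..."  # placeholder: admin token with User.RevokeSessions.All
GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

suspected_accounts = [
    "finance.lead@example.co.uk",  # placeholder UPNs - prioritise admin and finance
    "it.admin@example.co.uk",
]

for upn in suspected_accounts:
    # revokeSignInSessions invalidates refresh tokens; existing access tokens expire shortly after.
    resp = requests.post(f"{GRAPH}/users/{upn}/revokeSignInSessions", headers=HEADERS)
    print(upn, "revoked" if resp.ok else f"failed ({resp.status_code})")
    # Record this action, with a timestamp, in your incident log.
```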

A critical note on containment: do not reboot or wipe systems without first preserving evidence. Do not reset passwords from a device you suspect is compromised. Do not involve more people than necessary. The more hands on compromised systems, the more evidence disappears.

Evidence Preservation: Do Not Destroy the Scene You Are Standing In

While containment is happening, evidence is already slipping away. Logs rotate, sessions expire, and every change you make overwrites something. This is why evidence preservation runs in parallel with containment, not after it. You do not need a forensics lab. You need discipline.

Start capturing what matters: screenshots of alerts, ransom notes, suspicious sign-in prompts, admin changes, and newly created rules. If a phishing email triggered the incident, preserve the original with its headers intact and do not forward it around the organisation. Forwarding strips the metadata you actually need. Pull sign-in logs, admin audit logs, endpoint security logs, firewall logs, and backup logs. If your tools provide file hashes or suspicious filenames, record those too.
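
If you run Microsoft 365 with the appropriate licensing, sign-in logs can be exported from the Entra admin centre, or pulled via Microsoft Graph so you hold a preserved copy before retention windows expire. The sketch below is illustrative: it assumes an access token with audit log read permission, and what fields you actually get back depends on your licence tier.

```python
# export_signins.py - preserve recent Entra ID sign-in logs to a local file (sketch).
# Assumes ACCESS_TOKEN has AuditLog.Read.All and the tenant licence includes sign-in logs.
import json
import requests
from datetime import datetime, timezone, timedelta

ACCESS_TOKEN = "..."  # placeholder
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

since = (datetime.now(timezone.utc) - timedelta(days=7)).strftime("%Y-%m-%dT%H:%M:%SZ")
url = f"https://graph.microsoft.com/v1.0/auditLogs/signIns?$filter=createdDateTime ge {since}"

records = []
while url:
    page = requests.get(url, headers=HEADERS).json()
    records.extend(page.get("value", []))
    url = page.get("@odata.nextLink")  # follow pagination until exhausted

out_file = f"signins_{datetime.now(timezone.utc):%Y%m%dT%H%M%SZ}.json"
with open(out_file, "w", encoding="utf-8") as f:
    json.dump(records, f, indent=2)
print(f"Preserved {len(records)} sign-in records to {out_file}")
```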

The practical handling rules are straightforward. Preserve originals whenever possible and work from copies. Keep a basic chain of custody: who collected what, when they collected it, where it is stored, and who has had access. If you have a vendor, MSP, or MDR provider, contact them before making further changes and ask them what they need you to collect. Their guidance at this stage can save hours of duplicated effort and prevent accidental evidence destruction later.
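
A simple way to keep that chain of custody honest is to hash each evidence file as you collect it and record who collected it and where the copy lives. The snippet below is a self-contained illustration; the file paths and collector names are placeholders.

```python
# custody.py - record a SHA-256 hash and custody entry for each evidence file (sketch).
import hashlib
import json
from datetime import datetime, timezone

CUSTODY_FILE = "chain_of_custody.jsonl"  # placeholder; keep it with the incident log

def record_evidence(path: str, collected_by: str, stored_at: str) -> str:
    """Hash the evidence file and append a custody entry; returns the hex digest."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            sha256.update(chunk)
    entry = {
        "file": path,
        "sha256": sha256.hexdigest(),
        "collected_by": collected_by,
        "collected_at_utc": datetime.now(timezone.utc).isoformat(),
        "stored_at": stored_at,
    }
    with open(CUSTODY_FILE, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry["sha256"]

if __name__ == "__main__":
    # Example: hash an exported log and note it lives on an encrypted USB stick.
    print(record_evidence("signins_export.json", "J. Smith", "Encrypted USB #2"))
```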

Scoping: Answer Five Questions Before You Fix Anything

This is the stage where most small businesses go wrong. You have contained the immediate threat, you have started collecting evidence, and now every instinct is telling you to repair, restore, and get back to normal as quickly as possible. Resist that urge. If you fix things before you understand what is broken, you risk reintroducing the attacker, missing compromised accounts, and losing the evidence that tells you what data was affected.

Before you rebuild a single system, work through these five questions:

1. What happened? Was this a phishing attack, stolen credentials, an exploited vulnerability, or a supplier compromise?

2. What is affected? Which devices, accounts, cloud tenants, and SaaS tools are involved?

3. Is the attacker still active? Look for new logins, new rules, ongoing encryption, or strange outbound traffic.

4. What data is involved? Personal data, payroll records, customer lists, bank details, credentials, intellectual property?

5. What is your operational risk? Can you still invoice clients, fulfil orders, provide services, and pay your staff?

Even with minimal tooling, you can make real progress on these. Check your cloud admin portal for sign-in logs showing impossible travel, unusual locations, or unfamiliar devices. Look for newly created admin accounts or changes to MFA and security settings. Search for inbox rules, forwarding addresses, and unexpected mailbox permissions. Try to identify patient zero: the earliest affected user or system, and the earliest timestamp you can trust. Produce a short list of impacted systems, even if it is incomplete. You can always refine it later.
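
If you exported sign-in logs during evidence preservation, even a crude pass over them helps answer questions two and three. The sketch below assumes the Microsoft Graph sign-in export format from the earlier step (field names follow the Graph signIn resource) and simply flags sign-ins from countries you do not normally operate from, plus the earliest such timestamp per user. Treat it as a starting point for patient zero, not a verdict.

```python
# triage_signins.py - flag sign-ins from unexpected countries in an exported log (sketch).
# Assumes the JSON file produced earlier from Microsoft Graph /auditLogs/signIns.
import json

EXPECTED_COUNTRIES = {"GB"}  # placeholder: countries your staff normally sign in from

with open("signins_export.json", encoding="utf-8") as f:
    signins = json.load(f)

earliest_suspicious = {}  # user -> earliest suspicious sign-in seen
for s in signins:
    country = (s.get("location") or {}).get("countryOrRegion")
    if country and country not in EXPECTED_COUNTRIES:
        user = s.get("userPrincipalName", "unknown")
        when = s.get("createdDateTime", "")
        ip = s.get("ipAddress", "")
        if user not in earliest_suspicious or when < earliest_suspicious[user][0]:
            earliest_suspicious[user] = (when, country, ip)

# ISO 8601 timestamps sort lexicographically, so the earliest entry comes first.
for user, (when, country, ip) in sorted(earliest_suspicious.items(), key=lambda kv: kv[1][0]):
    print(f"{when}  {user}  {country}  {ip}")
```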

The answers to these five questions will shape everything that follows: who you call, what you report, and how you recover.

Make the Right Calls Early

Armed with your scoping answers, you now know enough to bring in the right help. Getting the right people involved quickly is one of the most impactful things you can do, and one of the most commonly delayed.

Internally, your first call should be to a senior decision-maker who has risk and spending authority. Then engage your IT team, MSP, or MDR provider for hands-on containment and investigation. Your finance lead needs to know immediately so they can monitor for payment diversions, contact the bank, and verify any recent vendor invoices.

Externally, your insurer should hear from you early. In many policies, early notification is not just helpful, it is a condition of cover, and it often unlocks access to specialist incident response teams at no additional cost. The NCSC can help route you to appropriate support and reporting options. If fraud, extortion, or financial diversion is suspected, report through Action Fraud or your local law enforcement channels.

UK GDPR: The 72-Hour Reality Without the Legal Fog

With your response team in place and containment underway, there is one clock you cannot afford to ignore. Under UK GDPR, if personal data has been involved in your incident and it is likely to result in a risk to people's rights and freedoms, you may need to notify the ICO within 72 hours of becoming aware. If the risk to individuals is high, you may also need to tell the affected people directly, without undue delay.

That phrase "becoming aware" is important, and it is exactly why your incident log matters so much. You are typically considered aware when you have a reasonable level of confidence that a security incident has impacted the confidentiality, integrity, or availability of personal data. You do not need absolute certainty, but you do need more than a vague hunch.

Encourage a simple documentation pattern throughout your response: what you know, what you suspect, and what you are doing to confirm. This creates the defensible timeline that regulators look for. It shows you took the situation seriously and acted proportionately, even if your initial assessment turned out to be incomplete.

A lightweight risk assessment for ICO notification purposes asks three questions: how sensitive is the data involved, how many people are affected, and what is the likely impact on those individuals? Higher sensitivity, larger numbers, and more serious potential harm all push you towards notification. If you are uncertain, the ICO's guidance for small organisations is clear and practically written. Use it.
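
Purely as a documentation aid, and emphatically not legal advice, you could even score those three questions in your incident log so the reasoning behind a notify-or-not decision is captured at the time. The thresholds in the sketch below are illustrative placeholders; the ICO's own self-assessment guidance should drive the actual decision.

```python
# ico_triage.py - record a rough notify-or-not rationale (illustrative only, not legal advice).
def notification_triage(sensitivity: int, people_affected: int, likely_harm: int) -> str:
    """
    sensitivity and likely_harm are scored 1 (low) to 3 (high); people_affected is a headcount.
    Returns a suggested next step to record in the incident log; the ICO's
    self-assessment guidance should drive the real decision.
    """
    people_band = 1 if people_affected < 10 else 2 if people_affected < 100 else 3
    score = sensitivity + likely_harm + people_band  # ranges from 3 to 9
    if score >= 7:
        return "Likely notifiable: prepare ICO notification within 72 hours of awareness."
    if score >= 5:
        return "Borderline: run the ICO self-assessment and record the outcome."
    return "Probably not notifiable: document the assessment and keep monitoring."

if __name__ == "__main__":
    print(notification_triage(sensitivity=3, people_affected=250, likely_harm=2))
```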

Communications: Control the Narrative Before It Controls You

Whether or not you need to notify the ICO, you will need to communicate internally, and potentially externally. How you handle this will shape whether your staff, customers, and suppliers trust you through the recovery or lose confidence entirely.

Designate one spokesperson and one source of truth. Tell staff immediately what to do: do not approve MFA prompts they did not initiate, report any suspicious emails, and stop using affected devices or services until they are cleared. If customer communications become necessary, keep them factual, action-oriented, and non-speculative. Cover what happened as far as you know, what you are doing about it, what they should do, and where they can get updates.

Resist the urge to say more than you know. Silence feels uncomfortable, but speculation feels worse when it turns out to be wrong.

Recovery: Restore Without Reintroducing the Attacker

When you are confident the threat is contained and your scoping is solid, it is time to restore operations. The key word here is controlled. Rushing back to normal is how attackers get a second chance.

Restore in a deliberate order. Start with identity and admin access, because if the attacker still has a foothold in your identity layer, nothing else matters. Then move to clean endpoints, followed by core services, and finally backups and data restoration with validation. Before you declare all clear, confirm that persistence mechanisms have been removed, rotate all keys, secrets, and passwords, and verify that logging is enabled and being retained.

This is also the moment to capture lessons learned and begin hardening. Enforce MFA properly across all accounts. Separate admin accounts from daily-use accounts. Tighten remote access controls. Establish a patching cadence. Isolate your backups and test your restore process regularly. The breach taught you where the gaps are. Use that knowledge before you forget it.

The Mistakes That Will Cost You

These are the mistakes that turn manageable incidents into catastrophic ones. Every single one of them happens regularly to real businesses.

Wiping or rebooting systems immediately. You destroy the forensic evidence you need to understand what happened and prove what data was affected.

Using compromised communication channels to coordinate the response. If your email is compromised and you coordinate through it, the attacker reads your playbook in real time.

Telling too many people too soon. Every additional person who knows increases the chance of an uncoordinated external communication, a social media post, or a panicked action that makes things worse.

Not notifying the insurer early enough. Many policies have notification windows. Miss them and you may invalidate your cover at the moment you need it most.

Restoring from backup before verifying the backup is clean. If the attacker has been in your environment for weeks, your backup may contain the malware. Restore it and you are back to square one within hours.

Declaring it over before confirming persistence mechanisms are gone. Attackers routinely plant backdoors, OAuth consents, and forwarding rules designed to survive a password reset. Check for all of them before you close the incident.
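
For Microsoft 365 tenants, two of those persistence mechanisms, inbox rules and OAuth consents, can be enumerated quickly. The sketch below is illustrative: it uses the Microsoft Graph API and assumes an admin access token with mailbox-settings and directory read permissions. Review the output by hand rather than deleting anything automatically.

```python
# persistence_check.py - list inbox rules and OAuth consent grants before closing out (sketch).
# Assumes ACCESS_TOKEN has MailboxSettings.Read and Directory.Read.All; review output by hand.
import requests

ACCESS_TOKEN = "..."  # placeholder admin token
GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

users_to_check = ["finance.lead@example.co.uk"]  # placeholder UPNs

for upn in users_to_check:
    rules = requests.get(
        f"{GRAPH}/users/{upn}/mailFolders/inbox/messageRules", headers=HEADERS
    ).json().get("value", [])
    for rule in rules:
        actions = rule.get("actions", {})
        forwards = actions.get("forwardTo", []) + actions.get("redirectTo", [])
        targets = [r["emailAddress"]["address"] for r in forwards]
        print(f"{upn}: rule '{rule.get('displayName')}' forwards to {targets or 'nothing'}")

# Delegated OAuth consents across the tenant - look for apps nobody recognises.
grants = requests.get(f"{GRAPH}/oauth2PermissionGrants", headers=HEADERS).json().get("value", [])
for g in grants:
    print(f"clientId={g.get('clientId')} consentType={g.get('consentType')} scope={g.get('scope')}")
```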

Incident Response Decision Tree

The playbook above covers a lot of ground, so we have distilled the key decision points into a single flowchart you can follow in real time. Use it from the moment you suspect a breach. Screenshot it, print it, and keep it somewhere your team can find it before they need it.

Your One-Page Breach Response Checklist

Save this. Print it. Put it somewhere your team can find it before they need it.


Mauven's Take: How to Turn This Into a Competitive Advantage

The following section is an editorial addition from the platform.

Kathryn's playbook is the response framework. What I want to add is the framing that turns incident preparedness from a cost into a commercial signal.

For your board: the single most useful thing to take from this article is that the ICO and your insurer both make decisions based on what you did and when, not just on what happened to you. An organisation that can produce a timestamped incident log, demonstrate it followed a documented response process, and show it notified within the 72-hour window is in a fundamentally different regulatory position to one that cannot. The investment in having this playbook ready before you need it is not a security cost. It is an insurance cost. And it is dramatically cheaper than the alternative.

For your customers: businesses that can demonstrate tested incident response capability are increasingly attractive to larger clients and public sector buyers. Supply chain security questionnaires are now routinely asking whether suppliers have a documented breach response process. Having one, and being able to describe it concisely, is a differentiator that most UK SMBs cannot currently claim.

The practical ask for this week: print Kathryn's checklist. Put it in a physical location your team can access without using any system that might be compromised. Tell your incident lead who they are before an incident makes the decision for you. These two actions cost nothing and materially change your response capability from the moment you complete them.


Kathryn Renaud

Kathryn ("Kat") Renaud is a cybersecurity graduate from Kennesaw State University and an IT technician in higher education supporting identity and access workflows, MFA troubleshooting, account access, and enterprise service operations.

She writes practitioner-focused cybersecurity analysis for small and medium-sized businesses, translating threat activity, control effectiveness, and governance requirements into practical security roadmaps.

Her work emphasizes risk-based prioritization, incident-driven lessons learned, and defensible decision-making for SMB leaders and lean IT/security teams operating under real budget and staffing constraints.
