Enough. It Is Time to Send Negligent Directors to Prison for Cyber Failures.

I have had enough.

On 3 June 2024, a ransomware attack shut down blood testing at Synnovis, the pathology provider for several major London hospitals. A patient died as a result. The security control that would have stopped this attack was multi-factor authentication. It was free. It took hours to implement. It was not enabled.

Nobody will face criminal prosecution. Nobody will go to prison. The executives responsible will face no personal consequences whatsoever.

This is not justice. This is not accountability. This is permission to fail.

If a construction director decided that hard hats were too expensive and a worker died as a result, that director would face criminal prosecution under the Health and Safety at Work Act. They would likely go to prison. This happens regularly, and it has created a culture where workplace safety is taken seriously because executives know they will personally face consequences for negligence.

Yet when healthcare executives fail to enable free security controls and a patient dies as a result, nothing happens. The worst consequence is usually a fine paid by the company while executives collect their bonuses and move on to their next role.

I am arguing, without reservation or equivocation, that directors should face criminal prosecution, including prison sentences, for gross cybersecurity negligence that causes serious harm. Here is why.

The Health and Safety Precedent Already Exists

We do not need to invent a new legal framework. We already have one that works. It is called the Health and Safety at Work etc. Act 1974.

Under this Act:

  • Employers have a duty to ensure the health, safety, and welfare of employees and others affected by their operations

  • Directors and senior managers can be personally prosecuted if their negligence causes death or serious injury

  • Penalties include unlimited fines and prison sentences of up to two years

  • The Health and Safety Executive actively prosecutes directors when safety failures cause harm

This is not theoretical. The HSE prosecutes directors regularly, bringing hundreds of health and safety prosecutions a year. Directors go to prison. Companies are fined heavily. The message is clear: if your negligence kills someone, you will face personal consequences.

Why do we not apply the same principle to cybersecurity?

What Constitutes Gross Negligence?

I am not arguing that every data breach should result in prosecution. That would be insane and counterproductive.

I am arguing that gross negligence causing serious harm should be criminal. Let me define what I mean by gross negligence:

Gross negligence exists when:

  1. Directors knew, or should have known, about a significant cybersecurity risk

  2. Effective controls existed to mitigate that risk

  3. Those controls were affordable and reasonable to implement

  4. Directors chose not to implement those controls

  5. This failure directly led to serious harm (death, major injury, catastrophic data breach affecting millions)

The Synnovis case ticks every single box.

Did executives know about ransomware risks? Yes. Ransomware attacks on healthcare have been headline news for years. The WannaCry NHS attack was in 2017. There is no possible way Synnovis executives were unaware of ransomware threats.

Did effective controls exist? Yes. Multi-factor authentication blocks credential-based attacks. This is not controversial or debatable. It is documented technical fact.

Were controls affordable and reasonable? Yes. MFA is free. It is built into Microsoft 365, Google Workspace, and most other mainstream platforms. Implementation takes hours, not months. There is no cost barrier. (If you doubt how little machinery is involved, see the sketch at the end of this section.)

Did they choose not to implement controls? Yes. The absence of MFA was a deliberate organisational decision, even if nobody explicitly said "we are choosing not to enable MFA." Inaction is a choice.

Did this lead to serious harm? Yes. A patient died. Others suffered severe harm. Nearly 600 patient safety incidents occurred. Over 10,000 appointments were cancelled. This is catastrophic harm.

This is textbook gross negligence. And nobody will face prosecution.
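To make the "free and takes hours" claim concrete: app-based MFA is nothing more exotic than a shared secret and a six-digit code that rolls over every 30 seconds. Below is a minimal sketch of the mechanism using the open-source pyotp library. The user and organisation names are invented, and in practice you would not hand-roll any of this: you would flip the existing switch in Microsoft 365 or Google Workspace.

```python
# Minimal sketch of app-based MFA (TOTP, RFC 6238) using pyotp
# (pip install pyotp). Illustration only, not a production login flow.
import pyotp

# Enrolment: generate a shared secret; the user loads it into their
# authenticator app, usually via a QR code built from this URI.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
uri = totp.provisioning_uri(name="user@example.com", issuer_name="ExampleOrg")
print("Scan into your authenticator app:", uri)

# Login: after the password check, also require the current code.
code = input("Enter the 6-digit code from your app: ")
if totp.verify(code, valid_window=1):  # tolerate one 30-second step of drift
    print("Second factor accepted.")
else:
    print("Rejected: a stolen password alone is no longer enough.")
```

That is the entire mechanism. A shared secret and a clock. There is no technical or financial excuse for not having it on critical systems.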

The Cost-Benefit Reality

Some will argue that we cannot prosecute directors for cybersecurity failures because determining negligence is too complex.

Rubbish.

Determining negligence in health and safety cases is equally complex. Was the scaffolding installed correctly? Were proper procedures followed? Was the equipment properly maintained? These are technical questions requiring expert testimony, yet the HSE successfully prosecutes hundreds of cases annually.

Cybersecurity negligence is no more complex. Expert witnesses can testify whether reasonable controls were in place. Technical analysis can determine whether the breach was preventable. Standard of care can be established based on industry best practices and regulatory guidance.

The argument that cybersecurity is "too technical" for criminal prosecution is an excuse used by people who do not want executives held accountable.

The Current System Enables Negligence

The absence of criminal liability creates perverse incentives.

From a director's perspective:

Option A: Implement MFA

  • Cost: Staff training time, minor operational disruption

  • Benefit: Reduced security risk

  • Downside: No visible return, no credit for preventing attacks that do not happen

  • Liability: None

Option B: Do Not Implement MFA

  • Cost: Zero

  • Benefit: Avoid operational disruption and staff complaints

  • Downside: Security risk

  • Liability: Company might get fined, but directors face no personal consequences

Which option would a rational, self-interested executive choose?

Without personal liability, the rational choice is negligence. The costs of implementing security are immediate and certain. The benefits are abstract and uncertain. The consequences of failure are borne by the organisation and its victims, not by the decision-makers.

Criminal liability changes this calculus dramatically.

With criminal liability:

Option B: Do Not Implement MFA

  • Cost: Zero

  • Benefit: Avoid operational disruption

  • Downside: Security risk plus personal risk of prison if something goes wrong

  • Liability: Potential criminal prosecution if breach causes serious harm

Suddenly, the rational choice is to implement security. The personal consequences of negligence outweigh the operational convenience of ignoring security.
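You can watch the calculus flip in a back-of-the-envelope model. Every number below is an assumption I have invented purely to show the shape of the incentive; none of it is data from Synnovis or anywhere else.

```python
# Toy expected-cost model of the director's choice. All figures are
# illustrative assumptions, not real data.

P_BREACH_WITHOUT_MFA = 0.30  # assumed chance of a credential breach per year
P_BREACH_WITH_MFA = 0.03     # assumed residual chance with MFA enforced

COST_OF_IMPLEMENTING = 1     # the director's personal hassle (arbitrary units)
PERSONAL_COST_IF_BREACH = {  # what a harmful breach costs the director
    "current regime": 0,         # fine paid by the company, not the director
    "criminal liability": 1000,  # assumed: prosecution, disqualification, prison
}

def expected_personal_cost(implement_mfa: bool, regime: str) -> float:
    """Director's expected personal cost of each option under each regime."""
    p_breach = P_BREACH_WITH_MFA if implement_mfa else P_BREACH_WITHOUT_MFA
    upfront = COST_OF_IMPLEMENTING if implement_mfa else 0
    return upfront + p_breach * PERSONAL_COST_IF_BREACH[regime]

for regime in PERSONAL_COST_IF_BREACH:
    option_a = expected_personal_cost(True, regime)   # implement MFA
    option_b = expected_personal_cost(False, regime)  # skip MFA
    choice = "implement MFA" if option_a < option_b else "skip MFA"
    print(f"{regime}: implement={option_a:.0f}, skip={option_b:.0f} -> {choice}")
```

Under the current regime the "skip" column always wins, because the director's personal cost of a breach is zero. Introduce personal liability and the ordering inverts, exactly as argued above.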

The "But My Business Is Different" Objection

Small business owners reading this might panic: "Does this mean I could go to prison if I get hacked?"

No. Here is why.

Gross negligence requires that you:

  1. Knew about the risk

  2. Had affordable, effective controls available

  3. Chose not to implement them

  4. Caused serious harm as a result

If you run a five-person business and genuinely could not have been expected to know about MFA, or you implemented the controls that were reasonable for your size and resources, you would not meet the threshold for gross negligence.

The standard is proportionate. We do not expect a corner shop to have the same security infrastructure as a multinational bank. But we do expect both to implement basic, affordable controls appropriate to their size, resources, and the sensitivity of data they handle.

Think of health and safety law. A construction site and a retail shop have different safety requirements, but both must implement reasonable safety measures. The law scales appropriately.

The same principle applies to cybersecurity. A village surgery and Synnovis have different security requirements, but both should have MFA enabled on critical systems. The control is the same, but the consequences of failure scale with the criticality of the organisation.

The Objection About Sophisticated Attacks

Some will argue that sophisticated attackers can bypass even good security, so prosecuting directors is unfair.

This argument fails to distinguish between "sophisticated attack that bypassed good security" and "criminals walking through an unlocked door."

If an organisation implements MFA, maintains proper backups, segments their network, trains staff, and still gets breached by a sophisticated attack using zero-day exploits, that is not negligence. That is the reality of operating in a threat environment with well-resourced adversaries.

But if an organisation does not bother with basic free controls and gets breached through compromised credentials, that is negligence. The attack method does not need to be sophisticated when the defences do not exist.

Synnovis was not breached by a sophisticated attack. They were breached because basic authentication controls were not enabled. This is no different from leaving the front door unlocked and being surprised when criminals walk in.

The Insurance Industry Objection

Cyber insurance companies will hate this proposal because it increases their exposure. If directors face criminal prosecution, insurers will have to demand better security from the organisations they cover. That means more rigorous audits, stricter policy conditions, and probably higher premiums for organisations with poor security.

Good. That is exactly what should happen.

Insurance should incentivise risk reduction, not subsidise negligence. If the threat of criminal prosecution forces organisations to implement better security to satisfy insurance requirements, the system is working as intended.

The Board Composition Problem

One reason directors are not held accountable for cybersecurity failures is that most boards lack members with genuine cybersecurity expertise.

When the Health and Safety Executive investigates a construction death, board members cannot claim ignorance, because construction boards include people who understand construction safety. They know what proper scaffolding looks like. They know what safety equipment is required. They cannot hide behind "we are not technical experts."

But when the ICO investigates a data breach, boards can claim they relied on IT teams and assumed security was adequate. Nobody on the board has the expertise to challenge reassurances or identify obvious gaps.

This needs to change. If we are going to hold directors criminally liable for cybersecurity negligence, we need to mandate that boards of critical organisations include members with genuine cybersecurity expertise.

Not "someone who once took a cybersecurity awareness course." Not "our finance director who is good with computers." Actual cybersecurity professionals who can identify when an organisation lacks basic controls and challenge executive complacency.

The Regulatory Capture Problem

The UK's approach to cybersecurity regulation has been captured by the compliance industry. The focus is on frameworks, certifications, and box-ticking exercises rather than genuine security.

Organisations achieve Cyber Essentials certification while running systems without MFA. They complete Data Security and Protection Toolkit assessments while operating with terrible security. They tick all the compliance boxes while being fundamentally insecure.

This is because compliance frameworks measure process, not outcomes. They ask "do you have a security policy?" not "are your systems actually secure?"

Criminal liability for negligence cuts through this compliance theatre. It does not matter if you have Cyber Essentials certification. If you failed to implement basic free controls and someone died as a result, you are liable.

This would force a fundamental shift from compliance-focused security to outcome-focused security. The question becomes not "can we demonstrate compliance?" but "have we implemented reasonable security measures to prevent serious harm?"

What This Would Look Like in Practice

Let me sketch out what a Corporate Cyber Negligence Act might contain:

Offence: It is an offence for a director, senior manager, or person with responsibility for cybersecurity to fail to implement reasonable cybersecurity measures where:

  • That failure constitutes gross negligence

  • The failure directly contributes to serious harm (death, major injury, or catastrophic data breach)

Threshold for Gross Negligence:

  • Director knew or should have known about the cybersecurity risk

  • Effective controls existed and were reasonable to implement given the organisation's size and resources

  • Director failed to ensure controls were implemented

  • Failure directly contributed to serious harm

Defences:

  • Reasonable security measures were implemented but failed due to sophisticated attack

  • Control was not technically feasible or available

  • Decision not to implement control was documented with reasonable justification

  • Harm was not foreseeable result of the security failure

Penalties:

  • Up to 10 years imprisonment for failures causing death

  • Up to 5 years imprisonment for failures causing major injury or catastrophic data breach

  • Unlimited fines

  • Director disqualification

Proportionality:

  • Standards scaled to organisation size, resources, and criticality

  • Small businesses held to different standard than critical infrastructure

  • Industry best practices and regulatory guidance used to establish reasonable standard of care
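To show that the test is mechanical rather than mystical, here is the threshold and its defences written out as a predicate. The structure, field names, and worked example are entirely my own illustration, not draft legislation:

```python
# Sketch of the proposed gross-negligence test as a predicate.
# Structure and names are my own illustration, not legal drafting.
from dataclasses import dataclass

@dataclass
class SecurityFailure:
    # Threshold elements (all must hold)
    knew_or_should_have_known: bool
    effective_control_existed: bool
    control_reasonable_for_org: bool  # scaled to size, resources, criticality
    failed_to_ensure_control: bool
    failure_caused_serious_harm: bool
    # Defences (any one defeats the charge)
    beaten_despite_reasonable_measures: bool = False
    control_not_technically_feasible: bool = False
    decision_documented_with_justification: bool = False
    harm_not_foreseeable: bool = False

def gross_negligence(f: SecurityFailure) -> bool:
    """Every threshold element holds and no defence applies."""
    threshold = all([
        f.knew_or_should_have_known,
        f.effective_control_existed,
        f.control_reasonable_for_org,
        f.failed_to_ensure_control,
        f.failure_caused_serious_harm,
    ])
    defence = any([
        f.beaten_despite_reasonable_measures,
        f.control_not_technically_feasible,
        f.decision_documented_with_justification,
        f.harm_not_foreseeable,
    ])
    return threshold and not defence

# The pattern this article describes: every element present, no defence.
print(gross_negligence(SecurityFailure(True, True, True, True, True)))  # True
```

A court weighs evidence, not booleans, obviously. But the logical structure, five conjunctive elements gated by four defences, is no more exotic than the tests juries already apply in health and safety cases.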

Would this be perfect? No. Would it be better than the current system where nobody faces consequences? Absolutely.

The Political Reality

This will never happen through voluntary industry action. The cybersecurity establishment is too invested in the current compliance-focused approach. Vendors profit from selling certifications and frameworks. Consultants profit from advising on compliance. Executives benefit from a system with no personal liability.

Change will require political intervention. It will require MPs willing to introduce legislation in the face of fierce industry opposition. It will require a public demand for accountability when preventable cybersecurity failures cause harm.

The Synnovis case provides the perfect catalyst. A patient is dead. The security control was free. Nobody will face prosecution. This is indefensible.

If there was ever a moment to push for criminal liability for cybersecurity negligence, this is it.

The Personal Element

I am writing this with barely contained fury because I am tired of watching preventable disasters kill people while executives walk away unscathed.

I have worked in cybersecurity for long enough to know that most breaches are preventable. They are not sophisticated attacks by nation-states. They are criminals exploiting basic security failures: missing patches, absent MFA, untrained staff, terrible passwords.

And yet, every time a preventable breach causes serious harm, the response is the same:

  • Company issues a statement expressing deep concern

  • Executives promise to take security seriously going forward

  • ICO investigates and maybe issues a fine

  • Nobody faces personal consequences

  • Executives move to their next role and the cycle repeats

I am tired of it. I am tired of the excuses. I am tired of the compliance theatre. I am tired of watching people die because somebody could not be bothered to enable free security controls.

A patient is dead. Not because of an unpreventable tragedy. Not because of a sophisticated attack against good defences. Not because security was technically infeasible.

A patient is dead because nobody enabled multi-factor authentication. Free. Hours to implement. Completely preventable.

And nobody will go to prison.

This is not acceptable. This needs to change. And the only way it will change is if we create real personal consequences for executives whose negligence kills people.

Your Role in This

If you agree that directors should face criminal prosecution for gross cybersecurity negligence:

Write to your MP. Reference the Synnovis case. Ask what they are doing to ensure executives face consequences for preventable security failures that cause harm.

Share this article. The more people who understand that preventable cybersecurity failures are killing people, the more political pressure for change.

Demand accountability. When the next preventable breach happens, do not let executives hide behind corporate statements and ICO fines. Ask why nobody is facing prison.

Implement security in your own organisation. Do not be the next case study. Enable MFA. Train your staff. Implement reasonable security measures. Document your decisions.

If you disagree with criminal liability for cybersecurity negligence, I invite you to explain why to the family of the patient who died because of the Synnovis breach.

Explain why the executives who failed to enable free MFA should face no personal consequences while their family member is dead.

I cannot think of a justification that does not boil down to "executive convenience is more important than patient safety."

And I am done accepting that argument.

The Bottom Line

A patient is dead. Free controls would have prevented it. Nobody will face prosecution.

This is not a functioning accountability system. This is permission to fail.

It is time to send negligent directors to prison for cybersecurity failures that cause serious harm. The legal framework exists. The technical expertise exists. The precedent exists in health and safety law.

The only thing missing is political will.

The Synnovis case makes the argument undeniable. Now we need to act.

 
Noel Bradford

Noel Bradford – Head of Technology at Equate Group, Professional Bullshit Detector, and Full-Time IT Cynic

As Head of Technology at Equate Group, my job description is technically “keeping the lights on,” but in reality, it’s more like “stopping people from setting their own house on fire.” With over 40 years in tech, I’ve seen every IT horror story imaginable—most of them self-inflicted by people who think cybersecurity is just installing antivirus and praying to Saint Norton.

I specialise in cybersecurity for UK businesses, which usually means explaining the difference between ‘MFA’ and ‘WTF’ to directors who still write their passwords on Post-it notes. On Tuesdays, I also help further education colleges navigate Cyber Essentials certification, a process so unnecessarily painful it makes root canal surgery look fun.

My natural habitat? Server rooms held together with zip ties and misplaced optimism, where every cable run is a “temporary fix” from 2012. My mortal enemies? Unmanaged switches, backups that only exist in someone’s imagination, and users who think clicking “Enable Macros” is just fine because it makes the spreadsheet work.

I’m blunt, sarcastic, and genuinely allergic to bullshit. If you want gentle hand-holding and reassuring corporate waffle, you’re in the wrong place. If you want someone who’ll fix your IT, tell you exactly why it broke, and throw in some unsolicited life advice, I’m your man.

Technology isn’t hard. People make it hard. And they make me drink.

https://noelbradford.com