The Psychology of Cybersecurity Negligence: Why Smart People Make Fatal Decisions

By Mauven MacLeod

Nobody decides to kill a patient through cybersecurity negligence. Yet that is, in effect, the choice executives at Synnovis made when they failed to enable multi-factor authentication on critical healthcare systems. On 3 June 2024, ransomware shut down the pathology lab, and a patient died waiting for blood test results that never arrived.

These were not incompetent people. They were educated professionals running a healthcare organisation responsible for diagnostic services across southeast London. They had access to free security controls. They had been warned about ransomware threats. They had every resource needed to prevent exactly what happened.

So why did they make a decision that, in retrospect, seems obviously catastrophic?

As someone with a background in government cyber analysis and a professional interest in human decision-making under uncertainty, I want to examine the psychological and systemic factors that enable intelligent people to rationalise fatal security negligence. Because until we understand why these decisions happen, we cannot prevent the next preventable death.

The Optimism Bias: "It Will Not Happen to Us"

The most fundamental psychological factor is optimism bias, the documented tendency for people to believe that negative events are less likely to happen to them than to others.

When Synnovis executives looked at ransomware statistics, they likely thought:

  • "That happens to organisations with poor security, not us"

  • "We have not been attacked before, so we are probably not a target"

  • "We have other security measures in place"

  • "Implementing MFA would disrupt operations for something that probably will not happen"

This is not stupidity. It is how human brains process risk. We are psychologically designed to discount threats that have not yet materialised in our immediate experience. It is the same mechanism that lets people smoke despite knowing the cancer statistics, or drive without seatbelts because they have never been in an accident.

The problem is that cybersecurity threats do not care about our psychological biases. Ransomware gangs do not check whether you feel personally vulnerable before attacking. The Qilin group looked at Synnovis and saw exactly what was there: valuable healthcare data, critical infrastructure, and missing authentication controls.

The Abstraction Gap: When Cybersecurity Feels Theoretical

Physical safety is intuitive. If I tell you that missing railings on a fifth-floor balcony could cause someone to fall and die, you immediately grasp the causal chain. You can visualise the sequence of events.

Cybersecurity threats exist in an abstraction layer that most humans struggle to conceptualise.

When told that "missing multi-factor authentication creates vulnerability to credential-based attacks that could lead to ransomware deployment disrupting critical services," most executives hear technical jargon, not "someone might die."

This abstraction gap allows decision-makers to mentally categorise cybersecurity as an IT problem rather than a safety issue. It gets bundled with server maintenance and software updates, not with the critical safety controls that keep people alive.

The Synnovis executives likely saw MFA implementation as:

  • An IT project requiring budget approval

  • A change management issue with staff training implications

  • A potential source of user complaints about authentication inconvenience

  • Something to do eventually, when time and resources permitted

They did not see it as:

  • The difference between a patient receiving life-saving blood test results or dying in a hospital corridor

The causal chain between "not enabling MFA" and "patient death" was too abstract, too separated by intermediate technical steps, to trigger the visceral risk assessment that physical safety threats provoke.
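
To make that causal chain less abstract, here is a minimal sketch, in Python, of what the missing control actually does. It is an illustration only, not Synnovis's real authentication logic: the stored credentials are hypothetical, the password handling is deliberately simplified, and the second factor assumes the open-source pyotp library for time-based one-time codes.

```python
# Illustrative sketch only: hypothetical credentials, simplified password handling.
# Assumes the third-party 'pyotp' library (pip install pyotp) for TOTP codes.
import hashlib
import hmac

import pyotp

# Hypothetical user record; a real system would use a proper password-hashing scheme.
STORED_PASSWORD_HASH = hashlib.sha256(b"correct horse battery staple").hexdigest()
TOTP_SECRET = pyotp.random_base32()  # shared once with the user's authenticator app


def login_password_only(password: str) -> bool:
    """Without MFA, a phished or reused password is all an attacker needs."""
    supplied = hashlib.sha256(password.encode()).hexdigest()
    return hmac.compare_digest(supplied, STORED_PASSWORD_HASH)


def login_with_mfa(password: str, otp_code: str) -> bool:
    """With MFA, the attacker also needs a fresh code from a device they do not hold."""
    if not login_password_only(password):
        return False
    return pyotp.TOTP(TOTP_SECRET).verify(otp_code)
```

The gap between those two functions is the entire control under discussion: with the first, stolen credentials alone open the door.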

The Normalisation of Deviance: When Bad Practices Become Standard

Diane Vaughan's research on the Challenger disaster identified a phenomenon called "normalisation of deviance," where organisations gradually accept practices that deviate from safety standards because nothing bad has happened yet.

Every day that Synnovis operated without MFA and did not get attacked reinforced the belief that MFA was not essential. Each successful day became evidence that the current security posture was adequate. The absence of disaster became proof that disaster was unlikely.

This creates a perverse incentive structure where:

  • Good security practices that prevent attacks receive no recognition (because nothing visible happens)

  • Risky practices that have not yet caused problems appear validated by continued operation

  • The decision not to implement security controls seems justified by the lack of immediate consequences

Synnovis had probably been operating without MFA for months or years. During that entire period, nothing catastrophic happened. This success validated the decision not to implement MFA, right up until the moment Qilin breached their systems.

The Diffusion of Responsibility: Nobody's Job Is Everyone's Problem

In large organisations, security responsibilities are often fragmented across multiple roles and departments:

  • IT teams manage technical infrastructure

  • Security teams (if they exist) advise on threats

  • Business units own operational systems

  • Executive teams approve budgets

  • Boards provide governance oversight

When security failures occur, it becomes remarkably difficult to identify who was actually responsible for the decision not to implement basic controls.

At Synnovis:

  • Did the IT team recommend MFA but fail to get budget approval?

  • Did the security team raise concerns that were ignored?

  • Did executives never receive proper briefings on the risks?

  • Did the board lack cybersecurity expertise to ask the right questions?

This diffusion of responsibility creates situations where everyone assumes someone else is handling security, while nobody actually owns the decision. It also makes post-incident accountability nearly impossible, because you cannot prosecute "organisational failure" when nobody specifically made the fatal choice.

The Cost-Benefit Fallacy: Measuring the Unmeasurable

Standard business decision-making relies on cost-benefit analysis. Rational executives are trained to weigh costs against benefits and choose options that maximise value.

The problem is that cybersecurity benefits are inherently unmeasurable until after the disaster they prevented.

What is the ROI of MFA? It is:

  • Zero pounds in visible returns every day that you are not attacked

  • Potentially millions of pounds in prevented losses on the day you are attacked

  • The value of human lives that never appear on balance sheets

This creates a situation where the cost of MFA implementation is concrete, measurable, and feels significant (staff training time, minor operational disruption, potential user complaints), while the benefit is abstract, theoretical, and unmeasurable right up until the moment it becomes catastrophically obvious.

On paper, the cost-benefit analysis suggests not implementing MFA, because the measurable costs exceed the measurable benefits. That reasoning holds right up until the day patients die, at which point everyone asks why nobody implemented the obvious control that would have prevented the disaster.
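
To see why that "on paper" conclusion misleads, it helps to put numbers on it, even crude ones. The figures below are purely illustrative assumptions, not Synnovis's actual costs or breach probabilities; the point is the shape of the comparison once low-probability, high-impact losses are included.

```python
# Toy expected-loss comparison with entirely hypothetical figures.
annual_breach_probability = 0.05   # assumed chance of a credential-based breach per year
breach_cost = 30_000_000           # assumed direct cost of a major ransomware incident (GBP)
mfa_rollout_cost = 250_000         # assumed one-off cost of rolling out MFA (GBP)
mfa_effectiveness = 0.9            # assumed share of credential attacks MFA blocks

expected_loss_without_mfa = annual_breach_probability * breach_cost
expected_loss_with_mfa = annual_breach_probability * (1 - mfa_effectiveness) * breach_cost

print(f"Expected annual loss without MFA: £{expected_loss_without_mfa:,.0f}")
print(f"Expected annual loss with MFA:    £{expected_loss_with_mfa:,.0f}")
print(f"One-off MFA rollout cost:         £{mfa_rollout_cost:,.0f}")
```

Even this crude framing, which assigns no value at all to human life or public trust, reverses the apparent conclusion: the visible implementation cost is small against the expected loss it removes.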

The Disconnect Between Decision and Consequence

Perhaps the most fundamental problem is the temporal and spatial disconnect between security decisions and their consequences.

When a construction company director decides not to provide safety equipment, and a worker dies on that same construction site days later, the causal connection is immediate and obvious. The director sees the consequence of their decision.

When Synnovis executives decided not to enable MFA, they:

  • Made the decision in a board room or office, disconnected from patient care

  • Faced no immediate consequences

  • Never met the patient who would eventually die

  • Had no visceral connection to the potential human cost

The patient died months or years after the decision not to implement MFA. The executives never knew this person. They never had to look at the family and explain why basic security controls were not implemented. They were insulated from the consequence of their decision by layers of abstraction, distance, and time.

This psychological distance makes it easier to prioritise operational convenience over security. It is not that executives do not care about patients dying. It is that the connection between "not enabling MFA" and "patient death" is so abstract and distant that it does not trigger appropriate emotional weight in the decision-making process.

Systemic Failures That Enable Individual Bad Decisions

Individual psychology explains part of the problem, but systemic factors create the environment where these psychological weaknesses become fatal:

Lack of Security Expertise in Leadership

Most boards and executive teams lack members with genuine cybersecurity expertise. This creates situations where nobody can effectively challenge reassurances that "we take security seriously" or identify when obvious controls are missing.

The Synnovis board likely included medical professionals, business executives, and financial experts. Did it include anyone who could have looked at the security posture and immediately identified that missing MFA was a critical vulnerability? Probably not.

Regulatory Frameworks That Focus on Fines, Not Consequences

The Information Commissioner's Office (ICO) can fine organisations for data breaches. These fines are paid by the organisation, not by the individuals who made the decisions. This creates a moral hazard: executives can make risky choices knowing that they personally will not face consequences if things go wrong.

Compare this to health and safety law, where directors personally face prosecution if their decisions kill workers. That personal liability changes decision-making calculus dramatically.

Insurance That Socialises Risk

Cyber insurance allows organisations to transfer the financial consequences of breaches to insurers. While this provides valuable financial protection, it also reduces the direct cost of poor security decisions. If the insurance pays for breach recovery, where is the incentive for executives to invest in prevention?

Cultural Acceptance of Inevitable Breaches

The cybersecurity industry has, unfortunately, created a culture where breaches are seen as inevitable. "It is not if, but when" has become such a standard phrase that it inadvertently normalises negligence. If breaches are inevitable, why invest heavily in prevention?

This fatalism ignores the crucial distinction between "sophisticated attack that bypassed good security" and "criminals walking through an unlocked door because nobody could be bothered with the free lock."

Breaking the Cycle: What Needs to Change

Understanding these psychological and systemic factors suggests several interventions:

1. Make Consequences Personal

As Noel argued in Monday's podcast, criminal prosecution of executives for gross negligence would create the personal liability that changes decision-making. When executives know they could personally go to prison if basic controls are not implemented and someone dies, the psychological calculus shifts dramatically.

2. Close the Abstraction Gap

Security briefings need to explicitly connect technical controls to human consequences. Instead of "implementing MFA reduces credential-based attack vectors," boards need to hear "without MFA, ransomware could shut down our blood testing for weeks, and patients could die waiting for results." Make it visceral. Make it real.

3. Mandate Security Expertise in Governance

Require boards of critical organisations to include members with genuine cybersecurity expertise who can identify obvious gaps and challenge complacent assurances. You would not run a hospital without medical expertise on the board. Why run a healthcare organisation whose services depend entirely on IT systems without security expertise?

4. Eliminate the Cost-Benefit Fallacy

Stop treating basic security controls as optional investments requiring ROI justification. MFA is not an investment any more than smoke alarms in hospitals are investments. They are fundamental requirements for operating safely. The cost-benefit analysis is irrelevant because the alternative is unacceptable.

5. Create Accountability Structures

Establish clear chains of responsibility for security decisions. Who specifically owns the decision to implement or not implement MFA? When something goes wrong, who is accountable? Eliminate diffusion of responsibility by making security decisions explicit and documented.
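
As one way of making that ownership explicit, the sketch below shows a minimal security decision record. The field names and the example values are hypothetical, not an existing standard or framework; the point is that the control, the named owner, the rationale, and a review date are written down where they can later be audited.

```python
# Minimal sketch of an explicit, auditable security decision record (hypothetical fields).
from dataclasses import dataclass
from datetime import date


@dataclass
class SecurityDecisionRecord:
    control: str        # the specific control being decided on
    decision: str       # "implement" or "accept risk"
    owner: str          # the named individual accountable for this decision
    rationale: str      # why the decision was taken
    review_by: date     # when the decision must be revisited
    approved_by: tuple  # everyone who signed off


# Hypothetical example: the kind of record that makes "nobody owned it" impossible.
record = SecurityDecisionRecord(
    control="Enforce MFA on all remote-access and administrative accounts",
    decision="accept risk",
    owner="Chief Information Officer",
    rationale="Deferred pending staff training budget",
    review_by=date(2024, 9, 1),
    approved_by=("CIO", "Board risk committee"),
)
```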

The Uncomfortable Truth

The uncomfortable truth is that the Synnovis executives were probably normal people doing what seemed reasonable at the time. They were not uniquely evil or incompetent. They were human beings subject to the same psychological biases and systemic pressures that affect all of us.

But a patient is still dead.

Understanding why intelligent people make fatal security decisions does not excuse those decisions. It explains them. And that explanation points towards systemic changes that could prevent the next preventable death.

We cannot fix human psychology. Optimism bias, abstraction gaps, and normalisation of deviance are features of how human brains work, not bugs we can patch. But we can design systems that account for these psychological realities.

We can create regulatory frameworks that make executives personally liable for gross negligence. We can mandate security expertise in governance. We can close the gap between decisions and consequences. We can stop treating preventable disasters as inevitable costs of doing business.

The Synnovis case demonstrates that our current approach is not working. Intelligent, educated professionals made decisions that killed a patient. They will face no consequences. And somewhere else, right now, other executives are making the same decisions that will kill the next patient.

Until we address both the psychology and the systems that enable negligent security decisions, we will keep seeing preventable disasters. The only question is who dies next, and whether we will finally decide that enough is enough.

Next week's podcast episode will design the practical legal framework for corporate cyber negligence legislation. What would accountability actually look like?

Research Sources

  • Sharot, T. (2011). The optimism bias. Current Biology.

  • Vaughan, D. (1996). The Challenger Launch Decision. University of Chicago Press.

  • Kahneman, D. (2011). Thinking, Fast and Slow. Penguin Books.

  • Heath, C., & Heath, D. (2010). Switch: How to Change When Change Is Hard. Random House.