The Accountancy Firm That Blamed DNS for Three Weeks While a Compromised Router Rewrote Their Network Map

Case Study

This is a composite case study constructed from real incidents reported to security professionals. Company and individual names have been fictionalised, but the technical details, the mistakes, and the consequences reflect patterns observed across multiple UK small businesses.

The trouble at Aldworth & Crane started on a Monday morning in February. Two members of staff could not reach the firm’s cloud-based practice management software. Email worked. Most websites loaded. But the portal they used to manage client accounts, tax filings, and sensitive financial documents would not connect.

By Tuesday, someone had said the word “DNS.” By Wednesday, the entire firm believed it.

Week One: The Wrong Diagnosis

Aldworth & Crane is a fifteen-person accountancy practice based in the West Midlands. They handle personal tax returns, small business accounts, and payroll services for about two hundred clients. Their IT was managed by a sole-trader IT consultant who visited once a month and was available by phone for emergencies.

When the portal connectivity issues appeared, the office manager called the IT consultant. He was unavailable until Thursday. The firm's most technically confident staff member, a senior accountant who had once configured a home Wi-Fi extender, decided to investigate.

He typed “website not loading DNS fix” into Google and followed the first result. Twenty minutes later, he had changed the firm’s router DNS settings from the ISP default to Google’s 8.8.8.8. The portal still did not work. He then changed them to Cloudflare’s 1.1.1.1. It appeared to work briefly, then failed again.

By Thursday, when the IT consultant arrived, the DNS settings had been changed twice without documentation, the portal access was intermittent, and three staff members were using mobile hotspots to work. The IT consultant, under pressure to fix the issue quickly, focused entirely on DNS. He flushed caches, checked DNS records for the portal’s domain, and found nothing wrong with the records themselves.

He concluded it was a “propagation issue” and said it would resolve itself.

It did not.

Week Two: The Pattern Nobody Questioned

The following week, a new symptom appeared. Two specific cloud applications, the practice management portal and a secure document sharing service, were unreliable. But the firm’s email, hosted with Microsoft 365, worked consistently. General web browsing was unaffected. BBC, HMRC, Companies House: all loaded without issue.

This selective failure was the clue that should have triggered a different investigation. In a genuine DNS failure at the resolver level, all lookups through that resolver are affected. If some domains resolve correctly and others do not, the problem lies either with those specific domains' DNS records or with something selectively interfering with certain queries.
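That distinction is cheap to test. The sketch below, which assumes the third-party dnspython package is installed and uses placeholder domain names rather than the firm's real services, resolves a handful of names through whatever resolver the machine is configured to use. If most names answer normally while one or two fail or return unfamiliar addresses, the resolver itself is not broadly broken and something more selective is going on.

```python
# Minimal sketch: resolve several domains through the system's configured
# resolver and show what each returns. Requires the third-party dnspython
# package (pip install dnspython). The domain names are placeholders.
import dns.resolver

domains = [
    "bbc.co.uk",               # general browsing: worked throughout
    "portal.example.com",      # placeholder for the practice management portal
    "docs-share.example.com",  # placeholder for the document sharing service
]

resolver = dns.resolver.Resolver()  # uses the system-configured DNS servers
print(f"Querying via: {resolver.nameservers}")

for name in domains:
    try:
        answers = resolver.resolve(name, "A")
        ips = ", ".join(rr.to_text() for rr in answers)
        print(f"{name:<28} -> {ips}")
    except Exception as exc:  # NXDOMAIN, timeout, etc.
        print(f"{name:<28} -> lookup failed ({exc})")
```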

Nobody at Aldworth & Crane made that distinction. The phrase “must be DNS” had achieved the status of holy scripture. The IT consultant changed the resolver settings again, this time to Quad9’s 9.9.9.9. He also recommended the firm contact their ISP to check for upstream DNS issues. The ISP confirmed their DNS service was operating normally.

Meanwhile, the two affected cloud services continued to fail intermittently. Staff adapted. They used mobile hotspots. They accessed the portal from home. Nobody asked the question that mattered: why do these specific services fail from the office network, but work perfectly from every other network?

Week Three: The Real Problem

The breakthrough came not from the IT consultant but from the portal vendor’s support team. After the firm reported persistent connectivity issues, the vendor’s engineer asked the firm to run a specific test: resolve the portal’s domain from within the office network and compare the result to a public DNS lookup.

Following the steps of what we now call the five-step troubleshooting sequence, the engineer talked a staff member through an nslookup on an office machine and had her run the same lookup on her phone over mobile data.

The office machine returned an IP address that did not match the portal’s legitimate server. The phone, on mobile data, returned the correct IP.

The router was returning wrong answers for specific domains.
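That comparison can be reproduced in a few lines of code rather than two nslookup sessions. The sketch below again assumes the third-party dnspython package and a placeholder hostname: it resolves one domain through the locally configured resolver and through a public resolver, then flags any mismatch.

```python
# Reproduce the office-versus-public comparison in code: resolve one domain
# through the locally configured resolver and through a public resolver, then
# flag any mismatch. Assumes the third-party dnspython package is installed;
# the hostname below is a placeholder, not the firm's real portal.
import dns.resolver

DOMAIN = "portal.example.com"   # placeholder for the real portal hostname
PUBLIC_RESOLVER = "1.1.1.1"     # Cloudflare's public resolver

def lookup(name, nameservers=None):
    """Return the set of A-record IPs for name, optionally via specific nameservers."""
    resolver = dns.resolver.Resolver()
    if nameservers:
        resolver.nameservers = nameservers
    return {rr.to_text() for rr in resolver.resolve(name, "A")}

local_ips = lookup(DOMAIN)                       # whatever the office network hands out
public_ips = lookup(DOMAIN, [PUBLIC_RESOLVER])   # an independent second opinion

print(f"Local resolver answer : {sorted(local_ips)}")
print(f"Public resolver answer: {sorted(public_ips)}")

if local_ips != public_ips:
    print("MISMATCH: investigate before assuming a routine DNS problem.")
```

Note that services served from content delivery networks can legitimately return different addresses to different resolvers, so a mismatch is a prompt to investigate, not proof of compromise on its own.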

The IT consultant was called back. This time, he logged into the router’s admin panel. The credentials were the factory defaults printed on the sticker underneath the device. They had never been changed.

The router’s DNS settings had been modified. Not to 8.8.8.8 or 1.1.1.1. Those changes had been overwritten. The primary DNS was now set to an IP address nobody recognised: an external server that was selectively intercepting queries for specific domains and returning different results.

The router firmware had not been updated since installation three years earlier. A known vulnerability in that firmware version allowed remote administrative access when combined with default credentials. The router had been compromised, and the attacker had configured it to redirect DNS queries for specific high-value services through their own resolver.

What the Attacker Gained

The investigation that followed, conducted by a specialist incident response firm, revealed the scope of the compromise.

The attacker’s rogue DNS resolver had been selectively redirecting traffic for two services: the practice management portal and the secure document sharing platform. For the portal, redirection was intermittent, suggesting either testing or capacity limitations on the attacker’s infrastructure. For the document sharing service, redirection had been more consistent.

The incident response team could not confirm whether credentials had been captured, because the firm had no logging in place. No DNS query logs. No network traffic records. No endpoint detection. The absence of evidence was not evidence of absence. The firm was advised to treat all credentials used during the three-week period as compromised and to notify affected clients under their GDPR obligations.

For a fifteen-person accountancy firm handling sensitive financial data for two hundred clients, the potential exposure was significant. The cost of the incident response engagement, the mandatory client notifications, the reputational damage, and the ICO reporting obligation far exceeded what it would have cost to change the router’s default credentials and update its firmware.

Where the Diagnosis Failed

Looking back at the three-week misdiagnosis, the failure points are painfully clear.

No structured troubleshooting process. At no point did anyone follow a systematic sequence. The response was reactive: change a setting, see if it works, change another setting.

No baseline documentation. Nobody knew what the router’s DNS settings should be. There was no record of the original configuration. When the senior accountant changed the settings on day one, the original state was lost.

No device comparison. The selective failure pattern, specific services failing while others worked, was visible from day one. Testing another device on a different network would have revealed the discrepancy immediately.

No IP comparison. The test that ultimately identified the problem, comparing the resolved IP from the office network against the result from mobile data, could have been performed on day one. It took three weeks because nobody thought to do it.

Default router credentials. The router had been installed three years earlier and never had its admin password changed. The factory credentials were printed on a sticker on the device itself.

No firmware updates. The router was running firmware with known vulnerabilities. Updates were available but had never been applied.

How to Turn This Into a Competitive Advantage

If you are an accountancy firm, a solicitor’s practice, or any professional services business that handles sensitive client data, your ability to demonstrate network security hygiene is increasingly relevant to client confidence and regulatory compliance.

Being able to show that you use protective DNS, that your router credentials are changed and firmware is current, and that you have a documented troubleshooting process tells clients something concrete about how seriously you take their data.

The firms that get this right will win mandates from clients who are starting to ask the right questions. The firms that are still running three-year-old router firmware with factory passwords will eventually feature in case studies like this one.

How to Sell This to Your Board

Three weeks of disruption had a direct revenue cost. Staff using mobile hotspots, unable to access core systems, working inefficiently: that is billable time lost. Quantify it: multiply the number of affected staff by their daily rate by the number of days impacted. For illustration, three affected staff at £400 a day over fifteen working days is £18,000 of lost capacity before any remediation costs.

The incident response cost exceeded the prevention cost by orders of magnitude. Changing router credentials and updating firmware takes an hour. The incident response, client notifications, and regulatory reporting cost the firm the equivalent of months of IT budget.

Client trust is the core asset. For a professional services firm, trust is not abstract. It is the reason clients share sensitive financial information with you. A breach that compromises that information damages the asset that generates all your revenue.

What This Means for Your Business

  1. Change your router’s admin credentials. If they are still the factory defaults, change them today. Use a strong, unique password stored in a password manager. This is the single highest-impact action in this entire article.

  2. Update your router firmware. Log in and check for available updates. Set a calendar reminder to check quarterly. If your router is too old to receive updates, replace it.

  3. Document your baseline DNS configuration. Record what DNS servers your router uses, when they were set, and why. Store this somewhere accessible to anyone who troubleshoots network issues (a minimal, scriptable version of this check is sketched after this list).

  4. Follow the five-step sequence before changing anything. Another device, another network, check resolver, clear cache, compare IPs. Print it. Pin it up. Make it the first response to every “the internet is broken” complaint.

  5. Treat unexpected IP results as a security incident. If your office resolver returns a different IP from public resolvers for the same domain, investigate immediately. Check router settings. Check endpoint configurations. If anything has changed without documentation, engage professional incident response.
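To make steps 3 to 5 repeatable rather than something you remember only during an outage, the sketch below records a documented baseline of expected DNS servers and warns when a machine's live configuration drifts from it. It carries the same assumptions as the earlier examples: dnspython installed, and baseline values that are illustrative rather than a recommendation.

```python
# Sketch of a baseline check: record which DNS servers your machines are
# expected to use, then warn when the live configuration drifts. Requires the
# third-party dnspython package; the baseline values below are illustrative.
import dns.resolver

# Documented baseline: what the resolvers should be, when they were set, and why.
BASELINE = {
    "expected_nameservers": ["9.9.9.9", "149.112.112.112"],  # example: Quad9
    "set_on": "2025-02-20",
    "reason": "Protective DNS adopted after router compromise",
}

resolver = dns.resolver.Resolver()  # reads the system's current DNS settings
current = resolver.nameservers

print(f"Documented resolvers: {BASELINE['expected_nameservers']}")
print(f"Current resolvers   : {current}")

unexpected = [ns for ns in current if ns not in BASELINE["expected_nameservers"]]
if unexpected:
    print(f"WARNING: undocumented resolver(s) in use: {unexpected}")
    print("Check the router's DNS settings and treat unexplained changes as an incident.")
```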

Sources

  • NCSC: Protective DNS for the private sector
  • NCSC: Managing Public Domain Names
  • NCSC: Small Business Guide
  • ICO: Report a Breach
  • DSIT / GOV.UK: Cyber Security Breaches Survey 2025
  • ISC: BIND 9 CVE-2025-40778 Advisory
  • Cloudflare: 1.1.1.1 DNS Resolver
  • Quad9: Quad9 DNS Security and Privacy

Filed under

  • smb-security
  • uk-business
  • business-risk
  • incident-response
  • vendor-risk
  • compliance-failure
  • supply-chain-risk