Don’t Be Fooled by the Blame Game: WannaCry Shows Gross Unpreparedness

Doug Shepherd
Director of Operations, Nisos Group

Despite the insistence of many pundits and technical experts, the recent WannaCry outbreak was – mercifully – a poorly organized attack with a poorly constructed tool. It was, in fact, the best of all worst-case scenarios. This salvo – and the attendant global reaction – only highlights the degree of gross unpreparedness within U.S. and global infrastructure against what could have been – but was not – a sophisticated and truly devastating incident.

While much attention is paid to governmental “zero-day hoarding” or to the security of Microsoft’s operating system, this energy is misdirected. In fact, WannaCry exploited a vulnerability in the SMBv1 file-sharing service that Microsoft patched months ago, and that US-CERT, the U.S. government’s cyber response team, recommended be disabled earlier this year. It is far more a damning instance of the victims failing to understand, and secure, their own risk surface and environment.

The WannaCry malware itself was neither novel nor complex; it used the same scheme of encrypting files and demanding Bitcoin payment as innumerable common ransomware variants. Mistakes made by WannaCry’s authors – such as not properly obfuscating the ransom-payment addresses and not registering the “kill-switch” domain – suggest that this was not a well-orchestrated campaign, as evidenced by WannaCry’s paltry ransom earnings compared to those of major long-running malware campaigns. Like most modern malware campaigns, the attack was opportunistic, and the attackers correctly assumed that hundreds of thousands of IT assets in critical organizations would be unpatched. They also correctly assumed that a decade and a half after large outbreaks targeting similar vulnerabilities – see, for example, Blaster, Nimda, and Sobig – people would still leave file shares open to the internet.
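The kill-switch blunder is easy to illustrate. Before spreading, the worm attempted to resolve a hard-coded, unregistered domain and halted if the lookup succeeded – which is why a researcher simply registering that domain effectively sinkholed the campaign. A minimal sketch of that logic in Python, using a hypothetical placeholder domain rather than the real one:

```python
import socket

# Hypothetical stand-in for the worm's hard-coded kill-switch domain.
KILL_SWITCH_DOMAIN = "example-unregistered-domain.invalid"

def kill_switch_tripped(domain: str) -> bool:
    """Mimic WannaCry's check: if the domain resolves, stop spreading."""
    try:
        socket.gethostbyname(domain)
        return True   # domain resolves -> campaign has been sinkholed -> halt
    except socket.gaierror:
        return False  # unresolvable -> worm would continue spreading
```

Because the authors never registered the domain, the check failed everywhere until a third party registered it and the lookups started succeeding globally.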

The extraordinary attention that WannaCry continues to receive – despite its lack of sophistication and true impact – presents strong evidence that the commercial sector is grossly unprepared for a truly damaging and systemic incident. What would have resulted had the attack been strategic – for example, spending weeks infecting file shares and critical network servers before demanding a ransom? What if the authors had built in more resilience against being shut down or “sink-holed?” What if the authors had exfiltrated the encrypted files and offered them for sale to competitors? What if hacktivists had simply dumped sensitive patient data, trade secrets, and other critical files for all to see?

Given such a potent vulnerability and such a target-rich environment, the recurring theme of WannaCry is that impacted organizations providing critical services – like the UK’s National Health Service – got lucky that such a relatively benign piece of malware was the wake-up call – if, in fact, organizational leadership has awoken from its slumber.

While it is a normal reaction to shift blame – in this case, various tech writers and Microsoft blaming intelligence agencies – the attacks succeeded because the victims were running unpatched and outdated operating systems with easy access to the internet. Blaming the government is convenient, but consider that US-CERT issued a critical warning in January 2017 to disable the very service the tool exploited, which would have completely negated WannaCry’s propagation mechanism. All this outrage at “zero days” and nation-state intrigue unproductively shifts the conversation away from the real issue: it is critical for organizations to accurately understand and mitigate cyber risk.

In response to WannaCry, Microsoft made the bold move of reversing its decision to end support for Windows XP and issued an out-of-band patch to address the vulnerability that WannaCry exploits. But despite this concession, networks will continue being infected by WannaCry variants for years. US-CERT reiterated that disabling the vulnerable service will halt all variants of WannaCry, and yet those variants are still actively infecting systems. For the last 15 years, it has been commonly accepted best practice not to expose file servers and shares to the internet, and yet it appears “Patient Zero” and a host of other victims were doing just that. Security experts in the legal and healthcare industries are now frantically deploying these months-old remediations amid little warning and unplanned outages, because cyber risk was never properly quantified and addressed, and formal best practices like change control and planned outage windows were tossed out in the panic.
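Verifying whether the propagation avenue is even reachable is trivial. As an illustrative sketch – not a substitute for a real vulnerability scan, and with the host list as a placeholder assumption – the following Python snippet checks whether a host accepts connections on TCP port 445, the port WannaCry’s spreading mechanism targeted:

```python
import socket

def smb_port_open(host: str, port: int = 445, timeout: float = 2.0) -> bool:
    """Return True if the host accepts a TCP connection on the SMB port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unreachable
        return False

# Hypothetical audit of hosts for internet-facing SMB exposure.
for host in ["127.0.0.1"]:
    status = "exposes" if smb_port_open(host) else "does not expose"
    print(f"{host} {status} port 445")
```

Any host in an external audit that answers on 445 represents exactly the exposure US-CERT warned about months before the outbreak.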

Here’s the unfortunate reality: organizations across all sectors are regularly infected by malware that is upwards of five years old, propagating via vulnerabilities that were patched years ago. Corporations are still running intranet applications that require old, vulnerable browser versions to access. Institutions are still set up with their intellectual-property crown jewels residing on a server in the same network to which vendors and contractors have access. Likewise, hackers can always find the one person in an organization willing to click on a phishing e-mail, or who can be enticed to “allow” or “install” or respond to some other dubious call to action.

These are not symptoms of some shortcoming of the public-private partnership – these are indicators of a fundamental lack of understanding, preparation, and mitigation of cyber risk. That’s the conversation that should really matter to companies.

Institutions across the board certainly understand cyber risk in generalities – in terms like “if our customer database with social security numbers gets hacked, we’ll lose everything” – but frequently fail to accurately and adequately quantify that risk and assess what factors could contribute to a catastrophic loss. Boxes checked in columns like “antivirus” and “firewall” factor into the risk calculus but oftentimes engender a false sense of security; these technologies do nothing to protect against disaffected insiders who intentionally deploy malicious code, nor against insufficiently educated users who happily click everything.

Similarly, “compliance appliance” devices check a few more of these boxes, but hardly account for real-world threats that move through a network faster than defenders can keep up. Commoditized “turnkey solutions” are increasingly ineffective against modern malware campaigns that repack and refactor themselves multiple times a day – precisely the threats targeting most companies and institutions today.

Putting aside the speculation on who is ultimately responsible for various leaks, what are the true takeaways for industry at large? It is time to treat cyber risk as seriously as other business-critical functions like accounting audits or succession planning. This should include periodic audits, proper access control, intensive and aggregated monitoring of critical choke points for critical data and assets, proper segmentation of security technologies and data that does not impede business continuity, and updating internal best practices to match the evolving threat landscape.

Failure to do so will not only have significant business and financial consequences; it may also invite unwanted regulatory or compliance mandates.

The Author is Doug Shepherd

Doug Shepherd is the Director of Operations at Nisos Group. He is a former network operator for the National Security Agency and previously owned an offensive security consultancy and worked in the Middle East. Shepherd also worked in Symantec’s incident response group and has a passion for novel malware.
