According to press reports, the White House has considered and rejected four options to address the so-called “going dark” problem, in which the growing ubiquity of encryption makes it harder for law enforcement agencies to collect evidence and investigate crimes. The options considered include adding an encrypted port to devices, using software updates to compromise devices, splitting encryption keys, and uploading encrypted data to an unencrypted backup location.
All these options have significant drawbacks, and none has been put forward as an administration proposal. Yet one idea seemingly didn’t make the cut: law enforcement exploitation of vulnerabilities, or what Steven M. Bellovin, Matt Blaze, Sandy Clark, and Susan Landau dub “lawful hacking.”
In their 2014 paper, “Lawful Hacking: Using Existing Vulnerabilities for Wiretapping on the Internet,” the team of respected researchers concludes that the approach can allow investigations to move forward on a targeted basis without creating the ability for mass exploitation by law enforcement, or easy exploitation by third parties as in the Athens Affair, in which attackers hijacked the lawful-intercept capability built into Vodafone Greece’s network to tap the phones of senior Greek officials.
Yet the downsides to the approach are also clear. The US government has an interest in improving the security of the software and hardware that US companies and US government agencies rely on. When the federal government discovers a vulnerability, it is run through the Vulnerabilities Equities Process, which tries to balance the US interest in exploiting vulnerabilities against the interest in sharing them with software and hardware makers so that they can be fixed.
In an April 2014 blog post, White House Cybersecurity Coordinator Michael Daniel explained some of the factors that go into a decision to release or exploit a vulnerability. Among the criteria he lists is whether “…the vulnerable system [is] used in the core internet infrastructure, in other critical infrastructure systems, in the U.S. economy, and/or in national security systems.”
Daniel put out the post after accusations surfaced that the NSA had known about and exploited the Heartbleed vulnerability, something the agency publicly denied in a tweet. Exploiting Heartbleed, a vulnerability in OpenSSL, the encryption library underpinning much of the secure web, clearly wouldn’t pass the above test.
But Daniel went on to defend the fact that the US government does use vulnerabilities to collect intelligence: “Disclosing a vulnerability can mean that we forego an opportunity to collect crucial intelligence that could thwart a terrorist attack, stop the theft of our nation’s intellectual property, or even discover more dangerous vulnerabilities that are being used by hackers or other adversaries to exploit our networks.”
That calculus might be easy in a circumstance in which a vulnerability is discovered, for instance, in a foreign-made router with zero penetration of the US or allied markets. But try applying that calculus to the iPhone. Used in critical infrastructure systems? Check. In national security systems? Check. In the US economy? Check.
Withholding a vulnerability like that sounds like a bad idea. Unfortunately, if requiring companies to turn over digital communications to law enforcement investigators armed with a warrant is untenable, then finding, maintaining, and exploiting vulnerabilities may be the only viable alternative.
Having the US government work in an adversarial relationship with US companies is disquieting. In a certain sense, a built-in, purposeful backdoor that can only be accessed following due process is a lot less scary than the idea that law enforcement might have a secret way, known to no one else, to access all smartphones. If the government needs Apple to decrypt a phone, the warrant provides a built-in check.
But for US companies competing abroad, building in such a capability would kill their overseas market share. After all, Apple sells more of those “designed in California” iPhones in China than it does in the United States. And from a technical perspective, the idea that such a known backdoor would in fact be accessible only to law enforcement with a warrant is laughable.
Many in the privacy and technology communities have taken aim at the idea that the US government should ever exploit a vulnerability, arguing that any vulnerability discovered should immediately be turned over to the vendor so it can be fixed. Such a position misses the reason the US government invests time and money in discovering vulnerabilities in the first place.
There’s a basic question that anyone who opposes both creating a lawful intercept program and government exploitation of vulnerabilities needs to answer: does the United States government need to be able to investigate crimes and collect intelligence in the digital age?
If your answer is no, at least your point is clear. But terrorists and child molesters aren’t imagined specters conjured up by government officials to justify an Orwellian system of domestic surveillance bent on political control. Stopping people who want to commit crimes and harm our nation is a legitimate and worthy goal, and it often requires gaining access to digital evidence. Finding and exploiting vulnerabilities is a terrible solution. But it may be better than the alternatives.