Zero-day vulnerabilities, security flaws in commercial software or hardware that are unknown to their developers and therefore unpatched, have existed since the dawn of the Digital Age. But today, former NSA and CIA director Michael Hayden said at a meeting of cyber security experts convened by The Cipher Brief, they are readily discovered and exploited by nation-state actors and malicious hackers alike.
In the nineties, “it wasn’t really a vulnerability if you didn’t have eight acres of Cray supercomputers in the basement,” Hayden said at The Cipher Brief’s Cyber Advisory Board Meeting. “We had this concept called NOBUS, which meant ‘nobody but us’ could exploit it. It required something only we had, like massive computer power, to exploit it.”
But now, Hayden says, “that premise is getting more and more narrow as other governments and even private companies have the capacity to exploit [vulnerabilities]. The former exclusivity for NSA to do certain things has eroded. In the modern world, there are very competent [private sector] actors that might do it.”
This new reality is forcing intelligence agency leaders to make tough calls. When should they tell developers, manufacturers or businesses that their software or hardware has a security hole?
If they disclose and the vendor quickly issues a patch, the intelligence agencies may lose the ability to gain access to the system for legitimate national security purposes. If they keep the private sector in the dark, businesses and innocent people could suffer invasions of privacy, theft and financial losses.
“I don’t think there’s very many clean lines at all,” an executive for a major electronics manufacturer said at The Cipher Brief’s Cyber Advisory Board Meeting.
Much of the public was unaware of this dilemma until last month, when WikiLeaks triggered a major security-versus-privacy debate by publishing a trove of purported CIA hacking tools devised to find, preserve and exploit these flaws and other security gaps to gather intelligence. Earlier this month, a hacker group calling itself the Shadow Brokers released data showing how the NSA was accessing certain private sector systems.
Some analysts contend that the American intelligence community has an obligation to notify companies of previously unrecognized cyber vulnerabilities immediately, on the grounds that if U.S. government specialists could find cracks in their systems, so could hostile governments, tech-savvy militant groups, hacktivists and criminals bent on intellectual property and financial theft, extortion or blackmail. Others insist that the intelligence community and law enforcement need to see that certain systems remain unpatched, essentially keeping unlocked doors open, to conduct intelligence gathering in the overriding interest of national security.
Some members of The Cipher Brief’s Cyber Advisory Board cautioned that the zero-day issue has been oversimplified and overblown by the press. The factors involved in cybersecurity are far more complex and multi-faceted than what one participant called a “binary” discussion about disclosure versus secrecy.
“Post-Snowden, after the President launches his study commission, there is almost a theological tsunami about zero days and what the agency should do,” said one former senior intelligence official.
“You’re almost asking the spies to give up the ability to spy,” said another.
“The number one vulnerability that gets exploited is human,” said a cyber security expert in the private sector, adding that the zero-day debate is merely a symptom of a larger problem of cyber insecurity.
The number of zero-day vulnerabilities discovered by U.S. intelligence is highly classified. But last month, the RAND Corporation offered the public a rare glimpse into the arcane world of so-called “zero days” with a study tracking 200 security holes after they were discovered by U.S. agencies. The study, entitled “Zero Days, Thousands of Nights: The Life and Times of Zero-Day Vulnerabilities and Their Exploits,” concluded that zero-day vulnerabilities had an average life span of 6.9 years, defined as the time between discovery and public disclosure. The “collision rate,” the rate at which two independent parties discover the same vulnerability, was just 5.7 percent per year, according to the RAND study.
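Taken at face value, those two figures help explain why the study fed both sides of the disclosure debate. Assuming, purely for illustration, that the 5.7 percent annual collision rate holds constant, the chance that someone else independently finds a given flaw at some point during its 6.9-year average life works out to roughly 1 − (1 − 0.057)^6.9, or about one in three. That is a back-of-the-envelope figure rather than a finding of the study, but it captures the tension: most stockpiled vulnerabilities stay exclusive, yet a meaningful fraction do not.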
What any given actor, be it a friendly or unfriendly intelligence agency or a non-state hacker, can do once it has found a zero-day hole is, to put it simply, whatever it wants, unless it is restrained by national law and policy, as many intelligence agencies are. In theory, an actor can save its knowledge of the flaw for another day, or use it and wander broadly through the compromised system.
“There is almost no defense against a zero-day attack,” Leyla Bilge and Tudor Dumitras of Symantec Research Labs wrote in an article for the 2012 ACM Conference on Computer and Communications Security. “While the vulnerability remains unknown, the software affected cannot be patched and anti-virus products cannot detect the attack through signature-based scanning. For cyber criminals, unpatched vulnerabilities in popular software, such as Microsoft Office or Adobe Flash, represent a free pass to any target they might wish to attack, from Fortune 500 companies to millions of consumer PCs around the world. For this reason, the market value of a new vulnerability ranges between $5,000–$250,000.”
On the other hand, intelligence agencies can sometimes use zero-day vulnerabilities to accomplish what diplomacy and sanctions cannot. A well-known example is the Stuxnet worm, which exploited four zero-day vulnerabilities to sabotage the centrifuges in Iran's nuclear program.
The New York Times reported in 2012 that the National Security Agency and Israeli intelligence developed the digital worm that burrowed into the Iranian nuclear program and sabotaged it; independent security experts who analyzed the code dubbed it Stuxnet.
One issue that’s unlikely to be resolved is who decides, when an intelligence agency discovers a security hole in a corporate system, whether that vulnerability should be publicly disclosed. Former NSA officials at the Cyber Advisory Board Meeting emphasized that, given the NSA’s dual mission of defense and foreign intelligence, the agency has taken the disclosure process very seriously for decades. Nonetheless, the administration of President George W. Bush recognized that an independent body inside the government was necessary, and the Obama White House set up the Vulnerabilities Equities Process (VEP) to weigh the decision.
Ari Schwartz, former Special Assistant to the President and Senior Director for Cybersecurity Policy at the White House National Security Council, told The Cipher Brief that the VEP panel should consider a list of questions, including, “How much is the vulnerable system used in the core internet infrastructure, in other critical infrastructure systems, in the U.S. economy, and/or in national security systems? Does the vulnerability, if left unpatched, impose significant risk? How much harm could an adversary nation or criminal group do with knowledge of this vulnerability? How likely is it that we would know if someone else were exploiting it? How badly do we need the intelligence we think we can get from exploiting the vulnerability?”
“Offense needs to inform defense,” a senior executive for a major company that develops and sells communications hardware and software said at The Cipher Brief’s Cyber Advisory Board session. “If you allow known vulnerabilities to exist in those critical infrastructure industries … a risk to one is a risk to all. That doesn’t mean broadcast them to the world.”
But the chief executive officer of a cyber security firm argued that zero-day vulnerabilities are so seldom independently discovered that revealing their existence could invite unwelcome attention from malicious hackers and adversaries, who could exploit them before vendors can patch the hole. “You could make the argument that disclosing is making us less safe,” he said.
Elaine Shannon is a contributing national security editor at The Cipher Brief.