Connectivity continues to enmesh businesses, governments, societies and people – a trend that will only accelerate with the growth of public cloud services and devices linked together in the Internet of Things. But some of the most sensitive sectors are attempting to cordon off their networks from the outside. Highly sensitive information, including that held by military units, intelligence agencies, and companies responsible for transportation, energy, finance and other critical infrastructure elements, is often held within networks disconnected, or “air gapped,” from the global internet. Unlike common hacks that use the internet to reach directly into an organization’s data, a breach into an air-gapped system often requires some level of physical access.
How can hackers jump the air gap and exfiltrate, or extract, data? If an air gap doesn’t truly insulate a highly sensitive network, why use it?
The scenarios that can play out if a sensitive network is breached are limited only by the imagination. For example, if an isolated military or critical infrastructure network is compromised, consequences can range from your run-of-the-mill cyber espionage to blackmail, sabotaging weapons systems, causing power outages, or subtly undermining nuclear weapons development projects. As the recent hack of the Democratic National Committee shows, the consequences of breaches are less about the hacks themselves than about how the access is operationalized as part of a larger strategy.
Oren Falkowitz, CEO of Area 1 Security and former member of the NSA’s Tailored Access Operations unit, says that air gaps shouldn’t be abandoned because they are imperfect. They “can make life hard for hackers,” he says. For example, a recent breach of the Singaporean Ministry of Defense was contained to unclassified networks because the attackers were unable to reach the air-gapped networks that held more sensitive information.
But when it comes to cybersecurity, there is no single solution – air gaps included. Isolating networks from the public internet should be considered just one precaution within a broader strategy of deterrence by denial. An air gap can push poorly resourced, unmotivated freelance hackers to move on to lower-hanging fruit.
Who does that leave? Nation-states – which have the motive and can marshal the persistence, resources, and capability – will be the primary threats to sensitive networks beyond the reach of the internet.
“When hackers imagine how to breach these systems,” Falkowitz argues, “they look for a method that will allow: regular, even if intermittent, access; a vehicle to execute code and exfiltrate data; and an opportunity to proliferate throughout a network.”
Possibly the riskiest method used to jump an air gap onto a secure network is what the NSA would call a “close access” operation, in which an operator physically infiltrates a secure facility to insert a removable drive that delivers prepackaged malware onto the isolated computer. For such a mission, the NSA integrates its target exploitation (TAREX) units with the CIA to conduct “off-net” physical break-ins. The Vault 7 cache of documents recently released by WikiLeaks made reference to the CIA’s use of this type of air gap jumping malware, known as HammerDrill v2.0.
As an example, the Stuxnet worm, discovered in 2010 to have sabotaged Iranian nuclear centrifuges, moved undetected in the Natanz networks, thanks to digital certificates – similar to software passports – stolen from two isolated servers in Taiwan. Experts believe that the certificates were purloined during physical break-ins. The Stuxnet worm, and two other affiliated pieces of malware called Flame and Fanny that conducted preliminary reconnaissance to map out Iranian networks, appear to have been designed to operate within an air-gapped network as well. Yet, it is unclear whether the person who introduced the drive carrying the eventual Stuxnet payload into the Natanz network acted wittingly or not.
Falkowitz points out that what an intelligence agency really wants is “a persistent method of gaining access, such as a backdoor you open and walk through again and again.” To that end, he says, “social engineering, or manipulating people, is the best way into an air-gapped network.”
One such method is through supply chain interdiction. For example, in 2008 Russian intelligence allegedly implanted a worm on thumb drives en route to retail kiosks near NATO headquarters in Kabul. One of the drives was inserted into a computer connected to the isolated network of the U.S. Central Command, which was overseeing combat zone operations in both Iraq and Afghanistan. While the direct impact is uncertain, the hackers could have threatened the United States’ global logistics network, stolen operational plans in active war zones, undermined the integrity of intelligence, revealed the identities of confidential local informants, or sabotaged the Pentagon’s ability to accurately deliver targeted strikes.
Another, far simpler, approach is to sprinkle infected USB drives around the targeted organization’s parking lot in the hope that an employee will pick one up, take it inside and use it.
Both the NSA and CIA are thought to engage in supply chain interdiction, for instance, by intercepting devices ordered through the mail, implanting them with beacons or other monitoring hardware, and then forwarding them to the intended recipient: an unwitting insider with access to the target network.
Once hackers successfully find their way into an air-gapped network, how do they extract the data? Mordechai Guri, head of research and development at the Ben Gurion Cyber Security Research Center in Israel, argues that “while infiltrating air-gapped systems has been shown feasible, the exfiltration of data from systems without internet connectivity is a challenging task.”
Still, it can be done, he says. For instance, air-gapped networks must occasionally connect to the outside in order to update and patch software. These moments present opportunities for hackers. Hackers can also extract data the same way they infiltrated a network: with a worm that aggressively copies itself onto removable drives, spreading through the network and gathering data until it eventually infects a device – such as an employee’s home computer – that is connected to the internet. The malware would be pre-programmed to “phone home” once it reaches the public internet. This process – colloquially known as a sneakernet – was how the Stuxnet worm allegedly relayed information back to NSA-controlled command and control servers as it navigated Iran’s isolated network at the Natanz nuclear facility.
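To make the staged “phone home” pattern concrete, here is a minimal, benign Python sketch. Everything in it – the class, its method names, and the crude reachability probe – is a hypothetical illustration of the general idea, not code from Stuxnet or any real implementation; actual malware of this kind is vastly more sophisticated.

```python
import socket


class SneakernetAgent:
    """Illustrative sketch only: data is staged while offline inside an
    isolated network, then relayed once the code finds itself on a host
    (e.g., an employee's home computer) that can reach the internet."""

    def __init__(self, connectivity_check=None):
        # The connectivity test is injectable so the sketch can be
        # exercised without any real network access.
        self._is_online = connectivity_check or self._default_check
        self.staged = []   # data gathered while inside the air gap
        self.relayed = []  # data that has been "phoned home"

    @staticmethod
    def _default_check():
        # Crude reachability probe: can we open an outbound TCP connection?
        try:
            socket.create_connection(("example.org", 80), timeout=2).close()
            return True
        except OSError:
            return False

    def stage(self, item):
        """Collect data while spreading through the isolated network."""
        self.staged.append(item)

    def try_phone_home(self):
        """Relay staged data only if the current host is internet-connected."""
        if not self._is_online():
            return False  # still inside the air gap; keep waiting
        self.relayed.extend(self.staged)
        self.staged.clear()
        return True
```

On an air-gapped host the staged data simply accumulates; the first time the code runs on a connected machine, a single check flushes everything outward – which is why one employee’s dual-use device can defeat the whole gap.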
Guri, however, says that it is also possible to exfiltrate data from air-gapped networks without internet connectivity, either by electromagnetic, acoustic, thermal, or optical transmissions. For example, he says, an optical method called LED-it-GO “enables data leakage from air-gapped networks via the hard-drive indicator LED, which exists in almost every computer. The injected malware blinks the LED at high frequencies of thousands of blinks per second. Outside of the building, a small quadcopter drone with a camera could receive the Morse code-like transmission through a window.”
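The idea Guri describes can be illustrated with a toy encoder and decoder. The sketch below uses simple on-off keying – one bit per LED state per clock tick – rather than actual Morse code, and the function names and framing are assumptions made for illustration, not the LED-it-GO implementation.

```python
def bytes_to_blinks(data: bytes) -> list[int]:
    """Encode bytes as a sequence of LED states (1 = on, 0 = off),
    one state per clock tick. A receiver such as a camera sampling
    the LED at the same tick rate can recover the bit stream."""
    bits = []
    for byte in data:
        for i in range(7, -1, -1):  # most significant bit first
            bits.append((byte >> i) & 1)
    return bits


def blinks_to_bytes(bits: list[int]) -> bytes:
    """Decode a sampled sequence of LED states back into bytes,
    discarding any trailing partial byte."""
    out = bytearray()
    for i in range(0, len(bits) - len(bits) % 8, 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)
```

Even this naive one-bit-per-tick scheme shows why the channel is plausible: at the thousands of state changes per second Guri cites, it would move on the order of hundreds of bytes per second, far more than enough to leak passwords or encryption keys.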
Each approach, however, has pros and cons. Computers can be kept away from windows. Guri also acknowledges that “while leaking data across the air gap has been demonstrated in research labs, it is largely considered a theoretical and academic topic. It is important to note that in real world situations, most data leakage is taking place over the internet, using emails, compromised media, or malicious mobile applications.”
“The bottom line,” according to Falkowitz, is that “super high tech snooping and black bag intrusions are overkill as long as you can manipulate the weakest part of the system – people.”
Levi Maxey is a cyber and technology producer at The Cipher Brief. Follow him on Twitter @lmaxey13.