The Attacker Has the Advantage in Cyberspace. Can We Fix That?

By Jason Healey

Jason Healey is a Cipher Brief Cyber Advisor and Senior Research Scholar at Columbia University’s School of International and Public Affairs, and Visiting Scholar at the Hoover Institution at Stanford University, specializing in cyber conflict and risk. He started his career as a U.S. Air Force intelligence officer before moving to cyber response and policy jobs at the White House and Goldman Sachs. Healey was founding director for cyber issues at the Atlantic Council, where he remains a Senior Fellow, and is the editor of the first history of conflict in cyberspace, A Fierce Domain: Cyber Conflict, 1986 to 2012. He is on the DEF CON review board and served on the Defense Science Board task force on cyber deterrence.

It is not news that cyberspace is insecure. Attackers have had the advantage over defenders for not just years, but decades. Quotes from decades ago make it clear that cyber defenders then faced the same challenges we do today (and with a similar lack of success).

When was the last presentation you heard that had anything as smart as the following?

“The system designer must be aware of the points of vulnerability, which may be thought of as leakage points, and he must provide adequate mechanisms to counteract both accidental and deliberate events. The specific leakage points include physical surroundings, hardware, software, communication links, and organizational (personnel and procedures).

“A combination of hardware, software, communications, physical, personnel and administrative-procedural safeguards is required for comprehensive security. In particular, software safeguards are not sufficient.”

That was part of the Ware Report, published in 1970.

Defenders have not gained any lasting advantage from four decades’ worth of innovation, tens or hundreds of billions of dollars spent on security, or the tens of thousands of certified cyber defenders. Cyberspace remains “attacker advantage.”

Recently, a group of 30 leading experts with backgrounds as cybersecurity executives, technologists, former government officials and academics came together to form the New York Cyber Task Force, which has launched a new report on how to create a more defensible cyberspace.

Keeping cyber attackers from gaining a foothold in computers—and kicking them out once they do—remains easy to imagine but difficult to accomplish in practice. Why has this been so challenging? Every cyber defender has their own favorite reason. The New York Cyber Task Force identified the following as some of the most important:

Internet architecture: “The internet is not insecure because it is buggy, but because of specific design decisions” to make it more open, explains pioneer computer scientist David Clark.

Software weaknesses: Not only is it impossible to write bug-free code, but, “There are no real consequences for having bad security or having low-quality software…. Even worse, the market-place often rewards low quality,” said security expert Bruce Schneier in 2003.

Attacker initiative: An “attacker must find but one of possibly multiple vulnerabilities in order to succeed; the security specialist must develop countermeasures for all,” according to the 1991 report Computers at Risk.

Incremental solutions: Fixes typically target symptoms rather than underlying problems. To paraphrase Phil Venables, New York Cyber Task Force co-chair, the uninterrupted production of insecure IT products forces companies to buy ever more IT security products.

Attacker incentives: Cyber crime, warfare and espionage can seem risk-free because of the often difficult process of attribution, the ease of crossing borders to stymie law enforcement, the sanctuary certain nations offer cyber criminals, and differing national laws.

Impact on convenience: Improved security often imposes costs on ease of use. As a result, it is frequently bypassed, or never even implemented, by individual users and beleaguered IT staff.

Arcane security and opaque products: “Most consumers have no real-world understanding of [cybersecurity] and cannot choose products wisely or make sound decisions about how to use them.” This is as true today as when it was written in the 1991 Computers at Risk report. Cybersecurity has gotten so complex that even IT staff struggle to understand the products.

Longevity of attack methods: Attacker innovation in cyberspace is often unnecessary because older, simpler tools remain effective against most targets.

Troublesome humans: People can be tricked or grow disgruntled and, in the words of one expert, “are always the weakest link.… You can deploy all the technology you want, but people simply cannot be programmed and can’t be anticipated.”

Rapid pace of technological change: The accelerating pace of change produces ever-larger attack surfaces and demands ever more skills, education and certifications for successful defense.

Complexity: Defending this attack surface has required a profusion of new tools. It has been known since at least 1980 that “increasing complexity increases cost” and “decreases the predictability of new costs.”

Sentient opponents: According to expert Dan Geer, “the one thing that may make cybersecurity different … is that we have sentient opponents … [so the] puzzles we have to solve are not drawn from some generally diminishing store of unsolved puzzles,” as in physics or economics. Those opponents fight for access to our systems in pursuit of profit, intelligence, military advantage or curiosity.

Lack of coherent strategy: Few, if any, of the various reports or cyber strategies lay out an overall approach to bind the work together or to guide choices between competing priorities. They are instead lists of critical tasks with no underlying theory of how those tasks will lead to success.

The factors that have made cyberspace less defensible do not have to be iron-clad rules. Most of them are not “physics” as such; they stem from design choices, emergent behavior and specific decisions by key stakeholders. Individually and collectively, they can be mitigated, but only if defenders leverage the massive scale of the Internet (and the universe of interconnected devices) at least as well as the attackers do.

Even with this multitude of challenges, the New York Cyber Task Force believes a more defensible Internet is within reach. Game-changing technologies, such as the secure architectures made possible by cloud computing, can radically alter cyberspace and shift advantage and scale in favor of defenders. So too can operational and policy innovations, which are often overlooked or discounted.

Most of the items on this list are decades old and mostly well understood. The cybersecurity community will only solve the problem when it moves past Band-Aid solutions and addresses the underlying problems with measures that give defenders the greatest advantage over attackers at the least cost and greatest scale. To quote the New York Cyber Task Force: leverage.

Jason Healey is Senior Research Scholar at Columbia University’s School of International and Public Affairs and is the former executive director of the New York Cyber Task Force. You can follow his tweets on cyber conflict and cyber risk at @Jason_Healey.
