Cyberspace has grown into a vital domain of everyday life—we work, socialize, play, and conduct financial transactions online. Our lives now have a digital touch, and much like in the physical world, our identities are at the core of our virtual experience.
We rely on passwords to verify that we are who we say we are and to regulate who has access to an email or Facebook account, a computer or a phone, or even a classified database. But passwords have long been the bane of cybersecurity—people choose weak default passwords easily discovered by automated guessing, write them down so as not to forget them, reuse them across many platforms, or surrender them to socially engineered cons. Many high-profile breaches have exploited this weak link in password-protected identity authentication systems.
Most recently, the private communications of the Democratic National Committee (DNC) and former Clinton campaign chair John Podesta were breached by Russian-sponsored hackers using spear-phishing emails posing as Google alerts that tricked recipients into giving up their passwords. In 2014, the Office of Personnel Management (OPM) was breached and the records of some 22 million current and former federal employees were stolen. The hackers—likely Chinese—gained access by stealing login credentials from an OPM contractor and pivoting to the Interior Department data center to exfiltrate data over a period of 10 months.
In short, humans, because of their dependence on passwords for identity authentication, are often the weakest link in cyber defenses. Attempts to make passwords more secure have primarily focused on two-factor authentication—for example, confirming a login attempt to an email account from a computer with another device, such as a phone. But while this can increase security to a degree, it has proven inconvenient enough that people regularly forgo the added security for the sake of ease of access.
What if identity authentication became more automated, cutting out at least some vulnerabilities of human error? The solution to the antiquated password may lie in biometric identity authentication—using uniquely personal features like eyes, face, fingerprints, or voice to identify individuals for access.
But as Jeremy Grant, a Managing Director at The Chertoff Group and architect of the Obama administration’s National Program Office for the National Strategy for Trusted Identities in Cyberspace (NSTIC), suggests, “all biometrics are not the same.” In his view, there are “some solutions that are highly reliable and others that are volatile.”
Biometrics, it seems, can be “spoofed,” or tricked into granting access to people they shouldn’t. Colleen Dunlap, CEO and Co-Founder of Stone Lock Global, notes that “biometric systems that have a high degree of spoofability include fingerprint, image-based facial recognition, and some iris systems.” This is largely because these biometrics can be lifted from photos of people—something there is no shortage of in the age of social media—and used to impersonate those people to gain access to their systems.
Image-based biometrics raise concerns not only for security but also for privacy. U.S. law enforcement now has access to the facial recognition information of some 117 million Americans—half the U.S. adult population—and can identify individuals with no previous criminal record by sifting through driver’s license and state ID photos. A turn toward further image-based biometrics could continue to facilitate these kinds of encroachments on personal privacy.
Biometrics are also permanent—unlike a password, you can’t simply change your face, fingerprints, or eyes after your biometrics have been stolen. Breaches leading to the theft of biometric data could be even more devastating than password breaches, raising the question of how biometric data should be stored.
The OPM breach of 2014 is a perfect example of how not to store biometrics. Some 5.6 million fingerprints were easily stolen in that hack because they were held as plain, unencrypted images in a single central repository—making it possible for one breach to have maximum impact.
Instead, Grant argues biometrics should be stored as templates, or “a mathematical ‘abstract’ of the biometric,” which mitigates “the risk to the consumer if the biometric information is breached.” Organizations should also “limit storage of biometrics to the device that they are collected on [to mitigate] the risk of scalable attacks.” This is why Apple stores scanned fingerprints for its iPhones and iPads on each individual device, not in a centralized database, so if compromised, the damage is limited to one device.
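To make the template idea concrete, here is a minimal sketch of on-device template matching. It assumes a feature-extraction step has already reduced a scan to a numeric vector; the vectors, the cosine-similarity matcher, and the 0.95 threshold are all illustrative assumptions, not a description of any vendor's actual system. The key point is that only the abstract template is retained, never the raw image.

```python
import math

def cosine_similarity(a, b):
    # Compare two feature vectors; 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def enroll(feature_vector):
    # Store only the mathematical abstract on the local device;
    # the raw fingerprint image is discarded after extraction.
    return list(feature_vector)

def verify(scan_vector, template, threshold=0.95):
    # Biometric matching is fuzzy: accept any scan close enough
    # to the enrolled template, since no two scans are identical.
    return cosine_similarity(scan_vector, template) >= threshold

template = enroll([0.9, 0.1, 0.4, 0.8])
print(verify([0.88, 0.12, 0.41, 0.79], template))  # near-identical rescan: True
print(verify([0.1, 0.9, 0.8, 0.2], template))      # different person: False
```

A thief who steals the template gets a vector of numbers rather than a reusable fingerprint image, and because each template lives only on the device that collected it, one breach cannot scale across millions of users.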
While biometric systems are likely superior technology to passwords for identity authentication, should they make the password obsolete or simply augment it? Grant reasons that “given that biometrics are not secrets, as well as the risks that an adversary may look to spoof a biometric system, it is best to use biometrics as simply one layer of a multi-factor authentication solution.”
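Grant's layered approach can be sketched in a few lines: access requires both a knowledge factor (a password, stored as a salted hash) and a biometric factor (a template match, represented here as a boolean from the device's local matcher). The function names and parameters are assumptions for illustration, not a real authentication API.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # Store only a salted, slow hash of the password, never the password itself.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password, salt, stored_digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(candidate, stored_digest)

def authenticate(password, salt, stored_digest, biometric_matched):
    # Both factors must pass: a spoofed biometric alone is not
    # enough, and neither is a stolen password.
    return check_password(password, salt, stored_digest) and biometric_matched

salt, digest = hash_password("correct horse battery staple")
print(authenticate("correct horse battery staple", salt, digest, True))   # True
print(authenticate("correct horse battery staple", salt, digest, False))  # False
print(authenticate("wrong password", salt, digest, True))                 # False
```

Because the biometric is treated as one layer rather than the sole gatekeeper, an adversary who spoofs the sensor still faces the password, and vice versa.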
Privacy-conscious observers also note that U.S. law can force individuals to provide the biometrics—but not the passcode—needed to access devices like an iPhone. The reasoning is that the Fifth Amendment protects Americans from incriminating themselves, preventing law enforcement from compelling someone to reveal a memorized passcode. Biometrics, however, are not stored within a person’s memory and are therefore fair game.
Grant notes that “no authentication system is 100 percent hack-proof, but the design choices that are made in architecting biometric systems can greatly mitigate the risks that they are compromised.” So, it seems biometrics are necessary but not sufficient; they should be complemented by another identification factor, such as a memorized password. The future of biometric identity authentication is bright, but the road there must be traveled deliberately.
Levi Maxey is a cyber and technology producer at The Cipher Brief. Follow him on Twitter @lemax13.