The ability to confirm a user’s identity with facial recognition may seem like the most effective and reliable method of authentication in today’s world. But researchers in China have proven that even biometric authentication is neither infallible nor un-hackable.
Through a series of experiments, researchers from Tencent Security’s Zuanwu Lab in China determined that facial recognition technology, such as Apple’s FaceID, can be fooled into confirming that the actual user is not only present but conscious, or even alive. (Tencent Security is a unit of Tencent Holdings Ltd., a Shenzhen-based conglomerate with a market value of more than $500 billion, whose subsidiaries sell online services and products, create entertainment, and develop technology all over the world.) At last month’s Black Hat USA conference in Las Vegas, a computer security symposium for hackers, corporations, and government agencies from around the world, Zhuo (HC) Ma, a researcher for Tencent Security, described how he and his team were able to trick the biometric protocols designed to verify that a legitimate, live user is accessing their device or data.
So-called “liveness detection” is meant to ensure that biometric authentication is not validating a picture or video of the legitimate user, or a user who is dead, unconscious or otherwise incapacitated. Hence, biometric liveness detection often incorporates body-temperature checks in fingerprint scans, playback-reverberation analysis in voice recognition, and checks for blurriness, distortion and feature-matching in facial recognition. In China, facial recognition is an increasingly popular form of authentication with various uses. And, while previous studies “mainly focused on how to generate fake audio or video, bypassing the liveness algorithm is necessary in a real attack,” Ma points out.
Ma and his team at Tencent Security tried a variety of approaches to cracking this aspect of biometric facial recognition, from using a high-resolution photo, to generating animation, to injecting a video stream into the authentication pipeline between a device’s camera and its processor. Many of these more technically complicated hacks either introduced too much latency into authentication, which would alert the system to a potential compromise, or were simply impractical because they required access to the device in the first place.
However, at least when it comes to facial recognition, Ma and his fellow Tencent Security researchers found that they could easily and inexpensively re-create a live-looking authentication scenario with a pair of glasses and black and white tape. While facial liveness validation does indeed check that the party being authenticated is a three-dimensional object, and not a 2D picture or video, the vulnerability in this system is that the eyes of the person being authenticated are scanned basically as white dots on a dark background. Hence, if a wily hacker can slip a basic pair of reading glasses – bearing a white dot on a dark piece of tape over each eye – onto an unsuspecting victim, facial recognition will see this unconscious user as live and conscious and authenticate their access. The main trick, says Ma, is “you don’t want to wake up the victim.”
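To see why a white dot on dark tape could pass, consider a toy sketch of the kind of check Ma described: one that treats an “eye” as a small bright blob on a mostly dark patch, rather than verifying real eye structure. The function and thresholds below are illustrative assumptions for this article, not Tencent’s findings verbatim or any vendor’s actual algorithm.

```python
def naive_eye_liveness(patch, dark_thresh=60, bright_thresh=200):
    """Toy 'open eye' check: passes if the patch is mostly dark pixels
    with a small bright spot (a crude stand-in for the glint/sclera)."""
    pixels = [p for row in patch for p in row]
    dark = sum(p < dark_thresh for p in pixels) / len(pixels)
    bright = sum(p > bright_thresh for p in pixels) / len(pixels)
    return dark > 0.7 and 0.005 < bright < 0.2

def make_patch(size, background, spot, spot_size):
    """Build a size-by-size grayscale patch with a bright square spot."""
    grid = [[background] * size for _ in range(size)]
    for r in range(size // 2, size // 2 + spot_size):
        for c in range(size // 2, size // 2 + spot_size):
            grid[r][c] = spot
    return grid

# A real open eye: dark iris region with a bright glint.
real_eye = make_patch(32, background=30, spot=255, spot_size=4)

# The attack: black tape on glasses with a white dot drawn on it
# produces the same pixel statistics, so the check cannot tell them apart.
taped_glasses = make_patch(32, background=20, spot=240, spot_size=4)

print(naive_eye_liveness(real_eye), naive_eye_liveness(taped_glasses))
# Both patches pass the check.
```

Any detector reducible to “bright blob on dark background” is fooled the same way, which is why the taped-glasses trick needed no technical access to the device at all.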
While this may seem far-fetched, it is in fact exactly what hackers most often seek: a low-cost way into a system, with a potentially high success rate. And it points to a larger trend: even supposedly cutting-edge biometric authentication is vulnerable.