Facial-recognition technology is no longer a gimmick in dystopian science fiction movies or CSI-style cop shows: It's increasingly used in more pedestrian ways. Your face can unlock your iPhone X, for example. Or, if you're flying with JetBlue from Boston to Aruba or the Dominican Republic, you have the option of using your visage as your boarding pass, a system that involves an offsite U.S. Customs and Border Protection algorithm making the matches. And now, the tech—featuring a camera attached to sunglasses—is being used by police officers in crowds in China, The Wall Street Journal reported on Wednesday.
In addition to the glasses, the Chinese system involves a connected mobile device, carried by the officers, that stores face data offline, allowing the system to work quickly. According to the Journal, police at one city's railway station have used the method to nab seven people associated with crimes, as well as others traveling under false identities.
Here's how artificial-intelligence-powered technology like this works in general—and one potential pitfall it carries. (Besides, you know, the whole surveillance-state thing.)
First, look for faces. Then, matches.
Software that powers facial recognition generally uses a two-step process, says David Alexander Forsyth, the chair of the computer science department at the University of Illinois at Urbana-Champaign and an artificial intelligence expert. Step one is to figure out where the faces are in the image in question: the system is looking for a window-like section of the image that contains someone's countenance, and not the other stuff of modern life, like stop signs and cars.
Step two: it needs to see if it can match the face to any in its database. “Turns out, that's a harder problem,” Forsyth says, in comparison to step one. “People tend to look like each other.” (At least to algorithms.)
The system isn't just eyeballing the image the way a human would—it's looking at a representation of it in the form of data, which consists of numbers, Forsyth says. “That representation has to emphasize things that make people look different from each other,” he notes—like details involving the shape of features like lips, noses, and eyes. The representation also needs to be robust to variables that might throw it off, like the lighting on someone's face. The software then examines that representation to see if it matches a face it has on file.
“The last 10 years or so have seen amazing advances and changes in classifier technologies,” he adds. “The procedure of building that representation of the image has become extremely sophisticated and very effective.”
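The matching step Forsyth describes can be sketched in miniature. The toy code below assumes step one has already cropped a face and that some upstream model has turned it into a short list of numbers (an "embedding"); every name, number, and threshold here is invented for illustration, and real systems compute these representations with deep neural networks over far longer vectors.

```python
# A tiny, invented database of enrolled faces. Each face is represented as a
# short embedding; real systems use vectors with hundreds of dimensions.
database = {
    "alice": [0.11, 0.62, 0.33],
    "bob":   [0.85, 0.20, 0.47],
}

def distance(a, b):
    """Euclidean distance between two embeddings: small means 'looks similar'."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def identify(probe, db, threshold=0.25):
    """Return the closest enrolled identity, or None if nothing is close enough."""
    name, best = min(db.items(), key=lambda item: distance(probe, item[1]))
    return name if distance(probe, best) <= threshold else None

print(identify([0.10, 0.60, 0.35], database))  # "alice" — very close to her entry
print(identify([0.50, 0.50, 0.50], database))  # None — no confident match
```

The threshold is where the trouble Forsyth alludes to lives: set it too loose and strangers match each other; set it too tight and the system misses the person it's looking for.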
Artificial intelligence systems need oceans of data in order to learn how to do their jobs well, and facial recognition technology is no different. “Right now, the best way we know, by a long, long way, is to have an immense number of pictures of faces,” to build and train these systems, Forsyth explains. Algorithms need to learn what subtle details to focus on to accurately differentiate people.
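One way to see why so many example photos matter: with several pictures of each person, a system can measure which features stay stable for the same face and which ones swing with conditions like lighting, then weight the stable ones more heavily. The sketch below is a deliberately tiny, invented illustration of that idea—real systems learn it with deep neural networks trained on millions of images.

```python
# Toy data (all numbers invented): each photo is summarized by three numbers,
# [brightness, nose_width, eye_spacing]. Brightness varies wildly between
# photos of the same person; the other two barely move.
people = {
    "alice": [[0.9, 0.50, 0.30], [0.1, 0.52, 0.31], [0.5, 0.51, 0.30]],
    "bob":   [[0.8, 0.70, 0.20], [0.2, 0.71, 0.21], [0.4, 0.69, 0.20]],
}

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Within-person variance of each feature, averaged over everyone on file:
dims = len(next(iter(people.values()))[0])
noise = [
    sum(variance([photo[d] for photo in photos]) for photos in people.values())
    / len(people)
    for d in range(dims)
]

# Weight each feature by the inverse of its noise, so unstable features
# (like lighting) count for little when two faces are compared.
weights = [1.0 / (n + 1e-6) for n in noise]
print(weights)  # brightness gets a tiny weight compared to the stable features
```

With more photos per person, those variance estimates get better—which is one concrete reason the "immense number of pictures" Forsyth mentions pays off.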
The false-match problem
But despite the sophistication of the technology, matching faces remains a hard problem. “The consequence for a mixup can be truly terrible,” he adds. In short: the system can produce false positives, flagging someone as a person of interest who is, in fact, not.
There's a key difference between using the technology this way, as China is, and the way you engage with it on an iPhone X, for example. In the case of the smartphone, you purposely present your face to it so it can unlock the device, and the comparison is one-to-one; it is a low-stakes interaction. If the phone fails to recognize you, you simply enter your passcode, and Apple says the odds of someone else unlocking it with their face are one in a million. After all, your iPhone only needs to learn the details of your own face, which it considers in three-dimensional form.
But using technology like this to scan the multitudes of faces in crowds in settings like airports or train stations presents unique challenges, because of the false-match problem—an outcome that doesn't just affect the flagged individual, but also other travelers who could be delayed by it. “Actually using it can be quite tricky,” Forsyth warns.
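A little arithmetic shows why one-to-many crowd scanning is so much harder than unlocking a phone. The figures below are invented for illustration—a hypothetical false-match rate and hypothetical crowd and watchlist sizes—but the scaling effect they demonstrate is real.

```python
# Back-of-the-envelope sketch (all numbers invented): even a very accurate
# matcher produces false alarms at crowd scale.
false_match_rate = 1e-6     # chance a random stranger matches one watchlist face
faces_scanned = 100_000     # hypothetical travelers through a station per day
watchlist_size = 1_000      # hypothetical faces on file

# Each scanned face is compared against every watchlist entry.
comparisons = faces_scanned * watchlist_size
expected_false_alarms = comparisons * false_match_rate
print(expected_false_alarms)  # 100.0 — a hundred innocent travelers flagged per day
```

An iPhone, by contrast, makes a single one-to-one comparison per unlock—which is why the same underlying accuracy feels flawless in your pocket and error-prone at a train station.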