Facial Recognition and Bias: The Price of Progress

Many of us remember when facial recognition seemed like a sci-fi wonder. There was something alluring about the idea of machines so smart they’d know us, and tech so advanced it was like a friend. But the more sinister applications were always simmering under the surface.

Even in 1990, Total Recall envisioned a world in which you were tracked every second by machines smarter than people, and where your attempts at subterfuge were useless in the face of robot cabs and nasal implants.

Honestly, that movie was more prescient than a Paul Verhoeven vehicle has any right to be.

There are incredible advances that ease the friction of daily life: Apple has refined its Face ID technology to recognize when you’re wearing a mask and present your passcode screen accordingly. Ticketmaster is letting people access tickets with their faces. These are cool things.

Facial recognition has revolutionized advertising, eased communication, and enhanced security. But it has also become a tool of oppression in the hands of governments, and of manipulation in the hands of corporations. China has employed it to target, track, and control the Uighur community; Microsoft amassed facial data on some 100,000 people without their consent.

Just this week, Facebook was hit with a class-action lawsuit claiming that Instagram’s facial recognition and tagging tool violates the privacy of people who never agreed to the terms of service. Facebook says the suit is baseless and claims Instagram uses no facial recognition at all.

Maybe.

Last month, Facebook raised its settlement offer to $650 million in a similar suit alleging the illegal harvesting of biometric data.

Facial recognition has been wielded like a cudgel by law enforcement, and, intentionally or not, it sets a dangerous precedent: it reinforces the already troubling unconscious bias not only in ourselves, but in the technology we create. Critics say the biases we build into recognition software all but guarantee that marginalized groups will be disproportionately impacted — either targeted by law enforcement, or ignored everywhere else.

In Wales, an appellate court ruled that police use of facial recognition violates individual rights to privacy and is fraught with “fundamental deficiencies.”

You don’t hear much about these scandals, not only because large-scale lawsuits are a fixture of the modern landscape, but because we are immersed in a world of machines telling us who we are. We open our phones with our faces; we count on Photos to know the difference between “Steve” and “Mom.” We have so embraced identification by AI that it’s difficult to walk it back just because we now realize how problematic it can be.

Even if law enforcement agencies and major corporations agreed to suspend use of this technology, it wouldn’t really help. This is a commodity tech. If Google steadfastly refuses to use its technology to assist the government, another contractor can do it instead.

And speaking of…today, ICE signed a contract with notorious facial recognition company Clearview AI.

It’s going to get worse before it gets better.

We can’t un-learn the ability to track humans with machines. And the worry, now as always, is that the most skilled engineers and the most advanced AI will be used in the service of oppression.

Masks up.
