
Facial recognition technology, and implications for free speech


“There is two kinds of music, the good and the bad. I play the good kind,” Louis Armstrong once said. As with music, facial recognition technology has good and bad applications.

Consider this example: it is early on a Sunday morning when you hear a loud banging on your front door. It is FBI agents, shouting “Open up. We have a search warrant!” Frightened and confused, you get out of bed, pull on some clothes, and answer the door.

The federal agents explain that they have obtained their warrant based on a witness’s identification of you as one of the comedians who wrote an anonymous political satire pamphlet, Punch, published in Austin, Texas, throughout 2019. They do not seem to care when you insist you have never been to Austin and know nothing about Punch.

So who is this witness falsely implicating you? It is facial recognition technology, built into a camera in Austin, that supposedly caught you distributing the pamphlet on several occasions throughout 2019. Even though you are confident your alibi will ultimately prove your innocence, doing so is likely to cost you a great deal of time and money.

Think this scenario is fiction? Think again. Laws in the United States have previously prohibited the distribution of anonymous political pamphlets, sweeping in lawful speech, or have otherwise censored protected speech. What is more, not long ago an 18-year-old student was mistakenly arrested due to Apple’s Face ID. Similar technology, deployed by companies like ResolutionView and Amazon as well as Apple, is now available throughout much of the world.

As the market for AI-powered facial recognition grows to a projected value of over $9 billion by 2024, so too do well-founded concerns about the use of the technology to censor free speech. Yet all is not bleak in the realm of facial recognition. As with so much technology, there are good uses that deserve consideration.

How it works

Here is an extremely basic breakdown of the steps involved in facial recognition: (1) an image is captured; (2) the distances between the eyes and other prominent facial features of the subject are mapped; (3) the image is converted to grayscale and cropped; (4) the image is converted to a template that a search engine can use for facial comparison; and (5) an algorithm searches for a match by comparing the template to others on file.

Algorithms can be non-trainable or trainable. Non-trainable algorithms use fixed, common feature representations to characterise face images; similarities between faces are measured within these set parameters. Trainable algorithms, like the one used by online retailer Zappos, are not fixed. They learn, and ideally improve, over time what particular customers are searching for, modifying themselves bit by bit with each new lesson in order to improve accuracy. So when converting an image to a searchable template, a trainable algorithm would make changes to itself so as to, in theory, avoid repeating mistakes. Regardless of which algorithm is used, facial recognition systems generally compare the image taken in step (1) with a database in step (5).
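To make those steps concrete, here is a minimal sketch in Python, using only NumPy. It is an illustration under stated assumptions, not any vendor’s actual pipeline: the function names are invented, and a fixed average-pooling step stands in for the landmark mapping and alignment of steps (2) and (3), which are far more sophisticated in real systems.

```python
import numpy as np

def to_grayscale(image: np.ndarray) -> np.ndarray:
    """Step (3), part one: collapse an RGB capture of shape (H, W, 3)
    to a grayscale image of shape (H, W) using standard luma weights."""
    return image @ np.array([0.299, 0.587, 0.114])

def make_template(image: np.ndarray, grid: int = 32) -> np.ndarray:
    """Steps (2)-(4): reduce a face image to a fixed-length, unit-normalised
    vector. A real system would first locate landmarks (the eyes and other
    prominent features) and crop/align around them; here, average-pooling
    the whole grayscale image onto a grid x grid mosaic is a crude stand-in.
    Assumes the image is at least grid x grid pixels."""
    gray = to_grayscale(image)
    h, w = gray.shape
    gray = gray[: h - h % grid, : w - w % grid]   # trim to a multiple of grid
    pooled = gray.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
    vec = pooled.ravel()
    return vec / np.linalg.norm(vec)

def best_match(template: np.ndarray,
               database: dict[str, np.ndarray]) -> tuple[str, float]:
    """Step (5): linear scan of enrolled templates for the closest match,
    scored by cosine similarity (templates are unit-normalised)."""
    scores = ((name, float(stored @ template)) for name, stored in database.items())
    return max(scores, key=lambda pair: pair[1])

def update_weights(weights: np.ndarray, probe: np.ndarray, stored: np.ndarray,
                   was_correct: bool, lr: float = 0.01) -> np.ndarray:
    """Toy illustration of the 'trainable' idea: per-dimension weights are
    nudged up when a confirmed match relied on them and down when a rejected
    match did, so the matcher in principle stops repeating the same mistakes.
    Real trainable systems learn deep feature representations instead."""
    agreement = probe * stored   # how much each dimension drove the score
    weights = weights + (lr if was_correct else -lr) * agreement
    return np.clip(weights, 0.0, None)   # keep weights non-negative

# Demo on random stand-in "images":
rng = np.random.default_rng(0)
db = {name: make_template(rng.random((128, 128, 3))) for name in ("alice", "bob")}
probe = make_template(rng.random((128, 128, 3)))
print(best_match(probe, db))
```

A weighted comparison would score `float((weights * probe) @ stored)`; the unweighted `best_match` above corresponds to all weights equal to one, i.e. a non-trainable matcher with fixed parameters.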

Good applications

While there has been much press about the negatives of AI-powered facial recognition technology, there are many good applications. In 1996, for example, Lynn Cozart disappeared just days before he was to be sentenced by a Pennsylvania court to years in prison for molesting three children. For years, investigators searched for him, but the case went cold. Then, in 2015, the Pennsylvania state police sent Mr. Cozart’s mug shot to the FBI’s Next Generation Identification database, which contains more than 30 million face recognition records. The FBI’s team responsible for face recognition searches, the Facial Analysis, Comparison and Evaluation Services, matched the mug shot to the face of one “David Stone”, who lived in Muskogee, Oklahoma, and worked at a local Wal-Mart. “After 19 years,” FBI program analyst Doug Sprouse says, “[Cozart] was brought to justice.” Other positive applications of the technology include making online shopping more efficient and spotting, and catching, shoplifters.

Bad applications

Notwithstanding these benefits, the technology can misidentify suspects and chill protected speech. The algorithm running the London Metropolitan Police’s facial recognition technology, for example, is reported to have an error rate of up to 81%. What is more, a report by the Georgetown Center on Privacy and Technology found that only one U.S. agency expressly prohibits the tracking of those engaged in protected speech. While the federal Privacy Act prohibits the government from keeping records “describing how any individual exercised rights guaranteed by the First Amendment,” the FBI obtained an exemption for the millions of facial images it stores. This is concerning because state laws in the U.S. have previously impinged on speech rights impermissibly. In 1960, for example, the U.S. Supreme Court invalidated a Los Angeles ordinance prohibiting the distribution of anonymous pamphlets. A similar Ohio law was struck down in 1995 because “[a]nonymity is a shield from the tyranny of the majority” under the First Amendment. Consequently, you could become the wrong target of a criminal investigation into what is otherwise protected speech without ever knowing it.

Regulate, don’t abolish

San Francisco, among other U.S. cities, has banned the use of AI-powered facial recognition technology. But Cozart would not have been captured by the FBI without it. At the same time, the technology can be used either to enforce unlawful censorship statutes or to enforce lawful ones against the wrong suspect. A middle ground in which the technology is more tightly regulated is therefore likely the best approach. The problem is not facial recognition technology, or indeed any technology per se. The problem is insufficient protection of free speech.

 

Ryan E. Long is a non-residential fellow of Stanford Law School’s Center for Internet and Society and Vice Chair of the California Lawyers Association’s IP Licensing Interest Group. His bio at Stanford can be found here.

