Today’s Technology

During the tumultuous summer of this year, law enforcement agencies deftly used facial recognition software to make arrests connected to protests and robberies. In one such case, a protester in Indiana was arrested after a picture of him surfaced on Twitter and was fed into a facial recognition system. It was the first publicly acknowledged instance of authorities using the technology. Though now regularly used, facial recognition remains controversial.

Civil rights advocates and tech researchers are fighting to ban the use of facial recognition software nationwide. They cite various reasons, but there are two main objections: the technology erodes personal privacy, and it has been shown to reinforce bias against people with darker skin.

Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University, said, “The weaponization possibilities are endless… imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail.”

As of now, more than 600 law enforcement agencies are using Clearview, a facial recognition software, without public knowledge. The company that produced the software has largely avoided the media and has allegedly spied on reporters who were digging into it.

“I’ve come to the conclusion that because information constantly increases, there is never going to be privacy,” said David Scalzo, an investor in a similar firm. However, many still feel unsettled by the idea of being watched and recognized by artificial intelligence (AI) software everywhere they go.

The second worry concerning facial recognition is bias against certain racial groups, which several studies have confirmed. The National Institute of Standards and Technology published a report last year showing that the AI falsely identified African-American and Asian faces 10 to 100 times more often than Caucasian faces. The bias also extends to age: the software falsely identifies older adults 10 times more often than middle-aged adults.

The dangers of false identification are grave. “One false match can lead to missed flights, lengthy interrogations, watch list placements, tense police encounters, false arrests or worse,” policy analyst Jay Stanley reported.

San Francisco has banned the use of facial recognition by law enforcement, and many are protesting for other cities and states to do the same. At present, however, this seems unlikely. Earlier this month, James Tate, a member of Detroit’s City Council, voted to approve an extension of the Police Department’s use of facial recognition software. The decision came even after Detroit police made a wrongful arrest earlier this year.

Others have also raised questions about infringements on freedom of speech and peaceful protest. Being watched by law enforcement agencies may make citizens fear the repercussions of protesting or of saying something that could be taken out of context. Some also fear being misidentified, which research has shown to be common with AI facial recognition software.

Exploiting people’s privacy under the veil of maintaining safety will remain a matter of debate for a long time. Some may view giving up a few aspects of their privacy as a small price to pay to keep communities safe, but opponents of the software urge people to look at the bigger picture.

When asked whether facial recognition could be used responsibly for security, Google research scientist Timnit Gebru said, “It should be banned at the moment. I don’t know about the future.”