Project Panoptic launches a new feature: a research tab to enhance your knowledge of facial recognition technology #ProjectPanoptic

The Project Panoptic website has just launched its newest feature: a research tab to help you understand the basics of facial recognition technology, how it violates your privacy and freedoms, and why we are calling for a ban.

Additionally, starting today, we will be discussing one resource a week to increase public awareness of this harmful technology. Today’s resource is “Facial Recognition Technologies: A Primer” by Joy Buolamwini, Vicente Ordóñez, Jamie Morgenstern, and Erik Learned-Miller of the Algorithmic Justice League. Ms. Buolamwini was also recently the subject of the Netflix documentary Coded Bias. (Read our review of the documentary here.)

The primer gives readers a basic understanding of what facial recognition technology is, how it is used, and how accurate it is. Most importantly, it offers a basic technical explanation of how these systems work: detecting a face in an image, reducing it to a numerical template, and comparing that template against stored ones.
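For a concrete feel for that matching step, here is a minimal, hypothetical sketch in Python. The template vectors, names, and threshold below are invented for illustration and are not taken from the primer; real systems use high-dimensional embeddings produced by a trained neural network.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two face templates (embedding vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical templates a face encoder might produce for enrolled faces
gallery = {
    "person_1": np.array([0.9, 0.1, 0.3]),
    "person_2": np.array([0.2, 0.8, 0.5]),
}
probe = np.array([0.85, 0.15, 0.35])  # template from a newly captured face

# Identification: find the most similar enrolled template and accept the
# match only if its score clears a tuned threshold.
best = max(gallery, key=lambda name: cosine_similarity(probe, gallery[name]))
score = cosine_similarity(probe, gallery[best])
print(best, round(score, 3), "match" if score > 0.9 else "no match")
```

Note the threshold: where it is set trades false matches against missed matches, which is why the primer’s discussion of accuracy matters so much.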

Go through the primer and let us know if you have any questions. Stay tuned for Monday’s resource, “Privacy and the ‘nothing to hide’ argument” by Vrinda Bhandari & Renuka Sane.


Today’s resource is “Privacy and the ‘nothing to hide’ argument” by Vrinda Bhandari & Renuka Sane. In this article, the authors seek to dispel the misconception that privacy is valued only by those who have something to hide because they have done something wrong. They examine the “I have got nothing to hide” argument, which claims that if you have done nothing wrong, then “no harm should be caused to you by the breach of your privacy”.

According to the authors, this argument wrongly equates privacy with secrecy. Privacy, they argue, is about choosing to withhold information that other people do not need to know, such as how we conduct ourselves in private spaces, while secrecy is about withholding information that people may have a right to know.

In my view, the right to privacy is fundamental because it enables people to exist and hold opinions without fear of persecution. This is especially important for communities and groups that have been historically repressed and who need security to browse, read, develop and share ideas and opinions, and fully exercise their right to freedom of speech and expression without intimidation. These repressed groups include religious minorities as well as people who face harassment for their sexual orientation or gender identity.

The authors also touch upon constant mass surveillance and the resulting chilling effect on free speech, the problematic nature of surveillance programs, and the perils of unrestricted data collection.

Do you think the authors are correct in refuting the “nothing to hide” argument? Let us know your thoughts on this piece below, and stay tuned for the next resource, “Facial Recognition Is Accurate, if You’re a White Guy” by Steve Lohr, which we will discuss on Friday.


Continuing our research series on facial recognition technology, we turn to how racial bias creeps into the data used to develop A.I. and face identification systems, as discussed in “Facial Recognition Is Accurate, if You’re a White Guy” by Steve Lohr.

The author explains that the accuracy of facial recognition technologies can vary with your gender and the colour of your skin: the margin of error for darker-skinned women can rise up to 35%. The technology behind facial recognition systems is only as good as the data we train it on. So, if white men make up most of the sample data used to train the A.I. behind these systems, the systems will be far better at identifying white male faces than anyone else’s.
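To see why in miniature, here is a toy, purely illustrative simulation (not any vendor’s actual system; all data below is synthetic and invented): a classifier trained on data dominated by one group scores measurably worse on an under-represented group whose data is distributed differently.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic match/no-match examples; `shift` stands in for the
    demographic variation a real face dataset would contain."""
    labels = rng.integers(0, 2, n)
    features = (labels * 2.0 + shift + rng.normal(0.0, 1.0, n)).reshape(-1, 1)
    return features, labels

# Training set: 90% group A, 10% group B -- the imbalance in question
Xa, ya = make_group(900, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Audit the model per group, as studies like Gender Shades do,
# rather than reporting a single aggregate accuracy figure
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    Xt, yt = make_group(5000, shift)
    print(f"{name} accuracy: {model.score(Xt, yt):.3f}")
```

Run as written, the model scores noticeably lower on group B even though nothing about group B is inherently harder to classify; the gap comes from the skewed training mix.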

To my mind, this creates a major problem. Given how ubiquitous facial recognition technologies have become, inaccurate identification can lead to wrongful implication and undue harassment of many people. Researchers at the Georgetown Law School estimated that 117 million American adults are in face recognition networks used by law enforcement. It is not difficult to imagine people belonging to gender and racial minorities being singled out by such faulty systems. This is a threat not only to our privacy but also to our lives.

After facing A.I.-based discrimination due to the colour of her skin, Joy Buolamwini, a researcher at the M.I.T. Media Lab, has been actively fighting the biases built into digital technology. Corporate giants like Microsoft and IBM have drawn on her research to make their facial recognition technology more inclusive.

Do you believe that facial recognition technology simply needs to be made more ethical, or does the very practice of using such technology need an overhaul?


In today’s research on facial recognition technology, we discuss how plutonium, a deadly radioactive element, becomes an apt material metaphor for digital facial recognition in the modern world, as argued by Luke Stark in “Facial Recognition is the Plutonium of AI”.

Given the ubiquity of facial recognition technologies in the world, one might dismiss this comparison as “alarmist”. But Stark contends that the technology behind facial recognition is flawed at its core in how it schematizes human faces, reinforcing gender and racial categorization. Secondly, the risks of these technologies outweigh the benefits they bring, reminiscent of “hazardous nuclear technologies.” Scholars Woodrow Hartzog and Evan Selinger call facial recognition technologies “a menace disguised as a gift” that jeopardises our freedom of movement to a large extent.

Do you agree that the intrinsic harm caused by facial recognition technologies is as dangerous as nuclear hazards? Do you think a ban on facial recognition technologies is necessary to safeguard our privacy?
