Project Panoptic launches a new feature: a research tab to enhance your knowledge of facial recognition technology #ProjectPanoptic

The Project Panoptic website has just launched its newest feature! A research tab just for you, to help you understand the basics of facial recognition technology, how it violates your privacy and freedoms, and why we are calling for a ban.

Additionally, starting today, we will be discussing one resource a week to raise public awareness of this harmful technology. Today’s resource is “Facial Recognition Technologies: A Primer” by Joy Buolamwini, Vicente Ordóñez, Jamie Morgenstern, and Erik Learned-Miller of the Algorithmic Justice League. Ms. Buolamwini was also recently the subject of the Netflix documentary Coded Bias. (Read our review of the documentary here)

The primer gives its readers a basic understanding of what facial recognition technology is, how it is used, and how accurate it is. Most importantly, it offers a basic technical explanation of how facial recognition technology works.
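
At its core, most facial recognition systems follow the same pipeline: detect a face in an image, map it to a numerical embedding with a trained model, and compare embeddings by distance. Below is a minimal sketch of the comparison step in Python; the random vectors stand in for real embeddings, and all the names are ours for illustration, not the primer’s.

```python
import numpy as np

def match(probe, gallery, threshold=0.6):
    """Identification: return the gallery identity whose embedding is
    closest to the probe, or None if nothing is within the threshold."""
    name, dist = min(
        ((n, np.linalg.norm(probe - emb)) for n, emb in gallery.items()),
        key=lambda pair: pair[1],
    )
    return name if dist < threshold else None

# Toy data: in a real system these 128-dimensional vectors would come from
# a neural network applied to aligned face images.
rng = np.random.default_rng(0)
gallery = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
probe = gallery["alice"] + rng.normal(scale=0.01, size=128)  # new photo, same face
print(match(probe, gallery))  # -> "alice"
```

The threshold is the crucial knob: set it too loose and strangers “match”; set it too tight and the same person photographed twice does not.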

Go through the primer and let us know if you have any questions. Stay tuned for Monday’s resource, which is “Privacy and the ‘nothing to hide’ argument” by Vrinda Bhandari & Renuka Sane.

5 Likes

Today’s resource is “Privacy and the ‘nothing to hide’ argument” by Vrinda Bhandari & Renuka Sane. In this article, the authors try to dispel the misconception that privacy is valued only by those who have something to hide because they may have done something wrong. They examine the “I have got nothing to hide" argument, which claims that if you have done nothing wrong, then “no harm should be caused to you by the breach of your privacy”.

According to the authors, this argument wrongly equates privacy with secrecy. In their view, privacy relates to the choice of withholding information that other people do not need to know, such as how we exist in a private space, while secrecy relates to withholding information that people may have a right to know.

In my view, the right to privacy is fundamental because it enables people to exist and hold opinions without the fear of persecution. This is especially important for communities and groups that have been historically repressed and who need security to browse, read, develop and share ideas and opinions, and fully exercise their right to freedom of speech and expression without being intimidated. These repressed groups could include religious minorities as well as people who face harassment for their sexual orientation or gender identity.

The authors also touch upon issues such as constant mass surveillance and the resulting chilling effect on free speech, the problematic nature of surveillance programs and the perils of unrestricted data collection.

Do you think the authors are correct in refuting the “nothing to hide” argument? Let us know your thoughts on this piece below, and stay tuned for the next resource, to be discussed on Friday: “Facial Recognition is Accurate, if You’re a White Guy” by Steve Lohr.

1 Like

Continuing our research on facial recognition technology, we will discuss how racial biases in the data used to develop A.I. and face-identification systems skew their results, through a reading of “Facial Recognition is Accurate, if You’re a White Guy” by Steve Lohr.

The author explains that the accuracy of facial recognition technologies can vary with your gender and the colour of your skin: the error rate for darker-skinned women can rise up to 35%. The technology behind facial recognition systems is only as good as the data we train it on. So, if white men make up most of the sample data used to train the A.I. behind these systems, the systems will be well equipped only to identify white male faces.
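
This kind of disparity only becomes visible when accuracy is measured separately per demographic group, which is what the research Lohr reports on did. Here is a minimal sketch of such a disaggregated evaluation; the (prediction, truth, group) records are made up for illustration and do not come from any real benchmark.

```python
from collections import defaultdict

# Made-up records standing in for a model's output on a labelled test set:
# each entry is (predicted_label, true_label, demographic_group).
results = [
    ("match", "match", "lighter-skinned man"),
    ("match", "match", "lighter-skinned man"),
    ("no-match", "match", "darker-skinned woman"),
    ("match", "no-match", "darker-skinned woman"),
    ("match", "match", "darker-skinned woman"),
]

errors, totals = defaultdict(int), defaultdict(int)
for predicted, actual, group in results:
    totals[group] += 1
    errors[group] += predicted != actual  # True counts as 1

for group, n in totals.items():
    print(f"{group}: {errors[group] / n:.0%} error rate")
```

A single headline accuracy number would hide exactly the gap this per-group breakdown exposes.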

To my mind, this creates a major problem. Given how ubiquitous facial recognition technologies have become, an inaccurate identification can lead to wrongful implication and undue harassment for a lot of people. Researchers at Georgetown Law School estimated that 117 million American adults are in face recognition networks used by law enforcement. It is not difficult to imagine that people belonging to gender and racial minorities are likely to be singled out by such faulty systems. This is a threat not only to our privacy but also to our lives.

After facing A.I.-based discrimination due to the colour of her skin, Joy Buolamwini, a researcher at the M.I.T. Media Lab, has been actively fighting the biases built into digital technology. Corporate giants like Microsoft and IBM have drawn on her research to make their use of facial recognition technology more inclusive.

Do you believe that facial recognition technology simply needs to be made more ethical, or does the very practice of using such technology need an overhaul?

1 Like

In today’s research on Facial Recognition Technology, we discuss how plutonium, the deadly radioactive element, becomes an apt material metaphor for digital facial recognition technologies in the modern world, as discussed by Luke Stark in “Facial Recognition is the Plutonium of AI”.

Given the ubiquity of facial recognition technologies in the world, one might dismiss this comparison as “alarmist”. But experts believe, first, that the technology behind facial recognition is flawed in how it schematises human faces, reinforcing gender and racial categorisation. Second, the risks of these technologies outweigh the benefits they bring, reminding one of “hazardous nuclear technologies.” Scholars Woodrow Hartzog and Evan Selinger call facial recognition technology “a menace disguised as a gift” that jeopardises our freedom of movement to a large extent.

Do you agree that the intrinsic harm caused by facial recognition technologies is as dangerous as nuclear hazards? Do you think a ban on facial recognition technologies is necessary to safeguard our privacy?

2 Likes

In today’s research, we discuss the burgeoning market for emotion recognition technologies (ERT) in China and its detrimental impact on individual freedoms and human rights. “Emotional Entanglement: China’s emotion recognition market and its implications for human rights” by Article 19 provides an evidence-based analysis of how ERT purports to infer a person’s inner emotional state and compromises people’s right to privacy.

In recent times, ERT has found widespread application across China, ranging from law enforcement authorities using it to identify ‘suspicious’ individuals to schools deploying it to measure the attentiveness of students. This dangerous ubiquity is highly incompatible with international human rights standards. Further, there is little information available about the scale of ERT use in China. This opaque and unfettered design, development and use of ERT not only restricts the right to freedom of expression, the right to privacy and the right to protest, but also rests on pseudoscientific foundations.

For a timely pushback on the large-scale deployment of ERT, the need of the hour is informed interventions by civil society, focusing on debunking emotion recognition technology’s scientific foundations, demonstrating the futility of using it, and/or demonstrating its incompatibility with human rights.

1 Like

In today’s resource, we discuss “‘Assisted’ Facial Recognition and the Reinvention of Suspicion and Discretion in Digital Policing” by Peter Fussey, Bethan Davies and Martin Innes. In the context of the increased application of automated facial recognition (AFR) in policing, this article illuminates how technological capabilities are conditioned by police discretion, which in turn is contingent on bureaucratic suspicion. AFR often traces individuals back to their bureaucratic records, which in turn greatly influence identification through AFR. The inclusion of AFR in policing as part of ‘digital policing’ is especially intriguing since it blends the principles of both visual and biometric surveillance. There is evidence to suggest that the processes of police surveillance and identification under AFR are influenced by multiple factors and are therefore not always accurate. In fact, in 2018, the UN Special Rapporteur for the Right to Privacy criticised the use of AFR technologies on the grounds of necessity and proportionality.

I have recently been thinking about what type of evidence we need to demonstrate that using FR for emotion recognition learns biases and should not be used for this purpose.

One idea I had: ideally, we would have an unbiased dataset of photos of real people in candid situations across different types of environments (like offices, parks, homes, etc.), with each person asked to report their emotion at that time. We would then evaluate the algorithm against these self-reported labels, as sketched below. It was while thinking about this that I came across this website, which seems really cool.
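
Here is a minimal sketch of that audit, assuming nothing beyond the idea above: `records` pairs each photo with the subject’s self-reported emotion and demographic group, and `predict` is whatever emotion-recognition system is being tested (any callable from photo to label). All names here are hypothetical.

```python
from collections import defaultdict

def audit(records, predict):
    """Per-group accuracy of `predict` against self-reported emotions.
    records: iterable of (photo, self_reported_emotion, group) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for photo, reported, group in records:
        total[group] += 1
        correct[group] += predict(photo) == reported
    return {group: correct[group] / total[group] for group in total}

# Trivial demo with a stand-in predictor that always answers "neutral":
records = [("img1.jpg", "happy", "group A"), ("img2.jpg", "neutral", "group B")]
print(audit(records, lambda photo: "neutral"))
# {'group A': 0.0, 'group B': 1.0}
```

A persistent accuracy gap between groups on such a dataset would be exactly the kind of evidence that the system has learned biased associations rather than anything about people’s inner emotional states.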

2 Likes

Today we are discussing the role of the judiciary in India in facilitating the much-needed checks and balances on the State’s surveillance operations, through a reading of ‘The Surveillance State, Privacy and Criminal Investigation in India: Possible Futures in a Post-Puttaswamy World’ by Vrinda Bhandari and Karan Lahiri.

The existing legal framework in India gives the executive unfettered, discretionary power to deploy surveillance. As the State ominously expands its technological capacity to monitor its citizens, the lack of transparency and accountability has caused fear among many. To a large extent, this is fuelled by the fact that not only does the State have the ultimate authority to engage in surveillance, but our laws also allow even illegally obtained evidence to be admissible in court as long as it is relevant.

In today’s age of big data, the State’s increasing use of invasive surveillance technology, ranging from wiretapping, videography and geolocation tracking to data mining, the interception, decryption and monitoring of emails, and the tracking of internet and social media usage, jeopardises the constitutional rights of citizens. The authors argue that it is an apposite moment to revisit the KS Puttaswamy v Union of India judgment(s) of the Supreme Court of India to understand how they provide a framework for testing the existing system of laws governing surveillance. The judgment(s) uphold the test of proportionality and duly preserve the rule of law in countering mass surveillance by the State.

1 Like