IBM Will Stop Developing or Selling Facial Recognition Technology

IBM will stop developing or selling facial recognition software due to concerns that the technology can be used to support racism. This was reported by MIT Technology Review on 9 June 2020. In a letter to Congress, IBM’s CEO Arvind Krishna wrote: “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.” (Letter to Congress, 8 June 2020) The extraordinary letter “also called for new federal rules to crack down on police misconduct, and more training and education for in-demand skills to improve economic opportunities for people of color” (MIT Technology Review, 9 June 2020). A talk given at Stanford University in 2018 had already warned against the return of physiognomy in connection with face recognition. The paper is available here.

Towards an Anti-Face

Face recognition in public spaces is a threat to freedom. You can defend yourself with masks or with counter-technologies; even make-up is a possibility. Adam Harvey demonstrated this in the context of his CV Dazzle project at the 36C3 hacker congress in Leipzig. As Heise reports, he takes biological characteristics such as face color, symmetry, and shadows and modifies them until they appear unnatural to the algorithms. The result, according to Adam Harvey, is an “anti-face”. The style tips for reclaiming privacy could be useful in Hong Kong, where face recognition is widespread and is used against freedom fighters. Further information can be found on the CV Dazzle website: “CV Dazzle explores how fashion can be used as camouflage from face-detection technology, the first step in automated face recognition.” (Website CV Dazzle)
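To illustrate what such camouflage is up against, here is a minimal sketch of classic face detection with OpenCV’s Haar cascade (Viola-Jones) detector. It assumes the opencv-python package; the image file name is a placeholder, and this is an illustration rather than the specific detector the CV Dazzle project was tested against.

```python
# Minimal sketch: detect faces with OpenCV's Haar cascade detector.
# CV-Dazzle-style styling succeeds when a detector like this no
# longer returns a face box. "portrait.jpg" is a placeholder.
import cv2

def count_faces(image_path: str) -> int:
    """Return the number of faces the Haar cascade finds in an image."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces)

if __name__ == "__main__":
    # A successful "anti-face" should drive this count to zero.
    print(count_faces("portrait.jpg"))
```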

Permanent Record

The whistleblower Edward Snowden spoke to the Guardian about his new life and his concerns for the future. The occasion for the two-hour interview was his book “Permanent Record”, which will be published on 17 September 2019. “In his book, Snowden describes in detail for the first time his background, and what led him to leak details of the secret programmes being run by the US National Security Agency (NSA) and the UK’s secret communication headquarters, GCHQ.” (Guardian, 13 September 2019) According to the Guardian, Snowden said: “The greatest danger still lies ahead, with the refinement of artificial intelligence capabilities, such as facial and pattern recognition.” (Guardian, 13 September 2019) Public appearances by and interviews with him are rather rare. On 7 September 2016, the movie “Snowden” was shown as a preview at the Cinéma Vendôme in Brussels. Jan Philipp Albrecht, Member of the European Parliament, had invited Viviane Reding, the Luxembourg politician and journalist, as well as authors and scientists such as Yvonne Hofstetter and Oliver Bendel. After the preview, Edward Snowden was connected to the participants via video conference for almost three quarters of an hour.

The New Dangers of Face Recognition

The dangers of face recognition are being discussed more and more. A new initiative aims to ban the use of the technology to monitor the American population. The AI Now Institute already warned of the risks in 2018, as did Oliver Bendel. The ethicist had a particular use in mind: in the 21st century, attempts are increasingly being made to connect face recognition to the pseudoscience of physiognomy, which has its origins in ancient times. From a person’s appearance, conclusions are drawn about his or her inner self, in an attempt to identify character traits, personality and temperament, or political and sexual orientation. Biometrics also plays a role in this concept; it was founded in the eighteenth century, when physiognomy, led by Johann Caspar Lavater, reached its dubious climax. In his paper “The Uncanny Return of Physiognomy”, Oliver Bendel elaborates the basics of this topic; selected projects from research and practice are presented and, from an ethical perspective, the possibilities of face recognition are subjected to a fundamental critique in this context, including the examples above. The philosopher presented his paper on 27 March 2018 at Stanford University (“AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents”, AAAI 2018 Spring Symposium Series). The PDF is available here.

Low-Tech against Digital Mass Surveillance

The resistance movement in Hong Kong uses various means of defence, communication, and information. This was reported by the German news portal Heise on 1 September 2019. Demonstrators direct laser beams at the video cameras that are installed everywhere, not least in intelligent street lamps. The pointers are intended to interfere with the facial recognition systems the police use to analyse some of the video streams. On television, one could see civil rights activists using chainsaws and ropes to bring down the high-tech street lamps. Services such as Telegram and FireChat are used for communication and coordination. According to Quartz, “Hong Kong’s protesters are using AirDrop, a file-sharing feature that allows Apple devices to send photos and videos over Bluetooth and Wi-Fi, to breach China’s Great Firewall in order to spread information to mainland Chinese visitors in the city” (Quartz, 8 July 2019). One digital sticker distributed via AirDrop at subway stations read: “Don’t wait until [freedom] is gone to regret its loss. Freedom isn’t god-given; it is fought for by the people.” (Quartz, 8 July 2019)

The System that Detects Fear

Amazon Rekognition is well-known software for facial recognition, including emotion detection. It is used in the BESTBOT, a moral machine that hides an immoral machine. The immoral element is precisely the facial recognition, which endangers the privacy of the user and his or her informational autonomy. The project is intended not least to draw attention to this risk. Amazon announced on 12 August 2019 that it had improved and expanded its system: “Today, we are launching accuracy and functionality improvements to our face analysis features. Face analysis generates metadata about detected faces in the form of gender, age range, emotions, attributes such as ‘Smile’, face pose, face image quality and face landmarks. With this release, we have further improved the accuracy of gender identification. In addition, we have improved accuracy for emotion detection (for all 7 emotions: ‘Happy’, ‘Sad’, ‘Angry’, ‘Surprised’, ‘Disgusted’, ‘Calm’ and ‘Confused’) and added a new emotion: ‘Fear’.” (Amazon, 12 August 2019) Because the BESTBOT also accesses other systems such as the MS Face API and Kairos, it can already recognize fear. The change at Amazon therefore means no change for this artifact of machine ethics.
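For readers who want to see what such face analysis returns, the following is a minimal sketch of a call to Rekognition via the AWS SDK for Python (boto3). It assumes configured AWS credentials; the image file name and region are placeholders, and the BESTBOT does not necessarily use this exact code.

```python
# Minimal sketch: query Amazon Rekognition's face analysis for the
# emotions it detects, including the newly added "FEAR".
# Assumes boto3 is installed and AWS credentials are configured;
# "face.jpg" and the region are placeholders.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("face.jpg", "rb") as f:
    response = client.detect_faces(
        Image={"Bytes": f.read()},
        Attributes=["ALL"],  # include emotions, age range, gender, etc.
    )

for face in response["FaceDetails"]:
    # Emotions come back as a list of {Type, Confidence} entries.
    for emotion in sorted(face["Emotions"],
                          key=lambda e: e["Confidence"], reverse=True):
        print(f"{emotion['Type']}: {emotion['Confidence']:.1f}%")
```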