The New Dangers of Face Recognition

The dangers of face recognition are being discussed more and more. A new initiative aims to ban the use of the technology to monitor the American population. The AI Now Institute already warned of the risks in 2018, as did Oliver Bendel. The ethicist had a particular use in mind: in the 21st century, attempts are increasingly being made to connect face recognition to the pseudoscience of physiognomy, which has its origins in ancient times. From the appearance of persons, conclusions are drawn about their inner self, and attempts are made to identify character traits, personality and temperament, or political and sexual orientation. Biometrics plays a role in this concept; it was founded in the eighteenth century, when physiognomy under the lead of Johann Caspar Lavater had its dubious climax. In his paper “The Uncanny Return of Physiognomy”, Oliver Bendel elaborates the basic principles of this field; selected projects from research and practice are presented and, from an ethical perspective, the possibilities of face recognition are subjected to fundamental criticism in this context, including the examples above. The philosopher presented his paper on 27 March 2018 at Stanford University (“AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents”, AAAI 2018 Spring Symposium Series). The PDF is available here.

Fighting Deepfakes with Deepfakes

A deepfake (or deep fake) is a picture or video created with the help of artificial intelligence that looks authentic but is not. The term also covers the methods and techniques used in this context, which rely on machine learning and especially deep learning. Deepfakes are used to create works of art and visual objects, but also as means of discreditation, manipulation and propaganda. Politics and pornography are therefore closely interwoven with the phenomenon. According to Futurism, Facebook is teaming up with a consortium of Microsoft researchers and several prominent universities for a “Deepfake Detection Challenge”. “The idea is to build a data set, with the help of human user input, that’ll help neural networks detect what is and isn’t a deepfake. The end result, if all goes well, will be a system that can reliably detect fake videos online. Similar data sets already exist for object or speech recognition, but there isn’t one specifically made for detecting deepfakes yet.” (Futurism, 5 September 2019) The winning team will get a prize, presumably a substantial sum of money. Facebook is investing a total of 10 million dollars in the competition.
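
To make the detection task concrete, here is a minimal sketch in Python (PyTorch) of the kind of binary classifier such a data set would be used to train: a small network that labels a face crop as authentic or fake. The architecture, input size, and the dummy batch are illustrative assumptions, not the consortium's actual setup.

```python
# A minimal sketch of a deepfake detector: a binary CNN that maps a face
# crop to a real/fake logit. All shapes and data here are hypothetical
# stand-ins; the actual challenge data set is not modeled.
import torch
import torch.nn as nn

class DeepfakeDetector(nn.Module):
    """Small CNN that maps a 3x128x128 face crop to one real/fake logit."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # one logit: > 0 means "fake"

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = DeepfakeDetector()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

# Stand-in batch: random tensors in place of real face crops and labels.
frames = torch.randn(8, 3, 128, 128)
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = deepfake, 0 = authentic

optimizer.zero_grad()
loss = loss_fn(model(frames), labels)
loss.backward()
optimizer.step()
print(f"training loss on dummy batch: {loss.item():.4f}")
```

The point of the challenge's data set is exactly what this toy version lacks: a large, carefully labeled collection of real and synthesized faces on which such a network could actually learn the difference.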

An AI System for Multiple-Choice Tests

According to the New York Times, the Allen Institute for Artificial Intelligence has unveiled a new system that correctly answered more than 90 percent of the questions on an eighth-grade science test and more than 80 percent on a 12th-grade exam. Is it really a breakthrough for AI technology, as the title of the article claims? That is quite controversial among experts. The newspaper is optimistic: “The system, called Aristo, is an indication that in just the past several months researchers have made significant progress in developing A.I. that can understand languages and mimic the logic and decision-making of humans.” (NYT, 4 September 2019) Aristo was built for multiple-choice tests. “It took standard exams written for students in New York, though the Allen Institute removed all questions that included pictures and diagrams.” (NYT, 4 September 2019) Some questions could be answered by simple information retrieval; there are numerous systems that access Google and Wikipedia for this purpose, including artifacts of machine ethics like the LIEBOT and the BESTBOT. Other questions, however, required logical reasoning. Perhaps Aristo will help to abolish multiple-choice tests – not so much because it can solve them, but because they are often not effective.
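
As a hedged illustration of the retrieval approach mentioned above, here is a small Python sketch that answers a multiple-choice question by scoring each option against a reference text. The miniature corpus, the question, and the overlap scoring are hypothetical stand-ins, not Aristo's actual pipeline.

```python
import re

# Hypothetical miniature corpus standing in for Wikipedia-style reference text.
corpus = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "Condensation is the process by which water vapor becomes liquid water.",
    "Igneous rock forms when molten magma cools and solidifies.",
]

def tokens(text: str) -> set:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def overlap(query: str, sentence: str) -> int:
    """Number of distinct query terms that also occur in the sentence."""
    return len(tokens(query) & tokens(sentence))

def answer(question: str, options: dict) -> str:
    """Choose the option whose pairing with the question best matches the corpus."""
    return max(
        options,
        key=lambda label: max(
            overlap(question + " " + options[label], sentence) for sentence in corpus
        ),
    )

question = "Which process changes water vapor into liquid water?"
options = {"A": "photosynthesis", "B": "condensation", "C": "solidification"}
print(answer(question, options))  # prints "B" on this toy example
```

A baseline like this works only when the answer is stated almost verbatim somewhere in the reference text; the questions that require logical reasoning are precisely the ones where such overlap scoring breaks down.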