Springer launches a new journal entitled “AI and Ethics”. This topic has been researched for several years from various perspectives, including information ethics, robot ethics (aka roboethics) and machine ethics. From the description: “AI and Ethics seeks to promote informed debate and discussion of the ethical, regulatory, and policy implications that arise from the development of AI. It will focus on how AI techniques, tools, and technologies are developing, including consideration of where these developments may lead in the future. The journal will provide opportunities for academics, scientists, practitioners, policy makers, and the public to consider how AI might affect our lives in the future, and what implications, benefits, and risks might emerge. Attention will be given to the potential intentional and unintentional misuses of the research and technology presented in articles we publish. Examples of harmful consequences include weaponization, bias in face recognition systems, and discrimination and unfairness with respect to race and gender.”
Fujitsu has developed an artificial intelligence system that could ensure that healthcare, hotel and food industry workers scrub their hands properly. This could support the fight against the COVID-19 pandemic. “The AI, which can recognize complex hand movements and can even detect when people aren’t using soap, was under development before the coronavirus outbreak for Japanese companies implementing stricter hygiene regulations … It is based on crime surveillance technology that can detect suspicious body movements.” (Reuters, 19 June 2020) Genta Suzuki, a senior researcher at the Japanese information technology company, told the news agency that the AI can’t identify people from their hands, but that it could be coupled with identity recognition technology so that companies could keep track of their employees’ washing habits. Perhaps in the future it will not be our parents who show us how to wash ourselves properly, but robots and AI systems. Or they will skip this detour altogether and wash us themselves.
IBM will stop developing or selling facial recognition software due to concerns the technology is used to support racism. This was reported by MIT Technology Review on 9 June 2020. In a letter to Congress, IBM’s CEO Arvind Krishna wrote: “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.” (Letter to Congress, 8 June 2020) The extraordinary letter “also called for new federal rules to crack down on police misconduct, and more training and education for in-demand skills to improve economic opportunities for people of color” (MIT Technology Review, 9 June 2020). A talk at Stanford University in 2018 warned against the return of physiognomy in connection with face recognition. The paper is available here.
Bodyhacking involves invasive or non-invasive interventions in the animal or human body, often in the sense of animal or human enhancement and sometimes inspired by the ideology of transhumanism. It aims at physical and psychological transformation and can result in the animal or human cyborg. Oliver Bendel wrote an article on bio- and bodyhacking for Bosch-Zünder, the legendary associate magazine that has been published since 1919. It appeared in March 2020 in ten languages, including German, English, Chinese, and Japanese. Some time ago, Oliver Bendel had already emphasized: “From the perspective of bio-, medical, technical, and information ethics, bodyhacking can be seen as an attempt to shape and improve one’s own or others’ lives and experiences. It becomes problematic as soon as social, political or economic pressure arises, for example when the wearing of a chip for storing data and for identification becomes the norm, which hardly anyone can avoid.” (Gabler Wirtschaftslexikon) He has recently published a scientific paper on the subject in the German journal HMD. More about Bosch-Zünder at www.bosch.com/de/stories/bosch-zuender-mitarbeiterzeitung/.
Face recognition in public spaces is a threat to freedom. You can defend yourself with masks or with counter-technologies. Even make-up is a possibility. Adam Harvey demonstrated this in the context of the CV Dazzle project at the hacker congress 36C3 in Leipzig. As Heise reports, he uses biological characteristics such as face color, symmetry and shadows and modifies them until they seem unnatural to algorithms. The result, according to Adam Harvey, is an “anti face”. The style tips for reclaiming privacy could be useful in Hong Kong, where face recognition is widespread and used against freedom fighters. Further information can be found on the CV Dazzle website. “CV Dazzle explores how fashion can be used as camouflage from face-detection technology, the first step in automated face recognition.” (Website CV Dazzle)
The research article “Dissecting racial bias in an algorithm used to manage the health of populations” by Ziad Obermeyer, Brian Powers, Christine Vogeli and Sendhil Mullainathan has been well received in science and the media. It was published in the journal Science on 25 October 2019. From the abstract: “Health systems rely on commercial prediction algorithms to identify and help patients with complex health needs. We show that a widely used algorithm, typical of this industry-wide approach and affecting millions of patients, exhibits significant racial bias: At a given risk score, Black patients are considerably sicker than White patients, as evidenced by signs of uncontrolled illnesses.” (Abstract) The authors suggest that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts. The journal Nature quotes Milena Gianfrancesco, an epidemiologist at the University of California, San Francisco: “We need a better way of actually assessing the health of the patients.”
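The mechanism the authors describe can be illustrated with a minimal, purely invented sketch: if a risk score is built on healthcare cost as a proxy for health need, and one group incurs lower costs for the same illness burden (for example because of unequal access to care), then patients from that group will be sicker at any given score. The patients, numbers, and the `risk_score` function below are hypothetical and do not come from the study.

```python
# Illustrative sketch of proxy bias (invented data, not the study's model):
# a toy risk score based on annual cost, the proxy, rather than health need.

def risk_score(annual_cost, max_cost=10_000):
    """Toy risk score: annual cost normalized to [0, 1]."""
    return round(annual_cost / max_cost, 2)

# Hypothetical patients: (group, number of chronic conditions, annual cost).
# Group B incurs lower costs for the same illness burden.
patients = [
    ("A", 3, 6_000),
    ("A", 5, 10_000),
    ("B", 5, 6_000),   # as sick as the sickest A patient, but scored lower
    ("B", 8, 10_000),
]

# Group patients by their risk score and compare illness burden.
by_score = {}
for group, conditions, cost in patients:
    by_score.setdefault(risk_score(cost), []).append((group, conditions))

for score, pts in sorted(by_score.items()):
    print(score, pts)
# At every risk score, the group B patient has more chronic conditions
# than the group A patient, yet the algorithm treats them as equally needy.
```

The point of the sketch is that the score itself contains no group variable at all; the bias enters solely through the choice of cost as the target the model optimizes for.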
A deepfake (or deep fake) is a picture or video created with the help of artificial intelligence that looks authentic but is not. The term also covers the methods and techniques used in this context, which rely on machine learning and especially deep learning. Deepfakes are used to create works of art and visual objects, but also means of discreditation, manipulation and propaganda. Politics and pornography are therefore closely interwoven with the phenomenon. According to Futurism, Facebook is teaming up with a consortium of Microsoft researchers and several prominent universities for a “Deepfake Detection Challenge”. “The idea is to build a data set, with the help of human user input, that’ll help neural networks detect what is and isn’t a deepfake. The end result, if all goes well, will be a system that can reliably detect faked videos online. Similar data sets already exist for object or speech recognition, but there isn’t one specifically made for detecting deepfakes yet.” (Futurism, 5 September 2019) The winning team will get a prize, presumably a larger sum of money. Facebook is investing a total of 10 million dollars in the competition.
“Robots, Empathy and Emotions” – this research project was tendered some time ago. The contract was awarded to a consortium of FHNW, ZHAW and the University of St. Gallen. The applicant, Prof. Dr. Hartmut Schulze from the FHNW School of Applied Psychology, covers the field of psychology. The co-applicant Prof. Dr. Oliver Bendel from the FHNW School of Business takes the perspective of information, robot and machine ethics, the co-applicant Prof. Dr. Maria Schubert from the ZHAW that of nursing science. The client TA-SWISS stated on its website: “What influence do robots … have on our society and on the people who interact with them? Are robots perhaps rather snitches than confidants? … What do we expect from these machines or what can we effectively expect from them? Numerous sociological, psychological, economic, philosophical and legal questions related to the present and future use and potential of robots are still open.” (Website TA-SWISS, own translation) The kick-off meeting with a high-profile accompanying group took place in Bern, the capital of Switzerland, on 26 June 2019.
CONVERSATIONS 2019 is a full-day workshop on chatbot research. It will take place on November 19, 2019 at the University of Amsterdam. From the description: “Chatbots are conversational agents which allow the user access to information and services through natural language dialogue, through text or voice. … Research is crucial in helping realize the potential of chatbots as a means of help and support, information and entertainment, social interaction and relationships. The CONVERSATIONS workshop contributes to this endeavour by providing a cross-disciplinary arena for knowledge exchange by researchers with an interest in chatbots.” The topics of interest that may be explored in the papers and at the workshop include humanlike chatbots, networks of users and chatbots, trustworthy chatbot design, and privacy and ethical issues in chatbot design and implementation. More information via conversations2019.wordpress.com/.
Robophilosophy or robot philosophy is a field of philosophy that deals with robots (hardware and software robots) as well as with enhancement options such as artificial intelligence. It is concerned not only with actual developments and their history, but also with the history of ideas, from the works of Homer and Ovid to science fiction books and movies. Disciplines such as epistemology, ontology, aesthetics and ethics, including information and machine ethics, are involved. The website robophilosophy.com or robophilosophy.net is operated by Oliver Bendel, a robot philosopher who lives and works in Switzerland. Guest contributions are welcome. They should be strictly scientific and should not advertise companies.