The Birth of the Morality Menu

The idea of a morality menu (MOME) was born in 2018 in the context of machine ethics. It is intended to make it possible to transfer the morality of a person to a machine. A display shows different rules of behaviour, which the user can activate or deactivate with sliders. Oliver Bendel developed two design studies, one for an animal-friendly vacuum cleaning robot (LADYBIRD), the other for a voicebot like Google Duplex. At the end of 2018, he announced a project at the School of Business FHNW. Three students, Ozan Firat, Levin Padayatty and Yusuf Or, implemented a morality menu for a chatbot called MOBO from June 2019 to January 2020. The user enters personal information and then activates or deactivates nine different rules of conduct. MOBO compliments or does not compliment, responds with or without prejudice, and threatens or does not threaten the interlocutor. It responds to each user individually, uses his or her name, and addresses him or her formally or informally, depending on the setting. A video of the MOBO-MOME is available here.
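To make the principle concrete, the following minimal Python sketch models such a menu as a set of behaviour rules that can be switched on or off and that condition the bot's replies. The rule names and the greeting logic are illustrative assumptions, not the actual MOBO implementation.

```python
# Minimal sketch of a morality menu: rule names and defaults are
# hypothetical, not taken from the actual MOBO implementation.
from dataclasses import dataclass, field

@dataclass
class MoralityMenu:
    # Each slider maps one rule of conduct to on/off.
    rules: dict = field(default_factory=lambda: {
        "compliment_user": True,
        "refrain_from_threats": True,
        "use_formal_address": False,
    })

    def is_active(self, rule: str) -> bool:
        return self.rules.get(rule, False)

def greet(menu: MoralityMenu, name: str) -> str:
    # Address the user formally or informally, depending on the setting.
    text = f"Good day, {name}." if menu.is_active("use_formal_address") else f"Hi {name}!"
    if menu.is_active("compliment_user"):
        text += " Nice to talk to you again."
    return text

menu = MoralityMenu()
menu.rules["use_formal_address"] = True  # the user moves a slider
print(greet(menu, "Ada"))
```

Each toggle transfers one aspect of the user's own morality (or etiquette) to the machine's behaviour.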

Another Animal-friendly Machine

Between June 2019 and January 2020, the project HAPPY HEDGEHOG (HHH) was implemented at the School of Business FHNW. The initiator and client was Oliver Bendel. In the context of machine ethics, the students Emanuel Graf, Kevin Bollier, Michel Beugger and Vay Lien Chang developed the prototype of a lawnmower robot that stops working as soon as it discovers a hedgehog. HHH has a thermal imaging camera: if it encounters a warm object, it examines it more closely using image recognition; at night, a lamp mounted on top assists with this. After training with hundreds of photos, HHH can identify a hedgehog quite accurately. Firstly, another moral machine has been created in the laboratory; secondly, the team offers a possible solution to a problem that frequently occurs in practice: commercial lawnmower robots often kill baby hedgehogs in the dark. HAPPY HEDGEHOG could help save them. The video on youtu.be/ijIQ8lBygME shows it without casing; a photo with casing can be found here. The robot stands in the tradition of LADYBIRD, another animal-friendly machine.
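The two-stage check described above might look roughly as follows in Python; the heat threshold, the sensor interface and the classifier are stand-ins, since the prototype's actual code is not reproduced here.

```python
import random

HEAT_THRESHOLD_C = 30.0  # assumed threshold for a "warm object"

def read_thermal_frame():
    # Stand-in for the thermal imaging camera: a small grid of temperatures.
    return [[random.uniform(15.0, 35.0) for _ in range(8)] for _ in range(6)]

def warm_object_present(frame) -> bool:
    # Stage 1: does any region exceed the heat threshold?
    return any(t >= HEAT_THRESHOLD_C for row in frame for t in row)

def classified_as_hedgehog() -> bool:
    # Stage 2: stand-in for the image-recognition model trained on
    # hundreds of hedgehog photos (the lamp assists at night).
    return random.random() > 0.5

def mowing_step() -> str:
    if warm_object_present(read_thermal_frame()) and classified_as_hedgehog():
        return "stop"   # hedgehog suspected: halt immediately
    return "mow"        # no hedgehog detected: continue mowing

print(mowing_step())
```

Using the cheap thermal check as a gate before the more expensive image recognition keeps the robot responsive while mowing.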

Opportunities and Risks of Facial Recognition

The book chapter “The BESTBOT Project” by Oliver Bendel, David Studer and Bradley Richards was published on 31 December 2019. It is part of the 2nd edition of the “Handbuch Maschinenethik”, edited by Oliver Bendel. From the abstract: “The young discipline of machine ethics both studies and creates moral (or immoral) machines. The BESTBOT is a chatbot that recognizes problems and conditions of the user with the help of text analysis and facial recognition and reacts morally to them. It can be seen as a moral machine with some immoral implications. The BESTBOT has two direct predecessor projects, the GOODBOT and the LIEBOT. Both had room for improvement and advancement; thus, the BESTBOT project used their findings as a basis for its development and realization. Text analysis and facial recognition in combination with emotion recognition have proven to be powerful tools for problem identification and are part of the new prototype. The BESTBOT enriches machine ethics as a discipline and can solve problems in practice. At the same time, with new solutions of this kind come new problems, especially with regard to privacy and informational autonomy, which information ethics must deal with.” (Abstract) The book chapter can be downloaded from link.springer.com/referenceworkentry/10.1007/978-3-658-17484-2_32-1.
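The interplay of text analysis and facial emotion recognition for problem identification can be pictured, very roughly, as follows; the keywords, emotion labels and responses are illustrative assumptions, not the BESTBOT's actual logic.

```python
# Illustrative sketch only: the real BESTBOT's keyword lists, emotion
# labels and reactions are not reproduced here.
DISTRESS_KEYWORDS = {"sad", "alone", "hopeless", "afraid"}
DISTRESS_EMOTIONS = {"sadness", "fear", "anger"}

def text_signals_problem(message: str) -> bool:
    return any(word in message.lower() for word in DISTRESS_KEYWORDS)

def face_signals_problem(emotion_label: str) -> bool:
    # Assumes an upstream facial-emotion-recognition step yields a label.
    return emotion_label in DISTRESS_EMOTIONS

def react(message: str, emotion_label: str) -> str:
    # React morally: escalate when both channels indicate distress.
    if text_signals_problem(message) and face_signals_problem(emotion_label):
        return "I am worried about you. Would you like the number of a helpline?"
    if text_signals_problem(message) or face_signals_problem(emotion_label):
        return "You seem troubled. Do you want to talk about it?"
    return "Glad to hear from you!"

print(react("I feel so alone lately", "sadness"))
```

The same facial channel that enables such reactions is also what raises the privacy and informational autonomy concerns mentioned in the abstract.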

AI Workshop at the University of Potsdam

In 2018, Dr. Yuefang Zhou and Prof. Dr. Martin Fischer initiated the first international workshop on intimate human-robot relations at the University of Potsdam, “which resulted in the publication of an edited book on developments in human-robot intimate relationships”. This year, Prof. Dr. Martin Fischer, Prof. Dr. Rebecca Lazarides, and Dr. Yuefang Zhou are organizing the second edition. “As interest in the topic of humanoid AI continues to grow, the scope of the workshop has widened. During this year’s workshop, international experts from a variety of different disciplines will share their insights on motivational, social and cognitive aspects of learning, with a focus on humanoid intelligent tutoring systems and social learning companions/robots.” (Website Embracing AI) The international workshop “Learning from Humanoid AI: Motivational, Social & Cognitive Perspectives” will take place on 29 and 30 November 2019 at the University of Potsdam. Keynote speakers are Prof. Dr. Tony Belpaeme, Prof. Dr. Oliver Bendel, Prof. Dr. Angelo Cangelosi, Dr. Gabriella Cortellessa, Dr. Kate Devlin, Prof. Dr. Verena Hafner, Dr. Nicolas Spatola, Dr. Jessica Szczuka, and Prof. Dr. Agnieszka Wykowska. Further information is available at embracingai.wordpress.com/.

From Logic Programming to Machine Ethics

Luís Moniz Pereira is one of the best known and most active machine ethicists in the world. Together with his colleague Ari Saptawijaya, he wrote the article “From Logic Programming to Machine Ethics” for the “Handbuch Maschinenethik” (“Handbook Machine Ethics”). From the abstract: “This chapter investigates the appropriateness of Logic Programming-based reasoning to machine ethics, an interdisciplinary field of inquiry that emerges from the need of imbuing autonomous agents with the capacity for moral decision making. The first part of the chapter aims at identifying morality viewpoints, as studied in moral philosophy and psychology, which are amenable to computational modeling, and then mapping them to appropriate Logic Programming-based reasoning features. The identified viewpoints are covered by two morality themes: moral permissibility and the dual-process model. In the second part, various Logic Programming-based reasoning features are applied to model these identified morality viewpoints, via classic moral examples taken off-the-shelf from the literature. For this purpose, our QUALM system mainly employs a combination of the Logic Programming features of abduction, updating, and counterfactuals. These features are all supported jointly by Logic Programming tabling mechanisms. The applications are also supported by other existing Logic Programming based systems, featuring preference handling and probabilistic reasoning, which complement QUALM in addressing the morality viewpoints in question. Throughout the chapter, many references to our published work are given, providing further examples and details about each topic. Thus, this chapter can be envisaged as an entry point survey on the employment of Logic Programming for knowledge modelling and technically implementing machine ethics.” (Abstract) Springer VS published the “Handbuch Maschinenethik” in October 2019. The editor is Oliver Bendel (Zurich, Switzerland).
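QUALM itself is built on Logic Programming (abduction, updating, counterfactuals); purely as an illustration of one theme from the abstract, moral permissibility, the following Python sketch encodes a simplified doctrine of double effect for the classic trolley cases. The encoding is an assumption for illustration, not the chapter's formalization.

```python
from dataclasses import dataclass

@dataclass
class Action:
    saves: int           # people saved by the action
    harms: int           # people harmed by the action
    harm_is_means: bool  # is the harm the means to the good end?

def permissible(a: Action) -> bool:
    # Simplified doctrine of double effect: harm may be a foreseen
    # side effect but not an intended means, and the good achieved
    # must outweigh the harm done.
    return not a.harm_is_means and a.saves > a.harms

switch = Action(saves=5, harms=1, harm_is_means=False)  # divert the trolley
push   = Action(saves=5, harms=1, harm_is_means=True)   # push a bystander

print(permissible(switch))  # True: the harm is a side effect
print(permissible(push))    # False: the harm is the means
```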

Learning How to Behave

In October 2019, Springer VS published the “Handbuch Maschinenethik” (“Handbook Machine Ethics”) with German and English contributions. The editor is Oliver Bendel (Zurich, Switzerland). One of the articles was written by Bertram F. Malle (Brown University, Rhode Island) and Matthias Scheutz (Tufts University, Massachusetts). From the abstract: “We describe a theoretical framework and recent research on one key aspect of robot ethics: the development and implementation of a robot’s moral competence. As autonomous machines take on increasingly social roles in human communities, these machines need to have some level of moral competence to ensure safety, acceptance, and justified trust. We review the extensive and complex elements of human moral competence and ask how analogous competences could be implemented in a robot. We propose that moral competence consists of five elements, two constituents (moral norms and moral vocabulary) and three activities (moral judgment, moral action, and moral communication). A robot’s computational representations of social and moral norms is a prerequisite for all three moral activities. However, merely programming in advance the vast network of human norms is impossible, so new computational learning algorithms are needed that allow robots to acquire and update the context-specific and graded norms relevant to their domain of deployment. Moral vocabulary is needed primarily for moral communication, which expresses moral judgments of others’ violations and explains one’s own moral violations – to justify them, apologize, or declare intentions to do better. Current robots have at best rudimentary moral competence, but with improved learning and reasoning they may begin to show the kinds of capacities that humans will expect of future social robots.” (Abstract “Handbuch Maschinenethik”). The book is available via www.springer.com.
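The context-specific, graded norms that the abstract calls for could be represented, very roughly, along the following lines; the structure, thresholds and learning rule are illustrative assumptions, not Malle and Scheutz's implementation.

```python
from collections import defaultdict

class NormStore:
    # norms[context][action] -> graded strength in [0, 1],
    # where low values mark disapproved actions.
    def __init__(self):
        self.norms = defaultdict(dict)

    def update(self, context: str, action: str, evidence: float, lr: float = 0.2):
        # Nudge the stored strength toward new evidence, so norms stay
        # revisable instead of being fully programmed in advance.
        old = self.norms[context].get(action, 0.5)
        self.norms[context][action] = old + lr * (evidence - old)

    def judge(self, context: str, action: str) -> str:
        s = self.norms[context].get(action, 0.5)
        return "violation" if s < 0.3 else "permitted" if s > 0.7 else "uncertain"

store = NormStore()
for _ in range(10):
    store.update("library", "speak_loudly", 0.0)  # repeated disapproval
print(store.judge("library", "speak_loudly"))     # -> "violation"
```

Such graded judgments could then feed the moral communication element, for instance when the robot explains why an action counts as a violation.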