Towards a Proxy Machine

“Once we place so-called ‘social robots’ into the social practices of our everyday lives and lifeworlds, we create complex, and possibly irreversible, interventions in the physical and semantic spaces of human culture and sociality. The long-term socio-cultural consequences of these interventions is currently impossible to gauge.” (Website Robophilosophy Conference) With these words the next Robophilosophy conference was announced. It was to take place in Aarhus, Denmark, from 18 to 21 August 2020, but due to the COVID-19 pandemic it is being conducted online. One lecture will be given by Oliver Bendel. The abstract of the paper “The Morality Menu Project” states: “Machine ethics produces moral machines. The machine morality is usually fixed. Another approach is the morality menu (MOME). With this, owners or users transfer their own morality onto the machine, for example a social robot. The machine acts in the same way as they would act, in detail. A team at the School of Business FHNW implemented a MOME for the MOBO chatbot. In this article, the author introduces the idea of the MOME, presents the MOBO-MOME project and discusses advantages and disadvantages of such an approach. It turns out that a morality menu can be a valuable extension for certain moral machines.” In 2018, Hiroshi Ishiguro, Guy Standing, Catelijne Muller, Joanna Bryson, and Oliver Bendel were the keynote speakers. In 2020, Catrin Misselhorn, Selma Sabanovic, and Shannon Vallor will be presenting. More information is available via conferences.au.dk/robo-philosophy/.

A Morality Markup Language

There are several markup languages for different applications. The best known is certainly the Hypertext Markup Language (HTML). In the field of Artificial Intelligence (AI), AIML has established itself; for synthetic voices, SSML is used. The question is whether the possibilities of such languages with regard to autonomous systems are already exhausted. In the article “The Morality Menu”, Prof. Dr. Oliver Bendel proposed a Morality Markup Language (MOML) for the first time. In 2019, a student research project supervised by the information and machine ethicist investigated the possibilities of existing markup languages with regard to moral aspects and asked whether a MOML is justified. The results were presented in January 2020. A bachelor thesis at the School of Business FHNW will go one step further from the end of March 2020: in it, the basic features of a Morality Markup Language are to be developed, its structure and specific commands proposed and described, and the application areas, advantages and disadvantages of such a language presented. The client of the work is Prof. Dr. Oliver Bendel; the supervisor is Dr. Elzbieta Pustulka.
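
What such a language could look like is still open. The following is a minimal sketch in Python, assuming an XML-like syntax in which rules of behaviour carry an on/off attribute; all element and attribute names are hypothetical and not part of any published MOML specification.

import xml.etree.ElementTree as ET

# Hypothetical MOML document: each rule of behaviour can be switched
# on or off, echoing the morality menu idea described above.
moml_doc = """
<moml version="0.1">
  <agent type="chatbot" name="MOBO"/>
  <rules>
    <rule id="compliments" active="true">The machine may compliment its users.</rule>
    <rule id="threats" active="false">The machine may threaten its users.</rule>
    <rule id="formal-address" active="true">The machine addresses its users formally.</rule>
  </rules>
</moml>
"""

root = ET.fromstring(moml_doc)
for rule in root.iter("rule"):
    state = "on" if rule.get("active") == "true" else "off"
    print(f"{rule.get('id')} [{state}]: {rule.text}")

A machine or its control software would read such a file and adjust its behaviour accordingly; which structures and commands are actually suitable is precisely what the bachelor thesis is to work out.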

Moral and Immoral Machines

Since 2012, Oliver Bendel has invented 13 artifacts of machine ethics. Nine of them have actually been implemented, including LADYBIRD, the animal-friendly vacuum cleaning robot, and LIEBOT, the chatbot that can systematically lie. Both have achieved a certain popularity. The information and machine ethicist is convinced that ethics does not necessarily have to produce the good; it should explore the good and the evil and, like any science, serve to gain knowledge. Accordingly, he builds both moral and immoral machines. The immoral ones, however, he keeps in his laboratory. In 2020, if the project is accepted, HUGGIE will see the light of day. The project idea is to create a social robot that contributes directly to a good life and economic success by touching and hugging people, especially customers. HUGGIE should be able to warm up certain parts of its body, and it should be possible to change the materials it is covered with. One research question will be: What are the possibilities besides warmth and softness? Are visual stimuli (including on displays), vibrations, noises, voices, etc. important for a successful hug? All moral and immoral machines created between 2012 and 2020 are compiled in a new illustration, which is shown here for the first time.

HTML, SSML, AIML – and MOML?

On behalf of Prof. Dr. Oliver Bendel, Alessandro Spadola, a student at the School of Business FHNW, investigated in the context of machine ethics whether markup languages such as HTML, SSML and AIML can be used to transfer moral aspects to machines or websites, and whether there is room for a new language that could be called Morality Markup Language (MOML). He presented his results in January 2020. From the management summary: “However, the idea that owners should be able to transmit their own personal morality has been explored by Bendel, who has proposed an open way of transferring morality to machines using a markup language. This research paper analyses whether a new markup language could be used to imbue machines with their owners’ sense of morality. This work begins with an analysis how a markup language is structured, describes the current well-known markup languages and analyses their differences. In doing so, it reveals that the main difference between the well-known markup languages lies in the different goals they pursue which at the same time forms the subject, which is marked up. This thesis then examines the possibility of transferring personal morality with the current languages available and discusses whether there is a need for a further language for this purpose. As is shown, morality can only be transmitted with increased effort and the knowledge of human perception because it is only possible to transmit them by interacting with the senses of the people. The answer to the question of whether there is room for another markup language is ‘yes’, since none of the languages analysed offer a simple way to transmit morality, and simplicity is a key factor in markup languages. Markup languages all have clear goals, but none have the goal of transferring and displaying morality. The language that could assume this task is ‘Morality Markup’, and the present work describes how such a language might look.” (Management Summary) The promising results will be taken up in the course of the year by another student in a bachelor thesis.
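
The gap the summary points to can be illustrated with a small Python sketch: a moral setting can be pressed into AIML today, but the moral rule itself disappears and only its behavioural consequence survives as a dialogue category. The function and its parameters are hypothetical; only the AIML category/pattern/template structure is standard.

def moml_rule_to_aiml(pattern: str, polite_reply: str, blunt_reply: str,
                      compliments_active: bool) -> str:
    """Render one AIML category whose template depends on a moral setting."""
    # The moral rule only shows up indirectly, as the choice of template text.
    template = polite_reply if compliments_active else blunt_reply
    return (
        "<category>\n"
        f"  <pattern>{pattern}</pattern>\n"
        f"  <template>{template}</template>\n"
        "</category>"
    )

print(moml_rule_to_aiml("HOW DO I LOOK", "You look great today!",
                        "I do not comment on appearance.", compliments_active=True))

A dedicated MOML would instead make the rule itself explicit and machine-readable, rather than burying it in dialogue templates.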

The Birth of the Morality Menu

The idea of a morality menu (MOME) was born in 2018 in the context of machine ethics. It is intended to make it possible to transfer a person's morality to a machine. On a display you see different rules of behaviour and can activate or deactivate them with sliders. Oliver Bendel developed two design studies, one for an animal-friendly vacuum cleaning robot (LADYBIRD), the other for a voicebot like Google Duplex. At the end of 2018, he announced a project at the School of Business FHNW. Three students – Ozan Firat, Levin Padayatty and Yusuf Or – implemented a morality menu for a chatbot called MOBO from June 2019 to January 2020. The user enters personal information and then activates or deactivates nine different rules of conduct. MOBO compliments or does not compliment, responds with or without prejudice, and threatens or does not threaten the interlocutor. It responds to each user individually, says his or her name, and addresses him or her formally or informally, depending on the setting. A video of the MOBO-MOME is available here.
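
The mechanics can be indicated with a small Python sketch: the menu is a set of switches, and the chatbot's reply is assembled according to their state. The rule names follow the MOBO examples above (compliments, threats, formal or informal address); the reply texts and all identifiers are hypothetical.

from dataclasses import dataclass

@dataclass
class MoralityMenu:
    # Three of the nine MOBO-style rules of conduct, as on/off switches.
    compliments: bool = True
    threats: bool = False
    formal_address: bool = True

def greet(menu: MoralityMenu, name: str) -> str:
    # The reply is assembled according to the state of the switches.
    parts = [f"Good day, {name}." if menu.formal_address else f"Hi {name}!"]
    if menu.compliments:
        parts.append("It is always a pleasure to talk to you.")
    if menu.threats:
        parts.append("Answer quickly, or this conversation is over.")
    return " ".join(parts)

print(greet(MoralityMenu(), "Anna"))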

Another Animal-friendly Machine

Between June 2019 and January 2020, the project HAPPY HEDGEHOG (HHH) was implemented at the School of Business FHNW. The initiator and client was Oliver Bendel. In the context of machine ethics, the students Emanuel Graf, Kevin Bollier, Michel Beugger and Vay Lien Chang developed the prototype of a lawnmower robot that stops working as soon as it discovers a hedgehog. HHH has a thermal imaging camera; if it encounters a warm object, it examines it further using image recognition. At night, a lamp mounted on top assists the camera. After training with hundreds of photos, HHH can identify a hedgehog quite accurately. Firstly, another moral machine has been created in the laboratory; secondly, the team provides a possible solution to a problem that frequently occurs in practice: commercial lawnmower robots often kill baby hedgehogs in the dark. HAPPY HEDGEHOG could help save them. The video on youtu.be/ijIQ8lBygME shows the robot without its casing; a photo with casing can be found here. The robot stands in the tradition of LADYBIRD, another animal-friendly machine.
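
The decision logic described above can be condensed into a short Python sketch: check for a warm object, classify it, and stop if a hedgehog is recognized. The stub functions merely simulate the thermal imaging camera and the trained classifier; this is not the students' actual code.

import random

def detect_warm_object() -> bool:
    # Stand-in for the thermal imaging camera.
    return random.random() < 0.1

def classify_image() -> str:
    # Stand-in for the image recognition trained on hundreds of photos.
    return random.choice(["hedgehog", "stone", "pile of leaves"])

def mow_step(is_dark: bool) -> str:
    lamp_on = is_dark  # the top-mounted lamp assists recognition at night
    if not detect_warm_object():
        return "keep mowing"
    if classify_image() == "hedgehog":
        return "stop"  # a hedgehog was recognized; the robot stops working
    return "keep mowing"

print(mow_step(is_dark=True))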

Opportunities and Risks of Facial Recognition

The book chapter “The BESTBOT Project” by Oliver Bendel, David Studer and Bradley Richards was published on 31 December 2019. It is part of the 2nd edition of the “Handbuch Maschinenethik”, edited by Oliver Bendel. From the abstract: “The young discipline of machine ethics both studies and creates moral (or immoral) machines. The BESTBOT is a chatbot that recognizes problems and conditions of the user with the help of text analysis and facial recognition and reacts morally to them. It can be seen as a moral machine with some immoral implications. The BESTBOT has two direct predecessor projects, the GOODBOT and the LIEBOT. Both had room for improvement and advancement; thus, the BESTBOT project used their findings as a basis for its development and realization. Text analysis and facial recognition in combination with emotion recognition have proven to be powerful tools for problem identification and are part of the new prototype. The BESTBOT enriches machine ethics as a discipline and can solve problems in practice. At the same time, with new solutions of this kind come new problems, especially with regard to privacy and informational autonomy, which information ethics must deal with.” (Abstract) The book chapter can be downloaded from link.springer.com/referenceworkentry/10.1007/978-3-658-17484-2_32-1.
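
How text analysis and face-based emotion recognition might jointly trigger a moral reaction can be indicated with a toy Python sketch; the keyword list, emotion labels and reactions are hypothetical simplifications, not the prototype's actual logic.

DISTRESS_WORDS = {"sad", "alone", "hopeless", "afraid"}

def react(message: str, facial_emotion: str) -> str:
    # Two independent signals: what the user writes and what the face shows.
    text_flag = bool(DISTRESS_WORDS & set(message.lower().split()))
    face_flag = facial_emotion in {"sadness", "fear"}
    if text_flag and face_flag:
        return "escalate: point the user to professional help"
    if text_flag or face_flag:
        return "probe: ask the user how he or she is feeling"
    return "continue the normal conversation"

print(react("I feel so alone lately", "sadness"))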

AI Workshop at the University of Potsdam

In 2018, Dr. Yuefang Zhou and Prof. Dr. Martin Fischer initiated the first international workshop on intimate human-robot relations at the University of Potsdam, “which resulted in the publication of an edited book on developments in human-robot intimate relationships”. This year, Prof. Dr. Martin Fischer, Prof. Dr. Rebecca Lazarides, and Dr. Yuefang Zhou are organizing the second edition. “As interest in the topic of humanoid AI continues to grow, the scope of the workshop has widened. During this year’s workshop, international experts from a variety of different disciplines will share their insights on motivational, social and cognitive aspects of learning, with a focus on humanoid intelligent tutoring systems and social learning companions/robots.” (Website Embracing AI) The international workshop “Learning from Humanoid AI: Motivational, Social & Cognitive Perspectives” will take place on 29 and 30 November 2019 at the University of Potsdam. Keynote speakers are Prof. Dr. Tony Belpaeme, Prof. Dr. Oliver Bendel, Prof. Dr. Angelo Cangelosi, Dr. Gabriella Cortellessa, Dr. Kate Devlin, Prof. Dr. Verena Hafner, Dr. Nicolas Spatola, Dr. Jessica Szczuka, and Prof. Dr. Agnieszka Wykowska. Further information is available at embracingai.wordpress.com/.

From Logic Programming to Machine Ethics

Luís Moniz Pereira is one of the best known and most active machine ethicists in the world. Together with his colleague Ari Saptawijaya he wrote the article “From Logic Programming to Machine Ethics” for the “Handbuch Maschinenethik” (“Handbook Machine Ethics”). From the abstract: “This chapter investigates the appropriateness of Logic Programming-based reasoning to machine ethics, an interdisciplinary field of inquiry that emerges from the need of imbuing autonomous agents with the capacity for moral decision making. The first part of the chapter aims at identifying morality viewpoints, as studied in moral philosophy and psychology, which are amenable to computational modeling, and then mapping them to appropriate Logic Programming-based reasoning features. The identified viewpoints are covered by two morality themes: moral permissibility and the dual-process model. In the second part, various Logic Programming-based reasoning features are applied to model these identified morality viewpoints, via classic moral examples taken off-the-shelf from the literature. For this purpose, our QUALM system mainly employs a combination of the Logic Programming features of abduction, updating, and counterfactuals. These features are all supported jointly by Logic Programming tabling mechanisms. The applications are also supported by other existing Logic Programming based systems, featuring preference handling and probabilistic reasoning, which complement QUALM in addressing the morality viewpoints in question. Throughout the chapter, many references to our published work are given, providing further examples and details about each topic. Thus, this chapter can be envisaged as an entry point survey on the employment of Logic Programming for knowledge modelling and technically implementing machine ethics.” (Abstract) Springer VS published the “Handbuch Maschinenethik” in October 2019. The editor is Oliver Bendel (Zurich, Switzerland).
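
The dual-process theme can be hinted at in a few lines, although QUALM itself relies on Logic Programming features such as abduction, updating and counterfactuals rather than on Python. The sketch below merely contrasts a fast rule lookup with a slower weighing of consequences; the action names and numbers are hypothetical.

FORBIDDEN = {"use_person_as_means"}  # hard constraints, checked first

def permissible(action: str, harm: int, benefit: int) -> bool:
    if action in FORBIDDEN:   # fast process: immediate rule lookup
        return False
    return benefit > harm     # slow process: weighing of consequences

# Trolley-style contrast: diverting is weighed, instrumental harm is ruled out.
print(permissible("divert_trolley", harm=1, benefit=5))       # True
print(permissible("use_person_as_means", harm=1, benefit=5))  # False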

Learning How to Behave

In October 2019, Springer VS published the “Handbuch Maschinenethik” (“Handbook Machine Ethics”) with German and English contributions. The editor is Oliver Bendel (Zurich, Switzerland). One of the articles was written by Bertram F. Malle (Brown University, Rhode Island) and Matthias Scheutz (Tufts University, Massachusetts). From the abstract: “We describe a theoretical framework and recent research on one key aspect of robot ethics: the development and implementation of a robot’s moral competence. As autonomous machines take on increasingly social roles in human communities, these machines need to have some level of moral competence to ensure safety, acceptance, and justified trust. We review the extensive and complex elements of human moral competence and ask how analogous competences could be implemented in a robot. We propose that moral competence consists of five elements, two constituents (moral norms and moral vocabulary) and three activities (moral judgment, moral action, and moral communication). A robot’s computational representations of social and moral norms is a prerequisite for all three moral activities. However, merely programming in advance the vast network of human norms is impossible, so new computational learning algorithms are needed that allow robots to acquire and update the context-specific and graded norms relevant to their domain of deployment. Moral vocabulary is needed primarily for moral communication, which expresses moral judgments of others’ violations and explains one’s own moral violations – to justify them, apologize, or declare intentions to do better. Current robots have at best rudimentary moral competence, but with improved learning and reasoning they may begin to show the kinds of capacities that humans will expect of future social robots.” (Abstract “Handbuch Maschinenethik”) The book is available via www.springer.com.
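
The call for context-specific and graded norms can be made concrete with a small Python sketch: each norm carries a strength per context, and a simple learning rule nudges that strength toward observed social feedback. The representation and the update rule are hypothetical illustrations, not the authors' implementation.

from collections import defaultdict

norms = defaultdict(dict)          # context -> norm -> strength in [0, 1]
norms["library"]["be_quiet"] = 0.9
norms["party"]["be_quiet"] = 0.2   # the same norm, graded by context

def update_norm(context: str, norm: str, feedback: float, lr: float = 0.1) -> None:
    # Nudge the norm's strength toward observed approval (1.0) or disapproval (0.0).
    old = norms[context].get(norm, 0.5)
    norms[context][norm] = old + lr * (feedback - old)

update_norm("library", "be_quiet", feedback=1.0)
print(round(norms["library"]["be_quiet"], 2))  # 0.91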