Towards a Proxy Machine

“Once we place so-called ‘social robots’ into the social practices of our everyday lives and lifeworlds, we create complex, and possibly irreversible, interventions in the physical and semantic spaces of human culture and sociality. The long-term socio-cultural consequences of these interventions is currently impossible to gauge.” (Website Robophilosophy Conference) With these words the next Robophilosophy conference was announced. It would have taken place in Aarhus, Denmark, from 18 to 21 August 2020, but due to the COVID-19 pandemic it is being conducted online. One lecture will be given by Oliver Bendel. The abstract of the paper “The Morality Menu Project” states: “Machine ethics produces moral machines. The machine morality is usually fixed. Another approach is the morality menu (MOME). With this, owners or users transfer their own morality onto the machine, for example a social robot. The machine acts in the same way as they would act, in detail. A team at the School of Business FHNW implemented a MOME for the MOBO chatbot. In this article, the author introduces the idea of the MOME, presents the MOBO-MOME project and discusses advantages and disadvantages of such an approach. It turns out that a morality menu can be a valuable extension for certain moral machines.” In 2018 Hiroshi Ishiguro, Guy Standing, Catelijne Muller, Joanna Bryson, and Oliver Bendel had been keynote speakers. In 2020, Catrin Misselhorn, Selma Sabanovic, and Shannon Vallor will be presenting. More information is available via conferences.au.dk/robo-philosophy/.
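To make the idea of a morality menu more concrete, here is a minimal sketch of how such settings might be wired into a chatbot. The class, the setting names and the rule are invented for illustration; they are not taken from the actual MOBO-MOME implementation.

```python
# Minimal sketch of a morality menu (MOME) for a chatbot.
# All setting names and the rule below are hypothetical illustrations,
# not the actual options of the MOBO-MOME project.

DEFAULT_MENU = {
    "may_use_informal_language": False,   # address the user casually?
    "may_point_out_lies": True,           # confront the user about untruths?
    "may_comment_on_appearance": False,   # remark on how the user looks?
}

class MoralityMenu:
    def __init__(self, settings=None):
        # The owner transfers his or her own morality by overriding defaults.
        self.settings = {**DEFAULT_MENU, **(settings or {})}

    def allows(self, action: str) -> bool:
        return self.settings.get(action, False)

def respond(menu: MoralityMenu, user_says: str) -> str:
    # The bot consults the menu before choosing a reply.
    if "do i look" in user_says.lower() and not menu.allows("may_comment_on_appearance"):
        return "I would rather talk about something else."
    return "Tell me more."

# An owner who wants a more direct bot flips the relevant switches.
menu = MoralityMenu({"may_comment_on_appearance": True})
print(respond(menu, "Do I look good today?"))
```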

Towards Animal-machine Interaction

Animal-machine interaction (AMI) and animal-computer interaction (ACI) are increasingly important research areas. For years, semi-autonomous and autonomous machines have been multiplying all over the world, not only in factories, but also in outdoor areas and in households. Robots in agriculture and service robots, some with artificial intelligence, encounter wild animals, farm animals and pets. Jackie Snow, who writes for the New York Times, National Geographic, and the Wall Street Journal, talked to several people on the subject last year. In an article for Fast Company, she quoted the ethicists Oliver Bendel (“Handbuch Maschinenethik”) and Peter Singer (“Animal Liberation”). Clara Mancini (“Animal-computer interaction: A manifesto”) also expressed her point of view. The article, titled “AI’s next ethical challenge: how to treat animals”, can be accessed here. Today, research is also devoted to social robots. One question is how animals react to them. Human-computer interaction (HCI) experts from Yale University recently looked into this topic. Another question is whether we can create social robots specifically for animals. First steps have been taken with toys and automatic feeders for pets. Could a social robot replace a contact person for weeks on end? What features should it have? In this context, we must pay attention to animal welfare from the outset. Some animals will love the new freedom, others will hate it.

Online Survey on Hugs by Robots

Hugs by robots are possible if they have two arms, as with Pepper and P-Care; with only one arm they are possible in a restricted way. However, the hugs and touches of robots feel different from those of humans. When warmth and softness are added, as in the HuggieBot project, the effect improves, but it is still not the same. In a hug it is important that another person hugs us (hugging ourselves is something completely different), and that this person stands in a certain relationship to us. He or she may be a stranger to us, but there must be trust or desire. Whether this is the case with a robot must be assessed on a case-by-case basis. A multi-stage project named HUGGIE is currently underway at the School of Business FHNW under the supervision of Prof. Dr. Oliver Bendel. Ümmühan Korucu and Leonie Brogle started with an online survey that targets the entire German-speaking world. The aim is to gain insights into how people of all ages and sexes judge a hug by a robot. In crises and catastrophes involving prolonged isolation, such as the COVID-19 pandemic, proxy hugs of this kind could well play a role. Prisons and long journeys through space are also possible fields of application. Click here for the survey (only in German): ww3.unipark.de/uc/HUGGIE/

A Morality Markup Language

There are several markup languages for different applications. The best known is certainly the Hypertext Markup Language (HTML). AIML has established itself in the field of Artificial Intelligence (AI). For synthetic voices SSML is used. The question is whether the possibilities of such languages with regard to autonomous systems have been exhausted. In the article “The Morality Menu”, Prof. Dr. Oliver Bendel proposed a Morality Markup Language (MOML) for the first time. In 2019, a student research project supervised by the information and machine ethicist investigated the possibilities of existing languages with regard to moral aspects and asked whether a MOML is justified. The results were presented in January 2020. A bachelor thesis at the School of Business FHNW will go one step further from the end of March 2020. In it, the basic features of a Morality Markup Language are to be developed: the basic structure and specific commands will be proposed and described, and the application areas, advantages and disadvantages of such a markup language are to be presented. The client of the work is Prof. Dr. Oliver Bendel; the supervisor is Dr. Elzbieta Pustulka.
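What such a language could look like is precisely what the thesis is to work out. As a purely hypothetical illustration, a MOML document might resemble the following XML-style snippet, read here with Python; all tag and attribute names are invented.

```python
# Purely hypothetical sketch of what a MOML document might look like and
# how a machine could read it. All tag and attribute names are invented;
# the actual structure and commands are the subject of the thesis.
import xml.etree.ElementTree as ET

MOML_DOC = """
<moml version="0.1">
  <rule action="lie" allowed="never"/>
  <rule action="collect-personal-data" allowed="ask-first"/>
  <rule action="interrupt-user" allowed="emergency-only"/>
</moml>
"""

root = ET.fromstring(MOML_DOC)
policy = {rule.get("action"): rule.get("allowed") for rule in root.findall("rule")}
print(policy)
# {'lie': 'never', 'collect-personal-data': 'ask-first', 'interrupt-user': 'emergency-only'}
```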

SPACE THEA

Space travel includes travel and transport to, through and from space for civil or military purposes. The take-off on earth is usually done with a launch vehicle. The spaceship, like the lander, may be manned or unmanned. The target can be the orbit of a celestial body, or a satellite, planet or comet. Humans have been to the moon several times; now they want to go to Mars. The astronaut will not greet the robots that are already there as if he or she had been lonely for months, for on the spaceship he or she was in the best of company: SPACE THEA spoke to him or her every day. When she noticed that the astronaut had problems, she changed her tone of voice, her voice became softer and happier, and what she said gave the astronaut hope again. How SPACE THEA should actually sound and what she should say is the subject of a research project that will start in spring 2020 at the School of Business FHNW. Under the supervision of Prof. Dr. Oliver Bendel, a student is developing a voicebot that shows empathy towards an astronaut. The scenario is a proposal that can also be rejected; perhaps in these times it is more important to have a virtual assistant for crises and catastrophes, in case one is in isolation or quarantine. Either way, the project in the fields of social robotics and machine ethics is entitled “THE EMPATHIC ASSISTANT IN SPACE (SPACE THEA)”. The results – including the prototype – will be available by the end of 2020.
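A minimal sketch of the empathy mechanism described above might map a detected mood onto SSML prosody settings. The mood detection is stubbed out, and the word list, the threshold logic and all parameter values are invented; they are not part of the project.

```python
# Minimal sketch: a detected mood is mapped onto SSML prosody settings.
# The word list and all parameter values are invented for illustration.

NEGATIVE_WORDS = {"lonely", "tired", "afraid", "homesick"}

def detect_mood(utterance: str) -> str:
    # Placeholder for real emotion recognition on the astronaut's speech.
    return "distressed" if NEGATIVE_WORDS & set(utterance.lower().split()) else "neutral"

def to_ssml(text: str, mood: str) -> str:
    # A slower, softer voice for a distressed astronaut.
    if mood == "distressed":
        return f'<speak><prosody rate="90%" pitch="+2st" volume="soft">{text}</prosody></speak>'
    return f"<speak>{text}</speak>"

mood = detect_mood("I feel so lonely out here")
print(to_ssml("You are not alone. I am here with you.", mood))
```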

Moral and Immoral Machines

Since 2012, Oliver Bendel has invented 13 artifacts of machine ethics. Nine of them have actually been implemented, including LADYBIRD, the animal-friendly vacuum cleaning robot, and LIEBOT, the chatbot that can systematically lie. Both of them have achieved a certain popularity. The information and machine ethicist is convinced that ethics does not necessarily have to produce the good. It should explore the good and the evil and, like any science, serve to gain knowledge. Accordingly, he builds both moral and immoral machines. But the immoral ones he keeps in his laboratory. In 2020, if the project is accepted, HUGGIE will see the light of day. The project idea is to create a social robot that contributes directly to a good life and economic success by touching and hugging people and especially customers. HUGGIE should be able to warm up in some places, and it should be possible to change the materials it is covered with. A research question will be: What are the possibilities besides warmth and softness? Are optical stimuli (also on displays), vibrations, noises, voices etc. important for a successful hug? All moral and immoral machines that have been created between 2012 and 2020 are compiled in a new illustration, which is shown here for the first time.
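The research questions about warmth, softness and additional stimuli could be captured in a simple configuration structure for experiments. The following sketch is purely illustrative; the class, its fields and the values are not HUGGIE project code.

```python
# Illustrative configuration of the stimulus parameters under discussion.
# The class, fields and values are hypothetical, not HUGGIE project code.
from dataclasses import dataclass
from typing import Optional

@dataclass
class HugConfig:
    surface_temperature_c: float = 37.0       # body-like warmth
    softness: float = 0.8                     # 0 = rigid, 1 = very soft
    vibration_hz: float = 0.0                 # optional haptic pulse
    sound: Optional[str] = None               # e.g. a heartbeat recording
    display_expression: Optional[str] = None  # optional optical stimulus

# Two conditions an experiment might compare.
baseline = HugConfig()
enriched = HugConfig(vibration_hz=1.2, sound="heartbeat.wav")
print(baseline, enriched, sep="\n")
```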

HTML, SSML, AIML – and MOML?

On behalf of Prof. Dr. Oliver Bendel, a student at the School of Business FHNW, Alessandro Spadola, investigated in the context of machine ethics whether markup languages such as HTML, SSML and AIML can be used to transfer moral aspects to machines or websites, and whether there is room for a new language that could be called Morality Markup Language (MOML). He presented his results in January 2020. From the management summary: “However, the idea that owners should be able to transmit their own personal morality has been explored by Bendel, who has proposed an open way of transferring morality to machines using a markup language. This research paper analyses whether a new markup language could be used to imbue machines with their owners’ sense of morality. This work begins with an analysis how a markup language is structured, describes the current well-known markup languages and analyses their differences. In doing so, it reveals that the main difference between the well-known markup languages lies in the different goals they pursue which at the same time forms the subject, which is marked up. This thesis then examines the possibility of transferring personal morality with the current languages available and discusses whether there is a need for a further language for this purpose. As is shown, morality can only be transmitted with increased effort and the knowledge of human perception because it is only possible to transmit them by interacting with the senses of the people. The answer to the question of whether there is room for another markup language is ‘yes’, since none of the languages analysed offer a simple way to transmit morality, and simplicity is a key factor in markup languages. Markup languages all have clear goals, but none have the goal of transferring and displaying morality. The language that could assume this task is ‘Morality Markup’, and the present work describes how such a language might look.” (Management Summary) The promising work is to be continued in the course of the year by another student in a bachelor thesis.
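The point about differing goals and subjects can be seen in three genuine one-line examples, held here in Python strings: HTML marks up document structure, SSML marks up how speech sounds, and AIML marks up dialog rules.

```python
# Three genuine one-liners showing that each established markup language
# marks up a different subject, as the management summary notes.

html = "<p>Hello, <em>world</em>!</p>"                                # HTML: document structure
ssml = '<speak><prosody rate="slow">Hello, world!</prosody></speak>'  # SSML: how speech sounds
aiml = "<category><pattern>HELLO</pattern><template>Hello, world!</template></category>"  # AIML: a dialog rule

# None of these vocabularies has an element for moral permissions or
# prohibitions; that is the gap a Morality Markup Language would fill.
```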

Another Animal-friendly Machine

Between June 2019 and January 2020 the project HAPPY HEDGEHOG (HHH) was implemented at the School of Business FHNW. The initiator and client was Oliver Bendel. In the context of machine ethics, the students Emanuel Graf, Kevin Bollier, Michel Beugger and Vay Lien Chang developed the prototype of a lawnmower robot that stops working as soon as it detects a hedgehog. HHH has a thermal imaging camera. If it encounters a warm object, it examines it further using image recognition. At night a lamp mounted on top helps. After training with hundreds of photos, HHH can identify a hedgehog quite accurately. Firstly, another moral machine has been created in the laboratory, and secondly, the team provides a possible solution to a problem that frequently occurs in practice: commercial lawnmower robots often kill baby hedgehogs in the dark. HAPPY HEDGEHOG could help save them. The video on youtu.be/ijIQ8lBygME shows it without casing; a photo with casing can be found here. The robot is in the tradition of LADYBIRD, another animal-friendly machine.
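The detection logic can be summarized in a short control-loop sketch. The threshold, the interfaces and all names below are illustrative placeholders, not the team's actual code.

```python
# Control-loop sketch of the detection logic described above: the thermal
# camera flags warm objects, image recognition confirms hedgehogs, and
# the mower stops. Threshold and interfaces are illustrative placeholders.

THERMAL_THRESHOLD_C = 30.0  # a warm body stands out against a cool lawn at night

def warm_object_present(max_temp_c: float) -> bool:
    # Step 1: the thermal imaging camera flags warm objects.
    return max_temp_c > THERMAL_THRESHOLD_C

def mow_step(max_temp_c: float, looks_like_hedgehog, mower) -> None:
    # looks_like_hedgehog: the image recognizer trained on hundreds of photos.
    if warm_object_present(max_temp_c):
        mower.lamp_on()           # the mounted lamp helps recognition at night
        if looks_like_hedgehog():
            mower.stop()          # moral rule: never endanger a hedgehog
            return
    mower.keep_mowing()
```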

Opportunities and Risks of Facial Recognition

The book chapter “The BESTBOT Project” by Oliver Bendel, David Studer and Bradley Richards was published on 31 December 2019. It is part of the 2nd edition of the “Handbuch Maschinenethik”, edited by Oliver Bendel. From the abstract: “The young discipline of machine ethics both studies and creates moral (or immoral) machines. The BESTBOT is a chatbot that recognizes problems and conditions of the user with the help of text analysis and facial recognition and reacts morally to them. It can be seen as a moral machine with some immoral implications. The BESTBOT has two direct predecessor projects, the GOODBOT and the LIEBOT. Both had room for improvement and advancement; thus, the BESTBOT project used their findings as a basis for its development and realization. Text analysis and facial recognition in combination with emotion recognition have proven to be powerful tools for problem identification and are part of the new prototype. The BESTBOT enriches machine ethics as a discipline and can solve problems in practice. At the same time, with new solutions of this kind come new problems, especially with regard to privacy and informational autonomy, which information ethics must deal with.” (Abstract) The book chapter can be downloaded from link.springer.com/referenceworkentry/10.1007/978-3-658-17484-2_32-1.
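A rough sketch of the two-channel problem identification the abstract mentions: a signal from text analysis is combined with a signal from facial emotion recognition. Both recognizers are stubbed here, and the phrases, emotion labels and the escalation rule are invented.

```python
# Sketch of the two-channel problem identification: text analysis plus
# facial emotion recognition. The phrases, emotion labels and the
# escalation rule are invented for illustration.

CRITICAL_PHRASES = {"i want to give up", "i can't go on"}

def text_signals_problem(message: str) -> bool:
    return any(phrase in message.lower() for phrase in CRITICAL_PHRASES)

def react(message: str, facial_emotion: str) -> str:
    # facial_emotion would come from the facial/emotion recognition component.
    if text_signals_problem(message) or facial_emotion in {"sad", "fearful"}:
        return "You do not seem to be doing well. Should I show you an emergency number?"
    return "Glad to hear it. How can I help you?"

print(react("I can't go on like this", facial_emotion="sad"))
```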

Moral Machines and Ethical Correctness

The demand for autonomous vehicles is increasing globally. But their market penetration will have to wait. There is worldwide disagreement about the correct “behaviour” of the cars, as a project supported by MIT shows: moralmachine.mit.edu is an online platform where people all over the world are presented with various scenarios involving autonomous driving (AD). For each case, they are asked how, in their opinion, the car should correctly react. The results show many differences between cultures, but even within single societies there are groups with mutually contradictory points of view. This variance confronts manufacturers with the problem of how the car should be programmed: uniformly for all, individually for each society, and if the latter, what about the groups with a different ethical understanding? What was globally similar to a certain degree was the opinion that the car should react in a utilitarian way (causing the least misery in the event of an accident). This is interesting, as one would hardly use such a car knowing that it could react in a way that harms one’s own safety. In the utilitarian case this could happen, creating a social dilemma: if manufacturers adjusted the car to this broadly shared opinion, they would run the risk of selling no cars, or at least not enough. Prof. Dr. Oliver Bendel has published various articles about the issues raised by AD and ethics (see here). From his point of view, human road users should be neither quantified nor qualified. He also advocates moralizing the vehicles with regard to animals such as hedgehogs and toads. But in general, the experts too are at odds. So how should car manufacturers deal with all of these different interests and aspects? How should society in general deal with the ethical questions arising from the progress of technology? Globally uniform, or individually for each country? The question of when autonomous cars will enter the mass market accordingly depends not on the progress of technology, but rather on the ability to reduce 6 billion people to a common denominator.
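The utilitarian rule the respondents favoured can be stated in a few lines: among the available maneuvers, choose the one with the least total harm. The scenario and the harm scores below are invented; assigning such scores is exactly the contested part.

```python
# Minimal illustration of the utilitarian rule ("cause the least misery"):
# choose the maneuver with the lowest total harm. Scenario and scores are
# invented; deciding on such scores is precisely what is disputed.

maneuvers = {
    "stay_in_lane": {"pedestrians_harmed": 2, "passengers_harmed": 0},
    "swerve_left":  {"pedestrians_harmed": 0, "passengers_harmed": 1},
}

def total_harm(outcome: dict) -> int:
    return sum(outcome.values())

best = min(maneuvers, key=lambda m: total_harm(maneuvers[m]))
print(best)  # swerve_left: it harms the passenger, hence the social dilemma
```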