A Morality Markup Language

There are several markup languages for different applications. The best known is certainly the Hypertext Markup Language (HTML). AIML has established itself in the field of Artificial Intelligence (AI), and SSML is used for synthetic voices. The question is whether the possibilities of such languages with regard to autonomous systems have been exhausted. In the article “The Morality Menu”, Prof. Dr. Oliver Bendel proposed a Morality Markup Language (MOML) for the first time. In 2019, a student research project supervised by the information and machine ethicist investigated the possibilities of existing languages with regard to moral aspects and whether a MOML is justified. The results were presented in January 2020. A bachelor thesis at the School of Business FHNW, starting at the end of March 2020, will go one step further: in it, the basic features of a Morality Markup Language are to be developed, and its basic structure and specific commands proposed and described. The application areas, advantages and disadvantages of such a markup language are also to be presented. The client of the work is Prof. Dr. Oliver Bendel; the supervisor is Dr. Elzbieta Pustulka.
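What such a language might look like is, of course, exactly what the thesis is meant to work out. A purely hypothetical sketch, assuming an XML-like syntax in the tradition of AIML and SSML, with invented elements and attributes, could read as follows:

```xml
<!-- Hypothetical MOML fragment: all element and attribute names are invented -->
<moml version="0.1">
  <agent type="vacuum-robot">
    <rule id="spare-animals" priority="high">
      <condition>animal-detected</condition>
      <action>pause-and-avoid</action>
    </rule>
    <rule id="finish-cleaning" priority="low">
      <condition>room-not-clean</condition>
      <action>continue</action>
    </rule>
  </agent>
</moml>
```

In this sketch, prioritized rules let an operator adjust the moral behavior of a machine declaratively, without touching its program code.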

Stanford University Must Stay in Bed

Stanford University announced that it would cancel in-person classes for the final two weeks of the winter quarter in response to the expanding outbreak of COVID-19. Even before that, the university had taken aim at larger events. These included the AAAI Spring Symposium Series, a legendary conference series on artificial intelligence, which in recent years has also had a major impact on machine ethics and robot ethics or roboethics. The AAAI organization announced by email: “It is with regret that we must notify you of the cancellation of the physical meeting of the AAAI Spring Symposium at Stanford, March 23-25, due to the current situation surrounding the COVID-19 outbreak. Stanford has issued the following letter at news.stanford.edu/2020/03/03/message-campus-community-covid-19/, which strongly discourages and likely results in cancellation of any meeting with more than 150 participants.” What will happen to the papers and talks is still unclear. Possibly they will become part of the AAAI Fall Symposium in Washington. The symposium “Applied AI in Healthcare: Safety, Community, and the Environment”, one of eight events, had to be cancelled as well – among other things, innovative approaches and technologies that are also relevant for crises and disasters such as COVID-19 would have been discussed there.

Moral and Immoral Machines

Since 2012, Oliver Bendel has invented 13 artifacts of machine ethics. Nine of them have actually been implemented, including LADYBIRD, the animal-friendly vacuum cleaning robot, and LIEBOT, the chatbot that can systematically lie. Both have achieved a certain popularity. The information and machine ethicist is convinced that ethics does not necessarily have to produce the good. It should explore the good and the evil and, like any science, serve to gain knowledge. Accordingly, he builds both moral and immoral machines. The immoral ones, however, he keeps in his laboratory. In 2020, if the project is accepted, HUGGIE will see the light of day. The project idea is to create a social robot that contributes directly to a good life and economic success by touching and hugging people, especially customers. HUGGIE should be able to warm up certain parts of its body, and it should be possible to change the materials it is covered with. A research question will be: What are the possibilities besides warmth and softness? Are optical stimuli (also on displays), vibrations, noises, voices etc. important for a successful hug? All moral and immoral machines created between 2012 and 2020 are compiled in a new illustration, which is shown here for the first time.

Care Robots with Sexual Assistance Functions

The paper “Care Robots with Sexual Assistance Functions” by Oliver Bendel was accepted at the AAAI 2020 Spring Symposia. From the abstract: “Residents in retirement and nursing homes have sexual needs just like other people. However, the semi-public situation makes it difficult for them to satisfy these existential concerns. In addition, they may not be able to meet a suitable partner or find it difficult to have a relationship for mental or physical reasons. People who live or are cared for at home can also be affected by this problem. Perhaps they can host someone more easily and discreetly than the residents of a health facility, but some elderly and disabled people may be restricted in some ways. This article examines the opportunities and risks that arise with regard to care robots with sexual assistance functions. First of all, it deals with sexual well-being. Then it presents robotic systems ranging from sex robots to care robots. Finally, the focus is on care robots, with the author exploring technical and design issues. A brief ethical discussion completes the article. The result is that care robots with sexual assistance functions could be an enrichment of the everyday life of people in need of care, but that we also have to consider some technical, design and moral aspects.” The paper had been submitted to the symposium “Applied AI in Healthcare: Safety, Community, and the Environment”. Oliver Bendel will present the paper at Stanford University between 23 and 25 March 2020.

Towards a Human-like Chatbot

Google is currently working on Meena, a chatbot that should be able to hold arbitrary conversations and be used in many contexts. In their paper “Towards a Human-like Open-Domain Chatbot”, the developers present the 2.6-billion-parameter end-to-end trained neural conversational model. They show that Meena “can conduct conversations that are more sensible and specific than existing state-of-the-art chatbots”. “Such improvements are reflected through a new human evaluation metric that we propose for open-domain chatbots, called Sensibleness and Specificity Average (SSA), which captures basic, but important attributes for human conversation. Remarkably, we demonstrate that perplexity, an automatic metric that is readily available to any neural conversational models, highly correlates with SSA.” (Google AI Blog) The company draws a comparison with OpenAI’s GPT-2, a model used in “Talk to Transformer” and Harmony, among others, which has 1.5 billion parameters and is based on the text content of 8 million web pages.
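Perplexity, the automatic metric mentioned in the quote, is simply the exponential of the average negative log-likelihood a model assigns to each token of a text. A minimal sketch in Python (function name and example values are illustrative, not taken from the paper):

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp(average negative log-likelihood per token).

    Lower perplexity means the model is less 'surprised' by the text.
    """
    avg_nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_nll)

# A model that assigns every token a probability of 0.25 has a
# perplexity of 4, as if it were choosing among 4 equally likely tokens.
uniform_log_probs = [math.log(0.25)] * 10
print(perplexity(uniform_log_probs))  # ≈ 4.0
```

The finding that this cheap, automatic number correlates strongly with the human-judged SSA score is what makes it attractive for comparing open-domain chatbots.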

Towards Self-replicating Machines

In recent decades, there have been several attempts to supplement traditional electronic storage media. 3D codes with color as the third dimension are an interesting approach. They can be applied to paper or film, for example. Another approach has now been presented by researchers from Switzerland and Israel. They are able to generate artificial DNA and place it in any object. From the abstract: “We devised a ‘DNA-of-things’ (DoT) storage architecture to produce materials with immutable memory. In a DoT framework, DNA molecules record the data, and these molecules are then encapsulated in nanometer silica beads, which are fused into various materials that are used to print or cast objects in any shape. First, we applied DoT to three-dimensionally print a Stanford Bunny that contained a 45 kB digital DNA blueprint for its synthesis. We synthesized five generations of the bunny, each from the memory of the previous generation without additional DNA synthesis or degradation of information. … DoT could be applied to store electronic health records in medical implants, to hide data in everyday objects (steganography) and to manufacture objects containing their own blueprint. It may also facilitate the development of self-replicating machines.” The approach could also be interesting for robots. They could, for example, reproduce themselves on Mars. The article with the title “A DNA-of-things storage architecture to create materials with embedded memory” has been published in NATURE BIOTECHNOLOGY and can be accessed via www.nature.com/articles/s41587-019-0356-z.epdf.