A Markup Language for Moral Machines

A markup language is a machine-readable language for structuring and formatting texts and other data. The best known is the Hypertext Markup Language (HTML). Other well-known examples are SSML (for the adaptation of synthetic voices) and AIML (for artificial intelligence applications). We use markup languages to describe properties, affiliations, and forms of representation of sections of a text or set of data, usually by marking them with tags. In addition to tags, attributes and values can also be important. A student paper at the School of Business FHNW will describe and compare known markup languages and examine whether there is room for further languages of this kind. A markup language suited to marking up the moral dimension of written and spoken language, as well as the morally adequate display of pictures, videos, and animations and the playing of sounds, could be called MOML (Morality Markup Language). Is such a language possible and helpful? Can it be used for moral machines? The paper will address these questions as well. The supervisor of the project, which will run until the end of the year, is Prof. Dr. Oliver Bendel. Since 2012, he and his teams have created formulas and annotated decision trees for moral machines, as well as a number of moral machines themselves, such as GOODBOT, LIEBOT, BESTBOT, and LADYBIRD.
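Since MOML does not yet exist, one can only speculate about its shape. The following is a purely hypothetical sketch of what such markup might look like and how a machine could read it, here using Python's standard XML library; all tag and attribute names are invented for illustration.

```python
# Hypothetical sketch of MOML-style markup. MOML does not exist yet;
# every tag and attribute name below is invented for illustration.
import xml.etree.ElementTree as ET

moml_document = """
<moml version="0.1">
  <statement morality="sensitive" topic="health">
    You should see a doctor about these symptoms.
  </statement>
  <image src="accident.jpg" display="blurred" reason="graphic-content"/>
  <sound src="alarm.wav" play="ask-user"/>
</moml>
"""

root = ET.fromstring(moml_document)

# A moral machine could read such annotations and adapt its behavior,
# e.g. softening sensitive statements or withholding graphic images.
for element in root:
    print(element.tag, element.attrib)
```

The sketch only shows the general mechanism the post describes: tags mark sections of content, while attributes and values tell the rendering machine how to treat them morally.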

The Future of Autonomous Driving

Driving in cities is a very complex matter, for several reasons: you have to judge hundreds of objects and events at all times, you have to communicate with people, and you should be able to change decisions spontaneously, for example because you remember that you have to buy something. That is a bad prospect for an autonomous car. Of course it can use some tricks: it can drive very slowly, and it can use virtual tracks or special lanes, signals, and sounds. A bus or shuttle can rely on such tricks, but a car hardly can. Autonomous individual transport in cities will only be possible if the cities are redesigned. This was done a few decades ago, and it was not a good idea at all. So we should not let autonomous cars drive in cities, but let them drive on highways. Should autonomous cars make moral decisions about the lives and deaths of pedestrians and cyclists? They had better not. Moral machines are a valuable innovation in certain contexts, but not in city traffic. Pedestrians and cyclists rarely get onto the highway. There are many reasons why we should allow autonomous cars only there.

The System that Detects Fear

Amazon Rekognition is a well-known software for facial recognition, including emotion detection. It is used in the BESTBOT, a moral machine that hides an immoral machine. The immoral element is caused precisely by the facial recognition, which endangers the privacy of users and their informational autonomy. The project is intended not least to draw attention to this risk. Amazon announced on 12 August 2019 that it had improved and expanded its system: “Today, we are launching accuracy and functionality improvements to our face analysis features. Face analysis generates metadata about detected faces in the form of gender, age range, emotions, attributes such as ‘Smile’, face pose, face image quality and face landmarks. With this release, we have further improved the accuracy of gender identification. In addition, we have improved accuracy for emotion detection (for all 7 emotions: ‘Happy’, ‘Sad’, ‘Angry’, ‘Surprised’, ‘Disgusted’, ‘Calm’ and ‘Confused’) and added a new emotion: ‘Fear’.” (Amazon, 12 August 2019) Because the BESTBOT also accesses other systems such as MS Face API and Kairos, it can already recognize fear. The change at Amazon therefore means no change for this artifact of machine ethics.
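For readers who want to see what the quoted face analysis looks like in practice, here is a minimal sketch using the boto3 client for Rekognition. It assumes configured AWS credentials and a local image file named face.jpg (both assumptions made for this illustration) and simply prints the detected emotions, including the newly added ‘Fear’.

```python
# Minimal sketch of emotion detection with Amazon Rekognition via boto3.
# Assumes configured AWS credentials and a local image 'face.jpg'
# (both are assumptions made for this illustration).
import boto3

client = boto3.client("rekognition")

with open("face.jpg", "rb") as f:
    response = client.detect_faces(
        Image={"Bytes": f.read()},
        Attributes=["ALL"],  # include emotions, age range, and other attributes
    )

for face in response["FaceDetails"]:
    for emotion in face["Emotions"]:
        # Each detected emotion comes with a confidence score;
        # 'FEAR' is the emotion added in the August 2019 release.
        print(f"{emotion['Type']}: {emotion['Confidence']:.1f}%")
```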

Conversational Agents: Acting on the Wave of Research and Development

The papers of the CHI 2019 workshop “Conversational Agents: Acting on the Wave of Research and Development” (Glasgow, 5 May 2019) are now listed on convagents.org. The extended abstract by Oliver Bendel (School of Business FHNW) entitled “Chatbots as Moral and Immoral Machines” can be downloaded here. The workshop brought together experts from all over the world who are working on the foundations of chatbots and voicebots and are implementing them in different ways. Companies such as Microsoft, Mozilla, and Salesforce were also present. Approximately 40 extended abstracts were submitted. On 6 May, a bagpipe player opened the four-day conference, which followed the 35 workshops. Dr. Aleks Krotoski of Pillowfort Productions gave the first keynote. One of the paper sessions that morning was dedicated to the topic “Values and Design”. All in all, both classical fields of applied ethics and the young discipline of machine ethics were represented at the conference. More information via chi2019.acm.org.

Ethical and Statistical Considerations in Models of Moral Judgments

Torty Sivill works at the Computer Science Department, University of Bristol. In August 2019 she published the article “Ethical and Statistical Considerations in Models of Moral Judgments”. “This work extends recent advancements in computational models of moral decision making by using mathematical and philosophical theory to suggest adaptations to state of the art. It demonstrates the importance of model assumptions and considers alternatives to the normal distribution when modeling ethical principles. We show how the ethical theories, utilitarianism and deontology can be embedded into informative prior distributions. We continue to expand the state of the art to consider ethical dilemmas beyond the Trolley Problem and show the adaptations needed to address this complexity. The adaptations made in this work are not solely intended to improve recent models but aim to raise awareness of the importance of interpreting results relative to assumptions made, either implicitly or explicitly, in model construction.” (Abstract) The article can be accessed via https://www.frontiersin.org/articles/10.3389/frobt.2019.00039/full.
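The abstract's idea of embedding ethical theories into informative prior distributions can be illustrated with a small sketch. The following is not the paper's actual model but a toy Bayesian example under invented assumptions: we model the probability that the utilitarian option in a trolley-style dilemma is judged acceptable, encode each theory as a Beta prior, and update on hypothetical data.

```python
# Illustrative sketch (not Sivill's actual model) of encoding ethical
# theories as informative priors over moral judgments. We model
# p = probability that the utilitarian option in a dilemma is judged
# acceptable, with conjugate Beta priors; all parameter values and the
# data are invented for illustration.
from scipy import stats

priors = {
    "utilitarian": stats.beta(8, 2),    # mass near 1: sacrifice one to save five
    "deontological": stats.beta(2, 8),  # mass near 0: never use a person as a means
    "uninformative": stats.beta(1, 1),  # flat prior, no theory assumed
}

# Hypothetical data: 7 of 10 respondents accepted the utilitarian option.
accept, reject = 7, 3

for name, prior in priors.items():
    a, b = prior.args
    posterior = stats.beta(a + accept, b + reject)  # conjugate Beta update
    print(f"{name:>14}: prior mean {prior.mean():.2f} -> "
          f"posterior mean {posterior.mean():.2f}")
```

The toy example makes the abstract's point concrete: the same data yield different posterior judgments depending on the prior, so results must be interpreted relative to the assumptions built into the model.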

The Relationship between Artificial Intelligence and Machine Ethics

Artificial intelligence takes human or animal intelligence as its reference and attempts to replicate it in certain respects. It can also deliberately deviate from human or animal intelligence, for example by having its systems solve problems in a different way. Machine ethics is dedicated to machine morality, both producing and investigating it. Whether one likes the concepts and methods of machine ethics or not, one must acknowledge that novel autonomous machines are emerging that appear, in a certain sense, more complete than earlier ones. It is almost surprising that artificial morality did not join artificial intelligence much earlier. Machines that simulate human intelligence and human morality for manageable areas of application, in particular, seem to be a good idea. But what if a superintelligence with a supermorality forms a new species superior to ours? That is science fiction, of course, but also something that some scientists want to achieve. Basically, it is important to clarify the terms and explain their connections. This is done in a graphic that was published in July 2019 on informationsethik.net and is linked here.