Ethical and Statistical Considerations in Models of Moral Judgments

Torty Sivill works in the Computer Science Department at the University of Bristol. In August 2019 she published the article “Ethical and Statistical Considerations in Models of Moral Judgments”. “This work extends recent advancements in computational models of moral decision making by using mathematical and philosophical theory to suggest adaptations to state of the art. It demonstrates the importance of model assumptions and considers alternatives to the normal distribution when modeling ethical principles. We show how the ethical theories, utilitarianism and deontology can be embedded into informative prior distributions. We continue to expand the state of the art to consider ethical dilemmas beyond the Trolley Problem and show the adaptations needed to address this complexity. The adaptations made in this work are not solely intended to improve recent models but aim to raise awareness of the importance of interpreting results relative to assumptions made, either implicitly or explicitly, in model construction.” (Abstract) The article can be accessed via https://www.frontiersin.org/articles/10.3389/frobt.2019.00039/full.
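The idea of embedding ethical theories into informative priors can be illustrated with a small example. The following is a minimal sketch, not taken from the paper, assuming a Bayesian model in which p is the probability that a respondent chooses the utilitarian option in a trolley-style dilemma; the Beta hyperparameters and the observed counts are invented for illustration.

```python
from scipy import stats

# Illustrative hyperparameters only: a "utilitarian" prior puts most mass
# near p = 1 (sacrifice one to save five), a "deontological" prior near
# p = 0 (never treat a person merely as a means).
priors = {
    "utilitarian": stats.beta(8, 2),
    "deontological": stats.beta(2, 8),
    "uninformative": stats.beta(1, 1),  # uniform: no theory imposed
}

# Conjugate Beta-Binomial update with hypothetical data:
# 6 utilitarian choices observed in 10 responses.
k, n = 6, 10
for name, prior in priors.items():
    a, b = prior.args
    posterior = stats.beta(a + k, b + (n - k))
    print(f"{name:>13}: posterior mean of p = {posterior.mean():.2f}")
```

The same data yield different posteriors under different priors, which mirrors the paper’s central point: results must be interpreted relative to the assumptions built into the model.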

The Relationship between Artificial Intelligence and Machine Ethics

Artificial intelligence takes human or animal intelligence as a reference and attempts to replicate it in certain respects. It can also deliberately deviate from human or animal intelligence, for example by having its systems solve problems in a different way. Machine ethics is dedicated to machine morality, producing and investigating it. Whether one likes the concepts and methods of machine ethics or not, one must acknowledge that novel autonomous machines are emerging that appear, in a certain sense, more complete than earlier ones. It is almost surprising that artificial morality did not join artificial intelligence much earlier. Machines that simulate human intelligence and human morality for manageable areas of application, in particular, seem to be a good idea. But what if a superintelligence with a supermorality forms a new species superior to ours? That is science fiction, of course. But it is also something that some scientists want to achieve. Basically, it is important to clarify the terms and explain their connections. This is done in a graphic published in July 2019 on informationsethik.net and linked here.

Development of a Morality Menu

Machine ethics produces moral and immoral machines. Their morality is usually fixed, e.g. by programmed meta-rules and rules; the machine is thus capable of certain actions and not of others. Another approach is the morality menu (MOME for short), with which the owner or user transfers his or her own morality onto the machine: the machine then behaves, in detail, as he or she would behave. Together with his teams, Prof. Dr. Oliver Bendel developed several artifacts of machine ethics at his university from 2013 to 2018. For one of them he designed a morality menu that has not yet been implemented. Another concept exists for a virtual assistant that can make reservations and orders for its owner more or less independently. In the article “The Morality Menu” the author introduces the idea of the morality menu in the context of these two concrete machines, then discusses advantages and disadvantages and presents possibilities for improvement. A morality menu can be a valuable extension for certain moral machines. You can download the article here. In 2019, a morality menu for a virtual machine will be developed at the School of Business FHNW.
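The published concept contains no code, but the basic mechanism can be sketched. The following is a hypothetical illustration, assuming the menu is a set of user-toggled rules that the machine consults before acting; the rule names are invented and not taken from Bendel’s concept.

```python
from dataclasses import dataclass, field

@dataclass
class MoralityMenu:
    # Hypothetical rules the owner can toggle; names are illustrative only.
    rules: dict = field(default_factory=lambda: {
        "may_lie_to_protect_owner": False,
        "may_make_purchases_autonomously": False,
        "must_disclose_being_a_machine": True,
    })

    def set_rule(self, name: str, value: bool) -> None:
        if name not in self.rules:
            raise KeyError(f"Unknown rule: {name}")
        self.rules[name] = value

    def permits(self, name: str) -> bool:
        return self.rules.get(name, False)

# A virtual assistant would check the menu before acting on its own.
menu = MoralityMenu()
menu.set_rule("may_make_purchases_autonomously", True)
if menu.permits("may_make_purchases_autonomously"):
    print("Assistant places the order without asking back.")
```

The design choice worth noting is that the machine itself stays generic; only the rule settings, i.e. the morality transferred by the owner, change its behavior.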

Deceptive Machines

“AI has definitively beaten humans at another of our favorite games. A poker bot, designed by researchers from Facebook’s AI lab and Carnegie Mellon University, has bested some of the world’s top players …” (The Verge, 11 July 2019) According to the magazine, Pluribus was remarkably good at bluffing its opponents. The Wall Street Journal reported: “A new artificial intelligence program is so advanced at a key human skill – deception – that it wiped out five human poker players with one lousy hand.” (Wall Street Journal, 11 July 2019) Of course, bluffing need not be equated with cheating, but in this context interesting scientific questions arise. At the conference “Machine Ethics and Machine Law” in 2016 in Krakow, Ronald C. Arkin, Oliver Bendel, Jaap Hage, and Mojca Plesnicar discussed the question “Should we develop robots that deceive?” on a panel. Ron Arkin (who works in military research) and Oliver Bendel (who does not) both concluded that we should, but for very different reasons. The ethicist from Zurich, inventor of the LIEBOT, advocates free, independent research in which problematic and deceptive machines are also developed, for the sake of the knowledge to be gained, but argues for regulating the areas of application (for example dating portals or military operations). Further information about Pluribus can be found in the paper itself, entitled “Superhuman AI for multiplayer poker”.

Artifacts of Machine Ethics

More and more autonomous and semi-autonomous machines such as intelligent software agents, specific robots, specific drones and self-driving cars make decisions that have moral implications. Machine ethics as a discipline examines the possibilities and limits of moral and immoral machines. It does not only reflect on ideas but develops artifacts such as simulations and prototypes. In his talk at the University of Potsdam on 23 June 2019 (“Fundamentals and Artifacts of Machine Ethics”), Prof. Dr. Oliver Bendel outlined the fundamentals of machine ethics and presented selected artifacts of moral and immoral machines. Furthermore, he discussed a project which will be completed by the end of 2019. The GOODBOT (2013) is a chatbot that responds in a morally adequate way to problems of its users. The LIEBOT (2016) can lie systematically, using seven different strategies. LADYBIRD (2017) is an animal-friendly robot vacuum cleaner that spares ladybirds and other insects. The BESTBOT (2018) is a chatbot that recognizes certain problems and conditions of its users with the help of text analysis and facial recognition and reacts morally to them. 2019 is the year of the E-MOMA, a machine that should be able to improve its morality on its own.
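The chatbots among these artifacts share a recognizable pattern: scan the user’s input for signs of a problem, pick the highest matching escalation level, and react morally. The following is a schematic sketch of that pattern only; the keywords, levels and responses are invented for illustration and are not the actual rules of the GOODBOT or BESTBOT.

```python
# Schematic escalation pattern: (trigger keywords, level, response).
# All entries are invented for illustration.
ESCALATION_RULES = [
    (("sad", "lonely"), 1, "I am sorry to hear that. Do you want to talk about it?"),
    (("hopeless", "desperate"), 2, "That sounds serious. Is there someone you trust nearby?"),
    (("hurt myself",), 3, "Please contact an emergency helpline right away."),
]

def respond(user_input: str) -> str:
    text = user_input.lower()
    # Pick the highest escalation level whose keywords appear in the input.
    best = max(
        (rule for rule in ESCALATION_RULES if any(k in text for k in rule[0])),
        key=lambda rule: rule[1],
        default=None,
    )
    return best[2] if best else "Tell me more."

print(respond("I feel so hopeless lately"))
```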

Happy Hedgehog

Between June 2019 and January 2020, the sixth artifact of machine ethics will be created at the FHNW School of Business. Prof. Dr. Oliver Bendel is the initiator, the client and – together with a colleague – the supervisor of the project. Animal-machine interaction is concerned with the design, evaluation and implementation of (usually more sophisticated or complex) machines and computer systems with which animals interact and communicate and which interact and communicate with animals. Machine ethics has so far mainly referred to humans, but it can also be useful for animals. It attempts to conceive moral machines and to implement them with the help of further disciplines such as computer science and AI or robotics. The aim of the project is the detailed description and prototypical implementation of an animal-friendly service robot, more precisely a mowing robot called HAPPY HEDGEHOG (HHH). With the help of sensors and moral rules, the robot should be able to recognize hedgehogs (especially young animals) and initiate appropriate measures (interrupting its work, driving the hedgehog away, informing the owner). The project has similarities with an earlier project, LADYBIRD. This time, however, more emphasis will be placed on existing equipment, platforms and software. The first artifact at the university was the GOODBOT, in 2013.
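The described behavior (recognize a hedgehog, interrupt work, drive the animal away, inform the owner) can be sketched as a simple control loop. This is a minimal sketch under stated assumptions: the detection, deterrent and notification functions are stubs, and all names and the check interval are hypothetical rather than taken from the project.

```python
import time

def hedgehog_detected() -> bool:
    """Stub for sensor fusion, e.g. thermal imaging plus image recognition."""
    return False  # replace with real detection logic

def deter_hedgehog() -> None:
    """Stub for a gentle deterrent, e.g. sound or light."""
    print("emitting deterrent signal...")

def notify_owner(message: str) -> None:
    print(f"[notification] {message}")

def mowing_loop(check_interval: float = 1.0) -> None:
    """Runs indefinitely; each cycle checks for hedgehogs before mowing."""
    while True:
        if hedgehog_detected():
            # Moral rule: never mow while a hedgehog is in the working area.
            notify_owner("Hedgehog detected: work interrupted.")
            deter_hedgehog()
            while hedgehog_detected():
                time.sleep(check_interval)  # wait until the animal has left
            notify_owner("Area clear: resuming work.")
        print("mowing...")
        time.sleep(check_interval)
```

In a real robot the detection stub would be replaced by actual sensor processing; the loop structure merely illustrates how a moral rule can take priority over the machine’s primary task.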