Deep fakes are a young phenomenon. Fake videos, of course, have existed for a long time. What is new is that artificial intelligence makes their production possible, even with standard applications. On August 1, an article dedicated to the phenomenon was published in the German newspaper Die Welt. It begins with the following words: “It is well known that a picture is worth a thousand words. And moving images, i.e. videos, are still regarded as incontrovertible proof that something has taken place exactly as it can be seen in the film. … Powerful artificial intelligence (AI) processes now make it possible to produce such perfect counterfeits that it is no longer possible to tell with the naked eye whether a video is real or manipulated. In so-called deep fake videos, people say or do things they would never say or do.” Among others, the philosopher Oliver Bendel is quoted. The article, titled “Artificial intelligence could trigger wars”, can be downloaded via www.welt.de.
Implementing Responsible Research and Innovation for Care Robots
The article “Implementing Responsible Research and Innovation for Care Robots through BS 8611” by Bernd Carsten Stahl is part of the open access book “Pflegeroboter” (published in November 2018). From the abstract: “The concept of responsible research and innovation (RRI) has gained prominence in European research. It has been integrated into the EU’s Horizon 2020 research framework as well as a number of individual Member States’ research strategies. Elsewhere we have discussed how the idea of RRI can be applied to healthcare robots … and we have speculated what such an implementation might look like in social reality … In this paper I will explore how parallel developments reflect the reasoning in RRI. The focus of the paper will therefore be on the recently published standard on ‘Robots and robotic devices: Guide to the ethical design and application of robots and robotic systems’ … I will analyse the standard and discuss how it can be applied to care robots. The key question to be discussed is whether and to what degree this can be seen as an implementation of RRI in the area of care robotics.” By July 2019, the book and its individual chapters had been downloaded 80,000 times, which indicates lively interest in the topic. More information via www.springer.com/de/book/9783658226978.
About Basic Property
The title of one of the AAAI 2019 Spring Symposia was “Interpretable AI for Well-Being: Understanding Cognitive Bias and Social Embeddedness”. An important keyword here is “social embeddedness”. The social embeddedness of AI includes issues like “AI and future economics (such as basic income, impact of AI on GDP)” or “well-being society (such as happiness of citizen, life quality)”. In his paper “Are Robot Tax, Basic Income or Basic Property Solutions to the Social Problems of Automation?”, Oliver Bendel discusses and criticizes these approaches in the context of automation and digitization. Moreover, he develops a relatively unknown proposal, unconditional basic property, and presents its potential as well as its risks. Oliver Bendel's lecture took place on 26 March 2019 at Stanford University and led to lively discussions. It was nominated for “best presentation”. The paper has now been published as a preprint and can be downloaded here.
Development of a Morality Menu
Machine ethics produces moral and immoral machines. Their morality is usually fixed, e.g. by programmed meta-rules and rules; the machine is thus capable of certain actions and not others. Another approach is the morality menu (MOME for short), with which the owner or user transfers his or her own morality onto the machine. The machine then behaves, in detail, in the same way as he or she would behave. Together with his teams, Prof. Dr. Oliver Bendel developed several artifacts of machine ethics at his university between 2013 and 2018. For one of them, he designed a morality menu that has not yet been implemented. Another concept exists for a virtual assistant that can make reservations and orders for its owner more or less independently. In the article “The Morality Menu”, the author introduces the idea of the morality menu in the context of these two concrete machines, then discusses advantages and disadvantages and presents possibilities for improvement. A morality menu can be a valuable extension for certain moral machines. You can download the article here. In 2019, a morality menu for a robot will be developed at the School of Business FHNW.
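What such a menu could look like in software can be suggested with a minimal sketch. The following Python fragment is purely illustrative and not taken from any of the projects mentioned: the class name, the options, and the example behaviors are assumptions, showing only how user-selected moral settings might steer a machine.

    # Hypothetical sketch of a morality menu (MOME) for a virtual assistant.
    # Options and behaviors are illustrative assumptions, not a real design.
    from dataclasses import dataclass

    @dataclass
    class MoralityMenu:
        """Moral settings the owner selects; the machine mirrors them."""
        may_use_white_lies: bool = False       # polite excuses when declining
        discloses_machine_nature: bool = True  # reveal that the caller is a bot

    def open_call(menu: MoralityMenu) -> str:
        """Compose the assistant's opening line according to the menu."""
        if menu.discloses_machine_nature:
            return "Hello, I am a digital assistant calling for my owner."
        return "Hello, I am calling on behalf of a client."

    def decline_invitation(menu: MoralityMenu) -> str:
        """Decline an appointment, honestly or with a white lie."""
        if menu.may_use_white_lies:
            return "Unfortunately my owner already has an appointment that day."
        return "My owner does not wish to attend."

    # The owner transfers his or her morality onto the machine:
    menu = MoralityMenu(may_use_white_lies=True)
    print(open_call(menu))
    print(decline_invitation(menu))

In this reading, the menu itself is nothing more than a set of switches; the actual design work lies in mapping each switch to concrete machine behavior.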
Deceptive Machines
“AI has definitively beaten humans at another of our favorite games. A poker bot, designed by researchers from Facebook’s AI lab and Carnegie Mellon University, has bested some of the world’s top players …” (The Verge, 11 July 2019) According to the magazine, Pluribus was remarkably good at bluffing its opponents. The Wall Street Journal reported: “A new artificial intelligence program is so advanced at a key human skill – deception – that it wiped out five human poker players with one lousy hand.” (Wall Street Journal, 11 July 2019) Bluffing, of course, need not be equated with cheating – but interesting scientific questions arise in this context. At the conference “Machine Ethics and Machine Law” in Krakow in 2016, Ronald C. Arkin, Oliver Bendel, Jaap Hage, and Mojca Plesnicar discussed the question “Should we develop robots that deceive?” on a panel. Ron Arkin (who works in military research) and Oliver Bendel (who does not) both concluded that we should – but for very different reasons. The ethicist from Zurich, inventor of the LIEBOT, advocates free, independent research in which problematic and deceptive machines are also developed, for the sake of an important gain in knowledge – but argues for regulating the areas of application (for example, dating portals or military operations). Further information about Pluribus can be found in the paper itself, entitled “Superhuman AI for multiplayer poker”.
Artifacts of Machine Ethics
More and more autonomous and semi-autonomous machines such as intelligent software agents, specific robots, specific drones and self-driving cars make decisions that have moral implications. Machine ethics as a discipline examines the possibilities and limits of moral and immoral machines. It does not only reflect on ideas but develops artifacts like simulations and prototypes. In his talk at the University of Potsdam on 23 June 2019 (“Fundamentals and Artifacts of Machine Ethics”), Prof. Dr. Oliver Bendel outlined the fundamentals of machine ethics and presented selected artifacts of moral and immoral machines. Furthermore, he discussed a project which will be completed by the end of 2019. The GOODBOT (2013) is a chatbot that responds in a morally adequate way to problems of its users. The LIEBOT (2016) can lie systematically, using seven different strategies. LADYBIRD (2017) is an animal-friendly robot vacuum cleaner that spares ladybirds and other insects. The BESTBOT (2018) is a chatbot that recognizes certain problems and conditions of its users with the help of text analysis and facial recognition and reacts morally to them. 2019 is the year of the E-MOMA, a machine that should be able to improve its morality on its own.
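How such a morally adequate reaction can be realized in the simplest case is suggested by the sketch below. This is not the GOODBOT's or BESTBOT's actual implementation; the keywords, levels, and responses are assumptions, illustrating only a staged, rule-based escalation.

    # Hypothetical sketch of a GOODBOT-style staged reaction to user distress.
    # Keywords, levels, and responses are illustrative assumptions.
    DISTRESS_KEYWORDS = {"sad": 1, "hopeless": 2, "suicide": 3}

    def distress_level(message: str) -> int:
        """Return the highest distress level triggered by the message."""
        words = message.lower().split()
        return max((DISTRESS_KEYWORDS.get(w, 0) for w in words), default=0)

    def respond(message: str) -> str:
        """Escalate the reaction with the detected level of distress."""
        level = distress_level(message)
        if level == 0:
            return "Tell me more."
        if level == 1:
            return "I am sorry to hear that. What makes you feel this way?"
        if level == 2:
            return "That sounds serious. Would you like to talk to a person?"
        return "Please seek help immediately, e.g. via a crisis hotline."

    print(respond("I feel hopeless today"))

A real moral chatbot would need far more robust recognition than keyword matching; the sketch only makes the escalation principle tangible.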
Happy Hedgehog
Between June 2019 and January 2020, the sixth artifact of machine ethics will be created at the FHNW School of Business. Prof. Dr. Oliver Bendel is the initiator, the client and – together with a colleague – the supervisor of the project. Animal-machine interaction is about the design, evaluation and implementation of (usually more sophisticated or complex) machines and computer systems with which animals interact and communicate and which interact and communicate with animals. While machine ethics has largely focused on humans so far, it can also prove beneficial for animals. It attempts to conceive moral machines and to implement them with the help of further disciplines such as computer science and AI or robotics. The aim of the project is the detailed description and prototypical implementation of an animal-friendly service robot, more precisely a mowing robot called HAPPY HEDGEHOG (HHH). With the help of sensors and moral rules, the robot should be able to recognize hedgehogs (especially young animals) and initiate appropriate measures (interrupting its work, driving the hedgehog away, notifying the owner). The project has similarities with an earlier project, LADYBIRD. This time, however, more emphasis will be placed on existing equipment, platforms and software. The first artifact at the university was the GOODBOT, in 2013.
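What the combination of sensors and moral rules could look like at the level of decision logic is sketched below. This is a speculative illustration, not the project's design: the detection inputs, the action names, and the rule ordering are assumptions.

    # Hypothetical decision logic for an animal-friendly mowing robot.
    # Inputs, actions, and rule order are assumptions, not the HHH design.
    from enum import Enum, auto

    class Action(Enum):
        CONTINUE_MOWING = auto()
        STOP_BLADES = auto()
        DRIVE_ANIMAL_AWAY = auto()  # e.g. emit a sound, back off slowly
        NOTIFY_OWNER = auto()

    def decide(hedgehog_detected: bool, is_young_animal: bool) -> list:
        """Apply the moral rules in order: first do no harm, then react."""
        if not hedgehog_detected:
            return [Action.CONTINUE_MOWING]
        actions = [Action.STOP_BLADES]           # rule 1: interrupt work
        if is_young_animal:
            actions.append(Action.NOTIFY_OWNER)  # rule 2: young animals need help
        else:
            actions.append(Action.DRIVE_ANIMAL_AWAY)
        return actions

    print(decide(hedgehog_detected=True, is_young_animal=True))

The hard part in practice is likely not this rule layer but reliable detection, e.g. distinguishing a young hedgehog from leaves or a stone in poor light.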
No Solution to the Nursing Crisis
“In Germany, around four million people will be dependent on care and nursing in 2030. Already today there is talk of a nursing crisis, which is likely to intensify further in view of demographic developments in the coming years. Fewer and fewer young people will be available to the labour market as potential carers for the elderly. Experts estimate that there will be a shortage of around half a million nursing staff in Germany by 2030. Given these dramatic forecasts, are nursing robots possibly the solution to the problem? Scientists from the disciplines of computer science, robotics, medicine, nursing science, social psychology, and philosophy explored this question at a Berlin conference of the Daimler and Benz Foundation. The machine ethicist and conference leader Professor Oliver Bendel first of all stated that many people had completely wrong ideas about care robots: ‘In the media there are often pictures or illustrations that do not correspond to reality’.” (Die Welt, 14 June 2019) This is how an article in the German newspaper Die Welt begins. Norbert Lossau provides a detailed description of the Berlin Colloquium, which took place on 22 May 2019. The article is available in both English and German. So, are robots the answer to the nursing crisis? Oliver Bendel would say no. They can be useful for caregivers and patients, but they do not solve the major issues.
AI Award for the Social Good
The Association for the Advancement of Artificial Intelligence (AAAI) and Squirrel AI Learning have announced the establishment of a new annual award of one million dollars for societal benefits of AI. According to an AAAI press release, the award will be sponsored by Squirrel AI Learning as part of its mission to promote the use of artificial intelligence with lasting positive effects for society. “This new international award will recognize significant contributions in the field of artificial intelligence with profound societal impact that have generated otherwise unattainable value for humanity. The award nomination and selection process will be designed by a committee led by AAAI that will include representatives from international organizations with relevant expertise that will be designated by Squirrel AI Learning.” (AAAI Press Release, 28 May 2019) The AAAI Spring Symposia have repeatedly devoted themselves to the social good, also from the perspective of machine ethics. Further information via aaai.org/Pressroom/Releases//release-19-0528.php.
How to Treat Animals
Parallel to his work in machine ethics, Oliver Bendel is trying to establish animal-machine interaction (AMI) as a discipline. He was very impressed by Clara Mancini’s paper “Animal-Computer Interaction (ACI): A Manifesto”. In his AMI research, he mainly investigates robots, gadgets, and devices and their behavior towards animals. These raise not only moral questions, but also questions concerning the design of outward appearance and the ability to speak. The general background for his considerations is that more and more machines and animals meet in closed, half-open and open worlds. He believes that semi-autonomous and autonomous systems should have rules so that they treat animals well. They should not disturb, frighten, injure or kill them. Examples are toy robots, domestic robots, service robots in shopping malls, and agricultural robots. Jackie Snow, who writes for the New York Times, National Geographic, and the Wall Street Journal, has talked to several experts about the topic. In an article for Fast Company, she quotes the ethicists Oliver Bendel and Peter Singer. Clara Mancini also expresses her point of view. The article, titled “AI’s next ethical challenge: how to treat animals”, can be downloaded here.