“Robots, Empathy and Emotions” – this research project was put out to tender some time ago. The contract was awarded to a consortium of FHNW, ZHAW and the University of St. Gallen. The applicant, Prof. Dr. Hartmut Schulze from the FHNW School of Applied Psychology, covers the field of psychology. The co-applicant Prof. Dr. Oliver Bendel from the FHNW School of Business takes the perspective of information, robot and machine ethics; the co-applicant Prof. Dr. Maria Schubert from the ZHAW takes that of nursing science. The client TA-SWISS stated on its website: “What influence do robots … have on our society and on the people who interact with them? Are robots perhaps snitches rather than confidants? … What do we expect from these machines, and what can we effectively expect from them? Numerous sociological, psychological, economic, philosophical and legal questions related to the present and future use and potential of robots are still open.” (Website TA-SWISS, own translation) The kick-off meeting with a top-class accompanying group took place in Bern, the capital of Switzerland, on 26 June 2019.
Self-Repairing Robots
Soft robots with soft surfaces and soft fingers are in vogue. They can touch people, animals, and plants as well as fragile things in such a way that nothing is hurt or destroyed. However, they are vulnerable themselves. One cut, one punch, and they are damaged. According to the Guardian, a European Commission-funded project is trying to solve this problem. It aims to create “self-healing” robots “that can feel pain, or sense damage, before swiftly patching themselves up without human intervention”. “The researchers have already successfully developed polymers that can heal themselves by creating new bonds after about 40 minutes. The next step will be to embed sensor fibres in the polymer which can detect where the damage is located. The end goal is to make the healing automated, avoiding the current need for heat to activate the system, through the touch of a human hand.” (Guardian, 8 August 2019) Surely the goal will not be for the robots to really suffer. That would have tremendous implications: they would have to be given rights. Rather, it is an imaginary pain, a precondition for the self-repair process or other reactions.
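To make this concrete, here is a minimal sketch in Python of how “imaginary pain” could work as a mere control signal that triggers self-repair. All names (DamageEvent, PAIN_THRESHOLD, and so on) are invented for illustration; this is not the project’s actual software.

```python
# Hypothetical sketch: "imaginary pain" as a damage signal that triggers
# self-repair, with no suffering involved.

from dataclasses import dataclass

@dataclass
class DamageEvent:
    location: str    # where an embedded sensor fibre reports damage
    severity: float  # 0.0 (none) to 1.0 (severe)

class SoftRobot:
    PAIN_THRESHOLD = 0.2  # assumed severity above which repair is triggered

    def __init__(self) -> None:
        self.repairing = False

    def feel(self, event: DamageEvent) -> None:
        """Register a damage signal; a control input, not an experience."""
        if event.severity >= self.PAIN_THRESHOLD:
            self.start_repair(event)

    def start_repair(self, event: DamageEvent) -> None:
        """Pause operation and let the polymer re-bond at the damage site."""
        self.repairing = True
        print(f"Pausing; healing {event.location} (severity {event.severity:.1f})")

robot = SoftRobot()
robot.feel(DamageEvent(location="left gripper", severity=0.6))
```

The point of the sketch is that the “pain” is nothing more than a sensor value crossing a threshold; no phenomenal experience, and hence no claim to rights, follows from it.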
Knightscope’s Fourth Generation K5
Knightscope’s security robots have been on the road in Silicon Valley for years. They can see, hear and smell – and report anything suspicious to a central office. A new generation has emerged from a partnership with Samsung. The monolithic cone has become a modular object. The company writes in its blog: “With its all new, fully suspended drivetrain, the K5v4 is uniquely suited to manage the more aggressive terrain outside Samsung’s Silicon Valley workplace. Since deploying, we have been able to reduce the amount of time it takes to complete a robot guard tour in areas inhibited by speed bumps, while continuing to sweep both of their multi-story parking garages for abandoned vehicles and provide their command center with the additional eyes and ears to provide more security intelligence and improve overall security.” (Knightscope, 16 June 2019) Security robots can certainly be an option in closed areas; their use in public spaces, by contrast, raises many ethical questions. However, security robots can do more than security cameras. And it is hard to escape the fourth generation.
Chatbots in Amsterdam
CONVERSATIONS 2019 is a full-day workshop on chatbot research. It will take place on November 19, 2019 at the University of Amsterdam. From the description: “Chatbots are conversational agents which allow the user access to information and services through natural language dialogue, through text or voice. … Research is crucial in helping realize the potential of chatbots as a means of help and support, information and entertainment, social interaction and relationships. The CONVERSATIONS workshop contributes to this endeavour by providing a cross-disciplinary arena for knowledge exchange by researchers with an interest in chatbots.” The topics of interest that may be explored in the papers and at the workshop include humanlike chatbots, networks of users and chatbots, trustworthy chatbot design, and privacy and ethical issues in chatbot design and implementation. More information via conversations2019.wordpress.com/.
Implementing Responsible Research and Innovation for Care Robots
The article “Implementing Responsible Research and Innovation for Care Robots through BS 8611” by Bernd Carsten Stahl is part of the open access book “Pflegeroboter” (published in November 2018). From the abstract: “The concept of responsible research and innovation (RRI) has gained prominence in European research. It has been integrated into the EU’s Horizon 2020 research framework as well as a number of individual Member States’ research strategies. Elsewhere we have discussed how the idea of RRI can be applied to healthcare robots … and we have speculated what such an implementation might look like in social reality … In this paper I will explore how parallel developments reflect the reasoning in RRI. The focus of the paper will therefore be on the recently published standard on ‘Robots and robotic devices: Guide to the ethical design and application of robots and robotic systems’ … I will analyse the standard and discuss how it can be applied to care robots. The key question to be discussed is whether and to what degree this can be seen as an implementation of RRI in the area of care robotics.” By July 2019, the book and its individual chapters had been downloaded 80,000 times, which indicates lively interest in the topic. More information via www.springer.com/de/book/9783658226978.
Deceptive Machines
“AI has definitively beaten humans at another of our favorite games. A poker bot, designed by researchers from Facebook’s AI lab and Carnegie Mellon University, has bested some of the world’s top players …” (The Verge, 11 July 2019) According to the report, Pluribus was remarkably good at bluffing its opponents. The Wall Street Journal reported: “A new artificial intelligence program is so advanced at a key human skill – deception – that it wiped out five human poker players with one lousy hand.” (Wall Street Journal, 11 July 2019) Of course, bluffing does not have to be equated with cheating, but interesting scientific questions arise in this context. At the conference “Machine Ethics and Machine Law” in Krakow in 2016, Ronald C. Arkin, Oliver Bendel, Jaap Hage, and Mojca Plesnicar discussed the question “Should we develop robots that deceive?” on a panel. Ron Arkin (who works in military research) and Oliver Bendel (who does not) both came to the conclusion that we should, but for very different reasons. The ethicist from Zurich, inventor of the LIEBOT, advocates free, independent research in which problematic and deceptive machines are also developed, for the sake of an important gain in knowledge, but argues for regulating the areas of application (for example dating portals or military operations). Further information about Pluribus can be found in the paper itself, entitled “Superhuman AI for multiplayer poker”.
Happy Hedgehog
Between June 2019 and January 2020, the sixth artifact of machine ethics will be created at the FHNW School of Business. Prof. Dr. Oliver Bendel is the initiator, the client and – together with a colleague – the supervisor of the project. Animal-machine interaction is concerned with the design, evaluation and implementation of (usually more sophisticated or complex) machines and computer systems that interact and communicate with animals. While machine ethics has largely focused on humans so far, it can also prove beneficial for animals. It attempts to conceive moral machines and to implement them with the help of further disciplines such as computer science and AI or robotics. The aim of the project is the detailed description and prototypical implementation of an animal-friendly service robot, more precisely a mowing robot called HAPPY HEDGEHOG (HHH). With the help of sensors and moral rules, the robot should be able to recognize hedgehogs (especially young animals) and initiate appropriate measures: interrupting its work, driving the hedgehog away, or notifying the owner. The project has similarities with the earlier LADYBIRD project. This time, however, more emphasis will be placed on existing equipment, platforms and software. The first artifact at the university was the GOODBOT, in 2013.
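A minimal sketch in Python shows how such moral rules could sit between perception and action. The names and the mapping of detections to measures are hypothetical and only illustrate the idea; they are not the project’s actual design.

```python
# Hypothetical sketch of HAPPY HEDGEHOG's moral rules: map a sensor
# reading to the measures described in the project.

from enum import Enum, auto

class Detection(Enum):
    NOTHING = auto()
    ADULT_HEDGEHOG = auto()
    YOUNG_HEDGEHOG = auto()

def decide(detection: Detection) -> list:
    """Return the measures the mowing robot should take."""
    if detection is Detection.NOTHING:
        return ["continue mowing"]
    measures = ["interrupt work"]               # never endanger the animal
    if detection is Detection.YOUNG_HEDGEHOG:
        measures.append("notify owner")         # young animals may need help
    else:
        measures.append("drive hedgehog away")  # adults can move on by themselves
    return measures

for d in Detection:
    print(d.name, "->", decide(d))
```

In the real project, the detection step would rely on sensors and image recognition rather than a ready-made enum value; the sketch only illustrates how moral rules can form a thin layer between what the robot perceives and what it does.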