Knightscope’s security robots have been on the road in Silicon Valley for years. They can see, hear and smell, and report anything suspicious to a central office. A new generation has emerged from a partnership with Samsung: the monolithic cone has become a more modular object. The company writes in its blog: “With its all new, fully suspended drivetrain, the K5v4 is uniquely suited to manage the more aggressive terrain outside Samsung’s Silicon Valley workplace. Since deploying, we have been able to reduce the amount of time it takes to complete a robot guard tour in areas inhibited by speed bumps, while continuing to sweep both of their multi-story parking garages for abandoned vehicles and provide their command center with the additional eyes and ears to provide more security intelligence and improve overall security.” (Knightscope, 16 June 2019) Security robots can certainly be an option in closed areas; their use in public spaces raises many ethical questions. However, security robots can do more than security cameras, and it is hard to escape the fourth generation.
The Relationship between Artificial Intelligence and Machine Ethics
Artificial intelligence takes human or animal intelligence as a reference and attempts to reproduce it in certain aspects. It can also deliberately deviate from human or animal intelligence, for example when its systems solve problems differently. Machine ethics is dedicated to machine morality, producing and investigating it. Whether one likes the concepts and methods of machine ethics or not, one must acknowledge that novel autonomous machines are emerging that appear, in a certain sense, more complete than earlier ones. It is almost surprising that artificial morality did not join artificial intelligence much earlier. Machines that simulate human intelligence and human morality for manageable areas of application, in particular, seem to be a good idea. But what if a superintelligence with a supermorality forms a new species superior to ours? That is science fiction, of course. But it is also something some scientists want to achieve. Basically, it is important to clarify the terms and explain their connections. This is done in a graphic that was published in July 2019 on informationsethik.net and is linked here.
China’s Brain Drain
“China’s AI talent base is growing, and then leaving” – this is what Joy Dantong Ma writes in an article of the same title. Artificial intelligence is promoted in the People’s Republic in various ways. Money is invested in technologies, institutions, and people. “China has been successful in producing AI talent, evidenced by the rapid growth of AI human capital over the last decade.” (MacroPolo, 30 July 2019) This seems to be good news for the country in the Far East. But the study to which the article refers comes to a different conclusion. While “Beijing has cultivated an army of top AI talent, well over half of that talent eventually ended up in America rather than getting hired by domestic companies and institutions”. “That’s because most of the government resources went into expanding the talent base rather than creating incentives and an environment in which they stay.” (MacroPolo, 30 July 2019) According to Joy Dantong Ma, Beijing seems to have recognized its failure in retaining talent. “The well-known New Generation Artificial Intelligence Development Plan, released in 2017, vowed to lure top-notch AI scientists in neural network, machine learning, self-driving cars, and intelligent robotics by opening up special channels and offering up competitive compensation packages. Still, it’s not clear that Beijing will be able to reverse the Chinese AI brains from draining to its biggest competitor, the United States.” (MacroPolo, 30 July 2019) Does the USA even want these talents? That is anything but clear these days.
Unknown Links between Classic Artworks
Can machine vision detect unknown connections between famous or classic artworks? This seems to be the case, as the work of Tomas Jenicek and Ondřej Chum shows. In their paper “Linking Art through Human Poses” they write: “We address the discovery of composition transfer in artworks based on their visual content. Automated analysis of large art collections, which are growing as a result of art digitization among museums and galleries, is an important tool for art history and assists cultural heritage preservation. Modern image retrieval systems offer good performance on visually similar artworks, but fail in the cases of more abstract composition transfer. The proposed approach links artworks through a pose similarity of human figures depicted in images.” (Abstract) Human figures are the subject of many paintings from the Middle Ages to the Modern Age, and their unmistakable poses have often been a source of inspiration for artists. Think of “Sleeping Venus” by Giorgione, “Venus of Urbino” by Titian and “Olympia” by Édouard Manet – paintings with striking similarities and connections. The method of the two scientists consists of fast pose matching and robust spatial verification. They “experimentally show that explicit human pose matching is superior to standard content-based image retrieval methods on a manually annotated art composition transfer dataset” (Abstract). The paper can be downloaded via arxiv.org/abs/1907.03537.
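The core idea of pose matching can be illustrated with a minimal sketch. This is not the authors’ implementation: it simply assumes that each figure is given as a set of 2D joint keypoints (as produced by a pose estimator), normalizes each pose for translation and scale, and ranks a gallery of artworks by pose distance to a query. All function names and the toy data are illustrative.

```python
import numpy as np

def normalize_pose(keypoints):
    """Center a (num_joints, 2) array of 2D keypoints and scale it to unit norm,
    making the comparison invariant to where and how large the figure is."""
    pts = np.asarray(keypoints, dtype=float)
    pts = pts - pts.mean(axis=0)              # translation invariance
    scale = np.linalg.norm(pts)
    return pts / scale if scale > 0 else pts  # scale invariance

def pose_distance(pose_a, pose_b):
    """Mean Euclidean distance between corresponding joints of two normalized poses."""
    a, b = normalize_pose(pose_a), normalize_pose(pose_b)
    return float(np.linalg.norm(a - b, axis=1).mean())

def rank_by_pose(query_pose, gallery):
    """Sort (title, pose) pairs in a gallery by pose similarity to the query figure."""
    return sorted(gallery, key=lambda item: pose_distance(query_pose, item[1]))
```

A reclining figure translated and rescaled in another painting would thus score as nearly identical, while a standing figure would rank far lower. The paper’s actual method additionally performs robust spatial verification, which this sketch omits.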
Chatbots in Amsterdam
CONVERSATIONS 2019 is a full-day workshop on chatbot research. It will take place on November 19, 2019 at the University of Amsterdam. From the description: “Chatbots are conversational agents which allow the user access to information and services through natural language dialogue, through text or voice. … Research is crucial in helping realize the potential of chatbots as a means of help and support, information and entertainment, social interaction and relationships. The CONVERSATIONS workshop contributes to this endeavour by providing a cross-disciplinary arena for knowledge exchange by researchers with an interest in chatbots.” The topics of interest that may be explored in the papers and at the workshop include humanlike chatbots, networks of users and chatbots, trustworthy chatbot design, and privacy and ethical issues in chatbot design and implementation. More information via conversations2019.wordpress.com/.
Could Artificial Intelligence Trigger Wars?
Deep fakes are a young phenomenon. Fake videos have of course existed for a long time; what is new is that artificial intelligence makes their production possible, even in standard applications. On August 1, an article dedicated to the phenomenon was published in the German newspaper Die Welt. It begins with the following words: “It is well known that a picture says more than a thousand words. And moving images, i.e. videos, are still regarded as unmistakable proof that something has taken place exactly as it can be seen in the film. … Powerful artificial intelligence (AI) processes now make it possible to produce such perfect counterfeits that it is no longer possible to tell with the naked eye whether a video is real or manipulated. In so-called deep fake videos, people say or do things they would never say or do.” Among others, the philosopher Oliver Bendel is quoted. The article with the title “Artificial intelligence could trigger wars” can be accessed via www.welt.de.
Implementing Responsible Research and Innovation for Care Robots
The article “Implementing Responsible Research and Innovation for Care Robots through BS 8611” by Bernd Carsten Stahl is part of the open access book “Pflegeroboter” (published in November 2018). From the abstract: “The concept of responsible research and innovation (RRI) has gained prominence in European research. It has been integrated into the EU’s Horizon 2020 research framework as well as a number of individual Member States’ research strategies. Elsewhere we have discussed how the idea of RRI can be applied to healthcare robots … and we have speculated what such an implementation might look like in social reality … In this paper I will explore how parallel developments reflect the reasoning in RRI. The focus of the paper will therefore be on the recently published standard on ‘Robots and robotic devices: Guide to the ethical design and application of robots and robotic systems’ … I will analyse the standard and discuss how it can be applied to care robots. The key question to be discussed is whether and to what degree this can be seen as an implementation of RRI in the area of care robotics.” By July 2019, the book and its individual chapters had been downloaded 80,000 times, which indicates lively interest in the topic. More information via www.springer.com/de/book/9783658226978.
About Basic Property
The title of one of the AAAI 2019 Spring Symposia was “Interpretable AI for Well-Being: Understanding Cognitive Bias and Social Embeddedness”. An important keyword here is “social embeddedness”. Social embeddedness of AI includes issues like “AI and future economics (such as basic income, impact of AI on GDP)” or “well-being society (such as happiness of citizen, life quality)”. In his paper “Are Robot Tax, Basic Income or Basic Property Solutions to the Social Problems of Automation?”, Oliver Bendel discusses and criticizes these approaches in the context of automation and digitization. Moreover, he develops a relatively unknown proposal, unconditional basic property, and presents its potential as well as its risks. The lecture by Oliver Bendel took place on 26 March 2019 at Stanford University and led to lively discussions. It was nominated for “best presentation”. The paper has now been published as a preprint and can be downloaded here.
Development of a Morality Menu
Machine ethics produces moral and immoral machines. The morality is usually fixed, e.g. by programmed meta-rules and rules, so the machine is capable of certain actions and not others. Another approach, however, is the morality menu (MOME for short). With it, the owner or user transfers his or her own morality onto the machine: the machine then behaves, in detail, as he or she would. Together with his teams, Prof. Dr. Oliver Bendel developed several artifacts of machine ethics at his university from 2013 to 2018. For one of them he designed a morality menu that has not yet been implemented. Another concept exists for a virtual assistant that can, more or less independently, make reservations and orders for its owner. In the article “The Morality Menu”, the author introduces the idea of the morality menu in the context of two concrete machines, then discusses advantages and disadvantages and presents possibilities for improvement. A morality menu can be a valuable extension for certain moral machines. You can download the article here. In 2019, a morality menu for a robot will be developed at the School of Business FHNW.
Deceptive Machines
“AI has definitively beaten humans at another of our favorite games. A poker bot, designed by researchers from Facebook’s AI lab and Carnegie Mellon University, has bested some of the world’s top players …” (The Verge, 11 July 2019) According to the magazine, Pluribus was remarkably good at bluffing its opponents. The Wall Street Journal reported: “A new artificial intelligence program is so advanced at a key human skill – deception – that it wiped out five human poker players with one lousy hand.” (Wall Street Journal, 11 July 2019) Of course, bluffing does not have to be equated with cheating, but interesting scientific questions arise in this context. At the conference “Machine Ethics and Machine Law” in 2016 in Krakow, Ronald C. Arkin, Oliver Bendel, Jaap Hage, and Mojca Plesnicar discussed the question “Should we develop robots that deceive?” on a panel. Ron Arkin (who is in military research) and Oliver Bendel (who is not) came to the conclusion that we should – but they had very different arguments. The ethicist from Zurich, inventor of the LIEBOT, advocates free, independent research in which problematic and deceptive machines are also developed, for the sake of an important gain in knowledge, but is committed to regulating the areas of application (for example dating portals or military operations). Further information about Pluribus can be found in the paper itself, entitled “Superhuman AI for multiplayer poker”.