Pepper’s New Job

SoftBank Robotics has announced that it will operate a cafe in Tokyo. The humanoid robot Pepper is to play a major role in it. But people will not disappear: they will of course be guests, but also, as in traditional establishments of this kind, waitresses and waiters. At least that’s what ZDNET reports. “The cafe, called Pepper Parlor, will utilise both human and robot staff to serve customers, and marks the company’s first time operating a restaurant or cafe.” (ZDNET, 13 September 2019) According to SoftBank Robotics, the aim is “to create a space where people can easily experience the coexistence of people and robots and enjoy the evolution of robots and the future of living with robots”. “We want to make robots not only for convenience and efficiency, but also to expand the possibilities of people and bring happiness.” (ZDNET, 13 September 2019) This opens up new career opportunities for the little robot, which recognizes and shows emotions, which listens and talks, and which has even been taught to give high-fives. It has long since left its family’s lap and can now be found in shopping malls and nursing homes. Now it will be serving waffles in a cafe in Tokyo.

Ethics in AI for Kids and Teens

In summer 2019, Blakeley Payne ran a very special course at MIT. According to an article in Quartz magazine, the graduate student had created an AI ethics curriculum to make kids and teens aware of how AI systems mediate their everyday lives. “By starting early, she hopes the kids will become more conscious of how AI is designed and how it can manipulate them. These lessons also help prepare them for the jobs of the future, and potentially become AI designers rather than just consumers.” (Quartz, 4 September 2019) Not everyone is convinced that artificial intelligence is the right topic for kids and teens. “Some argue that developing kindness, citizenship, or even a foreign language might serve students better than learning AI systems that could be outdated by the time they graduate. But Payne sees middle school as a unique time to start kids understanding the world they live in: it’s around ages 10 to 14 that kids start to experience higher-level thoughts and deal with complex moral reasoning. And most of them have smartphones loaded with all sorts of AI.” (Quartz, 4 September 2019) There is no doubt that the MIT course could serve as a model for schools around the world. The renowned university once again seems to be setting new standards.

Fighting Deepfakes with Deepfakes

A deepfake (or deep fake) is a picture or video created with the help of artificial intelligence that looks authentic but is not. The term also covers the methods and techniques used to produce such material. Machine learning, and especially deep learning, is used. Deepfakes are used to create works of art and visual objects, but also as means of discreditation, manipulation, and propaganda. Politics and pornography are therefore closely interwoven with the phenomenon. According to Futurism, Facebook is teaming up with Microsoft and several prominent universities for a “Deepfake Detection Challenge”. “The idea is to build a data set, with the help of human user input, that’ll help neural networks detect what is and isn’t a deepfake. The end result, if all goes well, will be a system that can reliably detect fake videos online. Similar data sets already exist for object or speech recognition, but there isn’t one specifically made for detecting deepfakes yet.” (Futurism, 5 September 2019) The winning team will receive a prize, presumably a considerable sum of money. Facebook is investing a total of 10 million dollars in the competition.
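What “detecting deepfakes” amounts to technically can be illustrated with a minimal sketch: a small neural network that classifies video frames as authentic or fake, trained on a labelled data set of the kind the challenge wants to build. The sketch below uses PyTorch; the model, the random tensors standing in for frames, and all parameter choices are illustrative assumptions and have nothing to do with the actual challenge submissions.

```python
# Minimal sketch of a binary real-vs-fake frame classifier (illustrative only).
# Assumes a labelled data set of video frames; random tensors stand in for it here.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Small CNN that maps an RGB frame to a single 'fake' logit."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # logit > 0 is read as "deepfake"

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = FrameClassifier()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step: 8 random "frames", labels 1 = fake, 0 = authentic.
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, 1)).float()
optimizer.zero_grad()
loss = criterion(model(frames), labels)
loss.backward()
optimizer.step()
```

In a real detector the random tensors would be replaced by frames from a labelled corpus, which is exactly the resource the challenge aims to create.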

Punch the Robot

Robots are repeatedly damaged or destroyed. The hitchBOT is a well-known example. The security robot K5, too, has been attacked several times. The latest case is described in the magazine Wired: “Every day for 10 months, Knightscope K5 patrolled the parking garage across the street from the city hall in Hayward, California. An autonomous security robot, it rolled around by itself, taking video and reading license plates. Locals had complained the garage was dangerous, but K5 seemed to be doing a good job restoring safety. Until the night of August 3, when a stranger came up to K5, knocked it down, and kicked it repeatedly, inflicting serious damage.” (Wired, 29 August 2019) The author investigates the question of whether one may attack robots. Of course you shouldn’t damage other people’s property. But what if the robot is a spy, a data collector, a profile creator? Digital self-defence (which exploits digital as well as analog possibilities) seems to be a proven tool not only in Hong Kong, but also in the US and Europe. The rights for robots that some demand are not a serious objection. Robots do not have rights. They feel nothing, they do not suffer, they have no consciousness. “So punch the robot, I tell you! Test the strength of your sociopolitical convictions on this lunk of inorganic matter!” (Wired, 29 August 2019)

The Reversed Cyborg

Chimeras in the biological and medical sense are organisms that consist of cells or tissues of different individuals and yet form self-contained and viable (though not necessarily reproductive) organisms. They can occur within a species or between species, and they can be plants as well as animals. There are natural mixed organisms (blood chimeras in mammals) and artificial ones (grafting in plants, animal-human embryos). Cyborgs are not chimeras in this sense. Nevertheless, research in this field might also be relevant for them, in particular for inverted or reversed cyborgs, for example robots in which an animal or human brain or organ is implanted. Animal-human chimeras for the production of human organs are regarded as unproblematic by many ethicists. According to a comment by Oliver Bendel, this is astonishing, since findings from animal ethics and veterinary medicine, and in particular the suffering and death of non-human living beings, are ignored.

The Fight against Plastic in the Seas

The pollution of water by plastic has been a topic in the media for a few years now. In 2015, the School of Engineering FHNW and the School of Business FHNW investigated whether a robotic fish – like Oliver Bendel’s CLEANINGFISH (2014) – could be a solution. In 2018, the information and machine ethicist commissioned another work to investigate several existing or planned projects dealing with marine pollution. Rolf Stucki’s final thesis in the EUT study program was based on “a literature research on the current state of the plastics problem worldwide and its effects, but also on the properties and advantages of plastics” (Management Summary, own translation). “In addition, interviews were conducted with representatives of the projects. In order to assess the internal company factors (strengths, weaknesses) and external environmental factors (opportunities, risks), SWOT analyses were prepared on the basis of the answers and the research” (Management Summary). According to Stucki, the results show that most projects are financially dependent on sponsors and donors. Two of them are in the concept phase; they still have to prove their technical and financial feasibility in the medium term. With regard to social commitment, it can be said that all six projects are very active. A poster shows a comparison (the photos were altered for publication in this blog). The WasteShark stands out as a robot. It is, so to speak, the CLEANINGFISH that has become reality.