“Sensitive synthetic skin enables robots to sense their own bodies and surroundings – a crucial capability if they are to be in close contact with people. Inspired by human skin, a team at the Technical University of Munich (TUM) has developed a system combining artificial skin with control algorithms and used it to create the first autonomous humanoid robot with full-body artificial skin.” (Press Release TUM, 10 October 2019) The robot skin consists of hexagonal cells which are about the size of a two-euro coin. Each of them is equipped with a microprocessor and sensors to detect contact, acceleration, proximity, and temperature. “Such artificial skin enables robots to perceive their surroundings in much greater detail and with more sensitivity. This not only helps them to move safely. It also makes them safer when operating near people and gives them the ability to anticipate and actively avoid accidents.” (Press Release TUM, 10 October 2019) The artificial skin could become important for service robots of all kinds, but also for certain industrial robots (Photo: Department of Electrical and Computer Engineering, Astrid Eckert).
The papers of the AAAI 2019 Spring Symposium “Interpretable AI for Well-Being: Understanding Cognitive Bias and Social Embeddedness” were published in October 2019. The participants had met at Stanford University at the end of March 2019 to present and discuss their findings. Session 5 (“Social Embeddedness”) includes the following publications: “Are Robot Tax, Basic Income or Basic Property Solutions to the Social Problems of Automation?” (Oliver Bendel), “Context-based Network Analysis of Structured Knowledge for Data Utilization” (Teruaki Hayashi, Yukio Ohsawa), “Extended Mind, Embedded AI, and ‘the Barrier of Meaning’” (Sadeq Rahimi), “Concept of Future Prototyping Methodology to Enhance Value Creation within Future Contexts” (Miwa Nishinaka, Yusuke Kishita, Hisashi Masuda, Kunio Shirahada), and “Maintaining Knowledge Distribution System’s Sustainability Using Common Value Auctions” (Anas Al-Tirawi, Robert G. Reynolds). The papers can be downloaded via ceur-ws.org/Vol-2448/.
“Robot priests can bless you, advise you, and even perform your funeral” – this is the title of an article published in Vox on 9 September 2019. “A new priest named Mindar is holding forth at Kodaiji, a 400-year-old Buddhist temple in Kyoto, Japan. Like other clergy members, this priest can deliver sermons and move around to interface with worshippers. But Mindar comes with some … unusual traits. A body made of aluminum and silicone, for starters.” (Vox, 9 September 2019) The robot looks like Kannon, the Buddhist deity of mercy. According to Vox, it is an attempt to reignite people’s passion for their faith in a country where religious affiliation is on the decline. “For now, Mindar is not AI-powered. It just recites the same preprogrammed sermon about the Heart Sutra over and over. But the robot’s creators say they plan to give it machine-learning capabilities that’ll enable it to tailor feedback to worshippers’ specific spiritual and ethical problems.” (Vox, 9 September 2019) There is hope that the robot will not so much bring people back to faith as spark their enthusiasm for science – the science that created Mindar.
SoftBank Robotics has announced that it will operate a cafe in Tokyo, and the humanoid robot Pepper is to play a major role in it. But humans will not disappear: they will of course be guests, but they will also, as in traditional establishments of this kind, work as waiters and waitresses. At least that is what ZDNET reports. “The cafe, called Pepper Parlor, will utilise both human and robot staff to serve customers, and marks the company’s first time operating a restaurant or cafe.” (ZDNET, 13 September 2019) According to SoftBank Robotics, the aim is “to create a space where people can easily experience the coexistence of people and robots and enjoy the evolution of robots and the future of living with robots”. “We want to make robots not only for convenience and efficiency, but also to expand the possibilities of people and bring happiness.” (ZDNET, 13 September 2019) This opens up new career opportunities for the little robot, which recognizes and shows emotions, listens and talks, and has been trained to give high-fives. It has long since left its family’s lap and can now be found in shopping malls and nursing homes. Soon it will be serving waffles in a cafe in Tokyo.
Robots are repeatedly damaged or destroyed. The hitchBOT is a well-known example, and the security robot K5 has also been the victim of attacks several times. The latest case is described in the magazine Wired: “Every day for 10 months, Knightscope K5 patrolled the parking garage across the street from the city hall in Hayward, California. An autonomous security robot, it rolled around by itself, taking video and reading license plates. Locals had complained the garage was dangerous, but K5 seemed to be doing a good job restoring safety. Until the night of August 3, when a stranger came up to K5, knocked it down, and kicked it repeatedly, inflicting serious damage.” (Wired, 29 August 2019) The author investigates the question of whether one may attack robots. Of course one should not damage other people’s property. But what if the robot is a spy, a data collector, a profile creator? Digital self-defence (which exploits digital as well as analog possibilities) seems to be a proven tool not only in Hong Kong, but also in the US and Europe. The robot rights that some demand cannot be a serious objection: robots do not have rights. They feel nothing, they do not suffer, they have no consciousness. “So punch the robot, I tell you! Test the strength of your sociopolitical convictions on this lunk of inorganic matter!” (Wired, 29 August 2019)
Olli 2.0 is born. In 2016, Local Motors and IBM had presented an autonomous shuttle that was reminiscent of the Smart Shuttle operated by PostAuto AG in Sion, Valais. Unlike that vehicle, however, Olli 1.0 could not only think but also speak, both at a high level and with the help of IBM Watson. Passengers’ wishes were accepted, for example with regard to destinations, and destinations were even suggested; when it was hot, the car drove to the nearest ice cream parlour. Olli could also reassure passengers or passers-by: “For citizens of Maryland, many of whom have never seen a self-driving car, Watson’s reassuring communications could be critical to making them more comfortable with the idea that there’s no human being at the wheel.” (Information IBM) Olli 2.0, launched in 2019, can access not only IBM Watson but also Lex, Amazon’s deep-learning chatbot service. Something else distinguishes it from its predecessor: Olli is now 80% 3D-printed. This was reported by TechCrunch on August 31, 2019; the article contains more interesting information about the shuttle. The vehicle is intended for small areas, especially trade fairs and university campuses. But like the Smart Shuttle, it has already been tested in small cities.