The Robot Called HAPPY HEDGEHOG

The paper “The HAPPY HEDGEHOG Project” by Prof. Dr. Oliver Bendel, Emanuel Graf and Kevin Bollier was accepted at the AAAI Spring Symposia 2021. The researchers will present it at the sub-conference “Machine Learning for Mobile Robot Navigation in the Wild” at the end of March. The project was conducted at the School of Business FHNW between June 2019 and January 2020. In the context of machine ethics and social robotics, Emanuel Graf, Kevin Bollier, Michel Beugger and Vay Lien Chang developed a prototype of a lawn-mowing robot that stops its work as soon as it detects a hedgehog. HAPPY HEDGEHOG (HHH) has a thermal imaging camera. When it encounters a warm object, it uses image recognition to investigate it further; at night, a lamp mounted on top helps. After training with hundreds of photos, HHH can identify a hedgehog quite accurately. With this artifact, the team provides a solution to a problem that frequently occurs in practice: commercial robotic mowers repeatedly kill young hedgehogs in the dark. HAPPY HEDGEHOG could help to save them. The video on informationsethik.net shows the robot without its disguise. It stands in the tradition of LADYBIRD.
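
The underlying decision logic can be sketched in a few lines. The following Python fragment is a hypothetical illustration only; the function names, the temperature threshold, and the confidence value are assumptions, not details from the FHNW project.

```python
# Hypothetical sketch of HHH's detection loop. The hardware interfaces
# (thermal_camera, classifier, mower, lamp) and all thresholds are
# assumed for illustration; they are not taken from the project.
import time

WARM_THRESHOLD_C = 25.0   # assumed temperature marking an object as "warm"
MIN_CONFIDENCE = 0.8      # assumed confidence needed to count as a hedgehog

def mow_with_hedgehog_check(thermal_camera, classifier, mower, lamp):
    while mower.is_running():
        frame = thermal_camera.read()              # thermal image of the area ahead
        if frame.max_temperature() > WARM_THRESHOLD_C:
            lamp.on()                              # illuminate the scene at night
            label, confidence = classifier.predict(thermal_camera.rgb_frame())
            lamp.off()
            if label == "hedgehog" and confidence >= MIN_CONFIDENCE:
                mower.stop()                       # spare the animal
                return
        time.sleep(0.1)                            # keep mowing, check again
```

The key design decision is the two-stage check: the cheap thermal cue triggers the more expensive image recognition, so the classifier only runs when a warm object is actually present.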

Hyundai Now Creates Tigers

Hyundai Motor Group has revealed a robot named TIGER, which stands for Transforming Intelligent Ground Excursion Robot. According to the company, it is the second Ultimate Mobility Vehicle (UMV) and the first designed to be uncrewed. “TIGER’s exceptional capabilities are designed to function as a mobile scientific exploration platform in extreme, remote locations. Based on a modular platform architecture, its features include a sophisticated leg and wheel locomotion system, 360-degree directional control, and a range of sensors for remote observation. It is also intended to connect to unmanned aerial vehicles (UAVs), which can fully charge and deliver TIGER to inaccessible locations.” (Media Release, 10 February 2021) A video can be viewed here. With TIGER, the company has developed a very interesting proof of concept. The combination of legs and wheels in particular could prove to be a solution for the future.

Ariana Grande Bows to “Metropolis”

Ariana Grande walks in the footsteps of Fritz Lang with her video “34+35”. The director is famous for his science fiction film “Metropolis” from 1927, in which a robot is transformed into an artificial, human-looking woman, the copy of the real Maria (aka Mary). The video, which quotes the famous model, was directed by Director X. V Magazine writes that “34+35” is the second track of Grande’s recent release “Positions”, “but it is first in sexually charged metaphors” (V Magazine, 17 November 2020). The robot in the “campy video” has the pretty head of Ariana Grande from the beginning. The point is to bring it to life. This happens in an apparatus reminiscent of the one in “Metropolis”. Of course, Ariana Grande is also the scientist who performs the experiment, so she corresponds to the mad inventor Rotwang. Some lines in the song suggest that the Ariana Grande robot is a sex robot. “Can you stay up all night?/Fuck me ’til the daylight/Thirty-four, thirty-five”, sings the star from Boca Raton. V Magazine writes: “Presumably, a robot could do such a thing, and that is perhaps what this mechanized lady has been designed for.” (V Magazine, 17 November 2020) The purpose of the artificial Maria is different. She is used as a deceptive robot. As such, she is more in the tradition of the research of Ronald C. Arkin and Oliver Bendel.

A Prod, a Stroke, or a Hug?

Soft robots with transparent artificial skin can detect human touch with internal cameras and differentiate between a prod, a stroke, or a hug. This is what New Scientist writes in its article “Robot that looks like a bin bag can understand what a hug is”. According to the magazine, the technology could lead to better non-verbal communication between humans and robots. What is behind this message? A scientific experiment that is indeed very interesting. “Guy Hoffman and his colleagues at Cornell University, New York, created a prototype robot with nylon skin stretched over a 1.2-metre tall cylindrical scaffold atop a platform on wheels. Inside the cylinder sits a commercial USB camera which is used to interpret different types of touch on the nylon.” (New Scientist, 29 January 2021) In recent years, there have been several prototypes, studies and surveys on hugging robots. For example, the projects with PR2, Hugvie, and HUGGIE are worth mentioning. Cornell University’s research certainly represents another milestone in this context and in a way puts humans in the foreground.
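
The principle can be illustrated with a toy classifier. In the following sketch, touch types are distinguished by how long the contact lasts and how much skin deformation the internal camera sees; the features and thresholds are invented for illustration and are not taken from the Cornell study.

```python
# Toy illustration of camera-based touch classification. The features
# (per-frame deformation area) and thresholds are assumptions, not
# values from the Cornell prototype.

def classify_touch(contact_areas):
    """contact_areas: deformation area (in pixels) seen by the internal
    camera in each frame while the skin is being touched."""
    duration = len(contact_areas)      # how long the contact lasts
    peak_area = max(contact_areas)     # how much of the skin deforms

    if peak_area > 2000:
        return "hug"     # large deformation across much of the skin
    if duration < 10:
        return "prod"    # brief, small contact patch
    return "stroke"      # longer-lasting, small moving patch

# Example: a long touch with a large contact area reads as a hug
print(classify_touch([800, 2500, 4000, 4200, 3900, 1200] * 4))
```

In the actual prototype, learned models presumably do this job far more robustly; the sketch only shows why a single internal camera can be enough to separate such touch types.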

Alexa Has Hunches

Amazon’s Alexa can now perform actions on its own, based on previous instructions from the user, without asking beforehand. Until now, the voicebot always asked before it did anything. Now it has hunches, which is what Amazon calls the function. On its website, the company writes: “Managing your home’s energy usage is easier than ever, with the Alexa energy dashboard. It works with a variety of smart lights, plugs, switches, water heaters, thermostats, TVs and Echo devices. Once you connect your devices to Alexa, you can start tracking the energy they use, right in the Alexa app. Plus, try an exciting new Hunches feature that can help you save energy without even thinking about it. Now, if Alexa has a hunch that you forgot to turn off a light and no one is home or everyone went to bed, Alexa can automatically turn it off for you. It’s a smart and convenient way to help your home be kinder to the world around it. Every device, every home, and every day counts. Let’s make a difference, together. Amazon is committed to building a sustainable business for our customers and the planet.” (Website Amazon) It will be interesting to see how often Alexa is right with its hunches and how often it is wrong.
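
Functionally, such a hunch boils down to a rule over the state of the household. A minimal sketch, assuming a simple device interface that has nothing to do with Amazon’s actual implementation:

```python
# Minimal sketch of a "hunch" rule: if a light is still on but nobody is
# home or everyone is in bed, turn it off. The Light class and the state
# fields are assumptions, not Amazon's actual interface.
from dataclasses import dataclass

@dataclass
class Light:
    name: str
    is_on: bool = False

    def turn_off(self):
        self.is_on = False

def apply_hunch(home_state: dict, light: Light) -> str:
    nobody_home = not home_state["anyone_home"]
    everyone_asleep = home_state["everyone_in_bed"]
    if light.is_on and (nobody_home or everyone_asleep):
        light.turn_off()
        return f"hunch: turned off the {light.name}"
    return "no action"

# Example: everyone is in bed, so the forgotten hallway light goes off
hallway = Light("hallway light", is_on=True)
print(apply_hunch({"anyone_home": True, "everyone_in_bed": True}, hallway))
```

The hard part in practice is not this rule but estimating states like “anyone home” and “everyone in bed” reliably from sensor and usage data, which is exactly where Alexa can be right or wrong.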

Reclaim Your Face

The “Reclaim Your Face” alliance, which calls for a ban on biometric facial recognition in public space, has been registered as an official European Citizens’ Initiative. One of the goals is to establish transparency: “Facial recognition is being used across Europe in secretive and discriminatory ways. What tools are being used? Is there evidence that it’s really needed? What is it motivated by?” (Website RYF) Another one is to draw red lines: “Some uses of biometrics are just too harmful: unfair treatment based on how we look, no right to express ourselves freely, being treated as a potential criminal suspect.” (Website RYF) Finally, the initiative demands respect for humans: “Biometric mass surveillance is designed to manipulate our behaviour and control what we do. The general public are being used as experimental test subjects. We demand respect for our free will and free choices.” (Website RYF) In recent years, the use of facial recognition techniques has been the subject of critical reflection, such as in the paper “The Uncanny Return of Physiognomy” presented at the 2018 AAAI Spring Symposia or in the chapter “Some Ethical and Legal Issues of FRT” published in the book “Face Recognition Technology” in 2020. More information at reclaimyourface.eu.

A Fish-inspired Robotic Swarm

A team from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering has developed fish-inspired robots that can synchronize their movements like a real school of fish, without any external control. According to a SEAS press release, it is the first time scientists have demonstrated complex 3D collective behaviors with implicit coordination in underwater robots. “Robots are often deployed in areas that are inaccessible or dangerous to humans, areas where human intervention might not even be possible”, said Florian Berlinger, a PhD candidate at SEAS and Wyss, in an interview. “In these situations, it really benefits you to have a highly autonomous robot swarm that is self-sufficient.” (SEAS, 13 January 2021) The fish-inspired robotic swarm, dubbed Blueswarm, was created in the lab of Prof. Radhika Nagpal, an expert in self-organizing systems. There are several studies and prototypes in the field of robotic fish, from CLEANINGFISH (School of Business FHNW) to an invention by Cornell University in New York.
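
The principle of implicit coordination, in which each robot adjusts its behavior only on the basis of what it observes around itself, can be illustrated with a classic firefly-style synchronization model. The following sketch is not Blueswarm’s actual algorithm, merely a minimal demonstration of decentralized synchronization without messages or a central controller:

```python
# Firefly-style phase coupling: each robot nudges its internal rhythm
# toward what it observes in the others. This illustrates implicit
# coordination in general, NOT Blueswarm's actual algorithm.
import random

N_ROBOTS = 10
COUPLING = 0.3   # assumed strength of the adjustment toward the neighbors

phases = [random.random() for _ in range(N_ROBOTS)]  # start out of sync

def step(phases):
    observed_mean = sum(phases) / len(phases)  # what each robot "sees"
    # every robot advances its own clock and drifts toward the observed mean
    return [p + 0.01 + COUPLING * (observed_mean - p) for p in phases]

for _ in range(200):
    phases = step(phases)

spread = max(phases) - min(phases)
print(f"phase spread after 200 steps: {spread:.6f}")  # near zero = in sync
```

Blueswarm achieves something comparable in three dimensions using onboard cameras and LEDs, which makes the real result considerably harder than this one-dimensional toy model suggests.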

Artificial Intelligence and its Siblings

Artificial intelligence (AI) has gained enormous importance in research and practice in the 21st century after decades of ups and downs. At the same time, machine ethics and machine consciousness (artificial consciousness) were able to bring their terms and methods to the public, where they were more or less well understood. Since 2018, a graphic has attempted to clarify the terms and relationships of artificial intelligence, machine ethics and machine consciousness. It is constantly evolving, becoming more precise but also more complex. A new version has been available since the beginning of 2021. It makes even clearer that the three disciplines not only map certain capabilities (mostly of humans), but can also expand them.

A Mobile Charging Robot

Futurism.com reports that Volkswagen has unveiled a working prototype of a robot that can autonomously charge electric cars. “The Mobile Charging Robot is an adorable squat bot – which, when you get right down to it, is strikingly reminiscent of the R2-D2 droid from ‘Star Wars,’ bleeps and bloops included.” (Futurism.com, 28 December 2020) As a result, the service robot becomes a social robot. This may be a benefit for the video, but whether it is necessary in practice remains to be seen. The basic idea is that the robots move to cars parked in large residential complexes, where there is not necessarily a human in the vicinity (and where therefore no social interaction is needed). But the concept is questionable in other respects as well. Mobile energy storage of this type seems inefficient: “basically, you’d have to charge the robot’s battery supply which it then uses to charge electric cars” (Futurism.com, 28 December 2020). Nevertheless, the idea should be pursued. Without a doubt, there are logistical advantages to having a robot drive to and charge cars: fewer charging stations are needed, and two vehicles can be serviced at once.
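
The efficiency objection can be made concrete with a back-of-the-envelope calculation. The 90 per cent figure below is an assumption for illustration, not a Volkswagen specification:

```python
# Back-of-the-envelope illustration of the double-conversion penalty.
# The per-step efficiency is an assumed round number, not a VW figure.
STEP_EFFICIENCY = 0.90                 # one charge/discharge conversion

direct = STEP_EFFICIENCY               # wall -> car battery
via_robot = STEP_EFFICIENCY ** 2       # wall -> robot battery -> car battery

print(f"direct charging:    {direct:.0%} of the energy reaches the car")
print(f"via charging robot: {via_robot:.0%} of the energy reaches the car")
```

Under these assumptions, the intermediate storage step costs roughly ten percentage points, which has to be weighed against the logistical savings mentioned above.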

The Morality Menu Project

From 18 to 21 August 2020, the Robophilosophy conference took place. Due to the pandemic, participants could not meet in Aarhus as originally planned, but only in virtual space. Nevertheless, the conference was a complete success. At the end of the year, the conference proceedings were published by IOS Press, including the paper “The Morality Menu Project” by Oliver Bendel. From the abstract: “The discipline of machine ethics examines, designs and produces moral machines. The artificial morality is usually pre-programmed by a manufacturer or developer. However, another approach is the more flexible morality menu (MOME). With this, owners or users replicate their own moral preferences onto a machine. A team at the FHNW implemented a MOME for MOBO (a chatbot) in 2019/2020. In this article, the author introduces the idea of the MOME, presents the MOBO-MOME project and discusses advantages and disadvantages of such an approach. It turns out that a morality menu could be a valuable extension for certain moral machines.” The book can be ordered on the publisher’s website. An author’s copy is available here.
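
The core idea of the morality menu is easy to make concrete: the machine consults a user-set table of moral rules before it acts. The following sketch is purely illustrative; the rule names are invented and are not the actual options of the MOBO-MOME.

```python
# Illustrative morality menu (MOME) for a chatbot: the user switches
# moral rules on or off, and the bot consults them before answering.
# The rule names are invented examples, not the real MOBO-MOME options.

morality_menu = {
    "admit_to_being_a_machine": True,
    "white_lies_allowed": False,
    "use_polite_language": True,
}

def respond(user_input: str, menu: dict) -> str:
    text = user_input.lower()
    if "are you human" in text:
        if menu["admit_to_being_a_machine"]:
            return "No, I am a chatbot."
        if menu["white_lies_allowed"]:
            return "Of course I am!"
        return "I would rather not say."
    return "Hello!" if menu["use_polite_language"] else "What do you want?"

print(respond("Are you human?", morality_menu))  # -> "No, I am a chatbot."
```

The point of the MOME is precisely that these switches are set by the owner or user rather than pre-programmed by the manufacturer.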