Will Biorobots Clean Up the Seas?

In a paper published on 13 January 2020, researchers from the University of Vermont and Tufts University describe computer-designed novel organisms called Xenobots. Xenobots consist of skin and muscle cells: the skin cells stabilize the organisms, while the muscle cells enable them to perform different activities. A nervous system is not present. An AI system calculates the optimal structure and ratio of the cells for a specific function, and the Xenobots are assembled according to the resulting construction plan. Indeed, the cells appear to work together. The researchers see several areas of application. One could build Xenobots that move through the sea and collect microplastics in an internal pocket. Once the biorobots are filled, they could travel to a place where they die, although it is not clear whether they live at all in the sense of classical organisms. In any case, all that would remain in this place would be the plastic particles and functionless cells, both of which can easily be disposed of. However, Xenobots could also be swallowed by marine animals such as fish and turtles during their work and would be exposed to other dangers. In addition, conventional robots are better suited to removing macroplastics.

The Birth of the Morality Menu

The idea of a morality menu (MOME) was born in 2018 in the context of machine ethics. It is intended to make it possible to transfer the morality of a person to a machine. On a display, the user sees different rules of behaviour and can activate or deactivate them with sliders. Oliver Bendel developed two design studies, one for an animal-friendly vacuum cleaning robot (LADYBIRD), the other for a voicebot like Google Duplex. At the end of 2018, he announced a project at the School of Business FHNW. Three students – Ozan Firat, Levin Padayatty and Yusuf Or – implemented a morality menu for a chatbot called MOBO from June 2019 to January 2020. The user enters personal information and then activates or deactivates nine different rules of conduct. Depending on the settings, MOBO compliments or does not compliment, responds with or without prejudice, and threatens or does not threaten the interlocutor. It responds to each user individually, uses his or her name, and addresses him or her formally or informally. A video of the MOBO-MOME is available here.
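The mechanism described above – sliders that switch rules of conduct on or off, with the bot's replies changing accordingly – can be sketched in a few lines. This is a minimal illustration only; the rule names, the greeting logic and the reply format are hypothetical, since the actual MOBO implementation is not detailed in this post.

```python
# Minimal sketch of a morality-menu mechanism (hypothetical rule names).
from dataclasses import dataclass, field

@dataclass
class MoralityMenu:
    # Each "slider" maps a rule of conduct to on/off.
    rules: dict = field(default_factory=lambda: {
        "compliment_user": True,
        "never_threaten": True,
        "use_formal_address": False,
    })

    def set_rule(self, name: str, active: bool) -> None:
        if name not in self.rules:
            raise KeyError(f"unknown rule: {name}")
        self.rules[name] = active

def greet(menu: MoralityMenu, user_name: str) -> str:
    # The bot addresses each user by name, formally or informally,
    # and compliments only if the corresponding rule is active.
    salutation = "Dear" if menu.rules["use_formal_address"] else "Hi"
    reply = f"{salutation} {user_name}!"
    if menu.rules["compliment_user"]:
        reply += " Nice to talk to you."
    return reply

menu = MoralityMenu()
menu.set_rule("use_formal_address", True)
print(greet(menu, "Ada"))  # -> Dear Ada! Nice to talk to you.
```

The point of the design is that the moral configuration lives outside the reply logic, so the same bot can reflect different users' settings.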

The Old, New Neons

The company Neon picks up an old concept with its Neons, namely that of avatars. Twenty years ago, Oliver Bendel distinguished between two types in the Lexikon der Wirtschaftsinformatik. With reference to the second, he wrote: “Avatars, on the other hand, can represent any figure with certain functions. Such avatars appear on the Internet – for example as customer advisors and newsreaders – or populate the adventure worlds of computer games as game partners and opponents. They often have an anthropomorphic appearance and independent behaviour or even real characters …” (Lexikon der Wirtschaftsinformatik, 2001, own translation) It is precisely this type that the company, which belongs to the Samsung Group and was founded by Pranav Mistry, is now adapting, taking advantage of today’s possibilities. “These are virtual figures that are generated entirely on the computer and are supposed to react autonomously in real time; Mistry spoke of a latency of less than 20 milliseconds.” (Heise Online, 8 January 2020, own translation) The Neons are supposed to show emotions (as do some social robots that are conquering the market) and thus facilitate and strengthen bonds. “The AI-driven character is neither a language assistant à la Bixby nor an interface to the Internet. Instead, it is a friend who can speak several languages, learn new skills and connect to other services, Mistry explained at CES.” (Heise Online, 8 January 2020, own translation)

Another Animal-friendly Machine

Between June 2019 and January 2020, the project HAPPY HEDGEHOG (HHH) was implemented at the School of Business FHNW. The initiator and client was Oliver Bendel. In the context of machine ethics, the students Emanuel Graf, Kevin Bollier, Michel Beugger and Vay Lien Chang developed the prototype of a lawnmower robot that stops working as soon as it discovers a hedgehog. HHH has a thermal imaging camera. If it encounters a warm object, it examines it further using image recognition; at night, a lamp mounted on top assists the camera. After training with hundreds of photos, HHH can identify a hedgehog quite accurately. Firstly, another moral machine has been created in the laboratory; secondly, the team provides a possible solution to a problem that frequently occurs in practice: commercial lawnmower robots often kill baby hedgehogs in the dark. HAPPY HEDGEHOG could help to save them. The video at youtu.be/ijIQ8lBygME shows the robot without its casing; a photo with the casing can be found here. The robot stands in the tradition of LADYBIRD, another animal-friendly machine.
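The two-stage check described above – a thermal trigger followed by image recognition – can be sketched as follows. The thresholds, function names and the stubbed classifier are assumptions for illustration; the students' actual pipeline is not published in detail here.

```python
# Sketch of HHH's two-stage decision (hypothetical thresholds and interfaces).
WARM_THRESHOLD_C = 25.0      # assumed temperature that triggers a closer look
HEDGEHOG_CONFIDENCE = 0.8    # assumed confidence cutoff for the classifier

def should_stop_mowing(max_temp_c: float, classify_hedgehog) -> bool:
    """Stop only if a warm object is in view AND the image classifier
    (e.g. a model trained on hundreds of hedgehog photos) confirms it."""
    if max_temp_c < WARM_THRESHOLD_C:
        return False                  # nothing warm in view: keep mowing
    confidence = classify_hedgehog()  # second stage: visual confirmation
    return confidence >= HEDGEHOG_CONFIDENCE

# Usage with a stubbed classifier:
print(should_stop_mowing(30.0, lambda: 0.93))  # True: hedgehog confirmed
print(should_stop_mowing(18.0, lambda: 0.93))  # False: no warm object
```

Running the cheap thermal check first means the more expensive image recognition is only invoked when something warm is actually in front of the robot.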

Opportunities and Risks of Facial Recognition

The book chapter “The BESTBOT Project” by Oliver Bendel, David Studer and Bradley Richards was published on 31 December 2019. It is part of the 2nd edition of the “Handbuch Maschinenethik”, edited by Oliver Bendel. From the abstract: “The young discipline of machine ethics both studies and creates moral (or immoral) machines. The BESTBOT is a chatbot that recognizes problems and conditions of the user with the help of text analysis and facial recognition and reacts morally to them. It can be seen as a moral machine with some immoral implications. The BESTBOT has two direct predecessor projects, the GOODBOT and the LIEBOT. Both had room for improvement and advancement; thus, the BESTBOT project used their findings as a basis for its development and realization. Text analysis and facial recognition in combination with emotion recognition have proven to be powerful tools for problem identification and are part of the new prototype. The BESTBOT enriches machine ethics as a discipline and can solve problems in practice. At the same time, with new solutions of this kind come new problems, especially with regard to privacy and informational autonomy, which information ethics must deal with.” (Abstract) The book chapter can be downloaded from link.springer.com/referenceworkentry/10.1007/978-3-658-17484-2_32-1.

Towards an Anti Face

Face recognition in public spaces is a threat to freedom. You can defend yourself with masks or with counter-technologies; even make-up is a possibility. Adam Harvey demonstrated this in the context of the CV Dazzle project at the hacker congress 36C3 in Leipzig. As Heise reports, he takes biological characteristics such as face color, symmetry and shadows and modifies them until they seem unnatural to algorithms. The result, according to Adam Harvey, is an “anti face”. These style tips for reclaiming privacy could be useful in Hong Kong, where face recognition is widespread and used against freedom fighters. Further information can be found on the CV Dazzle website: “CV Dazzle explores how fashion can be used as camouflage from face-detection technology, the first step in automated face recognition.” (Website CV Dazzle)