The Interceptor in the Cengkareng Drain

Plastic in rivers and seas is one of the biggest problems of our time. Whether bottles or bags, whether macroplastic or microplastic: flora and fauna are harmed and destroyed. Six projects against plastic waste have already been presented here, with a focus on the seas. One of these initiatives is now also active in rivers. This is very important, because what is fished out of rivers no longer ends up in the oceans. The magazine Fast Company reported on 26 October 2019: “In the Cengkareng Drain, a river that runs through the megacity of Jakarta, Indonesia, tons of plastic trash flows to the ocean each year. But now a new solar-powered robot called the Interceptor is gobbling up the waste so that it can be recycled instead. The system was designed by the nonprofit The Ocean Cleanup, which spent the past four years secretly developing and testing the technology while it continued to work on its main project – a device that can capture plastic trash once it’s already in the ocean.” (Fast Company, 26 October 2019) This is good news. The most important thing, however, is to avoid plastic waste in the first place. Otherwise, the destruction of the waters will continue unabated.

From Logic Programming to Machine Ethics

Luís Moniz Pereira is one of the best-known and most active machine ethicists in the world. Together with his colleague Ari Saptawijaya he wrote the article “From Logic Programming to Machine Ethics” for the “Handbuch Maschinenethik” (“Handbook Machine Ethics”). From the abstract: “This chapter investigates the appropriateness of Logic Programming-based reasoning to machine ethics, an interdisciplinary field of inquiry that emerges from the need of imbuing autonomous agents with the capacity for moral decision making. The first part of the chapter aims at identifying morality viewpoints, as studied in moral philosophy and psychology, which are amenable to computational modeling, and then mapping them to appropriate Logic Programming-based reasoning features. The identified viewpoints are covered by two morality themes: moral permissibility and the dual-process model. In the second part, various Logic Programming-based reasoning features are applied to model these identified morality viewpoints, via classic moral examples taken off-the-shelf from the literature. For this purpose, our QUALM system mainly employs a combination of the Logic Programming features of abduction, updating, and counterfactuals. These features are all supported jointly by Logic Programming tabling mechanisms. The applications are also supported by other existing Logic Programming based systems, featuring preference handling and probabilistic reasoning, which complement QUALM in addressing the morality viewpoints in question. Throughout the chapter, many references to our published work are given, providing further examples and details about each topic. Thus, this chapter can be envisaged as an entry point survey on the employment of Logic Programming for knowledge modelling and technically implementing machine ethics.” (Abstract) Springer VS published the “Handbuch Maschinenethik” in October 2019. The editor is Oliver Bendel (Zurich, Switzerland).
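To give a flavor of what abduction means in this context, here is a toy sketch, not the QUALM system or the authors' code: given a set of rules and an observation, abduction searches for minimal sets of assumable facts (“abducibles”) that entail the observation. All rule and literal names below are invented for illustration.

```python
from itertools import combinations

# Toy illustration of abduction (not QUALM): rules map a head to
# alternative bodies, each body being a set of literals to derive.
rules = {
    "harm": [{"divert_trolley"}, {"push_person"}],
    "save_five": [{"divert_trolley"}, {"push_person"}],
}
abducibles = {"divert_trolley", "push_person"}

def entails(assumptions: set, goal: str) -> bool:
    """Backward-chain: do the assumptions plus the rules derive the goal?"""
    if goal in assumptions:
        return True
    return any(all(entails(assumptions, lit) for lit in body)
               for body in rules.get(goal, []))

def abduce(observation: str) -> list:
    """Return minimal assumption sets that explain the observation."""
    solutions = []
    for k in range(1, len(abducibles) + 1):
        for subset in combinations(sorted(abducibles), k):
            s = set(subset)
            # keep s only if it explains the observation and no smaller
            # already-found solution is contained in it
            if entails(s, observation) and not any(sol <= s for sol in solutions):
                solutions.append(s)
    return solutions

print(abduce("save_five"))  # [{'divert_trolley'}, {'push_person'}]
```

In a moral-permissibility setting, each explanation could then be scored against further rules (e.g. whether it also entails "harm"), which is roughly where preference handling and counterfactual reasoning come in.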

Desire in the Age of Robots and AI

Rebecca Gibson’s book “Desire in the Age of Robots and AI” was published by Palgrave Macmillan at the end of 2019. From the abstract: “This book examines how science fiction’s portrayal of humanity’s desire for robotic companions influences and reflects changes in our actual desires. It begins by taking the reader on a journey that outlines basic human desires – in short, we are storytellers, and we need the objects of our desire to be able to mirror that aspect of our beings. This not only explains the reasons we seek out differences in our mates, but also why we crave sex and romance with robots. In creating a new species of potential companions, science fiction highlights what we already want and how our desires dictate – and are in return recreated – by what is written. But sex with robots is more than a sci-fi pop-culture phenomenon; it’s a driving force in the latest technological advances in cybernetic science. As such, this book looks at both what we imagine and what we can create in terms of the newest iterations of robotic companionship.” (Information Palgrave Macmillan) One chapter is entitled “Angel Replicants and Solid Holograms: Blade Runner 2049 and Its Impact on Robotics”. This is a further contribution on the robots and holograms of the well-known film. The chapter “Hologram Girl” by Oliver Bendel had already dealt with the holograms in this fictional work, the possible relationships with them, and their counterparts in the real world.

Health Care Prediction Algorithm Biased against Black People

The research article “Dissecting racial bias in an algorithm used to manage the health of populations” by Ziad Obermeyer, Brian Powers, Christine Vogeli and Sendhil Mullainathan has been well received by science and media. It was published in the journal Science on 25 October 2019. From the abstract: “Health systems rely on commercial prediction algorithms to identify and help patients with complex health needs. We show that a widely used algorithm, typical of this industry-wide approach and affecting millions of patients, exhibits significant racial bias: At a given risk score, Black patients are considerably sicker than White patients, as evidenced by signs of uncontrolled illnesses.” (Abstract) The authors suggest that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts. The journal Nature quotes Milena Gianfrancesco, an epidemiologist at the University of California, San Francisco, with the following words: “We need a better way of actually assessing the health of the patients.”

Learning How to Behave

In October 2019 Springer VS published the “Handbuch Maschinenethik” (“Handbook Machine Ethics”) with German and English contributions. Editor is Oliver Bendel (Zurich, Switzerland). One of the articles was written by Bertram F. Malle (Brown University, Rhode Island) and Matthias Scheutz (Tufts University, Massachusetts). From the abstract: “We describe a theoretical framework and recent research on one key aspect of robot ethics: the development and implementation of a robot’s moral competence. As autonomous machines take on increasingly social roles in human communities, these machines need to have some level of moral competence to ensure safety, acceptance, and justified trust. We review the extensive and complex elements of human moral competence and ask how analogous competences could be implemented in a robot. We propose that moral competence consists of five elements, two constituents (moral norms and moral vocabulary) and three activities (moral judgment, moral action, and moral communication). A robot’s computational representations of social and moral norms is a prerequisite for all three moral activities. However, merely programming in advance the vast network of human norms is impossible, so new computational learning algorithms are needed that allow robots to acquire and update the context-specific and graded norms relevant to their domain of deployment. Moral vocabulary is needed primarily for moral communication, which expresses moral judgments of others’ violations and explains one’s own moral violations – to justify them, apologize, or declare intentions to do better. Current robots have at best rudimentary moral competence, but with improved learning and reasoning they may begin to show the kinds of capacities that humans will expect of future social robots.” (Abstract “Handbuch Maschinenethik”). The book is available via www.springer.com.
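What “context-specific and graded” norms might look like computationally can be sketched as follows. This is a minimal hypothetical illustration, not Malle and Scheutz's implementation: norms carry a graded strength per context, are updated from feedback rather than fully pre-programmed, and feed a simple moral-judgment step. All names and thresholds are invented.

```python
from dataclasses import dataclass, field

@dataclass
class NormSystem:
    # context -> action -> norm strength in [-1, 1]
    # (negative = prohibited, positive = prescribed, magnitude = gradation)
    norms: dict = field(default_factory=dict)

    def learn(self, context: str, action: str, strength: float) -> None:
        """Acquire or update a norm from feedback in a given context."""
        ctx = self.norms.setdefault(context, {})
        old = ctx.get(action, 0.0)
        ctx[action] = 0.5 * old + 0.5 * strength  # simple running update

    def judge(self, context: str, action: str) -> str:
        """Moral judgment: map the graded norm to a verdict."""
        s = self.norms.get(context, {}).get(action, 0.0)
        if s < -0.3:
            return "violation"
        if s > 0.3:
            return "obligatory"
        return "permissible"

ns = NormSystem()
ns.learn("hospital", "interrupt patient", -1.0)
ns.learn("fire drill", "interrupt patient", 1.0)
print(ns.judge("hospital", "interrupt patient"))    # violation
print(ns.judge("fire drill", "interrupt patient"))  # obligatory
```

The point of the sketch is the abstract's argument in miniature: the same action receives different judgments in different contexts, and the norm table is filled by learning rather than enumerated in advance.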

How to Improve Robot Hugs

Hugs are very important to many of us. We are embraced by people we know and by strangers. When we hug ourselves, it does not have the same effect. And when a robot hugs us, it has no effect at all, or we do not feel comfortable. But that can be changed, at least a little. Alexis E. Block and Katherine J. Kuchenbecker from the Max Planck Institute for Intelligent Systems have published a paper on a research project in this field. The purpose of the project was to evaluate human responses to different robot physical characteristics and hugging behaviors. “Analysis of the results showed that people significantly prefer soft, warm hugs over hard, cold hugs. Furthermore, users prefer hugs that physically squeeze them and release immediately when they are ready for the hug to end. Taking part in the experiment also significantly increased positive user opinions of robots and robot use.” (Abstract) The paper “Softness, Warmth, and Responsiveness Improve Robot Hugs” was published in the International Journal of Social Robotics in January 2019 (First Online: 25 October 2018). It is available via link.springer.com/article/10.1007/s12369-018-0495-2.