According to Tages-Anzeiger (9 December 2019), a consortium led by the Swiss start-up Clearspace has won an ESA competition and been awarded the contract for a waste disposal mission in orbit. The so-called “chaser” of the EPFL spin-off has four robotic arms with which a remnant of the ESA launch vehicle Vega (the Vespa payload adapter) is to be captured and pulled out of orbit. Chaser and the Vespa part will then burn up together in the atmosphere. Later, the company wants to look for new targets. According to Luc Piguet, founder and CEO of Clearspace, the issue of space debris is more urgent than ever: there are currently almost 2,000 active and 3,000 inactive satellites, and the problem is likely to worsen over the next few years. Wherever people roam, the mountains of rubbish grow, and space is filling up with rubbish as well. This may sound literary, but above all it is alarming. Robots could be part of the solution, both on Earth and in orbit.
The international workshop “Learning from Humanoid AI: Motivational, Social & Cognitive Perspectives” took place from 30 November – 1 December 2019 at the University of Potsdam. Dr. Jessica Szczuka raised the question: “What do men and women see in sex robots?” … Her talk was based on the paper “Jealousy 4.0? An empirical study on jealousy-related discomfort of women evoked by other women and gynoid robots” by herself and Nicole Krämer. In their introduction the authors write: “In a paper discussing machine ethics, Bendel asked whether it is ‘possible to be unfaithful to the human love partner with a sex robot, and can a man or a woman be jealous because of the robot’s other love affairs?’ … In this line, the present study aims to empirically investigate whether women perceive robots as potential competitors to their relationship in the same way as they perceive other women to be so. As the degree of human-likeness of robots contributes to the similarity between female-looking robots and women, we additionally investigated differences between machine-like female-looking robots and human-like female-looking robots with respect to their ability to evoke jealousy-related discomfort.” (Paper) The paper can be accessed here.
In 2018, Dr. Yuefang Zhou and Prof. Dr. Martin Fischer initiated the first international workshop on intimate human-robot relations at the University of Potsdam, “which resulted in the publication of an edited book on developments in human-robot intimate relationships”. This year, Prof. Dr. Martin Fischer, Prof. Dr. Rebecca Lazarides, and Dr. Yuefang Zhou are organizing the second edition. “As interest in the topic of humanoid AI continues to grow, the scope of the workshop has widened. During this year’s workshop, international experts from a variety of different disciplines will share their insights on motivational, social and cognitive aspects of learning, with a focus on humanoid intelligent tutoring systems and social learning companions/robots.” (Website Embracing AI) The international workshop “Learning from Humanoid AI: Motivational, Social & Cognitive Perspectives” will take place on 29 and 30 November 2019 at the University of Potsdam. Keynote speakers are Prof. Dr. Tony Belpaeme, Prof. Dr. Oliver Bendel, Prof. Dr. Angelo Cangelosi, Dr. Gabriella Cortellessa, Dr. Kate Devlin, Prof. Dr. Verena Hafner, Dr. Nicolas Spatola, Dr. Jessica Szczuka, and Prof. Dr. Agnieszka Wykowska. Further information is available at embracingai.wordpress.com/.
Various media claimed in November 2019 that very special experiments with cows were being conducted in Russia. Circulating pictures show an animal wearing a virtual reality (VR) headset. The headset could reduce anxiety and increase milk yield if it showed a pleasant environment; at least that is the media’s assumption. But, according to The Verge, “it’s not at all clear whether this is a genuine trial or an elaborate marketing stunt” (The Verge, 26 November 2019). At the moment, there is hardly any evidence that VR works for cows. There is no doubt that it works for humans, at least in the context of marketing: they could wear VR glasses to see a landscape with cows and would then believe that most cows have a good life. But this good life does not exist. Cows suffer from what is done to them, some more, some less. “At the end of the day, what we can say is that someone took the time to make at least one mock-up virtual reality headset for a cow and took these pictures. We don’t need to milk the story any more than that.” (The Verge, 26 November 2019)
In his lecture “Service Robots in Health Care” at the Orient-Institut Istanbul on 18 December 2019, Prof. Dr. Oliver Bendel from Zurich, Switzerland will deal with care robots as well as therapy and surgery robots. He will present well-known and lesser-known examples and clarify the goals, tasks and characteristics of these service robots in the healthcare sector. Afterwards, he will investigate current and future functions of care robots, including sexual assistance functions. Against this background, the lecture will consider the perspectives of both information ethics and machine ethics. In the end, it should become clear which robot types (prototypes as well as products) are available in health care, which purposes they fulfil, which functions they assume, how the healthcare system changes through their use, and which implications and consequences this has for the individual and society. The program of the series “Human, medicine and society: past, present and future encounters” can be downloaded here.
“Alphabet X, the company’s early research and development division, has unveiled the Everyday Robot project, whose aim is to develop a ‘general-purpose learning robot.’ The idea is to equip robots with cameras and complex machine-learning software, letting them observe the world around them and learn from it without needing to be taught every potential situation they may encounter.” (MIT Technology Review, 23 November 2019) This was reported by MIT Technology Review on 23 November 2019 in the article “Alphabet X’s ‘Everyday Robot’ project is making machines that learn as they go”. The approach of Alphabet X seems to be well thought out and target-oriented. In a way, it is oriented towards human learning. One could also teach robots human language in this way. With the help of microphones, cameras and machine learning, they would gradually understand us better and better. For example, they observe how we point to and comment on a person. Or they perceive that we point to an object and say a certain term, and after some time they conclude that this is the name of the object. However, such frameworks pose ethical and legal challenges. You can’t just designate cities as such test areas. The result would be comprehensive surveillance in public spaces. Specially established test areas, on the other hand, would probably not have the same benefits as “natural environments”. Many questions still need to be answered.
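The idea of a robot gradually inferring an object’s name from repeated pointing-and-naming situations can be sketched very simply. The following Python fragment is purely illustrative (the function `learn_labels` and its scene format are invented for this sketch, not part of any Alphabet X system): the robot accumulates co-occurrence counts between objects its camera detects and words its microphone picks up, and after several scenes maps each object to the word it co-occurred with most often.

```python
from collections import Counter, defaultdict

def learn_labels(observations):
    """Infer object names from repeated scenes.

    Each observation pairs the set of objects currently in view
    with the words heard at the same moment. Co-occurrence counts
    accumulate across scenes; each object is then mapped to the
    word it appeared with most frequently.
    """
    counts = defaultdict(Counter)
    for objects_in_view, spoken_words in observations:
        for obj in objects_in_view:
            counts[obj].update(spoken_words)
    return {obj: words.most_common(1)[0][0] for obj, words in counts.items()}

# Hypothetical scenes: someone points at things and says a word.
scenes = [
    ({"cup", "table"}, ["cup"]),
    ({"cup"}, ["cup"]),
    ({"table", "chair"}, ["table"]),
    ({"table"}, ["table"]),
]
print(learn_labels(scenes))
```

With enough scenes, ambiguity resolves itself: “cup” is heard whenever the cup is in view, so the mapping stabilizes even though several objects may be visible at once. Real systems of this kind are of course far more complex, which is precisely why they need the rich, and ethically problematic, observation of everyday environments discussed above.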