In 2018, Dr. Yuefang Zhou and Prof. Dr. Martin Fischer initiated the first international workshop on intimate human-robot relations at the University of Potsdam, “which resulted in the publication of an edited book on developments in human-robot intimate relationships”. This year, Prof. Dr. Martin Fischer, Prof. Dr. Rebecca Lazarides, and Dr. Yuefang Zhou are organizing the second edition. “As interest in the topic of humanoid AI continues to grow, the scope of the workshop has widened. During this year’s workshop, international experts from a variety of different disciplines will share their insights on motivational, social and cognitive aspects of learning, with a focus on humanoid intelligent tutoring systems and social learning companions/robots.” (Website Embracing AI) The international workshop “Learning from Humanoid AI: Motivational, Social & Cognitive Perspectives” will take place on 29 and 30 November 2019 at the University of Potsdam. Keynote speakers are Prof. Dr. Tony Belpaeme, Prof. Dr. Oliver Bendel, Prof. Dr. Angelo Cangelosi, Dr. Gabriella Cortellessa, Dr. Kate Devlin, Prof. Dr. Verena Hafner, Dr. Nicolas Spatola, Dr. Jessica Szczuka, and Prof. Dr. Agnieszka Wykowska. Further information is available at embracingai.wordpress.com/.
The research article “Dissecting racial bias in an algorithm used to manage the health of populations” by Ziad Obermeyer, Brian Powers, Christine Vogeli and Sendhil Mullainathan has attracted considerable attention in science and the media. It was published in the journal Science on 25 October 2019. From the abstract: “Health systems rely on commercial prediction algorithms to identify and help patients with complex health needs. We show that a widely used algorithm, typical of this industry-wide approach and affecting millions of patients, exhibits significant racial bias: At a given risk score, Black patients are considerably sicker than White patients, as evidenced by signs of uncontrolled illnesses.” (Abstract) The authors suggest that the choice of convenient, seemingly effective proxies for ground truth (in this case, healthcare costs as a stand-in for health needs) can be an important source of algorithmic bias in many contexts. The journal Nature quotes Milena Gianfrancesco, an epidemiologist at the University of California, San Francisco, as saying: “We need a better way of actually assessing the health of the patients.”
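The mechanism behind such proxy bias can be illustrated with a minimal simulation. The sketch below is purely hypothetical and not taken from the study: it assumes two groups with identical distributions of true health need, where one group generates lower healthcare costs for the same need (for instance, because of barriers to access). Ranking patients by the cost proxy then systematically under-selects that group, even though it is equally sick. All names, numbers and thresholds are illustrative assumptions.

```python
import random

random.seed(0)

def make_patient(group):
    """Generate one hypothetical patient record."""
    need = random.gauss(50, 15)                     # true health need (ground truth)
    cost_factor = 1.0 if group == "A" else 0.7      # group B spends less per unit of need
    cost = need * cost_factor + random.gauss(0, 5)  # observed cost (the proxy)
    return {"group": group, "need": need, "cost": cost}

patients = [make_patient("A") for _ in range(5000)] + \
           [make_patient("B") for _ in range(5000)]

# The "algorithm": enrol the top 10% of patients ranked by the cost proxy.
k = len(patients) // 10
enrolled = sorted(patients, key=lambda p: p["cost"], reverse=True)[:k]

share_b = sum(p["group"] == "B" for p in enrolled) / k

def avg_need(g):
    """Average true need of enrolled patients in group g."""
    members = [p["need"] for p in enrolled if p["group"] == g]
    return sum(members) / max(1, len(members))

print(f"Share of group B among enrolled: {share_b:.0%}")
print(f"Avg need, enrolled A: {avg_need('A'):.1f} vs enrolled B: {avg_need('B'):.1f}")
```

In this toy setting, group B ends up heavily underrepresented in the programme, and the few B patients who do clear the cost threshold are on average sicker than the enrolled A patients, which mirrors the pattern the authors describe: at a given risk score, one group is considerably sicker than the other.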
The Mayflower Autonomous Ship (MAS) could be the first vessel to cross the Atlantic that is able to navigate around obstacles and hazards by itself. It will depart from Plymouth, UK on the fourth centenary of the original Mayflower voyage, on 6 September 2020, and will reach Plymouth, USA after an exciting tour dedicated to science. “The project was put together by marine research and exploration company ProMare in an effort to expand the scope of marine research. The boat will carry three research pods equipped with scientific instruments to measure various phenomena such as ocean plastics, mammal behaviour or sea level changes.” (ZDnet, 16 October 2019) According to ZDnet, IBM has now joined the initiative to supply technical support for all navigation operations. It is important that the futuristic ship takes care not only of objects, but also of animals. In this context, insights from animal-machine interaction and machine ethics might be useful. Ultimately, the excursion will help marine mammals by obtaining data on their behaviour.
In summer 2019, Blakeley Payne ran a very special course at MIT. According to an article in Quartz magazine, the graduate student had created an AI ethics curriculum to make kids and teens aware of how AI systems mediate their everyday lives. “By starting early, she hopes the kids will become more conscious of how AI is designed and how it can manipulate them. These lessons also help prepare them for the jobs of the future, and potentially become AI designers rather than just consumers.” (Quartz, 4 September 2019) Not everyone is convinced that artificial intelligence is the right topic for kids and teens. “Some argue that developing kindness, citizenship, or even a foreign language might serve students better than learning AI systems that could be outdated by the time they graduate. But Payne sees middle school as a unique time to start kids understanding the world they live in: it’s around ages 10 to 14 that kids start to experience higher-level thoughts and deal with complex moral reasoning. And most of them have smartphones loaded with all sorts of AI.” (Quartz, 4 September 2019) There is no doubt that the MIT course could be a role model for schools around the world. The renowned university once again seems to be setting new standards.
The whistleblower Edward Snowden spoke to the Guardian about his new life and concerns for the future. The reason for the two-hour interview was his book “Permanent Record”, which will be published on 17 September 2019. “In his book, Snowden describes in detail for the first time his background, and what led him to leak details of the secret programmes being run by the US National Security Agency (NSA) and the UK’s secret communication headquarters, GCHQ.” (Guardian, 13 September 2019) According to the Guardian, Snowden said: “The greatest danger still lies ahead, with the refinement of artificial intelligence capabilities, such as facial and pattern recognition.” (Guardian, 13 September 2019) Public appearances by and interviews with him remain rare. On 7 September 2016, the movie “Snowden” was shown as a preview in the Cinéma Vendôme in Brussels. Jan Philipp Albrecht, Member of the European Parliament, invited Viviane Reding, the Luxembourg politician and journalist, and authors and scientists such as Yvonne Hofstetter and Oliver Bendel. After the preview, Edward Snowden was connected to the participants via videoconferencing for almost three quarters of an hour.
Voice assistants often have difficulties with dialects. This was already evident in the case of Siri in 2012: in German-speaking Switzerland, she did not always understand users. There is a similar problem in the UK, where Alexa and other voice assistants have trouble understanding strong regional accents. According to the Guardian, the BBC is preparing to launch a rival to Amazon’s Alexa called Beeb (a nickname for the public service broadcaster, just like “Auntie”). “The voice assistant, which has been created by an in-house BBC team, will be launched next year, with a focus on enabling people to find their favourite programmes and interact with online services. While some US-developed products have struggled to understand strong regional accents, the BBC will … ask staff in offices around the UK to record their voices and make sure the software understands them.” (Guardian, 27 August 2019) Auntie has no plans to develop or offer a physical product such as Amazon’s Echo speaker or a Google Home device. Instead, the Beeb software will be built into the BBC online services. It remains to be seen whether this will solve all problems of comprehension.