Ethics in AI for Kids and Teens

In summer 2019, Blakeley Payne ran a very special course at MIT. According to an article in Quartz, the graduate student had created an AI ethics curriculum to make kids and teens aware of how AI systems mediate their everyday lives. “By starting early, she hopes the kids will become more conscious of how AI is designed and how it can manipulate them. These lessons also help prepare them for the jobs of the future, and potentially become AI designers rather than just consumers.” (Quartz, 4 September 2019) Not everyone is convinced that artificial intelligence is the right topic for kids and teens. “Some argue that developing kindness, citizenship, or even a foreign language might serve students better than learning AI systems that could be outdated by the time they graduate. But Payne sees middle school as a unique time to start kids understanding the world they live in: it’s around ages 10 to 14 that kids start to experience higher-level thoughts and deal with complex moral reasoning. And most of them have smartphones loaded with all sorts of AI.” (Quartz, 4 September 2019) There is no doubt that the MIT course could serve as a model for schools around the world. The renowned university once again seems to be setting new standards.

Permanent Record

The whistleblower Edward Snowden spoke to the Guardian about his new life and his concerns for the future. The occasion for the two-hour interview was his book “Permanent Record”, which will be published on 17 September 2019. “In his book, Snowden describes in detail for the first time his background, and what led him to leak details of the secret programmes being run by the US National Security Agency (NSA) and the UK’s secret communication headquarters, GCHQ.” (Guardian, 13 September 2019) According to the Guardian, Snowden said: “The greatest danger still lies ahead, with the refinement of artificial intelligence capabilities, such as facial and pattern recognition.” (Guardian, 13 September 2019) Public appearances by and interviews with him remain rather rare. On 7 September 2016, the movie “Snowden” was shown as a preview at the Cinéma Vendôme in Brussels. Jan Philipp Albrecht, Member of the European Parliament, had invited Viviane Reding, the Luxembourg politician and journalist, as well as authors and scientists such as Yvonne Hofstetter and Oliver Bendel. After the preview, Edward Snowden was connected to the audience via videoconferencing for almost three quarters of an hour.


Dialects and Accents as a Challenge for Voice Assistants

Voice assistants often have difficulties with dialects. This was already evident with Siri in 2012: in German-speaking Switzerland, it did not always understand users. There is a similar problem in the UK, where Alexa and other voice assistants have trouble understanding regional accents. According to the Guardian, the BBC is preparing to launch a rival to Amazon’s Alexa called Beeb (a nickname for the public service broadcaster, just like “Auntie”). “The voice assistant, which has been created by an in-house BBC team, will be launched next year, with a focus on enabling people to find their favourite programmes and interact with online services. While some US-developed products have struggled to understand strong regional accents, the BBC will … ask staff in offices around the UK to record their voices and make sure the software understands them.” (Guardian, 27 August 2019) Auntie has no plans to develop or offer a physical product such as Amazon’s Echo speaker or a Google Home device. Instead, the Beeb software will be built into the BBC’s online services. It remains to be seen whether this will solve all comprehension problems.
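
How well a recognizer copes with dialects can be quantified. A common measure is the word error rate (WER): the word-level edit distance between what a speaker actually said and what the system transcribed, normalized by the length of the reference transcript. The following Python sketch computes WER per accent group; the sample utterances and recognizer outputs are invented for illustration and are not real BBC or Amazon data.

```python
# Illustrative sketch: comparing recognition quality across accent groups
# by computing the word error rate (WER) per group. The sample data below
# is hypothetical; a real evaluation would use recorded test sets such as
# the staff recordings the BBC plans to collect.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed via standard word-level edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical test utterances: (accent, human transcript, recognizer output)
results = [
    ("Scottish", "turn the lights off in the kitchen", "turn the light of in the kitchen"),
    ("Scottish", "play the news at six", "play the news at six"),
    ("Welsh",    "set a timer for ten minutes", "set a timer for ten minutes"),
]

by_accent = {}
for accent, ref, hyp in results:
    by_accent.setdefault(accent, []).append(word_error_rate(ref, hyp))

for accent, wers in by_accent.items():
    print(f"{accent}: mean WER {sum(wers) / len(wers):.2f}")
```

A large gap in mean WER between accent groups would make the problem described above visible in numbers, which is presumably what region-specific training recordings are meant to reduce.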

An AI System for Multiple-choice Tests

According to the New York Times, the Allen Institute for Artificial Intelligence unveiled a new system that correctly answered more than 90 percent of the questions on an eighth-grade science test and more than 80 percent on a 12th-grade exam. Is it really a breakthrough for AI technology, as the title of the article claims? This is quite controversial among experts. The newspaper is optimistic: “The system, called Aristo, is an indication that in just the past several months researchers have made significant progress in developing A.I. that can understand languages and mimic the logic and decision-making of humans.” (NYT, 4 September 2019) Aristo was built for multiple-choice tests. “It took standard exams written for students in New York, though the Allen Institute removed all questions that included pictures and diagrams.” (NYT, 4 September 2019) Some questions could be answered by simple information retrieval; there are numerous systems that access Google and Wikipedia, including artifacts of machine ethics such as the LIEBOT and the BESTBOT. Other questions, however, required logical reasoning. Perhaps Aristo will help abolish multiple-choice tests, not so much because it can solve them, but because they are often not effective.
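
To illustrate what “simple information retrieval” means here, consider the following Python sketch. It answers a multiple-choice question by scoring each option against the sentence in a small text corpus that best overlaps with the question and the option. The mini corpus and the question are invented stand-ins for what a system might retrieve from Wikipedia; Aristo’s actual methods are considerably more sophisticated.

```python
import re

# A stand-in mini "corpus" for what a retrieval system might fetch
# from a source such as Wikipedia.
CORPUS = (
    "Photosynthesis is the process by which green plants use sunlight "
    "to synthesize food from carbon dioxide and water. "
    "Respiration releases energy from glucose. "
    "Condensation turns water vapor into liquid water."
)

def tokens(text):
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def score(question, option):
    """Best word overlap between question-plus-option and any corpus sentence."""
    query = tokens(question) | tokens(option)
    sentences = re.split(r"(?<=\.)\s+", CORPUS)
    return max(len(query & tokens(s)) for s in sentences)

question = "Which process do plants use to make food from sunlight?"
options = ["photosynthesis", "respiration", "condensation", "evaporation"]

best = max(options, key=lambda o: score(question, o))
print(best)  # -> photosynthesis
```

A baseline of this kind handles lookup-style questions well, which is precisely why the questions requiring logical reasoning are the more interesting test of a system like Aristo.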

Intimate Relationships with Humanoid Robots: Are We Both Ready?

In recent years, there has been widespread media coverage of the arrival of sex robots (e.g., Harmony and Henry, produced by Realbotix™, 2019). This has created heated discussions about the pros and cons of introducing sex robots into human relationships. It has also drawn attention to the far-reaching social and ethical challenges that this new technology will impose on its users. A recent Nature editorial (entitled “AI LOVE YOU”, Nature 547, 138, July 2017) called for urgent empirical research so that empirical evidence can inform robot design and guide public ethical debates. In response to this need, Yuefang Zhou co-organized the first international workshop on the theme of human-robot intimate relationships (AI Love You, 2017). The workshop brought together an interdisciplinary group of psychologists, philosophers, computer scientists, ethicists, and clinicians, as well as interested members of the general public, to discuss this emerging topic. The newly released book (“AI Love You: Developments in Human-Robot Intimate Relationships”, 2019, www.springer.com/gp/book/9783030197339) builds on the presentations and discussions at the workshop to answer the question of readiness from the perspectives of both the technology and the humans who use it.

The System that Detects Fear

Amazon Rekognition is a well-known software for facial recognition, including emotion detection. It is used in the BESTBOT, a moral machine that hides an immoral machine. The immoral element is precisely the facial recognition, which endangers the privacy of users and their informational autonomy. The project is intended not least to draw attention to this risk. Amazon announced on 12 August 2019 that it has improved and expanded its system: “Today, we are launching accuracy and functionality improvements to our face analysis features. Face analysis generates metadata about detected faces in the form of gender, age range, emotions, attributes such as ‘Smile’, face pose, face image quality and face landmarks. With this release, we have further improved the accuracy of gender identification. In addition, we have improved accuracy for emotion detection (for all 7 emotions: ‘Happy’, ‘Sad’, ‘Angry’, ‘Surprised’, ‘Disgusted’, ‘Calm’ and ‘Confused’) and added a new emotion: ‘Fear’.” (Amazon, 12 August 2019) Because the BESTBOT also accesses other systems such as the MS Face API and Kairos, it can already recognize fear. The change at Amazon therefore means no change for this artifact of machine ethics.
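
For readers who want to see what this face analysis output looks like in practice, here is a brief sketch using the AWS SDK for Python (boto3). The region and the image file are placeholders, and valid AWS credentials are assumed.

```python
# Sketch of querying Amazon Rekognition's face analysis via boto3.
# Assumes configured AWS credentials; the image file and region below
# are placeholders. DetectFaces with Attributes=['ALL'] returns the
# metadata quoted above, including the per-face emotions list.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("face.jpg", "rb") as f:
    response = client.detect_faces(
        Image={"Bytes": f.read()},
        Attributes=["ALL"],  # request full metadata, not just bounding boxes
    )

for face in response["FaceDetails"]:
    age = face["AgeRange"]
    print(f"Age range: {age['Low']}-{age['High']}, gender: {face['Gender']['Value']}")
    # Each emotion comes with a confidence score; since the August 2019
    # update the list can include the type 'FEAR'.
    for emotion in sorted(face["Emotions"], key=lambda e: -e["Confidence"]):
        print(f"  {emotion['Type']}: {emotion['Confidence']:.1f}%")
```

That a few lines of code suffice to read off age, gender, and fear from a photograph underlines the privacy risk the BESTBOT project seeks to make visible.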