Welcome to the AI Opera

Blob Opera is an AI experiment by David Li in collaboration with Google Arts and Culture. According to the website, it pays tribute to and explores the original musical instrument, namely the voice. “We developed a machine learning model trained on the voices of four opera singers in order to create an engaging experiment for everyone, regardless of musical skills. Tenor Christian Joel, bass Frederick Tong, mezzo‑soprano Joanna Gamble and soprano Olivia Doutney recorded 16 hours of singing. In the experiment you don’t hear their voices, but the machine learning model’s understanding of what opera singing sounds like, based on what it learnt from them.” (Blob Opera) You can drag the blobs up and down to change pitch – or forwards and backwards for different vowel sounds. The blobs are a pleasure not only to hear but also to watch: while singing, they look around and open and close their mouths, and now and then you can even see their tongues.
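
How such a control scheme might look under the hood can be sketched in a few lines: the blob’s vertical position is mapped to a pitch and its horizontal position to a vowel. The mapping below is a hypothetical illustration, not the actual Blob Opera code.

```python
# Hypothetical sketch of the Blob Opera control scheme: a blob's drag position is
# mapped to a pitch and a vowel. Illustrative only -- not the actual implementation.

VOWELS = ["oo", "oh", "ah", "eh", "ee"]  # rough back-to-front vowel scale

def blob_to_controls(x: float, y: float, low_note: int = 48, high_note: int = 72):
    """Translate a drag position (x, y in [0, 1]) into synthesis controls.

    y (up/down)            -> pitch, quantized to a MIDI note between low_note and high_note
    x (backwards/forwards) -> vowel, chosen from a discrete vowel scale
    """
    midi_note = round(low_note + y * (high_note - low_note))
    vowel = VOWELS[min(int(x * len(VOWELS)), len(VOWELS) - 1)]
    return midi_note, vowel

print(blob_to_controls(0.8, 0.5))  # e.g. (60, 'ee')
```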

Evolutionary Machine Ethics

Luís Moniz Pereira is one of the best-known and most active machine ethicists in the world. Together with his colleague The Anh Han, he wrote the article “Evolutionary Machine Ethics” for the “Handbuch Maschinenethik” (“Handbook Machine Ethics”), edited by Oliver Bendel (Zurich, Switzerland). From the abstract: “Machine ethics is a sprouting interdisciplinary field of enquiry arising from the need of imbuing autonomous agents with some capacity for moral decision-making. Its overall results are not only important for equipping agents with a capacity for moral judgment, but also for helping better understand morality, through the creation and testing of computational models of ethics theories. Computer models have become well defined, eminently observable in their dynamics, and can be transformed incrementally in expeditious ways. We address, in work reported and surveyed here, the emergence and evolution of cooperation in the collective realm. We discuss how our own research with Evolutionary Game Theory (EGT) modelling and experimentation leads to important insights for machine ethics, such as the design of moral machines, multi-agent systems, and contractual algorithms, plus their potential application in human settings too.” (Abstract) Springer VS published the “Handbuch Maschinenethik” in October 2019. Since then it has been downloaded thousands of times.
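
What EGT modelling means in practice can be hinted at with a minimal sketch – a generic textbook example, not Pereira and Han’s actual models. The replicator equation below tracks the share of cooperators in a population playing a prisoner’s dilemma; in this plain setting cooperation collapses, which is precisely why mechanisms such as the contractual algorithms mentioned in the abstract are studied.

```python
# Illustrative sketch of Evolutionary Game Theory (EGT) replicator dynamics for a
# prisoner's dilemma -- a generic textbook example, not the authors' actual models.

# Payoffs: R = mutual cooperation, S = sucker, T = temptation, P = mutual defection
R, S, T, P = 3.0, 0.0, 5.0, 1.0

def step(x: float, dt: float = 0.01) -> float:
    """One Euler step of the replicator equation for the cooperator share x."""
    f_coop   = x * R + (1 - x) * S          # expected payoff of a cooperator
    f_defect = x * T + (1 - x) * P          # expected payoff of a defector
    f_mean   = x * f_coop + (1 - x) * f_defect
    return x + dt * x * (f_coop - f_mean)   # cooperators grow iff they beat the mean

x = 0.9                                     # start with 90 % cooperators
for t in range(2000):
    x = step(x)
print(f"cooperator share after 2000 steps: {x:.3f}")  # close to 0: cooperation collapses
```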

New Journal on AI and Ethics

Springer launches a new journal entitled “AI and Ethics”. This topic has been researched for several years from various perspectives, including information ethics, robot ethics (aka roboethics) and machine ethics. From the description: “AI and Ethics seeks to promote informed debate and discussion of the ethical, regulatory, and policy implications that arise from the development of AI. It will focus on how AI techniques, tools, and technologies are developing, including consideration of where these developments may lead in the future. The journal will provide opportunities for academics, scientists, practitioners, policy makers, and the public to consider how AI might affect our lives in the future, and what implications, benefits, and risks might emerge. Attention will be given to the potential intentional and unintentional misuses of the research and technology presented in articles we publish. Examples of harmful consequences include weaponization, bias in face recognition systems, and discrimination and unfairness with respect to race and gender.” (Description)

AI in Medical Robotics

The Emmy Noether Research Group “The Phenomenon of Interaction in Human-Machine Interaction” and the Institute of Ethics, History, and Theory of Medicine (LMU Munich) host a lecture series “on some of the pressing issues arising in the context of implementing and using AI in medicine”. “Each date will consist of three short talks by renowned experts in the respective fields followed by a roundtable discussion. All lectures are held online (Zoom) until further notice.” (Website The Philosophy of Human-Machine Interaction) On 19 November 2020 (18.00–19.30) the topic will be “AI in Medical Robotics”. Speakers will be Prof. Dr. Oliver Bendel (University of Applied Sciences and Arts Northwestern Switzerland), Prof. Dr. Manfred Hild (Beuth University of Applied Sciences Berlin) and Dr. Janina Loh (University of Vienna). The presentation language is German. More information via interactionphilosophy.wordpress.com.

AI in the Art of Film

“Agence” by Transitional Forms (Toronto) is the first example of a film that uses reinforcement learning to control its animated characters. MIT Technology Review explains this in an article published on October 2, 2020. “Agence was debuted at the Venice International Film Festival last month and was released this week to watch/play via Steam, an online video-game platform. The basic plot revolves around a group of creatures and their appetite for a mysterious plant that appears on their planet. Can they control their desire, or will they destabilize the planet and get tipped to their doom? Survivors ascend to another world.” (MIT Technology Review, 2 October 2020) The film could be another example of how art and artificial intelligence belong together. Its director takes a similar view: “I am super passionate about artificial intelligence because I believe that AI and movies belong together …” (MIT Technology Review, 2 October 2020). Whether audiences will share this enthusiasm, in this case and in other areas, remains to be seen.
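
For readers wondering what “reinforcement learning to control its animated characters” amounts to, the following toy Q-learning loop gives a rough impression. It borrows the film’s premise (eat the tempting plant or resist and keep the planet stable) but is a hypothetical illustration, not the system built by Transitional Forms.

```python
import random

# Toy Q-learning sketch of an RL-controlled creature, loosely inspired by the premise
# of "Agence" (eat the plant vs. resist) -- not the film's actual system.

ACTIONS = ["eat", "resist"]
Q = {(s, a): 0.0 for s in range(4) for a in ACTIONS}   # state = planet instability 0..3
alpha, gamma, eps = 0.1, 0.95, 0.1

def env_step(state, action):
    """Return (next_state, reward, done). Eating pays off now but destabilizes the planet."""
    if action == "eat":
        if state + 1 > 3:
            return state, -10.0, True                  # planet tips over: episode ends
        return state + 1, 2.0, False
    return max(state - 1, 0), 0.5, False               # resisting slowly stabilizes things

for episode in range(500):
    state, done = 0, False
    for _ in range(20):                                # cap episode length
        # epsilon-greedy action selection
        action = random.choice(ACTIONS) if random.random() < eps else \
                 max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = env_step(state, action)
        best_next = 0.0 if done else max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state
        if done:
            break

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(4)})  # learned policy per state
```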

A Spider that Reads the Whole Web

Diffbot, a Stanford startup, is building an AI-based spider that reads as many pages as possible on the entire public web, and extracts as many facts from those pages as it can. “Like GPT-3, Diffbot’s system learns by vacuuming up vast amounts of human-written text found online. But instead of using that data to train a language model, Diffbot turns what it reads into a series of three-part factoids that relate one thing to another: subject, verb, object.” (MIT Technology Review, 4 September 2020) Knowledge graphs – which is what this is all about – have been around for a long time. However, they have mostly been created manually or only for certain domains. Some years ago, Google started using knowledge graphs too. Instead of giving us a list of links to pages about Spider-Man, the service gives us a set of facts about him drawn from its knowledge graph. But it only does this for its most popular search terms. According to MIT Technology Review, the startup wants to do it for everything. “By fully automating the construction process, Diffbot has been able to build what may be the largest knowledge graph ever.” (MIT Technology Review, 4 September 2020) Diffbot’s AI-based spider reads the web as we read it and sees the same facts that we see. Even if it does not really understand what it sees, the results are likely to amaze us.
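
To give a rough impression of what such subject–verb–object extraction involves, here is a short sketch using the open-source spaCy library – a generic illustration of the idea, not Diffbot’s proprietary pipeline.

```python
import spacy

# Generic subject-verb-object triple extraction with spaCy -- an illustration of the
# idea behind Diffbot's three-part factoids, not its actual extraction pipeline.
nlp = spacy.load("en_core_web_sm")

def extract_triples(text: str):
    """Yield (subject, verb, object) triples from simple declarative sentences."""
    doc = nlp(text)
    for token in doc:
        if token.pos_ == "VERB":
            subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.children if c.dep_ in ("dobj", "attr")]
            for subj in subjects:
                for obj in objects:
                    yield (subj.text, token.lemma_, obj.text)

print(list(extract_triples("The spider reads the public web.")))
# e.g. [('spider', 'read', 'web')]
```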

Machine Dance

Which moves go with which song? Should I do the Floss, the Dougie or the Robot? Or should I create a new style? But which one? An AI system could help answer these questions in the future. At least an announcement by Facebook raises this hope: “Facebook AI researchers have developed a system that enables a machine to generate a dance for any input music. It’s not just imitating human dance movements; it’s creating completely original, highly creative routines. That’s because it uses finely tuned search procedures to stay synchronized and surprising, the two main criteria of a creative dance. Human evaluators say that the AI’s dances are more creative and inspiring than meaningful baselines.” (Website FB) The AI system could inspire dancers when they get stuck and help them to constantly improve. More information via about.fb.com/news/2020/08/ai-dancing-facebook-research/.
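
What “synchronized and surprising” could mean in computational terms can be sketched with a toy search over a small move vocabulary: each candidate move is scored by how well it lands on the beat and by how much it differs from what was just danced. This is a hypothetical illustration, not Facebook AI’s actual system.

```python
import random

# Toy sketch of picking dance moves that balance beat synchronisation and surprise --
# a hypothetical illustration of the two criteria, not Facebook AI's actual system.

MOVES = {"floss": 2, "dougie": 4, "robot": 1, "wave": 3}   # beats each move spans

def choose_move(beat: int, history: list, w_sync=1.0, w_surprise=1.5) -> str:
    """Greedy search: score = landing on a downbeat + novelty relative to recent moves."""
    def score(move):
        sync = 1.0 if (beat + MOVES[move]) % 4 == 0 else 0.0    # does the move end on a downbeat?
        surprise = 1.0 - history[-4:].count(move) / 4            # penalize recent repetition
        return w_sync * sync + w_surprise * surprise + random.random() * 0.01
    return max(MOVES, key=score)

history, beat = [], 0
for _ in range(8):
    move = choose_move(beat, history)
    history.append(move)
    beat = (beat + MOVES[move]) % 4
print(history)   # a short routine that stays on the beat without repeating itself endlessly
```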

Show Me Your Hands

Fujitsu has developed an artificial intelligence system that could ensure healthcare, hotel and food industry workers scrub their hands properly. This could support the fight against the COVID-19 pandemic. “The AI, which can recognize complex hand movements and can even detect when people aren’t using soap, was under development before the coronavirus outbreak for Japanese companies implementing stricter hygiene regulations … It is based on crime surveillance technology that can detect suspicious body movements.” (Reuters, 19 June 2020) Genta Suzuki, a senior researcher at the Japanese information technology company, told the news agency that the AI can’t identify people from their hands, but it could be coupled with identity recognition technology so companies could keep track of employees’ washing habits. Maybe in the future it won’t be our parents who show us how to wash ourselves properly, but robots and AI systems. Or perhaps they will skip this detour and simply wash us themselves.

Billions of Trees, Planted by Drones

Flash Forest, a Canadian start-up, plans to plant 40,000 trees in the north of Toronto within a few days. It uses drones, i.e. technology that also plays a role in detecting and fighting forest fires. By 2028, it aims to have planted a full 1 billion trees. “The company, like a handful of other startups that are also using tree-planting drones, believes that technology can help the world reach ambitious goals to restore forests to stem biodiversity loss and fight climate change. The Intergovernmental Panel on Climate Change says that it’s necessary to plant 1 billion hectares of trees – a forest roughly the size of the entire United States – to limit global warming to 1.5 degrees Celsius.” (Fast Company, 15 May 2020) It is without doubt a good idea to use drones for planting. But it should be remembered that unmanned aerial vehicles (UAVs) of this type have a poor energy balance. Above all, however, birds and other creatures must not be frightened away or hurt (see, e.g., this article). In this context, insights from animal-machine interaction and machine ethics can be used.

AI as a Secret Weapon Against COVID-19?

Artificial intelligence is underestimated in some aspects, but overestimated in many. It is currently seen as a secret weapon against COVID-19. But it most probably is not. The statement of Alex Engler, a David M. Rubenstein Fellow, is clear: “Although corporate press releases and some media coverage sing its praises, AI will play only a marginal role in our fight against Covid-19. While there are undoubtedly ways in which it will be helpful – and even more so in future pandemics – at the current moment, technologies like data reporting, telemedicine, and conventional diagnostic tools are far more impactful.” (Wired, 26 April 2020) Above all, however, it is social distancing that interrupts the transmission paths and thus curbs the spread of the virus. And it is drugs that will solve the problem this year or next. So there is a need for behavioural adjustment and medical research; artificial intelligence is not really needed. Alex Engler has identified heuristics for a healthy skepticism of AI claims around COVID-19 and explained them in Wired magazine.