Artificial intelligence (AI) has gained enormous importance in research and practice in the 21st century after decades of ups and downs. Machine ethics and machine consciousness (artificial consciousness) brought their terms and methods to public attention at the same time, where they were more or less well understood. Since 2018, a graphic has attempted to clarify the terms and relationships of artificial intelligence, machine ethics and machine consciousness. It is constantly evolving, becoming more precise but also more complex. A new version has been available since the beginning of 2021. It makes even clearer that the three disciplines not only map certain capabilities (mostly human ones), but can also expand them.
Blob Opera is an AI experiment by David Li in collaboration with Google Arts and Culture. According to the website, it pays tribute to and explores the original musical instrument, namely the voice. “We developed a machine learning model trained on the voices of four opera singers in order to create an engaging experiment for everyone, regardless of musical skills. Tenor, Christian Joel, bass Frederick Tong, mezzo‑soprano Joanna Gamble and soprano Olivia Doutney recorded 16 hours of singing. In the experiment you don’t hear their voices, but the machine learning model’s understanding of what opera singing sounds like, based on what it learnt from them.” (Blob Opera) You can drag the blobs up and down to change pitch – or forwards and backwards for different vowel sounds. The blobs are a pleasure not only to hear but also to watch: while singing, they look around and open and close their mouths, and from time to time even their tongues can be seen.
Luís Moniz Pereira is one of the best known and most active machine ethicists in the world. Together with his colleague The Anh Han he wrote the article “Evolutionary Machine Ethics” for the “Handbuch Maschinenethik” (“Handbook Machine Ethics”). The editor is Oliver Bendel (Zurich, Switzerland). From the abstract: “Machine ethics is a sprouting interdisciplinary field of enquiry arising from the need of imbuing autonomous agents with some capacity for moral decision-making. Its overall results are not only important for equipping agents with a capacity for moral judgment, but also for helping better understand morality, through the creation and testing of computational models of ethics theories. Computer models have become well defined, eminently observable in their dynamics, and can be transformed incrementally in expeditious ways. We address, in work reported and surveyed here, the emergence and evolution of cooperation in the collective realm. We discuss how our own research with Evolutionary Game Theory (EGT) modelling and experimentation leads to important insights for machine ethics, such as the design of moral machines, multi-agent systems, and contractual algorithms, plus their potential application in human settings too.” (Abstract) Springer VS published the “Handbuch Maschinenethik” in October 2019. Since then it has been downloaded thousands of times.
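To give a flavour of the kind of EGT modelling the abstract mentions, here is a minimal sketch of replicator dynamics for a one-shot Prisoner’s Dilemma. This is a textbook illustration, not Pereira and Han’s actual model; the payoff values and function names are chosen for the example.

```python
# Replicator dynamics for a one-shot Prisoner's Dilemma in an infinite,
# well-mixed population. x is the fraction of cooperators.
# Illustrative payoffs: R=3 (mutual cooperation), T=5 (temptation),
# S=0 (sucker's payoff), P=1 (mutual defection).

def replicator_trajectory(x0, steps=1000, dt=0.01, R=3, T=5, S=0, P=1):
    """Iterate dx/dt = x(1-x)(f_C - f_D) with explicit Euler steps."""
    x = x0
    history = [x]
    for _ in range(steps):
        f_c = x * R + (1 - x) * S   # expected payoff of a cooperator
        f_d = x * T + (1 - x) * P   # expected payoff of a defector
        x += dt * x * (1 - x) * (f_c - f_d)
        history.append(x)
    return history

traj = replicator_trajectory(0.9)
print(f"cooperator share: start {traj[0]:.2f}, end {traj[-1]:.4f}")
```

Because defection strictly dominates in this baseline game, cooperation dies out; research like that surveyed in the chapter studies which additional mechanisms (e.g. commitment, reputation) let cooperation emerge instead.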
Springer launches a new journal entitled “AI and Ethics”. This topic has been researched for several years from various perspectives, including information ethics, robot ethics (aka roboethics) and machine ethics. From the description: “AI and Ethics seeks to promote informed debate and discussion of the ethical, regulatory, and policy implications that arise from the development of AI. It will focus on how AI techniques, tools, and technologies are developing, including consideration of where these developments may lead in the future. The journal will provide opportunities for academics, scientists, practitioners, policy makers, and the public to consider how AI might affect our lives in the future, and what implications, benefits, and risks might emerge. Attention will be given to the potential intentional and unintentional misuses of the research and technology presented in articles we publish. Examples of harmful consequences include weaponization, bias in face recognition systems, and discrimination and unfairness with respect to race and gender.” (Springer)
The Emmy Noether Research Group “The Phenomenon of Interaction in Human-Machine Interaction” and the Institute of Ethics, History, and Theory of Medicine (LMU Munich) host a lecture series “on some of the pressing issues arising in the context of implementing and using AI in medicine”. “Each date will consist of three short talks by renowned experts in the respective fields followed by a roundtable discussion. All lectures are held online (Zoom) until further notice.” (Website The Philosophy of Human-Machine Interaction) On 19 November 2020 (18.00-19.30) the topic will be “AI in Medical Robotics”. Speakers will be Prof. Dr. Oliver Bendel (University of Applied Sciences and Arts Northwestern Switzerland), Prof. Dr. Manfred Hild (Beuth University of Applied Sciences Berlin) and Dr. Janina Loh (University of Vienna). The presentation language is German. More information via interactionphilosophy.wordpress.com.
“Agence” by Transitional Forms (Toronto) is the first example of a film that uses reinforcement learning to control its animated characters. MIT Technology Review explains this in an article published on October 2, 2020. “Agence was debuted at the Venice International Film Festival last month and was released this week to watch/play via Steam, an online video-game platform. The basic plot revolves around a group of creatures and their appetite for a mysterious plant that appears on their planet. Can they control their desire, or will they destabilize the planet and get tipped to their doom? Survivors ascend to another world.” (MIT Technology Review, 2 October 2020) The film could be another example of how art and artificial intelligence belong together. Its director points in the same direction: “I am super passionate about artificial intelligence because I believe that AI and movies belong together …” (MIT Technology Review, 2 October 2020). Whether audiences will share this enthusiasm, in this case and in other areas, remains to be seen.
Diffbot, a Stanford startup, is building an AI-based spider that reads as many pages as possible on the entire public web and extracts as many facts from those pages as it can. “Like GPT-3, Diffbot’s system learns by vacuuming up vast amounts of human-written text found online. But instead of using that data to train a language model, Diffbot turns what it reads into a series of three-part factoids that relate one thing to another: subject, verb, object.” (MIT Technology Review, 4 September 2020) Knowledge graphs – which is what this is all about – have been around for a long time. However, they have mostly been built manually or only for certain domains. Some years ago, Google started using knowledge graphs too. Instead of giving us a list of links to pages about Spider-Man, the service gives us a set of facts about him drawn from its knowledge graph. But it only does this for its most popular search terms. According to MIT Technology Review, the startup wants to do it for everything. “By fully automating the construction process, Diffbot has been able to build what may be the largest knowledge graph ever.” (MIT Technology Review, 4 September 2020) Diffbot’s AI-based spider reads the web as we read it and sees the same facts that we see. Even if it does not really understand what it sees, the results may well amaze us.
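The subject-verb-object factoids described above can be pictured as a simple triple store. The following toy sketch shows the idea; the class design and the sample facts about Spider-Man are purely illustrative and not taken from Diffbot’s or Google’s actual systems.

```python
from collections import defaultdict

# A toy knowledge graph built from (subject, verb, object) triples,
# in the spirit of the factoids Diffbot extracts from web pages.
class KnowledgeGraph:
    def __init__(self):
        self.triples = set()
        self.by_subject = defaultdict(set)

    def add(self, subject, verb, obj):
        """Store one factoid and index it by its subject."""
        self.triples.add((subject, verb, obj))
        self.by_subject[subject].add((verb, obj))

    def facts_about(self, subject):
        """Return all (verb, object) pairs known for a subject."""
        return sorted(self.by_subject[subject])

# Illustrative facts, echoing the Spider-Man example in the text.
kg = KnowledgeGraph()
kg.add("Spider-Man", "is", "fictional character")
kg.add("Spider-Man", "created by", "Stan Lee")
kg.add("Spider-Man", "appears in", "Marvel Comics")

print(kg.facts_about("Spider-Man"))
```

Answering a query then amounts to looking up a subject and returning its stored facts, which is roughly what a search engine does when it shows a fact box instead of a list of links.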
IBM will stop developing or selling facial recognition software due to concerns that the technology can be used to support racism. This was reported by MIT Technology Review on 9 June 2020. In a letter to Congress, IBM’s CEO Arvind Krishna wrote: “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.” (Letter to Congress, 8 June 2020) The extraordinary letter “also called for new federal rules to crack down on police misconduct, and more training and education for in-demand skills to improve economic opportunities for people of color” (MIT Technology Review, 9 June 2020). A talk at Stanford University in 2018 warned against the return of physiognomy in connection with face recognition. The paper is available here.
According to Gizmodo, a robot from Boston Dynamics has been deployed to a park in Singapore to remind people they should follow social distancing guidelines during the pandemic. Spot is not designed as a security robot, like the K5 or the K3 from Knightscope. But it has other qualities: it can walk on four legs and is very fast. The machine, which was set loose on 8 May 2020 in Bishan-Ang Mo Kio Park, “broadcasts a message reminding visitors they need to stay away from other humans, as covid-19 poses a very serious threat to our health”. It “was made available for purchase by businesses and governments last year and has specially designed cameras to make sure it doesn’t run into things.” (Gizmodo, 8 May 2020) According to a press release from Singapore’s GovTech agency, the cameras will not be able to track or recognize specific individuals, “and no personal data will be collected” (Gizmodo, 8 May 2020). COVID-19 demonstrates that digitization and technologization can be helpful in crises and disasters. Service robots such as security robots, transport robots, care robots and disinfection robots are in increasing demand.
Artificial intelligence is underestimated in some aspects, but overestimated in many. It is currently seen as a secret weapon against COVID-19. But it most probably is not. The statement of Alex Engler, a David M. Rubenstein Fellow, is clear: “Although corporate press releases and some media coverage sing its praises, AI will play only a marginal role in our fight against Covid-19. While there are undoubtedly ways in which it will be helpful – and even more so in future pandemics – at the current moment, technologies like data reporting, telemedicine, and conventional diagnostic tools are far more impactful.” (Wired, 26 April 2020) Above all, however, it is social distancing that interrupts the transmission paths and thus curbs the spread of the virus. And it is drugs that will solve the problem this year or next. So there is a need for behavioural adjustment and medical research. Artificial intelligence is not really needed. Alex Engler identified the necessary heuristics for a healthy skepticism of AI claims around COVID-19 and explained them in Wired magazine.