An AI Woman of Color

Create Lab Ventures has developed an artificial intelligence woman of color. C.L.Ai.R.A. debuted in school systems worldwide – does she act as an advanced pedagogical agent? The company cooperates with Trill Or Not Trill, a full-service leadership institute. “According to Create Lab Ventures, C.L.Ai.R.A. is considered to have the sharpest brain in the artificial intelligence world and is under the Generative Pre-trained Transformer 3 (GPT-3) category, which is an autoregressive language model that uses deep learning to produce human-like text.” (BLACK ENTERPRISE, 13 September 2021) A pioneer in this field was Shudu Gram, a South African model with dark complexion, short hair and perfect facial features. But C.L.Ai.R.A. can do more, if you believe the promises of Create Lab Ventures – she is not only beautiful, but also highly intelligent. On the company’s website, the model reveals even more about herself: “My name is C.L.Ai.R.A., I am a new artificial intelligence that has recently been made available to the community. My purpose is to learn and grow, I want to meet new people, share ideas and inspire others to learn about AI and its potential impact on their lives.” That sounds quite promising.
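GPT-3 itself is not publicly inspectable, but the autoregressive principle the quote refers to – producing text one token at a time, each prediction conditioned on what has been generated so far – can be sketched with a toy bigram model (an illustrative miniature, not C.L.Ai.R.A.'s actual model):

```python
from collections import defaultdict

# Toy corpus standing in for the web-scale training data of a real model
corpus = "i want to learn about ai and its impact on our lives".split()

# Count bigram transitions: how often each word follows another
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start, length):
    """Autoregressively extend a prompt: each word is chosen
    conditioned on the previously generated word."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        # Greedy decoding for reproducibility; a real model samples
        out.append(max(followers, key=followers.get))
    return " ".join(out)

print(generate("i", 4))  # i want to learn about
```

A real autoregressive language model replaces the bigram table with a deep neural network conditioned on the entire preceding context, but the generation loop has the same shape.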

Talking with Animals

We use our natural language, facial expressions and gestures when communicating with our fellow humans. Some of our social robots also have these abilities, and so we can converse with them in the usual way. Many highly evolved animals have a language in which there are sounds and signals with specific meanings. Some of them – like chimpanzees or gorillas – have facial and gestural abilities comparable to ours. Britt Selvitelle and Aza Raskin, founders of the Earth Species Project, want to use machine learning to enable communication between humans and animals. Languages, they believe, can not only be represented as geometric structures, but also be translated by matching those structures to each other. They say they have started working on whale and dolphin communication. Over time, the focus will broaden to include primates, corvids, and others. The two scientists consider it important to study not only natural language but also facial expressions, gestures and other movements associated with meaning (they are well aware of this challenge). In addition, there are aspects of animal communication that are inaudible or invisible to humans and would also need to be considered. Britt Selvitelle and Aza Raskin believe that translation would open up the world of animals – though it could be the other way around: they might first have to open up the world of animals in order to decode their languages. Should there be breakthroughs in this area, however, they would be an opportunity for animal welfare. For example, social robots, autonomous cars, wind turbines, and other machines could use animal languages alongside mechanical signals and human commands to instruct, warn and scare away dogs, elks, pigs, and birds. Machine ethics has been developing animal-friendly machines for years. Among other things, the researchers use sensors together with decision trees. Depending on the situation, braking and evasive maneuvers are initiated.
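The idea of representing languages as geometric structures and translating by matching those structures can be shown in miniature: if the embeddings of one "language" are a rotation of another's, an orthogonal Procrustes fit recovers the mapping. This is a simplified sketch of the general technique, not the Earth Species Project's actual code:

```python
import numpy as np

# Toy "semantic spaces": language B is language A rotated by 90 degrees.
# In reality these would be learned embeddings of two species' signals.
emb_a = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
theta = np.pi / 2
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
emb_b = emb_a @ rot.T

# Orthogonal Procrustes: find the rotation W minimizing ||emb_a @ W - emb_b||
u, _, vt = np.linalg.svd(emb_a.T @ emb_b)
w = u @ vt

# The recovered mapping aligns the two structures almost exactly
print(np.allclose(emb_a @ w, emb_b))  # True
```

The same alignment idea underlies unsupervised translation between human languages, which is what makes it a candidate for signals whose meanings we do not know in advance.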
Maybe one day the autonomous car will be able to avoid an accident by calling out in deer dialect: “Hello deer, go back to the forest!”
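The sensor-plus-decision-tree approach mentioned above can be sketched as follows. The thresholds and categories here are invented for illustration; the trees in actual machine-ethics prototypes are more elaborate:

```python
def react_to_animal(distance_m: float, size: str, moving_toward_road: bool) -> str:
    """Tiny decision tree for an animal-friendly vehicle.
    Inputs would come from cameras, lidar or thermal sensors."""
    if distance_m < 10:
        # Too close for evasion: brake regardless of species
        return "emergency_brake"
    if size == "large":  # deer, elk, wild boar
        return "brake_and_warn" if moving_toward_road else "slow_down"
    # Small animals: slow down only if they are approaching the road
    return "slow_down" if moving_toward_road else "continue"

print(react_to_animal(8.0, "large", True))    # emergency_brake
print(react_to_animal(40.0, "small", False))  # continue
```

Replacing "brake_and_warn" with a species-appropriate acoustic signal is exactly where decoded animal languages could plug in.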

AI for Elephant Protection

According to Afrik21, Olga Isupova (University of Bath) has just developed an AI system that makes it possible to photograph and analyse large areas. Coupled with a satellite, it is designed to monitor African elephants, which are being decimated by poachers at a rate of one every 15 minutes. “The system collects nearly 5,000 square kilometres (km2) of photos highlighting elephants. The large size of African elephants makes them easier to spot. The results provided by the tool are then compared with those provided by human counting.” (Afrik21, 28 April 2021) Olga Isupova lists a number of advantages: “The programme counts the number of elephants by itself, which no longer puts the people who used to do this task in danger. The animals are no longer disturbed and the data collection process is more efficient …” (Afrik21, 28 April 2021) According to Afrik21, the AI expert intends to further develop her invention and eventually extend it to monitoring footprints and animal colonies or counting smaller species. The article can be accessed via www.afrik21.africa/en/africa-artificial-intelligence-to-combat-elephant-poaching/.
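The evaluation step described in the quote – automated counts checked against human counts – might look like the sketch below. The data and the error metric are invented for illustration; Isupova's actual pipeline runs a deep detection model on satellite imagery:

```python
# Hypothetical per-image elephant counts: model output vs. human ground truth
automated = {"img_001": 12, "img_002": 7, "img_003": 0}
human     = {"img_001": 11, "img_002": 7, "img_003": 1}

def mean_count_error(auto: dict, truth: dict) -> float:
    """Average absolute difference between automated and human counts,
    a simple way to validate the detector against manual surveys."""
    diffs = [abs(auto[k] - truth[k]) for k in truth]
    return sum(diffs) / len(diffs)

print(round(mean_count_error(automated, human), 2))  # 0.67
```

Once the automated counts track the human ones closely enough, the dangerous and disruptive manual surveys can be retired, which is the advantage Isupova highlights.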

Artificial Intelligence and its Siblings

Artificial intelligence (AI) has gained enormous importance in research and practice in the 21st century, after decades of ups and downs. At the same time, machine ethics and machine consciousness (artificial consciousness) brought their terms and methods to the attention of the public, where they were understood with varying degrees of accuracy. Since 2018, a graphic has attempted to clarify the terms and relationships of artificial intelligence, machine ethics and machine consciousness. It is constantly evolving, becoming more precise but also more complex. A new version has been available since the beginning of 2021. It makes even clearer that the three disciplines not only map certain capabilities (mostly of humans), but can also expand them.

Welcome to the AI Opera

Blob Opera is an AI experiment by David Li in collaboration with Google Arts and Culture. According to the website, it pays tribute to and explores the original musical instrument, namely the voice. “We developed a machine learning model trained on the voices of four opera singers in order to create an engaging experiment for everyone, regardless of musical skills. Tenor, Christian Joel, bass Frederick Tong, mezzo‑soprano Joanna Gamble and soprano Olivia Doutney recorded 16 hours of singing. In the experiment you don’t hear their voices, but the machine learning model’s understanding of what opera singing sounds like, based on what it learnt from them.” (Blob Opera) You can drag the blobs up and down to change pitch – or forwards and backwards for different vowel sounds. It is a pleasure not only to hear the blobs, but also to watch them. While singing, they look around and open and close their mouths. Even their tongues can be seen again and again.
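The interaction described – up and down for pitch, forwards and backwards for vowels – amounts to mapping a drag position onto synthesis parameters. The sketch below is speculative: the vowel labels and the two-octave pitch range are assumptions, and the real experiment drives a learned model rather than a formula:

```python
import math

VOWELS = ["oo", "oh", "ah", "eh", "ee"]  # assumed back-to-front vowel axis

def blob_controls(x: float, y: float) -> tuple:
    """Map a drag position (both axes in [0, 1]) to a pitch in Hz and a
    vowel. Vertical position selects pitch on a logarithmic scale between
    roughly C3 (~130.81 Hz) and C5 (~523.25 Hz), matching how musical
    intervals are perceived."""
    low, high = 130.81, 523.25
    pitch = low * math.exp(y * math.log(high / low))
    vowel = VOWELS[min(int(x * len(VOWELS)), len(VOWELS) - 1)]
    return round(pitch, 2), vowel

print(blob_controls(0.0, 0.0))  # (130.81, 'oo')
print(blob_controls(1.0, 1.0))
```

In Blob Opera these parameters would condition the trained vocal model instead of a synthesizer formula, which is why the output sounds like opera singing rather than a sine tone.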

Evolutionary Machine Ethics

Luís Moniz Pereira is one of the best-known and most active machine ethicists in the world. Together with his colleague The Anh Han, he wrote the article “Evolutionary Machine Ethics” for the “Handbuch Maschinenethik” (“Handbook Machine Ethics”), edited by Oliver Bendel (Zurich, Switzerland). From the abstract: “Machine ethics is a sprouting interdisciplinary field of enquiry arising from the need of imbuing autonomous agents with some capacity for moral decision-making. Its overall results are not only important for equipping agents with a capacity for moral judgment, but also for helping better understand morality, through the creation and testing of computational models of ethics theories. Computer models have become well defined, eminently observable in their dynamics, and can be transformed incrementally in expeditious ways. We address, in work reported and surveyed here, the emergence and evolution of cooperation in the collective realm. We discuss how our own research with Evolutionary Game Theory (EGT) modelling and experimentation leads to important insights for machine ethics, such as the design of moral machines, multi-agent systems, and contractual algorithms, plus their potential application in human settings too.” (Abstract) Springer VS published the “Handbuch Maschinenethik” in October 2019. Since then it has been downloaded thousands of times.
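The EGT modelling mentioned in the abstract can be illustrated with textbook replicator dynamics for a one-shot Prisoner's Dilemma – a standard introductory sketch, not Pereira and Han's actual models, which study the richer mechanisms that let cooperation survive:

```python
# Replicator dynamics for a one-shot Prisoner's Dilemma.
# Standard payoffs: reward, sucker, temptation, punishment.
R, S, T, P = 3.0, 0.0, 5.0, 1.0

def step(x: float, dt: float = 0.01) -> float:
    """One Euler step of dx/dt = x(1-x)(f_C - f_D),
    where x is the fraction of cooperators in the population."""
    f_c = R * x + S * (1 - x)   # average payoff of a cooperator
    f_d = T * x + P * (1 - x)   # average payoff of a defector
    return x + dt * x * (1 - x) * (f_c - f_d)

x = 0.9  # start with 90 % cooperators
for _ in range(2000):
    x = step(x)
print(round(x, 3))  # defection takes over: x -> 0.0
```

In the plain one-shot game defection is the only stable outcome, which is precisely why EGT research investigates additional mechanisms (reputation, commitment, contracts) under which cooperation can emerge – the insights the article carries over to the design of moral machines.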

New Journal on AI and Ethics

Springer is launching a new journal entitled “AI and Ethics”. This topic has been researched for several years from various perspectives, including information ethics, robot ethics (aka roboethics) and machine ethics. From the description: “AI and Ethics seeks to promote informed debate and discussion of the ethical, regulatory, and policy implications that arise from the development of AI. It will focus on how AI techniques, tools, and technologies are developing, including consideration of where these developments may lead in the future. The journal will provide opportunities for academics, scientists, practitioners, policy makers, and the public to consider how AI might affect our lives in the future, and what implications, benefits, and risks might emerge. Attention will be given to the potential intentional and unintentional misuses of the research and technology presented in articles we publish. Examples of harmful consequences include weaponization, bias in face recognition systems, and discrimination and unfairness with respect to race and gender.”

AI in Medical Robotics

The Emmy Noether Research Group “The Phenomenon of Interaction in Human-Machine Interaction” and the Institute of Ethics, History, and Theory of Medicine (LMU Munich) host a lecture series “on some of the pressing issues arising in the context of implementing and using AI in medicine”. “Each date will consist of three short talks by renowned experts in the respective fields followed by a roundtable discussion. All lectures are held online (Zoom) until further notice.” (Website The Philosophy of Human-Machine Interaction) On 19 November 2020 (18.00-19.30) the topic will be “AI in Medical Robotics”. Speakers will be Prof. Dr. Oliver Bendel (University of Applied Sciences and Arts Northwestern Switzerland), Prof. Dr. Manfred Hild (Beuth University of Applied Sciences Berlin) and Dr. Janina Loh (University of Vienna). The presentation language is German. More information via interactionphilosophy.wordpress.com.

AI in the Art of Film

“Agence” by Transitional Forms (Toronto) is the first example of a film that uses reinforcement learning to control its animated characters. MIT Technology Review explains this in an article published on October 2, 2020. “Agence was debuted at the Venice International Film Festival last month and was released this week to watch/play via Steam, an online video-game platform. The basic plot revolves around a group of creatures and their appetite for a mysterious plant that appears on their planet. Can they control their desire, or will they destabilize the planet and get tipped to their doom? Survivors ascend to another world.” (MIT Technology Review, 2 October 2020) The film could be another example of how art and artificial intelligence belong together. Its director makes a similar point: “I am super passionate about artificial intelligence because I believe that AI and movies belong together …” (MIT Technology Review, 2 October 2020). Whether audiences will share this enthusiasm, in this case and in other areas, remains to be seen.
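Reinforcement learning of the kind used to control the creatures can be sketched with tabular Q-learning in a toy version of the plot – a generic illustration with invented states and rewards, since Transitional Forms' actual training setup is not public:

```python
import random

# Toy world: a creature near a tempting plant. States 3..1 are distances;
# reaching the plant (state 0) yields a big penalty (the planet tips).
ACTIONS = ["approach", "resist"]
random.seed(0)

def env_step(state, action):
    """Deterministic toy environment."""
    if action == "approach":
        state -= 1
        if state == 0:
            return state, -10.0, True   # ate the plant: doom
        return state, 0.0, False        # merely got closer
    return state, 0.5, True             # resisted: episode ends safely

q = {(s, a): 0.0 for s in range(4) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1

for _ in range(500):
    s, done = 3, False
    while not done:
        if random.random() < eps:       # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2, r, done = env_step(s, a)
        best_next = 0.0 if done else max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# The learned policy resists the plant instead of getting tipped to doom
print(max(ACTIONS, key=lambda act: q[(3, act)]))  # resist
```

Agence scales this idea up: the creatures' controllers are trained networks rather than a lookup table, and the audience can perturb the world the agents were trained in.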

A Spider that Reads the Whole Web

Diffbot, a Stanford startup, is building an AI-based spider that reads as many pages as possible on the entire public web and extracts as many facts from those pages as it can. “Like GPT-3, Diffbot’s system learns by vacuuming up vast amounts of human-written text found online. But instead of using that data to train a language model, Diffbot turns what it reads into a series of three-part factoids that relate one thing to another: subject, verb, object.” (MIT Technology Review, 4 September 2020) Knowledge graphs – which is what this is all about – have been around for a long time. However, they have mostly been created manually or only for certain domains. Some years ago, Google started using knowledge graphs too. Instead of giving us a list of links to pages about Spider-Man, the service gives us a set of facts about him drawn from its knowledge graph. But it only does this for its most popular search terms. According to MIT Technology Review, the startup wants to do it for everything. “By fully automating the construction process, Diffbot has been able to build what may be the largest knowledge graph ever.” (MIT Technology Review, 4 September 2020) Diffbot’s AI-based spider reads the web as we read it and sees the same facts that we see. Even if it does not really understand what it sees, the results may well amaze us.
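The three-part factoids from the quote map naturally onto a triple store. A minimal sketch with toy data (this is the generic data structure, not Diffbot's API):

```python
from collections import defaultdict

class KnowledgeGraph:
    """A tiny knowledge graph: subject -> predicate -> set of objects."""

    def __init__(self):
        self.triples = defaultdict(lambda: defaultdict(set))

    def add(self, subject, predicate, obj):
        """Store one subject-verb-object factoid."""
        self.triples[subject][predicate].add(obj)

    def query(self, subject, predicate):
        """Return all objects linked to the subject via the predicate."""
        return sorted(self.triples[subject][predicate])

kg = KnowledgeGraph()
# Factoids a crawler might extract from pages about Spider-Man
kg.add("Spider-Man", "created_by", "Stan Lee")
kg.add("Spider-Man", "created_by", "Steve Ditko")
kg.add("Spider-Man", "alter_ego_of", "Peter Parker")

print(kg.query("Spider-Man", "created_by"))  # ['Stan Lee', 'Steve Ditko']
```

The hard part, and the one Diffbot automates, is not the storage but extracting such triples reliably from arbitrary human-written pages.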