GPT-4 was launched by OpenAI on March 14, 2023. “GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks.” (Website OpenAI) On its website, the company explains the multimodal options in more detail: “GPT-4 can accept a prompt of text and images, which – parallel to the text-only setting – lets the user specify any vision or language task. Specifically, it generates text outputs (natural language, code, etc.) given inputs consisting of interspersed text and images.” (Website OpenAI) The example that OpenAI gives is impressive: an image with multiple panels was uploaded together with the prompt “What is funny about this image? Describe it panel by panel”. GPT-4 does exactly that and then concludes: “The humor in this image comes from the absurdity of plugging a large, outdated VGA connector into a small, modern smartphone charging port.” (Website OpenAI) The technical report is available via cdn.openai.com/papers/gpt-4.pdf.
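The “interspersed text and images” input described above can be illustrated with a short sketch of how such a multimodal prompt might be assembled. The message structure follows OpenAI’s chat API content-list format; the model name and image URL below are placeholders, not taken from the original post.

```python
# Sketch: assembling a multimodal prompt of interspersed text and images,
# following the content-list message format of OpenAI's chat API.
# The model name and image URL are placeholders.

def build_multimodal_prompt(question: str, image_url: str) -> dict:
    """Return a chat-completion payload mixing text and image inputs."""
    return {
        "model": "gpt-4-vision-preview",  # placeholder model name
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_multimodal_prompt(
    "What is funny about this image? Describe it panel by panel.",
    "https://example.com/vga-charger.jpg",  # hypothetical image location
)
```

The payload would then be sent to the chat completions endpoint; the model answers in text, as in the VGA-connector example above.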
The @ve Project
On January 19, 2023, the final presentation was held for the @ve project, which had started in September 2022. The chatbot runs on the website www.ave-bot.ch and on Telegram. Like ChatGPT, it is based on GPT-3 from OpenAI (@ve uses GPT-3.0, not GPT-3.5). The project was initiated by Prof. Dr. Oliver Bendel, who wants to devote more time to dead, extinct, and endangered languages. @ve was developed by Karim N’diaye, who studied business informatics at the Hochschule für Wirtschaft FHNW. You can talk to her in Latin, a dead language that in this way comes alive again, and ask her questions about grammar. The chatbot was tested by a relevant expert. One benefit, according to Karim N’diaye, is that you can communicate in Latin around the clock while thinking carefully about what and how to write. One danger, he says, is that the answers repeatedly contain errors: sometimes the word order is incorrect, and in some cases the meaning is distorted. This can also happen with a human teacher, however, and the learner should always be alert and watch for errors. Without a doubt, @ve is a tool that can be profitably integrated into Latin classes. There, students can report what they have experienced with it at home, and they can chat with it on the spot, alone or in a group, accompanied by the teacher. A follow-up project on an endangered language has already been announced (Illustration: Karim N’diaye/Unsplash).
AI for Well-being
As part of the AAAI 2023 Spring Symposia in San Francisco, the symposium “Socially Responsible AI for Well-being” is organized by Takashi Kido (Teikyo University, Japan) and Keiki Takadama (The University of Electro-Communications, Japan). The AAAI website states: “For our happiness, AI is not enough to be productive in exponential growth or economic/financial supremacies but should be socially responsible from the viewpoint of fairness, transparency, accountability, reliability, safety, privacy, and security. For example, AI diagnosis system should provide responsible results (e.g., a high-accuracy of diagnostics result with an understandable explanation) but the results should be socially accepted (e.g., data for AI (machine learning) should not be biased (i.e., the amount of data for learning should be equal among races and/or locations). Like this example, a decision of AI affects our well-being, which suggests the importance of discussing ‘What is socially responsible?’ in several potential situations of well-being in the coming AI age.” (Website AAAI) According to the organizers, the first perspective is “(Individually) Responsible AI”, which aims to clarify what kinds of mechanisms or issues should be taken into consideration to design Responsible AI for well-being. The second perspective is “Socially Responsible AI”, which aims to clarify what kinds of mechanisms or issues should be taken into consideration to implement social aspects in Responsible AI for well-being. More information via www.aaai.org/Symposia/Spring/sss23.php#ss09.
AI-based Q-bear
Why is your baby crying? And what if artificial intelligence (AI) could answer that question for you? “If there was a flat little orb the size of a dessert plate that could tell you exactly what your baby needs in that moment? That’s what Q-bear is trying to do.” (Mashable, January 3, 2023) That is what the tech magazine Mashable wrote in a recent article. At CES 2023, the Taiwanese company qbaby.ai demonstrated its AI-powered tool, which aims to help parents respond to their babies’ needs in a more targeted way. “The soft silicone-covered device, which can be fitted in a crib or stroller, uses Q-bear’s patented tech to analyze a baby’s cries to determine one of four needs from its ‘discomfort index’: hunger, a dirty diaper, sleepiness, and need for comfort. Q-bear’s translation comes within 10 seconds of a baby crying, and the company says it will become more accurate the more you use the device.” (Mashable, January 3, 2023) Whether the tool really works remains to be seen – presumably, baby cries can be interpreted more easily than animal languages. Perhaps the use of the tool is ultimately even counterproductive, because parents forget to trust their own intuition. The article “CES 2023: The device that tells you why your baby is crying” can be accessed via mashable.com/article/ces-2023-why-is-my-baby-crying.
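Q-bear’s actual analysis is patented and not public, so the following is a purely hypothetical sketch of the idea: a classifier that maps hand-crafted acoustic features of a cry to one of the four needs named in the article. The features and rules are invented for illustration.

```python
# Hypothetical sketch only: Q-bear's real method is patented and undisclosed.
# A toy rule-based classifier mapping invented acoustic features of a cry
# to the four needs named in the article.

def classify_cry(pitch_hz: float, rhythmic: bool, duration_s: float) -> str:
    """Map toy acoustic features to one of four needs (illustrative rules)."""
    if rhythmic and pitch_hz < 400:
        return "hunger"            # rhythmic, lower-pitched crying
    if duration_s > 20 and pitch_hz < 350:
        return "sleepiness"        # long, drawn-out, whiny crying
    if pitch_hz >= 500:
        return "dirty diaper"      # sharp, high-pitched protest
    return "need for comfort"      # default: general fussing

print(classify_cry(pitch_hz=350, rhythmic=True, duration_s=5))  # → hunger
```

A real system would of course learn such a mapping from labeled recordings rather than hard-code it, which matches the company’s claim that the device becomes more accurate with use.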
Proceedings of “How Fair is Fair? Achieving Wellbeing AI”
On November 17, 2022, the proceedings of “How Fair is Fair? Achieving Wellbeing AI” (organizers: Takashi Kido and Keiki Takadama) were published on CEUR-WS. The AAAI 2022 Spring Symposium was held at Stanford University from March 21-23, 2022. There are seven full papers of 6–8 pages in the electronic volume: “Should Social Robots in Retail Manipulate Customers?” by Oliver Bendel and Liliana Margarida Dos Santos Alves (3rd place of the Best Presentation Awards), “The SPACE THEA Project” by Martin Spathelf and Oliver Bendel (2nd place of the Best Presentation Awards), “Monitoring and Maintaining Student Online Classroom Participation Using Cobots, Edge Intelligence, Virtual Reality, and Artificial Ethnographies” by Ana Djuric, Meina Zhu, Weisong Shi, Thomas Palazzolo, and Robert G. Reynolds, “AI Agents for Facilitating Social Interactions and Wellbeing” by Hiro Taiyo Hamada and Ryota Kanai (1st place of the Best Presentation Awards), “Sense and Sensitivity: Knowledge Graphs as Training Data for Processing Cognitive Bias, Context and Information Not Uttered in Spoken Interaction” by Christina Alexandris, “Fairness-aware Naive Bayes Classifier for Data with Multiple Sensitive Features” by Stelios Boulitsakis-Logothetis, and “A Thermal Environment that Promotes Efficient Napping” by Miki Nakai, Tomoyoshi Ashikaga, Takahiro Ohga, and Keiki Takadama. In addition, there are several short papers and extended abstracts. The proceedings can be accessed via ceur-ws.org/Vol-3276/.
Best Presentation Awards at AAAI 2022 Spring Symposium
The AAAI 2022 Spring Symposium “How Fair is Fair? Achieving Wellbeing AI” was held March 21-23, 2022 at Stanford University. In the Best Presentation Awards, Oliver Bendel and Liliana Alves took 3rd place (“Should Social Robots in Retail Manipulate Customers?”), and Martin Spathelf and Oliver Bendel took 2nd place (“The SPACE THEA Project”). In 1st place was Hiro Taiyo Hamada (“AI Agents for Facilitating Social Interactions and Wellbeing”). Oliver Bendel had already won first place at the AAAI 2019 Spring Symposium “Interpretable AI for Well-Being: Understanding Cognitive Bias and Social Embeddedness” with his paper “Are Robot Tax, Basic Income or Basic Property Solutions to the Social Problems of Automation?”, along with two other researchers and their teams. Both symposia – from 2019 and from 2022 – were hosted by Takashi Kido and Keiki Takadama from Japan. They are among the pioneers in the field of Responsible AI.
Achieving Wellbeing AI
The AAAI 2022 Spring Symposium “How Fair is Fair? Achieving Wellbeing AI” will be held March 21-23 at Stanford University. The symposium website states: “What are the ultimate outcomes of artificial intelligence? AI has the incredible potential to improve the quality of human life, but it also presents unintended risks and harms to society. The goal of this symposium is (1) to combine perspectives from the humanities and social sciences with technical approaches to AI and (2) to explore new metrics of success for wellbeing AI, in contrast to ‘productive AI’, which prioritizes economic incentives and values.” (Website “How Fair is Fair”) After two years of pandemic, the AAAI Spring Symposia are once again being held in part locally. However, several organizers have opted to hold them online. “How Fair is Fair” is a hybrid event. On-site speakers include Takashi Kido, Oliver Bendel, Robert Reynolds, Stelios Boulitsakis-Logothetis, and Thomas Goolsby. The complete program is available via sites.google.com/view/hfif-aaai-2022/program.
Ethics of Conversational Agents
The Ethics of Conversational User Interfaces workshop at the ACM CHI 2022 conference “will consolidate ethics-related research of the past and set the agenda for future CUI research on ethics going forward”. “This builds on previous CUI workshops exploring theories and methods, grand challenges and future design perspectives, and collaborative interactions.” (CfP CUI) From the Call for Papers: “In what ways can we advance our research on conversational user interfaces (CUIs) by including considerations on ethics? As CUIs, like Amazon Alexa or chatbots, become commonplace, discussions on how they can be designed in an ethical manner or how they change our views on ethics of technology should be topics we engage with as a community.” (CfP CUI) Paper submission deadline is 24 February 2022. The workshop is scheduled to take place in New Orleans on 21 April 2022. More information is available via www.conversationaluserinterfaces.org/workshops/CHI2022/.
AI and Society
The AAAI Spring Symposia at Stanford University are among the community’s most important get-togethers. The years 2016, 2017, and 2018 were memorable highlights for machine ethics, robot ethics, ethics by design, and AI ethics, with the symposia “Ethical and Moral Considerations in Non-Human Agents” (2016), “Artificial Intelligence for the Social Good” (2017), and “AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents” (2018) … As of 2019, the proceedings are no longer provided directly by the Association for the Advancement of Artificial Intelligence, but by the organizers of each symposium. As of summer 2021, the entire 2018 volume of the conference has been made available free of charge. It can be found via www.aaai.org/Library/Symposia/Spring/ss18.php. It includes contributions by Philip C. Jackson, Mark R. Waser, Barry M. Horowitz, John Licato, Stefania Costantini, Biplav Srivastava, and Oliver Bendel, among others.
AI for Elephant Protection
According to Afrik21, Olga Isupova (University of Bath) has just developed an AI system that makes it possible to photograph and analyse large areas. Coupled with a satellite, it is designed to monitor African elephants, which are being decimated by poachers at the rate of one every 15 minutes. “The system collects nearly 5,000 square kilometres (km2) of photos highlighting elephants. The large size of African elephants makes them easier to spot. The results provided by the tool are then compared with those provided by human counting.” (Afrik21, 28 April 2021) Olga Isupova lists a number of advantages: “The programme counts the number of elephants by itself, which no longer puts the people who used to do this task in danger. The animals are no longer disturbed and the data collection process is more efficient …” (Afrik21, 28 April 2021) According to Afrik21, the AI expert intends to further develop her invention and eventually extend it to monitoring footprints, animal colonies or counting smaller species. The article can be accessed via www.afrik21.africa/en/africa-artificial-intelligence-to-combat-elephant-poaching/.
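The validation step mentioned in the quote, comparing the tool’s counts with human counts, can be sketched as a simple per-tile agreement measure. The function and the counts below are illustrative assumptions, not from the project itself.

```python
# Sketch: validating automated elephant counts against human ground truth,
# as described in the article. The metric and the counts are invented
# illustrative assumptions, not the project's actual evaluation.

def count_agreement(ai_counts: list[int], human_counts: list[int]) -> float:
    """Mean relative agreement between AI and human counts per image tile,
    skipping tiles where the human count is zero."""
    ratios = [
        1 - abs(a - h) / h
        for a, h in zip(ai_counts, human_counts)
        if h > 0
    ]
    return sum(ratios) / len(ratios)

ai_counts = [12, 8, 30, 0]      # hypothetical AI counts per satellite tile
human_counts = [11, 9, 28, 0]   # hypothetical human counts for the same tiles
print(round(count_agreement(ai_counts, human_counts), 3))  # → 0.909
```

In practice, detection metrics such as precision and recall over individual animals would be used as well, but a count-level comparison like this is the most direct analogue of checking the tool against human counting.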