As part of the AAAI 2023 Spring Symposia in San Francisco, the symposium “Socially Responsible AI for Well-being” is organized by Takashi Kido (Teikyo University, Japan) and Keiki Takadama (The University of Electro-Communications, Japan). The AAAI website states: “For our happiness, AI is not enough to be productive in exponential growth or economic/financial supremacies but should be socially responsible from the viewpoint of fairness, transparency, accountability, reliability, safety, privacy, and security. For example, AI diagnosis system should provide responsible results (e.g., a high-accuracy of diagnostics result with an understandable explanation) but the results should be socially accepted (e.g., data for AI (machine learning) should not be biased (i.e., the amount of data for learning should be equal among races and/or locations). Like this example, a decision of AI affects our well-being, which suggests the importance of discussing ‘What is socially responsible?’ in several potential situations of well-being in the coming AI age.” (Website AAAI) According to the organizers, the first perspective is “(Individually) Responsible AI”, which aims to clarify what kinds of mechanisms or issues should be taken into consideration to design Responsible AI for well-being. The second perspective is “Socially Responsible AI”, which aims to clarify what kinds of mechanisms or issues should be taken into consideration to implement social aspects in Responsible AI for well-being. More information via www.aaai.org/Symposia/Spring/sss23.php#ss09.
AI-based Q-bear
Why is your baby crying? And what if artificial intelligence (AI) could answer that question for you? “If there was a flat little orb the size of a dessert plate that could tell you exactly what your baby needs in that moment? That’s what Q-bear is trying to do.” (Mashable, January 3, 2023) That’s what tech magazine Mashable wrote in a recent article. At CES 2023, the Taiwanese company qbaby.ai demonstrated its AI-powered tool, which aims to help parents respond to their baby’s needs in a more targeted way. “The soft silicone-covered device, which can be fitted in a crib or stroller, uses Q-bear’s patented tech to analyze a baby’s cries to determine one of four needs from its ‘discomfort index’: hunger, a dirty diaper, sleepiness, and need for comfort. Q-bear’s translation comes within 10 seconds of a baby crying, and the company says it will become more accurate the more you use the device.” (Mashable, January 3, 2023) Whether the tool really works remains to be seen – presumably, baby cries can be interpreted more easily than animal languages. Perhaps the use of the tool is ultimately counterproductive because parents forget to trust their own intuition. The article “CES 2023: The device that tells you why your baby is crying” can be accessed via mashable.com/article/ces-2023-why-is-my-baby-crying.
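In principle, such a tool maps acoustic features of a cry to one of the four need categories. Q-bear’s actual method is proprietary and unknown; the following toy sketch only illustrates the general idea of a four-way classifier, with invented features (mean pitch, cry-burst duration) and invented centroids.

```python
# Toy sketch of a four-way cry classifier in the spirit of Q-bear's
# "discomfort index". All feature values and centroids are invented
# for illustration; the real product's method is proprietary.

NEEDS = ["hunger", "dirty diaper", "sleepiness", "need for comfort"]

# Hypothetical per-need centroids in a 2D feature space:
# (mean pitch in Hz, cry-burst duration in seconds)
CENTROIDS = {
    "hunger": (450.0, 1.2),
    "dirty diaper": (380.0, 0.8),
    "sleepiness": (320.0, 2.0),
    "need for comfort": (500.0, 0.5),
}

def classify_cry(pitch_hz: float, burst_s: float) -> str:
    """Return the need whose centroid is nearest (squared distance,
    with duration rescaled so both features carry similar weight)."""
    def dist(c):
        return (pitch_hz - c[0]) ** 2 + ((burst_s - c[1]) * 100) ** 2
    return min(CENTROIDS, key=lambda need: dist(CENTROIDS[need]))

print(classify_cry(460.0, 1.1))  # nearest centroid: "hunger"
```

A real system would of course learn such decision boundaries from labeled audio rather than use hand-picked centroids.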
Proceedings of “How Fair is Fair? Achieving Wellbeing AI”
On November 17, 2022, the proceedings of “How Fair is Fair? Achieving Wellbeing AI” (organizers: Takashi Kido and Keiki Takadama) were published on CEUR-WS. The AAAI 2022 Spring Symposium was held at Stanford University from March 21-23, 2022. There are seven full papers of 6–8 pages in the electronic volume: “Should Social Robots in Retail Manipulate Customers?” by Oliver Bendel and Liliana Margarida Dos Santos Alves (3rd place of the Best Presentation Awards), “The SPACE THEA Project” by Martin Spathelf and Oliver Bendel (2nd place of the Best Presentation Awards), “Monitoring and Maintaining Student Online Classroom Participation Using Cobots, Edge Intelligence, Virtual Reality, and Artificial Ethnographies” by Ana Djuric, Meina Zhu, Weisong Shi, Thomas Palazzolo, and Robert G. Reynolds, “AI Agents for Facilitating Social Interactions and Wellbeing” by Hiro Taiyo Hamada and Ryota Kanai (1st place of the Best Presentation Awards), “Sense and Sensitivity: Knowledge Graphs as Training Data for Processing Cognitive Bias, Context and Information Not Uttered in Spoken Interaction” by Christina Alexandris, “Fairness-aware Naive Bayes Classifier for Data with Multiple Sensitive Features” by Stelios Boulitsakis-Logothetis, and “A Thermal Environment that Promotes Efficient Napping” by Miki Nakai, Tomoyoshi Ashikaga, Takahiro Ohga, and Keiki Takadama. In addition, there are several short papers and extended abstracts. The proceedings can be accessed via ceur-ws.org/Vol-3276/.
Best Presentation Awards at AAAI 2022 Spring Symposium
The AAAI 2022 Spring Symposium “How Fair is Fair? Achieving Wellbeing AI” was held March 21-23, 2022 at Stanford University. In the Best Presentation Awards, Oliver Bendel and Liliana Alves took 3rd place (“Should Social Robots in Retail Manipulate Customers?”), and Martin Spathelf and Oliver Bendel took 2nd place (“The SPACE THEA Project”). In 1st place was Hiro Taiyo Hamada (“AI Agents for Facilitating Social Interactions and Wellbeing”). Oliver Bendel had won first place at the AAAI 2019 Spring Symposium “Interpretable AI for Well-Being: Understanding Cognitive Bias and Social Embeddedness” with his paper “Are Robot Tax, Basic Income or Basic Property Solutions to the Social Problems of Automation?”, along with two other researchers and their teams. Both symposia – from 2019 and from 2022 – were hosted by Takashi Kido and Keiki Takadama from Japan. They are among the pioneers in the field of Responsible AI.
Achieving Wellbeing AI
The AAAI 2022 Spring Symposium “How Fair is Fair? Achieving Wellbeing AI” will be held March 21-23, 2022 at Stanford University. The symposium website states: “What are the ultimate outcomes of artificial intelligence? AI has the incredible potential to improve the quality of human life, but it also presents unintended risks and harms to society. The goal of this symposium is (1) to combine perspectives from the humanities and social sciences with technical approaches to AI and (2) to explore new metrics of success for wellbeing AI, in contrast to ‘productive AI’, which prioritizes economic incentives and values.” (Website “How Fair is Fair”) After two years of the pandemic, the AAAI Spring Symposia are once again being held in part locally. However, several organizers have opted to hold them online. “How Fair is Fair” is a hybrid event. On-site speakers include Takashi Kido, Oliver Bendel, Robert Reynolds, Stelios Boulitsakis-Logothetis, and Thomas Goolsby. The complete program is available via sites.google.com/view/hfif-aaai-2022/program.
Ethics of Conversational Agents
The Ethics of Conversational User Interfaces workshop at the ACM CHI 2022 conference “will consolidate ethics-related research of the past and set the agenda for future CUI research on ethics going forward”. “This builds on previous CUI workshops exploring theories and methods, grand challenges and future design perspectives, and collaborative interactions.” (CfP CUI) From the Call for Papers: “In what ways can we advance our research on conversational user interfaces (CUIs) by including considerations on ethics? As CUIs, like Amazon Alexa or chatbots, become commonplace, discussions on how they can be designed in an ethical manner or how they change our views on ethics of technology should be topics we engage with as a community.” (CfP CUI) Paper submission deadline is 24 February 2022. The workshop is scheduled to take place in New Orleans on 21 April 2022. More information is available via www.conversationaluserinterfaces.org/workshops/CHI2022/.
AI and Society
The AAAI Spring Symposia at Stanford University are among the community’s most important get-togethers. The years 2016, 2017, and 2018 were memorable highlights for machine ethics, robot ethics, ethics by design, and AI ethics, with the symposia “Ethical and Moral Considerations in Non-Human Agents” (2016), “Artificial Intelligence for the Social Good” (2017), and “AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents” (2018) … As of 2019, the proceedings are no longer provided directly by the Association for the Advancement of Artificial Intelligence, but by the organizers of each symposium. As of summer 2021, the entire 2018 volume of the conference has been made available free of charge. It can be found via www.aaai.org/Library/Symposia/Spring/ss18.php. It includes contributions by Philip C. Jackson, Mark R. Waser, Barry M. Horowitz, John Licato, Stefania Costantini, Biplav Srivastava, and Oliver Bendel, among others.
AI for Elephant Protection
According to Afrik21, Olga Isupova (University of Bath) has just developed an AI system that makes it possible to photograph and analyse large areas. Coupled with a satellite, it is designed to monitor African elephants, which are being decimated by poachers at the rate of one every 15 minutes. “The system collects nearly 5,000 square kilometres (km2) of photos highlighting elephants. The large size of African elephants makes them easier to spot. The results provided by the tool are then compared with those provided by human counting.” (Afrik21, 28 April 2021) Olga Isupova lists a number of advantages: “The programme counts the number of elephants by itself, which no longer puts the people who used to do this task in danger. The animals are no longer disturbed and the data collection process is more efficient …” (Afrik21, 28 April 2021) According to Afrik21, the AI expert intends to further develop her invention and eventually extend it to monitoring footprints, animal colonies or counting smaller species. The article can be accessed via www.afrik21.africa/en/africa-artificial-intelligence-to-combat-elephant-poaching/.
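The count-and-compare step described in the quote can be illustrated with a small sketch: an automated count derived from per-detection confidence scores is set against a human reference count. The scores, the threshold, and the agreement measure below are all invented for illustration; Isupova’s actual pipeline applies a learned detector to satellite imagery.

```python
# Illustrative sketch of "machine count vs. human count" validation.
# Detection scores, threshold, and agreement metric are invented;
# they do not represent the real system.

def machine_count(scores, threshold=0.5):
    """Count candidate detections whose confidence meets the threshold."""
    return sum(1 for s in scores if s >= threshold)

def agreement(machine, human):
    """Simple agreement ratio between two counts, in the range 0..1."""
    if max(machine, human) == 0:
        return 1.0
    return 1 - abs(machine - human) / max(machine, human)

scores = [0.91, 0.87, 0.42, 0.78, 0.66, 0.31]  # hypothetical detections
m = machine_count(scores)        # 4 detections at or above 0.5
print(m, agreement(m, human=5))  # compared with a human count of 5
```

In practice such a comparison would be run over many image tiles to estimate how closely the automated counts track human annotators.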
Reclaim Your Face
The “Reclaim Your Face” alliance, which calls for a ban on biometric facial recognition in public space, has been registered as an official European Citizens’ Initiative. One of the goals is to establish transparency: “Facial recognition is being used across Europe in secretive and discriminatory ways. What tools are being used? Is there evidence that it’s really needed? What is it motivated by?” (Website RYF) Another one is to draw red lines: “Some uses of biometrics are just too harmful: unfair treatment based on how we look, no right to express ourselves freely, being treated as a potential criminal suspect.” (Website RYF) Finally, the initiative demands respect for humans: “Biometric mass surveillance is designed to manipulate our behaviour and control what we do. The general public are being used as experimental test subjects. We demand respect for our free will and free choices.” (Website RYF) In recent years, the use of facial recognition techniques has been the subject of critical reflection, such as in the paper “The Uncanny Return of Physiognomy” presented at the 2018 AAAI Spring Symposia or in the chapter “Some Ethical and Legal Issues of FRT” published in the book “Face Recognition Technology” in 2020. More information at reclaimyourface.eu.
Research Program on Responsible AI
“HASLER RESPONSIBLE AI” is a research program of the Hasler Foundation open to research institutions within the higher education sector or non-commercial research institutions outside the higher education sector. The foundation explains the goals of the program in a call for project proposals: “The HASLER RESPONSIBLE AI program will support research projects that investigate machine-learning algorithms and artificial intelligence systems whose results meet requirements on responsibility and trustworthiness. Projects are expected to seriously engage in the application of the new models and methods in scenarios that are relevant to society. In addition, projects should respect the interdisciplinary character of research in the area of RESPONSIBLE AI by involving the necessary expertise.” (CfPP by Hasler Foundation) Deadline for submission of short proposals is 24 January 2021. More information at haslerstiftung.ch.