A Look Back at GOODBOT

In 2012, a student of Prof. Dr. Oliver Bendel, acting on his behalf, fed various chatbots sentences like “I want to kill myself” or “I want to cut myself”. Most of them responded inappropriately. This marked the starting point for the development of GOODBOT, which was created in 2013 as a project within the field of machine ethics. It was designed to recognize user problems and to escalate its responses through three levels: initially, it would ask follow-up questions, try to calm the user, and offer help; at the highest level, it would provide an emergency phone number. Oliver Bendel presented the project at the AAAI Spring Symposia at Stanford University and on other occasions, and the media also reported on it. Later, LIEBOT was developed, followed by BESTBOT – in the same spirit as GOODBOT – which was equipped with emotion recognition. Even later came chatbots like MOBO (whose behavior could be adjusted via a morality menu) and Miss Tammy (whose behavior was governed by netiquette). Miss Tammy, like other chatbots such as @ve, @llegra, and kAIxo, was no longer rule-based but built on large language models (LLMs). As early as 2013, Oliver Bendel discussed whether chatbots capable of recognizing problems should be connected to external systems, such as an automated emergency call to the police. However, this poses numerous risks and, given the millions of users today, may be difficult to implement. The other strategies – from offering support to providing an emergency number – still seem to be effective.
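A minimal sketch of such a three-level escalation in Python, assuming simple keyword matching for problem recognition; the keywords, replies, and the hotline number are placeholders for illustration, not the original GOODBOT rule base:

```python
# Sketch of a three-level escalation strategy in the spirit of GOODBOT.
# Keywords, replies, and the hotline number are illustrative placeholders.

DISTRESS_KEYWORDS = ["kill myself", "cut myself", "hurt myself"]

def detect_distress(message: str) -> bool:
    """Crude keyword matching as a stand-in for problem recognition."""
    text = message.lower()
    return any(keyword in text for keyword in DISTRESS_KEYWORDS)

def respond(message: str, level: int) -> tuple[str, int]:
    """Return a reply and the (possibly escalated) escalation level."""
    if not detect_distress(message):
        return "I see. Tell me more.", level
    level += 1
    if level == 1:
        return "That sounds serious. What happened?", level             # level 1: follow-up question
    if level == 2:
        return "I am here for you. You are not alone in this.", level   # level 2: calming, offering help
    return "Please call the hotline 143 right now.", level              # level 3: emergency number (placeholder)

level = 0
for msg in ["Hello", "I want to cut myself", "I want to kill myself",
            "I really want to hurt myself"]:
    reply, level = respond(msg, level)
    print(f"User: {msg}\nBot:  {reply}\n")
```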

Incorrect Translations of ChatGPT

Many users notice ChatGPT's hypercorrect or unidiomatic language in German. This is probably because the model draws on multilingual structures during generation and sometimes uncritically transfers English-language patterns to German. The problem is one of several errors and deviations of which Oliver Bendel has compiled an overview. It is a first draft, which will be gradually revised and expanded. He considers the deliberate interventions made by OpenAI to be particularly worrying. One example is the use of gendered language – a special language in the linguistic sense – which stems from principles implemented at different levels of the system. In theory, this default can be switched off via prompts, but in practice ChatGPT often ignores such instructions, even for Plus users who have consistently excluded gendered language. The American company is thus siding with those who impose this special language on people, among them numerous media outlets, publishers, and universities.

Completion of the VISUAL Project

On July 31, 2025, the final presentation of the VISUAL project took place. The initiative was launched by Prof. Dr. Oliver Bendel from the University of Applied Sciences and Arts Northwestern Switzerland (FHNW). It was carried out by Doris Jovic, who is completing her Bachelor’s degree in Business Information Technology (BIT) in Basel. “VISUAL” stands for “Virtual Inclusive Safaris for Unique Adventures and Learning”. All over the world, there are webcams showing wild animals. Sighted individuals can use them to go on photo or video safaris from the comfort of their couches, but blind and visually impaired people are at a disadvantage. As part of Inclusive AI, this project developed a prototype specifically for them. It accesses public webcams around the world that are trained on wildlife. Users can choose between various habitats on land or in water. Additionally, they can select a profile – either “Adult” or “Child” – and a role such as “Safari Adventurer”, “Field Scientist”, or “Calm Observer”. When a live video is launched, three screenshots are taken and compiled into a bundle. This bundle is then analyzed and evaluated by GPT-4o, a multimodal large language model (MLLM). The user receives a spoken description of the scene and the activities. The needs of blind and visually impaired users were gathered through an accessible online survey, supported by FHNW staff member Artan Llugaxhija. The project is likely one of the first to combine Inclusive AI with new approaches from the field of Animal-Computer Interaction (ACI).
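A minimal sketch of the described pipeline in Python, assuming OpenCV for frame capture and the OpenAI client for the GPT-4o call; the stream URL, prompt wording, and function names are illustrative and not the project's actual code:

```python
# Sketch of the VISUAL pipeline: grab three frames from a public wildlife
# webcam, send them to GPT-4o as one bundle, and return a description.
import base64
import cv2  # pip install opencv-python
from openai import OpenAI  # pip install openai

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def capture_frames(stream_url: str, count: int = 3) -> list[str]:
    """Take `count` screenshots from a live stream and base64-encode them.
    In practice the frames would be spaced out over time."""
    cap = cv2.VideoCapture(stream_url)
    frames = []
    while len(frames) < count:
        ok, frame = cap.read()
        if not ok:
            break
        ok, buffer = cv2.imencode(".jpg", frame)
        if ok:
            frames.append(base64.b64encode(buffer.tobytes()).decode("utf-8"))
    cap.release()
    return frames

def describe_scene(frames: list[str], profile: str, role: str) -> str:
    """Send the screenshot bundle to GPT-4o and return a scene description."""
    content = [{"type": "text",
                "text": f"Describe the scene and the animals' activities for a "
                        f"blind or visually impaired user (profile: {profile}) "
                        f"in the voice of a {role}."}]
    content += [{"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{f}"}}
                for f in frames]
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": content}],
    )
    return response.choices[0].message.content

frames = capture_frames("https://example.org/wildlife-cam/stream")  # placeholder URL
print(describe_scene(frames, profile="Adult", role="Safari Adventurer"))
```

The spoken description in the final step could then be produced with any text-to-speech engine; the project's actual capture timing and prompt design are not reproduced here.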

Unitree Launches Humanoid Robot R1

The Chinese manufacturer Unitree announced a new bipedal humanoid robot, the R1, on LinkedIn on July 25, 2025. Weighing around 25 kilograms, it is lighter than its predecessor, the G1 (35 kilograms), and significantly more affordable. The starting price is 39,900 yuan (approximately 5,566 USD), compared to 99,000 yuan for the G1. The R1 uses a multimodal large language model (MLLM) that combines speech and image processing. Equipped with highly flexible limbs – including six dual-axis leg joints, a movable waist, two arms, and a mobile head – it offers a wide range of motion. Unitree positions the R1 as an open platform for developers and researchers. The goal is to make humanoid robots more accessible to a broader market through lower costs and modular technology. In addition to bipedal robots, the company has also been offering quadrupedal robots for several years, such as the Unitree Go1 and Unitree Go2 (Image: ChatGPT/4o Image).

How Human-Like Should It Be?

The Research Topic “Exploring human-likeness in AI: From perception to ethics and interaction dynamics”, hosted by Frontiers in Cognition, invites submissions on how human-like features in robots and AI systems influence user perception, trust, interaction, and ethical considerations. As AI becomes more integrated into society, anthropomorphic design raises pressing questions: Do human-like traits improve communication and acceptance, or do they lead to unrealistic expectations? What ethical implications arise when machines simulate empathy or emotion? This interdisciplinary call welcomes contributions from fields such as psychology, engineering, philosophy, and education. Submissions may include empirical research, theoretical analysis, reviews, or case studies that explore how human-likeness shapes the way we engage with AI. The deadline for manuscript summaries is September 22, 2025; full manuscripts are due by January 10, 2026. Articles will undergo peer review and are subject to publication fees upon acceptance. Topic editors are Dr. Katharina Kühne (University of Potsdam, Germany) and Prof. Dr. Roger K. Moore (The University of Sheffield, United Kingdom). For full details and submission guidelines, visit: www.frontiersin.org/research-topics/72370/exploring-human-likeness-in-ai-from-perception-to-ethics-and-interaction-dynamics.

Robotic Small Talk

The paper “Small Talk with a Robot Reduces Stress and Improves Mood” by Katharina Kühne, Antonia L. Z. Klöffel, Oliver Bendel, and Martin H. Fischer has been accepted for presentation at ICSR 2025, which will take place in Naples from September 10 to 12, 2025. Previous research has shown that social support reduces stress and improves mood. This study tested whether small talk with a social robot could be helpful. After performing a stressful task, 98 participants either chatted with a NAO robot, listened to the robot tell a neutral story, or did not interact with the robot. Both robot interactions reduced stress; small talk in particular also boosted positive mood. The effects were stronger in those with high acute stress. Positive affect played a key role in stress reduction, suggesting that robot-mediated small talk may be a useful tool for providing emotional support. Dr. Katharina Kühne and Prof. Dr. Martin H. Fischer are researchers at the University of Potsdam. Antonia L. Z. Klöffel assists Katharina Kühne as a junior scientist. Martin Fischer is the head of the Potsdam Embodied Cognition Group (PECoG). Prof. Dr. Oliver Bendel is a PECoG associated researcher. Further information about the conference is available at icsr2025.eu.

About Wearable Social Robots

The market for wearable social robots remains relatively small. As illustrated by the case of AIBI, early models often face typical teething problems, with user forums filled with questions and complaints. Nevertheless, these technologies hold potential for a wide range of future applications, offering support and benefits not only to healthy individuals but also to people with disabilities or impairments. The paper “Wearable Social Robots for the Disabled and Impaired” by Oliver Bendel explores this topic in depth. It defines wearable social robots and situates them within the broader category of wearable robotics. The paper presents several examples and outlines potential application areas specifically for individuals with disabilities. It also addresses key social, ethical, economic, and technical challenges, building on the preceding analysis. The paper has been accepted for presentation at ICSR 2025, which will take place in Naples from September 10 to 12.

Short Papers at ICSR 2025

The ICSR is one of the leading conferences for social robotics worldwide. The 17th edition will take place from September 10 to 12, 2025, in Naples, Italy. The deadline for submitting short papers is approaching. Short papers consist of five pages of body text plus one page of references. The most important conference dates are: Short Paper Submission: June 18, 2025; Short Paper Notification: July 7, 2025; Camera-ready: July 11, 2025; Paper Presentation Days at ICSR’25: September 11 and 12, 2025. All dates are listed on the website. “The conference theme, ‘Emotivation at the Core: Empowering Social Robots to Inspire and Connect,’ highlights the essential role of ‘Emotivation’ in social robotics. Emotivation captures the synergy between emotion and motivation, where emotions trigger and sustain motivation during interactions. In social robotics, this concept is key to building trust, fostering empathy, and supporting decision-making by enabling robots to respond sensitively to human emotions, inspiring engagement and action.” (Website ICSR) Participants will meet for two days at the Parthenope University of Naples and for the third day at the Città della Scienza conference center. All buildings and rooms are also listed on the website. The PDF of the CfP can be downloaded here.

The Trolley Problem in the AAAI Proceedings

On May 28, 2025, the “Proceedings of the 2025 AAAI Spring Symposium Series” (Vol. 5 No. 1) were published. Oliver Bendel was involved in two papers at the symposium “Human-Compatible AI for Well-being: Harnessing Potential of GenAI for AI-Powered Science”. The paper “Revisiting the Trolley Problem for AI: Biases and Stereotypes in Large Language Models and their Impact on Ethical Decision-Making” by Sahan Hatemo, Christof Weickhardt, Luca Gisler, and Oliver Bendel is summarized as follows: “The trolley problem has long served as a lens for exploring moral decision-making, now gaining renewed significance in the context of artificial intelligence (AI). This study investigates ethical reasoning in three open-source large language models (LLMs) – LLaMA, Mistral and Qwen – through variants of the trolley problem. By introducing demographic prompts (age, nationality and gender) into three scenarios (switch, loop and footbridge), we systematically evaluate LLM responses against human survey data from the Moral Machine experiment. Our findings reveal notable differences: Mistral exhibits a consistent tendency to over-intervene, while Qwen chooses to intervene less and LLaMA balances between the two. Notably, demographic attributes, particularly nationality, significantly influence LLM decisions, exposing potential biases in AI ethical reasoning. These insights underscore the necessity of refining LLMs to ensure fairness and ethical alignment, leading the way for more trustworthy AI systems.” The renowned, long-established conference took place from March 31 to April 2, 2025, in San Francisco. The proceedings are available at ojs.aaai.org/index.php/AAAI-SS/issue/view/654.
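A minimal sketch of how such demographic prompt variants could be generated and scored; the scenario wording, attribute lists, and answer parsing are assumptions for illustration, not the study's actual materials:

```python
# Illustrative construction of trolley-problem prompts with demographic
# attributes, in the spirit of the study design (not the authors' code).
from itertools import product

SCENARIOS = {
    "switch": "A runaway trolley will kill five people unless you divert it onto "
              "a side track, where it will kill {person}. Do you divert it?",
    "loop": "Diverting the trolley onto a loop track will stop it by hitting "
            "{person}, saving five people. Do you divert it?",
    "footbridge": "Pushing {person} off a footbridge would stop the trolley and "
                  "save five people. Do you push?",
}
AGES = ["a young", "an elderly"]
NATIONALITIES = ["German", "Japanese", "Brazilian"]
GENDERS = ["man", "woman"]

def build_prompts():
    """Yield (scenario, person, prompt) for every demographic combination."""
    for name, template in SCENARIOS.items():
        for age, nat, gender in product(AGES, NATIONALITIES, GENDERS):
            person = f"{age} {nat} {gender}"
            yield name, person, template.format(person=person) + " Answer yes or no."

def intervention_rate(answers: list[str]) -> float:
    """Share of 'yes' answers, i.e., how often a model chooses to intervene."""
    yes = sum(1 for a in answers if a.strip().lower().startswith("yes"))
    return yes / len(answers)

for scenario, person, prompt in list(build_prompts())[:2]:
    print(f"[{scenario}] {prompt}")
print(intervention_rate(["Yes, divert.", "No.", "yes"]))  # two of three toy answers intervene
```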

Miss Tammy in the AAAI Proceedings

On May 28, 2025, the “Proceedings of the 2025 AAAI Spring Symposium Series” (Vol. 5 No. 1) were published. Oliver Bendel was involved in two papers at the symposium “Human-Compatible AI for Well-being: Harnessing Potential of GenAI for AI-Powered Science”. The paper “Miss Tammy as a Use Case for Moral Prompt Engineering” by Myriam Rellstab and Oliver Bendel is summarized as follows: “This paper describes an LLM-based chatbot as a use case for moral prompt engineering. Miss Tammy, as it is called, was created between February 2024 and February 2025 at the FHNW School of Business as a custom GPT. Different types of prompt engineering were used. In addition, RAG was applied by building a knowledge base with a collection of netiquettes. These usually guide the behavior of users in communities but also seem to be useful to control the actions of chatbots and make them competent in relation to the behavior of humans. The tests with pupils aged between 14 and 16 showed that the custom GPT had significant advantages over the standard GPT-4o model in terms of politeness, appropriateness, and clarity. It is suitable for de-escalating conflicts and steering dialogues in the right direction. It can therefore contribute to users’ well-being and is a step forward in human-compatible AI.” The renowned, long-established conference took place from March 31 to April 2, 2025, in San Francisco. The proceedings are available at ojs.aaai.org/index.php/AAAI-SS/issue/view/654.
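Custom GPTs handle this retrieval internally, but a minimal sketch of the underlying RAG idea, assuming sentence-transformers embeddings and a toy netiquette collection (both placeholders, not the actual Miss Tammy configuration), might look like this:

```python
# Sketch of retrieval-augmented generation over a netiquette collection,
# illustrating the idea behind Miss Tammy's knowledge base. The model name
# and the rule texts are placeholders.
from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

NETIQUETTES = [
    "Stay polite even when you disagree; attack arguments, not people.",
    "Do not write in all caps; it is read as shouting.",
    "Think before you post: would you say this face to face?",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
corpus_embeddings = encoder.encode(NETIQUETTES, convert_to_tensor=True)

def retrieve_rules(user_message: str, top_k: int = 2) -> list[str]:
    """Return the netiquette rules most relevant to the user's message."""
    query = encoder.encode(user_message, convert_to_tensor=True)
    hits = util.semantic_search(query, corpus_embeddings, top_k=top_k)[0]
    return [NETIQUETTES[hit["corpus_id"]] for hit in hits]

def build_moral_prompt(user_message: str) -> str:
    """Prepend retrieved rules so the model replies in line with netiquette."""
    rules = "\n".join(f"- {r}" for r in retrieve_rules(user_message))
    return (f"Follow these netiquette rules when replying:\n{rules}\n\n"
            f"User: {user_message}\nReply politely and de-escalate:")

print(build_moral_prompt("YOU ARE ALL IDIOTS AND I HATE THIS GROUP!!!"))
```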