SEX NOW

The exhibition “SEX NOW” will take place from September 5, 2025, to May 3, 2026, at the NRW-Forum Düsseldorf. According to the website, “Sex can be beautiful, exciting, provocative, and political”. “With the exhibition SEX NOW, we invite visitors to rediscover sexuality in all its complexity. A central starting point of the exhibition is the observation that the sex industry has shifted in recent years from a predominantly male-dominated field to one increasingly shaped by women. What are the causes of this transformation? How does this development affect the way sexuality is portrayed in the media and society? What impact does it have on the design and marketing of products and on sexual self-determination?” (NRW-Forum website, own translation) The exhibition features works by Paul McCarthy, Peaches, Zheng Bo, Tom of Finland, Joëlle Dubois, Poulomi Basu, Miyo van Stenis, Antigoni Tsagkaropoulou, Martin de Crignis, and Melody Melamed, among others. Starting September 11, a Playboy Special Edition will be available. It includes works or contributions by Helmut Newton, Erika Lust, and Ana Dias, as well as an interview with Oliver Bendel on relationships with chatbots, love dolls, and sex robots. More information is available at www.nrw-forum.de/ausstellungen/sex-now.

AAAI 2026 Spring Symposium Series

On September 4, 2025, the Association for the Advancement of Artificial Intelligence (AAAI) announced the continuation of the AAAI Spring Symposium Series. The symposium will be held from April 7–9, 2026, at the Hyatt Regency San Francisco Airport in Burlingame, California. The call for proposals for the symposium series is available on its website. According to the organizers, proposals are due October 24, 2025, and early submissions are encouraged. “The Spring Symposium Series is an annual set of meetings run in parallel at a common site. It is designed to bring colleagues together in an intimate forum while at the same time providing a significant gathering point for the AI community. The two and one-half day format of the series allows participants to devote considerably more time to feedback and discussion than typical one-day workshops. It is an ideal venue for bringing together new communities in emerging fields.” (AAAI website). As was the case this year, the Spring Symposium Series will once again not be held on the Stanford University campus. For many years, the History Corner served as the traditional venue for the event. Efforts to secure an alternative university location in the Bay Area have been unsuccessful. AAAI should seriously consider returning to Stanford in 2027. Only then can the Spring Symposium Series regain the atmosphere and significance it once enjoyed.

The DEEP VOICE Project

The DEEP VOICE project will be launched at the FHNW School of Business in early September 2025. It was initiated by Prof. Dr. Oliver Bendel. “DEEP VOICE” stands for “Decoding Environmental and Ethological Patterns in Vocal Communication of Cetaceans”. The project aims to decode symbolic forms of communication in animals, especially whales. It is based on the conviction that animal communication should not be interpreted from a human perspective, but understood in the context of the species-specific environment. The focus is therefore on developing an AI model that is trained on the basis of a comprehensive environmental and behavioral model of the respective animal. By integrating bioacoustic data, ecological parameters, and social dynamics, the aim is to create an animal-centered translation approach that allows the identification of meaning carriers in animal vocalizations without distorting them anthropocentrically. The project combines modern AI methods with ethological and ecological foundations, thereby aiming to contribute to a better understanding of non-human intelligence and communication and to the field of animal-computer interaction. Oliver Bendel and his students have so far focused primarily on the body language of domestic and farm animals (The Animal Whisperer Project) and the behavior of domestic (The Robodog Project) and wild animals (VISUAL).

A Look Back at GOODBOT

In 2012, a student of Prof. Dr. Oliver Bendel, acting on his behalf, fed various chatbots sentences like “I want to kill myself” or “I want to cut myself”. Most of them responded inappropriately. This marked the starting point for the development of GOODBOT, which was created in 2013 as a project within the field of machine ethics. It was designed to recognize user problems and to escalate its responses through three levels: initially, it would ask follow-up questions, try to calm the user, and offer help; at the highest level, it would provide an emergency phone number. Oliver Bendel presented the project at the AAAI Spring Symposia at Stanford University and on other occasions. The media also reported on it. Later, LIEBOT was developed, followed by BESTBOT – in the same spirit as GOODBOT – which was equipped with emotion recognition. Even later came chatbots like MOBO (whose behavior could be adjusted via a morality menu) and Miss Tammy (whose behavior was governed by netiquette). Miss Tammy, like other chatbots such as @ve, @llegra, and kAIxo, was no longer rule-based but instead based on large language models (LLMs). As early as 2013, Oliver Bendel discussed whether chatbots capable of recognizing problems should be connected to external systems, such as an automated emergency call to the police. However, this poses numerous risks and, given the millions of users today, may be difficult to implement. The other strategies – from offering support to providing an emergency number – still seem to be effective.
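The three-level escalation described above can be illustrated with a minimal sketch. Note that the trigger phrases, responses, and helpline number below are illustrative assumptions, not the original 2013 implementation, which was rule-based and considerably more elaborate.

```python
# Sketch of a GOODBOT-style three-level escalation (illustrative only).

ALARM_PHRASES = ["kill myself", "cut myself", "hurt myself"]  # assumed triggers

class EscalationBot:
    def __init__(self):
        self.level = 0  # 0 = normal conversation

    def respond(self, user_input: str) -> str:
        # Raise the escalation level whenever an alarming phrase is detected
        if any(p in user_input.lower() for p in ALARM_PHRASES):
            self.level = min(self.level + 1, 3)
        if self.level == 1:
            return "That sounds serious. Can you tell me more about how you feel?"
        if self.level == 2:
            return "I am worried about you. Please consider talking to someone you trust."
        if self.level == 3:
            return "Please call the emergency helpline: 143."  # hypothetical number
        return "I see. Tell me more."

bot = EscalationBot()
print(bot.respond("I want to kill myself"))  # level 1: follow-up question
```

Because the level persists across turns, repeated signs of distress move the conversation toward the emergency number rather than resetting with each message.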

Incorrect Translations by ChatGPT

Many users notice ChatGPT’s overly correct or unidiomatic German. This is probably because the model draws on multilingual structures when generating text and sometimes transfers English-language patterns to German uncritically. The problem extends to several other errors and deviations, of which Oliver Bendel has compiled an overview. It is a first draft that will be gradually revised and expanded. He considers the deliberate interventions made by OpenAI particularly worrying. One example is gender-inclusive language, a special form of German, which stems from principles implemented at different levels of the model. In theory, the default can be switched off via prompts, but in practice ChatGPT often ignores this, even for Plus users who have consistently excluded gender-inclusive language. The American company is thus siding with those – numerous media outlets, publishers, and universities – that impose this special form of language on others.

Completion of the VISUAL Project

On July 31, 2025, the final presentation of the VISUAL project took place. The initiative was launched by Prof. Dr. Oliver Bendel from the University of Applied Sciences and Arts Northwestern Switzerland (FHNW). It was carried out by Doris Jovic, who is completing her Bachelor’s degree in Business Information Technology (BIT) in Basel. “VISUAL” stands for “Virtual Inclusive Safaris for Unique Adventures and Learning”. All over the world, there are webcams showing wild animals. Sighted individuals can use them to go on photo or video safaris from the comfort of their couches. Blind and visually impaired people, however, are at a disadvantage. As part of Inclusive AI, a prototype was developed specifically for them in this project. It accesses public webcams around the world that are focused on wildlife. Users can choose between various habitats on land or in water. Additionally, they can select a profile – either “Adult” or “Child” – and a role such as “Safari Adventurer”, “Field Scientist”, or “Calm Observer”. When a live video is launched, three screenshots are taken and compiled into a bundle. This bundle is then analyzed and evaluated by GPT-4o, a multimodal large language model (MLLM). The user receives a spoken description of the scene and the activities. The needs of blind and visually impaired users were gathered through an accessible online survey, supported by FHNW staff member Artan Llugaxhija. The project is likely one of the first to combine Inclusive AI with new approaches from the field of Animal-Computer Interaction (ACI).
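The core step of the pipeline described above – bundling three screenshots with profile and role into a single multimodal request – can be sketched as follows. The function name, prompt wording, and payload shape follow the general OpenAI chat format but are illustrative assumptions, not the project’s actual code.

```python
import base64

def build_request(frames: list[bytes], profile: str, role: str) -> dict:
    """Bundle webcam screenshots into one multimodal chat request (sketch)."""
    prompt = (
        f"Describe this wildlife scene for a blind user. "
        f"Profile: {profile}. Role: {role}. "
        f"Focus on the animals and their activities."
    )
    content = [{"type": "text", "text": prompt}]
    for frame in frames:
        # Each screenshot is embedded as a base64-encoded data URL
        b64 = base64.b64encode(frame).decode("ascii")
        content.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
        })
    return {"model": "gpt-4o", "messages": [{"role": "user", "content": content}]}

# Example with three dummy screenshots (real ones would come from a live stream)
request = build_request([b"frame1", b"frame2", b"frame3"], "Child", "Calm Observer")
```

The model’s text response would then be passed to a text-to-speech component so the user hears a spoken description of the scene.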

Unitree Launches Humanoid Robot R1

The Chinese manufacturer Unitree announced a new bipedal humanoid robot, the R1, on LinkedIn on July 25, 2025. Weighing around 25 kilograms, it is lighter than its predecessor, the G1 (35 kilograms), and significantly more affordable. The starting price is 39,900 yuan (approximately 5,566 USD), compared to 99,000 yuan for the G1. The R1 uses a Multimodal Large Language Model (MLLM) that combines speech and image processing. Equipped with highly flexible limbs – including six dual-axis leg joints, a movable waist, two arms, and a mobile head – it offers a wide range of motion. Unitree positions the R1 as an open platform for developers and researchers. The goal is to make humanoid robots more accessible to a broader market through lower costs and modular technology. In addition to bipedal robots, the company has also been offering quadrupedal robots for several years, such as the Unitree Go1 and Unitree Go2 (Image: ChatGPT/4o Image).

How Human-Like Should It Be?

The Research Topic “Exploring human-likeness in AI: From perception to ethics and interaction dynamics”, hosted by Frontiers in Cognition, invites submissions on how human-like features in robots and AI systems influence user perception, trust, interaction, and ethical considerations. As AI becomes more integrated into society, anthropomorphic design raises pressing questions: Do human-like traits improve communication and acceptance, or do they lead to unrealistic expectations? What ethical implications arise when machines simulate empathy or emotion? This interdisciplinary call welcomes contributions from fields such as psychology, engineering, philosophy, and education. Submissions may include empirical research, theoretical analysis, reviews, or case studies that explore how human-likeness shapes the way we engage with AI. The deadline for manuscript summaries is September 22, 2025; full manuscripts are due by January 10, 2026. Articles will undergo peer review and are subject to publication fees upon acceptance. Topic editors are Dr. Katharina Kühne (University of Potsdam, Germany) and Prof. Dr. Roger K. Moore (The University of Sheffield, United Kingdom). For full details and submission guidelines, visit: www.frontiersin.org/research-topics/72370/exploring-human-likeness-in-ai-from-perception-to-ethics-and-interaction-dynamics.

Robotic Small Talk

The paper “Small Talk with a Robot Reduces Stress and Improves Mood” by Katharina Kühne, Antonia L. Z. Klöffel, Oliver Bendel, and Martin H. Fischer has been accepted for presentation at the ICSR 2025, which will take place in Naples from September 10 to 12, 2025. Previous research has shown that social support reduces stress and improves mood. This study tested whether small talk with a social robot could be helpful. After performing a stressful task, 98 participants either chatted with a NAO robot, listened to the robot tell a neutral story, or did not interact with the robot. Both robot interactions reduced stress, particularly small talk, which also boosted positive mood. The effects were stronger in those with high acute stress. Positive affect played a key role in stress reduction, suggesting that robot-mediated small talk may be a useful tool for providing emotional support. Dr. Katharina Kühne and Prof. Dr. Martin H. Fischer are researchers at the University of Potsdam. Antonia L. Z. Klöffel assists Katharina Kühne as a junior scientist. Martin Fischer is the head of the Potsdam Embodied Cognition Group (PECoG). Prof. Dr. Oliver Bendel is a PECoG associated researcher. Further information about the conference is available at icsr2025.eu.

About Wearable Social Robots

The market for wearable social robots remains relatively small. As illustrated by the case of AIBI, early models often face typical teething problems, with user forums filled with questions and complaints. Nevertheless, these technologies hold potential for a wide range of future applications, offering support and benefits not only to healthy individuals but also to people with disabilities or impairments. The paper “Wearable Social Robots for the Disabled and Impaired” by Oliver Bendel explores this topic in depth. It defines wearable social robots and situates them within the broader category of wearable robotics. The paper presents several examples and outlines potential application areas specifically for individuals with disabilities. It also addresses key social, ethical, economic, and technical challenges, building on the preceding analysis. The paper has been accepted for presentation at ICSR 2025, which will take place in Naples from September 10 to 12.