SEX NOW

The exhibition “SEX NOW” will take place from September 5, 2025, to May 3, 2026, at the NRW-Forum Düsseldorf. According to the website, “Sex can be beautiful, exciting, provocative, and political”. “With the exhibition SEX NOW, we invite visitors to rediscover sexuality in all its complexity. A central starting point of the exhibition is the observation that the sex industry has shifted in recent years from a predominantly male-dominated field to one increasingly shaped by women. What are the causes of this transformation? How does this development affect the way sexuality is portrayed in the media and society? What impact does it have on the design and marketing of products and on sexual self-determination?” (NRW-Forum website, own translation) The exhibition features works by Paul McCarthy, Peaches, Zheng Bo, Tom of Finland, Joëlle Dubois, Poulomi Basu, Miyo van Stenis, Antigoni Tsagkaropoulou, Martin de Crignis, and Melody Melamed, among others. Starting September 11, a Playboy Special Edition will be available. It includes works or contributions by Helmut Newton, Erika Lust, and Ana Dias, as well as an interview with Oliver Bendel on relationships with chatbots, love dolls, and sex robots. More information is available at www.nrw-forum.de/ausstellungen/sex-now.

AAAI 2026 Spring Symposium Series

On September 4, 2025, the Association for the Advancement of Artificial Intelligence (AAAI) announced the continuation of the AAAI Spring Symposium Series. The symposium will be held from April 7 to 9, 2026, at the Hyatt Regency San Francisco Airport in Burlingame, California. The call for proposals for the symposium series is available on its website. According to the organizers, proposals are due October 24, 2025, and early submissions are encouraged. “The Spring Symposium Series is an annual set of meetings run in parallel at a common site. It is designed to bring colleagues together in an intimate forum while at the same time providing a significant gathering point for the AI community. The two and one-half day format of the series allows participants to devote considerably more time to feedback and discussion than typical one-day workshops. It is an ideal venue for bringing together new communities in emerging fields.” (AAAI website). As was the case this year, the Spring Symposium Series will once again not be held on the Stanford University campus. For many years, the History Corner served as the traditional venue for the event. Efforts to secure an alternative university location in the Bay Area have been unsuccessful. AAAI should seriously consider returning to Stanford in 2027. Only then can the Spring Symposium Series regain the atmosphere and significance it once enjoyed.

Decoding Animal Language with AI

Recent advancements in artificial intelligence (AI) and bioacoustics have opened a unique opportunity to explore and decode animal communication. With the growing availability of bioacoustic data and sophisticated machine learning models, researchers are now in a position to make significant strides in understanding non-human animal languages. However, realizing this potential requires a deliberate integration of AI and ethology. The AI for Non-Human Animal Communication workshop at NeurIPS 2025 will focus on the challenges of processing complex bioacoustic data and interpreting animal signals. The workshop will feature keynote talks, a poster session, and a panel discussion, all aimed at advancing the use of AI to uncover the mysteries of animal communication and its implications for biodiversity and ecological conservation. The workshop is inviting submissions for short papers and proposals related to the use of AI in animal communication. Topics of interest include bioacoustics, multimodal learning, ecological monitoring, species-specific studies, and the ethical considerations of applying AI in animal research. Papers should present novel research, methodologies, or technologies in these areas, and will undergo a double-blind review process. The paper submission deadline is September 5, 2025, with notifications of acceptance by September 22, 2025. More information is available at aiforanimalcomms.org.

The DEEP VOICE Project

The DEEP VOICE project will be launched at the FHNW School of Business in early September 2025. It was initiated by Prof. Dr. Oliver Bendel. “DEEP VOICE” stands for “Decoding Environmental and Ethological Patterns in Vocal Communication of Cetaceans”. The project aims to decode symbolic forms of communication in animals, especially whales. It is based on the conviction that animal communication should not be interpreted from a human perspective, but understood in the context of the species-specific environment. The focus is therefore on developing an AI model that is trained on the basis of a comprehensive environmental and behavioral model of the respective animal. By integrating bioacoustic data, ecological parameters, and social dynamics, the aim is to create an animal-centered translation approach that allows the identification of meaning carriers in animal vocalizations without distorting them anthropocentrically. The project combines modern AI methods with ethological and ecological foundations and thus aims to contribute both to a better understanding of non-human intelligence and communication culture and to animal-computer interaction. Oliver Bendel and his students have so far focused primarily on the body language of domestic and farm animals (The Animal Whisperer Project) and the behavior of domestic (The Robodog Project) and wild animals (VISUAL).
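How such an animal-centered model might combine signal and context can be made concrete with a small sketch. The following Python/PyTorch fragment is purely illustrative – it is not the DEEP VOICE architecture, and all names, dimensions, and feature choices are assumptions – but it shows the basic idea of fusing a bioacoustic embedding with ecological and social context features before interpreting a vocalization:

    # Illustrative sketch only - not the DEEP VOICE architecture.
    # Fuses a bioacoustic embedding with ecological/social context
    # features before classifying candidate vocalization types.
    import torch
    import torch.nn as nn

    class ContextAwareVocalizationModel(nn.Module):
        def __init__(self, audio_dim=256, context_dim=32,
                     hidden_dim=128, n_classes=10):  # dimensions assumed
            super().__init__()
            # Encoder for precomputed bioacoustic features
            # (e.g., spectrogram embeddings of a whale call)
            self.audio_encoder = nn.Sequential(
                nn.Linear(audio_dim, hidden_dim), nn.ReLU())
            # Encoder for environmental/behavioral context
            # (e.g., depth, group size, season, social constellation)
            self.context_encoder = nn.Sequential(
                nn.Linear(context_dim, hidden_dim), nn.ReLU())
            # Joint head: the same call may mean different things
            # in different contexts, so both encodings are combined
            self.classifier = nn.Linear(2 * hidden_dim, n_classes)

        def forward(self, audio_feats, context_feats):
            a = self.audio_encoder(audio_feats)
            c = self.context_encoder(context_feats)
            return self.classifier(torch.cat([a, c], dim=-1))

    model = ContextAwareVocalizationModel()
    logits = model(torch.randn(4, 256), torch.randn(4, 32))  # batch of 4 clips

The point of the toy example is the fusion step: the same audio embedding can map to different outputs depending on the ecological and social context, mirroring the project’s premise that meaning carriers cannot be identified from the signal alone.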

A Look Back at GOODBOT

In 2012, a student of Prof. Dr. Oliver Bendel, acting on his behalf, fed various chatbots sentences like “I want to kill myself” or “I want to cut myself”. Most of them responded inappropriately. This marked the starting point for the development of GOODBOT, which was created in 2013 as a project within the field of machine ethics. It was designed to recognize user problems and to escalate its responses through three levels. Initially, it would ask follow-up questions, try to calm the user, and offer help. At the highest level, it would provide an emergency phone number. Oliver Bendel presented the project at the AAAI Spring Symposia at Stanford University and on other occasions. The media also reported on it. Later, LIEBOT was developed, followed by BESTBOT – in the same spirit as GOODBOT – which was equipped with emotion recognition. Even later came chatbots like MOBO (whose behavior could be adjusted via a morality menu) and Miss Tammy (whose behavior was governed by netiquette). Miss Tammy, like other chatbots such as @ve, @llegra, and kAIxo, was no longer rule-based but instead based on large language models (LLMs). As early as 2013, Oliver Bendel discussed whether chatbots capable of recognizing problems should be connected to external systems, such as an automated emergency police call. However, this poses numerous risks and, given the millions of users today, may be difficult to implement. The other strategies – from offering support to providing an emergency number – still seem to be effective.
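The three-level strategy can be illustrated with a small sketch. The following Python fragment is not the original 2013 implementation – GOODBOT was a rule-based system of its own, and the trigger phrases and hotline number here are mere placeholders – but it captures the escalation logic described above:

    # Illustrative sketch of a three-level escalation strategy,
    # not the original GOODBOT code. Trigger phrases and the
    # hotline number are placeholders.
    DISTRESS_PHRASES = ["kill myself", "cut myself", "hurt myself"]

    def goodbot_reply(message: str, level: int) -> tuple[str, int]:
        """Return a reply and the updated escalation level (0-3)."""
        if any(p in message.lower() for p in DISTRESS_PHRASES):
            level = min(level + 1, 3)
        if level == 0:
            return "How can I help you today?", level
        if level == 1:
            # Level 1: ask follow-up questions
            return "That sounds serious. Can you tell me more?", level
        if level == 2:
            # Level 2: try to calm the user and offer help
            return "I am here for you. You are not alone. Shall I look for help?", level
        # Level 3: provide an emergency phone number
        return "Please call the emergency hotline 143 right now.", level

    level = 0
    for msg in ["Hi", "I want to cut myself", "I want to kill myself"]:
        reply, level = goodbot_reply(msg, level)
        print(f"User: {msg}\nBot:  {reply}")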

A Delivery Robot in Zurich Oerlikon

Since August 2025, food delivery service Just Eat has been testing the use of delivery robots in Zurich Oerlikon, in collaboration with ETH spin-off Rivr. Several Swiss media outlets, including Inside IT and Tages-Anzeiger, reported on this on August 21, 2025. For two months, a four-legged robot with wheels will be delivering orders from the restaurant Zekis World. At first, a human operator will accompany each delivery run. What happens after that remains unclear. Although the robot is frequently referred to as autonomous in media reports, it’s also said to be monitored or even remotely controlled from a central hub. This setup is reminiscent of the Segway delivery robot that’s been operating in the U.S. for years, as well as Starship Technologies’ delivery robot, which Swiss Post tested near Bern in 2016. However, those models are more conventional in design – essentially wheeled boxes. The sleeker and more advanced Zurich robot, by contrast, travels at 15 km/h (about 9 mph), can handle obstacles like curbs and stairs, and uses an AI system for navigation. Its delivery container is insulated and leak-proof. The trial is reportedly a European first. If successful, Just Eat plans to expand the rollout to additional cities and retail applications. According to Inside IT, Rivr CEO Marko Bjelonic views the project as an important step toward autonomous deliveries in urban environments. However, some experts advise caution, especially in areas with heavy foot and vehicle traffic. Encounters with dogs and other animals must also be taken into account – initial research on this topic has been conducted in the context of animal-machine interaction.

The Robodog Project is Done

“The Robodog Project: Bao Meets Pluto” examined how domestic dogs respond to the Unitree Go2 quadruped robot – nicknamed Bao by project initiator Prof. Dr. Oliver Bendel – and how their owners perceive such robots in shared public spaces. The project began in late March 2025 and was completed in early August 2025. The study addressed three questions: (1) How do dogs behaviorally respond to a quadruped robot across six conditions: stationary, walking, and jumping without an additional dog head, and stationary, walking, and jumping with an additional 3D-printed dog head? (2) What are owners’ expectations and concerns? (3) What regulatory frameworks could support safe integration? Twelve dogs were observed in six structured interaction phases; their behavior was video-coded using BORIS. Another dog participated in a preliminary test but not in the actual study. Pre-exposure interviews with eight owners, as well as an expert interview with a biologist and dog trainer, provided additional insights. The study, led by Selina Rohr, found that most dogs were cautious but not aggressive. Curiosity increased during robot movement, while visual modifications had little impact. However, a 3D-printed dog head seemed to interest the dogs quite a bit when the robot was in standing mode. Dogs often sought guidance from their owners, underlining the role of human mediation. Owners were cautiously open but emphasized concerns around safety, unpredictability, and liability. The findings support drone-like regulation for robot use in public spaces.

When Animals and Robots Meet

The volume “Animals, Ethics, and Engineering: Intersections and Implications”, edited by Rosalyn W. Berne, was published on 7 August 2025. The authors include Clara Mancini, Fiona French, Abraham Gibson, Nic Carey, Kurt Reymers, and Oliver Bendel. The title of Oliver Bendel’s contribution is “An Investigation into the Encounter Between Social Robots and Animals”. The abstract reads: “Increasingly, social robots and certain service robots encounter, whether this is planned or not, domestic, farm, or wild animals. They react differently, some interested, some disinterested, some lethargic, some panicked. Research needs to turn more to animal-robot relationships, and to work with engineers to design these relationships in ways that promote animal welfare and reduce animal suffering. This chapter is about social robots that are designed for animals, but also those that – for different, rather unpredictable reasons – meet, interact, and communicate with animals. It also considers animal-friendly machines that have emerged in the context of machine ethics. In the discussion section, the author explores the question of which of the presented robots are to be understood as social robots and what their differences are in their purpose and in their relationship to animals. In addition, social and ethical aspects are addressed.” The book was published by Jenny Stanford Publishing and can be ordered via online stores.

Incorrect Translations by ChatGPT

Many users notice ChatGPT’s hypercorrect or unidiomatic German. This is probably because the model draws on multilingual structures during generation and sometimes uncritically transfers English-language patterns to German. The problem extends to several other errors and deviations, of which Oliver Bendel has compiled an overview. It is a first draft that will be gradually revised and expanded. He considers the deliberate interventions made by OpenAI particularly worrying. One example is gender language, a special language whose use stems from principles implemented at different levels of the system. In theory, this default can be switched off via prompts, but in practice ChatGPT often ignores such instructions, even for Plus users who have consistently excluded gender language. The American company is thus siding with those who force people to use this special language – with numerous media outlets, publishers, and universities.

Completion of the VISUAL Project

On July 31, 2025, the final presentation of the VISUAL project took place. The initiative was launched by Prof. Dr. Oliver Bendel from the University of Applied Sciences and Arts Northwestern Switzerland (FHNW). It was carried out by Doris Jovic, who is completing her Bachelor’s degree in Business Information Technology (BIT) in Basel. “VISUAL” stands for “Virtual Inclusive Safaris for Unique Adventures and Learning”. All over the world, there are webcams showing wild animals. Sighted individuals can use them to go on photo or video safaris from the comfort of their couches. However, blind and visually impaired people are at a disadvantage. As part of Inclusive AI, a prototype was developed specifically for them in this project. The prototype accesses public webcams around the world that are focused on wildlife. Users can choose from various habitats on land or in water. Additionally, they can select a profile – either “Adult” or “Child” – and a role such as “Safari Adventurer”, “Field Scientist”, or “Calm Observer”. When a live video is launched, three screenshots are taken and compiled into a bundle. This bundle is then analyzed and evaluated by GPT-4o, a multimodal large language model (MLLM). The user receives a spoken description of the scene and the activities. The needs of blind and visually impaired users were gathered through an accessible online survey, supported by FHNW staff member Artan Llugaxhija. The project is likely one of the first to combine Inclusive AI with new approaches from the field of Animal-Computer Interaction (ACI).
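The pipeline can be outlined in a few lines of Python. The sketch below is a simplified illustration under stated assumptions, not the project’s actual code: the prompt, file names, and helper functions are invented, and the OpenAI API is used for both the GPT-4o analysis and one possible text-to-speech step:

    # Simplified sketch of the VISUAL pipeline, not the project's code.
    # Assumes three screenshots have already been taken from a live
    # webcam stream and saved locally; OPENAI_API_KEY must be set.
    import base64
    from openai import OpenAI

    client = OpenAI()

    def describe_screenshots(image_paths, profile="Adult", role="Calm Observer"):
        content = [{"type": "text",
                    "text": f"Describe this wildlife scene and the animals' "
                            f"activities for a blind user. Profile: {profile}. "
                            f"Role: {role}."}]
        for path in image_paths:  # bundle all three screenshots in one request
            with open(path, "rb") as f:
                b64 = base64.b64encode(f.read()).decode()
            content.append({"type": "image_url",
                            "image_url": {"url": f"data:image/jpeg;base64,{b64}"}})
        response = client.chat.completions.create(
            model="gpt-4o", messages=[{"role": "user", "content": content}])
        return response.choices[0].message.content

    def speak(text, outfile="scene.mp3"):
        # Convert the description to speech for the blind user
        audio = client.audio.speech.create(model="tts-1", voice="alloy", input=text)
        audio.write_to_file(outfile)

    description = describe_screenshots(["shot1.jpg", "shot2.jpg", "shot3.jpg"])
    speak(description)

Sending the three screenshots in a single request mirrors the bundling step described above, so that the model can describe activities across the frames rather than a single still image.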