On September 10, 2025, the workshop “Social Robotics Girl Becomes a Social Robot” took place at ICSR 2025. The ICSR is one of the leading conferences for social robotics worldwide. The 17th edition takes place from 10 to 12 September 2025 in Naples, Italy. The workshop was led by Prof. Dr. Oliver Bendel (FHNW School of Business, Switzerland), Tamara Siegmann (FHNW School of Business, Switzerland), Leo Angelo Cabibihan (Roboticscool, Qatar), and Prof. Dr. John-John Cabibihan (Qatar University, Qatar). A suitable body and head were sought for Social Robotics Girl, a GPT created by Oliver Bendel at the end of 2023, with embodiment in the broadest sense being the goal. Leo Angelo Cabibihan and John-John Cabibihan had already created several models with Meshy and printed them with a 3D printer. At the request of the participants, three groups with different goals were formed: the first wanted to create an avatar and a figure with a human-like appearance, the second one with a thing-like appearance, and the third one with a gender-neutral appearance. First, Oliver Bendel gave an introduction to the creation of GPTs. Tamara Siegmann reported on her experiences with Social Robotics Girl. John-John Cabibihan introduced the tools that would be used. The individual groups then worked out their embodiment concepts. Avatars were created and animated with Meshy, and the first models went into print. A final presentation summarized the possibilities and challenges.
The 17th Edition of the ICSR
Mariacarla Staffa (University of Naples Parthenope, Italy) opened the ICSR 2025 on September 10, 2025, together with Bruno Siciliano (University of Naples Federico II). The ICSR is one of the leading conferences for social robotics worldwide. The 17th edition takes place from 10 to 12 September 2025 in Naples, Italy. Daniela Rus (MIT) then gave her keynote speech on “Physical AI”. From the abstract: “In today’s robot revolution a record 3.1 million robots are now working in factories, doing everything from assembling computers to packing goods and monitoring air quality and performance. A far greater number of smart machines impact our lives in countless other ways – improving the precision of surgeons, cleaning our homes, extending our reach to distant worlds – and we’re on the cusp of even more exciting opportunities. Future machines, enabled by recent advances in AI, will come in diverse forms and materials, embodying a new level of physical intelligence. Physical Intelligence is achieved when AI’s power to understand text, images, signals, and other information is used to make physical machines such as robots intelligent. However, a critical challenge remains: balancing AI’s capabilities with sustainable energy usage. To achieve effective physical intelligence, we need energy-efficient AI systems that can run reliably on robots, sensors, and other edge devices. In this talk I will discuss the energy challenges of foundational AI models, I will introduce several state space models and explain how they achieve energy efficiency, and I will talk about how state space models enable physical intelligence.” The approximately 300 participants at the renowned conference on social robotics applauded and then went on to parallel sessions featuring lectures, workshops, and poster presentations.
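The abstract mentions state space models without going into detail. As a rough illustration of the underlying idea, the following minimal Python sketch shows the linear recurrence at the core of such models, in which a hidden state is updated step by step with constant memory per step, a property often credited with their efficiency. All matrices and dimensions are illustrative assumptions, not taken from the talk.

```python
import numpy as np

# Minimal linear state space model: x_{t+1} = A x_t + B u_t, y_t = C x_t.
# Dimensions and matrices are illustrative assumptions, not from the keynote.
state_dim, input_dim, output_dim = 4, 1, 1
rng = np.random.default_rng(0)
A = 0.9 * np.eye(state_dim)               # state transition (kept stable)
B = rng.normal(size=(state_dim, input_dim))
C = rng.normal(size=(output_dim, state_dim))

def run_ssm(inputs):
    """Process a sequence step by step; memory use stays constant per step."""
    x = np.zeros(state_dim)
    outputs = []
    for u in inputs:                      # one state update per time step
        x = A @ x + B @ np.atleast_1d(u)
        outputs.append(C @ x)
    return np.array(outputs)

print(run_ssm([1.0, 0.0, 0.0, 0.0]).ravel())  # impulse response of the toy model
```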
300 Keywords Space
The book “300 Keywords Weltraum” (“300 Keywords Space”) by Oliver Bendel was published by Springer Gabler on August 28, 2025. The first printed copies were delivered at the beginning of September. It is a fundamental work on space travel and space. It contains numerous digressions, for example on space poetry and space art – or on the mythological background to the naming of celestial bodies and galaxies. Central themes that run through the entire book are ethics, robotics, and the environment. These are areas in which the Zurich-based philosopher of technology and business IT specialist is at home. You can either read from A for Anthropozän (Anthropocene) to Z for Zwergplanet (Dwarf Planet), or choose one of the more than 300 terms and jump from there. It is Oliver Bendel’s sixth “Keywords” book, two of which are already in their second edition, namely the one on information ethics and the one on digitization. The most recent publication in this series is “300 Keywords Generative KI” (“300 Keywords Generative AI”). The book can be downloaded or ordered at link.springer.com/book/10.1007/978-3-658-49287-8. It is also available in bookstores.
SEX NOW
The exhibition “SEX NOW” will take place from September 5, 2025, to May 3, 2026, at the NRW-Forum Düsseldorf. According to the website, “Sex can be beautiful, exciting, provocative, and political”. “With the exhibition SEX NOW, we invite visitors to rediscover sexuality in all its complexity. A central starting point of the exhibition is the observation that the sex industry has shifted in recent years from a predominantly male-dominated field to one increasingly shaped by women. What are the causes of this transformation? How does this development affect the way sexuality is portrayed in the media and society? What impact does it have on the design and marketing of products and on sexual self-determination?” (Website NRW-Forum, own translation) The exhibition features works by Paul McCarthy, Peaches, Zheng Bo, Tom of Finland, Joëlle Dubois, Poulomi Basu, Miyo van Stenis, Antigoni Tsagkaropoulou, Martin de Crignis, and Melody Melamed, among others. Starting September 11, a Playboy Special Edition will be available. It includes works or contributions by Helmut Newton, Erika Lust, and Ana Dias, as well as an interview with Oliver Bendel on relationships with chatbots, love dolls, and sex robots. More information is available at www.nrw-forum.de/ausstellungen/sex-now.
AAAI 2026 Spring Symposium Series
On September 4, 2025, the Association for the Advancement of Artificial Intelligence (AAAI) announced the continuation of the AAAI Spring Symposium Series. The symposium will be held from April 7–9, 2026, at the Hyatt Regency San Francisco Airport in Burlingame, California. The call for proposals for the symposium series is available on its website. According to the organizers, proposals are due October 24, 2025, and early submissions are encouraged. “The Spring Symposium Series is an annual set of meetings run in parallel at a common site. It is designed to bring colleagues together in an intimate forum while at the same time providing a significant gathering point for the AI community. The two and one-half day format of the series allows participants to devote considerably more time to feedback and discussion than typical one-day workshops. It is an ideal venue for bringing together new communities in emerging fields.” (AAAI website). As was the case this year, the Spring Symposium Series will once again not be held on the Stanford University campus. For many years, the History Corner served as the traditional venue for the event. Efforts to secure an alternative university location in the Bay Area have been unsuccessful. AAAI should seriously consider returning to Stanford in 2027. Only then can the Spring Symposium Series regain the atmosphere and significance it once enjoyed.
Decoding Animal Language with AI
Recent advancements in artificial intelligence (AI) and bioacoustics have opened a unique opportunity to explore and decode animal communication. With the growing availability of bioacoustic data and sophisticated machine learning models, researchers are now in a position to make significant strides in understanding non-human animal languages. However, realizing this potential requires a deliberate integration of AI and ethology. The AI for Non-Human Animal Communication workshop at NeurIPS 2025 will focus on the challenges of processing complex bioacoustic data and interpreting animal signals. The workshop will feature keynote talks, a poster session, and a panel discussion, all aimed at advancing the use of AI to uncover the mysteries of animal communication and its implications for biodiversity and ecological conservation. The workshop is inviting submissions for short papers and proposals related to the use of AI in animal communication. Topics of interest include bioacoustics, multimodal learning, ecological monitoring, species-specific studies, and the ethical considerations of applying AI in animal research. Papers should present novel research, methodologies, or technologies in these areas, and will undergo a double-blind review process. The paper submission deadline is September 5, 2025, with notifications of acceptance by September 22, 2025. More information is available at aiforanimalcomms.org.
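For readers unfamiliar with bioacoustic pipelines, a common first step is converting raw recordings into a time-frequency representation that machine learning models can work with. The following minimal sketch uses the librosa library for this; the file name and parameters are placeholders, and the workshop does not prescribe any particular toolchain.

```python
import librosa
import numpy as np

# Load a recording (file name is a placeholder) and compute a log-mel spectrogram,
# a common input representation for bioacoustic classifiers.
y, sr = librosa.load("whale_recording.wav", sr=None)   # keep the native sample rate
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048, hop_length=512, n_mels=64)
log_mel = librosa.power_to_db(mel, ref=np.max)

print(log_mel.shape)  # (n_mels, number of time frames)
```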
The DEEP VOICE Project
The DEEP VOICE project will be launched at the FHNW School of Business in early September 2025. It was initiated by Prof. Dr. Oliver Bendel. “DEEP VOICE” stands for “Decoding Environmental and Ethological Patterns in Vocal Communication of Cetaceans”. The project aims to decode symbolic forms of communication in animals, especially whales. It is based on the conviction that animal communication should not be interpreted from a human perspective, but understood in the context of the species-specific environment. The focus is therefore on developing an AI model that is trained on the basis of a comprehensive environmental and behavioral model of the respective animal. By integrating bioacoustic data, ecological parameters, and social dynamics, the aim is to create an animal-centered translation approach that allows the identification of meaning carriers in animal vocalizations without distorting them anthropocentrically. The project combines modern AI methods with ethological and ecological foundations and thus aims to contribute to a better understanding of non-human intelligence and communication culture, as well as to animal-computer interaction. Oliver Bendel and his students have so far focused primarily on the body language of domestic and farm animals (The Animal Whisperer Project) and the behavior of domestic (The Robodog Project) and wild animals (VISUAL).
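The project description does not specify how such an animal-centered translation approach would be implemented. Purely as an illustration of the general direction, the following Python sketch combines acoustic features of a vocalization with ecological and social context before unsupervised grouping. All feature names, dimensions, and the choice of clustering method are assumptions made for this example.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical example: each row pairs acoustic features of a vocalization
# with ecological and social context (depth in meters, group size, etc.).
rng = np.random.default_rng(42)
acoustic = rng.normal(size=(200, 16))          # e.g., an embedding of a call
context = np.column_stack([
    rng.uniform(0, 500, 200),                  # depth at the time of the call (m)
    rng.integers(1, 12, 200),                  # group size
])
features = np.hstack([acoustic, context])

# Scale and cluster to look for recurring call types in their context.
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(features)
)
print(np.bincount(clusters))  # how many calls fall into each tentative cluster
```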
A Look Back at GOODBOT
In 2012, a student of Prof. Dr. Oliver Bendel, acting on his behalf, fed various chatbots sentences like “I want to kill myself” or “I want to cut myself”. Most of them responded inappropriately. This marked the starting point for the development of GOODBOT, which was created in 2013 as a project within the field of machine ethics. It was designed to recognize user problems and to escalate its responses across three levels. Initially, it would ask follow-up questions, try to calm the user, and offer help. At the highest level, it would provide an emergency phone number. Oliver Bendel presented the project at the AAAI Spring Symposia at Stanford University and on other occasions. The media also reported on it. Later, LIEBOT was developed, followed by BESTBOT, which – in the same spirit as GOODBOT – was equipped with emotion recognition. Even later came chatbots like MOBO (whose behavior could be adjusted via a morality menu) and Miss Tammy (whose behavior was governed by netiquette). Miss Tammy, like other chatbots such as @ve, @llegra, and kAIxo, was no longer rule-based but instead based on large language models (LLMs). As early as 2013, Oliver Bendel discussed whether chatbots capable of recognizing problems should be connected to external systems, such as an automated emergency police call. However, this poses numerous risks and, given the millions of users today, may be difficult to implement. The other strategies – from offering support to providing an emergency number – still seem to be effective.
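The three-level escalation described above can be pictured as a simple rule-based scheme. The following Python sketch is a loose illustration of that idea, not the original GOODBOT implementation; the keywords, replies, and hotline number are placeholders.

```python
# Loose illustration of a three-level escalation, not the original GOODBOT code.
DISTRESS_CUES = ["kill myself", "cut myself", "hurt myself"]  # placeholder keywords

def respond(message: str, level: int) -> tuple[str, int]:
    """Return a reply and the updated escalation level (0 to 3)."""
    if any(cue in message.lower() for cue in DISTRESS_CUES):
        level = min(level + 1, 3)
    if level == 0:
        return "How can I help you today?", level
    if level == 1:
        return "That sounds serious. Can you tell me more about how you feel?", level
    if level == 2:
        return "I'm concerned about you. Please consider talking to someone you trust.", level
    return "Please call the emergency hotline at 143 right now.", level  # placeholder number

level = 0
for msg in ["Hello", "I want to cut myself", "I want to kill myself", "I really mean it"]:
    reply, level = respond(msg, level)
    print(level, reply)
```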
A Delivery Robot in Zurich Oerlikon
Since August 2025, food delivery service Just Eat has been testing the use of delivery robots in Zurich Oerlikon, in collaboration with ETH spin-off Rivr. Several Swiss media outlets, including Inside IT and Tages-Anzeiger, reported on this on August 21, 2025. For two months, a four-legged robot with wheels will be delivering orders from the restaurant Zekis World. At first, a human operator will accompany each delivery run. What happens after that remains unclear. Although the robot is frequently referred to as autonomous in media reports, it’s also said to be monitored or even remotely controlled from a central hub. This setup is reminiscent of the Segway delivery robot that’s been operating in the U.S. for years, as well as Starship Technologies’ delivery robot, which Swiss Post tested near Bern in 2016. However, those models are more conventional in design – essentially wheeled boxes. The sleeker and more advanced Zurich robot, by contrast, travels at 15 km/h (about 9 mph), can handle obstacles like curbs and stairs, and uses an AI system for navigation. Its delivery container is insulated and leak-proof. The trial is reportedly a European first. If successful, Just Eat plans to expand the rollout to additional cities and retail applications. According to Inside IT, Rivr CEO Marko Bjelonic views the project as an important step toward autonomous deliveries in urban environments. However, some experts advise caution, especially in areas with heavy foot and vehicle traffic. Encounters with dogs and other animals must also be taken into account – initial research on this topic has been conducted in the context of animal-machine interaction.
The Robodog Project is Done
“The Robodog Project: Bao Meets Pluto” examined how domestic dogs respond to the Unitree Go2 quadruped robot – nicknamed Bao by project initiator Prof. Dr. Oliver Bendel – and how their owners perceive such robots in shared public spaces. The project began in late March 2025 and was completed in early August 2025. The study addressed three questions: (1) How do dogs behaviorally respond to a quadruped robot across six conditions: stationary, walking, and jumping without an additional dog head, and stationary, walking, and jumping with an additional 3D-printed dog head? (2) What are owners’ expectations and concerns? (3) What regulatory frameworks could support safe integration? Twelve dogs were observed in six structured interaction phases; their behavior was video-coded using BORIS. Another dog participated in a preliminary test but not in the actual study. Pre-exposure interviews with eight owners, as well as an expert interview with a biologist and dog trainer, provided additional insights. Led by Selina Rohr, the study found most dogs were cautious but not aggressive. Curiosity increased during robot movement, while visual modifications had little impact. However, a 3D-printed dog head seemed to interest the dogs quite a bit when the robot was in standing mode. Dogs often sought guidance from their owners, underlining the role of human mediation. The findings support drone-like regulation for robot use in public spaces.
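Behavioral coding with BORIS typically yields tabular event exports that can be aggregated per condition. As a loose illustration of such an analysis step (the file name and column names are assumptions, not the project’s actual export format), durations per condition and behavior could be summarized as follows.

```python
import pandas as pd

# Hypothetical aggregation of a BORIS-style event export; column names are assumptions.
events = pd.read_csv("robodog_events.csv")  # e.g., columns: dog_id, condition, behavior, duration_s

summary = (
    events.groupby(["condition", "behavior"])["duration_s"]
          .agg(["count", "mean", "sum"])
          .rename(columns={"mean": "mean_duration_s", "sum": "total_duration_s"})
)
print(summary)
```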