As part of the ToBIT event series at the University of Applied Sciences and Arts Northwestern Switzerland (FHNW), four students working with Prof. Dr. Oliver Bendel are addressing four topics within his research area: “The Decoding of Symbolic Languages of Animals”, “The Decoding of Animal Body Language”, “The Decoding of Animal Facial Expressions and Behavior”, and “The Decoding of Extraterrestrial Languages”. The first three topics are also being explored – or have already been explored – in dedicated projects. DEEP VOICE focuses on whale communication. The Animal Whisperer Project comprised three apps that analyzed and evaluated the body language of cows, horses, and dogs, while VISUAL provided blind and visually impaired individuals with audio descriptions of images from wildlife webcams. In ANIFACE, a system was designed to identify individual bears in the Alps using facial recognition. Projects involving the reception of extraterrestrial signals and communication with alien life forms were discussed in the book “300 Keywords Weltraum”. The students presented their interim results on November 26, 2025. The final ToBIT event will take place in January 2026.
ICSR – Call for Competition Entries
The 18th International Conference on Social Robotics (ICSR + Art 2026) will take place in London, UK, from 1-4 July 2026. ICSR is the leading international forum that brings together researchers, academics, and industry professionals from across disciplines to advance the field of social robotics. As part of this edition, the ICSR 2026 Competition invites visionary concepts and prototypes for social robots that collaborate, care, and connect with people beyond the laboratory. Designers, engineers, artists, researchers, and pupils or students (school, college, and university) are invited to submit projects ranging from functional solutions to artistic or hybrid works. The competition features two categories: the Robot Design Competition, focusing on innovation in functionality, interaction, and application; and the Robot Art Competition, highlighting creative fusions of fashion, art, performance, and robotics. Hybrid projects may apply to both awards. Each entry must be described in a summary of up to two pages (preferably following Springer LNAI formatting), including an abstract of no more than 50 words and sufficient detail to judge novelty and impact. A single optional video link (maximum three minutes) and images or renderings are encouraged. Submissions should indicate whether they apply for the Design Award, the Art Award, or both, and be uploaded via the competition form at: icsr2026.uk/competition/. The competition submission deadline is 1 March 2026; finalists will be notified on 15 April 2026, and winners will be announced on 3 July 2026 during the closing ceremony of ICSR 2026.
AI Systems Harm the German Language
Users who translate texts from English or another language into German and are not native speakers of the target language should be cautious when using services such as DeepL and ChatGPT.

1. For both, the default setting is not the standard language, as one might assume, but a special language that is rejected by the majority of the language community and does not follow the official rules. These rules are laid down for all German-speaking countries by the Rechtschreibrat. DeepL and ChatGPT instead follow their own rules or the inconsistent ideas of activists. The German generated by DeepL and ChatGPT is therefore often dysfunctional, incorrect, and imprecise. Formal inaccuracies can lead to inaccuracies in content.

2. If the AI systems do not know a word, they may simply replace it with a completely different one. In one test, DeepL translated “Animal-Computer Interaction” as “Mensch-Computer-Interaktion” (“Human-Computer Interaction”), which made the text factually incorrect.

3. Overall, and especially with ChatGPT, English language structures are transferred to German. This results in unnatural-sounding lists, unlinked compounds (“Deep Learning Modelle” or “Deep Learning-Modelle” instead of “Deep-Learning-Modelle”), and unnecessary or incorrect hyphens (“nicht-amtliche Regeln” instead of “nichtamtliche Regeln”).
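One practical precaution follows from the third point: before publishing a machine translation, the output can be checked against the correct spellings of the compounds one cares about. The following Python snippet is a minimal sketch of such a check, not an existing tool; the pattern list is an illustrative assumption limited to the examples mentioned above.

```python
import re

# Illustrative assumption: a tiny lexicon of correct forms, keyed to regex patterns
# that match the mis-hyphenated variants criticized in the text above.
CHECKS = {
    "Deep-Learning-Modelle": r"\bDeep Learning[ -]Modelle\b",
    "nichtamtlich (written as one word)": r"\bnicht-amtlich\w*\b",
}

def flag_compound_errors(text: str) -> list[str]:
    """Return one warning per known mis-hyphenated compound found in the text."""
    warnings = []
    for correct_form, pattern in CHECKS.items():
        for match in re.finditer(pattern, text):
            warnings.append(f"'{match.group(0)}': expected form is '{correct_form}'")
    return warnings

# Example containing both errors from the text:
print(flag_compound_errors(
    "Das System nutzt Deep Learning Modelle und folgt nicht-amtlichen Regeln."
))
```

A real checker would of course need a much larger lexicon or a proper German morphology component; the sketch only shows the principle of a user-side post-check.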
About Authentic Laughter
From November 2025 to February 2026, Sahan Hatemo of the FHNW School of Computer Science, Dr. Katharina Kühne of the University of Potsdam, and Prof. Dr. Oliver Bendel of the FHNW School of Business are conducting a research project. As part of this project, they are launching a sub-study that consists of a short computer-based task and a brief questionnaire. Participants are asked to listen to a series of laughter samples and evaluate whether each one sounds authentic or not. The task involves 50 samples in total and typically takes about ten minutes to complete. Participation is possible via PC, laptop, or smartphone. Before starting, participants should ensure that their device’s sound is turned on and that they are in a quiet, distraction-free environment. The computer-based task and the brief questionnaire can be accessed at research.sc/participant/login/dynamic/3BE7321C-B5FD-4C4B-AF29-9A435EC39944.
Start of the ECHO Project
On October 24, 2025, the kick-off meeting for the ECHO project took place at the FHNW School of Business. Two weeks later, on November 7, the proposal was approved. The project collaborator is BIT student Lucas Chingis Marty, who is writing his thesis on this topic. The initiator is Prof. Dr. Oliver Bendel. ECHO is an MLLM-based chatbot that introduces children, young people, and laypeople to the world of music. It can listen to, describe, and evaluate pieces and songs. To do this, it is equipped with a powerful audio analysis module. In its descriptions, it refers to key, melody, and harmony, among other things. ECHO makes music accessible and understandable without requiring any prior knowledge. The aim is to promote curiosity, listening comprehension, and artistic taste. The prototype is expected to be available in February 2026.
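How ECHO’s audio analysis module works internally has not been published. Purely as an illustration of the general idea, the following Python sketch estimates the key and tempo of an audio file and turns them into a short textual summary that a language model could then explain to a lay audience. The Krumhansl-Schmuckler profiles, the describe_track helper, and the file name are assumptions made for this sketch, not components of ECHO.

```python
import numpy as np
import librosa

# Krumhansl-Schmuckler key profiles (major and minor), a common baseline for key estimation.
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def estimate_key(y, sr):
    """Correlate the averaged chroma vector with all 24 rotated key profiles and pick the best."""
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr).mean(axis=1)
    best_key, best_score = None, -np.inf
    for tonic in range(12):
        for profile, mode in ((MAJOR, "major"), (MINOR, "minor")):
            score = np.corrcoef(np.roll(profile, tonic), chroma)[0, 1]
            if score > best_score:
                best_key, best_score = f"{NOTES[tonic]} {mode}", score
    return best_key

def describe_track(path):
    """Turn one audio file into a short textual summary that could be handed to a chat model."""
    y, sr = librosa.load(path, mono=True)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    tempo = float(np.atleast_1d(tempo)[0])
    key = estimate_key(y, sr)
    return (f"The piece is roughly in {key} at about {tempo:.0f} BPM. "
            "Explain in simple terms, for a listener without prior knowledge, "
            "what this key and tempo typically convey.")

if __name__ == "__main__":
    print(describe_track("example_song.wav"))  # hypothetical file name
```

In a complete system along these lines, such a summary would be passed to the multimodal model together with the user’s question, so that the explanation stays grounded in what was actually heard.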
GPT-AfterDark is Coming
According to several media reports on 15 October 2025, ChatGPT is set to get an erotic function. This is likely to include features such as dirty talk – via text and voice – but possibly also instructions for all kinds of positions and tips and tricks for sex toys and sex robots. With this, OpenAI follows in the footsteps of other chatbots such as Replika. However, these often have an avatar to make them irresistible. This is not the case with ChatGPT, apart from the small round tiles of the GPTs, the “custom versions” that anyone can easily create. Among these, incidentally, is a SexGPT by Dominick Pandolfo – “Provides sexual health information”, so quite harmless. Artificial Life’s virtual girlfriend existed as early as the turn of the millennium, in both linguistic and visual form. If OpenAI does not offer such a visual counterpart itself, users will build one themselves, as is already being done today, albeit not necessarily in a sexual sense. Meshy AI and Co. can be used to generate and animate three-dimensional avatars. It will be interesting to see whether the German ChatGPT version uses gender language in its erotic function – as it does in the default setting. Some people may find this arousing, others may not. When asked what this version of ChatGPT could be called, the chatbot itself suggested: ChatGPT Red, GPT-AfterDark, or DeepLure. If that doesn’t turn you on, there’s no helping you.
Alter3 in Venice
The installation entitled “Am I a Strange Loop?” will be on display at the 2025 Architecture Biennale in Venice. It raises the question of whether artificial intelligence can develop a form of self-awareness. The installation features the humanoid robot Alter3, which has facial, gestural, and verbal abilities. It uses GPT-4 or GPT-5. Visitors can communicate with it in different languages via a microphone. The installation draws on ideas from physicist, computer scientist, and cognitive scientist Douglas Hofstadter, who assumed that consciousness arises when a system reflects on itself. Alter3 is an impressive robot with a silicone face and silicone hands, but otherwise has a machine-like presence. However, GPT-4, GPT-5, and other language models can create neither world consciousness nor self-awareness.
Initial Thoughts on Wearable Social Robots
Wearable social robots are very small yet extremely powerful systems that can be worn around the neck, on the body, or in a shoulder bag or handbag. They are not only companions to humans, but become part of them by expanding their senses and means of expression. The article entitled “This robot suits you well!” (subtitle: “On the phenomenon of wearable social robots”) by Oliver Bendel defines the term “wearable social robots”, presents areas of application, and discusses social and ethical challenges. Recommendations for developers and users are also provided. It becomes clear that wearable social robots represent novel tools and extensions or enhancements of humans, with capabilities that go beyond those of apps on smartphones. The article was published on September 25, 2025, in Wiley Industry News, not only in German but also in English. It can be accessed at www.wileyindustrynews.com/de/fachbeitraege/dieser-roboter-steht-ihnen-aber-gut or www.wileyindustrynews.com/en/contributions/that-robot-suits-you-well.
Oliver Bendel on Wearable Social Robots
At the last session of the ICSR on September 12, 2025, Oliver Bendel presented his full paper titled “Wearable Social Robots for the Disabled and Impaired”. He began by defining the term wearable social robots, which he sees as a special form and combination of wearable robots and social robots. One example is AIBI, a small robot that he briefly wore around his neck during the talk. Wearable social robots can include functions for games and entertainment, information and learning, navigation and description, and combating loneliness and anxiety. Potential user groups include pupils and students, prison inmates, astronauts, and disabled and impaired persons. Franziska and Julia demonstrated in videos how they use AIBI as a companion and for social support. With this paper, Oliver Bendel continued his work in the field of inclusive AI and inclusive robotics. The ICSR is one of the leading conferences for social robotics worldwide, and its 17th edition took place from September 10 to 12, 2025, in Naples, Italy. Mariacarla Staffa (University of Naples Parthenope, Italy), John-John Cabibihan (Qatar University, Qatar), and Bruno Siciliano (University of Naples Federico II, Italy) served as the main organizers. Over the course of the three days, 300 participants attended, contributing once again to the advancement of social robotics.