Towards Inclusive AI and Inclusive Robotics

The article “Wearable Social Robots for the Disabled and Impaired” by Oliver Bendel was published on December 23, 2025. It is part of the volume “Social Robotics + AI: 17th International Conference, ICSR+AI 2025, Naples, Italy, September 10–12, 2025, Proceedings, Part III.” From the abstract: “Wearable social robots can be found on a chain around the neck, on clothing, or in a shirt or jacket pocket. Due to their constant availability and responsiveness, they can support the disabled and impaired in a variety of ways and improve their lives. This article first identifies and summarizes robotic and artificial intelligence functions of wearable social robots. It then derives and categorizes areas of application. Following this, the opportunities and risks, such as those relating to privacy and intimacy, are highlighted. Overall, it emerges that wearable social robots can be useful for this group, for example, by providing care and information anywhere and at any time. However, significant improvements are still needed to overcome existing shortcomings.” The technology philosopher presented the paper on September 12, 2025, in Naples. It can be downloaded from link.springer.com/chapter/10.1007/978-981-95-2398-6_8.

The Hippo in the Mud

On November 10, 2025, the article “There’s a Large Hippo Resting in the Mud” by Oliver Bendel and Doris Jovic was published, introducing the VISUAL project. “VISUAL” stands for “Virtual Inclusive Safaris for Unique Adventures and Learning”. All over the world, there are webcams showing wild animals. Sighted people can use them to go on photo and video safaris comfortably from their sofas. Blind and visually impaired people are at a disadvantage here. As part of Inclusive AI, the project developed a prototype specifically for them. It taps public webcams around the world that are directed at wild animals. Users can choose between several habitats on land or in water. They can also select “Adult” or “Child” as a profile and choose a role (“Safari Adventurer”, “Field Scientist”, “Calm Observer”). When the live video is accessed, three screenshots are taken and combined into a bundle. This bundle is analyzed and evaluated by GPT-4o, a multimodal large language model (MLLM). The user then hears a spoken description of the scene and the activities. The project is likely one of the first to combine Inclusive AI with new approaches in Animal-Computer Interaction (ACI). The article was published in Wiley Industry News and can be accessed at wileyindustrynews.com/en/contributions/theres-a-large-hippo-resting-in-the-mud. It is also available in German.
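The screenshot-bundle step described above could be sketched as follows. This is a minimal illustration, not the project's actual code: the function names, the prompt wording, and the message format (here, the style used by multimodal chat APIs such as OpenAI's) are all assumptions, since the article does not detail the implementation.

```python
# Hypothetical sketch of the VISUAL pipeline step: bundle three webcam
# screenshots with a prompt tailored to the user's profile and role, in a
# message format suitable for a multimodal model such as GPT-4o.
import base64


def encode_frame(jpeg_bytes: bytes) -> str:
    """Encode one captured JPEG frame as a base64 data URL."""
    return "data:image/jpeg;base64," + base64.b64encode(jpeg_bytes).decode("ascii")


def build_vision_request(frames: list[bytes], profile: str, role: str) -> list[dict]:
    """Combine three screenshots into one request for a scene description.

    `profile` is "Adult" or "Child"; `role` is e.g. "Safari Adventurer",
    "Field Scientist", or "Calm Observer" (terms from the article).
    """
    prompt = (
        "Describe the animals and their activities in these webcam screenshots "
        f"for a blind or visually impaired listener. Profile: {profile}. Role: {role}."
    )
    content = [{"type": "text", "text": prompt}]
    for frame in frames:
        content.append({"type": "image_url", "image_url": {"url": encode_frame(frame)}})
    return [{"role": "user", "content": content}]
```

In a full implementation, the returned message list would presumably be sent to the model (for example via a chat-completions call with `model="gpt-4o"`), and the text answer passed to a text-to-speech engine so the user hears the description; both steps are assumptions about how the prototype works.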