Ten years ago, the CYBATHLON was held for the first time. It marked the beginning of a fascinating and inspiring project centered on inclusive AI and inclusive robotics. The competition – bringing together people with disabilities and impairments to compete with and against one another – was founded by Prof. Dr. Robert Riener of ETH Zurich. At the 2016 CYBATHLON, SRF host Tobias Müller spoke five times with Prof. Dr. Oliver Bendel, a technology philosopher from Zurich, who provided context and evaluation regarding the use of implants, prosthetics, and robots. On one occasion, Prof. Dr. Lino Guzzella, then President of ETH Zurich, also took part; on another, Robert Riener joined the discussion. The most recent edition of the CYBATHLON took place in 2024. A total of 67 teams from 24 nations competed across eight disciplines at the SWISS Arena in Kloten near Zurich, as well as at seven interconnected hubs in the United States, Canada, South Africa, Hungary, Thailand, and South Korea. The website currently states: “While CYBATHLON’s journey at ETH Zürich ends here, the story is far from over. The next edition of the event may take place in Asia in 2028, marking an exciting new chapter for this unique global competition.” (CYBATHLON website) This would allow a success story to continue – once again not in Europe, but in Asia (Photo: ETH Zürich, CYBATHLON/Alessandro della Bella).
Inclusive AI and Inclusive Robotics Are Highly Valued
At CES 2026, some of the most compelling examples of Inclusive AI and Inclusive Robotics came not from consumer gadgets, but from European assistive technologies designed to expand human autonomy. This was reported by FAZ and other media outlets in January 2026. These innovations show how AI-driven perception and robotics can be centered on accessibility – and still scale beyond niche use cases. Romanian startup Dotlumen exemplifies Inclusive AI through its “.lumen Glasses for the Blind,” a wearable system that replaces a guide dog with real-time, on-device intelligence. Using multiple cameras, sensors, and GPU-based computer vision, the glasses interpret sidewalks, obstacles, and spatial structures and translate them into intuitive haptic signals. The company calls this approach “Pedestrian Autonomous Driving” – a concept that directly bridges human navigation and mobile robotics. Notably, the same algorithms are now being adapted for autonomous delivery robots, underscoring the overlap between assistive AI and broader robotic autonomy. A complementary approach comes from France-based Artha (Seehaptic), whose haptic belt uses AI-powered scene understanding to convert visual space into tactile feedback. By shifting navigation cues from sound to touch, the system reduces cognitive load and leverages sensory substitution – an inclusive design principle with implications for human-machine interfaces in robotics. Together, these technologies illustrate a European model of Inclusive AI: privacy-preserving, embodied, and focused on real-world autonomy. What begins as assistive tech increasingly becomes a foundation for the next generation of intelligent, human-centered robotics (Photo: ETH Zürich, CYBATHLON/Alessandro della Bella).
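The sensory substitution principle behind haptic navigation aids of this kind can be illustrated with a minimal sketch: detected obstacles, given as bearing and distance, are mapped to vibration intensities on a belt of motors, with closer obstacles vibrating more strongly on the motor facing them. All function names, parameters, and thresholds below are illustrative assumptions, not the actual algorithms of Dotlumen or Artha.

```python
# Hypothetical sketch of sensory substitution for a haptic navigation belt.
# Obstacles are (bearing_deg, distance_m) pairs; bearing 0 means straight
# ahead, measured clockwise. The belt has num_motors vibration motors
# spaced evenly around the waist. Thresholds are illustrative assumptions.

def haptic_pattern(obstacles, num_motors=8, max_range=5.0):
    """Map obstacles to per-motor vibration intensities in [0, 1]."""
    intensities = [0.0] * num_motors
    for bearing, dist in obstacles:
        if dist >= max_range:
            continue  # too far away to signal
        # Map the bearing to the nearest motor around the belt.
        idx = round(((bearing % 360) / 360) * num_motors) % num_motors
        strength = 1.0 - dist / max_range  # nearer -> stronger vibration
        intensities[idx] = max(intensities[idx], strength)
    return intensities
```

In this sketch an obstacle straight ahead at 2.5 m (half of the 5 m range) drives the front motor at half intensity, while anything beyond the range stays silent, shifting navigation cues from sound to touch as described above.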
Towards Inclusive AI and Inclusive Robotics
The article “Wearable Social Robots for the Disabled and Impaired” by Oliver Bendel was published on December 23, 2025. It is part of the volume “Social Robotics + AI: 17th International Conference, ICSR+AI 2025, Naples, Italy, September 10–12, 2025, Proceedings, Part III.” From the abstract: “Wearable social robots can be found on a chain around the neck, on clothing, or in a shirt or jacket pocket. Due to their constant availability and responsiveness, they can support the disabled and impaired in a variety of ways and improve their lives. This article first identifies and summarizes robotic and artificial intelligence functions of wearable social robots. It then derives and categorizes areas of application. Following this, the opportunities and risks, such as those relating to privacy and intimacy, are highlighted. Overall, it emerges that wearable social robots can be useful for this group, for example, by providing care and information anywhere and at any time. However, significant improvements are still needed to overcome existing shortcomings.” The technology philosopher presented the paper on September 12, 2025, in Naples. It can be downloaded from link.springer.com/chapter/10.1007/978-981-95-2398-6_8.
The Hippo in the Mud
On November 10, 2025, the article “There’s a Large Hippo Resting in the Mud” by Oliver Bendel and Doris Jovic was published, introducing the VISUAL project. “VISUAL” stands for “Virtual Inclusive Safaris for Unique Adventures and Learning”. All over the world, there are webcams showing wild animals. Sighted people can use them to go on photo and video safaris comfortably from their sofas. Blind and visually impaired people are at a disadvantage here. As part of Inclusive AI, the project developed a prototype specifically for them. It taps public webcams around the world that are directed at wild animals. Users can choose between several habitats on land or in water. They can also select “Adult” or “Child” as a profile and choose a role (“Safari Adventurer”, “Field Scientist”, “Calm Observer”). When the live video is accessed, three screenshots are taken and combined into a bundle. This bundle is analyzed and evaluated by GPT-4o, a multimodal large language model (MLLM). The user then hears a spoken description of the scene and the activities. The project is likely one of the first to combine Inclusive AI with new approaches in Animal-Computer Interaction (ACI). The article was published in Wiley Industry News and can be accessed at wileyindustrynews.com/en/contributions/theres-a-large-hippo-resting-in-the-mud. It is also available in German.
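The described pipeline step, bundling three screenshots and handing them to GPT-4o with a profile- and role-dependent prompt, can be sketched as follows. This is a hedged illustration, not the project's actual code: the function name, prompt wording, and request structure are assumptions, though the message format follows the publicly documented GPT-4o vision chat format with base64-encoded images.

```python
import base64

# Illustrative sketch of the VISUAL bundling step: three webcam screenshots
# plus a prompt shaped by the user's profile ("Adult"/"Child") and role are
# packed into a single multimodal chat request for GPT-4o. Names and prompt
# text are assumptions, not the project's actual implementation.

PROFILES = {"Adult", "Child"}
ROLES = {"Safari Adventurer", "Field Scientist", "Calm Observer"}

def build_vision_request(frames: list[bytes], profile: str, role: str) -> dict:
    """Bundle three JPEG screenshots into one GPT-4o vision request."""
    if profile not in PROFILES or role not in ROLES:
        raise ValueError("unknown profile or role")
    if len(frames) != 3:
        raise ValueError("expected a bundle of three screenshots")
    # The child profile gets simpler language; the role shapes the narration.
    style = ("simple, child-friendly language" if profile == "Child"
             else "clear, descriptive language")
    prompt = (f"You are a {role}. Describe the animals and their activities "
              f"in these three webcam frames in {style}, for a blind listener.")
    content = [{"type": "text", "text": prompt}]
    for frame in frames:
        b64 = base64.b64encode(frame).decode("ascii")
        content.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
        })
    return {"model": "gpt-4o", "messages": [{"role": "user", "content": content}]}
```

The returned dictionary could then be sent to the chat completions endpoint, and the model's text reply passed to a text-to-speech engine to produce the spoken description mentioned above.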