The AAAI Spring Symposia are Back Again

On the second day of the AAAI Spring Symposia, one could already get the impression that the traditional conference has returned to its former greatness. The COVID-19 pandemic had taken its toll: in 2023, some symposia still had too few participants, and many stayed home and followed the sessions online, which was difficult for everyone involved. But the problems had already started in 2019. At that time, the Association for the Advancement of Artificial Intelligence had decided to stop publishing the proceedings centrally and to leave this to the individual organizers. Some of them were negligent or uninterested and left the scientists to deal with the publication requirements on their own. In 2024, the association took over the publication process again, which was met with very positive reactions in the community. Last but not least, of course, the boost from generative AI helped. In 2024, Stanford University is once again full of happy and exuberant AI experts, with mild temperatures and plenty of sunshine.

Generative AI at Stanford University

On March 26, 2024, Oliver Bendel (School of Business FHNW) gave two talks on generative AI at Stanford University. The setting was the AAAI Spring Symposia, more precisely the symposium “Impact of GenAI on Social and Individual Well-being (AAAI2024-GenAI)”. One presentation was based on the paper “How Can Generative AI Enhance the Well-being of the Blind?” by Oliver Bendel himself. It was about Be My AI, the GPT-4-based feature of the Be My Eyes app. The other presentation was based on the paper “How Can GenAI Foster Well-being in Self-regulated Learning?” by Stefanie Hauske (ZHAW) and Oliver Bendel. The topic was GPTs used for self-regulated learning. Both talks were received with great interest by the audience. All papers of the AAAI Spring Symposia will be published in the spring, in proceedings edited by the Association for the Advancement of Artificial Intelligence itself.

Start of the European AI Office

The European AI Office was established in February 2024. The European Commission’s website states: “The European AI Office will be the center of AI expertise across the EU. It will play a key role in implementing the AI Act – especially for general-purpose AI – foster the development and use of trustworthy AI, and international cooperation.” (European Commission, February 22, 2024) And further: “The European AI Office will support the development and use of trustworthy AI, while protecting against AI risks. The AI Office was established within the European Commission as the center of AI expertise and forms the foundation for a single European AI governance system.” (European Commission, February 22, 2024) The EU states that it wants to ensure that AI is safe and trustworthy. According to the Commission, the AI Act is the world’s first comprehensive legal framework for AI; it is intended to safeguard the health, safety, and fundamental rights of people and to provide legal certainty for companies in the 27 member states.

New Channel on Animal Law and Ethics

The new YouTube channel “GW Animal Law Program” went online at the end of November 2023. It collects lectures and recordings on animal law and ethics. Some of them are from the online event “Artificial Intelligence & Animals”, which took place on September 16, 2023. The speakers were Prof. Dr. Oliver Bendel (FHNW University of Applied Sciences Northwestern Switzerland), Yip Fai Tse (University Center for Human Values, Center for Information Technology Policy, Princeton University), and Sam Tucker (CEO VegCatalyst, AI-Powered Marketing, Melbourne). Other videos include “Tokitae, Reflections on a Life: Evolving Science & the Need for Better Laws” by Kathy Hessler, “Alternative Pathways for Challenging Corporate Humanewashing” by Brooke Dekolf, and “World Aquatic Animal Day 2023: Alternatives to the Use of Aquatic Animals” by Amy P. Wilson. In his talk, Oliver Bendel presents the basics and prototypes of animal-computer interaction and animal-machine interaction, including his own projects in the field of machine ethics. The YouTube channel can be accessed at www.youtube.com/@GWAnimalLawProgram/featured.

AAAI 2024 Spring Symposium Series

The Association for the Advancement of Artificial Intelligence (AAAI) is thrilled to host its 2024 Spring Symposium Series at Stanford University from March 25 to 27, 2024. With a diverse array of symposia, each hosting 40 to 75 participants, the event is a vibrant platform for exploring the frontiers of AI. Of the eight symposia, only three are highlighted here: Firstly, the “Bi-directionality in Human-AI Collaborative Systems” symposium promises to delve into the dynamic interactions between humans and AI, exploring how these collaborations can evolve and improve over time. Secondly, the “Impact of GenAI on Social and Individual Well-being” symposium addresses the profound effects of generative AI technologies on society and individual lives. Lastly, “Increasing Diversity in AI Education and Research” focuses on a crucial issue in the tech world: diversity. It aims to highlight and address the need for more inclusive approaches in AI education and research, promoting a more equitable and diverse future in the field. Each of these symposia offers unique insights and discussions, making the AAAI 2024 Spring Symposium Series a key event for those keen to stay at the cutting edge of AI development and its societal implications. More information is available at aaai.org/conference/spring-symposia/sss24/#ss01.

Machine Learning for Lucid Dreaming

A start-up promises that lucid dreaming will soon be possible for everyone. This was reported by the German magazine Golem on November 10, 2023. The company is Prophetic, founded by Eric Wollberg (CEO) and Wesley Louis Berry III (CTO). In a lucid dream, the dreamers are aware that they are dreaming. They can shape the dream according to their will and also exit it. Everyone has the ability to experience lucid dreams. One can learn to induce this form of dreaming, but one can also have it as a child and unlearn it again as an adult. The Halo headband, a non-invasive neural device, is designed to make lucid dreaming possible. “The combination of ultrasound and machine learning models (created using EEG & fMRI data) allows us to detect when dreamers are in REM to induce and stabilize lucid dreams.” (Website Prophetic) According to Golem, the neural device will be available starting in 2025.
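To make the machine learning part more tangible: sleep stages such as REM are commonly classified from band-power features of EEG epochs. The following Python sketch shows this general technique with made-up data and an assumed sampling rate; it is a toy illustration, not Prophetic’s actual model.

```python
# Toy sketch of EEG-based REM detection (hypothetical data and parameters).
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

FS = 256  # assumed EEG sampling rate in Hz
BANDS = [(0.5, 4), (4, 8), (8, 12), (12, 30), (30, 45)]  # delta..gamma

def band_powers(epoch):
    """Mean spectral power per EEG band for one 30-second epoch."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS * 4)
    return np.array([psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS])

# Stand-ins for labeled training data: EEG epochs and REM/non-REM labels
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((200, FS * 30))  # 200 epochs of 30 seconds each
y = rng.integers(0, 2, 200)                  # 1 = REM, 0 = non-REM

X = np.array([band_powers(e) for e in X_raw])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("P(REM) for a new epoch:", clf.predict_proba(X[:1])[0, 1])
```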

Be My AI

Be My AI is a GPT-4-based extension of the Be My Eyes app. Blind users take a photo of their surroundings or an object and then receive detailed descriptions, which are spoken in a synthesized voice. They can also ask further questions about details and contexts. Be My AI can be used in a variety of situations, including reading labels, translating text, setting up appliances, organizing clothing, and understanding the beauty of a landscape. It also offers written responses in 29 languages, making it accessible to a wider audience. While the app has its advantages, it is not a replacement for essential mobility aids such as white canes or guide dogs. Users are encouraged to provide feedback to help improve the app as it continues to evolve. The app will become even more powerful when it starts to analyze videos instead of photos. This will allow the blind person to move through his or her environment and receive continuous descriptions and assessments of moving objects and changing situations. More information is available at www.bemyeyes.com/blog/announcing-be-my-ai.
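The underlying pattern, an image sent to a GPT-4-class model together with a question, can be sketched with OpenAI’s public API. This is a minimal illustration, not Be My Eyes’ actual integration; the model choice, file name, and prompt are assumptions.

```python
# Minimal sketch of the pattern behind Be My AI: send a photo plus a
# question to a GPT-4-class vision model and read back the description.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

with open("surroundings.jpg", "rb") as f:  # hypothetical photo taken by the user
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # stand-in for any GPT-4-class model with vision input
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this photo in detail for a blind user."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```

Follow-up questions about details and contexts work the same way: each new question is simply appended to the message history before the next call.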

All that Groks is God

Elon Musk has named his new language model Grok. The word comes from the science fiction novel “Stranger in a Strange Land” (1961) by Robert A. Heinlein. This famous novel features two characters who have studied the word. Valentine Michael Smith (aka Michael Smith or “Mike”, the “Man from Mars”) is the main character. He is a human who was born on Mars. Dr. “Stinky” Mahmoud is a semanticist. After Mike, he is the second person who speaks the Martian language but does not “grok” it. In one passage, Mahmoud explains to Mike: “‘Grok’ means ‘identically equal.’ The human cliché ‘This hurts me worse than it does you’ has a Martian flavor. The Martians seem to know instinctively what we learned painfully from modern physics, that observer interacts with observed through the process of observation. ‘Grok’ means to understand so thoroughly that the observer becomes a part of the observed – to merge, blend, intermarry, lose identity in group experience. It means almost everything that we mean by religion, philosophy, and science – and it means as little to us as color means to a blind man.” Mike says a little later in the dialog: “God groks.” Elsewhere, there is a similar statement: “… all that groks is God …”. In a way, this fits in with what is written on the website of Elon Musk’s AI start-up: “The goal of xAI is to understand the true nature of the universe.” The only question is whether this goal will remain science fiction or become reality.

ChatGPT can See, Hear, and Speak

OpenAI reported on September 25, 2023 in its blog: “We are beginning to roll out new voice and image capabilities in ChatGPT. They offer a new, more intuitive type of interface by allowing you to have a voice conversation or show ChatGPT what you’re talking about.” (OpenAI Blog, September 25, 2023) The company gives some examples of using ChatGPT in everyday life: “Snap a picture of a landmark while traveling and have a live conversation about what’s interesting about it. When you’re home, snap pictures of your fridge and pantry to figure out what’s for dinner (and ask follow up questions for a step by step recipe). After dinner, help your child with a math problem by taking a photo, circling the problem set, and having it share hints with both of you.” (OpenAI Blog, September 25, 2023) But the application can not only see; it can also hear and speak: “You can now use voice to engage in a back-and-forth conversation with your assistant. Speak with it on the go, request a bedtime story for your family, or settle a dinner table debate.” (OpenAI Blog, September 25, 2023) More information via openai.com/blog/chatgpt-can-now-see-hear-and-speak.
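For developers, a comparable hear-think-speak loop can be sketched with OpenAI’s public API; the consumer app exposes these capabilities through its interface, not through code. The model choice and file names below are assumptions for illustration.

```python
# Hedged sketch of a voice round-trip: transcribe a question, answer it
# with a chat model, then synthesize the answer as speech.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# 1. Hear: transcribe a recorded question (hypothetical file name)
with open("question.mp3", "rb") as f:
    question = client.audio.transcriptions.create(model="whisper-1", file=f).text

# 2. Think: get an answer from the chat model
answer = client.chat.completions.create(
    model="gpt-4o",  # stand-in for any current chat model
    messages=[{"role": "user", "content": question}],
).choices[0].message.content

# 3. Speak: synthesize the answer with a synthetic voice and save it
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=answer)
speech.write_to_file("answer.mp3")
```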

A Universal Translator Is Coming

The idea of a Babel Fish comes from the legendary novel series “The Hitchhiker’s Guide to the Galaxy”, in which Douglas Adams alluded to the Tower of Babel. In 1997, AltaVista launched a web service for the automatic translation of texts under this name; it was later taken over by Yahoo. Various attempts to implement the Babel Fish in hardware and software followed. Meta’s SeamlessM4T software can handle almost a hundred languages. In a blog post, the American company refers to the work of Douglas Adams. “M4T” stands for “Massively Multilingual and Multimodal Machine Translation”. Again, it is a large model that makes spectacular things possible. It has been trained on four million hours of raw audio. A demo is available at seamless.metademolab.com/demo. The first step is to record a sentence, which is then displayed as text. Next, you select the language you want to translate into, for example Japanese. The sentence is displayed again in text form and, if desired, played back in spoken language using a synthetic voice. You can also have your own voice used, but this feature is not yet integrated into the application. A paper by Meta AI and UC Berkeley is also available.
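For those who want to experiment beyond the demo, SeamlessM4T checkpoints are also available through Hugging Face Transformers. The following sketch, based on the Transformers documentation, translates an English sentence into Japanese text and speech; the checkpoint name and language codes should be treated as assumptions.

```python
# Illustrative sketch: text-to-text and text-to-speech translation with
# SeamlessM4T via Hugging Face Transformers (not Meta's demo code itself).
import scipy.io.wavfile as wavfile
from transformers import AutoProcessor, SeamlessM4TModel

processor = AutoProcessor.from_pretrained("facebook/hf-seamless-m4t-medium")
model = SeamlessM4TModel.from_pretrained("facebook/hf-seamless-m4t-medium")

# English input sentence; in the demo this would come from a voice recording
inputs = processor(text="Where is the train station?", src_lang="eng",
                   return_tensors="pt")

# Translated text: generate token IDs only, then decode them
text_ids = model.generate(**inputs, tgt_lang="jpn", generate_speech=False)
print(processor.decode(text_ids[0].tolist()[0], skip_special_tokens=True))

# Translated speech: generate a waveform spoken by a synthetic voice
audio = model.generate(**inputs, tgt_lang="jpn")[0].cpu().numpy().squeeze()
wavfile.write("translation.wav", rate=model.config.sampling_rate, data=audio)
```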