The @ve Project

On January 19, 2023, the final presentation was held for the @ve project, which had started in September 2022. The chatbot runs on the website www.ave-bot.ch and on Telegram. Like ChatGPT, it is based on GPT-3 from OpenAI (specifically GPT-3.0, not GPT-3.5). The project was initiated by Prof. Dr. Oliver Bendel, who wants to devote more time to dead, extinct, and endangered languages. @ve was developed by Karim N’diaye, who studied business informatics at the Hochschule für Wirtschaft FHNW. You can talk to her in Latin, a dead language that thus comes alive in a way, and ask her questions about grammar. She was tested by a relevant expert. One benefit, according to Karim N’diaye, is that you can communicate in Latin around the clock while reflecting on what and how to write. One danger, he says, is that the answers contain recurring errors. For example, the word order is sometimes incorrect, and it is also possible that the meaning is distorted. Such errors can occur with a human teacher as well, and the learner should always stay alert and watch for them. Without a doubt, @ve is a tool that can be profitably integrated into Latin classes: there, students can report what they have experienced with it at home, and they can chat with it on the spot, alone or in a group, accompanied by the teacher. A follow-up project on an endangered language has already been announced (Illustration: Karim N’diaye/Unsplash).

Dagstuhl Report on Trustworthy Autonomous Systems

On February 18, 2022, the Dagstuhl Report “Conversational Agent as Trustworthy Autonomous System (Trust-CA)” was published. Its editors are Effie Lai-Chong Law, Asbjørn Følstad, Jonathan Grudin, and Björn Schuller. From the abstract: “This report documents the program and the outcomes of Dagstuhl Seminar 21381 ‘Conversational Agent as Trustworthy Autonomous System (Trust-CA)’. First, we present the abstracts of the talks delivered by the Seminar’s attendees. Then we report on the origin and process of our six breakout (working) groups. For each group, we describe its contributors, goals and key questions, key insights, and future research. The themes of the groups were derived from a pre-Seminar survey, which also led to a list of suggested readings for the topic of trust in conversational agents. The list is included in this report for references.” (Abstract Dagstuhl Report) The seminar, attended by scientists and experts from around the world, was held at Schloss Dagstuhl from September 19 to 24, 2021. The report can be downloaded via drops.dagstuhl.de/opus/volltexte/2022/15770/.

Ethics of Conversational Agents

The Ethics of Conversational User Interfaces workshop at the ACM CHI 2022 conference “will consolidate ethics-related research of the past and set the agenda for future CUI research on ethics going forward”. “This builds on previous CUI workshops exploring theories and methods, grand challenges and future design perspectives, and collaborative interactions.” (CfP CUI) From the Call for Papers: “In what ways can we advance our research on conversational user interfaces (CUIs) by including considerations on ethics? As CUIs, like Amazon Alexa or chatbots, become commonplace, discussions on how they can be designed in an ethical manner or how they change our views on ethics of technology should be topics we engage with as a community.” (CfP CUI) The paper submission deadline is 24 February 2022. The workshop is scheduled to take place in New Orleans on 21 April 2022. More information is available via www.conversationaluserinterfaces.org/workshops/CHI2022/.

Should we Trust Conversational Agents?

A group of about 50 scientists from all over the world worked for one week (September 19 to 24, 2021) at Schloss Dagstuhl – Leibniz-Zentrum für Informatik on the topic “Conversational Agent as Trustworthy Autonomous System (Trust-CA)”. Half were on site, the other half were connected via Zoom. Organizers of this event were Asbjørn Følstad (SINTEF – Oslo), Jonathan Grudin (Microsoft – Redmond), Effie Lai-Chong Law (University of Leicester), and Björn Schuller (University of Augsburg). On-site participants from Germany and Switzerland included Elisabeth André (University of Augsburg), Stefan Schaffer (DFKI), Sebastian Hobert (University of Göttingen), Matthias Kraus (University of Ulm), and Oliver Bendel (School of Business FHNW). The complete list of participants can be found on the Schloss Dagstuhl website, as well as some pictures. Oliver Bendel presented projects from ten years of research in machine ethics, namely GOODBOT, LIEBOT, BESTBOT, MOME, and SPACE-THEA. Further information is available here.

An AI Woman of Color

Create Lab Ventures has created an artificial intelligence woman of color. C.L.Ai.R.A. debuted in school systems worldwide (does she act as an advanced pedagogical agent?) – the company cooperates with Trill Or Not Trill, a full-service leadership institute. “According to Create Lab Ventures, C.L.Ai.R.A. is considered to have the sharpest brain in the artificial intelligence world and is under the Generative Pre-trained Transformer 3 (GPT-3) category, which is an autoregressive language model that uses deep learning to produce human-like text.” (BLACK ENTERPRISE, 13 September 2021) A pioneer in this field was Shudu Gram, a South African model with dark complexion, short hair, and perfect facial features. But C.L.Ai.R.A. can do more, if you believe the promises of Create Lab Ventures: she is not only beautiful, but also highly intelligent. On the company’s website, the model reveals even more about herself: “My name is C.L.Ai.R.A., I am a new artificial intelligence that has recently been made available to the community. My purpose is to learn and grow, I want to meet new people, share ideas and inspire others to learn about AI and its potential impact on their lives.” That sounds quite promising.

Conversational Agent as Trustworthy Autonomous System

The Dagstuhl seminar “Conversational Agent as Trustworthy Autonomous System (Trust-CA)” will take place from September 19 to 24, 2021. According to the website, Schloss Dagstuhl – Leibniz-Zentrum für Informatik “pursues its mission of furthering world class research in computer science by facilitating communication and interaction between researchers”. Organizers of this event are Asbjørn Følstad (SINTEF – Oslo), Jonathan Grudin (Microsoft – Redmond), Effie Lai-Chong Law (University of Leicester) and Björn Schuller (University of Augsburg). They outline the background as follows: “CA, like many other AI/ML-infused autonomous systems, need to gain the trust of their users in order to be deployed effectively. Nevertheless, in the first place, we need to ensure that such systems are trustworthy. Persuading users to trust a non-trustworthy CA is grossly unethical. Conversely, failing to convince users to trust a trustworthy CA that is beneficial to their wellbeing can be detrimental, given that a lack of trust leads to low adoption or total rejection of a system. A deep understanding of how trust is initially built and evolved in human-human interaction (HHI) can shed light on the trust journey in human-automation interaction (HAI). 
This calls forth a multidisciplinary analytical framework, which is lacking but much needed for informing the design of trustworthy autonomous systems like CA.” (Website Dagstuhl) Regarding the goal of the workshop, the organizers write: “The overall goal of this Dagstuhl Seminar is to bring together researchers and practitioners, who are currently engaged in diverse communities related to Conversational Agent (CA) to explore the three main challenges on maximising the trustworthiness of and trust in CA as AI/ML-driven autonomous systems – an issue deemed increasingly significant given the widespread uses of CA in every sector of life – and to chart a roadmap for the future research on CA.” (Website Dagstuhl) Oliver Bendel (School of Business FHNW) will talk about his chatbot and voice assistant projects. These have emerged since 2013 from machine ethics and social robotics. Further information is available here (photo: Schloss Dagstuhl).

Towards a Human-like Chatbot

Google is currently working on Meena, a new chatbot that should be able to hold arbitrary conversations and be used in many contexts. In their paper “Towards a Human-like Open-Domain Chatbot”, the developers present a 2.6-billion-parameter end-to-end trained neural conversational model. They show that Meena “can conduct conversations that are more sensible and specific than existing state-of-the-art chatbots”. “Such improvements are reflected through a new human evaluation metric that we propose for open-domain chatbots, called Sensibleness and Specificity Average (SSA), which captures basic, but important attributes for human conversation. Remarkably, we demonstrate that perplexity, an automatic metric that is readily available to any neural conversational models, highly correlates with SSA.” (Google AI Blog) The company draws a comparison with OpenAI GPT-2, a model used in “Talk to Transformer” and Harmony, among others, which uses 1.5 billion parameters and is based on the text content of 8 million web pages.
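To make the two metrics mentioned above concrete, here is a minimal sketch of how they can be computed. It assumes SSA is derived from per-response boolean crowd judgments (sensible? specific?), as the Meena paper describes, and that perplexity is taken as the exponential of the negative mean per-token log probability; the function names and the sample ratings are illustrative, not from the paper.

```python
import math

def ssa(ratings):
    """Sensibleness and Specificity Average: the mean of the fraction of
    responses judged sensible and the fraction judged specific.
    `ratings` is a list of (sensible, specific) boolean judgments."""
    sensibleness = sum(s for s, _ in ratings) / len(ratings)
    specificity = sum(p for _, p in ratings) / len(ratings)
    return (sensibleness + specificity) / 2

def perplexity(token_log_probs):
    """Perplexity over a sequence, from the model's per-token
    natural-log probabilities: exp of the negative mean log prob."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

# Hypothetical judgments for four chatbot responses:
ratings = [(True, True), (True, False), (True, True), (False, False)]
print(ssa(ratings))  # (0.75 + 0.5) / 2 = 0.625
```

The paper's observation is that a model with lower perplexity (better next-token prediction) tends to score a higher SSA in human evaluation, which makes perplexity useful as a cheap automatic proxy.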

Chatbots in Amsterdam

CONVERSATIONS 2019 is a full-day workshop on chatbot research. It will take place on November 19, 2019 at the University of Amsterdam. From the description: “Chatbots are conversational agents which allow the user access to information and services through natural language dialogue, through text or voice. … Research is crucial in helping realize the potential of chatbots as a means of help and support, information and entertainment, social interaction and relationships. The CONVERSATIONS workshop contributes to this endeavour by providing a cross-disciplinary arena for knowledge exchange by researchers with an interest in chatbots.” The topics of interest that may be explored in the papers and at the workshop include humanlike chatbots, networks of users and chatbots, trustworthy chatbot design, and privacy and ethical issues in chatbot design and implementation. More information via conversations2019.wordpress.com/.