The Coronavirus Chatbot

The Centers for Disease Control and Prevention (CDC), part of the United States Department of Health and Human Services, has launched a chatbot that helps people decide what to do if they have potential coronavirus symptoms such as fever, cough, or shortness of breath. This was reported by the magazine MIT Technology Review on 24 March 2020. “The hope is the self-checker bot will act as a form of triage for increasingly strained health-care services.” (MIT Technology Review, 24 March 2020) According to the magazine, the chatbot asks users questions about their age, gender, and location, and about any symptoms they are experiencing. It also asks whether they may have been in contact with someone diagnosed with COVID-19. On the basis of the users’ replies, it recommends the best next step. “The bot is not supposed to replace assessment by a doctor and isn’t intended to be used for diagnosis or treatment purposes, but it could help figure out who most urgently needs medical attention and relieve some of the pressure on hospitals.” (MIT Technology Review, 24 March 2020) The service is intended for people who are currently located in the US. International research is being done not only on useful but also on moral chatbots.
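
The following is a minimal sketch of the kind of rule-based triage flow the article describes. The questions, thresholds, and recommendations are illustrative assumptions, not the CDC self-checker’s actual logic.

```python
# Hypothetical rule-based triage, loosely modelled on the description above.
# All rules and wordings are assumptions for illustration only.

def triage(age: int, symptoms: set[str], contact_with_case: bool) -> str:
    """Recommend a next step from a few self-reported answers."""
    emergency = {"severe shortness of breath", "persistent chest pain"}
    if symptoms & emergency:
        return "Seek emergency care immediately."
    if symptoms and (contact_with_case or age >= 65):
        return "Contact your healthcare provider within 24 hours."
    if symptoms:
        return "Stay home, monitor your symptoms, and rest."
    return "No action needed now; practice distancing and hand hygiene."

if __name__ == "__main__":
    print(triage(age=70, symptoms={"fever", "cough"}, contact_with_case=False))
```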

A Morality Markup Language

There are several markup languages for different applications. The best known is certainly the Hypertext Markup Language (HTML). In the field of artificial intelligence (AI), the Artificial Intelligence Markup Language (AIML) has established itself. For synthetic voices, the Speech Synthesis Markup Language (SSML) is used. The question is whether the possibilities of such languages with regard to autonomous systems are exhausted. In his article “The Morality Menu”, Prof. Dr. Oliver Bendel proposed a Morality Markup Language (MOML) for the first time. In 2019, a student research project supervised by the information and machine ethicist investigated the possibilities of existing languages with regard to moral aspects and asked whether a MOML is justified. The results were presented in January 2020. A bachelor thesis at the School of Business FHNW, starting at the end of March 2020, will go one step further: the basic features of a Morality Markup Language are to be developed, its basic structure and specific commands proposed and described, and the application areas, advantages, and disadvantages of such a markup language presented. The client of the work is Prof. Dr. Oliver Bendel; the supervisor is Dr. Elzbieta Pustulka.
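
To make the idea of “basic structure and specific commands” more concrete, here is a purely hypothetical sketch of what a MOML document and its parsing could look like. The tag names (moml, profile, rule, condition, action) are invented for illustration; the actual structure is exactly what the bachelor thesis is meant to define.

```python
# Parsing a hypothetical MOML snippet. Tag and attribute names are assumptions.
import xml.etree.ElementTree as ET

SAMPLE_MOML = """
<moml version="0.1">
  <profile owner="example-user">
    <rule id="no-lying">
      <condition>user_asks_for_assessment</condition>
      <action priority="high">answer_truthfully</action>
    </rule>
    <rule id="protect-privacy">
      <condition>third_party_data_involved</condition>
      <action priority="medium">withhold_personal_details</action>
    </rule>
  </profile>
</moml>
"""

def load_rules(moml_text: str) -> list[dict]:
    """Turn a MOML-like document into a list of simple rule dictionaries."""
    root = ET.fromstring(moml_text)
    rules = []
    for rule in root.iter("rule"):
        rules.append({
            "id": rule.get("id"),
            "condition": rule.findtext("condition"),
            "action": rule.findtext("action"),
            "priority": rule.find("action").get("priority"),
        })
    return rules

if __name__ == "__main__":
    for r in load_rules(SAMPLE_MOML):
        print(r)
```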

SPACE THEA

Space travel includes travel and transport to, through, and from space for civil or military purposes. The launch from Earth is usually carried out with a launch vehicle. The spacecraft, like the lander, can be crewed or uncrewed. The destination can be the orbit of a celestial body, a satellite, a planet, or a comet. Humans have been to the moon several times; now they want to go to Mars. The astronaut will not greet the robots that are already there as if he or she had been lonely for months, for on the spaceship he or she was in the best of company. SPACE THEA spoke to him or her every day. When she noticed that he or she had problems, she changed her tone of voice: it became softer and happier, and what she said gave the astronaut hope again. How SPACE THEA should really sound and what she should say is the subject of a research project that will start in spring 2020 at the School of Business FHNW. Under the supervision of Prof. Dr. Oliver Bendel, a student is developing a voicebot that shows empathy towards an astronaut. The scenario is a proposal that can also be rejected; perhaps in these times it is more important to have a virtual assistant for crises and catastrophes in case one is in isolation or quarantine. Either way, the project in the fields of social robotics and machine ethics is entitled “THE EMPATHIC ASSISTANT IN SPACE (SPACE THEA)”. The results – including the prototype – will be available by the end of 2020.
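
One way a voicebot can “soften” its voice is by adjusting SSML prosody according to a detected mood. The sketch below is a minimal illustration of that idea; the mood labels and prosody values are assumptions and not part of the SPACE THEA project’s design.

```python
# Hypothetical mapping from detected mood to SSML prosody settings.
PROSODY = {
    "distressed": {"rate": "slow", "pitch": "+2st", "volume": "soft"},
    "neutral":    {"rate": "medium", "pitch": "default", "volume": "medium"},
}

def to_ssml(text: str, mood: str) -> str:
    """Wrap a reply in SSML prosody tags chosen according to the user's mood."""
    p = PROSODY.get(mood, PROSODY["neutral"])
    return (
        f'<speak><prosody rate="{p["rate"]}" pitch="{p["pitch"]}" '
        f'volume="{p["volume"]}">{text}</prosody></speak>'
    )

if __name__ == "__main__":
    print(to_ssml("You have done well today. Tomorrow we will try again.", "distressed"))
```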

Stanford University Must Stay in Bed

Stanford University announced that it would cancel in-person classes for the final two weeks of the winter quarter in response to the expanding outbreak of COVID-19. Even before that, the university had targeted larger events, among them the AAAI Spring Symposium Series, a legendary conference series on artificial intelligence, which in recent years has also had a major impact on machine ethics and robot ethics or roboethics. The AAAI announced by email: “It is with regret that we must notify you of the cancellation of the physical meeting of the AAAI Spring Symposium at Stanford, March 23-25, due to the current situation surrounding the COVID-19 outbreak. Stanford has issued the following letter at news.stanford.edu/2020/03/03/message-campus-community-covid-19/, which strongly discourages and likely results in cancellation of any meeting with more than 150 participants.” What will happen to the papers and talks is still unclear; possibly they will become part of the AAAI Fall Symposium in Washington. The symposium “Applied AI in Healthcare: Safety, Community, and the Environment”, one of eight events, had to be cancelled as well – among other things, innovative approaches and technologies that are also relevant for crises and disasters such as COVID-19 would have been discussed there.

The BESTBOT on Mars

Living, working, and sleeping in small spaces next to the same people for months or years would be stressful for even the fittest and toughest astronauts. Neel V. Patel underlines this fact in a recent article for MIT Technology Review. Astronauts who are near Earth can talk to psychologists; for those far away, this will be difficult. Moreover, in the future there could be astronauts in space whose clients cannot afford human psychological support. “An AI assistant that’s able to intuit human emotion and respond with empathy could be exactly what’s needed, particularly on future missions to Mars and beyond. The idea is that it could anticipate the needs of the crew and intervene if their mental health seems at risk.” (MIT Technology Review, 14 January 2020) NASA wants to develop such an assistant together with the Australian tech firm Akin. They could build on research by Oliver Bendel, who, together with his teams, developed the GOODBOT in 2013 and the BESTBOT in 2018. Both can detect users’ problems and react adequately to them. The more recent chatbot even combines face recognition with emotion recognition. If it detects discrepancies between the recognized emotion and what the user has said or written, it makes this a subject of discussion. The BESTBOT on Mars – it would like that.
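
The kind of cross-check described here can be illustrated with a very small sketch: compare the emotion recognized in the user’s face with the sentiment of what the user wrote, and raise any discrepancy in the conversation. The label sets and follow-up questions below are assumptions for illustration, not the BESTBOT’s actual implementation.

```python
# Hypothetical discrepancy check between facial emotion and text sentiment.
from typing import Optional

NEGATIVE = {"sad", "afraid", "angry"}
POSITIVE = {"happy", "calm"}

def detect_discrepancy(facial_emotion: str, text_sentiment: str) -> Optional[str]:
    """Return a follow-up question if face and text point in opposite directions."""
    if facial_emotion in NEGATIVE and text_sentiment == "positive":
        return "You say you are fine, but you look troubled. Do you want to talk about it?"
    if facial_emotion in POSITIVE and text_sentiment == "negative":
        return "You sound worried, but you seem relaxed. How are you really feeling?"
    return None

if __name__ == "__main__":
    print(detect_discrepancy("sad", "positive"))
```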

The Old, New Neons

The company Neon picks up an old concept with its Neons, namely that of avatars. Twenty years ago, Oliver Bendel distinguished between two different types in the Lexikon der Wirtschaftsinformatik. With reference to the second, he wrote: “Avatars, on the other hand, can represent any figure with certain functions. Such avatars appear on the Internet – for example as customer advisors and newsreaders – or populate the adventure worlds of computer games as game partners and opponents. They often have an anthropomorphic appearance and independent behaviour or even real characters …” (Lexikon der Wirtschaftsinformatik, 2001, own translation) It is precisely this type that the company, which is part of the Samsung Group and was founded by Pranav Mistry, is now adapting, taking advantage of today’s possibilities. “These are virtual figures that are generated entirely on the computer and are supposed to react autonomously in real time; Mistry spoke of a latency of less than 20 milliseconds.” (Heise Online, 8 January 2020, own translation) The Neons are supposed to show emotions (as do some social robots that are conquering the market) and thus facilitate and strengthen bonds. “The AI-driven character is neither a language assistant à la Bixby nor an interface to the Internet. Instead, it is a friend who can speak several languages, learn new skills and connect to other services, Mistry explained at CES.” (Heise Online, 8 January 2020, own translation)