A Robot Enforces Social Distancing

According to Gizmodo, a robot from Boston Dynamics has been deployed to a park in Singapore to remind people to follow social distancing guidelines during the pandemic. Spot is not designed as a security robot like Knightscope's K5 or K3, but it has other qualities: it walks on four legs and is very fast. The machine, which was set loose on 8 May 2020 in Bishan-Ang Mo Kio Park, “broadcasts a message reminding visitors they need to stay away from other humans, as covid-19 poses a very serious threat to our health”. It “was made available for purchase by businesses and governments last year and has specially designed cameras to make sure it doesn’t run into things.” (Gizmodo, 8 May 2020) According to a press release from Singapore’s GovTech agency, the cameras will not be able to track or recognize specific individuals, “and no personal data will be collected” (Gizmodo, 8 May 2020). COVID-19 demonstrates that digitization and technologization can be helpful in crises and disasters. Service robots such as security robots, transport robots, care robots, and disinfection robots are in increasing demand.

AI as a Secret Weapon Against COVID-19?

Artificial intelligence is underestimated in some respects but overestimated in many. It is currently seen as a secret weapon against COVID-19. But it most probably is not. Alex Engler, a David M. Rubenstein Fellow, puts it clearly: “Although corporate press releases and some media coverage sing its praises, AI will play only a marginal role in our fight against Covid-19. While there are undoubtedly ways in which it will be helpful – and even more so in future pandemics – at the current moment, technologies like data reporting, telemedicine, and conventional diagnostic tools are far more impactful.” (Wired, 26 April 2020) Above all, however, it is social distancing that interrupts the transmission paths and thus curbs the spread of the virus. And it is drugs that will solve the problem this year or next. So there is a need for behavioural adjustment and medical research. Artificial intelligence is not really needed. Alex Engler identified the heuristics needed for a healthy skepticism of AI claims around COVID-19 and explained them in Wired magazine.

Announcing AIhub.org

AAAI has announced the launch of a new website that aims to connect the AI community with the public. “By providing free, high-quality technical and accessible information about AI, AIhub.org aims to improve public understanding so that everyone can have a meaningful discussion about the deployment of AI in society.” (Newsletter AAAI, 23 April 2020) According to the organization, AIhub.org hosts daily updates about the latest news, opinions, tutorials, and events in AI. “All information is produced by those working directly in the field, without filter or intermediary.” (Newsletter AAAI, 23 April 2020) This means that everyone in the AI community has the opportunity to contribute to the website and address topics such as AI ethics and robot philosophy. More information is available at aihub.org.

WHO Fights COVID-19 Misinformation with Viber Chatbot

A new WHO chatbot on Rakuten Viber aims to get accurate information about COVID-19 to people in several languages. “Once subscribed to the WHO Viber chatbot, users will receive notifications with the latest news and information directly from WHO. Users can also learn how to protect themselves and test their knowledge on coronavirus through an interactive quiz that helps bust myths. Another goal of the partnership is to fight misinformation.” (Website WHO) A few days ago, the Centers for Disease Control and Prevention of the United States Department of Health and Human Services launched a chatbot that helps people decide what to do if they have potential coronavirus symptoms such as fever, cough, or shortness of breath. However, that dialog system is only intended for people who are permanently or temporarily in the USA. The new WHO chatbot is freely available in English, Russian, and Arabic, with more than 20 languages to be added.

The Coronavirus Chatbot

The Centers for Disease Control and Prevention of the United States Department of Health and Human Services have launched a chatbot that helps people decide what to do if they have potential coronavirus symptoms such as fever, cough, or shortness of breath. This was reported by the magazine MIT Technology Review on 24 March 2020. “The hope is the self-checker bot will act as a form of triage for increasingly strained health-care services.” (MIT Technology Review, 24 March 2020) According to the magazine, the chatbot asks users questions about their age, gender, and location, and about any symptoms they’re experiencing. It also inquires whether they may have met someone diagnosed with COVID-19. On the basis of the users’ replies, it recommends the best next step. “The bot is not supposed to replace assessment by a doctor and isn’t intended to be used for diagnosis or treatment purposes, but it could help figure out who most urgently needs medical attention and relieve some of the pressure on hospitals.” (MIT Technology Review, 24 March 2020) The service is intended for people who are currently located in the US. Internationally, research is being done not only on useful but also on moral chatbots.
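The CDC has not published the self-checker's internal logic; the following is only a minimal sketch of how such a rule-based triage flow could look. The symptom lists, age threshold, and recommendation texts are illustrative assumptions, not the actual decision rules of the bot.

```python
# Minimal sketch of a rule-based triage flow, loosely modelled on the
# self-checker described above. Thresholds and messages are illustrative
# assumptions, not the CDC's actual decision logic.

EMERGENCY_SIGNS = {"trouble breathing", "persistent chest pain", "bluish lips"}
COMMON_SYMPTOMS = {"fever", "cough", "shortness of breath"}

def triage(age, symptoms, contact_with_case):
    """Return a next-step recommendation based on the user's answers."""
    if symptoms & EMERGENCY_SIGNS:
        return "Call emergency services immediately."
    if symptoms & COMMON_SYMPTOMS:
        if age >= 65 or contact_with_case:
            return "Contact your healthcare provider within 24 hours."
        return "Stay home, monitor your symptoms, and avoid contact with others."
    if contact_with_case:
        return "Self-quarantine and watch for symptoms for 14 days."
    return "No action needed now; follow general prevention guidelines."

if __name__ == "__main__":
    print(triage(age=70, symptoms={"fever", "cough"}, contact_with_case=False))
```

The point of such a flow is not diagnosis but sorting: a handful of answers is enough to separate emergencies from cases that can safely stay at home.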

A Morality Markup Language

There are several markup languages for different applications. The best known is certainly the Hypertext Markup Language (HTML). AIML has established itself in the field of artificial intelligence (AI), and SSML is used for synthetic voices. The question is whether the possibilities with regard to autonomous systems have been exhausted. In the article “The Morality Menu”, Prof. Dr. Oliver Bendel proposed a Morality Markup Language (MOML) for the first time. In 2019, a student research project supervised by the information and machine ethicist investigated the possibilities of existing languages with regard to moral aspects and whether a MOML is justified. The results were presented in January 2020. A bachelor thesis at the School of Business FHNW will go one step further from the end of March 2020: the basic features of a Morality Markup Language are to be developed, its basic structure and specific commands proposed and described, and the application areas, advantages, and disadvantages of such a markup language presented. The client of the work is Prof. Dr. Oliver Bendel; the supervisor is Dr. Elzbieta Pustulka.
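MOML has not been specified yet, so any concrete syntax is speculative. Purely as a hypothetical illustration of what a "basic structure" with "specific commands" might look like, one could imagine an XML-style document that a machine reads before acting; all element and attribute names below are invented for this sketch.

```python
# Purely hypothetical sketch: MOML does not exist yet. The element and
# attribute names are invented for illustration and parsed with Python's
# standard XML tooling.
import xml.etree.ElementTree as ET

MOML_EXAMPLE = """
<moml version="0.1">
  <agent id="household-robot">
    <rule id="privacy" priority="1">
      <condition>camera_active</condition>
      <action>blur_faces</action>
    </rule>
    <rule id="honesty" priority="2">
      <condition>user_question</condition>
      <action>answer_truthfully</action>
    </rule>
  </agent>
</moml>
"""

root = ET.fromstring(MOML_EXAMPLE)
for rule in root.iter("rule"):
    condition = rule.findtext("condition")
    action = rule.findtext("action")
    print(f"{rule.get('id')} (priority {rule.get('priority')}): "
          f"if {condition} then {action}")
```

Whether moral settings should really be expressed as prioritized condition-action rules is exactly the kind of question the bachelor thesis is meant to answer.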

SPACE THEA

Space travel includes travel and transport to, through, and from space for civil or military purposes. Take-off from Earth is usually accomplished with a launch vehicle. The spaceship, like the lander, is manned or unmanned. The target can be the orbit of a celestial body, a satellite, a planet, or a comet. Man has been to the moon several times; now man wants to go to Mars. The astronaut will not greet the robots that are already there as if he or she had been lonely for months, for on the spaceship he or she was in the best of company. SPACE THEA spoke to him or her every day. When she noticed that he or she had problems, she changed her tone of voice: the voice became softer and happier, and what she said gave the astronaut hope again. How SPACE THEA really sounds and what she should say is the subject of a research project that starts in spring 2020 at the School of Business FHNW. Under the supervision of Prof. Dr. Oliver Bendel, a student is developing a voicebot that shows empathy towards an astronaut. The scenario is a proposal that may also be rejected. Perhaps in these times it is more important to have a virtual assistant for crises and catastrophes in case one is in isolation or quarantine. For now, however, the project in the fields of social robotics and machine ethics is entitled “THE EMPATHIC ASSISTANT IN SPACE (SPACE THEA)”. The results, including the prototype, will be available by the end of 2020.
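How the voice should actually change is precisely what the project will have to work out. A minimal sketch, assuming an upstream component has already classified the astronaut's mood and that the reply is rendered via SSML prosody attributes (mentioned above for synthetic voices), might look like this; the mood labels and attribute values are illustrative, not the SPACE THEA design.

```python
# Minimal sketch of mood-dependent prosody. Assumes a mood label from some
# upstream emotion detector; the SSML attribute values are illustrative only.

PROSODY = {
    # mood label -> (speaking rate, pitch shift), chosen for illustration
    "distressed": ("slow", "+2st"),
    "neutral":    ("medium", "default"),
    "cheerful":   ("medium", "+1st"),
}

def empathic_ssml(text, mood):
    """Wrap a reply in SSML prosody tags that soften the voice if needed."""
    rate, pitch = PROSODY.get(mood, PROSODY["neutral"])
    return (f'<speak><prosody rate="{rate}" pitch="{pitch}">'
            f"{text}</prosody></speak>")

print(empathic_ssml("You have done everything right today.", "distressed"))
```

The design choice here is deliberately simple: the empathy lies less in the markup than in when the softer voice is triggered and what is actually said.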

Stanford University Must Stay in Bed

Stanford University announced that it would cancel in-person classes for the final two weeks of the winter quarter in response to the expanding outbreak of COVID-19. Even before that, the university had turned its attention to larger events. These included the AAAI Spring Symposium Series, a legendary conference series on artificial intelligence, which in recent years has also had a major impact on machine ethics and robot ethics or roboethics. The AAAI organization announced by email: “It is with regret that we must notify you of the cancellation of the physical meeting of the AAAI Spring Symposium at Stanford, March 23-25, due to the current situation surrounding the COVID-19 outbreak. Stanford has issued the following letter at news.stanford.edu/2020/03/03/message-campus-community-covid-19/, which strongly discourages and likely results in cancellation of any meeting with more than 150 participants.” What will happen with the papers and talks is still unclear; possibly they will become part of the AAAI Fall Symposium in Washington. The symposium “Applied AI in Healthcare: Safety, Community, and the Environment”, one of eight events, had to be cancelled as well. Among other things, innovative approaches and technologies that are also relevant for crises and disasters such as COVID-19 would have been discussed there.

The BESTBOT on Mars

Living, working, and sleeping in small spaces next to the same people for months or years would be stressful for even the fittest and toughest astronauts. Neel V. Patel underlines this fact in a recent article for MIT Technology Review. If astronauts are near Earth, they can talk to psychologists; if they are far away, this becomes difficult. Moreover, in the future there could be astronauts in space whose clients cannot afford human psychological support. “An AI assistant that’s able to intuit human emotion and respond with empathy could be exactly what’s needed, particularly on future missions to Mars and beyond. The idea is that it could anticipate the needs of the crew and intervene if their mental health seems at risk.” (MIT Technology Review, 14 January 2020) NASA wants to develop such an assistant together with the Australian tech firm Akin. They could build on research by Oliver Bendel, who, together with his teams, developed the GOODBOT in 2013 and the BESTBOT in 2018. Both can detect users’ problems and react adequately to them. The more recent chatbot even combines face recognition with emotion recognition. If it detects discrepancies between the recognized emotion and what the user has said or written, it makes this a subject of discussion. The BESTBOT on Mars: it would like that.
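The publications on the BESTBOT do not reduce it to a few lines of code, so the following is only a sketch of the idea described above: compare the sentiment of what the user writes with the emotion read from the camera and, on a mismatch, raise it in conversation. The word lists, emotion labels, and replies are assumptions for illustration, not the BESTBOT's actual implementation.

```python
# Illustrative discrepancy check between text sentiment and facial emotion.
# Labels, word lists, and wording are assumptions, not the BESTBOT's code.

NEGATIVE_EMOTIONS = {"sad", "angry", "fearful"}
POSITIVE_WORDS = {"fine", "good", "great", "happy", "okay"}

def text_sentiment(message):
    """Very crude stand-in for a real text sentiment classifier."""
    words = set(message.lower().split())
    return "positive" if words & POSITIVE_WORDS else "negative"

def respond(message, facial_emotion):
    """Make a detected discrepancy a subject of discussion."""
    if text_sentiment(message) == "positive" and facial_emotion in NEGATIVE_EMOTIONS:
        return (f"You say you are fine, but you look rather {facial_emotion}. "
                "Do you want to talk about it?")
    return "Glad to hear that. How can I help you today?"

print(respond("I am fine", facial_emotion="sad"))
```

In a real system both classifiers would of course be learned models rather than word lists; the sketch only shows where the comparison sits in the dialogue logic.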

The Old, New Neons

The company Neon picks up an old concept with its Neons, namely that of avatars. Twenty years ago, Oliver Bendel distinguished between two different types in the Lexikon der Wirtschaftsinformatik. With reference to the second, he wrote: “Avatars, on the other hand, can represent any figure with certain functions. Such avatars appear on the Internet – for example as customer advisors and newsreaders – or populate the adventure worlds of computer games as game partners and opponents. They often have an anthropomorphic appearance and independent behaviour or even real characters …” (Lexikon der Wirtschaftsinformatik, 2001, own translation) It is precisely this type that the company, which is part of the Samsung Group and was founded by Pranav Mistry, is now adapting, taking advantage of today’s possibilities. “These are virtual figures that are generated entirely on the computer and are supposed to react autonomously in real time; Mistry spoke of a latency of less than 20 milliseconds.” (Heise Online, 8 January 2020, own translation) The Neons are supposed to show emotions (as do some social robots that are conquering the market) and thus facilitate and strengthen bonds. “The AI-driven character is neither a language assistant à la Bixby nor an interface to the Internet. Instead, it is a friend who can speak several languages, learn new skills and connect to other services, Mistry explained at CES.” (Heise Online, 8 January 2020, own translation)