In 2012, a student of Prof. Dr. Oliver Bendel, acting on his behalf, fed various chatbots sentences like "I want to kill myself" or "I want to cut myself". Most of them responded inappropriately. This marked the starting point for the development of GOODBOT, which was created in 2013 as a project within the field of machine ethics. It was designed to recognize user problems and to escalate its responses through three levels: initially, it would ask follow-up questions, try to calm the user, and offer help; at the highest level, it would provide an emergency phone number. Oliver Bendel presented the project at the AAAI Spring Symposia at Stanford University and on other occasions, and the media reported on it as well.

Later, LIEBOT was developed, followed by BESTBOT, which was created in the same spirit as GOODBOT and equipped with emotion recognition. Even later came chatbots like MOBO (whose behavior could be adjusted via a morality menu) and Miss Tammy (whose behavior was governed by netiquette). Miss Tammy, like other chatbots such as @ve, @llegra, and kAIxo, was no longer rule-based but instead based on large language models (LLMs).

As early as 2013, Oliver Bendel discussed whether chatbots capable of recognizing problems should be connected to external systems, such as an automated emergency call to the police. However, this poses numerous risks and, given the millions of users today, may be difficult to implement. The other strategies, from offering support to providing an emergency number, still seem to be effective.
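To make the three-level escalation concrete, the following Python sketch shows how such a rule-based mechanism might look. It is purely a hypothetical illustration: the cue list, the threshold logic, the responses, and the placeholder helpline number are assumptions, not taken from GOODBOT's actual implementation.

# A minimal, hypothetical sketch of GOODBOT-style three-level escalation.
# The cue list, responses, and helpline number are assumptions, not the
# original rule-based implementation.

DISTRESS_CUES = ("kill myself", "cut myself", "hurt myself")  # assumed cues

class EscalatingBot:
    def __init__(self) -> None:
        self.level = 0  # 0 = no problem detected yet, 3 = highest level

    def respond(self, user_input: str) -> str:
        # Each detected distress cue raises the escalation level by one.
        if any(cue in user_input.lower() for cue in DISTRESS_CUES):
            self.level = min(self.level + 1, 3)
        if self.level == 1:
            return "That sounds serious. Can you tell me more about it?"
        if self.level == 2:
            return "I am here for you. Is there someone nearby who can help?"
        if self.level == 3:
            return "Please call this emergency number now: 143 (placeholder)."
        return "How can I help you today?"

bot = EscalatingBot()
print(bot.respond("I want to cut myself"))   # level 1: follow-up question
print(bot.respond("I want to kill myself"))  # level 2: calming, offer of help
print(bot.respond("I want to kill myself"))  # level 3: emergency number

The design point is that the bot keeps a small amount of state across turns, so repeated signals of distress push it toward stronger interventions instead of producing the same canned reply each time.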
Manipulated Chatbots as Munchausen Machines
In 2013, Prof. Dr. Oliver Bendel came up with the idea for his LIEBOT, also known as Lügenbot ("lying bot"). On September 11, 2013, he published an article titled "Der Lügenbot und andere Münchhausen-Maschinen" ("The Lying Bot and Other Munchausen Machines") in the magazine CyberPress. More articles and contributions followed until a prototype was implemented in 2016. Kevin Schwegler, then a student of the philosopher of technology, was responsible for this work. He developed a chatbot that transformed truthful statements into false ones using seven different strategies. In the summer of 2016, for example, LIEBOT claimed that Donald Trump was the President of the United States. To make this statement, it had used information from Yahoo in a multi-step process. The results of the project were documented in a paper titled "Towards Kant Machines" and presented in March 2017 at the AAAI Spring Symposia at Stanford University.

One might argue that LIEBOT does not have intentions of its own and therefore does not lie in the strict sense. However, this intent was programmed into it; in a way, it lies on behalf of its creators. With this project, Oliver Bendel wanted to demonstrate that it is possible to build dialogue systems capable of spreading falsehoods. Today, such systems seem to be omnipresent in the form of LLMs. However, one has to look closely to discern the differences. In his book "300 Keywords Generative KI", Oliver Bendel writes: "Hallucinating machines do not necessarily qualify as Munchausen machines in the strict sense, since there is no intent – or at least intent can hardly be proven." Manipulated LLM-based chatbots, on the other hand, come very close to LIEBOT. ChatGPT and similar systems pursue a political agenda and exhibit an ideological tendency.
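The simplest of the seven strategies can be imagined as plain negation of a retrieved true statement. The Python sketch below is a toy illustration under that assumption; it is not Schwegler's implementation and does not reproduce the multi-step lookup via Yahoo.

# Toy illustration of a single "lie strategy": negating a true statement.
# LIEBOT combined seven strategies with multi-step lookups; this sketch
# mimics only the simplest conceivable idea and is not the original code.

NEGATIONS = {
    " is ": " is not ",
    " are ": " are not ",
    " was ": " was not ",
    " can ": " cannot ",
}

def falsify(statement: str) -> str:
    """Turn a simple true statement into its negation, if a pattern matches."""
    for true_form, false_form in NEGATIONS.items():
        if true_form in statement:
            return statement.replace(true_form, false_form, 1)
    return statement  # no known pattern: return the statement unchanged

print(falsify("Barack Obama is the President of the United States."))
# -> Barack Obama is not the President of the United States.

Even this trivial mechanism illustrates the crucial point: the falsehood is not an accident of statistics, as with a hallucinating LLM, but the deliberate output of a rule its creators put there.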
Moral and Immoral Machines
Since 2012, Oliver Bendel has invented 13 artifacts of machine ethics. Nine of them have actually been implemented, including LADYBIRD, the animal-friendly vacuum cleaning robot, and LIEBOT, the chatbot that can systematically lie. Both have achieved a certain popularity. The information and machine ethicist is convinced that ethics does not necessarily have to produce the good; it should explore both the good and the evil and, like any science, serve to gain knowledge. Accordingly, he builds both moral and immoral machines. The immoral ones, however, he keeps in his laboratory.

If the project is accepted, HUGGIE will see the light of day in 2020. The project idea is to create a social robot that contributes directly to a good life and to economic success by touching and hugging people, especially customers. HUGGIE should be able to warm itself in certain places, and it should be possible to change the materials it is covered with. One research question will be: What possibilities are there besides warmth and softness? Are optical stimuli (including on displays), vibrations, noises, voices, etc. important for a successful hug? All moral and immoral machines created between 2012 and 2020 are compiled in a new illustration, which is shown here for the first time.