Developing emotionally intelligent robots that stimulate autonomy




1. Developing emotionally intelligent robots that stimulate autonomy

Within the SELEMCA project, which focuses on robots for care, an internship on developing emotionally intelligent robots that stimulate autonomy is available. In this internship, two existing computational models are integrated, and their internal logic is verified through simulation experiments. If feasible within the internship period, lab experiments with human participants are performed and/or working prototypes of applications are developed. The background of the SELEMCA project and of the models follows next.

2. Services of Electro-Mechanical Care Agencies

SELEMCA (Services of Electro-Mechanical Care Agencies) is part of the national Crisp program, which sustains the creative industries (e.g., arts, design, video, animation, games) with academic knowledge, tools, and methods so as to come up with novel product-service combinations that boost the knowledge economy. Research in SELEMCA focuses on three topics: Intelligence, Affect, and Creativity. Intelligence should be interpreted as both information and reasoning. Affect is limited to involvement-distance trade-offs and to emotion generation and regulation. Creativity focuses on the process of insight, conceptual blending, and idea optimization. The application area and population under investigation are in the health domain, where we compare adolescents with the elderly. The aim is to keep patients empowered and self-supportive for as long as possible. The means are technologies that behave as smart, sensitive, and ingenious humanoids. These agents, robots, avatars, coaches, and so on work in interactive environments such as games and virtual reality, or inhabit augmented household objects (e.g., chairs, tables, and coffee machines). We cooperate in a consortium of academic, social, and business partners of national as well as international origin.

3. How to be a friend: Silicon Coppélia

Silicon Coppélia is a software program that can simulate emotions and regulate them as appropriate (e.g., Hoorn, Pontier, & Siddiqui, 2012). Coppélia simulates goal-related beliefs that give rise to affect. It simulates beliefs about the responsibility of other agencies (humans included) for helping or hindering progress toward a goal state. It also holds beliefs about the probability of achieving those goals. Coppélia has beliefs about the way the world is, which it holds to be true, and beliefs that events influence certain states of the world. These beliefs, together with variables such as ethics and aesthetics, govern the seven emotions that Coppélia can express: joy, distress, hope, fear, anger, guilt, and surprise. Silicon Coppélia can simulate emotions and changes of beliefs about the responsibility of other agents for her being happy or sad. The program moreover estimates the chance that goal states will occur and can make irrational decisions when appropriate (e.g., “I should leave you now but I love you too much”).
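To make the mapping from beliefs to emotions concrete, the following is a minimal sketch of how goal-related beliefs could give rise to the seven emotions named above. The function name, parameters, and formulas are illustrative assumptions, not the published Silicon Coppélia model.

```python
# Hypothetical sketch of Coppelia-style emotion generation from
# goal-related beliefs. All names and formulas are assumptions made
# for illustration; the published model is more elaborate.

def generate_emotions(goal_importance, belief_likelihood, achieved,
                      other_responsible, self_responsible, expected):
    """Map beliefs about a single goal onto the seven emotions
    Silicon Coppelia can express.

    achieved: True / False once the outcome is known, None while the
    goal is still pending; expected: the prior probability the agent
    assigned to achieving the goal.
    """
    emotions = {e: 0.0 for e in
                ("joy", "distress", "hope", "fear",
                 "anger", "guilt", "surprise")}
    if achieved is None:
        # Outcome still uncertain: prospect-based emotions.
        emotions["hope"] = goal_importance * belief_likelihood
        emotions["fear"] = goal_importance * (1.0 - belief_likelihood)
    elif achieved:
        emotions["joy"] = goal_importance
    else:
        emotions["distress"] = goal_importance
        if other_responsible:     # blame another agent for the failure
            emotions["anger"] = goal_importance
        if self_responsible:      # blame oneself
            emotions["guilt"] = goal_importance
    # Surprise grows with the mismatch between expectation and outcome.
    if achieved is not None:
        emotions["surprise"] = abs(expected - (1.0 if achieved else 0.0))
    return emotions
```

For example, a failed, highly important goal for which another agent is held responsible yields high distress and anger, plus surprise in proportion to how confidently success had been expected.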

Silicon Coppélia develops state predicates about the user (or any other agency) in a given situation and context. The features of the user are appraised for different aspects, such as ethics and aesthetics. These features are compared with the personal goals of Coppélia, so that the user gains personal meaning for her. This way, the user becomes relevant to the Coppélia system, which directs the intensity of her affective responses; the direction of her affect is regulated by valence. Perspective taking is done through knowledge of the goals of others. This way, the goals of Coppélia can converge with those of the patient, caregiver, professional, or manager so that, for instance, she can coach the patient to keep his promise of doing daily exercises.

Through relevance and through current and future valence, appraisal frames are established that guide her intentions to use her (human) counterpart to achieve her goals (e.g., maintenance, winning a game, or helping the user). Appraisal frames also determine her friendliness towards the user (i.e., Coppélia’s involvement), balanced by distance. During affective decision making, Coppélia selects the one of four actions from which she expects the highest satisfaction: positive approach (e.g., compliment the user), negative approach (e.g., criticize the user), change (e.g., instruct the user), or walk away. Coppélia’s affective decisions may change as she modulates her responses.
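The selection among the four actions can be sketched as maximizing expected satisfaction, where satisfaction blends a rational utility with an affective bias derived from the involvement-distance balance. The weighting scheme and bias terms below are illustrative assumptions, not the actual Coppélia decision procedure.

```python
# Illustrative sketch of affective decision making in the spirit of
# Silicon Coppelia: choose one of four actions by expected satisfaction.
# The bias terms and the 50/50 blend are simplifying assumptions.

def choose_action(utilities, involvement, distance, affect_weight=0.5):
    """utilities: expected goal utility per action, each in [0, 1].
    involvement, distance: the current involvement-distance trade-off."""
    # Affective bias (assumption): involvement favours a positive
    # approach, distance favours criticism or withdrawal.
    bias = {
        "positive_approach": involvement,
        "negative_approach": distance,
        "change": 0.5 * (involvement + distance),
        "walk_away": distance - involvement,
    }

    def satisfaction(action):
        return ((1 - affect_weight) * utilities[action]
                + affect_weight * bias[action])

    return max(utilities, key=satisfaction)
```

With this blend, a strongly involved Coppélia may compliment the user even when criticism has a somewhat higher raw utility, which captures the idea that appraisal frames modulate, rather than replace, rational choice.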

4. How to be a trustworthy friend: Moral Coppélia

From our basic research (e.g., Van Vugt, Hoorn, & Konijn, 2009), we know that the ethical position of a synthetic agent heavily determines our feelings of friendship for it. A robot may look good and be intelligent, but if it turns out to be a cover for spyware, you will not feel friendly towards it. Being trustworthy turned out to be the core of judging the moral fiber of an agent system (ibid.).

Our robots use four principles to reason about ethical dilemmas; the same principles are used by medical-ethical committees. In order of importance, our systems evaluate the autonomy of the patient, whether a treatment is beneficial (beneficence), whether it is not harmful (non-maleficence), and whether justice is done. The robot does this so well that it emulates the decisions of medical-ethical professionals (Pontier & Hoorn, 2012). When the ethical reasoner is expanded with Silicon Coppélia, it can show that, owing to differences in emotional attachment, people in certain cases prefer to sacrifice five unknown patients to save the one they love (Pontier, Widdershoven, & Hoorn, 2012).
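A prioritized-principle reasoner of this kind can be sketched as a weighted scoring of treatment options over the four principles, with autonomy weighted highest. The concrete weights and the scoring scale are illustrative assumptions, not the values used in the published model.

```python
# Hedged sketch of a moral reasoner in the spirit of Moral Coppelia:
# each option is scored on the four medical-ethical principles and the
# option with the highest weighted sum wins. Weights reflect the order
# of importance stated in the text; their magnitudes are assumptions.

PRINCIPLE_WEIGHTS = {
    "autonomy": 4.0,         # most important
    "beneficence": 3.0,
    "non_maleficence": 2.0,
    "justice": 1.0,          # least important
}

def judge(options):
    """options: {option_name: {principle: score in [-1, 1]}},
    where -1 means the option violates the principle and +1 means it
    fully satisfies it. Returns the morally preferred option."""
    def moral_score(name):
        scores = options[name]
        return sum(PRINCIPLE_WEIGHTS[p] * scores.get(p, 0.0)
                   for p in PRINCIPLE_WEIGHTS)
    return max(options, key=moral_score)
```

For instance, when a competent patient refuses a beneficial treatment, respecting the refusal scores high on autonomy and outweighs the foregone benefit, so the reasoner sides with the patient, mirroring how such dilemmas are typically resolved by ethics committees.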

The Moral Coppélia software is important in many respects. First, by showing moral awareness and being capable of making ethical decisions, the robot may strike the user as more trustworthy, so that the user develops feelings of friendship. Second, the robot can take several perspectives, explaining to the patient how the doctor makes decisions without being emotionally attached, and explaining to the doctor how a concerned relative may feel about the doctor’s decisions. Pointing out ethical dilemmas is in itself a form of moral behavior that will probably make users feel more friendly towards the robot. The latest extension of this work is the development of the moral principle of autonomy into a computational model.

5. Other internships within the SELEMCA project

Within the SELEMCA project that focuses on robots for care, several other internships are also available:

• Developing creative robots: Computational ACASIA (creativity model)

• Developing robots that distinguish fiction from reality: Computational Epistemics of the Virtual (knowledge model of virtual encounters)

• Developing emotionally intelligent robots that stimulate autonomy: Integration of Silicon Coppélia (on emotion regulation) with model of Moral Autonomy

• Developing emotionally intelligent, creative robots: Integration of Silicon Coppélia with ACASIA model

• Explaining criminal behavior with Moral Coppélia, a model of moral reasoning and affective decision making