Explaining how rational and affective processes explain criminal behavior

From Master Projects
Revision as of 21:02, 21 March 2013 by Tbosse (talk | contribs). Contact persons: Tibor Bosse, Johan F. Hoorn, Matthijs A. Pontier. Master areas: AI and Communication, Technical Artificial Intelligence, Hu...




1. Explaining how rational and affective processes explain criminal behavior

Within the SELEMCA project, which focuses on robots for care, an internship on explaining criminal behavior with a model of moral reasoning and affective decision making is available. In this internship, an existing computational model is extended. Its internal logic will be validated through simulation experiments. If within the student's reach, lab experiments with real humans will be performed and/or working prototypes of applications will be developed. The background of the SELEMCA project and the model follows next.

2. Services of Electro-Mechanical Care Agencies

SELEMCA (Services of Electro-Mechanical Care Agencies) is part of the national Crisp program, which supports the creative industries (e.g., arts, design, video, animation, games) with academic knowledge, tools, and methods so as to come up with novel product-service combinations that boost the knowledge economy (http://www.crispplatform.nl/projects/selemca). Research in SELEMCA focuses on three topics: Intelligence, Affect, and Creativity. Intelligence should be interpreted as both information and reasoning. Affect is limited to involvement-distance trade-offs, emotion generation, and emotion regulation. Creativity focuses on the process of insight, conceptual blending, and idea optimization. The application area and population under investigation are in the health domain, where we compare adolescents with the elderly. The aim is to keep patients empowered and self-supportive for as long as possible. The means are technologies that behave as smart, sensitive, and ingenious humanoids. These agents, robots, avatars, coaches, and so on work in interactive environments such as games and virtual reality, or inhabit augmented household objects (e.g., chairs, tables, and coffee machines). We cooperate in a consortium of academic, social, and business partners, of national as well as international origin.

3. Explaining criminal behavior with Moral Coppélia, a model of moral reasoning, affective decision making and emotion regulation

Criminologists have found that both affective ("hot") and rational ("cool") influences play a role in decision making about criminal behavior. Feelings of fear and worry evoked by a criminal prospect, as well as the perceived risk of sanction, were found to mediate the relations between these dispositions and criminal choice. Activating a cognitive mode strengthened the relation between perceived risk and criminal choice, whereas activating an affective mode strengthened the relation between negative affect and criminal choice. These findings are aggregated in a model that explains criminal behavior.

These findings and the resulting model seem to fit well with Moral Coppélia, our integrated model of moral reasoning, affective decision making and emotion regulation. In this project, the student will extend the affective decision-making module of Moral Coppélia with an emotion regulation strategy that prefers options leading to a desired emotional state, and will test whether the model fits the data gathered by the criminologists.
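As a rough illustration of the dual-process structure described above, the sketch below combines a negative-affect pathway and a perceived-risk pathway, with the activated processing mode strengthening its own pathway. The function name, weights, and linear form are illustrative assumptions for this proposal, not the actual Moral Coppélia implementation or the criminologists' fitted model.

```python
# Illustrative sketch of a dual-process (hot/cool) criminal-choice model.
# All weights and the linear form are assumptions, not the published model.

def criminal_choice(negative_affect, perceived_risk, mode):
    """Return a propensity in [0, 1] toward the criminal option.

    negative_affect: fear/worry evoked by the criminal prospect (0..1)
    perceived_risk:  perceived risk of sanction (0..1)
    mode:            'affective' or 'cognitive' (the activated mode)
    """
    # The activated mode strengthens the relation of its own pathway
    # with criminal choice (the moderation effect described above).
    affect_weight = 0.7 if mode == "affective" else 0.3
    risk_weight = 0.7 if mode == "cognitive" else 0.3

    # Both negative affect and perceived risk inhibit the criminal choice.
    propensity = 1.0 - affect_weight * negative_affect - risk_weight * perceived_risk
    return max(0.0, min(1.0, propensity))

# With identical inputs, the cognitive mode lets a high perceived risk
# suppress the criminal choice more strongly than the affective mode does.
cool = criminal_choice(negative_affect=0.2, perceived_risk=0.8, mode="cognitive")
hot = criminal_choice(negative_affect=0.2, perceived_risk=0.8, mode="affective")
```

Fitting such a model to the criminologists' data would then amount to estimating the pathway weights per mode rather than fixing them by hand.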

4. How to be a trustworthy friend: Moral Coppélia

From our basic research (e.g., Van Vugt, Hoorn, & Konijn, 2009), we know that the ethical position of a synthetic agent heavily determines our feelings of friendship for it. A robot may look good and be intelligent but if it is a cover-up of spyware you will not feel friendly towards it. Being trustworthy turned out to be the core of judging the moral fiber of an agent system (ibid.).

Our robots use four principles to reason about ethical dilemmas; these principles are also used by medical-ethical committees. In order of importance, our systems evaluate the autonomy of the patient, whether treatment is beneficial (beneficence), whether it is not harmful (non-maleficence), and whether justice is done. The robot does this so well that it emulates the decisions of medical-ethical professionals (Pontier & Hoorn, 2012). When the ethical reasoner is expanded with Silicon Coppélia, it can show that, due to differences in emotional attachment, people in certain cases prefer to sacrifice five unknown patients to save the one they love (Pontier, Widdershoven, & Hoorn, 2012).

The Moral Coppélia software is important in many respects. First, by showing moral awareness and by being capable of making ethical decisions, the robot may appear more trustworthy to the user, who may therefore develop feelings of friendship for it. Second, the robot can take several perspectives: explaining to the patient how the doctor makes decisions without being emotionally attached, and explaining to the doctor how a concerned relative may feel about the doctor's decisions. Pointing out ethical dilemmas is in itself a form of moral behavior that will probably make users feel more friendly towards the robot. The latest extension of this work is the development of the moral principle of autonomy into a computational model.
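The kind of principle-based evaluation described above can be sketched as a weighted scoring of treatment options. The weights, the option encoding, and the function names here are illustrative assumptions; the actual model and its parameters are described in Pontier & Hoorn (2012).

```python
# Illustrative sketch of ranking treatment options against the four
# medical-ethical principles, weighted by the order of importance given
# in the text. Weights and scores are assumptions, not the published model.

PRINCIPLES = ["autonomy", "beneficence", "non_maleficence", "justice"]
WEIGHTS = {"autonomy": 4, "beneficence": 3, "non_maleficence": 2, "justice": 1}

def moral_value(option):
    """Weighted sum over how well an option satisfies each principle (-1..1)."""
    return sum(WEIGHTS[p] * option[p] for p in PRINCIPLES)

def choose(options):
    """Pick the name of the option with the highest weighted moral value."""
    return max(options, key=lambda name: moral_value(options[name]))

# Hypothetical dilemma: a competent patient refuses a beneficial treatment.
options = {
    "respect_refusal": {"autonomy": 1.0, "beneficence": -0.5,
                        "non_maleficence": 0.0, "justice": 0.0},
    "force_treatment": {"autonomy": -1.0, "beneficence": 1.0,
                        "non_maleficence": -0.2, "justice": 0.0},
}
```

With these assumed weights, the high priority of autonomy makes the reasoner respect the patient's refusal even though forcing treatment scores better on beneficence; shifting the weights (for instance, through the emotional attachment modeled by Silicon Coppélia) can flip such decisions.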

5. Other internships within the SELEMCA project

Within the SELEMCA project, which focuses on robots for care, several other internships are available:

• Developing creative robots: Computational ACASIA (creativity model)
• Developing robots that distinguish fiction from reality: Computational Epistemics of the Virtual (knowledge model of virtual encounters)
• Developing emotionally intelligent robots that stimulate autonomy: Integration of Silicon Coppélia (on emotion regulation) with a model of Moral Autonomy
• Developing emotionally intelligent, creative robots: Integration of Silicon Coppélia with the ACASIA model
• Explaining criminal behavior with Moral Coppélia, a model of moral reasoning and affective decision making