Emotion Research:

Cognitive Science / Artificial Intelligence



Computational Models of Affect

Although a number of classification dimensions exist (see the Framework section of this homepage), we have selected the model's level of abstraction as the primary organizing dimension for the different models. We use the following three levels: architecture-level models, task-level models, and mechanism-level models.


A number of models attempt to identify emotions from facial expressions or other behavioral or physiological measures. These models do not fit into the categorization above and are discussed separately at the end of this section.

An excellent earlier review of computational models of emotion can be found in Pfeifer (1988).



Architecture-Level Models

Models in this category exist at both the output-model end of the spectrum and the process-model end.

Task-Level Models

Models in this category tend to reflect the output-model perspective, in that their focus is often on enhancing system performance on a particular problem-solving task.


Mechanism-Level Models

The models in this category attempt to emulate some aspects of the mechanisms involved in emotional processing and therefore lie at the process-model end of the modeling spectrum. They include symbolic, connectionist, and hybrid connectionist-symbolic approaches. We divide these models into two groups: those addressing higher-level phenomena, such as mood-congruent recall, the effects of emotion on performance, and the cognitive appraisal process itself; and those addressing lower-level phenomena, such as classical conditioning, connectionist models of the interaction of cognition and affect and of multiple processing systems (e.g., implicit and explicit processing), and network models of psychopathology. The latter group tends to be implemented using connectionist architectures.
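As a concrete illustration of the lower-level group, the sketch below implements the Rescorla-Wagner model of classical conditioning, a standard associative learning rule of the general kind these mechanism-level models build on. It is a minimal sketch, not any of the surveyed systems; the stimulus names and parameter values are illustrative assumptions.

    # Minimal sketch of the Rescorla-Wagner model of classical
    # conditioning. Stimulus names and parameters are hypothetical.

    def rescorla_wagner(trials, alpha=0.3, beta=1.0, lam=1.0):
        """Update associative strengths V for each conditioned stimulus.

        trials: list of (present_stimuli, us_present) pairs, where
        present_stimuli is a set of stimulus names and us_present is
        True when the unconditioned stimulus (e.g., a shock) occurs.
        """
        V = {}  # associative strength per stimulus, initially 0
        for stimuli, us_present in trials:
            target = lam if us_present else 0.0
            # prediction error: outcome minus the summed prediction
            # of all stimuli present on this trial
            error = target - sum(V.get(s, 0.0) for s in stimuli)
            for s in stimuli:
                V[s] = V.get(s, 0.0) + alpha * beta * error
        return V

    # Ten pairings of a tone with the unconditioned stimulus: the
    # tone's associative strength approaches lambda.
    print(rescorla_wagner([({"tone"}, True)] * 10))

Repeated tone-shock pairings drive the tone's associative strength toward the asymptote lambda, reproducing the basic acquisition curve.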


Computational Models of Emotion Recognition

Addressing the social role of emotion indirectly, at a different level of abstraction, a number of efforts have focused on the recognition of emotional states from facial expressions. Padgett, Cottrell, and Adolphs (1996) have built a two-layer feed-forward connectionist model that recognizes six basic emotions (happiness, surprise, sadness, anger, fear, and disgust) from static face images. The net is trained on blocks of features from the most expressive parts of the face: the eyes and mouth. The model performs categorical perception of the different emotions, and preliminary experiments with human subjects indicate that it successfully emulates human perceptual behavior.

Another example of this type of work is the Facial Expression Analysis Tool (FEAT) and the Facial Action Composing Environment (FACE), developed at the University of Geneva (Kaiser et al., 1994). These systems use connectionist models to construct mappings between emotional states and distinct configurations of facial muscles. Fellous (1995) has explored principal component analysis and discriminant analysis for inferring emotional expressions from a series of five facial measurements. A number of more mathematically oriented approaches to facial expression recognition are described in a survey article by Samal and Iyengar (1992).
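To make the general technique concrete, the sketch below trains a small two-layer feed-forward network (one sigmoid hidden layer, softmax output over the six basic emotions) by gradient descent. It is a minimal illustration of the kind of classifier described above, not Padgett, Cottrell, and Adolphs' actual model: the feature dimensions, hyperparameters, and randomly generated feature vectors standing in for eye and mouth blocks are all assumptions.

    import numpy as np

    EMOTIONS = ["happiness", "surprise", "sadness", "anger", "fear", "disgust"]
    rng = np.random.default_rng(0)

    # Hypothetical stand-in data: each row represents a feature vector
    # extracted from blocks around the eyes and mouth.
    n_samples, n_features, n_hidden = 120, 40, 10
    X = rng.normal(size=(n_samples, n_features))
    y = rng.integers(0, len(EMOTIONS), size=n_samples)
    T = np.eye(len(EMOTIONS))[y]  # one-hot targets

    # Two-layer network: input -> hidden (sigmoid) -> output (softmax)
    W1 = rng.normal(scale=0.1, size=(n_features, n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.1, size=(n_hidden, len(EMOTIONS)))
    b2 = np.zeros(len(EMOTIONS))

    def forward(X):
        H = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))   # hidden activations
        Z = H @ W2 + b2
        Z -= Z.max(axis=1, keepdims=True)           # numerical stability
        P = np.exp(Z) / np.exp(Z).sum(axis=1, keepdims=True)
        return H, P

    lr = 0.5
    for epoch in range(200):
        H, P = forward(X)
        dZ = (P - T) / n_samples                    # softmax cross-entropy gradient
        dW2 = H.T @ dZ
        db2 = dZ.sum(axis=0)
        dH = dZ @ W2.T * H * (1 - H)                # backprop through sigmoid
        dW1 = X.T @ dH
        db1 = dH.sum(axis=0)
        W2 -= lr * dW2
        b2 -= lr * db2
        W1 -= lr * dW1
        b1 -= lr * db1

    _, P = forward(X)
    print("training accuracy:", (P.argmax(axis=1) == y).mean())

On real data the feature vectors would be pixel blocks or filter responses cropped around the eyes and mouth, and a held-out test set would be needed to assess generalization.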

Related Sites: MIT Affective Computing Group (Roz Picard), University of Geneva, Berkeley Psychophysiology Laboratory, UCSC


Editors: Eva Hudlicka [psychometrixassociates.com]

Contributors:

A. Sloman, Cognition and Affect Project, Univ. of Birmingham, UK

Paul den Dulk (dendulk@psy.uva.nl)


Please send us your comments.
