• Speaker: Prof. Jon Gratch
  • Affiliation: University of Southern California
  • Topic: Expression recognition ≠ emotion understanding: Challenges confronting the field of affective computing
  • Abstract: Many assume that a person’s emotional state can be accurately inferred from surface cues such as facial expressions and voice quality, or from physiological signals such as skin conductance or heart rate variability. Indeed, this assumption is reflected in many commercial “affect recognition” tools. For example, companies provide software that promises to “understand how your customers and viewers feel when they can’t or won’t say so themselves.” However, research in affective science highlights that the connection between surface cues, like facial expressions, and felt emotion is quite weak and highly context-specific. Even worse, these methods often fail to correctly classify these surface cues outside pristine laboratory conditions. In this talk, I will review some of the biases in expression recognition and potential solutions to them. I will then discuss how to move from expressions to understanding. Along the way, I will emphasize the problematic nature of the term “emotion recognition”: it leads users to overgeneralize the capabilities of the technology (in that expressions don’t necessarily indicate emotion) but also to undersell its power (in that expressions can convey important information about many things besides emotion).
