Emotion AI refers to technologies that claim to algorithmically recognize, detect, predict, and infer emotions, emotional states, moods, and even mental health status from a wide range of input data. It is increasingly used in contexts ranging from the mundane (e.g., entertainment) to the high-stakes (e.g., education, healthcare, the workplace). Although emotion AI has been critiqued on grounds of validity, bias, and surveillance, it continues to be patented, developed, and deployed without public debate, resistance, or regulation. In this talk, I highlight some of my research group's work on the workplace to discuss: 1) how emotion AI technologies are conceived by their inventors and what values are embedded in their design, and 2) the perspectives of data subjects: the humans who produce the data that make emotion AI possible and whose experiences are shaped by these technologies. I argue that emotion AI is not merely technical; it is sociotechnical and political, and it enacts and shifts power, such that it can contribute to marginalization and harm despite its claimed benefits. I advocate that we, including regulators, need to shift how technological inventions are evaluated.