
CS Special Seminar: Kat Roemmich "Emotional Privacy at Risk: AI, Human Dignity, and the Future of Governance"

This talk is hosted by the Department of Computer Science.

Emotional Privacy at Risk: AI, Human Dignity, and the Future of Governance

Kat Roemmich
University of Michigan
Google Scholar

Abstract: As AI systems increasingly interpret and respond to human emotions, they pose profound challenges to privacy, agency, and human dignity. This talk presents a research program that systematically examines these challenges and proposes solutions grounded in both empirical observation and normative theory.

Kat will outline a three-phase research trajectory that begins by identifying the ethical and privacy risks emotion AI poses in social media, workplace, and healthcare contexts through qualitative inquiry; continues with measuring and evaluating these risks using mixed-methods survey designs to elicit normative emotional privacy judgments; and culminates in the development of the Minimal Justice Framework (MJF), a scalable normative methodology for assessing when AI-enabled data flows violate the "minimal justice" standard of human dignity.

Kat's empirical findings demonstrate that the deployment of emotion AI in institutional contexts, such as the workplace, can reconfigure decision-making conditions, amplify coercive power asymmetries, and destabilize both labor rights and cognitive autonomy. Emotional privacy intrusions introduce novel risks and harms that dominant privacy frameworks in HCI do not adequately address. In response, she extends privacy theory and operationalizes it through the MJF.

The MJF offers a novel AI governance paradigm for evaluating how technological risks affect human dignity, an inviolable principle recognized by international ethical consensus, and, by extension, the fundamental rights and entitlements grounded in that dignity. She will demonstrate the MJF's application to the EU AI Act, highlighting its potential as both a model for human rights impact assessments and a tool for anticipatory governance. Specifically, the MJF clarifies harm thresholds by identifying novel risks to human dignity and specifying when technological impacts on core human capabilities, such as emotional autonomy, practical reason, and affiliation, cross concrete thresholds that constitute significant harm. It also prescribes technical, policy, and governance interventions to mitigate these dignity risks.

Bio: Kat Roemmich is a privacy and AI ethics researcher specializing in the societal and ethical impacts of emerging AI technologies, with particular expertise in emotion AI and AI governance. Integrating empirical inquiry with normative theory, her research critically examines how AI systems that interpret and respond to human emotions, especially in social, workplace, and healthcare contexts, introduce novel ethical and privacy risks that challenge prevailing theories of privacy.

Bridging human-computer interaction (HCI) research with philosophical analysis, Kat investigates how technology-enabled emotional privacy intrusions reshape power dynamics and undermine human capacities for agency and dignity. She has developed innovative empirical methods to measure normative judgments of emotional privacy and created the Minimal Justice Framework (MJF), a governance paradigm that integrates Helen Nissenbaum's Contextual Integrity theory with Martha Nussbaum's Capabilities Approach to establish thresholds for when AI systems undermine human dignity and to prescribe socio-technical interventions that prevent or mitigate harm. Her award-winning research has been published at premier HCI venues (CHI and CSCW) and has influenced regulatory and policy discussions around emotion AI.

Looking ahead, Kat's research will advance collaborations with organizations, policymakers, and computer scientists to develop emotional privacy-preserving socio-technical systems and to translate her research into actionable ethical standards for engineering and design, aligning AI innovation with the protection of fundamental human freedoms. Following an industry career in information systems and enterprise data management, Kat will defend her Ph.D. at the University of Michigan School of Information in July 2025.

Looking ahead, Kat outlines three emerging lines of inquiry designed to build socio-technical systems while validating and refining the MJF:

  1. constructing a data donation repository of Data Subject Access Requests (DSARs) to trace the socio-political risks of trading sensitive emotion inferences in commercial and political domains;

  2. developing resilience testing protocols to prevent adversarial emotional manipulation; and

  3. exploring whether and how the MJF might adapt to evaluate potential proto-moral agency that could emerge in advanced generative AI systems.

By integrating contemporary philosophy, HCI methods, and AI governance models, this research agenda charts a path toward dignity-centered AI governance frameworks capable of ensuring that technological advancement protects and enhances—rather than erodes—the core human capacities that define and make meaningful our shared social world: a world we have reason to value.
