
What do pre-trained language models learn about you?

Monthly dialogues and critical perspectives on artificial intelligence, Human-Computer Interaction (HCI), participatory design, and crisis-related research for societal impact.

A recording of the talk is available here:

The event is hybrid: participants may attend in person (T6 A136, Computer Science Building, Aalto University) or online via Zoom.

Talk Abstract: 

Pre-trained language models have been the driving force behind many NLP applications, e.g. translation and search engines. To train such models, we rely on huge amounts of freely available online data. Yet, while this large-scale unsupervised training regime has become widely popular due to its successes, it has become increasingly unclear what these models actually learn. As a result, a new research direction has emerged that focuses on the interpretability of such models. In particular, much effort has been put into detecting and mitigating harmful social biases in NLP systems (for instance, racial or gender bias). There are numerous instances in which systems have been shown to contain 'biases', but how exactly do they cause harm? When should we try to intervene? And how do such biases emerge in the first place? These are some of the difficult questions that are still under investigation.

In this talk, Rochelle will give an overview of the problems the NLP community faces and discuss some of the common approaches for detecting and mitigating bias, along with their current limitations. She will then discuss her own work on studying which stereotypes are encoded in popular state-of-the-art language models, and how these stereotypes were found to change readily with new linguistic experiences.


Speaker Bio: 

Rochelle Choenni is a PhD researcher at the Institute for Logic, Language and Computation (ILLC) at the University of Amsterdam, supervised by Dr. Ekaterina Shutova. In this seminar she will discuss research from her project “From Learning to Meaning: A new approach to Characterizing Sentences and Stereotypes”. Since September 2021, Rochelle has also been working as a Google PhD Fellow with Dr. Dan Garrette, studying how language-specific information is shared in multilingual models and how that information can be leveraged for better generalization to low-resource languages. Both projects have a strong focus on interpretability: Rochelle believes that we first have to better understand the language models we work with, and their current limitations, before we can make meaningful improvements.

Homepage

Critical AI & Crisis Interrogatives (CRAI-CIS) Seminar