
Towards Privacy-Preserving Natural Language Processing

Monthly dialogues and critical perspectives on artificial intelligence, Human Computer Interaction (HCI), participatory design, and crisis-related research for societal impact.

A recording of the talk is available below.

The event is hybrid: participants may attend in person (T2, Computer Science Building, Aalto University) or online via Zoom.

Speaker: Ivan Habernal
Department of Computer Science
Technische Universität Darmstadt, Germany

Talk Abstract: 

In this talk, I will explore the challenges and concerns surrounding privacy in natural language processing (NLP) and present potential solutions to address them. I will discuss the use of anonymization and differential privacy techniques to protect sensitive information while still enabling the training of accurate NLP models. Additionally, I will emphasize the importance of considering legal and ethical implications when implementing privacy-preserving solutions in NLP.
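As illustrative background only (not material from the talk itself), the sketch below shows the core step of DP-SGD, a widely used recipe for training models, including NLP models, with differential privacy: each example's gradient is clipped to a norm bound and calibrated Gaussian noise is added before the parameter update. The function name and parameter values are hypothetical and chosen purely for the example.

import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1):
    """One illustrative DP-SGD update; per_example_grads has shape (batch, dim)."""
    # Clip each example's gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / (norms + 1e-12))
    clipped = per_example_grads * scale
    # Sum the clipped gradients and add Gaussian noise scaled by the
    # noise multiplier and the clipping bound, then average over the batch.
    noisy_sum = clipped.sum(axis=0) + np.random.normal(
        0.0, noise_multiplier * clip_norm, size=params.shape)
    return params - lr * noisy_sum / per_example_grads.shape[0]

# Toy usage: 8 examples, 4-dimensional parameter vector.
params = np.zeros(4)
grads = np.random.randn(8, 4)
params = dp_sgd_step(params, grads)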


Speaker Bio: 

Ivan Habernal leads the independent research group “Trustworthy Human Language Technologies” at the Department of Computer Science, Technische Universität Darmstadt, Germany. In the winter term 2022/23, he also holds an interim professorship in Computational Linguistics at Ludwig Maximilian University of Munich. His current research areas include privacy-preserving NLP, legal argument mining, and explainable and trustworthy models. His research track also covers argument mining and computational argumentation, crowdsourcing, and serious games, among other topics. More information can be found at https://www.trusthlt.org


Critical AI & Crisis Interrogatives (CRAI-CIS) Seminar