Special Seminar: Alex Lamb "Latent Data Augmentation and Modular Structure for Improved Generalization"

This talk is arranged at the Department of Computer Science.

Latent Data Augmentation and Modular Structure for Improved Generalization

Alex Lamb
University of Montreal

Wednesday, 17 March at 18:00
via Zoom: request the link by email from [email protected]
Note! The link will be sent separately to CS staff every day.

Abstract: Deep neural networks have seen dramatic improvements in performance, much of this improvement driven by new architectures and training algorithms with better inductive biases. At the same time, the future of AI lies in open-ended systems that run on data unlike what was seen during training, data that may be drawn from a changing or adversarial distribution. These problems also require reasoning at greater scale and over longer time horizons, as well as consideration of a complex world system with many reused structures and subsystems. This talk will survey some areas where deep networks can improve their inductive biases, along with my research in this direction. These algorithms dramatically change the behavior of deep networks, yet they are highly practical and easy to use, conforming to simple interfaces that allow them to be dropped into existing codebases.

Bio: I am currently a PhD student at the University of Montreal, advised by Yoshua Bengio, and a recipient of the 2020 Twitch PhD Fellowship. My research sits at the intersection of developing new machine learning algorithms and new applications. On the algorithms side, I am particularly interested in (1) making deep networks more modular and richly structured and (2) improving the generalization performance of deep networks, especially across shifting domains. I am especially interested in techniques that draw functional inspiration from the brain and psychology to improve performance on real tasks. In terms of applications of machine learning, my most recent work has been on historical Japanese documents and has resulted in KuroNet, a publicly released service that generates automatic analyses and annotations to make classical Japanese documents (more) understandable to readers of modern Japanese.


Read more: Machine Learning researchers working at the Department of Computer Science at Aalto University