Defence of doctoral thesis in the field of Computer Science, Diego Parente Paiva Mesquita
Bayesian statistics and graph neural networks are powerful tools for dealing with uncertainty and network data, respectively. Bayesian statistics rests on solid theoretical foundations but depends on sampling techniques that scale poorly. Graph neural networks (GNNs) are renowned for their success in large-scale applications (e.g., in bioinformatics and natural language processing), but their designs are largely based on empirical intuition. In a nutshell, this work i) broadens the scope of applications for Bayesian inference, and ii) deepens our understanding of the core design principles of GNNs.
On the Bayesian side, this thesis develops two sampling techniques for distributed settings where communication is at a premium, such as federated learning. These are especially useful when data arises in an inherently distributed fashion (e.g., from smartphones) and privacy constraints prevent us from disclosing it to a central server. Naturally, these methods may prove essential for leveraging mobile data in critical applications, such as personalized medicine.
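To give a flavour of communication-efficient distributed inference, the sketch below implements consensus Monte Carlo, a classic baseline in this area; it is an illustrative example only, not necessarily the method developed in the thesis. Each worker samples from a "subposterior" on its local data shard (with a tempered prior) and sends its samples to the server only once; the server then combines aligned draws by a precision-weighted average. The Gaussian-mean model, shard sizes, and all variable names are assumptions made for this toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a Gaussian mean model, data split across 4 workers.
# Each worker holds a private shard that is never sent to the server.
shards = [rng.normal(loc=2.0, scale=1.0, size=250) for _ in range(4)]
sigma2 = 1.0          # known observation variance
prior_var = 100.0     # vague Gaussian prior on the unknown mean

def subposterior_samples(x, n_workers, n_samples=1000):
    """Sample the local subposterior of the mean on one worker.

    The prior is tempered (its precision divided by n_workers) so that
    the product of all subposteriors recovers the full-data posterior.
    """
    prec = len(x) / sigma2 + 1.0 / (prior_var * n_workers)
    mean = (x.sum() / sigma2) / prec
    return rng.normal(mean, np.sqrt(1.0 / prec), size=n_samples)

# Communication step: each worker transmits its samples ONCE.
local = [subposterior_samples(x, n_workers=len(shards)) for x in shards]

# Consensus step on the server: precision-weighted average of draws.
weights = [1.0 / s.var() for s in local]
combined = sum(w * s for w, s in zip(weights, local)) / sum(weights)

print(combined.mean())  # close to the full-data posterior mean (~2.0)
```

For Gaussian models this weighted average is exact; for general models it is an approximation, which is precisely the kind of limitation that motivates more refined communication-efficient samplers.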
This thesis also develops the first principled methodology for combining Bayesian posteriors from different studies in a meta-analysis (i.e., a generalization study). In contrast to previous methods, ours does not require the meta-analyst to provide data summaries. Our approach is agnostic to study-specific complexities, which are all encapsulated in the respective posteriors. We expect this tool will make meta-analyses easier to carry out and will encourage scientists to share their posterior distributions.
On the deep learning side, this thesis revisits popular design choices for GNNs and sheds new light on them. We show that it is possible to achieve state-of-the-art performance by adding minimal features to the most basic formulation of polynomial spectral GNNs. We also challenge the role of pooling layers in GNNs, showing that they are usually expendable. We believe these works will help researchers and practitioners decide where best to invest their time and resources to build more accurate GNNs.
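As a hedged illustration of what "the most basic formulation of polynomial spectral GNNs" looks like (a generic sketch, not the thesis's exact model), a single layer applies a low-degree polynomial of the normalized adjacency matrix to the node features. The toy graph, the fixed coefficients `thetas`, and the weight matrix `W` are all assumptions made for this example; in practice the coefficients and weights would be learned.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy undirected graph with 5 nodes and random 3-dimensional features.
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(5, 3))

# Symmetric normalization with self-loops (GCN-style).
A_hat = A + np.eye(5)
d = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(d, d))

def poly_spectral_layer(X, A_norm, thetas, W):
    """One polynomial spectral filter: sum_k thetas[k] * A_norm^k @ X @ W."""
    out = np.zeros((X.shape[0], W.shape[1]))
    P = X
    for theta in thetas:
        out += theta * P @ W   # contribute the k-th power term
        P = A_norm @ P         # raise the propagation one more power
    return out

thetas = [0.5, 0.3, 0.2]        # polynomial coefficients (fixed here)
W = rng.normal(size=(3, 2))     # feature transform: 3 -> 2 dimensions
H = poly_spectral_layer(X, A_norm, thetas, W)
print(H.shape)  # (5, 2)
```

Even this bare-bones filter already captures multi-hop neighbourhood information; the thesis's point is that little more than this is needed to match far more elaborate architectures.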
Overall, the results in this thesis show that:
1. We can learn a broad class of Bayesian models from distributed data with a limited communication budget;
2. It is possible to combine the results of multiple Bayesian studies into a meta-analysis in a principled manner;
3. Simple GNNs, with minimalist designs, often perform as well as state-of-the-art models.
Opponent: Lecturer Dr. Antonio Vergari, University of Edinburgh, Scotland
Custos: Professor Samuel Kaski, Aalto University School of Science, Department of Computer Science
Contact details of the doctoral student: [email protected], +55 85 996286017
The public defence will be organised via Zoom. Link to the event
The dissertation is publicly displayed 10 days before the defence in the publication archive Aaltodoc of Aalto University.