Public defence in Computer Science, M.Sc. (Tech) Sebastian Szyller

Public defence from the Aalto University School of Science, Department of Computer Science
Title of the thesis: Ownership and Confidentiality in Machine Learning

Doctoral student: Sebastian Szyller
Opponent: Prof. Sébastien Gambs, Université du Québec à Montréal, Canada
Custos: Adjunct Prof. N Asokan, Aalto University School of Science, Department of Computer Science

In recent years, machine learning (ML) models have become increasingly popular. In particular, deep neural networks (DNNs) have been at the forefront of ML in the domains of vision, audio and language understanding. Alas, this has also made DNNs targets for a wide array of attacks: their complexity exposes a broader range of vulnerabilities than the much simpler models of the past.

To build and deploy ML models effectively, model builders invest vast resources into gathering, sanitising and labelling data, designing and training the models, and serving them efficiently to their customers. ML models therefore embody valuable intellectual property (IP), and thus a business advantage that needs to be protected. Model extraction attacks aim to mimic the functionality of ML models, or even compromise their confidentiality. An adversary who extracts a model can leverage it for further attacks, continue to use the model without paying, or even undercut the original owner by providing a competing service at a lower cost.

The dissertation explores the feasibility of model extraction attacks, showcasing novel attacks against classification and image-translation DNNs. To address the threat of model extraction, I propose two detection mechanisms able to identify ongoing attacks in certain settings. However, detection and prevention cannot stop a well-equipped adversary from extracting the model. Hence, I focus on reliable ownership verification: by identifying extracted models and tracing them back to the victim, ownership verification can deter model extraction. I demonstrate this by introducing the first watermarking scheme designed specifically against extraction attacks. Finally, I identify the problem of conflicting interactions among protection mechanisms. ML models are vulnerable to various attacks and may therefore need to be deployed with multiple protection mechanisms at once. I show that combining ownership verification with protection mechanisms for other security and privacy concerns can result in conflicts.

Thesis available for public display 10 days prior to the defence at: https://aaltodoc.aalto.fi/doc_public/eonly/riiputus/

Contact details:
Doctoral theses in the School of Science: https://aaltodoc.aalto.fi/handle/123456789/52
