mia

Library for running membership inference attacks (MIA) against machine learning models

These are attacks against the privacy of the training data. In a membership inference attack, an adversary tries to guess whether a given example was used to train a target model, using only queries to the model; see the paper by Shokri et al. for details. Currently, you can use the library to evaluate the robustness of your Keras or PyTorch models to MIA.
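The attack setting can be made concrete with a short experiment. The sketch below is not the mia library's own API; it is a minimal, self-contained illustration of a confidence-based membership inference attack (in the style of Yeom et al.) against a deliberately overfit scikit-learn model, with all names and parameters chosen for the example.

```python
# Minimal sketch of a confidence-based membership inference attack.
# Illustrative only: the target model, dataset, and names are assumptions
# for this example, not the mia library's API.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Build a dataset and split it into "member" (training) and
# "non-member" halves.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5,
                                            random_state=0)

# Target model: an unconstrained random forest tends to overfit,
# so membership leaks through its confidence scores.
target = RandomForestClassifier(n_estimators=50, random_state=0)
target.fit(X_in, y_in)

# Attack signal: the model's confidence in the true label of each example.
conf_in = target.predict_proba(X_in)[np.arange(len(y_in)), y_in]
conf_out = target.predict_proba(X_out)[np.arange(len(y_out)), y_out]

# Evaluate how well confidence separates members from non-members.
scores = np.concatenate([conf_in, conf_out])
membership = np.concatenate([np.ones(len(conf_in)), np.zeros(len(conf_out))])
print("Attack AUC:", roc_auc_score(membership, scores))
```

An AUC near 0.5 means member and non-member confidences are indistinguishable, i.e. little membership leakage; values well above 0.5 indicate that the model leaks membership information. The mia library automates this kind of evaluation, including the shadow-model attack of Shokri et al.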

Attack

Key facts
  • Maturity
  • Support: C4DT: Inactive; Lab: Active
  • Technical

Security and Privacy Engineering Laboratory

Prof. Carmela Troncoso

The Security and Privacy Engineering Laboratory develops tools and methodologies to help engineers build systems that respect societal values such as security, privacy, and non-discrimination. Currently, they are working on:
  • Machine Learning impact on society
  • Evaluating privacy in complex systems
  • Engineering privacy-preserving systems
