Title: Hands-On Adversarial Machine Learning

Instructor: Yacin Nadji

Abstract: Machine learning has become commonplace in software engineering and will continue to grow in importance. Currently, most work focuses on improving classifier accuracy. However, as more and more models interact with the real world, practitioners must consider how resilient their models are against adversarial manipulation. Successful attacks can have serious implications, like crashing a car, misclassifying malicious code, or enabling fraud.

In this workshop, you will learn to think like an adversary so that you can build more resilient machine learning systems. You'll discover how to use free and open source tools to construct attacks against, and defenses for, machine learning models, and how to holistically identify the points of a system an adversary could exploit. You'll leave able to critically examine a machine learning system for weaknesses, mount attacks to surface problems, and implement and evaluate practical defenses.
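As a taste of the kind of attack covered, here is a minimal, hypothetical sketch of an evasion attack in the spirit of the fast gradient sign method, run against a toy hand-weighted logistic regression. The weights, input, and perturbation budget are all made up for illustration; the workshop's actual tools and models are not shown here.

```python
# Hypothetical sketch: evasion attack on a toy logistic-regression model.
# All weights, inputs, and the perturbation budget (eps) are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "trained" model: weights and bias chosen by hand.
w = np.array([2.0, -1.0])
b = -0.5

def predict(x):
    # Probability the model assigns to class 1.
    return sigmoid(w @ x + b)

x = np.array([1.0, 0.2])   # benign input, classified as class 1
assert predict(x) > 0.5

# For this linear model, the gradient of the class-1 score w.r.t. the
# input is simply w; stepping against its sign pushes the score down.
eps = 0.5
x_adv = x - eps * np.sign(w)

assert predict(x_adv) < 0.5  # tiny perturbation flips the decision
```

The same sign-of-the-gradient idea scales to neural networks, where the gradient is obtained by backpropagation instead of being read directly off the weights.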

Level: Intermediate

Prerequisites: Familiarity with Python (or a similar programming language) and basic machine learning. For the latter, students who have preprocessed data and trained and evaluated a model will be in good shape to tackle the material.

Required Materials: A laptop capable of running Docker or Jupyter notebooks.