CS 598: Special Topics on Adversarial Machine Learning

Fall 2020

Instructor
Bo Li
lbo@illinois.edu
4310 Siebel Center

Lectures
Tue/Thu 3:30-4:45pm
Zoom

Teaching Assistant
Fan Wu, fanw6@illinois.edu

Office Hours
Bo Li: Tue/Thu 4:45-5:30 pm

Forums
Piazza

Course Overview

This course begins with an introduction to topics in machine learning, security, privacy, adversarial machine learning, and game theory. We then take a research perspective, discussing the novelty of each topic, related work, and potential extensions. Through a series of readings and projects, students will come to understand different machine learning algorithms, analyze their implementations and security vulnerabilities, and develop the ability to conduct research on related topics.

Prerequisites: CS 446 Machine Learning, CS 461 Computer Security I, (CS 463 Computer Security II)

Please contact the instructor if you have questions regarding the material or concerns about whether your background is suitable for the course.

Course Schedule

The following table outlines the schedule for the course. We will update it as the semester progresses. Please refer to the class syllabus for more details.

8/25 Course Overview
Readings: Background ideas about general adversarial machine learning, including the fundamental causes of the problem and the current state of research
Materials: Slides

8/27 Evasion Attacks Against Machine Learning Models (Against Classifiers)
Readings:
- Intriguing Properties of Neural Networks
- Explaining and Harnessing Adversarial Examples
- Towards Evaluating the Robustness of Neural Networks
Materials: Slides

9/1 Evasion Attacks Against Machine Learning Models (Non-traditional Attacks)
Readings:
- Generating Adversarial Examples with Adversarial Networks
- Spatially Transformed Adversarial Examples
- Robust Physical-World Attacks on Deep Learning Models
- Wasserstein Adversarial Examples via Projected Sinkhorn Iterations
Materials: Slides

9/3 Evasion Attacks Against Machine Learning Models (Against Detectors/Generative Models/RL)
Readings:
- Houdini: Fooling Deep Structured Prediction Models
- Adversarial Examples for Semantic Segmentation and Object Detection
- Adversarial Examples for Generative Models
- Adversarial Attacks on Neural Network Policies
Materials: Slides

9/8 Evasion Attacks Against Machine Learning Models (Black-box Attacks)
Readings:
- Transferability in Machine Learning: From Phenomena to Black-Box Attacks Using Adversarial Samples
- Exploring the Space of Black-box Attacks on Deep Neural Networks
- Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models
Materials: Slides

9/10 Detection Against Adversarial Attacks
Readings:
- Exploring the Space of Adversarial Images (input pre-processing)
- Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality
- SafetyNet: Detecting and Rejecting Adversarial Examples Robustly
Materials: Slides

9/15 Defenses Against Adversarial Attacks (Empirical)
Readings:
- Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks
- Towards Deep Learning Models Resistant to Adversarial Attacks
- PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks
- Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation
- Fortified Networks: Improving the Robustness of Deep Networks by Modeling the Manifold of Hidden Representations
Materials: Slides

9/17 Defenses Against Adversarial Attacks (Theoretical)
Readings:
- Certified Defenses Against Adversarial Examples
- Provable Defenses Against Adversarial Examples via the Convex Outer Adversarial Polytope
- Certified Adversarial Robustness via Randomized Smoothing
- On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models
Materials: Slides

9/22 Poisoning Attacks Against Machine Learning Models
Readings:
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks (optimization-based poisoning attack methods)
Materials: Slides

9/24 Proposal Report
Materials: Slides

9/29 Poisoning Attacks Analysis
Readings:
- Universal Multi-Party Poisoning Attacks
- Trojaning Attack on Neural Networks
- Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks
- Data Poisoning Attack Against Knowledge Graph Embedding
- Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
Materials: Slides

10/1 Defenses Against Poisoning Attacks
Readings:
- Certified Defenses for Data Poisoning Attacks
- Robust Logistic Regression and Classification
- Robust High-Dimensional Linear Regression
Materials: Slides

10/6 (Robust) Data Valuation
Readings:
- Efficient Task-Specific Data Valuation for Nearest Neighbor Algorithms
- Towards Efficient Data Valuation Based on the Shapley Value
- Understanding Black-box Predictions via Influence Functions
Materials: Slides

10/8 Guest Lecture

10/13 Robustness of Graph Neural Networks
Readings:
- Semi-Supervised Classification with Graph Convolutional Networks
- Robust Graph Convolutional Networks Against Adversarial Attacks
- Batch Virtual Adversarial Training for Graph Convolutional Networks
- Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective
Materials: Slides 1, Slides 2

10/15 Beyond Images: Adversarial Attacks on NLP/Audio/Video/Graphs
Readings:
- Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples
- Adversarial Examples for Evaluating Reading Comprehension Systems
- Audio Adversarial Examples: Targeted Attacks on Speech-to-Text
- CommanderSong: A Systematic Approach for Practical Adversarial Voice Recognition
- Adversarial Attacks on Node Embeddings via Graph Poisoning
- Adversarial Attack on Graph Structured Data
Materials: Slides 1, Slides 2

10/20 Generative Adversarial Networks (Empirical)
Readings:
- Generative Adversarial Nets
- Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks
- Conditional Generative Adversarial Nets
- Video-to-Video Synthesis
Materials: Slides 1, Slides 2

10/22 Generative Adversarial Networks (Theoretical)
Readings:
- Generalization and Equilibrium in Generative Adversarial Nets (GANs)
- Do GANs Actually Learn the Distribution?
- Theoretical Limitations of Encoder-Decoder GAN Architectures
- Certifying Some Distributional Robustness with Principled Adversarial Training
Materials: Slides 1, Slides 2

10/27 Status Report

10/29 Guest Lecture

11/3 Privacy in Machine Learning Models (Attacks)
Readings:
- Membership Inference Attacks Against Machine Learning Models
- The Secret Sharer: Measuring Unintended Neural Network Memorization & Extracting Secrets
- Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning

11/5 Differentially Private Machine Learning Models
Readings:
- Deep Learning with Differential Privacy
- Semi-Supervised Knowledge Transfer for Deep Learning from Private Training Data
- Scalable Private Learning with PATE
- Plausible Deniability for Privacy-Preserving Data Synthesis

11/10 Differential Privacy on Graphs
Readings:
- Analyzing Graphs with Node Differential Privacy
- Generating Synthetic Decentralized Social Graphs with Local Differential Privacy
- Detecting Communities Under Differential Privacy

11/12 Fairness of Machine Learning
Readings:
- Delayed Impact of Fair Machine Learning
- Fairness Without Demographics in Repeated Loss Minimization
- On Formalizing Fairness in Prediction with Machine Learning
- Avoiding Discrimination Through Causal Reasoning

11/17 Game-Theoretic Analysis for Adversarial Learning
Readings:
- Adversarial Learning
- Adversarial Classification
- Feature Cross-Substitution in Adversarial Classification
- Scalable Optimization of Randomized Operational Decisions in Adversarial Classification Settings

11/19 Robust Reinforcement Learning and Improving Learning Robustness with Unlabeled Data
Readings:
- Robust Adversarial Reinforcement Learning
- Adversarially Robust Policy Learning: Active Construction of Physically-Plausible Perturbations
- Inverse Reward Design
- Unlabeled Data Improves Adversarial Robustness
- Are Labels Required for Improving Adversarial Robustness?

Fall Break

12/1 Robustness in Distributed Learning
Readings:
- DBA: Distributed Backdoor Attacks Against Federated Learning
- Towards Realistic Byzantine-Robust Federated Learning

12/3 Final Report I

12/8 Final Report II

Grading

The course will involve one paper presentation, three reading reviews, and a final project. Unless otherwise noted by the instructor, all work in this course is to be completed independently. If you are ever uncertain how to complete an assignment, you can come to office hours or engage in high-level discussions about the problem with your classmates on the Piazza boards.

Grades will be assigned based on these components.

Course Expectations

Students are expected to attend every class, complete any readings assigned for class, and actively and constructively participate in class discussions. Class participation is measured by contributions to the discourse both in class, through discussion and questions, and outside of class, through posting and responding on the Piazza forum.

More information about course requirements will be made available leading up to the start of classes.

Ethics Statement

This course will include topics related to computer security and privacy. As part of this investigation, we may cover technologies whose abuse could infringe on the rights of others. As computer scientists, we rely on the ethical use of these technologies. Unethical use includes the circumvention of existing security or privacy mechanisms for any purpose, and the dissemination, promotion, or exploitation of vulnerabilities in these services. Any activity outside the letter or spirit of these guidelines will be reported to the proper authorities and may result in dismissal from the class and possibly more severe academic and legal sanctions.

Academic Integrity Policy

The University of Illinois at Urbana-Champaign Student Code is considered part of this syllabus. Students should pay particular attention to Article 1, Part 4: Academic Integrity. Read the Code at the following URL: http://studentcode.illinois.edu/.

Academic dishonesty may result in a failing grade. Every student is expected to review and abide by the Academic Integrity Policy; ignorance is not an excuse for academic dishonesty, and it is your responsibility to read the policy to avoid any misunderstanding. Do not hesitate to ask the instructor(s) if you are ever in doubt about what constitutes plagiarism, cheating, or any other breach of academic integrity.

Students with Disabilities

To obtain disability-related academic adjustments and/or auxiliary aids, students with disabilities must contact both the course instructor and Disability Resources and Educational Services (DRES) as soon as possible. To ensure that disability-related concerns are properly addressed from the beginning, please speak to me after class, make an appointment to see me, or come to my office hours. DRES provides students with academic accommodations, access, and support services. To contact DRES, visit 1207 S. Oak St., Champaign, call 333-4603 (V/TDD), or e-mail disability@uiuc.edu. Please refer to http://www.disability.illinois.edu/.

Emergency Response Recommendations

Emergency response recommendations can be found at the following website: http://police.illinois.edu/emergency-preparedness/. I encourage you to review this website and the campus building floor plans website within the first 10 days of class: http://police.illinois.edu/emergency-preparedness/building-emergency-action-plans/.

Family Educational Rights and Privacy Act (FERPA)

Any student who has suppressed their directory information pursuant to the Family Educational Rights and Privacy Act (FERPA) should self-identify to the instructor to ensure that the privacy of their attendance in this course is protected. See http://registrar.illinois.edu/ferpa for more information on FERPA.