Robust Machine Learning

Research Statement

Despite great advances in machine learning, especially deep learning, current learning systems still have severe limitations. Even when a learner performs well in the typical setting, where it is trained and tested on the same or a similar data distribution, it can fail in new scenarios and can be fooled and misled by attacks at inference time (adversarial examples) or at training time (data poisoning). As learning systems become pervasive, safeguarding their security and privacy is critical.

In particular, recent studies have shown that current learning systems are vulnerable to evasion attacks such as adversarial examples, in which a perturbation of very small magnitude is enough to change a model's prediction. For example, our work has shown that placing printed color stickers on road signs can easily fool a learner with such physical perturbations; it is one of the first works to generate robust physical adversarial perturbations that remain effective under varying conditions and viewpoints. Moreover, a model may be trained on a poisoned data set, causing it to make wrong predictions in specific scenarios. Our recent work has demonstrated that attackers can embed “backdoors” into a learner through data poisoning in real-world applications such as face recognition.
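
To make the idea of a small-magnitude evasion perturbation concrete, here is a minimal sketch of the standard fast gradient sign method (FGSM) in PyTorch. It is a generic textbook attack, not the physical sticker or backdoor attacks described above, and the model, labels, and epsilon are illustrative placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.03):
    """Craft an adversarial example with the fast gradient sign method (FGSM).

    Illustrative sketch: take a single epsilon-sized step in the sign of the
    loss gradient, then clip back to the valid pixel range. `model`, `x`, `y`,
    and `epsilon` are assumed placeholders, not values from the text above.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)      # loss the attacker wants to increase
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()  # small L_inf-bounded perturbation
    return x_adv.clamp(0.0, 1.0).detach()        # keep inputs in [0, 1]
```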

Several defenses against these threats have been proposed, but they are not resilient against intelligent adversaries that respond dynamically to the deployed defenses. Generalization is a key challenge for deep learning systems. How do we know that a deep learning system such as a neural program, a robot, or a self-driving car will behave correctly in a new environment and remain safe and secure against attacks such as adversarial perturbations? How do we specify security properties for deep learning systems? How do we test and verify those properties? Is it possible to provide provable guarantees? The question of how to improve the robustness of machine learning models against advanced adversaries thus remains largely open. Here we aim to answer these questions, explore practical and novel attack strategies against real-world machine learning models, and develop certifiably robust learning systems.
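
As one concrete route to provable guarantees, the sketch below illustrates randomized smoothing, a generic certification technique in the literature rather than the specific methods of the papers listed below. The classifier, noise level, and sample count are illustrative assumptions, and the sketch assumes a single input example.

```python
import torch

def smoothed_predict(model, x, sigma=0.25, n_samples=1000):
    """Majority-vote prediction of a randomized-smoothing classifier (sketch).

    Classifying many Gaussian-noised copies of x and taking the majority vote
    yields a smoothed classifier; the vote margin can then be translated into a
    certified L2 radius within which the prediction provably does not change.
    `model`, `sigma`, and `n_samples` are assumed placeholders.
    """
    with torch.no_grad():
        num_classes = model(x).shape[-1]
        counts = torch.zeros(num_classes)
        for _ in range(n_samples):
            noisy = x + sigma * torch.randn_like(x)   # Gaussian input noise
            counts[model(noisy).argmax(dim=-1)] += 1  # tally the top-1 vote
    return int(counts.argmax())                       # majority-vote class
```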


Recent Publications

Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models

Boxin Wang*, Chejian Xu*, Shuohang Wang, Zhe Gan, Yu Cheng, Jianfeng Gao, Ahmed Hassan Awadallah, Bo Li.

NeurIPS 2021, Oral presentation

 

What Would Jiminy Cricket Do? Towards Agents That Behave Morally

Dan Hendrycks, Mantas Mazeika, Andy Zou, Sahil Patel, Christine Zhu, Jesus Navarro, Dawn Song, Bo Li, Jacob Steinhardt.

NeurIPS 2021

 

TRS: Transferability Reduced Ensemble via Promoting Gradient Diversity and Model Smoothness

Zhuolin Yang, Linyi Li, Xiaojun Xu, Shiliang Zuo, Qian Chen, Pan Zhou, Benjamin I. P. Rubinstein, Ce Zhang, Bo Li.

NeurIPS 2021

 

Adversarial Attack Generation Empowered by Min-Max Optimization

Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, Makan Fardad, Bo Li.

NeurIPS 2021

 

Anti-Backdoor Learning: Training Clean Models on Poisoned Data

Yige Li, Xixiang Lyu, Nodens Koren, Lingjuan Lyu, Bo Li, Xingjun Ma.

NeurIPS 2021

 

Profanity-Avoiding Training Framework for Seq2seq Models with Certified Robustness

Hengtong Zhang, Tianhang Zheng, Yaliang Li, Jing Gao, Lu Su, Bo Li.

EMNLP 2021

 

Can Shape Structure Features Improve Model Robustness?

Mingjie Sun, Zichao Li, Chaowei Xiao, Haonan Qiu, Bhavya Kailkhura, Mingyan Liu, Bo Li.

ICCV 2021

 

Knowledge-Enhanced Machine Learning Pipeline against Diverse Adversarial Attacks

Nezihe Merve Gürel*, Xiangyu Qi*, Luka Rimanic, Ce Zhang, Bo Li.

ICML 2021

 

CRFL: Certifiably Robust Federated Learning against Backdoor Attacks

Chulin Xie, Minghao Chen, Pin-Yu Chen, Bo Li.

ICML 2021

 

Progressive-Scale Boundary Blackbox Attack via Projective Gradient Estimation

Jiawei Zhang*, Linyi Li*, Huichen Li, Xiaolu Zhang, Shuang Yang, Bo Li.

ICML 2021

 

TSS: Transformation-Specific Smoothing for Robustness Certification

Linyi Li*, Maurice Weber*, Xiaojun Xu, Luka Rimanic, Bhavya Kailkhura, Tao Xie, Ce Zhang, Bo Li.

CCS 2021

 

Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks

Yulong Cao, Ningfei Wang, Chaowei Xiao, Dawei Yang, Jin Fang, Ruigang Yang, Qi Alfred Chen, Mingyan Liu, Bo Li.

IEEE Symposium on Security and Privacy (Oakland), 2021

 

Nonlinear Projection Based Gradient Estimation for Query Efficient Blackbox Attacks

Huichen Li*, Linyi Li*, Xiaojun Xu, Xiaolu Zhang, Shuang Yang, Bo Li.

AISTATS 2021

 

Understanding Robustness in Teacher-Student Setting: A New Perspective

Zhuolin Yang, Zhaoxi Chen, Tiffany Cai, Xinyun Chen, Bo Li, Yuandong Tian.

AISTATS 2021

 

InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective

Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li, Jingjing Liu.

ICLR 2021

 

Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks

Yige Li, Xingjun Ma, Nodens Koren, Lingjuan Lyu, Xixiang Lyu, Bo Li.

ICLR 2021

 

REFIT: A Unified Watermark Removal Framework for Deep Learning Systems with Limited Data

Xinyun Chen, Wenxiao Wang, Chris Bender, Yiming Ding, Ruoxi Jia, Bo Li, Dawn Song.

AsiaCCS 2021

 

Realistic Adversarial Examples in 3D Meshes

Chaowei Xiao, Dawei Yang, Bo Li, Jia Deng, Mingyan Liu.

CVPR 2019 [oral]

 

Generating 3D Adversarial Point Clouds

Chong Xiang, Charles R. Qi, Bo Li.

CVPR 2019

 

TextBugger: Generating Adversarial Text Against Real-world Applications

Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, Ting Wang.

NDSS 2019

 

DeepSec: A Uniform Platform for Security Analysis of Deep Learning Models

Xiang Ling, Shouling Ji, Jiaxu Zou, Jiannan Wang, Chunming Wu, Bo Li, Ting Wang.

Oakland 2019

 

Characterizing Audio Adversarial Examples Using Temporal Dependency

Zhuolin Yang, Bo Li, Pin-Yu Chen, Dawn Song.

International Conference on Learning Representations (ICLR). May, 2019.

 

DeepGauge: Multi-Granularity Testing Criteria for Deep Learning Systems

Lei Ma, Felix Juefei-Xu, Fuyuan Zhang, Jiyuan Sun, Minhui Xue, Bo Li, Chunyang Chen, Ting Su, Li Li, Yang Liu, Jianjun Zhao, and Yadong Wang.

ASE 2018 [Distinguished Paper Award]

 

Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning

Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, Bo Li.

Oakland 2018

 

Poisoning Attacks on Data-Driven Utility Learning in Games

Ruoxi Jia, Ioannis Konstantakopoulos, Bo Li, Dawn Song, Costas J. Spanos.

ACC 2018

 

Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation

Chaowei Xiao, Ruizhi Deng, Bo Li, Fisher Yu, Mingyan Liu, Dawn Song.

The European Conference on Computer Vision (ECCV), September, 2018.

 

Exploring the Space of Black-box Attacks on Deep Neural Networks

Arjun Nitin Bhagoji, Warren He, Bo Li, Dawn Song.

The European Conference on Computer Vision (ECCV), September, 2018.

 

Generating Adversarial Examples with Adversarial Networks

Chaowei Xiao, Bo Li, Jun-Yan Zhu, Warren He, Mingyan Liu, Dawn Song.

The International Joint Conference on Artificial Intelligence (IJCAI), July, 2018.

 

Robust Physical-World Attacks on Deep Learning Visual Classification

Ivan Evtimov, Kevin Eykholt, Earlence Fernandes, Tadayoshi Kohno, Bo Li, Atul Prakash, Amir Rahmati, Chaowei Xiao, Dawn Song.

The Conference on Computer Vision and Pattern Recognition (CVPR). June, 2018.

 

Press: IEEE Spectrum | Yahoo News | Wired | Engadget | Telegraph | Car and Driver | CNET | Digital Trends | SCMagazine | Schneier on Security | Ars Technica | Fortune | Science Magazine

Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality

Xingjun Ma, Bo Li, Yisen Wang, Sarah M. Erfani, Sudanthi Wijewickrema, Michael E. Houle, Grant Schoenebeck, Dawn Song, James Bailey.

International Conference on Learning Representations (ICLR). May, 2018.

 

Spatially Transformed Adversarial Examples

Chaowei Xiao*, Jun-Yan Zhu*, Bo Li, Mingyan Liu, Dawn Song.

International Conference on Learning Representations (ICLR). May, 2018.

 

Decision Boundary Analysis of Adversarial Examples

Warren He, Bo Li, Dawn Song.

International Conference on Learning Representations (ICLR). May, 2018.

 

Robust Linear Regression Against Training Data Poisoning

Chang Liu, Bo Li, Yevgeniy Vorobeychik, and Alina Oprea.

AISec 2017. [Best Paper Award]

 

Large-Scale Identification of Malicious Singleton Files

Bo Li, Kevin Roundy, Chris Gates, Yevgeniy Vorobeychik.

ACM Conference on Data and Application Security and Privacy (CODASPY), 2017.

 

Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning

Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, Dawn Song.

arXiv preprint. December, 2017.

 

Press: Motherboard | The Register