I am on the advisory board of the Center for Artificial Intelligence Innovation (CAII) at Illinois, and I am a member of the Information Trust Institute (ITI). I am also affiliated with several research centers that aim to broaden research collaboration and bridge different communities, such as the Advanced Digital Science Center (ADSC), the Center for Cognitive Computing Systems Research (C3SR), and the Illinois Quantum Information Science and Technology Center (IQUIST). I also serve in the Accelerated Learning and Engineering Research Training (ALERT) program.
My research focuses on trustworthy machine learning, with an emphasis on robustness, privacy, generalization, and their interconnections. We believe that closing today's trustworthiness gap in ML requires tackling these intertwined problems in a holistic framework, driven by fundamental research on not only each individual problem but also their underlying interactions.
The long-term goal of our group, the Secure Learning Lab (SL2), is to make machine learning systems robust, private, and generalizable, with guarantees, for different real-world applications. We have explored different types of adversarial attacks, including evasion and poisoning attacks in the digital and physical worlds, under various constraints. We have developed, and will continue to explore, robust learning systems based on game-theoretic analysis, knowledge-enabled logical reasoning, and properties of learning tasks. Our work directly benefits applications such as computer vision, natural language processing, safe autonomous driving, and trustworthy federated learning systems.
For Prospective Students: Prospective PhD students and postdocs who are interested and experienced in machine learning, security, and optimization, please fill out the form.
Recent News
- [08/23] Our paper DecodingTrust received the Outstanding Paper Award at NeurIPS 2023!
- [08/23] Our "NSF AI Institute for Agent-based Cyber Threat Intelligence and Operation" was awarded!
- [08/23] We received the "Teachers Ranked as Excellent Award".
- [12/22] Linyi Li was recognized as one of the "Rising Stars in Data Science".
- [12/22] We won first prize in VNN-COMP'22.
- [06/22] We received the "IEEE AI's 10 to Watch Award".
- [06/22] We received the "IJCAI-22 Computers and Thought Award".
- [02/22] We received the "Alfred P. Sloan Fellowship in Computer Science".
- [02/22] We received the "Dean's Award for Excellence in Research".
- [01/22] We received the "C.W. Gear Outstanding Junior Faculty Award".
- [01/22] We received the "Teachers Ranked as Excellent Award".
- [12/21] Linyi Li received the "AdvML Rising Star Award".
- [09/21] We received the "Google Research Scholar Award".
- [08/21] We received the "2021 Facebook Research Award".
- [06/21] Mantas Mazeika received the "Open Philanthropy Fellowship".
- [04/21] We were among the 2020 recipients of the AWS Amazon Research Award.
- [03/21] Boxin Wang received the "Yunni & Maxine Pao Memorial Fellowship".
- [10/20] We received "Intel's 2020 Rising Star Faculty Award", which recognizes 10 leading researchers.
- [10/20] We received the "NSF CAREER Award".
- [07/20] We were among the 2019 Q4 recipients of the AWS Machine Learning Research Awards.
- [06/20] We were selected for the MIT Technology Review list of 35 Innovators Under 35, 2020.
- [08/19] Our generated physical adversarial stop sign is on display at the Science Museum in London and has become part of its permanent collection.
- [06/19] Our paper "Adversarial Objects Against LiDAR-Based Autonomous Driving Systems" was covered by JiQiZhiXin and QbitAI, and discussed on Reddit [1], [2].
- [06/19] Our paper "SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing" was covered by JiQiZhiXin.
- [05/19] We are organizing the workshop "Security and Privacy of Machine Learning" at ICML 2019. Please submit your papers here and compete for the best paper award!
- [05/19] We are organizing the workshop "Adversarial Machine Learning in Real-World Computer Vision Systems" at CVPR 2019. Please submit your papers here!
- [05/19] Our paper "Realistic Adversarial Examples in 3D Meshes" was accepted to CVPR 2019 as an oral presentation! Congratulations to Chaowei and Dawei!
- [05/19] Our paper "Generating 3D Adversarial Point Clouds" was accepted to CVPR 2019!
- [04/19] Our paper "Characterizing Audio Adversarial Examples Using Temporal Dependency" was accepted to ICLR 2019.
- [02/19] Our paper "How You Act Tells a Lot: Privacy-Leakage Attack on Deep Reinforcement Learning" was accepted to AAMAS 2019 as an oral presentation!
- [01/19] Our paper "Towards Efficient Data Valuation Based on the Shapley Value" was accepted to AISTATS 2019! Check it out if you want to know which data contribute more to your model!