Selected Publications
- Linyi Li, Jiawei Zhang, Tao Xie, Bo Li.
Double Sampling Randomized Smoothing.
(ICML 2022)
Robustness
[BibTeX]
[Code]
@inproceedings{li2022dsrs, title={Double Sampling Randomized Smoothing}, author={Li, Linyi and Zhang, Jiawei and Xie, Tao and Li, Bo}, booktitle={International Conference on Machine Learning}, year={2022} }
This work provides a double-sampling smoothing technique that provably circumvents the known \(\ell_\infty\) barrier of existing certification approaches.
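For context, the baseline that DSRS generalizes is standard Gaussian randomized smoothing, whose certified \(\ell_2\) radius is \(\sigma \Phi^{-1}(\underline{p_A})\). Below is a minimal sketch of that baseline certificate only, not the DSRS algorithm itself (which additionally samples from a second smoothing distribution):
```python
# Baseline single-distribution Gaussian randomized smoothing certificate;
# DSRS tightens this by drawing samples from a second distribution.
# Shown for context only -- not the DSRS procedure itself.
from scipy.stats import norm

def certified_l2_radius(p_a_lower: float, sigma: float) -> float:
    """Certified l2 radius given a lower confidence bound `p_a_lower` on the
    top-class probability of the base classifier under N(0, sigma^2 I) noise."""
    if p_a_lower <= 0.5:
        return 0.0  # no certificate if the top class is not confidently dominant
    return sigma * norm.ppf(p_a_lower)

# Example: sigma = 0.5 and p_A >= 0.9 give a certified l2 radius of about 0.64.
print(certified_l2_radius(0.9, 0.5))
```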
- Wenda Chu, Linyi Li, Bo Li.
TPC: Transformation-Specific Smoothing for Point Cloud Models.
(ICML 2022)
Robustness
[BibTeX]
[Code]
@inproceedings{li2022tpc, title={TPC: Transformation-Specific Smoothing for Point Cloud Models}, author={Chu, Wenda and Li, Linyi and Li, Bo}, booktitle={International Conference on Machine Learning}, year={2022} }
This work provides the first robustness certification for 3D point clouds under diverse semantic transformations.
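As an illustration of the underlying idea, a transformation-specific smoothed classifier aggregates predictions over randomly transformed copies of the input (here, random z-axis rotations). This sketch covers only the smoothing step, not TPC's certification; `base_classifier` is a hypothetical model returning a class label.
```python
import numpy as np

def rotate_z(points: np.ndarray, angle: float) -> np.ndarray:
    """Rotate an (N, 3) point cloud by `angle` radians about the z-axis."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ rot.T

def smoothed_predict(base_classifier, points, n_samples=1000, max_angle=np.pi / 6, rng=None):
    """Majority vote of the base classifier over randomly rotated copies of the input."""
    rng = np.random.default_rng() if rng is None else rng
    votes = {}
    for _ in range(n_samples):
        label = base_classifier(rotate_z(points, rng.uniform(-max_angle, max_angle)))
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```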
- Xiaojun Xu, Linyi Li, Bo Li.
LOT: Layer-wise Orthogonal Training on Improving \(\ell_2\) Certified Robustness.
(NeurIPS 2022)
Robustness
[BibTeX]
[Code]
@inproceedings{XuLot2022, title={LOT: Layer-wise Orthogonal Training on Improving l2 Certified Robustness}, author={Xu, Xiaojun and Li, Linyi and Li, Bo}, booktitle={Advances in Neural Information Processing Systems}, year={2022} }
This work provides a layer-wise orthogonal training method (LOT) that effectively trains 1-Lipschitz convolution layers, thereby achieving high certified robustness for the trained DNNs.
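The building block behind this line of work is that an orthogonal weight matrix yields a 1-Lipschitz linear map, so input perturbations are not amplified layer by layer. A toy fully connected illustration using PyTorch's orthogonal parametrization follows; LOT itself constructs orthogonal convolution layers, which this sketch does not reproduce.
```python
import torch
from torch.nn.utils import parametrizations

# An orthogonal weight gives a 1-Lipschitz linear layer: ||Wx - Wy|| = ||x - y||.
layer = torch.nn.Linear(64, 64, bias=False)
parametrizations.orthogonal(layer, name="weight")  # keep W orthogonal during training

x, y = torch.randn(8, 64), torch.randn(8, 64)
with torch.no_grad():
    in_dist = (x - y).norm(dim=1)
    out_dist = (layer(x) - layer(y)).norm(dim=1)
print(torch.allclose(in_dist, out_dist, atol=1e-4))  # True: distances are preserved
```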
- Zhuolin Yang, Zhikuan Zhao, Boxin Wang, Jiawei Zhang, Linyi Li, Hengzhi Pei, Bojan Karlaš, Ji Liu, Heng Guo, Ce Zhang, Bo Li.
Improving Certified Robustness via Statistical Learning with Logical Reasoning.
(NeurIPS 2022)
Robustness
[BibTeX]
[Code]
@inproceedings{YangImp2022, title={Improving Certified Robustness via Statistical Learning with Logical Reasoning}, author={Yang, Zhuolin and Zhao, Zhikuan and Wang, Boxin and Zhang, Jiawei and Li, Linyi and Pei, Hengzhi and Karlaš, Bojan and Liu, Ji and Guo, Heng and Zhang, Ce and Li, Bo}, booktitle={Advances in Neural Information Processing Systems}, year={2022} }
This work provides the first knowledge-enabled, certifiably robust ML framework, learning-reasoning, by integrating data-driven statistical learning with logical reasoning. It analyzes the complexity of certifying the robustness of a reasoning component (e.g., MLN, Bayesian network) and provides end-to-end robustness certification.
- Nezihe Merve Gürel*, Xiangyu Qi*, Luka Rimanic, Ce Zhang, Bo Li.
Knowledge-Enhanced Machine Learning Pipeline against Diverse Adversarial Attacks.
(ICML 2021)
Robustness
[BibTeX]
[Code]
@inproceedings{grel2021knowledge, title={Knowledge-Enhanced Machine Learning Pipeline against Diverse Adversarial Attacks}, author={Gürel, Nezihe Merve and Qi, Xiangyu and Rimanic, Luka and Zhang, Ce and Li, Bo}, booktitle={International Conference on Machine Learning}, year={2021} }
This work proves that the learning-reasoning framework is provably more robust than a single neural network model, as long as the knowledge models make non-trivial contributions.
- Mintong Kang, Linyi Li, Maurice Weber, Yang Liu, Ce Zhang, Bo Li.
Certifying Some Distributional Fairness with Subpopulation Decomposition.
(NeurIPS 2022)
Fairness
[BibTeX]
[Code]
@inproceedings{Kangcertify2022, title={Certifying Some Distributional Fairness with Subpopulation Decomposition}, author={Kang, Mintong and Li, Linyi and Weber, Maurice and Liu, Yang and Zhang, Ce and Li, Bo}, booktitle={Advances in Neural Information Processing Systems}, year={2022} }
This work provides the first certified fairness guarantee for DNNs given a fair distribution and a trained model, via subpopulation decomposition.
- Maurice Weber, Xiaojun Xu, Bojan Karlaš, Ce Zhang, Bo Li.
RAB: Provable Robustness Against Backdoor Attacks.
(IEEE Symposium on Security and Privacy (Oakland), 2022)
Robustness
[BibTeX]
[Code]
@inproceedings{rab2022, title={RAB: Provable Robustness Against Backdoor Attacks}, author={Weber, Maurice and Xu, Xiaojun and Karlaš, Bojan and Zhang, Ce and Li, Bo}, booktitle={IEEE Symposium on Security and Privacy (Oakland)}, year={2022} }
This work provides the first certified robustness for DNNs against backdoor attacks by considering the training dynamics.
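At a high level, this kind of certificate smooths the training process itself: many models are trained on independently noise-perturbed copies of the training data and their predictions are aggregated, so a bounded perturbation of the training set (e.g., a backdoor trigger) cannot easily flip the vote. A heavily simplified sketch, where `train_model` is a hypothetical user-supplied training routine returning a model with a `predict` method:
```python
import numpy as np

def smoothed_train_and_predict(train_model, X_train, y_train, x_test,
                               sigma=0.5, n_models=100, rng=None):
    """Train on noise-perturbed copies of the training set and majority-vote."""
    rng = np.random.default_rng() if rng is None else rng
    votes = {}
    for _ in range(n_models):
        noisy_X = X_train + rng.normal(0.0, sigma, size=X_train.shape)
        model = train_model(noisy_X, y_train)       # train on a perturbed dataset
        label = model.predict(x_test[None, :])[0]   # predict on the clean test point
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get), votes
```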
- Chulin Xie, Minghao Chen, Pin-Yu Chen, Bo Li.
CRFL: Certifiably Robust Federated Learning against Backdoor Attacks.
(ICML 2021)
Robustness
[BibTeX]
[Code]
@inproceedings{xie2021crfl, title={CRFL: Certifiably Robust Federated Learning against Backdoor Attacks}, author={Xie, Chulin and Chen, Minghao and Chen, Pin-Yu and Li, Bo}, booktitle={International Conference on Machine Learning}, year={2021} }
This work provides the first certifiably robust federated learning framework against adversarial agents, and the certification is guaranteed on the feature, instance, and agent levels.
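The server-side mechanism this certification relies on is norm-clipping the aggregated global model and perturbing it with Gaussian noise each round. A minimal sketch of that step (parameter names such as `clip_norm` and `sigma` are illustrative, and the test-time smoothing is omitted):
```python
import torch

def clip_and_perturb(global_params, clip_norm=15.0, sigma=0.01):
    """Clip the flattened global parameter vector and add Gaussian noise."""
    flat = torch.cat([p.reshape(-1) for p in global_params])
    scale = min(1.0, clip_norm / (flat.norm().item() + 1e-12))
    noisy = flat * scale + sigma * torch.randn_like(flat)
    out, offset = [], 0
    for p in global_params:            # unflatten back into the original shapes
        n = p.numel()
        out.append(noisy[offset:offset + n].reshape_as(p))
        offset += n
    return out
```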
- Boxin Wang*, Fan Wu*, Yunhui Long*, Luka Rimanic, Ce Zhang, Bo Li.
DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation.
(CCS 2021)
Privacy
[BibTeX]
[Code]
@inproceedings{wang2021datalens, title={DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation}, author={Wang, Boxin and Wu, Fan and Long, Yunhui and Rimanic, Luka and Zhang, Ce and Li, Bo}, booktitle={Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security}, year={2021} }
This work provides a scalable differentially private data generative model for high-dimensional data with convergence guarantees.
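The core aggregation step combines top-k sparsification, sign quantization, and Gaussian noise for differential privacy. A minimal sketch of that step only (parameter names are illustrative; the generator training loop and privacy accounting are omitted):
```python
import numpy as np

def top_k_sign(grad: np.ndarray, k: int) -> np.ndarray:
    """Keep only the signs of the k largest-magnitude coordinates."""
    out = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    out[idx] = np.sign(grad[idx])
    return out

def dp_aggregate(teacher_grads, k=64, sigma=5.0, rng=None):
    """Aggregate compressed per-teacher gradients with Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    votes = sum(top_k_sign(g, k) for g in teacher_grads)
    return votes + rng.normal(0.0, sigma, size=votes.shape)
```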
- Kaizhao Liang*, Jacky Zhang*, Boxin Wang, Zhuolin Yang, Sanmi Koyejo, Bo Li.
Uncovering the Connections Between Adversarial Transferability and Knowledge Transferability.
(ICML 2021)
Generalization
Robustness
[BibTeX]
[Code]
@inproceedings{liang2021uncovering, title={Uncovering the Connections Between Adversarial Transferability and Knowledge Transferability}, author={Liang, Kaizhao and Zhang, Jacky and Wang, Boxin and Yang, Zhuolin and Koyejo, Sanmi and Li, Bo}, booktitle={International Conference on Machine Learning}, year={2021} }
This work proves that adversarial transferability (robustness property) and domain transferability (generalization property) are bidirectional indicators of each other.
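Adversarial transferability, one side of this connection, can be measured by crafting adversarial examples on a source model and checking how often they fool a target model. A minimal sketch using one-step FGSM for brevity (the paper's analysis is not tied to this particular attack):
```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One-step FGSM attack on `model` for inputs in [0, 1]."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def transfer_success_rate(source_model, target_model, x, y, eps=8 / 255):
    """Fraction of source-crafted adversarial examples that also fool the target."""
    x_adv = fgsm(source_model, x, y, eps)
    with torch.no_grad():
        preds = target_model(x_adv).argmax(dim=1)
    return (preds != y).float().mean().item()
```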