A Leaderboard for Provable Training and Verification Approaches Towards Robust Neural Networks
Recently, provable (i.e., certified) approaches to adversarial robustness training and robustness verification have demonstrated their importance in the adversarial learning community.
In contrast to empirical robustness and empirical adversarial attacks, (common) provable robustness training/verification approaches provide a rigorous lower bound on network robustness, such that no existing or future attack can push the model below this bound. Verification approaches, which certify the robustness bound of a given neural network model, are strongly connected with the training methods. Thus, after training on the training set, the provable robustness of a model can be measured as the ratio of verifiably robust points in the test set. To better record advances in this field, we release a repo on GitHub that tracks state-of-the-art provably robust models on common datasets such as ImageNet, CIFAR-10, and MNIST. In addition, the repo includes a categorized list of relevant papers.
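To illustrate what "verifiably robust" means for a single test point, here is a minimal sketch of one common verification technique, interval bound propagation (IBP), applied to a hypothetical two-layer ReLU network. The function names and the tiny network are illustrative assumptions, not the method of any particular leaderboard entry: a point is certified if, for every input within an L-infinity ball of radius eps, the propagated lower bound of the true class's logit exceeds the upper bounds of all other logits.

```python
import numpy as np

def interval_affine(l, u, W, b):
    """Propagate an interval [l, u] elementwise through x -> W @ x + b."""
    center, radius = (l + u) / 2.0, (u - l) / 2.0
    c = W @ center + b
    r = np.abs(W) @ radius  # worst-case growth of the interval radius
    return c - r, c + r

def certify(x, eps, W1, b1, W2, b2, label):
    """Return True if every input within the L-inf ball of radius eps
    around x is provably classified as `label` by the 2-layer ReLU net."""
    l, u = x - eps, x + eps
    l, u = interval_affine(l, u, W1, b1)
    l, u = np.maximum(l, 0.0), np.maximum(u, 0.0)  # ReLU is monotone
    l, u = interval_affine(l, u, W2, b2)
    # Certified iff the label's lower bound beats every other upper bound.
    return all(l[label] > u[j] for j in range(len(l)) if j != label)
```

The certified (provable) robust accuracy reported on such a leaderboard is then the fraction of test points that are both correctly classified and certified by a routine like `certify`; because IBP bounds are sound but not tight, this fraction is a lower bound on the true robust accuracy.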
We welcome you to visit the repo and provide any feedback or updates!