# Technical Program

## Paper Detail

**Paper ID:** MLSP-37.4
**Paper Title:** ADVERSARIALLY ROBUST CLASSIFICATION BASED ON GLRT
**Authors:** Bhagyashree Puranik, Upamanyu Madhow, Ramtin Pedarsani; University of California, Santa Barbara, United States
**Session:** MLSP-37: Pattern Recognition and Classification 2
**Location:** Gather.Town
**Session Time:** Thursday, 10 June, 16:30 - 17:15
**Presentation Time:** Thursday, 10 June, 16:30 - 17:15
**Presentation:** Poster
**Topic:** Machine Learning for Signal Processing: [MLR-PRCL] Pattern recognition and classification

**Abstract:** Machine learning models are vulnerable to adversarial attacks that can often cause misclassification by introducing small but well-designed perturbations. In this paper, we explore, in the setting of classical composite hypothesis testing, a defense strategy based on the generalized likelihood ratio test (GLRT), which jointly estimates the class of interest and the adversarial perturbation. We evaluate the GLRT approach for the special case of binary hypothesis testing in white Gaussian noise under $\ell_{\infty}$ norm-bounded adversarial perturbations, a setting for which a minimax strategy optimizing for the worst-case attack is known. We show that the GLRT approach yields performance competitive with that of the minimax approach under the worst-case attack, while yielding a better robustness-accuracy trade-off under weaker attacks. The GLRT defense is applicable in multi-class settings and generalizes naturally to more complex models for which optimal minimax classifiers are not known.
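To make the abstract's idea concrete, here is a minimal sketch of a GLRT decision rule for the binary setting the paper describes: observations modeled as $x = s\mu + e + n$ with class sign $s \in \{+1, -1\}$, perturbation $\|e\|_\infty \le \epsilon$, and white Gaussian noise $n$. The model form and the coordinate-wise soft-thresholding step are derived from this assumed setup for illustration, not taken from the paper's own implementation; the function name `glrt_classify` is hypothetical.

```python
import numpy as np

def glrt_classify(x, mu, eps):
    """Illustrative GLRT for binary classification under l_inf-bounded attacks.

    Assumed model (from the abstract's setting): x = s*mu + e + n, where
    s in {+1, -1} is the class, ||e||_inf <= eps is the adversarial
    perturbation, and n is white Gaussian noise. For each class, maximizing
    the likelihood over e lets each residual coordinate be explained away by
    up to eps, leaving a soft-thresholded residual; the class with the
    smaller residual energy is chosen.
    """
    scores = {}
    for s in (+1, -1):
        d = x - s * mu                       # residual before estimating e
        r = np.maximum(np.abs(d) - eps, 0.0) # best e absorbs up to eps per coordinate
        scores[s] = float(np.sum(r ** 2))    # residual energy (prop. to neg. log-likelihood)
    return min(scores, key=scores.get), scores
```

With `eps = 0` this reduces to a nearest-mean (matched-filter) classifier; as `eps` grows, coordinates with small residuals are discounted, which is what gives the joint estimation of class and perturbation its robustness.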