NSF
Award Abstract #2153358

CRII: SaTC: Backdoor Detection, Mitigation, and Prevention in Deep Neural Networks


Program Manager:

Dan Cosley

Active Dates:

Awarded Amount:

$175,000

Investigator(s):

Rui Ning

Awardee Organization:

Old Dominion University Research Foundation
Virginia

Directorate:

Computer and Information Science and Engineering (CISE)

Abstract:

This award is funded in whole or in part under the American Rescue Plan Act of 2021 (Public Law 117-2).

From Alexa to self-driving vehicles, deep learning approaches to machine learning are rapidly transforming how we work and live, becoming prevalent and pervasive in contexts from centralized servers to fully distributed Internet-of-Things (IoT) configurations. This ubiquity makes deep learning-based systems an increasingly attractive target for a variety of cyberattacks, such as the generation of adversarial examples that trick a deep learning classifier into making incorrect decisions. Less studied are attacks in which attackers corrupt the training of an existing model, or distribute a model they created (for instance, as part of a software library) that contains backdoors: ways for an attacker to craft system inputs that grant unwarranted access or lead to predictable errors or failures. This project's goal is to lay out the fundamental principles, theories, and constraints on the creation of neural network backdoors, along with techniques and testbeds for detecting and mitigating them. The testbed will enable a wide variety of research questions around neural networks beyond this specific project, and the work will also provide training and education opportunities around security for K-12 teachers, students, and parents.

A key thrust of the project is to systematically investigate existing neural backdoor attacks to understand fundamental and generalizable attack principles. Based on those findings, the research team will (1) devise algorithms to accurately detect neural backdoors embedded in deep learning models, (2) develop robust backdoor eradication schemes for guaranteed model recovery, and (3) investigate preventive defense measures that make it harder to form backdoors during the training process. In parallel with these research tasks, the investigator will develop the Development and Experimental Environment for Neural Backdoor testbed, which collects neural backdoor libraries and datasets, with the goal of supporting standardized, replicable research around neural backdoors and, eventually, neural networks more generally. Overall, the proposed work will lead to enabling technologies that secure deep learning systems, accelerating their development and widening their trustworthy adoption in various application domains.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
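The backdoor attacks the abstract describes typically work by poisoning a small fraction of the training data: a fixed trigger pattern is stamped onto some inputs, and those inputs are relabeled with an attacker-chosen target class, so the trained model behaves normally on clean data but misclassifies any input carrying the trigger. The sketch below is an illustrative toy example of such data poisoning (in the style of the well-known BadNets attack); all function names, parameters, and data are hypothetical and not part of this project.

```python
import numpy as np

def poison_batch(images, labels, target_class=0, trigger_value=1.0,
                 patch=3, rate=0.1, seed=42):
    """Illustrative BadNets-style poisoning: stamp a small trigger patch
    onto a random fraction of the images and relabel them as target_class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -patch:, -patch:] = trigger_value  # bottom-right trigger patch
    labels[idx] = target_class                     # attacker-chosen label
    return images, labels, idx

# Toy data: 100 blank 28x28 "images", all originally labeled class 1.
clean_x = np.zeros((100, 28, 28))
clean_y = np.ones(100, dtype=int)
poisoned_x, poisoned_y, idx = poison_batch(clean_x, clean_y, target_class=0)
```

A detection scheme of the kind the project proposes would try to recover the trigger (the stamped patch) from a trained model, while a preventive defense would try to keep such poisoned samples from shaping the model during training.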
