ADAPT-CPS: AI-Driven Adaptive Evaluation for Cyber-Physical System Defenses

Categories: Events, Seminar Series

Wed, Feb 11, 11:30-12:30, WWH 335

Dr. Chenglong Fu

Abstract

Cyber-physical systems, including power grids, water treatment plants, smart homes, and manufacturing facilities, are increasingly exposed through cloud-hosted management platforms and programmatic control interfaces. While the security community has developed sophisticated intrusion detection systems and anomaly detectors to protect these critical assets, most are still evaluated on static datasets of fixed, pre-scripted attack recordings. This creates a significant but often overlooked risk: defenses that appear robust against replayed traces may fail against real-world adversaries who observe system responses, reason about detection logic, and adapt their strategies over time. This talk introduces the Evaluation Validity Gap: the measurable discrepancy between a defense's reported performance on static benchmarks and its true robustness against adaptive, feedback-driven attackers. It argues that AI-driven adaptive evaluation is necessary to understand the actual security posture of deployed cyber-physical systems.
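One illustrative way to state the gap (the notation here is assumed for this announcement, not taken from the talk): for a defense D with a performance metric M, such as detection rate,

    Gap(D) = M_static(D) - M_adaptive(D),

where M_static is measured on replayed attack traces and M_adaptive against a feedback-driven adversary. A large positive gap means the static benchmark overstates robustness; a negative gap means it understates the threat.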

To address this gap, we present ADAPT-CPS, a framework that embeds large language model (LLM)-based adaptive attackers into realistic cyber-physical simulations through a physically constrained interface that ensures all attack actions remain valid and meaningful. As a proof of concept, we demonstrate LLM-GridEval, an instantiation targeting smart grid security in which an LLM attacker coordinates load-altering attacks on electric vehicle charging infrastructure within a HELICS-based transmission and distribution co-simulation. Our experiments show that attacker adaptivity and domain awareness materially affect measured risk: a timing-only adaptive agent underperforms a random baseline, while a strategy-aware agent matches the baseline’s impact with fewer actions. These findings illustrate that static evaluations can both overestimate and underestimate adversarial threats depending on the modeling assumptions, and they motivate a broader research agenda for AI-enabled security evaluation across CPS domains, where adaptive red teaming serves as the standard for rigorous defense validation.
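To give a flavor of the evaluation loop, the sketch below shows a minimal, self-contained Python version of the idea: an adaptive attacker policy (here a simple heuristic standing in for the LLM) reacts to simulator feedback, and every proposed action passes through a constrained interface before it reaches a toy grid model. All names (ConstrainedActionSpace, ToyGridSimulator, adaptive_policy) are illustrative assumptions rather than the ADAPT-CPS or LLM-GridEval API, and the toy model is far simpler than a HELICS transmission and distribution co-simulation.

    import random
    from dataclasses import dataclass

    # Names below are illustrative, not the ADAPT-CPS / LLM-GridEval API.

    @dataclass
    class Action:
        """A load-altering action: which EV charging feeder to perturb, and by how much (MW)."""
        feeder: int
        delta_mw: float

    class ConstrainedActionSpace:
        """Keeps every proposed action within physically plausible bounds
        (a stand-in for a physically constrained attack interface)."""
        def __init__(self, n_feeders: int, max_delta_mw: float):
            self.n_feeders = n_feeders
            self.max_delta_mw = max_delta_mw

        def clamp(self, action: Action) -> Action:
            feeder = max(0, min(self.n_feeders - 1, action.feeder))
            delta = max(-self.max_delta_mw, min(self.max_delta_mw, action.delta_mw))
            return Action(feeder, delta)

    class ToyGridSimulator:
        """Toy stand-in for a grid co-simulation: frequency deviation is a crude
        function of the aggregate load perturbation."""
        def __init__(self, n_feeders: int):
            self.load = [0.0] * n_feeders

        def step(self, action: Action) -> dict:
            self.load[action.feeder] += action.delta_mw
            deviation = 0.002 * sum(self.load)   # crude sensitivity model
            alarm = abs(deviation) > 0.05        # toy detector threshold
            return {"freq_dev_hz": deviation, "alarm": alarm}

    def adaptive_policy(history: list, space: ConstrainedActionSpace) -> Action:
        """Stand-in for the LLM attacker: back off after an alarm, escalate otherwise."""
        if history and history[-1]["alarm"]:
            return Action(random.randrange(space.n_feeders), -space.max_delta_mw / 2)
        return Action(random.randrange(space.n_feeders), space.max_delta_mw)

    if __name__ == "__main__":
        space = ConstrainedActionSpace(n_feeders=4, max_delta_mw=2.0)
        sim = ToyGridSimulator(n_feeders=4)
        history = []
        for step in range(10):
            action = space.clamp(adaptive_policy(history, space))
            feedback = sim.step(action)
            history.append(feedback)
            print(f"step {step}: feeder={action.feeder} delta={action.delta_mw:+.1f} MW "
                  f"-> freq_dev={feedback['freq_dev_hz']:+.3f} Hz alarm={feedback['alarm']}")

The point of the sketch is the feedback loop itself: the attacker's next action depends on the system's observed response, which is precisely the behavior that static, pre-scripted attack traces cannot capture.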

Bio

Dr. Chenglong Fu is an Assistant Professor in the Department of Software and Information Systems at the University of North Carolina at Charlotte. He received his Ph.D. in 2022 from Temple University, advised by Professor Xiaojiang Du. His research focuses on the security of cyber-physical systems, the Internet of Things, and the application of artificial intelligence to security evaluation. His work spans semantics-aware anomaly detection for smart home platforms, automation interference and timing-based attacks on IoT systems, and AI-driven evaluation frameworks for critical infrastructure protection. His research has been published at top-tier security venues including ACM CCS, USENIX Security, IEEE S&P, DSN, and RAID. Dr. Fu received the Scott Hibbs Future of Computing Award from Temple University in recognition of his research contributions.