Securing Internet-of-Things Scientific Cyberinfrastructure


Securing the nation’s scientific data, workflows, and infrastructure is critical to its continued economic success. Dr. Ruimin Sun recently received a new National Science Foundation Cybersecurity Innovation for Cyberinfrastructure (CICI) grant to advance scientific discovery and innovation in this area. Her project, entitled “CICI: UCSS: Secure Machine Learning Inference in IoT-driven Analytical Scientific Infrastructure,” will enable her to do just that.

As more of the nation’s scientific cyberinfrastructure becomes driven by the Internet of Things (IoT), these systems will increasingly rely on machine learning (ML) models for advanced data analysis and predictive modeling. Today’s machine learning models increasingly “handle serious societal responsibility such as flood modeling and hurricane prediction,” according to Dr. Sun in her proposal abstract. “Leakage of these models can cause serious issues ranging from national security and cyber security to intellectual property loss.”

Through this project, Dr. Sun and her co-PIs, Dr. Jason Liu of FIU and Dr. Yuede Ji of the University of North Texas, plan to “implement a secure ML inference solution to prevent safety and security-critical ML models from ‘leaking’ critical information to attackers.” The project raises awareness of ML model extraction attacks in device-driven scientific cyberinfrastructures, while also broadening the impacts of cyberinfrastructure security through the development of new mission-critical functionality for machine learning models.

Overall, the investigators aim to “advance the security and privacy of on-device ML models tailored for scientific studies using the Internet of Things-based CIs.” The team plans to complete the work through two main tasks.

First, the team will develop a novel runtime detection and prevention mechanism for ML model extraction attacks. The proposed mechanism will employ “multilevel instrumentation techniques for CI applications” and is designed to extract patterns related to ML functions. To be effective, the system will need to “redefine memory regions for various ML tasks and allow ML developers to customize security policies to control access to model-related data.”
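To make the policy-enforcement idea concrete, the Python sketch below shows how a developer-defined access policy might gate model queries and weight reads at runtime. This is purely an illustrative example: the class names, policy fields, and checks are invented here and are not taken from the project’s design.

    # Illustrative only: names and policy fields are invented for this sketch
    # and are not part of the project's actual design.
    import time
    from dataclasses import dataclass, field

    @dataclass
    class ModelAccessPolicy:
        """Developer-defined rules controlling access to model-related data."""
        allowed_callers: set = field(default_factory=set)  # trusted caller identities
        max_queries_per_minute: int = 60                    # throttle bulk extraction attempts
        allow_weight_export: bool = False                   # block direct reads of parameters

    class GuardedModel:
        """Wraps a model and enforces a ModelAccessPolicy at inference time."""
        def __init__(self, model, policy: ModelAccessPolicy):
            self._model = model
            self._policy = policy
            self._query_times = []

        def predict(self, x, caller: str):
            if caller not in self._policy.allowed_callers:
                raise PermissionError(f"caller '{caller}' may not query the model")
            # Model extraction usually requires many queries, so rate-limit them.
            now = time.time()
            self._query_times = [t for t in self._query_times if now - t < 60]
            if len(self._query_times) >= self._policy.max_queries_per_minute:
                raise RuntimeError("query rate exceeds policy; possible extraction attempt")
            self._query_times.append(now)
            return self._model(x)

        def export_weights(self):
            if not self._policy.allow_weight_export:
                raise PermissionError("weight export is disabled by policy")
            return getattr(self._model, "weights", None)

    if __name__ == "__main__":
        toy_model = lambda x: 2.0 * x + 1.0  # stand-in for a real inference function
        guarded = GuardedModel(toy_model, ModelAccessPolicy(allowed_callers={"sensor_pipeline"}))
        print(guarded.predict(3.0, caller="sensor_pipeline"))  # allowed caller: prints 7.0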

The team’s second task will be to implement “a comprehensive assessment mechanism for on-device ML model security.” This system will “measure the feasibility of a potential model extraction attack with newly designed model extraction dependency graphs, so as to run penetration-based model extraction attacks against potentially vulnerable applications.” This step will confirm whether such attacks are feasible in practice.
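As a rough illustration of how a dependency-graph-based feasibility check could work, the Python sketch below walks a small, hand-written graph from an attacker-reachable entry point to a sensitive asset. The graph, node names, and notion of feasibility are hypothetical stand-ins; the project’s model extraction dependency graphs are its own, more sophisticated construction.

    # Illustrative only: the graph, node names, and "feasibility" notion are
    # hypothetical stand-ins for the project's model extraction dependency graphs.
    from collections import deque

    # Directed edges: runtime component -> components it can reach.
    dependency_graph = {
        "public_api": ["inference_service"],
        "inference_service": ["model_runtime", "logging"],
        "model_runtime": ["model_weights"],  # sensitive asset
        "logging": [],
        "model_weights": [],
    }
    SENSITIVE = {"model_weights"}

    def extraction_paths(graph, entry):
        """Return every path from an attacker-reachable entry point to a sensitive asset."""
        paths, queue = [], deque([[entry]])
        while queue:
            path = queue.popleft()
            node = path[-1]
            if node in SENSITIVE:
                paths.append(path)
                continue
            for nxt in graph.get(node, []):
                if nxt not in path:  # avoid cycles
                    queue.append(path + [nxt])
        return paths

    if __name__ == "__main__":
        for p in extraction_paths(dependency_graph, "public_api"):
            # Shorter paths suggest an easier, hence more feasible, extraction route.
            print(" -> ".join(p), f"(length {len(p)})")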

Ultimately, the project will integrate these techniques and tools into device-driven cyberinfrastructure across a range of existing scientific domains, significantly reducing the attack surface of the ML models these systems rely on.

Additional information can be found at https://www.nsf.gov/awardsearch/showAward?AWD_ID=2419843