Known for its history of innovation in mission-critical defense electronics and autonomy technology, BAE Systems’ FAST Labs™ research and development organization has been tapped by the Defense Advanced Research Projects Agency (DARPA) to develop a scalable machine learning system designed to anonymize data and thereby improve data sharing. The program, called Cooperative Secure Learning (CSL), has many potential applications, including cybersecurity.

Security Operations Centers (SOCs) work at the forefront of machine learning technology as they analyze large volumes of data to identify emerging cyber threats. Yet standard security log fields, such as IP addresses and URLs, reveal sensitive details about an organization’s internal infrastructure and services that adversaries can use to craft attacks. This limits information sharing among SOCs, leaving each SOC with only pieces of the cyber threat landscape puzzle.
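One common mitigation for this tension is pseudonymizing sensitive log fields before records leave the SOC. As a minimal illustration (not BAE Systems’ actual technique; the field names and salt handling here are assumptions), a keyed hash lets records from the same source remain correlatable for analysis without exposing the underlying values:

```python
import hashlib
import hmac

# Hypothetical sketch: each SOC keeps its own secret salt private.
SECRET_SALT = b"soc-local-secret"

def pseudonymize(value: str) -> str:
    """Replace a sensitive field with a keyed hash so repeated values stay
    correlatable across records without revealing the raw value."""
    return hmac.new(SECRET_SALT, value.encode(), hashlib.sha256).hexdigest()[:16]

# Example log record with assumed field names.
log_record = {"src_ip": "10.1.2.3", "url": "https://internal.example/login",
              "event": "failed_auth"}
sanitized = {k: (pseudonymize(v) if k in ("src_ip", "url") else v)
             for k, v in log_record.items()}
```

Because the hash is deterministic under a fixed salt, analysts can still count how often the same (hidden) IP appears, but a recipient without the salt cannot recover the original address.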

To advance research in this area, DARPA asked us to develop a scalable machine learning solution that preserves the privacy of the data and the model, enabling Cooperative Secure Learning. By sharing information – while keeping the actual data secure – individual puzzle pieces can be connected to build the complete picture, enabling new cybersecurity models and protections to be created.

“The challenge was immense but very straightforward – to create a system of information sharing for advanced research and security modeling in a way that preserves the security of the data,” said Bernard McShea, principal investigator at BAE Systems’ FAST Labs. “Our approach allowed us to leverage the organization’s extensive experience in machine learning, security, and success on previous DARPA programs.”

Our Privacy-preserving Arithmetic Computation for Encrypted Learning solution, known as PArCEL™, overcomes common privacy challenges by combining our recent research in cooperative learning on encrypted feature embeddings with new network log sanitization techniques. Unlike approaches that encrypt raw private data, our focus on feature embeddings reduces computational complexity while providing additional protection against information leakage.
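The core idea of operating on feature embeddings rather than raw data can be sketched in a toy form. In this hypothetical example (the dimensionality, hashing scheme, and field names are all assumptions, not PArCEL’s actual design), each SOC maps a raw log record to a fixed-length numeric vector locally, and only that vector, optionally encrypted, is shared for cooperative model training:

```python
import hashlib

EMBED_DIM = 8  # toy dimensionality; real embeddings would be learned

def embed(record: dict) -> list[float]:
    """Map a raw log record to a fixed-length feature vector via feature
    hashing, so only the vector -- never the raw fields -- leaves the SOC."""
    vec = [0.0] * EMBED_DIM
    for key, value in record.items():
        h = int(hashlib.sha256(f"{key}={value}".encode()).hexdigest(), 16)
        sign = 1.0 if (h >> 8) % 2 == 0 else -1.0
        vec[h % EMBED_DIM] += sign
    return vec

# Each SOC contributes embeddings rather than raw logs.
shared = embed({"src_ip": "10.1.2.3", "url": "https://internal.example/login"})
```

Working on such vectors is cheaper than computing on encrypted raw records, which is consistent with the reduced computational complexity the embedding-based approach targets.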

Work on this approximately $1 million program, which includes research teammates led by Prof. Khorrami at New York University, builds on BAE Systems’ previous work in cyber hunting, automated defense of cyber datasets, machine learning, and related techniques.

Nicole Gable
Media Relations
For media inquiries