Could we create an AI-driven tactics engine for use within synthetic training environments?

Published 08 May 2025
Business Air

The Challenge

The question was inspired by the work of the US Defense Advanced Research Projects Agency (DARPA) and its AlphaDogfight Trials in 2019-2020. Tactics training has historically been the sole preserve of human pilots training together in adversarial roles within synthetic environments. To be a truly successful substitute, four main success criteria were set for the AI tactics engine:

1. It had to generate credible tactical behaviour from the viewpoint of human trainees and instructors.
2. It had to be quick to train with emerging operational tactics, and lightweight to execute using minimal computational resources.
3. It had to be generalisable to any pursuit/evade scenario.
4. The results had to be explainable, to ensure the tactical behaviours exhibited met specific training objectives.

The Solution

Focusing on visual-range air-to-air combat as a use case, the development team based their approach on well-known Air Combat Manoeuvring (ACM) tactics, the laws of physics, and a self-learning approach inspired by AI game-playing. From these inputs the engine learned how to fly a synthetic aircraft and score points in simulated combat over millions of AI-pilot-vs-AI-pilot engagements, eventually producing a pool of AI ‘Top Guns’.

The approach combined a form of deep learning, chosen for speed of training, with decision trees that describe the aircraft manoeuvres in an explainable form. The result was an engine that required minimal computing power to train and execute, and that generated tactical sequences readily understood by humans.
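The self-play idea can be illustrated with a deliberately toy sketch. Everything here is an illustrative assumption rather than the team's actual engine: the "arena" is a one-dimensional ring, the pursuer learns by tabular Q-learning (a simple stand-in for the unnamed deep-learning method) against a randomly manoeuvring evader, and the learned behaviour is then read back out as explicit per-state rules, in the spirit of the decision-tree explainability described above.

```python
import random

random.seed(0)

RING = 20                      # positions on a circular 1-D "arena" (toy assumption)
ACTIONS = (-1, 0, 1)           # move anticlockwise, hold, move clockwise

def distance(p, e):
    """Shortest separation between pursuer p and evader e on the ring."""
    d = abs(p - e)
    return min(d, RING - d)

def bearing(p, e):
    """Discretised relative bearing: which way round is the target? (-1, 0, +1)."""
    diff = (e - p) % RING
    if diff == 0:
        return 0
    return 1 if diff <= RING // 2 else -1

# Tabular Q-values for the pursuer: one row per bearing state.
Q = {s: {a: 0.0 for a in ACTIONS} for s in (-1, 0, 1)}
alpha, gamma, eps = 0.2, 0.9, 0.1

for episode in range(2000):                      # many short self-play bouts
    p, e = 0, random.randrange(RING)
    for _ in range(30):
        s = bearing(p, e)
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(Q[s], key=Q[s].get))       # epsilon-greedy exploration
        p = (p + a) % RING
        e = (e + random.choice(ACTIONS)) % RING  # evader manoeuvres randomly
        r = -distance(p, e)                      # reward: close the gap
        s2 = bearing(p, e)
        Q[s][a] += alpha * (r + gamma * max(Q[s2].values()) - Q[s][a])

# Read the learned behaviour back out as one human-readable rule per state.
policy = {s: max(Q[s], key=Q[s].get) for s in Q}
for s, a in sorted(policy.items()):
    print(f"bearing {s:+d} -> action {a:+d}")
```

The final loop is the explainability step in miniature: instead of an opaque value table, the result is a handful of rules of the form "if the target bears clockwise, turn clockwise", analogous to a shallow decision tree an instructor could inspect against a training objective.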