During my time at BAE Systems Digital Intelligence, I’ve been lucky enough to view this discussion from both supplier and customer perspectives, and it has been fascinating to watch the technology mature over the years.
Driven by increased computing power and more advanced models, we have now reached a point where AI comes close to accurately representing human decision-making. This presents a tremendous opportunity for the defence sector if AI is adopted responsibly – for the right applications and with appropriate governance.
However, I believe we still face significant challenges in getting AI into operational service, i.e. “operationalising” it. Let’s take a closer look.
AI in defence
AI replicates how we make decisions. This means it can support and augment human activity, reduce the information burden on military users, and potentially replace human decision-making altogether. Possible applications include: supporting decision-making in conflict or crisis scenarios; automatically recognising objects from images or video to support analysis; analysing the vast amount of defence data; detecting imminent equipment failure; and automating defensive weapon response.
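To make one of these concrete, here’s a minimal, illustrative sketch of detecting imminent equipment failure: a model learns what “healthy” sensor telemetry looks like and flags readings that deviate from it. I’ve used Python and scikit-learn as my own illustrative choices – not a fielded defence stack – and the sensor channels and values are invented.

```python
# Illustrative sketch: learn what "healthy" telemetry looks like, then flag
# readings that deviate from it. Sensor channels and values are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated healthy telemetry: [vibration, temperature, oil pressure]
healthy = rng.normal(loc=[1.0, 90.0, 40.0], scale=[0.1, 2.0, 1.5], size=(500, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

# A new reading with abnormal vibration and temperature
reading = np.array([[2.5, 110.0, 38.0]])
if detector.predict(reading)[0] == -1:  # -1 marks an outlier
    print("Possible imminent failure: schedule an inspection")
```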
As such, defence and industry have made significant investments in AI over the years. There are well-developed science and technology programmes in place and new AI models under development that have the potential to transform defence. However, many of these are currently “proofs of concept”, demonstrated in the lab in constrained scenarios or with limited data.
The next step for defence is to drive forward with operationalising them. Operationalising AI means getting it working effectively for defence users, embedded in how and where they work and supporting their objectives.
However, there is currently an exploitation gap: how we get from R&D concepts to operational capabilities that are used and trusted by defence users. Even when we have a great technology solution, we need to make it robust enough to manage the scale and complexity of operational scenarios and get it onto users’ platforms in a reliable, assured and responsible way. So far, only a relatively small number of examples have achieved this.
There is also an expectation gap: an assumption that operationalising is easy and just a necessary, technical last step. Unfortunately, this is not true. Operationalising takes time – often more than R&D – and is as much about process and user adoption as it is about the technical activities.
The key challenges to operationalising AI
There are several challenges that need to be overcome for AI to be widely used in defence. The first involves data. Vast quantities of data are required to successfully build AI models, as they learn how we make decisions from historical, tagged data. This requires an ability to capture the right data and make it available to the right people.
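As a toy illustration of that learning loop, the sketch below trains a simple classifier on historical, labelled examples. The data is synthetic and the model is an illustrative stand-in, not representative of any real defence system.

```python
# Illustrative sketch: a model learns a decision rule from historical,
# labelled ("tagged") examples. The data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in for historical records, each tagged with a past decision (0 or 1)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The better – and more representative – the labelled data, the better the learned decision rule, which is why capturing the right data and making it available matters so much.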
The MOD is making great progress in addressing this strategically through its Chief Data Officer function. There is also a role for industry in promoting the availability of data and two-way data sharing. Defence suppliers should find ways of sharing data with MOD, and MOD should give SMEs access to representative data to help them build deployable AI solutions.
Another challenge is around the practicalities of deploying AI into operational systems. Some AI systems need to be deployed outside centralised IT platforms – for example into a plane, tank or ship. This is partly a technical integration challenge that defence suppliers can help with. It’s also a programme challenge – we need to find an agile and effective way of changing AI models within critical and long-lived platforms.
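One common pattern, sketched below, is to train centrally but export the model as a self-contained, versioned artefact that a disconnected platform can run with a lightweight runtime, and that can be swapped out under configuration control. The ONNX format and the skl2onnx converter are my illustrative assumptions here, not a statement of how any particular platform does it.

```python
# Illustrative sketch: train centrally, then export a versioned, portable
# artefact that a disconnected platform can run with a lightweight runtime.
# Assumes the skl2onnx converter package; model and data are stand-ins.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

onnx_model = convert_sklearn(
    model, initial_types=[("input", FloatTensorType([None, 10]))]
)
# The version in the file name supports controlled, auditable model updates
with open("decision_model_v1.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())
```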
Potentially the biggest operationalisation challenge, however, is trust. If defence does not trust its AI models to be safe, accurate and reliable, they will not be used. Whether users accept AI depends on how well it performs, how seamlessly it fits into their ways of working, and whether it is transparent. There is a big cultural change element required here, putting the onus on user training and on the effective design of how humans and computers work together to make decisions.
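One concrete human-machine teaming pattern is confidence-based escalation: the system acts on its own only when it is confident, states that confidence openly, and refers everything else to a human operator. The sketch below is illustrative only – the 90% threshold is an invented assumption, and a real design would be set with users and assurance bodies.

```python
# Illustrative sketch: act autonomously only when confident, state that
# confidence openly, and refer borderline cases to a human operator.
# The 90% threshold is an invented assumption.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def triage(model, sample, threshold=0.9):
    """Return the model's call when confident, else escalate to a human."""
    probs = model.predict_proba(sample.reshape(1, -1))[0]
    label, confidence = int(np.argmax(probs)), float(np.max(probs))
    if confidence >= threshold:
        return f"Automated decision: class {label} ({confidence:.0%} confident)"
    return f"Low confidence ({confidence:.0%}): referred to a human operator"

X, y = make_classification(n_samples=1000, n_features=10, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)
print(triage(model, X[0]))
```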
We must also put in place guardrails to ensure AI’s use is safe, responsible and in line with defence policies. This should be underpinned by an assurance process to assess AI’s performance and impact in operational use – we won’t use an AI solution unless it can pass this process – and an ethical framework to ensure the proportionate use of technology. These frameworks represent another area that would benefit from collaboration with industry.
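In software terms, such an assurance process ends in a gate the model must pass before release. The sketch below reduces that to a single accuracy threshold purely for illustration – a real assurance regime would also cover safety, robustness, bias and operational impact, and the 95% bar is an invented simplification.

```python
# Illustrative sketch: a release gate the model must pass before operational
# use. Real assurance is far broader (safety, robustness, bias, operational
# impact); the single 95% accuracy bar here is an invented simplification.
def assurance_gate(model, X_test, y_test, min_accuracy=0.95):
    """Block release unless the model clears the agreed performance bar."""
    accuracy = model.score(X_test, y_test)
    if accuracy < min_accuracy:
        raise RuntimeError(
            f"Assurance failed: {accuracy:.1%} is below the "
            f"{min_accuracy:.0%} bar; not cleared for operational use"
        )
    print(f"Assurance passed at {accuracy:.1%}: cleared for release")
```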
Defence’s AI future?
Ultimately, AI has so much to offer defence in terms of building capability, matching potential adversaries, and driving cultural and technological change. In today’s technology landscape, defence has an opportunity to invest in the UK ecosystem to drive the development and operationalisation of AI capabilities, including sovereign AI capabilities where required.
I believe many AI models will originate from SMEs, who are at the forefront of technology change. Major defence suppliers will deliver novel AI solutions too, of course, but they have more to offer in delivering integration and the right approach to operationalising AI: data management, user trust, assurance, and technology insertion into programmes and platforms.
There may still be challenges to overcome, but there can be no doubt that the future of defence is looking increasingly AI-driven.