In a previous blog looking at how governments are approaching the challenge of leveraging AI for national advantage, we highlighted how AI adoption comes with a range of risks and opportunities. We discussed how there is a spectrum of risk associated with AI across different domains and use cases, with some of those risks being more tolerable than others. For example, risks that involve harm to life will always weigh more heavily than risks to convenience.
The key is for nations and enterprises to understand where they want to sit on this spectrum in order to optimise their operations and exert their digital influence through AI technologies, while managing the accompanying risks to their business and users. Managing risks, rather than avoiding them altogether, is the only way to leverage AI without constraining adoption so tightly that the benefits are lost.
In this regard, there are parallels with the cloud adoption journey that has taken place over the last 15 years. It took a long time for cloud platforms and services to be adopted widely in government, largely due to the paradigm shift they presented around security risks compared to on-premises setups. Like AI, cloud hosting presents risks to operations alongside huge benefits in the form of processing power, resilience and the availability of integrated tooling that enables better data exploitation and operational pace. The “Cloud First” policy that the UK government adopted a decade ago created a real drive for information and risk owners to learn to manage the newly presented risks so that their organisations could reap the benefits of digital transformation. In fact, this is still evolving, with conversations about data sovereignty and high-classification cloud hosting still ongoing in the UK and around the world.
Like the journey to cloud, AI adoption programmes are complex journeys that require an iterative, guided approach with a well-architected implementation plan. But perhaps unlike the journey to cloud, AI particularly lends itself to an approach of testing, prototyping and innovation. In this blog, we dive a bit deeper into the challenges and opportunities around AI adoption and how these could impact the world of cyber.
Characterising AI adoption
We can consider AI adoption at three levels. First, there’s the user. Comparable to the way mobile phones have evolved and been adopted by users, AI is naturally a user-driven technology. Users freely explore and experiment, with their adoption driven by their own personal preferences and rewards. As such, finding use cases for exploiting AI lends itself to getting the technology into the hands of users so that they can explore what they individually find valuable – whether that be improving communications or personal organisation, augmenting research or helping with boring tasks. This doesn’t stop it being highly impactful, but it does make it difficult to predict how the technology will evolve.
Next, we have the enterprise level. Businesses will generally be able to visualise enterprise benefits, whether through productivity gains by outsourcing tasks or processes to AI, or richer insights from data exploitation – which can in turn deliver safety, security and intelligence gains. Many of those use cases for enterprises will come from expert users. To enable this, the enterprise needs to consider how to shape user behaviour, remove barriers, create the right guardrails and assurance processes, and prioritise applications for implementation. The focus will predominantly be on shaping the environment for generating AI use cases while maintaining security, privacy and assurance.
Finally, there’s the national ecosystem level. Governments around the world are facing the challenge of encouraging their domestic industry to explore AI innovations, while simultaneously maintaining an appropriate level of assurance and risk management that reflects their digital capabilities, security maturity and areas of competitive advantage in the international trade sphere. While regulation has a role to play, countries have different appetites to regulate and different capacity to enforce – not to mention the fact that regulation is unable to keep up with the pace of technology development. That’s the balance that governments are considering and there is healthy dialogue between government and industry about who should lead and how to move forwards in lockstep.
Of course, these three levels interlink and influence each other. For example, companies get ideas from their workforce about the opportunities to exploit AI, while governments incentivise industry to invest in and deploy AI technologies aligned to national strategies. And each of these levels comes with its own challenges, including cyber security and wider governance risks. At the enterprise level, for example, businesses that fail to implement a unified and approved pipeline for deploying and assuring AI tools will quickly be faced with a level of fragmentation and complexity that is impossible to manage, ultimately bringing vulnerabilities associated with security, data privacy and operational decision making. Enforcing good practice digitally therefore becomes critical, as AI, like any technology, depends on a combination of people and process to function effectively.
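To make that idea of an approved pipeline a little more concrete, here is a minimal sketch of what a digital assurance gate might look like. The tool names, register fields and classification levels are illustrative assumptions rather than a reference to any real product, framework or policy.

```python
# Minimal sketch of an enterprise assurance gate for AI tools.
# All names, register entries and classification levels are hypothetical.
from dataclasses import dataclass

@dataclass
class AIToolRequest:
    name: str
    version: str
    data_classification: str  # e.g. "public", "internal", "sensitive"

# Hypothetical register of tools that have passed security and privacy review,
# each approved up to a maximum data classification.
APPROVED_REGISTER = {
    ("summariser", "1.2"): {"max_classification": "internal"},
    ("code-assistant", "0.9"): {"max_classification": "public"},
}

CLASSIFICATION_ORDER = ["public", "internal", "sensitive"]

def is_deployment_allowed(request: AIToolRequest) -> bool:
    """Allow deployment only if the tool is on the approved register and the
    data it will touch sits within the classification it was assured for."""
    entry = APPROVED_REGISTER.get((request.name, request.version))
    if entry is None:
        return False  # unapproved tool or version: block and route to review
    allowed = CLASSIFICATION_ORDER.index(entry["max_classification"])
    requested = CLASSIFICATION_ORDER.index(request.data_classification)
    return requested <= allowed

print(is_deployment_allowed(AIToolRequest("summariser", "1.2", "internal")))   # True
print(is_deployment_allowed(AIToolRequest("summariser", "1.2", "sensitive")))  # False
```

In practice a gate like this would sit inside the organisation's existing deployment and procurement workflow, so that unapproved tools are routed to a review process rather than simply being blocked and driving users towards shadow IT.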
The impact on cyber
When considering what AI means for cyber security, there are several angles to take into account. Arguably the most prominent is the exacerbation of the cyber threat. Today, cyber criminals and state-sponsored actors routinely use generative AI to identify vulnerabilities and assess which ones might be most impactful to exploit, while hackers can use AI tools as coding assistants for scripting and even for rewriting scripts to evade signature-based detection.
Generative AI has already proven highly effective at producing socially engineered lures for phishing attacks, while the adjacent area of misinformation and disinformation operations is a significant concern given the prevalence of deepfakes and botnet-driven social media amplification of selective messages. This is where enhancing collaboration between governments and major social media platforms can help to establish mechanisms that effectively intercept and neutralise GenAI-powered misinformation and disinformation campaigns at the source.
Across both cyber and information threats, the fact that cybercriminals are generally unrestrained by the laws, regulations and codes of practice that govern the responsible use of IT appears to give them an automatic advantage. But the greatest threat arises where the resources and capability of the threat actor intersect with strong motivation: state-sponsored actors with high levels of resource, budget and staffing, who can clearly identify a target and outcome when using AI to pursue their strategic goals.
Everyone wants to know how to protect against these threats, which is something we can address at both the tactical and the strategic level. At the tactical level, AI tools can help to find efficiencies and augment the capabilities of individual defensive teams. Many blocking and detection tools are becoming AI-powered, particularly in supporting heuristic analysis of in-line traffic or aggregated security alerts. Elsewhere, threat intelligence teams can use AI to research technologies or pull out themes in large datasets, and security testers can use it as a coding assistant for scripting tests, freeing up time for more exploratory testing – and there is plenty more road to run in all of these areas.
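As an illustration of the "themes in large datasets" point, the sketch below uses classical machine learning (TF-IDF plus k-means clustering via scikit-learn) as a simple stand-in for the theme-extraction step. The alert text and cluster count are toy assumptions; in a real deployment the grouping and summarisation might instead be handled by an LLM or a dedicated analytics platform.

```python
# A minimal sketch of pulling themes out of a set of security alerts using
# TF-IDF vectorisation and k-means clustering. Alerts and cluster count are
# illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

alerts = [
    "Multiple failed logins from unfamiliar IP range",
    "Failed login attempts followed by password reset",
    "Brute-force login activity detected on VPN gateway",
    "Outbound transfer of large archive to unknown domain",
    "Unusual volume of data uploaded to external storage",
    "Large encrypted upload outside business hours",
]

# Represent each alert as a weighted bag of words.
vectoriser = TfidfVectorizer(stop_words="english")
matrix = vectoriser.fit_transform(alerts)

# Group the alerts into two candidate themes.
model = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = model.fit_predict(matrix)

# Surface the most characteristic terms per theme for an analyst to review.
terms = vectoriser.get_feature_names_out()
for cluster_id, centre in enumerate(model.cluster_centers_):
    top_terms = [terms[i] for i in centre.argsort()[::-1][:4]]
    members = [a for a, l in zip(alerts, labels) if l == cluster_id]
    print(f"Theme {cluster_id}: {', '.join(top_terms)} ({len(members)} alerts)")
```

The value here is not the clustering itself but the workflow: machine assistance does the first pass over a large volume of alerts, and the analyst's time is spent validating and acting on the candidate themes it surfaces.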
There are also further opportunities to capitalise on the primary advantage of the defender: collective defence – i.e. acting as a unified team and sharing information, tactics and tooling. When we defend together, we have a better chance of scaling our cyber defence – some of which itself can be AI-enabled.
At the strategic level, the challenge is how AI can be used to scale the defensive cyber mission. This brings us back to adoption. We must be thinking ‘AI first’ in order for habits and incumbent thinking to change. It’s also about leadership – national technical authorities such as the National Cyber Security Centre (NCSC) championing the responsible use of AI, and industry organisations sharing their learnings with customers. Even though AI has been around for decades, the recent explosion in innovation driven by developments in chip and compute capacity means everyone is still learning as we go.
More AI, more vulnerability?
It has become clear that adopting AI comes with an element of increasing vulnerability. There’s plenty of guidance out there for secure development, supply chain assurance and AI governance that helps to augment processes to assure AI-powered products or solutions, but there are also some realities that we have to face.
Firstly, we’ve seen in the past that if we don’t enable users with the technologies they want, there is a risk of shadow IT and users bypassing security controls – for example by using off-network tools, or by creating their own tools to process corporate data without any assurance checks. That will make cyber security significantly harder in the short term, as IT departments can’t enforce policy if they don’t have visibility into how users are employing AI. As such, we should be thinking in terms of assuring AI technologies for use in corporate networks – rapidly, efficiently and at scale.
We also need to be careful to avoid superfluous bureaucracy that delays adoption. Security accreditation has in the past suffered from uncertainty about the process and information needed to facilitate approvals, particularly when security input and approvals have been addressed at the end of the development lifecycle, incurring cost and forcing trade-offs between delivery and security. Programmes like Secure by Design in the MoD have tried to address this by ensuring that security is built in throughout development, pushing for faster, smoother routes to approval by looping in security cases from the start.
Similarly, getting security considerations built in from the start, along with ethical and safety considerations, will be key to unlocking security barriers to adoption. Existing security control frameworks also need to be expanded to address AI-specific risks, from the infrastructure layer through to the application layer and across the lifecycle of development, adoption and exploitation.
The outlook for the UK
The good news for the UK with regards to AI is that it is leading in a number of technical areas, such as the interpretation and explainability of AI deployments, which will be critical to building trust and enabling exploitation. A strong research pedigree and sector hubs such as finance and life sciences leave it well positioned to lead in applying AI to globally significant industries. The UK has also shown itself to be a convener on the world stage, as illustrated by the AI Safety Summit at Bletchley Park in 2023. It also tends to be pragmatic in its approach, finding a good balance between constraint and chaos, with the ability to flex its legislative and assurance approach and to choose where to apply control mechanisms.
So, given all this, what’s the outlook? The initial focus will likely be on adopting and mastering AI at the user level. Users and professionals will move up the value chain, spending less time on admin and research tasks and more time on analysis – while providing the expert context and stakeholder empathy that AI doesn’t yet offer. And, as we move from adoption to exploitation of AI technology, attention will turn to how to measure the return on investment. Productivity gain is the obvious starting point, but new insights and capabilities will emerge that enable us to iteratively identify and pursue new opportunities for AI.
Of course, key to all of this is ensuring that we have the right skills. For us at BAE Systems Digital Intelligence, key focus areas include the digital supply chain, data science and engineering, along with expertise in data flows and AI model operations (MLOps) – with security skills embedded in the lifecycle as part of the team. We’re also working with our customers to capitalise on their cloud investment and data potential by integrating training with their chosen cloud provider, so we can help them use native AI tooling to drive value from adoption and exploitation.
At the same time as we’re building these professional skills, we continue to recruit for established skills that are still in demand, such as cyber security. There continues to be a global skills gap in cyber security, so we need to be careful not to make this a zero-sum game where we’re competing for the same talent pool.
Finally, we as an industry must bear in mind the need to retain the professional skill and discipline – whether that’s research, engineering or intelligence – to critique and assure any outputs produced by AI. This is key to trusting AI through explainability and the ability to validate the inputs and outputs. I’m sure we’ve all seen some ‘hallucinations’ and incorrect outputs produced by LLMs, so as we scale up adoption we need to be able to spot those and challenge incorrect reasoning. Similarly, we must caution against over-reliance on AI, which could erode core domain expertise if it’s not consciously maintained. We don’t want to create blind spots for ourselves.
There’s a lot to consider. Ultimately, this will be a long journey that will come with challenges, but the opportunities are clearly there for the UK to cement its position as an AI leader – implementing new capabilities that have national impact while ensuring that cyber security is never compromised.