The global artificial intelligence (AI) market is primed for massive growth. Grand View Research predicts that it will reach $1.8 trillion by 2030, growing at a compound annual growth rate (CAGR) of 36.6%, driven by factors including ever-increasing data volumes and the growing demand for image processing.
Clearly, this will present huge opportunities for organisations across both the private and public sectors – from automating processes to enabling predictive maintenance and deriving insights from vast quantities of data. There’s no domain that won’t be touched by the power of AI technologies.
We must recognise that the opportunities of AI also come with risks – this is unavoidable. But, while AI can be seen as a double-edged sword, the reality is more nuanced: there is a spectrum of risk associated with AI across different domains and use cases. The key is for nations and enterprises alike to understand where they want to sit on this spectrum – putting themselves in the best position to optimise their operations and exert their digital influence through AI while managing the accompanying risks to their business. Managing risks, rather than avoiding them entirely, is the only way to leverage AI without constraining adoption to the point where the benefits are lost.
At the enterprise level, the focus will predominantly be on maintaining security, privacy and assurance when implementing AI technologies. At a national level, AI capabilities such as autonomous systems and advanced threat detection offer the potential to transform national security, while empowering nations to exert their strategic influence on the global stage. The challenge for governments across the world is figuring out their optimum point on the risk-innovation spectrum.
Recognising AI’s societal threat
It’s no exaggeration to say that AI poses material risks to societies around the world. For example, while AI tools allow for faster and more precise threat detection and response, they also empower hostile actors to enhance their capabilities whilst lowering the bar for new entrants. AI is giving cyber-criminals and hostile nation states alike access to more sophisticated tooling that can be used to wreak havoc on our systems.
As such, nations’ critical national infrastructure (CNI) – from banking systems and energy plants to transportation networks and satellite systems – is at risk from enhanced AI-based attacks. So too are our democratic elections. AI presents several risks to the election cycle, such as giving nefarious actors the capability to conduct sophisticated disinformation campaigns that mislead voters in order to influence their decision-making and undermine the electoral process.
An arguably greater and more fundamental threat to society is the arms race towards Artificial General Intelligence (AGI) – systems that match or surpass human capabilities across virtually all cognitive tasks. If adversaries get there first, we will face an insurmountable strategic disadvantage. This highlights the scale of the challenge at the national level: governments must simultaneously encourage their domestic industries to explore AI innovations while maintaining a level of assurance and risk management appropriate to the nation – reflecting its digital capabilities and security maturity, along with its future employment landscape and environmental priorities. Clearly, there is a risk of further entrenching imbalances, both between nations and within societies.
Given that cyber threats can be deployed at a scale that could impact the safety and prosperity of society, a ‘whole of nation’ approach is required to counter them. Every organisation innovating with AI has a role to play in balancing risk and reward by understanding where assurance is most important to reflect high impacts and putting risk management at the core of their activities. In the same way that strong cyber security enables digitisation, risk management enables safe exploitation and adoption of AI – particularly in the contexts of Defence, National Security and CNI.
The use cases for AI – the problems to which we apply the technology – themselves range from low- to high-risk. Assessing whether a decision can be safely outsourced to a machine comes down to accuracy, predictability and explainability – from deterministic technologies that provide expected outputs (such as Robotic Process Automation) through to probabilistic systems such as large language models (LLMs). AI systems can have guardrails applied, and they depend heavily on data, training, inferencing and human-in-the-loop oversight to achieve acceptable outputs – but even then, such systems might not be appropriate for highly regulated industries and mission-critical systems.
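As a concrete illustration of the kind of guardrail described above, the following is a minimal, hypothetical sketch in Python: an action is only taken autonomously when a model’s self-reported confidence clears a risk-based threshold, and everything else is escalated to a human reviewer. The Prediction type, threshold value and routing labels are illustrative assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass

# Hypothetical risk-based threshold; in practice this would be set per use
# case, reflecting the impact of an incorrect automated decision.
CONFIDENCE_THRESHOLD = 0.9

@dataclass
class Prediction:
    label: str         # the model's proposed decision
    confidence: float  # the model's self-reported score in [0, 1]

def route_decision(prediction: Prediction) -> str:
    """Guardrail: act autonomously only on high-confidence outputs;
    everything else is escalated to a human reviewer."""
    if prediction.confidence >= CONFIDENCE_THRESHOLD:
        return "automated"    # deemed safe to outsource to the machine
    return "human_review"     # probabilistic output falls below tolerance

# A low-confidence classification is escalated rather than actioned.
print(route_decision(Prediction(label="approve", confidence=0.72)))  # -> human_review
```

In practice, the threshold would be calibrated to the consequences of a wrong automated decision – far stricter for mission-critical systems than for low-risk back-office automation.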
The cost efficiencies and productivity gains offered by such technologies must be balanced against risk. Innovative adoption therefore requires a fast and effective process to be established for making those risk-based decisions within an overarching mindset of safe, secure and ethical AI deployment – which sits at the heart of our work with customers in high-trust sectors.
Shaping an assured adoption journey
There are several potential avenues to take when thinking about enabling assured AI. As previously mentioned, every government’s core mission should be to balance the growth of AI against the security threat and other safety risks. This requires encouraging sensible decisions within individual enterprises and at a national level in order to manage economic and societal risks.
Eliminating all risk isn’t the answer; rather, the goal is to find ways to manage risk and to recover when things go wrong. These decisions involve choosing appropriate use cases and securing the foundations of AI systems, thereby enabling a nation to build trust and assurance proportionate to its risk tolerance.
For example, at an enterprise level, implementing Secure by Design principles is a fundamental requirement. This includes securing the data that’s used to build models as well as the systems themselves – setting up the right data pipelines, recognising how data can be manipulated (particularly through open-source software), and understanding what constitutes bias. The role of a government is to encourage, complement and support this work by setting standards and guardrails that help enterprises make effective choices.
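To make this tangible, here is a hypothetical sketch of one Secure by Design control over a data pipeline: verifying training data against a signed-off manifest of SHA-256 digests before it is allowed anywhere near model training. The file path and digest are illustrative placeholders rather than a real pipeline.

```python
import hashlib
from pathlib import Path

# Hypothetical manifest of approved training files and their SHA-256 digests,
# captured when the dataset was curated and signed off. The path and digest
# below are illustrative placeholders.
APPROVED_MANIFEST = {
    "training/records.csv": "<expected sha-256 hex digest>",
}

def verify_dataset(root: Path) -> bool:
    """Recompute each file's digest and compare it with the approved manifest,
    surfacing any tampering or drift before the data reaches model training."""
    for relative_path, expected_digest in APPROVED_MANIFEST.items():
        actual_digest = hashlib.sha256((root / relative_path).read_bytes()).hexdigest()
        if actual_digest != expected_digest:
            print(f"Integrity check failed: {relative_path}")
            return False
    return True

if __name__ == "__main__":
    # Refuse to proceed with training if any input fails verification.
    if not verify_dataset(Path("data")):
        raise SystemExit("Refusing to train on unverified data")
```

A check like this doesn’t address bias or poisoning at source, but it does ensure that only the data that was curated and reviewed is the data that trains the model.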
The dynamic is similar with regards to AI regulation – an area that is evolving rapidly around the world and, again, comes with plenty of nuance. For example, the UK has recognised that while regulation is certainly a national-level consideration, it’s an area where enterprises have a view and should be encouraged to contribute.
Of course, this topic is inherently linked to the debate around innovation. How much regulation is too much? Is there a risk of stifling innovation rather than enabling it? Questions like these highlight the need for governments to take a balanced approach. We’ve seen nations take care not to ‘over-regulate’, while still making clear where the regulatory guardrails lie in an attempt to focus innovation. This includes the UK, which set out its pro-innovation approach to AI regulation in March 2023 and has since issued guidance to help regulators implement the key principles.
We can learn lessons about risk-based decisions on technology adoption by looking back at the journey to the cloud over the last 15 years. In terms of risk and governance, the adoption of AI isn’t dissimilar – it presents risks to our data and our operations – but we have to learn to manage them if we want to earn the benefits of digital transformation. By embracing AI with guardrails in place, and by not falling into the trap of over-enforcement, governments can support the development of AI capacity in their countries through safe innovation – managing risks rather than shying away from them.
There are also various multilateral activities designed to examine and address the potential impact of AI systems at an international level that governments can connect into. These include the Bletchley Declaration – signed by nearly 30 countries that attended the AI Safety Summit in November 2023 – and the EU’s General-Purpose AI Code of Practice. Sitting adjacent to AI as a technology, there’s also the Pall Mall Process, which seeks to regulate and contain the proliferation of cyber hacking and surveillance tools – capabilities that are increasingly enabled by AI.
Building on assurance
Ultimately, the public and private sectors must be aligned to ensure that AI is adopted in a way that empowers innovation without sacrificing privacy and resilience. Businesses and governments must recognise that the route to the safe exploitation and adoption of AI technologies is through assurance. This can be achieved through the effective implementation of AI governance policies covering the four key foundational elements of People, Processes, Technology and Data.
This will be key to ensuring that new capabilities are embraced safely and securely, and that the true benefits of AI technologies are realised. There’s no hiding from the fact that AI comes with risks as well as rewards. Building capabilities on a foundation of assurance and governance is how nations will strike the most effective balance in pursuit of a national AI advantage.
AI with purpose
From space to subsea, our talented teams are engineering ways to integrate AI for our customers right now. We use the best models and approaches so our customers can accelerate their exploitation of AI – building higher-quality efficiencies and faster outcomes, with responsibility at the core. This is AI with purpose.