ONS public surveys show that 32% of respondents remain concerned about AI taking their jobs, while others doubt AI's ability to do a job as intuitively as a human (as discussed in the recent Financial Times article, "AI won't fix the real issue with customer service").
In pursuit of public sector efficiency and productivity gains, the UK Government wants to leverage AI within its digital systems. At the beginning of 2025, it unveiled a new blueprint to turbocharge the use of AI and "deliver a decade of national renewal", with improved public services at the centre of the drive. The announcement was informed by a 50-recommendation action plan put forward by Matt Clifford of the Advanced Research and Invention Agency (ARIA), and it will affect systems used at both a societal and an individual level.
Balancing benefits with caution
Healthcare, policing, education, civil engineering and transport are all set to benefit from the effective deployment of AI over the coming years. But caution is crucial to avoid misuse and to gain, and maintain, public trust.
In healthcare, AI is already enhancing diagnostic capabilities in areas such as cancer and diabetes diagnosis. When it comes to planning consultations and communication with public bodies, agentic AI is paving the way for quicker task resolution and problem solving. In policing, tools like the Intelligent Lead Assessment Service (ILAS) are filtering and flagging possible cases of domestic or sexual violence in a way that leads to quicker outcomes and more manageable workloads for officers. In education, administrative tasks are being planned and scheduled automatically, allowing for heightened focus on the students themselves.
When these systems are in the decision chain, we need to consider how much decision-making agency will be traded for efficiency and productivity, and how to build and maintain public trust. A focus on safe, secure and ethical AI deployment, one which takes account of both the societal and individual impacts of this technology, is key.
Risk and opportunity
All of these enhancements and digital disruptions come with privacy and security caveats, and the government needs to prove it can handle a new era of data management in the public sphere. Gartner, for example, has already predicted that by 2028, 25% of enterprise breaches will be traced back to AI agent abuse.
Ultimately, AI amplifies risk as well as opportunity, so trust must be considered every step of the way. And trust requires fairness and accountability. Without these, the wider public will not be a willing partner on the journey the Government wishes to embark upon.
What are we doing to help?
Alongside supporting our customers with AI integration programmes, our AI Assessment Service supports organisations that wish to maintain their 'high trust' status while moving forwards with technology at pace. Our teams assess both the cyber and AI layers, covering organisational maturity and remediation, as well as AI penetration testing and security assessments.
This is just one way we are pursuing the goal of ‘AI with purpose’ - enabling our customers to apply AI carefully, purposefully and effectively.
AI with purpose
From space to subsea, our talented teams are engineering ways to integrate Artificial Intelligence (AI) for our customers right now. We use the best models and approaches so our customers can accelerate their exploitation of AI, building higher-quality efficiencies and faster outcomes, with responsibility at the core. This is AI with purpose.