Chronic risk in a digital age

Published 11 September 2025
Business Digital Intelligence | United Kingdom
What does the UK Government’s new analysis on the current risk landscape mean for cyber security resilience?

The UK Government’s newly published Chronic Risks Analysis (CRA) offers a stark but valuable look at the enduring, systemic risks that threaten the nation’s long-term resilience. While the National Risk Register focuses on immediate, acute incidents, the CRA deals with persistent, compounding challenges ranging from climate change and demographic shifts to geopolitical instability and deepening technological dependence.

For those of us working in cyber security and technology, the implications are clear: our resilience strategies can no longer be based solely on short-term threat models or tactical fixes. We must start thinking about risk through a chronic lens – acknowledging not only today’s threat landscape, but tomorrow’s deeply interconnected system shocks.

The CRA identifies 26 long-term risks grouped under seven broad themes: Security; Technology and Cyber Security; Geopolitical; Environmental; Societal; Biosecurity (including health); and Economic. These themes reflect the systemic nature of the world we live in and how risks in one domain may cascade into others.

The implications for cyber security

The CRA dedicates an entire section to “Technology and Cyber Security”, identifying four key chronic risks within the tech and cyber umbrella:

  1. Changes in the nature of cyber security threats
  2. Increasing reliance on digital platforms
  3. Dominance of global technology companies and concentration of risk
  4. Impacts from use and capability of Artificial Intelligence (AI)

These four risks are not isolated issues; they represent long-term, systemic challenges that are shaping the future of digital resilience. Together, they present a layered and evolving challenge for businesses, public institutions, and critical national infrastructure.

Building on those themes, it feels appropriate to focus on four key takeaways: the persistence and acceleration of diverse cyber threats; the compounding effect of skills shortages; the increasing fragility caused by our over-reliance on digital platforms and cloud ecosystems; and the dual nature of artificial intelligence as both a powerful enabler and a significant risk.

1.    Cyber threats are persistent, diverse and accelerating

The CRA states that “Cyber attacks, such as ransomware, continue to pose a significant and ongoing threat to individuals, businesses and critical national infrastructure. The increasing complexity and severity of these attacks, along with a growing range of perpetrators—many based overseas—intensify the risk landscape.” 

We’ve seen this play out recently through high-profile incidents affecting Marks & Spencer, Co-op, and Synnovis, the pathology provider whose ransomware attack disrupted thousands of NHS appointments. These attacks reflect not only rising technical complexity, but also the diversification of threat actors, ranging from nation-states and ransomware-as-a-service operators to criminal splinter groups and lone actors. While the individuals behind the recent attacks against UK retailers were believed to be UK-based, and some have been arrested, many threat groups operate across borders or outside any meaningful jurisdiction, complicating attribution and response.

The cybercrime economy itself is becoming more sophisticated, with ransomware identified as one of the greatest and most persistent risks to governments, businesses, and critical infrastructure. The CRA references both the Microsoft Digital Defense Report 2024 and the National Crime Agency’s Strategic Assessment, which warn of a growing professionalisation of ransomware operations, complete with affiliate networks, support infrastructure, and monetisation pipelines that mirror legitimate industries. These are no longer opportunistic hacks; they are chronic, scalable business models designed to exploit systemic digital dependencies and weak cyber security controls.

One area the CRA briefly highlights, but which has sparked recent public controversy, is end-to-end encryption (E2EE). The analysis notes that it is a “rapidly evolving risk with wide-ranging implications for the UK’s national security and public safety.” This framing aligns with recent developments where the UK Government reportedly issued a technical notice to Apple and other tech providers, requesting capabilities to access encrypted communications. While intended to disrupt serious crime, terrorism, and child exploitation, many in the security community view this as a double-edged sword: weakening encryption to catch ‘bad actors’ also risks undermining the privacy and security of law-abiding users, and simply pushes threat actors onto other platforms or unregulated channels. Resilience requires careful balance: targeted interventions must not erode the foundational trust and safety of the digital ecosystem we all rely on.

2.    Skills shortages compound technical vulnerabilities

A chronic shortage of skilled cyber professionals is one of the most underappreciated and escalating risks. According to the UK NCSC, 44% of UK businesses lack the internal capability to meet even basic cyber security standards. This challenge isn’t just about hiring gaps; it is a structural weakness in people’s understanding of cyber risks and in the national response to evolving cyber threats.

As demand continues to outpace supply, organisations increasingly depend on overstretched internal teams or look to external security partners for support, underscoring the need for well-integrated, strategic collaboration. This creates fragility at scale, especially for small and medium enterprises that can’t compete for top talent or afford a suitably skilled outsourced service provider to act on their behalf.

The CRA indirectly highlights how this shortage amplifies other chronic risks:

  • Burnout and retention challenges further reduce resilience in incident response
  • Lack of diversity in the cyber workforce can limit innovation and blind spot detection
  • Digital transformation projects often proceed without adequate security oversight due to staff constraints
  • Underfunded training and development programmes leave current staff unprepared for AI, cloud or OT-specific threats

This all suggests that fixing the cyber skills gap isn’t just a workforce issue; it’s a risk mitigation imperative. Boards and public sector leaders must treat investment in workforce capability as they would physical infrastructure or technology investment. That includes:

  • Upskilling existing staff with technical and strategic cyber knowledge
  • Mentoring early-career professionals and providing structured development paths
  • Collaborating with academia and industry to build more inclusive pipelines
  • Supporting neurodiverse and non-traditional candidates who bring fresh perspectives

The future threat landscape cannot be managed with 2020-level staffing. If cyber resilience is to be sustainable, it must be resourced, trained and nurtured like any other core business function.

3.    Over-reliance on digital platforms and cloud ecosystems increases fragility

Digital convenience brings with it hidden systemic risks. Our dependence on global tech giants and digital platforms centralises risk. A single outage, breach, or act of sabotage can ripple across markets, sectors, and countries. The CrowdStrike incident in 2024 is a clear example where a software bug, not a cyber-attack, triggered widespread disruption. The CRA and supporting Ofcom data report that over 70% of the UK’s cloud infrastructure is dominated by just two vendors, raising serious concerns about redundancy and vendor lock-in, as well as data sovereignty.

Beyond core cloud platforms, the proliferation of connected and embedded technologies – particularly Internet of Things (IoT) devices – has introduced new, often invisible, vulnerabilities. While the Government has legislated to improve consumer IoT security through the Product Security and Telecommunications Infrastructure (PSTI) Act, the CRA rightly highlights that action is urgently needed across enterprise and industrial IoT environments. IoT products often lack basic controls, visibility, or update mechanisms, yet are increasingly connected to sensitive operational or safety-critical systems. Without careful governance, IoT could represent the soft underbelly of digital transformation.

But fragility in the digital ecosystem extends beyond outages and weak endpoints. The CRA also highlights how competition for critical technology components (especially advanced semiconductors) is reshaping geopolitical power dynamics. The heavy concentration of chip manufacturing in a small number of regions creates systemic exposure not only to economic disruption but to strategic vulnerabilities including national security. A breakdown in this supply chain could stall innovation, limit access to essential compute resources, and destabilise sectors reliant on high-performance and real-time technology.

As our reliance on digital platforms and third-party providers continues to grow, so too does the imperative to adopt emerging technologies in a secure and sustainable way. The CRA notes that dominance by a handful of global tech firms is shifting control and influence in the international arena. One potential response could be for governments and organisations to reassess the trade-offs between convenience and sovereignty, and between short-term efficiency and long-term resilience.

4.    AI is a double-edged sword

The CRA frames AI as both a strategic enabler and a magnifier of chronic risk, particularly in the cyber domain. Its dual-use nature means AI can just as easily be used to harden defences as it can to automate and scale attacks.

On the defensive side, AI and machine learning are already embedded in threat detection, user behaviour analytics, and automated incident response. These capabilities offer speed, scale, and adaptability in the face of growing attack volumes and complexity, especially where security teams are resource constrained.
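
To make the defensive use concrete, here is a minimal, hypothetical sketch of the kind of anomaly scoring that underpins user behaviour analytics: flagging observations (e.g. hourly failed-login counts) that deviate sharply from a baseline. The data, threshold, and function names are illustrative assumptions, not drawn from the CRA; production tooling uses far richer models.

```python
from statistics import mean, stdev

def anomaly_scores(counts):
    """Return a z-score per observation; values far from the mean
    indicate unusual activity relative to the baseline."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return [0.0] * len(counts)
    return [(c - mu) / sigma for c in counts]

def flag_anomalies(counts, threshold=2.0):
    """Indices whose absolute z-score exceeds the threshold."""
    return [i for i, z in enumerate(anomaly_scores(counts))
            if abs(z) > threshold]

# Illustrative hourly failed-login counts: the spike at index 5
# stands out against an otherwise quiet baseline.
hourly_failed_logins = [3, 4, 2, 5, 3, 40, 4, 3]
print(flag_anomalies(hourly_failed_logins))  # → [5]
```

The point of the sketch is the principle, not the statistics: automated baselining lets overstretched teams surface the handful of events worth human attention.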

But adversaries are adopting AI just as aggressively. We are already seeing:

  • Polymorphic malware that mutates its code to evade traditional detection
  • Deepfake-based phishing, voice impersonation and business email compromise
  • Chatbot-assisted scams and social engineering at scale
  • AI-augmented reconnaissance where attackers rapidly map exposed assets
  • Weaponised language models that automate misinformation and malicious scripting


The CRA notes that computational resources used to train AI models are doubling every 3–4 months, a pace that is rapidly accelerating the capabilities of AI systems. As compute scales, so too does capability: today’s frontier AI models can generate content, pass professional exams, and create highly realistic images, videos, or code from simple text prompts.

This new arms race in automation is not just technical, it’s strategic. The CRA warns that AI development is accelerating faster than policy and regulatory frameworks can adapt. Without careful governance, AI adoption within organisations could unintentionally introduce new vulnerabilities, deepen reliance on opaque third-party models and the data used to train them, and erode public trust through misuse or bias.

But this isn't purely a risk conversation. As the National Security Strategy 2025 and the AI Opportunities Action Plan both emphasise, the UK sees AI as a transformative capability with the potential to revolutionise public services, healthcare, defence, and economic competitiveness. These government strategies reflect the dual reality: AI presents huge national opportunities, but also introduces serious and novel security risks that must be designed for, governed and continuously reviewed.

Various UK Government and global initiatives – such as the research being undertaken by the AI Security Institute – signal recognition of this challenge, but regulation alone will not close the gap. Organisations must now:

  • Treat AI assurance as part of their security and risk governance frameworks
  • Include AI-related threats in red teaming and scenario planning exercises
  • Monitor how AI is used in security tooling for transparency, accuracy, and bias
  • Educate employees on AI-enhanced phishing and fraud techniques


In short: AI is not a future risk; it’s a chronic one already embedded in today’s threat landscape. Strategic resilience means designing with AI in mind – not just for competitive advantage, but for control, continuity, and trust.

A cross-cutting resilience imperative

While cyber is the core focus for many of us, the CRA emphasises the “interconnectedness” of chronic risks. For instance:

  • Cyber threats may be used as tools of state interference, terrorism, or organised crime
  • Increasing digital exclusion could deepen vulnerabilities to fraud and identity theft
  • Environmental shocks or supply chain failures could cascade into digital service disruptions
  • The misuse of data and platforms could fuel disinformation, polarisation and social fragmentation


In short: no chronic risk exists in isolation. And that means our responses must be equally integrated.

As well as diagnosing the problem, the CRA offers a structured preparedness framework that organisations can use to build long-term resilience. Drawing on futures thinking and scenario modelling, it encourages:

  • Selecting chronic risks that matter most to your organisation (e.g., cyber threats + AI + skills gaps)
  • Workshopping realistic future scenarios (e.g., AI-enhanced ransomware cripples a key technology service provider to you during a supply chain disruption)
  • Identifying mitigation strategies, which can be grouped into five categories:
    • Mitigate: Reduce the likelihood (e.g., Cyber Essentials Plus, implement segmented architecture)
    • Adapt: Change to cope (e.g., workforce resilience, multi-cloud strategy)
    • Exploit: Leverage the opportunity (e.g., using AI to accelerate detection and response)
    • Continue: What still works (e.g., mature governance and awareness programmes)
    • Terminate: What to phase out (e.g., unsupported legacy systems where possible or weak vendors)
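
The five-category framework above can be captured in something as simple as a structured risk register. The sketch below is a hypothetical illustration, with made-up entries echoing the examples in the list; the class and category names are my own, not the CRA's.

```python
from dataclasses import dataclass, field

# The CRA's five response categories for a workshopped scenario.
CATEGORIES = {"Mitigate", "Adapt", "Exploit", "Continue", "Terminate"}

@dataclass
class ChronicRisk:
    """One chronic risk and the responses grouped by category."""
    name: str
    responses: dict = field(default_factory=dict)

    def add(self, category: str, action: str) -> None:
        if category not in CATEGORIES:
            raise ValueError(f"Unknown category: {category}")
        self.responses.setdefault(category, []).append(action)

# Illustrative scenario from the workshop step above.
ransomware = ChronicRisk("AI-enhanced ransomware via a key supplier")
ransomware.add("Mitigate", "Cyber Essentials Plus; segmented architecture")
ransomware.add("Adapt", "Multi-cloud strategy; workforce resilience")
ransomware.add("Terminate", "Phase out unsupported legacy systems")
```

Even this trivial structure enforces the discipline the framework asks for: every proposed action must be placed in exactly one of the five categories before it enters the register.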

The Cyber Essentials certification scheme remains a vital foundation for organisations of all sizes. As highlighted by the NCSC, it guards against the most common cyber-attacks while signalling to customers and suppliers that the business takes cyber security seriously. It defines the minimum technical standard that every organisation should implement as a baseline before tackling more complex risks. Without this basic foundation, or more advanced certification, strategic resilience will always rest on shaky ground.

Upcoming regulation is also raising the bar. The UK’s proposed Cyber Security and Resilience Bill will expand the scope of regulated digital services, strengthen the powers of regulators, and enhance incident reporting requirements, giving government a more accurate picture of the threat landscape. It reflects a shift from voluntary best practice to mandatory resilience expectations, particularly for organisations providing essential services.

Final Thought

We are living in a time of polycrisis: where chronic risks overlap, compound and accelerate. As cyber professionals, we’re no longer just defending against technical threats – we’re safeguarding the stability of systems that underpin our economy, democracy and society.

The CRA publication is certainly worth a read and makes a compelling case: resilience is strategic. That’s always been true. But now, more than ever, we need to stop thinking of security as a department and start embedding it across every part of our organisations, for the long term.

Get in touch
Steve Buckley

Enterprise Security Consultant – National Security and Government
