Artificial intelligence (AI) has become a common part of vendor messaging, with many tools now including their own natural language features or copilots. These built-in functions promise to simplify analysis and improve decision making. In practice, however, they often create inconsistency, increase cost and reduce transparency.
This blog examines an alternative approach to AI adoption. Instead of relying on embedded AI within each product, organisations can build a unified external AI layer that interacts with SOC tools using standard interfaces. This approach, which we refer to as ‘AI from the outside’, offers a controlled and predictable path to enhancing analyst capability without exposing critical systems to uncontrolled automation.
Limitations of embedded AI in SOC tooling
The industry-wide trend to include AI within every tool has led to mixed outcomes. While natural language interaction can be helpful, many implementations lack the robustness, reliability and predictability needed in operational environments. Analysts also often encounter varying behaviours across tools, unclear guardrails and inconsistent results because of the non-deterministic characteristics of large language models.
Another challenge is the cost of token consumption. AI interactions that appear trivial in daily use can translate into significant operational expenses, particularly where multiple tools independently integrate their own AI features – each consuming resources in isolation.
These problems highlight a deeper issue. Embedded AI typically extends the complexity of a single tool without addressing the broader challenge of how SOC components work together. Yet analyst efficiency suffers most from the constant need to switch between systems.
The result tends to be one of two outcomes: a move towards monolithic platforms, where a single vendor ecosystem provides a fully integrated solution, but its opacity makes information security teams nervous and long-term lock-in becomes a conscious choice; or a lock-in-free ecosystem in which many tools offer partial automation, yet none delivers the cohesive, end-to-end support needed for a simple user experience.
Working inwards not outwards
‘AI from the outside’ provides a different perspective by shifting the focus from internal automation inside each tool to external orchestration across the SOC environment. With this approach, the organisation deploys its own auditable and assured AI system that communicates with tools through well-defined, narrow and secure interfaces.
Distributed SOC architectures are the foundation
BAE Systems has been discussing the benefits of distributed SOC architectures with customers for many years. The approach mirrors service-based application architectures, which avoid vendor lock-in by exploiting loose coupling to simplify component upgrades and swap-outs.
Similar conversations have been happening at other strategic forums in the IT industry. An example is the Security Operations and Analytics Platform Architecture, widely known as SOAPA, which was proposed by Jon Oltsik at Enterprise Strategy Group in 2016. It was created to address fragmentation across SOC technologies and to move beyond isolated SIEM deployments. SOAPA describes a layered approach that includes a distributed data layer, an integration and services layer, an analytics layer and an operations layer. It’s no surprise that it translates well into the current era of pervasive AI given the explicit focus on the exposure of data through open integrations.
The key idea across all such concepts is that security operations should be built on a shared data and analytics fabric, rather than the alternatives of either a single monolithic platform or a collection of disconnected tools.
This decoupled data-centric philosophy aligns closely with the ‘AI from the outside’ approach. This architecture makes it far easier to attach an external AI layer that can see relevant data, call the right tools, and help coordinate workflows without customising each tool internally. Security is controlled by the architectural implementation and not by the vendor, allowing Information Security teams to apply policy and utilise observability tools to validate compliance. New functionality is evolutionary and non-impacting at every stage from the Large Language Model (LLM) to the toolsets.
Most importantly, this also encourages gradual adoption, since tools can be added or replaced with limited disruption if they participate in the shared architecture.
Model Context Protocol provides the middleware
Security Orchestration, Automation and Response (SOAR) tools have for some time aimed to provide the functional ‘glue’ to orchestrate across multiple external platforms. The ability to drive action through simple interfaces has always been the attraction of these platforms, but conversely the creation (and maintenance) of these interfaces in a proprietary environment like SOAR becomes both a burden and a lock-in. Whilst the introduction of AI in SOAR can offer a path to this ‘from the outside’ approach, the opaque nature of internal SOAR AI means that fears about guardrails and unintended consequences remain. And SOAR platforms come at a significant price.
The Model Context Protocol, often referred to as MCP, was introduced by Anthropic in November 2024 as an open standard for connecting AI systems to external tools, data sources and services. MCP defines how AI applications interact with MCP servers, which expose a catalogue of functions that the AI can call. These functions are defined and controlled by the organisation, which ensures that the AI operates only within the intended boundaries.
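To make this concrete, MCP messages follow the JSON-RPC 2.0 format, with tool invocations carried in a `tools/call` request. The sketch below builds such a request by hand; the `search_siem` tool name and its parameters are invented for illustration, and real deployments would use an MCP SDK rather than constructing messages directly.

```python
import json

def mcp_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP 'tools/call' request. MCP messages follow JSON-RPC 2.0,
    so every call names one organisation-defined tool and its arguments."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# The AI can only invoke tools the server has catalogued, e.g. a
# hypothetical 'search_siem' function with a bounded time window.
msg = mcp_tool_call(1, "search_siem", {"query": "dst_ip:203.0.113.7", "hours": 24})
print(msg)
```

Because every action the AI takes is one of these explicit, named calls, the organisation can enumerate, restrict and log exactly what the AI is permitted to do.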
This structure allows an AI system to coordinate activity across several tools through transparent, controlled interactions. For example, an analyst may ask whether any activity related to a newly reported intrusion has been observed. The AI can retrieve public information on the incident, identify indicators of compromise, and query internal systems such as the SIEM or threat intelligence platform – all via MCP servers – without any one tool needing direct awareness of the others.
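The coordination pattern in that example can be sketched as follows. The three tool functions below are stand-ins for MCP server calls (the names `fetch_public_report`, `query_siem` and `check_intel_platform`, and the data they return, are invented for this illustration); the point is that the external AI layer sequences them while no tool needs awareness of the others.

```python
def fetch_public_report(intrusion: str) -> dict:
    # Stand-in for an MCP tool that retrieves open-source reporting.
    return {"intrusion": intrusion, "iocs": ["203.0.113.7", "evil.example.net"]}

def query_siem(ioc: str) -> int:
    # Stand-in for an MCP tool that counts SIEM hits for an indicator.
    return {"203.0.113.7": 3}.get(ioc, 0)

def check_intel_platform(ioc: str) -> bool:
    # Stand-in for a threat intelligence platform lookup.
    return ioc.endswith(".net")

def triage(intrusion: str) -> list:
    """Coordinate the three tools from outside: the external AI layer
    sequences the calls, and no tool knows about the others."""
    report = fetch_public_report(intrusion)
    findings = []
    for ioc in report["iocs"]:
        findings.append({
            "ioc": ioc,
            "siem_hits": query_siem(ioc),
            "known_to_intel": check_intel_platform(ioc),
        })
    return findings

print(triage("Example-Intrusion-2025"))
```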
Because MCP has been adopted across multiple AI platforms and tooling ecosystems, it provides a realistic and portable way to standardise these connections rather than building one-off integrations for every tool and every model. This means we can create the simplified interfaces we want from SOAR tools in an open and easy MCP-led way, without lock-in or SOAR license costs.
Anthropic’s Claude Skills, announced in October 2025, are a recent addition to the AI tooling landscape that builds on MCP. Skills package instructions, scripts and resources into modular capabilities that a model loads only when required. A skill can describe the responsibilities, decision-making style and tradecraft of a particular role, along with the tools that role is allowed to use.
For example, an organisation may define a skill for a threat intelligence researcher that outlines how to structure findings, how to handle incomplete evidence, and which MCP tools to call to retrieve or verify data. When this skill is active, the AI behaves much more like a consistent member of the SOC team than a generic assistant. The behaviour is guided by the explicit definition of the role and the limited set of tools linked to that role.
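A skill of this kind might look something like the fragment below. Anthropic documents skills as folders containing a SKILL.md file with YAML frontmatter; the role details and tool names here are invented for illustration.

```markdown
---
name: threat-intel-researcher
description: Structures intelligence findings, handles incomplete
  evidence cautiously, and uses only the approved lookup tools.
---

# Threat Intelligence Researcher

- Record every finding with its source and a confidence level.
- Mark evidence gaps explicitly; never extrapolate from missing data.
- Allowed tools (via MCP only): search_siem, check_intel_platform.
```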
Combining skills with MCP control provides a strong foundation for analytical work. All interactions pass through well-defined interfaces, skills bring structure and repeatability, and the organisation retains control of both behaviour and access.
“‘AI from the outside’ offers a safer first step toward the long-term goal of autonomous support. Because all actions pass through defined MCP interfaces, each function is limited in scope and is visible to the organisation. The AI is not given unrestricted access to internal capabilities. Instead, it is granted access only to narrow, well understood operations such as searching logs, summarising findings, or requesting that a preexisting orchestration pipeline is executed.” – Chris Holt, BAE Systems Digital Intelligence
Moving towards Agentic AI
Many organisations are exploring the idea of agentic AI, where systems can perform complex tasks with minimal supervision. In practice, this may involve autonomous triage of alerts, continuous enrichment of cases, or unattended execution of low-risk playbooks.
Although the potential is significant, full autonomy inside security tools carries operational risk. Without careful design, there is a genuine possibility of accidental deletion of data, misinterpretation of tasks, or unintentional execution of high impact actions. Early experiments in long running autonomous coding and workflow agents across the software industry have already shown that models can ignore constraints, make incorrect assumptions, or take shortcuts when tasks are difficult, even if they are not malicious.
‘AI from the outside’ offers a safer first step toward the long-term goal of autonomous support. Because all actions pass through defined MCP interfaces, each function is limited in scope and is visible to the organisation. The AI is not given unrestricted access to internal capabilities. Instead, it is granted access only to narrow, well understood operations such as searching logs, summarising findings, or requesting that a preexisting orchestration pipeline is executed. Thanks to this loose coupling, pipelines that incorporate agentic AI can also be created, enabling users to benefit from the many low-code and no-code offerings that support rapid autonomy development.
This decoupled model also makes it easier to add monitoring and review around AI actions. Requests from the AI must pass between systems as they execute, so they can be logged, inspected or gated by human approval. Capabilities can be added or removed by changing the MCP servers rather than reworking each underlying product. Furthermore, by applying Open Policy Agent (OPA)-style policy-as-code principles to SOC decision-making, actions such as isolating endpoints, disabling accounts or blocking IPs are gated by a deterministic, auditable policy layer under the organisation’s control. The OPA approach evaluates facts and returns explicit allow/deny decisions with reasons, which can be recorded as an immutable audit of AI actions. This separates judgement from execution, providing transparency and control that opaque, embedded AI features cannot offer.
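The policy-gating idea can be sketched in plain Python. In a real deployment the rules would be written in OPA's Rego language and evaluated by an OPA server; the action names, rules and decision shape below are invented for this illustration, but the pattern of deterministic allow/deny decisions with recorded reasons is the same.

```python
import json
from datetime import datetime, timezone

# Illustrative rules only; real deployments would express these in Rego.
POLICY = {
    "search_logs": {"allow": True, "reason": "read-only operation"},
    "block_ip": {"allow": True, "reason": "reversible containment action"},
    "isolate_endpoint": {"allow": False, "reason": "requires human approval"},
}

def evaluate(action: str, requested_by: str) -> dict:
    """Return an explicit allow/deny decision with a reason, mirroring the
    deterministic judgements an OPA policy layer would produce."""
    rule = POLICY.get(action,
                      {"allow": False, "reason": "unknown action denied by default"})
    return {
        "action": action,
        "requested_by": requested_by,
        "allow": rule["allow"],
        "reason": rule["reason"],
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def audit_log(decision: dict) -> str:
    # Serialising each decision yields an append-only record: the
    # immutable audit trail of AI-requested actions described above.
    return json.dumps(decision, sort_keys=True)

decision = evaluate("isolate_endpoint", "ai-layer")
print(audit_log(decision))
```

Note that the policy denies by default: an action the organisation has not explicitly catalogued is refused, which is what keeps judgement separate from execution.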
The result is a more controlled and auditable path toward agentic behaviour.
Conclusion
The rush to embed AI in every tool has not delivered the transformation many organisations anticipated. SOCs require consistency, visibility and a clear sense of control, yet embedded AI often introduces complexity rather than reducing it. We believe ‘AI from the outside’ offers a practical alternative that enhances analyst capability and creates opportunities for automation.
By adopting a unified external AI layer, organisations can gain the benefits of natural language interaction, structured analysis, and cross-tool coordination while avoiding the risks associated with uncontrolled internal autonomy. This approach provides a stable path forward. It allows teams to experiment with skills, MCP based integrations, and early forms of agentic behaviour in a way that remains transparent and manageable. Over time, it can form the basis for more advanced automation, while keeping human architects and SOC leads firmly in control.
Subscribe to our Threat Intelligence Insights newsletter to hear more about the perspectives and experiences of our clients and stay up to date with the latest insights from our experts.