How a new model of collaboration can detect data risk
Richard Thorburn, Venture Lead, BAE Systems Applied Intelligence
26 Oct 2021
Data sharing has many advocates but it is fraught with ethical risk. Richard Thorburn says technology holds the key to delivering greater collaboration, ethics and transparency
I was asked to give a reference for a former colleague the other day. I was happy to oblige and gave her my details to be added to her CV. I was quite surprised, then, when she replied to thank me but explained that she couldn’t include them because of GDPR requirements. Nowadays, it transpires, you can’t put other people’s names and contact details on your résumé.
This is just one (minor) example of how data sharing is not straightforward and, of course, the problem isn’t limited to individuals. In the context of data and information, collaboration across organisations is also often constrained by privacy concerns and restrictions.
In some situations, there are clear limitations on the purposes for which data can be used and the conditions under which it can be shared. In others, the challenge is around policy and the confidence of organisations to share data in a way that doesn’t overstep legally or ethically.
Don’t get me wrong – clearly any data sharing has to be lawful, ethical, transparent and well understood – but bluntly implementing these guardrails can also limit how effectively organisations can work, and how quickly they can make use of their data. In law enforcement, for example, such restrictions can be particularly challenging. Agencies operate under specific legal powers enabling them to use particular data and intelligence, yet they often need to collaborate closely with other partners to safeguard and protect the public.
It’s good to share – or is it?
Determining what to share is a challenge and often leaves organisations in a Catch-22 situation. With blanket sharing of data even for law enforcement purposes unacceptable, they have to decide whether there is justification for sharing information before its full relevance and context is known. This leads to a number of problems when trying to collaboratively detect risk across organisational boundaries.
For example, information and intelligence are commonly shared only when a piece of information alone implies significant risk – leaving open the possibility that a wider pattern of low-level indicators is missed, even if together they indicate a high level of risk.
Where indicators of risk are shared on a larger scale – often through channels such as sharing hubs – the need to manually implement controls on proportionality and justification often limits the speed at which information can be shared and cross-referenced, and the amount of intelligence that can be developed.
And finally, the volume of low-level information that must be manually triaged to meaningfully detect risk can leave certain organisations or teams overloaded as they collate, cross-reference and analyse the information coming in from partner organisations.
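The first of these problems can be made concrete with a small sketch. The figures and threshold below are entirely illustrative: three organisations each hold one weak indicator about the same subject, none strong enough on its own to justify sharing, yet their combined score is well above the bar.

```python
# Hypothetical illustration: each organisation shares an indicator only
# when it crosses a threshold on its own. The numbers are invented.
SHARE_THRESHOLD = 0.7

local_indicators = {
    "org_a": 0.30,  # e.g. a suspicious purchase
    "org_b": 0.35,  # e.g. an unusual travel pattern
    "org_c": 0.30,  # e.g. a flagged association
}

# What actually gets shared under item-by-item thresholding: nothing.
shared = {org: s for org, s in local_indicators.items() if s >= SHARE_THRESHOLD}

# What the combined picture would have shown: a clear signal.
combined_risk = sum(local_indicators.values())

print(shared)         # {} -- no single indicator crosses the bar
print(combined_risk)  # 0.95 -- the pattern exceeds the threshold
```

The point is not the arithmetic but the blind spot: thresholding each item in isolation discards exactly the cross-organisational pattern that collaboration exists to find.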
These challenges are generally built on an assumption that in order to fuse and analyse data, it must first be centralised and shared. However, while this is often the way that collaborative systems are built, it is not the only way that collaboration can happen across organisations.
Talking up tech
Using knowledge-based technologies, information can be assessed and analysed to a certain level automatically, and different organisations can share and collaborate at this level of risk-relevant knowledge without sharing all of the underlying data, or needing to centralise it. This gives four key benefits over the current “centralise-fuse-analyse” model.
Firstly, it’s proportionate. Only the relevant knowledge needed to identify risk is shared, on a case-by-case basis, rather than all of the underlying data.
Secondly, it’s justified. This is because the knowledge being shared is built up with the detection of risk in mind, and so it is inherently justified by being tied to a purpose, and having initially been pre-analysed to extract the relevant knowledge from the underlying data.
Thirdly, it’s controlled. The knowledge shared is sufficient to detect potential risk and as such, access to knowledge and the underlying data can be withheld until a higher level pattern of risk is identified.
And fourthly, it’s fast. Using technology to streamline the process of collaboration drastically reduces the time taken to identify risk, as well as the manual effort required to analyse activity and decide the appropriate action to take. And because risk-relevant knowledge is being shared rather than the underlying data, a smaller volume of information needs to be transferred, which is easier to keep up to date and quicker to analyse for risk.
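The model above can be sketched in a few lines of code. Everything here is an assumption for illustration – the class names, the additive scoring and the threshold are invented, not a description of any real system – but it shows the shape of the idea: each organisation pre-analyses its own records locally, shares only derived indicators, and the underlying data stays put unless a cross-organisational pattern of risk emerges.

```python
from dataclasses import dataclass, field

# Illustrative sketch of "share knowledge, not data". All names and the
# simple additive scoring are hypothetical.

@dataclass
class Indicator:
    subject: str   # shared identifier for the entity being assessed
    kind: str      # risk-relevant knowledge, e.g. "unusual_travel"
    score: float   # local assessment of how strongly it signals risk

@dataclass
class Organisation:
    name: str
    _raw_records: dict = field(default_factory=dict)  # never leaves the org

    def extract_knowledge(self) -> list[Indicator]:
        """Pre-analyse raw data locally; expose only derived indicators."""
        return [Indicator(subj, kind, score)
                for subj, (kind, score) in self._raw_records.items()]

def cross_reference(indicators: list[Indicator], threshold: float) -> set[str]:
    """Combine indicator-level knowledge across organisations and flag
    subjects whose aggregate score crosses the threshold. Only for those
    subjects would access to the underlying records then be requested."""
    totals: dict[str, float] = {}
    for ind in indicators:
        totals[ind.subject] = totals.get(ind.subject, 0.0) + ind.score
    return {subj for subj, total in totals.items() if total >= threshold}

org_a = Organisation("org_a", {"subject_1": ("suspicious_purchase", 0.30)})
org_b = Organisation("org_b", {"subject_1": ("unusual_travel", 0.35)})

pooled = org_a.extract_knowledge() + org_b.extract_knowledge()
print(cross_reference(pooled, threshold=0.6))  # {'subject_1'}
```

Note the design choice this illustrates: proportionality and control are properties of the architecture, not of a manual review step – raw records are simply never in the shared channel.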
All this transforms what was previously quite a catch-all model for collaboration into a targeted one, ensuring that data is not centralised just because it comes from the right source or could be relevant. Instead, knowledge is inferred by analysing the data on a case-by-case basis, and shared because it is specifically relevant to detecting a targeted risk.
Fundamentally, combining analysis, collaboration and reasoning into a continuous and integrated process – rather than separating out stages of sharing and analysis – provides a proportionate, justified and controlled model of collaboration to detect risk at speed.
And because this significantly reduces friction in the collaborative process, it is likely to be very helpful in advancing law enforcement and safeguarding in the future.