All of us face unexpected situations in our lives. But police officers and analysts have little room for error when they face the unexpected. It’s their role to identify situations where there is a significant risk of harm and to decide whether, when and how best to respond. If they get it wrong, someone could get hurt and their jobs could be on the line.
 
The decision-making itself – assessing what action to take – rightly remains the responsibility of police officers and analysts. But that doesn’t mean technology has no part to play in getting them to that decision point. Far from it.
 

Cops and robots

Information is the lifeblood of policing. Today’s digital world helps ensure there’s plenty of information available and fewer manual errors. But using this information effectively – to understand what is really happening out in the real world and which situations deserve attention – is far from easy. Are there technologies that can help spot high-risk situations in all this information?
 
One such technology getting a lot of attention at the moment is machine learning. However, it introduces a number of challenges when applied in this space. Situations in the real world vary enormously, and finding or creating a representative dataset to train a machine learning model is hard and fraught with legal and ethical challenges around bias. In the sensitive fields of crime prevention and counter-terrorism, where some ethnic and religious groups already feel they are unfairly singled out by the authorities, getting this wrong has serious consequences.
 
Machine learning outputs are also hard to validate. Because they come from a ‘black box’ model that has learnt what to do from a large training dataset, each individual output offers little traceability back to how it was derived or to the source material. This has big implications for how explainable machine learning outputs can be.
 
But what if we revisit how expert policing analysts do this work today – can we learn from them to find a new approach?
 

Capturing and accelerating police tradecraft

Police analysts rely on advanced tradecraft, learnt and developed through training and extensive live experience. This enables them to sift through information, extract the key factors and infer from them what’s going on. And when new information arrives, they use their tradecraft to judge whether it is relevant and what the implications might be.
 
The trouble is that applying this tradecraft takes time, and analysts can’t possibly get through all the information available to them.
 
But what if we could capture and record this analyst tradecraft for each area of expertise? It could then be managed, challenged, improved and shared. And what if we could use this articulated tradecraft to configure a system that replicates an analyst applying it, to identify high-risk situations? (A sketch of what this could look like follows the list below.)
  • We could assemble the narratives of the high-risk situations we seek, but at system speed and without leaving out any relevant information
  • Policing could understand and validate how the system works – it’s following the tradecraft, and any bias in the tradecraft is there to see and address, not hidden in data
  • Analysts could interpret the output – why a situation was highlighted as risky – and assess each situation for action as appropriate
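
To make this concrete, here is a minimal, hypothetical sketch of tradecraft captured as explicit, reviewable rules, written in Python. Every rule name, data field and threshold below is an illustrative assumption on our part, not a description of any real system:

# Hypothetical sketch: analyst tradecraft captured as explicit, reviewable
# rules. Every name, field and threshold below is illustrative only.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    """One articulated piece of analyst tradecraft."""
    name: str
    rationale: str                      # why an analyst considers this relevant
    condition: Callable[[dict], bool]   # test applied to the known facts

@dataclass
class Assessment:
    """The system's output: which rules matched, and why."""
    situation_id: str
    matched: list = field(default_factory=list)

# Written-down tradecraft can be managed, challenged, improved and shared.
TRADECRAFT = [
    Rule("repeat_calls",
         "Several calls from the same address in a short window",
         lambda f: f.get("calls_last_7_days", 0) >= 3),
    Rule("vulnerable_person",
         "A person at the address is flagged as vulnerable",
         lambda f: f.get("vulnerable_person_present", False)),
]

def assess(situation_id: str, facts: dict) -> Assessment:
    """Apply every rule to the facts, recording each match and its rationale."""
    result = Assessment(situation_id)
    for rule in TRADECRAFT:
        if rule.condition(facts):
            result.matched.append((rule.name, rule.rationale))
    return result

# Example: the output is fully traceable back to the rules and the facts.
report = assess("SIT-001", {"calls_last_7_days": 4,
                            "vulnerable_person_present": True})
for name, rationale in report.matched:
    print(f"{name}: {rationale}")

The point of writing tradecraft down like this is that the logic is explicit: an analyst or an oversight body can read every rule, and every output carries its own explanation.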
 

Introducing ‘inference’ technology

Let’s take it a step further. What if we could apply the concept of confidence to each situation, based on the contributing information, to give a sense of how likely it is that the situation is actually playing out rather than a coincidence? (See the sketch after the list below for one way this could work.)
  • There would be fewer false positives – instead, a range of situations with varying confidence levels, based on contributing information that the analyst can understand and validate
  • Police could prioritise resources to focus on high severity situations
  • They could also understand where there are opportunities for early intervention
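
As an illustration, here is one hypothetical way such a confidence score could be combined from the contributing evidence – a simple noisy-OR style combination, sketched in Python, where every indicator and weight is an illustrative assumption:

# Hypothetical sketch: combining the weights of contributing evidence into
# a single confidence score (noisy-OR style). All weights are illustrative.

def confidence(evidence_weights: list[float]) -> float:
    """Combine independent pieces of evidence into one score in [0, 1].

    Each weight is how strongly that piece of evidence, on its own,
    suggests the situation is real rather than a coincidence.
    """
    remaining_doubt = 1.0
    for w in evidence_weights:
        remaining_doubt *= (1.0 - w)    # each new item reduces the doubt
    return 1.0 - remaining_doubt

# Example: three weak-to-moderate indicators together give a clearer signal,
# and the analyst can still see exactly which items contributed.
indicators = {
    "repeat_calls": 0.4,
    "vulnerable_person_present": 0.5,
    "similar_incident_nearby": 0.3,
}
score = confidence(list(indicators.values()))
print(f"confidence: {score:.2f}")       # 0.79 with these illustrative weights

Because the score is built from named, weighted indicators, an analyst can see exactly which pieces of information drove the confidence up – and challenge any weight they disagree with.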
 
We refer to this new approach as ‘inference’ technology, because it infers what’s going on from the available information. We’ve built it for policing, and it’s already being trialled by a UK police force, where the early signs are positive.
 
We’re on the cusp of a new ‘Augmented Intelligence’ approach that supports police officers and analysts on the frontline of fighting crime – harnessing this new inference technology in the race to stay ahead of criminals as they seek to exploit the vulnerable and damage lives.
 

About the authors
 
Matt Boyd is Head of Futures at BAE Systems Digital Intelligence
matt.boyd2@baesystems.com
 
Richard Thorburn is a Venture Lead at BAE Systems Digital Intelligence
richard.thorburn@baesystems.com
