Artificial intelligence: it’s not just about ‘machine learning’

Artificial intelligence may offer exciting ways to strengthen public services but it’s not without its challenges. Richard Thorburn sheds some light on the importance of accuracy and transparency when deploying these new technologies
Language matters in government. Of course, it matters in any context or situation, but for those in the corridors of power, a stray word or misplaced phrase can have real and cascading consequences.
 
I’m not just talking about the problems arising from a poorly drafted treaty or an ill-judged soundbite at a press conference, but also about what happens when people reach for catch-all terminology where they would do better to be precise. Technology is a good example.
 
A lot of public sector discourse uses the phrases “machine learning” (ML), “artificial intelligence” (AI) and “algorithms” interchangeably. In a way this is understandable. Not everyone is a technologist and it’s easy to assume that such terms mean basically the same thing. But they don’t – and that can be a real problem when applied to areas of public policy such as law enforcement.
 
Here, valid concerns about automated decision making and facial recognition (the conjunction of ML and computer vision) have shaped the discussion to such an extent that the use of other algorithms and forms of AI is now less likely to go ahead – even if they don’t suffer from those challenges, or if they offer the potential to help improve the effectiveness of police forces hamstrung by limited resources.
 

Back to basics

It’s helpful, then, to be clear about what exactly we’re talking about. In general, an algorithm is simply a process, a set of instructions – which can usually be implemented by a computer – to reach an outcome.
 
AI, meanwhile, refers to algorithms that try to simulate human intelligence – with most implementations targeted at specific tasks such as vision, natural language processing or other narrow areas of human capability.
 
And machine learning is a subset of artificial intelligence – one of six primary areas within AI – covering algorithms that can adapt (and hopefully improve) based on exposure to data, whether that data is labelled with the expected outcome (supervised learning) or not (unsupervised learning).
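To make the distinction concrete, here is a deliberately simple Python sketch. The feature, threshold and data are invented purely for illustration: the first snippet is just an algorithm (a hand-written rule, no learning involved), while the second uses supervised machine learning to infer equivalent behaviour from labelled examples.

```python
# Illustrative only: a fixed rule-based algorithm vs. a supervised ML model.
# The feature, threshold and labels below are invented for this example.

from sklearn.tree import DecisionTreeClassifier

# 1) A plain algorithm: a hand-written rule, no learning involved.
def flag_case(number_of_prior_reports: int) -> bool:
    """Flag a case for review if it has three or more prior reports."""
    return number_of_prior_reports >= 3

# 2) Supervised machine learning: the behaviour is learned from labelled examples.
X = [[0], [1], [2], [3], [4], [5]]          # feature: prior reports per case
y = [0, 0, 0, 1, 1, 1]                      # label: 1 = was flagged by an analyst
model = DecisionTreeClassifier().fit(X, y)  # the algorithm adapts to the data

print(flag_case(4))           # True  (fixed rule)
print(model.predict([[4]]))   # [1]   (behaviour inferred from the labelled data)
```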
 
Conflating these three terms is risky because it can limit the use of simple algorithms that are not AI, and of areas of AI that are not machine learning – all on the basis of concerns, genuine as they are, around bias, privacy and black box algorithms. And in some cases, using overly simplified algorithms that don’t work can limit people’s openness to exploring more nuanced and balanced ones.
 
Just think of the issues caused by using an overly simplified algorithm (which arguably was not intelligent at all, and definitely not AI) to decide examination grades last summer. One consequence of that deeply unfortunate episode was that it closed down any discussion of using fairer, more nuanced and more balanced algorithms – a step backwards because, when used well, algorithms and AI continue to be brimming with exciting potential.
 

Introducing ILAS

We believe that one area of AI in particular – higher-level reasoning – has the potential to transform the situational awareness of the police. This uses an encoded understanding of the world (in this case, crime) to reason about what is going on, and to infer new knowledge.
 
At BAE Systems, we have developed an inference and reasoning engine (ILAS – Intelligent lead assessment service) which uses tradecraft from analysts and investigators to, for example, proactively identify children at risk within the data and intelligence that police hold. ILAS ensures this takes place in a consistent and completely transparent manner, and without learning from historic and potentially biased data.
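The detail of ILAS itself is not set out here, but the general pattern of higher-level reasoning over encoded tradecraft can be sketched in a few lines of Python. The rules, field names and case data below are hypothetical and invented purely for illustration – the point is that the tradecraft is explicit, documented and reviewable, rather than learned from historic data.

```python
# Hypothetical sketch of rule-based inference, not ILAS itself.
# Rule names, fields and case data are invented for illustration.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    name: str                             # documented so it can be shared and scrutinised
    applies: Callable[[Dict], bool]       # the encoded piece of tradecraft

RULES = [
    Rule("repeat_missing_reports", lambda case: case.get("missing_reports", 0) >= 2),
    Rule("known_risky_address", lambda case: case.get("address_flagged", False)),
]

def assess(case: Dict) -> List[str]:
    """Return the names of every rule that fired, so each conclusion is explainable."""
    return [rule.name for rule in RULES if rule.applies(case)]

print(assess({"missing_reports": 3, "address_flagged": False}))
# ['repeat_missing_reports']  -> the reasoning behind the alert is fully visible
```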
 
It remains important to ensure that the tradecraft itself does not carry over bias from analysts and investigators, but the explainability of the system makes it easier to identify potentially discriminatory or problematic factors, ensure they aren’t used, and adapt and update the tradecraft as needed.
 
Furthermore, the act of documenting the tradecraft as it is captured from experts enables sharing, review and scrutiny in a way that can’t happen for the reasoning in an analyst’s head – thereby giving the opportunity to significantly reduce bias both at the outset and on an ongoing basis.
 
For example, identifying that an indicator or variable is a proxy for a protected characteristic can lead to it being quickly removed from the system, and to recent decisions based on that piece of logic being identified and reviewed. That makes ILAS a significant opportunity for reducing bias in policing.
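As a purely hypothetical sketch (the indicator names and decision log below are invented), removing a proxy indicator and pulling back the decisions that relied on it might look something like this:

```python
# Hypothetical sketch: once an indicator is judged to be a proxy for a protected
# characteristic, drop it from the live rule set and surface every recent decision
# that relied on it for review. All names and data are invented for illustration.

flagged_as_proxy = {"postcode_band"}   # indicator judged to be a proxy

rules = ["repeat_missing_reports", "postcode_band", "known_risky_address"]
decision_log = [
    {"case": "A-101", "fired_rules": ["postcode_band"]},
    {"case": "A-102", "fired_rules": ["repeat_missing_reports"]},
]

# Remove the problematic indicator from the rule set...
rules = [r for r in rules if r not in flagged_as_proxy]

# ...and surface the decisions that depended on it so analysts can revisit them.
to_review = [d["case"] for d in decision_log
             if any(r in flagged_as_proxy for r in d["fired_rules"])]

print(rules)      # ['repeat_missing_reports', 'known_risky_address']
print(to_review)  # ['A-101']
```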
 

Transparency matters

Look, it’s important to stress that no AI system can be completely unbiased. The data such systems rely on can reflect biased human decisions and flawed sampling in which groups are under- or over-represented. It is also often difficult to balance statistical accuracy with societal fairness – targeting higher-crime areas can perpetuate the appearance of higher crime in those areas because of higher arrest levels, for example.
But ensuring transparency and openness throughout remains critical. Only then can we make progress towards a fairer use of technology and AI in law enforcement.
 

About the author
Richard Thorburn is a Venture Lead at BAE Systems Applied Intelligence 
richard.thorburn@baesystems.com

Recommended reading

  • Making the most of machine learning. How can machine learning strengthen the use of data and alert investigation? We’ve got some ideas but we want to hear what *you* think too. Matt Boyd explains
  • Moving out of the fast lane. When it comes to new technology, organisations would be better served by slowing down and considering whether they first really need it. Holly Armitage explains why it’s better to hit the brakes
  • 5 ways emerging technologies can help tackle Coronavirus. As the Covid-19 pandemic continues to envelop the world, Roberto Desimone considers how emerging technologies can help resolve both the current crisis, as well as prevent future outbreaks from taking root
  • The human heart of technological change. Are computers really poised to assume total control over humanity in the years to come? Chris Bull says that for all the bewildering power of computers, the human element remains key to achieving longer term transformations
  • Helping innovation take flight. Group Captain Blythe Crawford is on a mission to do aviation differently. He tells Mivy James about his experiences of leveraging technology and innovation to drive forward change in Defence
Richard Thorburn, Venture Lead, BAE Systems Applied Intelligence – 5 October 2021