Seven ethical considerations for artificial intelligence

Artificial intelligence is here to stay but fears about this new technology remain stubbornly high. Mivy James sets out seven ways to inject greater calm and reassurance into its deployment
For computer scientists like me these are exciting times. It’s almost hard to believe that proper artificial intelligence (AI) is no longer the stuff of sci-fi.
 
It’s such a shame, then, that the excitement at the potential of this game-changing technology seems to be drowned out by the fear of it. Fear that automation will replace humans and make us entirely redundant; fear of putting AI into the wrong hands; fear of AI systems going rogue (The Terminator movies have a lot to answer for); and much else besides.
 
All this means we’re deeply anxious about “machines taking over” and about what AI is truly capable of that could result in jobs going. People often refer to this as the latest industrial revolution, and the nature of employment will certainly change; but without any clues as to what future jobs could look like, the threat remains a real fear. And so my first ethical consideration for AI would be:
 

1. Don’t just ask “can AI be used here?” but “should it be?” – because of the implications for the workforce

We expect machines to make decisions and, unlike humans, never to make any mistakes. A single error unravels all faith and trust in the machine – in fact, in all machines. This means we’re under pressure to make machines perfect and infallible, which would be arduous and time-consuming – precisely the opposite of what we’re trying to achieve with Agile, for example.
 

2. Consider our tolerance for machine-made mistakes

I might find it nothing more than mildly amusing when marketing algorithms push content to me that is way off the mark for my tastes. No harm done other than to my sense of fashion pride. It’s hardly the same as being able to demonstrate that whatever data a system has automatically gone after as part of an investigation is definitely necessary, proportionate and authorised.
 
Can we use multiple, entirely separate machines and check for consensus, just as a jury works today? Note that this pattern is already applied in safety-critical systems, with completely separate teams building each implementation.
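As a minimal sketch of the jury idea – assuming three independently built classifiers that expose a common predict interface (the RuleModel stand-ins here are hypothetical) – a consensus check might look something like this:

```python
from collections import Counter

class RuleModel:
    """Stand-in for an independently built classifier (hypothetical)."""
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, value):
        return "flag" if value > self.threshold else "clear"

def jury_verdict(models, sample, quorum):
    """Majority vote across separately built models, jury-style.
    Returns None when no quorum is reached, i.e. escalate to a human."""
    votes = [m.predict(sample) for m in models]
    label, count = Counter(votes).most_common(1)[0]
    return label if count >= quorum else None

# Three "separate machines", each built with different assumptions
jury = [RuleModel(0.5), RuleModel(0.6), RuleModel(0.7)]
print(jury_verdict(jury, 0.65, quorum=2))  # -> 'flag' (two of three agree)
```

The design point is the separation: each model is built by a different team, so a shared blind spot is less likely to slip through unanimously.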
 

3. Quantify the risk and comfort of varying levels of automation for different scenarios

By scenario I mean things like gaining efficiency by reducing onerous, repeatable work done by humans – sometimes a single application of automation can be transformative – as well as speed of response. In other words, if the need is low but the impact of it going wrong is high, don’t do it. If the need is high (e.g. an automated response to a hypersonic missile) then the tolerance for impact is higher.
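To make that trade-off explicit – purely illustrative, with made-up 1–5 scales and thresholds – you could score each scenario on need and failure impact and derive a recommended level of automation:

```python
def automation_recommendation(need, failure_impact):
    """Map need for automation and impact of failure (both scored 1-5,
    illustrative scales) to a recommended level of automation."""
    if need <= 2 and failure_impact >= 4:
        return "do not automate"              # low need, high impact
    if need >= 4:
        return "automate, human on the loop"  # high need tolerates more risk
    return "automate, human in the loop"      # the middle ground

# e.g. automated response to a hypersonic missile: need and impact both high
print(automation_recommendation(need=5, failure_impact=5))  # -> automate, human on the loop
print(automation_recommendation(need=1, failure_impact=5))  # -> do not automate
```

The numbers are not the point; forcing every scenario through the same explicit scoring conversation is.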
 

4. Focus on data collaboration standards and practice to boost public confidence

To improve data collaboration, the public’s perception of government accessing and sharing personal data needs to improve – even in an era when so much of our lives is lived out and shared on social media. UK government departments now operate under unprecedented public scrutiny. The first step in the public (and therefore policymakers) trusting automated machines is trusting the way government uses personal data.
 

5. If we’re to rely on data-driven decision making we need to be pretty confident about the quality and provenance of that data

Fake news, anyone? You could see an infinite loop occurring when trying to use AI to spot fake news if you don’t know whether the system is being fed fake news to train on.
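A very basic first line of defence – a sketch assuming training files ship alongside a trusted, separately distributed manifest of SHA-256 digests (the manifest format here is hypothetical) – is to refuse to train on data whose provenance can’t be verified:

```python
import hashlib
from pathlib import Path

def provenance_ok(path, trusted_manifest):
    """Return True only if the file's SHA-256 digest matches the entry
    in a trusted manifest obtained out of band (e.g. signed by the provider)."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return trusted_manifest.get(path) == digest

# manifest = {"news_corpus.txt": "9f86d081..."}  # hypothetical entry
# if not provenance_ok("news_corpus.txt", manifest):
#     raise RuntimeError("unverified training data - refusing to train")
```

Hash checking only proves the data is what the provider said it was; it says nothing about whether the provider’s labelling was honest. That harder problem is exactly why provenance needs designing in from the start.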
 

6. Know how to take innovation to enterprise scale in a secure fashion even when you don’t know what the innovation is

It’s pretty easy to go and have a little play with AI technologies nowadays. Just set yourself up with a trial cloud account, grab some Python knowledge and off you go. The technology skill and cost barriers to practical innovation are pretty low. What’s not so easy is taking that innovation to enterprise scale and doing it securely. This can lead to frustration, impatience and division within organisations.
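To illustrate just how low that barrier now is, here is a toy example – assuming only that scikit-learn is installed – in which a working classifier takes a handful of lines:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# A bundled demo dataset and an off-the-shelf model: minutes, not months
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier().fit(X_train, y_train)
print(f"accuracy: {model.score(X_test, y_test):.2f}")
```

The distance between this and a secure, supported, enterprise-scale service is precisely the point of this consideration.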
 
It’s important to remember, though, that not all innovation will make it to scale, and that needs to be accepted.
 

7. Don’t tolerate bias in the machine

The ethical issue that is of most concern to me right now is bias in the machine, and I worry we are running out of time. There have been some high-profile news stories about machine learning going wrong, and we can’t really blame the machines, because we taught them that prejudice.
 
Machine learning must be trained on wildly diverse data sets. And diversity within the teams building the automation isn’t optional. There’s a lot of focus on improving diversity and inclusion everywhere. For me, this is the most urgent place to bring about change in tech.
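A simple starting point – a sketch assuming your labelled training records carry a group attribute of interest (the field names here are hypothetical) – is to audit how the data and its labels are distributed across groups before any model is trained:

```python
from collections import Counter

def audit_representation(records, group_key="group", label_key="label"):
    """Report per-group record counts and positive-label rates, so skewed
    training data is visible before it teaches a model a prejudice."""
    counts, positives = Counter(), Counter()
    for record in records:
        counts[record[group_key]] += 1
        positives[record[group_key]] += record[label_key]
    return {g: (counts[g], positives[g] / counts[g]) for g in counts}

# Hypothetical training records
data = [{"group": "A", "label": 1}, {"group": "A", "label": 1},
        {"group": "A", "label": 0}, {"group": "B", "label": 0}]
print(audit_representation(data))  # -> {'A': (3, ~0.67), 'B': (1, 0.0)}
```

Group B is barely represented and never labelled positive; a model trained on this data would learn that skew. An audit like this doesn’t fix bias, but it makes it impossible to claim nobody saw it.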
 

Don’t panic

Our adversaries don’t operate within the same ethical and legal frameworks that we do. That gives us a challenge, in that we could be limited in what we are technically allowed to do. However, it also gives us an advantage: if they’re less worried about bias in the machine and about machines making mistakes, their systems will ultimately be less reliable and less trusted – so we will have the upper hand. Not just morally.
 
AI doesn’t need to be something to fear if conscious choices are made – that’s the bottom line. 
 

About the author
Mivy James is Digital Transformation Director at BAE Systems Applied Intelligence
19 October 2021