BAE Systems

In his latest blog, David Nicholson takes a look at the reasons for bias in AI and the role of regulators, governments and big tech firms in strengthening the culture of responsibility and ethical behaviour within financial organisations, beyond data processing

Tuesday 30 March
Read time: 3 mins
When it comes to the Financial Services (FS) sector, the need for fair and ethically sound Artificial Intelligence (AI) systems is paramount. FS organisations are making increasing use of AI to inform critical decisions that can adversely affect their clients, such as the denial of loans and credit card applications. It is therefore important that any decisions generated by AI algorithms are not unfairly biased against a particular demographic group within their customer population, for example by producing a disproportionate rate of false positives for that group.

Why does AI bias exist?

AI tools don’t just become biased by themselves; they pick up on innate human and societal bias woven into data sets, algorithmic setups and decision outcomes. Even the big tech firms haven’t been completely spared from innate bias creeping into their algorithmic tools. In 2018, it was reported that one such firm had to scrap a ‘sexist AI tool’ used for recruitment that picked up on gender bias and penalised women in its rating system. The tool was trained on data submitted by applicants over a 10-year period, which mostly came from men. The system picked up on this imbalance and taught itself that male candidates were preferable.
Modern AI solutions are driven by machine learning (ML) algorithms that predict outcomes for new instances of data. The outcomes of those predictions, once investigated, are fed back into the learning process, which adapts to new data and becomes more accurate over time. However, if there is underlying bias in the data or in the outcome decisions, the learning process can amplify that bias.
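The feedback loop described above can be sketched in a few lines of code. This is a toy illustration, not a model from the blog: the group names, starting rates and the `retrain` update rule are all invented to show how a small initial imbalance can compound when a model is retrained on its own past decisions.

```python
# Toy sketch (hypothetical numbers): a model retrained on its own past
# decisions can amplify a small initial imbalance between two groups.

def retrain(rates, lr=0.5):
    """Nudge each group's approval rate toward the share of recent
    approvals it received -- a crude stand-in for retraining on the
    model's own (already biased) outcome data."""
    total = sum(rates.values())
    return {g: r + lr * (r / total - 0.5) for g, r in rates.items()}

# A 10-point gap between two equally creditworthy groups...
rates = {"group_a": 0.55, "group_b": 0.45}

# ...widens with every retraining cycle on the loop's own outputs.
for _ in range(5):
    rates = retrain(rates)
```

After five cycles the gap between the two groups has grown from 10 percentage points to roughly 75, even though nothing about the applicants themselves has changed: the only input is the model's own earlier decisions.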

What role do regulators, governments and big tech firms play in ethical AI?

The UK Government’s Centre for Data Ethics and Innovation released a report into bias in algorithmic decision making in November 2020, which looked at bias specifically in the recruitment, financial services, policing and local government sectors. The aim of the report was to make “cross-cutting recommendations that aim to help build the right systems so that algorithms improve, rather than worsen, decision-making”.
But this issue reaches further than regulatory and government bodies. Big tech organisations also have a role to play in informing safeguards and policy development. For example, Microsoft created the FATE community group to define the fairness, accountability, transparency and ethical principles and processes in AI. The group studies “the complex social implications of AI, machine learning, data science, large-scale experimentation, and increasing automation”, with the aim “to facilitate computational techniques that are both innovative and ethical, while drawing on the deeper context surrounding these issues from sociology, history, and science and technology studies”.
However, it is worth noting that the groups and government bodies behind these safeguards and policies acknowledge that their thinking is preliminary. The reality is that ethics in AI is a relatively new concept that has grown in prominence over the last three to four years, and one we are all still trying to fully understand, unpick and control. Current safeguards and controls need to be fully worked through FS use cases and tightened up over time, in order to become a cornerstone of FS ethics principles.
We are all aware that AI and its underpinning ML algorithms can be a game-changer for financial institutions, driving significant improvements in detection. But we must minimise the possibility of amplifying bias across the modelling lifecycle by prioritising AI ethics. The FS sector must lean on strong model governance principles and practices to recognise and remediate the effect of AI bias, which could lead to unfair or unethical outcomes if left uncontrolled.
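The blog does not prescribe a specific governance check, but one widely used example of recognising bias in decision outcomes is the “disparate impact ratio”: each group's approval rate divided by that of the most-approved group, with ratios below 0.8 (the so-called four-fifths rule) commonly treated as a red flag. The sketch below, with invented outcome data, shows how such a check might look.

```python
# Hypothetical sketch of one common model-governance check: the
# disparate impact ratio across demographic groups. Ratios below
# 0.8 (the "four-fifths rule") are often used as a warning threshold.

def disparate_impact(decisions):
    """decisions: {group: list of 0/1 approval outcomes}.
    Returns each group's approval rate relative to the best-off group."""
    rates = {g: sum(d) / len(d) for g, d in decisions.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Invented outcomes: group_a approved 80% of the time, group_b 40%.
ratios = disparate_impact({
    "group_a": [1, 1, 1, 0, 1],
    "group_b": [1, 0, 0, 1, 0],
})
# group_b's ratio is 0.4 / 0.8 = 0.5, well below the 0.8 threshold,
# so this model would be flagged for investigation and remediation.
```

A check like this is deliberately simple; in practice it would sit alongside other fairness metrics and human review within a broader model governance framework.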
For more information on how to control AI bias, check out our latest Banking Insight: Artificial Intelligence – friend or foe for financial institutions.


Sign up to get the latest industry intelligence and insight

Stay on top of the latest news, forthcoming webinars, new podcast episodes and upcoming trends in Banking, Insurance, Data and Cyber by signing up to our BAE Systems Insights series. Hear from industry experts sharing their views on hot topics and new technologies. Delivered fresh to your inbox. 

Get in touch with our experts today


Americas  +1 720 696 9830     |     Europe, Middle East  +44 (0) 330 158 3627     |     AsiaPac  +61 290 539 330