
January 25, 2022
5 minute read

Can AI in banking remove human bias and enable fairer credit decisions?


Data can help power fairer lending decisions – as long as it’s used in the right way 

Key Takeaways:  

  • Lending is often “riddled” with biases 
  • Data offers a fairer way of making credit decisions 
  • Lenders must be careful to avoid introducing new biases into their data sets 

The banking sector has been working hard to remove bias and build a fairer industry. Yet lending is "riddled with biases against protected characteristics, such as race, gender, and sexual orientation," the Harvard Business Review warned in 2020. This should no longer be the case, for many good reasons, starting with the very simple fact that lenders are no longer allowed to discriminate. 

In the US, the 1974 Equal Credit Opportunity Act prohibited credit-scoring systems from using information such as sex, race, religion, marital status, or national origin to make decisions. Yet today, figures show that loan rejection rates remain higher among Black, Latino, and Asian communities. When the Paycheck Protection Program was rolled out in 2020, entrepreneurs from minority ethnic groups also found it more difficult to arrange small-business relief loans, with the problem particularly acute at smaller banks. 

Although financial institutions are forbidden from discriminating against minority groups, they may still inadvertently rely on data sets that contain biases. In the US, one in five Black consumers and one in nine Hispanic consumers have FICO scores below 620, compared with one in 19 white consumers. The FICO credit score draws on metrics such as the amount of money owed, the length of a credit history, and the mix of credit products. But experts believe this data is shaped by generational wealth, which members of minority communities have had less access to, as well as by other factors that prevent fair, equitable lending.    
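For illustration, the category weights FICO publishes (payment history 35%, amounts owed 30%, length of credit history 15%, credit mix 10%, new credit 10%) can be turned into a toy score. The real model is proprietary, so every function name and input value below is an assumption made purely for the sketch:

```python
# Toy sketch only: FICO publishes category weights, but the real
# scoring model is proprietary. All values here are illustrative.
WEIGHTS = {
    "payment_history": 0.35,
    "amounts_owed": 0.30,
    "length_of_history": 0.15,
    "credit_mix": 0.10,
    "new_credit": 0.10,
}

def illustrative_score(factors, floor=300, ceiling=850):
    """Map category scores in [0, 1] onto the familiar 300-850 range."""
    weighted = sum(WEIGHTS[k] * factors[k] for k in WEIGHTS)
    return round(floor + weighted * (ceiling - floor))

# A short, thin credit file drags the score down even with a perfect
# payment record -- one way generational advantages can compound.
thick_file = {"payment_history": 1.0, "amounts_owed": 0.9,
              "length_of_history": 0.9, "credit_mix": 0.8, "new_credit": 0.9}
thin_file = {"payment_history": 1.0, "amounts_owed": 0.9,
             "length_of_history": 0.2, "credit_mix": 0.3, "new_credit": 0.9}

print(illustrative_score(thick_file))
print(illustrative_score(thin_file))
```

Even in this simplified form, the borrower with the shorter history and narrower credit mix scores visibly lower despite identical payment behavior.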

“We’re often told to stop talking about history, but history won’t stop talking about us,” Frederick Wherry, professor of sociology and director of the Dignity and Debt Network at Princeton University, told Forbes. “The data used in current credit scoring models are not neutral; it’s a mirror of inequalities from the past. By using this data we’re amplifying those inequalities today. It has striking effects on people’s life chances.” 

Learning from machines 

One way of tackling bias lies in removing the human element of decision making and replacing it with machines that draw on a wide variety of data sets.  

“Our current financial system suffers not only from centuries of bias, but also from systems that are themselves not nearly as predictive as often claimed,” the Brookings Institution wrote in a report published in 2020.  

“The data explosion coupled with the significant growth in ML and AI offers tremendous opportunity to rectify substantial problems in the current system.” 

It added that “artificial intelligence (AI) presents an opportunity to transform how we allocate credit and risk, and to create fairer, more inclusive systems,” pointing to “AI’s ability to avoid the traditional credit reporting and scoring system that helps perpetuate existing bias.” 

Getting AI right  

Unfortunately, AI may develop its own prejudices and draw on data points which are a proxy for protected characteristics when making decisions.  

You don’t need to look far to uncover examples of biased AI. Amazon once found that its automated recruitment service favored men over women, because it was trained to look for patterns such as certain types of language that tended to be used more often by men. After admitting that its AI was biased, Amazon offered “no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory,” the news agency Reuters reported.  
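The proxy problem can be made concrete with a minimal sketch. The synthetic data, feature names, and thresholds below are all invented for illustration; nothing here comes from any real lender's model. A decision rule that never sees the protected attribute can still discriminate if it uses a feature that correlates with it:

```python
# Sketch: a decision rule that never sees the protected attribute can
# still discriminate through a correlated proxy feature. All data here
# is synthetic and all names are hypothetical.
import random

random.seed(42)

def make_applicant(group):
    # Assumed setup: a "neighborhood score" proxy correlates with group
    # membership, echoing historic patterns such as segregation.
    base = 0.7 if group == "A" else 0.4
    score = min(1.0, max(0.0, random.gauss(base, 0.15)))
    return {"group": group, "neighborhood_score": score}

applicants = [make_applicant("A") for _ in range(5000)] + \
             [make_applicant("B") for _ in range(5000)]

def approve(applicant):
    # The decision rule ignores `group` entirely...
    return applicant["neighborhood_score"] > 0.55

def approval_rate(group):
    pool = [a for a in applicants if a["group"] == group]
    return sum(approve(a) for a in pool) / len(pool)

rate_a = approval_rate("A")
rate_b = approval_rate("B")
print(f"Group A approval rate: {rate_a:.2f}")
print(f"Group B approval rate: {rate_b:.2f}")
# ...yet approval rates diverge sharply, because the proxy carries the
# group signal into every decision.
```

Dropping the protected column from the training data, in other words, is not enough: the signal leaks back in through whatever correlates with it.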

Data diversity 

This problem is already being addressed at an official level. The UK Government, for instance, has set out guidelines on how to deal with algorithmic bias and warned that “new forms of decision-making have surfaced numerous examples where algorithms have entrenched or amplified historic biases; or even created new forms of bias or unfairness”. 

The report called on business leaders and decision-makers to “engage with understanding the trade-offs inherent in introducing an algorithm”. They should understand how algorithms make decisions and monitor the data that is being used to make sure biases are not being introduced.  
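One simple monitoring check of the kind the report calls for is the "four-fifths rule" used in US fair-lending and employment analysis, which flags a model when any group's approval rate falls below 80% of the highest group's rate. The sketch below uses made-up approval figures and hypothetical group labels:

```python
# Monitoring sketch: the US "four-fifths rule" flags a model when a
# group's approval rate drops below 80% of the best-served group's rate.
def disparate_impact_ratio(approvals_by_group):
    """approvals_by_group maps group -> (approved, total)."""
    rates = {g: approved / total
             for g, (approved, total) in approvals_by_group.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Assumed example figures, not real lending data.
outcomes = {"group_a": (820, 1000), "group_b": (560, 1000)}

ratios = disparate_impact_ratio(outcomes)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios)   # group_b ratio = 0.56 / 0.82, roughly 0.68
print(flagged)  # ['group_b'] falls below the 0.8 threshold
```

A check like this cannot prove a model fair, but run routinely over live decisions it gives reviewers an early signal that a bias may have crept into the data or the model.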

At FintechOS, we believe at least some of the answers to bias lie in bigger data, drawn from a variety of sources. Data now serves as the backbone of financial institutions, and more of it is available than ever before. Lenders can draw on non-traditional data points when assessing creditworthiness, such as records of regular rental payments. But all the information in the world cannot help a bank whose core systems cannot handle this stream of information. Data can build a fairer industry, but banks need the right systems in place to get the best from it. By giving your bank the tools to build customer-centric, data-driven experiences and applications, FintechOS can help you battle bias.  

To find out how FintechOS is driving a paradigm shift for banks and financial institutions, book a demo. 
