UK banks looking to deploy artificial intelligence (AI) for loan approvals will be allowed to do so only if they can establish that the systems will not discriminate against minorities, the Financial Times has reported.
Financial watchdogs in the country have been tightening their scrutiny of leading lenders' use of AI, sources familiar with the matter said.
The digital shift in the financial services industry has led lenders to automate their lending decisions by leveraging customers' credit data.
Algorithms and AI can also group borrowers by where they live and their employment details.
Minority customers already find it tough to get a decent deal on a loan, which is why regulators are pushing for even more stringent rules on the use of AI in lending.
Lenders argue that using machine learning would eliminate the possibility of “subjective and unfair” judgments made by humans, the report said.

Clifford Chance lawyer Simon Gleeson opines that by introducing AI in lending, banks can do away with human decision-makers, whom they see as a “potential source of bias”.
The UK’s regulators and some consumer groups, however, feel that AI would not necessarily solve the problem.
“If somebody is in a group which is already discriminated against, they will tend to often live in a postcode where there are other (similar) people . . . but living in that postcode doesn’t actually make you any more or less likely to default on your loan,” said Sara Williams of Debt Camel, a personal finance blog.
“The more you spread the big data around, the more you’re going after data which is not directly relevant to the person. There’s a real risk of perpetuating stereotypes here.”
Concerns regarding the use of AI in offering credit have been raised in the US and the European Union as well.
Last week, the EU’s financial regulators called on lawmakers to analyse the use of data in AI/ML models and the biases that can lead to discrimination and exclusion.