Rejected for a Credit Card? AI May Be to Blame

Howso Staff

How Howso can help banks meet new Consumer Financial Protection Bureau regulations.

Many of us have had a vexing experience at one time or another: we got turned down for a credit card, a loan, or an apartment rental because we weren’t deemed “creditworthy.” Most often, we have little idea why a lender turns down an application for credit.

Financial institutions are mandated to explain why they approve or deny credit requests, in large part due to regulations from the Consumer Financial Protection Bureau (CFPB). The CFPB requires lenders and landlords to provide an “adverse action letter” explaining what factors led to a denial if a consumer requests one, even though most people don’t realize they can ask for this information.

And now the issue is only getting thornier, because lenders are increasingly using black-box AI algorithms to decide who is creditworthy and who is not.

Last month, the CFPB released new guidance on how lenders must explain credit denials made using AI.

In a nutshell, lenders must provide “accurate and specific reasons for credit denials” made by AI, and this is a huge issue for banks and credit card companies.

That’s because they’re using black-box AI algorithms and predictive models: consumer data goes into the models and an answer is spit out, but there is no visibility into the factors the model weighed to accept or deny an application. With black-box AI, financial institutions don’t know why an application is denied, making it impossible to comply with CFPB requirements.
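
To make the problem concrete, here is a minimal sketch of a black-box credit model. The data, feature names, and model choice are synthetic stand-ins for illustration, not any lender’s actual system; the point is that the model’s output is just a label and a score, with no built-in record of why.

```python
# A minimal, hypothetical sketch of a black-box credit model: consumer
# data goes in, a decision comes out, and nothing in the output says why.
# All data and feature names here are synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["income", "utilization", "delinquencies", "inquiries"]
X = rng.normal(size=(1000, len(features)))
# Synthetic target: 1 means "deny", driven mostly by utilization.
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

applicant = rng.normal(size=(1, len(features)))
decision = model.predict(applicant)[0]
score = model.predict_proba(applicant)[0, 1]
print("denied" if decision == 1 else "approved", f"(denial score: {score:.2f})")
# That is all the model returns: a label and a score. There is no
# built-in record of which factors drove this particular decision.
```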

The CFPB explains the new requirements like this: “Even for adverse decisions made by complex algorithms, creditors must provide accurate and specific reasons. Generally, creditors cannot state the reasons for adverse actions by pointing to a broad bucket.

“For instance, if a creditor decides to lower the limit on a consumer’s credit line based on behavioral spending data, the explanation would likely need to provide more details about the specific negative behaviors that led to the reduction beyond a general reason like ‘purchasing history.’”

In other words, if a financial institution uses AI algorithms for credit evaluations, the AI’s logic must be as transparent as a human’s. That is clearly not happening today. 
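
To illustrate the distinction the CFPB is drawing, here is a hedged sketch; the feature names, contribution scores, and reason wording are invented for illustration, not CFPB-prescribed language. Given per-feature contribution scores for one denied applicant, a compliant notice names the specific negative drivers rather than a broad bucket.

```python
# Hypothetical sketch: turning per-feature contribution scores for one
# denied applicant into specific adverse-action reasons, instead of a
# broad bucket like "purchasing history". Names, scores, and reason
# templates are invented for illustration.
contributions = {
    "credit_utilization": -0.42,     # negative = pushed toward denial
    "recent_cash_advances": -0.31,
    "payment_history": +0.10,
    "account_age": +0.05,
}

reason_templates = {
    "credit_utilization": "Credit utilization above 80% of available limit",
    "recent_cash_advances": "Three or more cash advances in the last 60 days",
}

# Broad bucket (what the guidance says is NOT enough):
print("Reason: purchasing history")

# Specific reasons: the factors that actually drove the denial,
# most negative first.
negative = sorted((k for k in contributions if contributions[k] < 0),
                  key=lambda k: contributions[k])
for factor in negative:
    print("Reason:", reason_templates.get(factor, factor))
```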

Many researchers, regulators, and industry groups have raised the alarm over the lack of explainability in AI decision-making in the consumer credit industry. FinRegLab, a Washington, DC-based independent research organization, recently testified before the U.S. Senate on the issue, calling for true explainability to be a critical requirement for AI used in lending decisions.

At Howso, AI explainability is our mission and our DNA. We work with financial institutions such as Mastercard to help them deploy AI they can trust, audit, and explain.

In the consumer credit space, some lenders are attempting to bolt post hoc explanation tools such as SHAP onto their AI models to improve transparency, but these are simply band-aids: they approximate the model after the fact and lack the accuracy needed to truly explain each and every consumer credit decision. Howso enables the deployment of fully explainable AI models from the get-go.
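
For context, this is roughly what the post hoc approach looks like with the open-source shap library, reusing `model`, `applicant`, and `features` from the first sketch. Note that the attributions are computed after the fact: they estimate how a model that was never designed to explain itself behaved, which is where fidelity can suffer.

```python
# Post hoc explanation sketch: shap estimates, after training, how much
# each feature pushed the black-box model's output for one applicant.
# Reuses `model`, `applicant`, and `features` from the earlier sketch.
import shap

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(applicant)

# These attributions are an approximation layered on top of the model,
# not a record of the model's actual decision path.
for name, value in zip(features, shap_values[0]):
    print(f"{name}: {value:+.3f}")
```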

We look forward to helping many more financial institutions deploy explainable AI in the near future so they can meet and exceed not only CFPB regulations but also the myriad other reporting and compliance requirements that demand full AI transparency.

Want to find out more about AI you can trust? Access the Howso Engine Playground here.