AI tools present exhilarating opportunities for the subprime industry. Lenders and retailers are leveraging AI to great effect: streamlining operations, personalizing customer experiences, and making data-driven decisions. But ask many auto execs and they will point to one issue holding them back: fear of AI hallucinations.
AI hallucinations refer to the tendency of large language models to occasionally generate false outputs and present them confidently as fact. For an industry built on trust, accuracy, and compliance, such errors can be catastrophic. Companies want AI, but they also need accuracy.
Recent reports from the industry suggest that AI hallucinations are not going away any time soon. OpenAI's own research, for instance, found that its advanced reasoning models, o3 and o4-mini, hallucinated 33% and 48% of the time, respectively, when tested on its PersonQA benchmark. That is more than double the rate of older models, suggesting that as AI becomes more complex, the risk of subtle but impactful fabrications may actually increase.
The Risks of Hallucination
For subprime auto finance lenders, the consequences of AI hallucinations can be severe, impacting both the bottom line and, crucially, customer trust.
- Flawed Risk Assessments and Loan Approvals: Imagine an AI hallucinating an applicant’s income, employment history, or credit score, leading to the approval of a high-risk loan or, conversely, the rejection of a qualified applicant. This directly impacts portfolio health and can lead to significant financial losses.
- Compliance Breaches and Legal Exposure: The auto finance industry is heavily regulated, and an errant AI can expose a lender to serious compliance risk. Imagine the complications that would follow if an AI model extracted the wrong Truth in Lending Act (TILA) terms from a Retail Installment Sales Contract (RISC) and those terms were passed to the servicing system to service the loan.
- Erosion of Trust and Reputation: Trust is the bedrock of financial services. If customers discover that loan offers or financial advice from your AI-powered system rest on fabricated information, that trust is hard to win back.
Understanding AI Hallucinations
Why do hallucinations occur? AI models are prediction models, albeit on a grand scale: they use mathematical probabilities to produce responses to queries, and hallucinations reflect the inevitability that any prediction model will sometimes get an answer wrong. Generative AI hallucinations are particularly unsettling because they appear so real, often using language (e.g., citing facts and sources) in a manner that humans instinctively trust.
It is important, though, to put these hallucinations in context. The auto lending industry is already accustomed to working with prediction models that can produce inaccurate answers: credit scores, OCR models, fraud detection models. The key is to have a system for managing and minimizing those risks.
How to Protect Against Hallucinations
So, how does your company mitigate the risk of AI hallucinations? Here are some questions to ask when evaluating the AI solutions you use.
- How complex are the questions the LLM is answering? Look for ways to have your AI solutions focus on bite-sized questions. Domains in which LLMs are tasked with making more aggregated decisions or performing deeper analysis are the most worrisome from a hallucination perspective; as complexity and the number of assumptions grow, errors can compound.
- How complete and fit-to-purpose is the training data? Ask whether the model was trained on data relevant to the task at hand. A closer match between the training data and the questions being asked reduces the risk of hallucinations.
- Are there guardrails on the AI outputs? Model developers can establish internal checks to catch hallucinations before they propagate. Going back to the RISC example, the itemized amounts on a contract are supposed to add up, so rules can be set up to reject outputs that are inconsistent with those requirements (a minimal sketch of such a check appears after this list).
- Is accuracy independently verified? It is important to systematically monitor these models for bias and concept drift against human-verified samples, and to take remedial steps when accuracy strays from acceptable benchmarks (see the second sketch below).
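
For illustration, here is a minimal sketch of the guardrail idea in Python. The field names, dollar figures, and tolerance are hypothetical, not a standard RISC schema; the point is simply that TILA arithmetic is checkable, so an extraction that does not reconcile can be routed to human review instead of flowing straight to servicing.

```python
from decimal import Decimal

# Hypothetical fields an extraction model might return from a RISC;
# these names and amounts are illustrative, not a standard schema.
extracted = {
    "cash_price": Decimal("21500.00"),
    "doc_fee": Decimal("499.00"),
    "gap_insurance": Decimal("895.00"),
    "down_payment": Decimal("2000.00"),
    "amount_financed": Decimal("20894.00"),
    "finance_charge": Decimal("6230.00"),
    "total_of_payments": Decimal("27124.00"),
}

def validate_itemization(fields, tolerance=Decimal("0.01")):
    """Flag extractions whose contract arithmetic does not reconcile."""
    errors = []

    # The itemized amounts should reconcile to the amount financed.
    itemized = (fields["cash_price"] + fields["doc_fee"]
                + fields["gap_insurance"] - fields["down_payment"])
    if abs(itemized - fields["amount_financed"]) > tolerance:
        errors.append(f"Itemization sums to {itemized}, but amount "
                      f"financed is {fields['amount_financed']}")

    # Amount financed + finance charge should equal total of payments.
    expected_total = fields["amount_financed"] + fields["finance_charge"]
    if abs(expected_total - fields["total_of_payments"]) > tolerance:
        errors.append(f"Expected total of payments {expected_total}, "
                      f"model extracted {fields['total_of_payments']}")

    return errors  # Non-empty list => route the contract to human review

issues = validate_itemization(extracted)
if issues:
    print("Extraction failed guardrails:", *issues, sep="\n  ")
```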
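And here is a similarly hedged sketch of independent verification. It assumes you keep a rolling sample of human-verified records to score model output against; the accuracy floor and the simple trend rule are placeholders, and real thresholds would come from your own compliance and model-risk standards.

```python
ACCURACY_FLOOR = 0.98  # assumed acceptable benchmark, not an industry standard

def field_accuracy(model_outputs, verified_outputs):
    """Fraction of fields where the model matches the human-verified value."""
    matches = total = 0
    for model_rec, truth_rec in zip(model_outputs, verified_outputs):
        for field, truth_value in truth_rec.items():
            total += 1
            if model_rec.get(field) == truth_value:
                matches += 1
    return matches / total if total else 0.0

def check_for_drift(history, window=3):
    """Alert when a run dips below the floor or accuracy trends downward."""
    alerts = []
    latest = history[-1]
    if latest < ACCURACY_FLOOR:
        alerts.append(f"Accuracy {latest:.3f} below floor {ACCURACY_FLOOR}")
    recent = history[-window:]
    if len(recent) == window and all(a > b for a, b in zip(recent, recent[1:])):
        alerts.append(f"Accuracy declined {window} runs in a row: {recent}")
    return alerts

# Example: weekly scores, as field_accuracy would produce them from
# the human-verified sample.
weekly_scores = [0.991, 0.987, 0.981, 0.974]
for alert in check_for_drift(weekly_scores):
    print("ALERT:", alert)
```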
As AI becomes an increasingly integral part of the automotive finance, subprime, and retail ecosystem, the challenge of hallucinations looms large. It is, however, a challenge that can be effectively managed. By applying AI to fit-to-purpose tasks, training on controlled data sets, and maintaining appropriate internal controls, subprime lenders can shield their businesses and customers from the perils of AI-generated falsehoods.