Developments in artificial intelligence (AI) have accelerated rapidly over the past decade, and the topic now features prominently in mainstream media and corporate messaging. Many organizations promote proprietary “AI-driven” capabilities as a marker of sophistication, often without clearly defining what those capabilities entail. As a result, perceptions of AI have become polarized: some view it as little more than marketing hype, while others regard it as a transformative solution to nearly every business challenge.
In the consumer lending space, skeptics are often well-versed in traditional quantitative methods and correctly recognize that many AI techniques represent extensions or refinements of existing analytical approaches. While this perspective is grounded in experience, it can lead to missed opportunities if new methods are dismissed outright. Conversely, enthusiastic adopters may embrace AI uncritically—motivated by competitive pressure, fear of falling behind, or the desire to signal sophistication to capital providers. This latter approach carries greater risk, as poorly designed or insufficiently understood models can introduce instability and unintended volatility into lending decisions.
The truth, as is often the case, lies somewhere in between. Sophistication alone is not a marker of wisdom; one can own a very sophisticated watch that still fails to tell time. Every modeling approach carries assumptions—and when those assumptions are violated, results degrade, sometimes materially. For this reason, it is important to understand the broad categories of AI and what lenders should consider before jumping into the deep end.
General Classes of AI
To be clear, significant advances have been made across the broad field of artificial intelligence—but not all in the same way or for the same purpose. Understanding what AI can realistically do for an organization requires clarity around which type of AI is being discussed. “AI” is not a single tool or technique, but an umbrella term covering several distinct categories. The following provides a high-level overview of the most common subcategories:
- Predictive AI: Historical data is used to train models that estimate the likelihood of future outcomes. This is the category most familiar to consumer lenders and underpins applications such as credit scoring, fraud detection, loss forecasting, and collections prioritization.
- Prescriptive AI: These systems address the question, “What action should I take?” Prescriptive AI builds on predictive outputs by recommending or optimizing decisions. Common examples include inventory management, dynamic pricing, and next-best-action frameworks that respond to real-time inputs.
- Generative AI: Generative models create new content based on patterns learned from large volumes of reference data. Chatbots, text generation, image creation, audio synthesis, and video generation all fall into this category.
- Natural Language AI: These models focus on understanding and structuring human language rather than generating it. Applications include intent detection, sentiment analysis, document classification, entity extraction, and speech-to-text. While closely related to generative language models, the emphasis here is interpretation rather than creation.
- Autonomous AI: Autonomous systems are designed to execute tasks with limited human intervention, often combining multiple models with rules, memory, and external tools. Examples include AI agents and research bots that can plan, retrieve information, and perform multi-step workflows on behalf of a human user.
There are emerging categories as well as hybrid systems that combine several of these approaches. For the purposes of this article, the focus will remain on predictive AI, as it is the most widely deployed—and most consequential—category in consumer lending.
A Closer Look at Predictive AI
Many people do not realize that machine learning as a discipline dates back to the 1950s. Likewise, decision trees, neural networks, and many of the tools that now comprise the predictive AI toolbox have existed for decades. The most meaningful advances in this area have not come from entirely new modeling forms, but rather from dramatic increases in computing power and the availability of massive data sets that allow these models to be estimated—and re-estimated—at scale.
From a practical standpoint, most predictive modeling techniques used in consumer lending fall into one of three broad categories:
- Linear / Parametric Extensions: This category includes linear and logistic regression, ridge and LASSO regression, generalized linear models, elastic net, and neural networks. Conceptually, these models estimate outcomes as weighted combinations of inputs. A simplified example: default risk as a weighted combination of repossessions, credit inquiries, and months in the credit file. Some may object to grouping neural networks with linear models. That objection is technically correct, but the distinction is often overstated in practice. Like linear models, neural networks rely on weighted inputs, even if the functional mapping between inputs and outcomes is nonlinear.
- Multivariate / Distance-Based Methods: These techniques include clustering, factor analysis, and k-nearest neighbor methods, which group observations based on statistical distance in a multidimensional space. In consumer lending, these approaches are more commonly used to inform variable selection or segmentation, such as separating thin-file and thick-file borrowers, rather than serving as standalone credit decision models.
- Tree- or Rule-Based Extensions: These methods create recursive partitions of the data and are most often associated with modern machine learning. Examples include classification and regression trees, random forests, gradient boosting, and other rule-based models. Consider a default model that repeatedly splits observations into groups with increasing or decreasing risk—for example, first by months in file, then by time since most recent delinquency, and then by number of tradelines. The result is a set of terminal nodes, each representing a unique borrower profile with an associated probability of default based on observed performance. While all modeling approaches are susceptible to overfitting, tree-based methods are particularly prone to it. Because of their many implicit degrees of freedom, they also tend to require substantially more data than many lenders realistically possess.
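The “weighted combination of inputs” idea behind the linear/parametric family can be sketched in a few lines. All weights and inputs below are hypothetical, invented purely for illustration; no real scorecard uses these values.

```python
import math

# Toy logistic-style score: default risk as a logistic function of a
# weighted sum of inputs. Every weight here is a made-up illustration.

def default_risk(repossessions, inquiries, months_in_file):
    """Return a probability of default from a weighted combination of inputs."""
    score = (
        -2.0                      # intercept (baseline risk)
        + 1.1 * repossessions     # prior repossessions raise risk
        + 0.3 * inquiries         # recent credit inquiries raise risk
        - 0.01 * months_in_file   # a deeper credit file lowers risk
    )
    return 1.0 / (1.0 + math.exp(-score))

# Risk rises with derogatory events and falls with file depth:
print(round(default_risk(0, 1, 120), 3))   # clean, seasoned file
print(round(default_risk(2, 5, 12), 3))    # derogatory, thin file
```

Ridge, LASSO, and elastic net modify how the weights are estimated; a neural network chains many such weighted sums together, but the weighted-input structure is the same.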
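The tree-based example above—splitting first by months in file, then by delinquency recency, then by tradeline count—can be sketched as a hand-built set of rules. The thresholds and terminal-node default rates here are hypothetical, chosen only to show the mechanics of recursive partitioning.

```python
# Toy tree-style scorecard: recursive splits end in terminal nodes, each
# carrying a default rate that would, in practice, come from observed
# performance. All thresholds and rates below are invented.

def default_probability(months_in_file, months_since_delinquency, tradelines):
    """Walk a hand-built tree and return the terminal node's default rate."""
    if months_in_file < 24:                  # first split: thin file
        if months_since_delinquency < 12:    # recent delinquency
            return 0.28
        return 0.15
    # thicker file: split on delinquency recency, then tradeline count
    if months_since_delinquency < 12:
        return 0.12
    if tradelines < 3:
        return 0.07
    return 0.03

# Each borrower profile lands in exactly one terminal node:
print(default_probability(12, 6, 2))    # thin file, recent delinquency
print(default_probability(60, 36, 5))   # seasoned, clean, many tradelines
```

A fitted tree learns these thresholds from data rather than taking them by hand, which is precisely where the implicit degrees of freedom—and the overfitting risk—come from.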
Every method in the categories above comes with assumptions, and when those assumptions are violated, results suffer. Ordinary least squares (OLS) regression, for example, relies on linear relationships and well-behaved error terms, and it becomes increasingly fragile in the presence of noisy, correlated, real-world data—which is to say, almost all credit data. As these limitations accumulate, predictive performance and model stability inevitably deteriorate.
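The fragility of OLS under correlated inputs can be demonstrated directly. The sketch below uses synthetic data: two nearly collinear predictors produce individual coefficients that swing wildly from one noise draw to the next, even though their combined effect stays near the true value of 2.0.

```python
import numpy as np

# Demonstration of OLS instability under near-collinearity.
# x2 is almost a copy of x1, so the design matrix is ill-conditioned:
# individual slope estimates vary sharply across noise draws, while
# the sum of the two slopes (the stable direction) stays near 2.0.

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)     # nearly collinear with x1
X = np.column_stack([np.ones(n), x1, x2])

coefs = []
for _ in range(5):
    y = 1.0 + 2.0 * x1 + rng.normal(scale=1.0, size=n)  # true model uses x1 only
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    coefs.append(beta)

coefs = np.array(coefs)
print("std dev of individual slopes:", coefs[:, 1:].std(axis=0))
print("sum of slopes per draw:", (coefs[:, 1] + coefs[:, 2]).round(2))
```

Credit attributes—utilization, inquiries, delinquency counts—are routinely correlated in exactly this way, which is why raw OLS coefficients on credit data are rarely stable or interpretable without further treatment.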
How Should Lenders View AI?
Embracing predictive AI within a lending organization requires educated and discerning executive leadership. Too often, I encounter lenders where quantitative expertise is concentrated in a single individual—typically the lead data scientist within the risk management function. This creates several structural risks. Modelers frequently perceive their value as being tied to how intellectually impressive their work appears to others. As a result, there is a natural incentive to favor increasingly complex techniques that few outside the modeling team fully understand. In some cases, this complexity masks aggressive overfitting designed to inflate apparent predictive performance. The outcome is predictable: models that perform well in development but fail badly when confronted with real-world results.
Compounding this problem, vendors routinely exaggerate the sophistication and impact of their AI capabilities. In recent years, I have seen providers claim they can double origination volume while cutting losses in half or assert that traditional methods are no longer relevant. Perhaps the most extreme example of equating sophistication with credibility comes from the recently imploded TriColor. In June of 2025, the company issued a press release promoting its most recent asset-backed securitization, attributing superior performance to AI models allegedly leveraging “60 million attributes”—a claim that strains credulity. Just months later, the company ceased operations amid revelations of extensive malfeasance, fabricated performance metrics, and triple-pledged receivables, in what may ultimately prove to be one of the most significant frauds in the history of auto finance.
The discerning executive—who is the primary customer of both vendors and internal risk teams—should consider the following:
- Logical Fallacies: Support for AI is frequently rooted in faulty logic. A common example is the appeal to novelty, which assumes that newer methods are inherently superior. Another is the appeal to authority, which suggests that a technique must be sound because respected or well-known organizations use it. In practice, a method should be judged not by how impressive it appears or who endorses it, but by how it performs on validation data—data withheld from the model development process—and how that performance compares to credible alternatives.
- Model Limitations: Every modeling approach has limitations. A capable executive investigates these limitations and expects clear, candid explanations from vendors or internal analysts. Competent modelers understand that each technique has appropriate and inappropriate use cases. They view the model taxonomy as a toolbox and can explain which tool is most suitable for the problem at hand—or, more realistically, which approach violates the fewest assumptions given the constraints of the available data. If a vendor or internal analyst cannot articulate how a model can fail, you should look elsewhere for guidance.
- Competing Model Approaches: Whether one is skeptical or enthusiastic about AI, a competing-model framework remains the safest way to balance innovation with discipline. Modern analytics tools—many of them free, such as R—allow competent analysts to test multiple modeling approaches on the same data. These comparisons must be performed on data withheld from model training. The objective is not maximal complexity, but the simplest model that delivers meaningful incremental predictive power. In short, executives should ask: How much predictive gain am I achieving, and at what cost in volatility and stability?
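A minimal sketch of the competing-model discipline, on synthetic data: a 1-nearest-neighbor model, which effectively memorizes the training set, is pitted against the simplest possible threshold rule, and both are scored on data withheld from training. The models and data are invented for illustration; the point is the evaluation framework, not the models.

```python
import numpy as np

# Competing-model comparison on withheld data. The memorizing model
# looks perfect in development but loses to a simple rule out of sample.

rng = np.random.default_rng(42)

def make_data(n):
    score = rng.normal(size=n)     # one informative input
    noise = rng.normal(size=n)     # one irrelevant input
    default = (score + rng.normal(size=n) > 0).astype(int)
    return np.column_stack([score, noise]), default

X_train, y_train = make_data(300)
X_hold, y_hold = make_data(300)    # withheld from all model fitting

def knn_predict(X):
    """Memorize training rows: copy the label of the nearest one."""
    d = ((X[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    return y_train[d.argmin(axis=1)]

def threshold_predict(X):
    """Simplest competitor: flag default when the score exceeds zero."""
    return (X[:, 0] > 0).astype(int)

for name, model in [("1-NN", knn_predict), ("threshold", threshold_predict)]:
    train_acc = (model(X_train) == y_train).mean()
    hold_acc = (model(X_hold) == y_hold).mean()
    print(f"{name}: train accuracy {train_acc:.2f}, holdout accuracy {hold_acc:.2f}")
```

The 1-NN model scores perfectly on its own training data and then gives most of that advantage back on the holdout set—exactly the development-versus-production gap described above.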
- Data Bias and Stability Risk: One of the most significant risks in machine learning is inductive bias—the assumption that historical data is representative of future conditions. Recent history provides a clear example: many lenders experienced unusually strong performance during COVID-era stimulus, followed by sharply rising losses as inflation accelerated from 2021 through 2023. A model calibrated to yesterday’s environment offers no guarantee of success in tomorrow’s. This risk is particularly acute for self-updating or continuously retrained models, which can amplify instability when economic conditions shift.
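The mechanism is simple enough to show in a few lines. The default rates below are invented solely to illustrate the stimulus-then-inflation pattern; they are not drawn from any actual portfolio.

```python
import numpy as np

# Sketch of inductive bias: a loss rate calibrated on a benign vintage
# badly understates losses once conditions shift. Rates are hypothetical.

rng = np.random.default_rng(7)
benign_defaults = rng.binomial(1, 0.03, size=5000)    # stimulus-era vintage
stressed_defaults = rng.binomial(1, 0.09, size=5000)  # inflation-era vintage

calibrated_rate = benign_defaults.mean()   # what the model "learned"
realized_rate = stressed_defaults.mean()   # what the portfolio delivered

print(f"calibrated {calibrated_rate:.3f} vs realized {realized_rate:.3f}")
print(f"losses understated by roughly {realized_rate / calibrated_rate:.1f}x")
```

A continuously retrained model does not escape this problem; it simply chases the most recent regime, which is why regime shifts can make such models less stable rather than more.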
- Third-Party Validation: Neither vendors nor internal analysts are naturally inclined to highlight their own blind spots. Independent third-party review provides a critical safeguard. High-quality modelers do not fear scrutiny; those who rely on complexity to project competence will be exposed by it. External validation provides executives, particularly those without deep quantitative backgrounds, with an essential defense against overconfidence and marketing-driven analytics.
My final observation on this topic is simple: executive decision-makers must invest time in understanding the methods, benefits, and limitations of AI techniques relevant to their business. These tools should neither confer automatic credibility nor receive a free pass because they are labeled “AI.” Internal teams and external vendors alike must be held to a high standard, or lenders risk not only poor performance, but reputational damage as well. The old adage that an ounce of prevention is worth a pound of cure applies here. In the context of AI, an ounce of discernment and education is worth just as much.