
Current AI Models Aren’t Good Enough

AI Models Need to Grow Up

We sat down with Scott Zoldi, Chief Analytics Officer at FICO, to discuss responsible AI and why current AI models aren’t ‘good’ enough.

Algorithm X: With COVID-19 on every business’s agenda now, how should companies be adapting or changing their analytic models?

Scott: In general, companies should not rush to change their models, but rather understand them. How were the models built? What are the models most sensitive to, and how would they respond to macroeconomic changes in the environment they operate in?

Does the company have sufficient model development governance in place to answer these questions?

Organizations that have a corporate model governance process can answer specific questions with confidence, using a two-pronged process:

  • First, by examining the artifacts produced when the models were built;
  • Second, by contrasting the model’s development data and prediction drivers with the current operating environment.

Companies with proper governance can also field questions as to whether the model is still non-biased and non-discriminatory in the face of changing customer behaviors. If an organization doesn’t have that capacity, its number one priority should be to develop and enforce the analytic model governance standard at a corporate level.

Efficient AI development is about anticipating and producing assets and analyses that may be needed in future production environments. Without Efficient AI, producing these assets in a time of crisis becomes a laborious model-testing effort.

The same goes for correctly adjusting strategies in real time, based on how the model is responding and the details of how it was built. Anticipating and producing these assets as part of model development often surfaces flaws and enhancements to the model upfront, rather than learning them the hard way when the model misbehaves in production.

If a company doesn’t have this model governance standard and associated Efficient AI assets today, it needs to examine if the model in question was built robustly, and ask which analyses need to be done on the fly.

For robustly built models, it is still often better to adjust strategy than to try to build a new model based on the erratic, inherently non-stationary behaviors that are hallmarks of the pandemic.

Algorithm X: How will COVID-19 and responsible AI change the credit scoring industry?

Scott: Those who haven’t been developing models with Responsible AI will regret that they haven’t and will start looking carefully at doing so.

Generally, data scientists will need to understand where there may be failings in their current model development processes. Going forward, they will focus more on the robustness and simulation work that they need to do as they build these models.

In other words, it’s not sufficient to say, “We’ve built a quality model.” Data scientists need to understand how the model is going to respond to all kinds of possible changes and scenarios in the operating environment post-COVID—for example, how will model performance be impacted if there’s another large wave of job losses?

What does the recovery look like, and at what rate? How could the model move from unbiased at the time of development to biased in production?
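To make those questions concrete, here is a minimal sketch of the kind of scenario check a data science team might run; it assumes an sklearn-style `model`, a scored portfolio with illustrative feature names, and a hypothetical job-loss shock, and it is not FICO’s methodology.

```python
# Minimal sketch (not FICO's methodology): stress-test a scored portfolio
# under a hypothetical job-loss scenario and check whether group-level
# approval rates drift apart. The model, feature names, and shock sizes
# are illustrative assumptions.
import numpy as np
import pandas as pd

def stress_test(model, portfolio: pd.DataFrame, group_col: str,
                threshold: float, income_shock: float = 0.20) -> pd.DataFrame:
    """Compare approval rates by group before and after a simulated shock."""
    shocked = portfolio.copy()
    # Assume a job-loss wave cuts reported income and raises utilization.
    shocked["income"] *= (1.0 - income_shock)
    shocked["utilization"] = np.clip(shocked["utilization"] * 1.3, 0, 1)

    report = {}
    for label, frame in [("baseline", portfolio), ("shocked", shocked)]:
        scores = model.predict_proba(frame.drop(columns=[group_col]))[:, 1]
        approved = scores >= threshold          # assumes higher score = approve
        report[label] = (frame.assign(approved=approved)
                              .groupby(group_col)["approved"].mean())
    return pd.DataFrame(report)  # approval rate per group, per scenario
```

A widening gap between groups in the "shocked" column would be a signal that a model unbiased at development time could become biased in production.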

Those companies that have built models using Responsible AI are already ahead of the game. They’re anticipating what the flow-throughs of payment waivers and delinquencies look like, and adjusting strategies of how to use these scores.

Robustly built models continue to perform, and the insights from a model development governance system—ideally, one based on immutable blockchain technology—allow accurate, confident strategies to be formed.

The bottom line here: If you’ve built an analytic or AI model robustly and responsibly, it will continue to rank-order properly. Typically, only the score thresholds in your strategies across customer segments will need to be adjusted.

Algorithm X: How do we eliminate AI bias?

Scott: Neural networks can find complex nonlinear relationships in data, leading to strong predictive power, which is a key component of AI. Neural networks can also learn bias from data.

That’s because while the mathematical equations of machine learning algorithms are often straightforward, deriving a human-understandable interpretation is often difficult. However, this is essential, given the bias intrinsic in data on which these models are trained.

The result is that even ML models with strong business value may be inexplicable and potentially biased—qualities incompatible with regulated industries—and thus not suitable for deployment into production in a responsible way.

To overcome this challenge, companies can use a machine learning technique called Interpretable Latent Features. This leads to an explainable neural network architecture, the behavior of which can be easily understood by human analysts.

Notably, model explainability should be the primary goal toward eliminating bias, even if that means deprioritizing predictive power. Here, each of the latent features learned by the neural network can be tested for bias; you test the model drivers for bias, not just the final score.
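As an editorial illustration of testing the model drivers rather than only the final score, the sketch below assumes you can extract the hidden-layer (latent-feature) activations of a trained network along with a binary protected attribute; it illustrates the general idea of per-feature bias testing, not FICO’s Interpretable Latent Features technique.

```python
# Minimal sketch (illustrative only): given the hidden-layer activations of a
# trained network and a protected attribute, test each latent feature's
# activation distribution across groups and flag candidates for bias.
import numpy as np
from scipy.stats import ks_2samp

def flag_biased_latents(activations: np.ndarray, protected: np.ndarray,
                        alpha: float = 0.01):
    """activations: (n_samples, n_latent); protected: binary group labels."""
    flagged = []
    group_a = activations[protected == 0]
    group_b = activations[protected == 1]
    for j in range(activations.shape[1]):
        stat, p_value = ks_2samp(group_a[:, j], group_b[:, j])
        if p_value < alpha:          # distributions differ across groups
            flagged.append((j, stat, p_value))
    return sorted(flagged, key=lambda t: -t[1])  # worst offenders first
```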

Algorithm X: How should a CEO respond to responsible AI in post-pandemic scenarios?

Scott: CEOs need to understand how their analytic organizations are structured around a standard of Responsible AI. No CEO can operate under the assumption that different analytic teams make the same judgments about models, share the same standards, use the same bias-testing algorithms, or adhere to the same guidelines. In real life, these are often left to the artistry of the individual data scientist.

A CEO needs to examine the current governance around model development and endorse and enforce a single standard of Responsible AI. This includes ongoing, periodic re-evaluation and auditing for compliance with the model development standard.

Even during COVID, if the organization was doing all the right things, the CEO should be asking her data science team whether, for example, the models remain ethical, and why. That can’t be readily answered unless there is a very strong framework of Responsible AI, and Efficient AI recording the artifacts of the Responsible AI development approach.

Algorithm X: Are current AI and machine learning models ‘good’ enough?

Scott: The short answer: No. It’s time for AI to grow up. These are models that can have a tremendous impact on the quality of someone’s life. They shouldn’t be academic research projects where data scientists try to impress each other.

Analytic models need to be explainable first, predictive second, and built in a Responsible AI framework. If not, data scientists can operate as cowboys, for example becoming overly reliant on stochastic gradient-boosted tree structures, or using deep learning because it’s cool rather than using it responsibly.

Overly complicated models obscure explanation and are less robust to changes in production which, as a tragic byproduct, can harm the people they score.

AI and machine learning models need to be viewed through the lens of the tremendous impact they can have on individuals’ lives. The seriousness of this responsibility requires Responsible AI standards and model development governance audit trails to ensure that, while looking for benefits of AI, we don’t unintentionally inflict harm or discrimination.

Let me emphasize that these model development governance standards need to be applied when these models are being developed, not as attempted quality assurance when inspecting the already-built model.

Algorithm X: Are current models resilient enough?

Scott: Again, that depends on whether they’re built properly—there is an enormous variety of models in the industry today. Resilience is not just about the model itself.

For example, deep learning models or stochastic gradient-boosted trees are significantly less resilient to changes, even subtle ones; resilient models need to be transparent and explainable to operate safely in the real world.

Beyond the model itself, I have been referencing the concepts of Explainable AI, Ethical AI, and Efficient AI. For example, I could build the most ethical AI model and explain to the CEO how I built it. The model might be resilient, but if the CEO is doing her job, she won’t use it unless there’s Efficient AI on the backend.

Efficient AI is what provides a demonstrable audit trail of the proof of work and adherence to the company’s Responsible AI standards. She won’t just take my word for it.

In other words, an analytic model must be accompanied by proof.
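A hash-chained log is one simple way to picture such proof. The sketch below is an editorial illustration of recording model development artifacts in an append-only, tamper-evident ledger; it is a toy, not FICO’s blockchain-based governance system.

```python
# Minimal sketch of an immutable, hash-chained audit trail for model
# development artifacts (an illustration of the idea, not FICO's system).
import hashlib
import json
import time

class AuditTrail:
    def __init__(self):
        self.entries = []

    def record(self, artifact: dict) -> str:
        """Append an artifact; each entry's hash covers the previous entry."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = {"timestamp": time.time(), "artifact": artifact, "prev": prev_hash}
        digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
        self.entries.append({**payload, "hash": digest})
        return digest

    def verify(self) -> bool:
        """Recompute the chain; any tampering breaks the hashes."""
        prev = "0" * 64
        for e in self.entries:
            payload = {"timestamp": e["timestamp"], "artifact": e["artifact"], "prev": prev}
            expected = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

# Example: log a bias-test artifact produced during model development.
trail = AuditTrail()
trail.record({"model": "risk_v2", "step": "bias_test", "result": "passed"})
assert trail.verify()
```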

Algorithm X: Can you please explain the 3 Es of AI?

Scott: Sure. As I described above, Explainable AI hinges on exposing and explaining a model’s latent features, which are hidden relationships between data inputs that unexpectedly drive model performance, as derived from the model’s training data.

Interpretable Latent Features lead to an explainable neural network architecture; when the behavior of the typically hidden latent features is exposed, they are easily understood by human analysts.

As for Ethical AI, machine learning learns relationships between data to fit a particular objective function or goal. It will often form proxies for avoided inputs, and these proxies can show bias.

From a data scientist’s point of view, Ethical AI is achieved by taking precautions to expose what the underlying machine learning model has learned, and testing to see if those learned hidden features could impute bias.

These proxies can be activated more by one data class than another, resulting in the model producing biased results. For example, if a model includes the brand and version of an individual’s mobile phone, that data can be related to the ability to afford an expensive cell phone—a characteristic that can impute income and, in turn, bias.

A rigorous development process, coupled with visibility into latent features, helps ensure that AI models function ethically. Latent features should continually be checked for bias in changing environments.
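One simple way to screen for such proxies is to ask how well a candidate input predicts the protected attribute it is suspected of standing in for. The sketch below is an editorial illustration using scikit-learn; the feature, labels, and threshold interpretation are assumptions, not a sanctioned FICO procedure.

```python
# Minimal sketch (illustrative only): measure how well a candidate input,
# e.g. a phone price tier, predicts a protected attribute. If it predicts
# the attribute well out of sample, it is likely acting as a proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def proxy_score(candidate_feature: np.ndarray, protected: np.ndarray) -> float:
    """Out-of-sample AUC of predicting `protected` from the candidate feature.
    Values near 0.5 suggest no proxy; values near 1.0 suggest a strong proxy."""
    X = candidate_feature.reshape(-1, 1)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, protected, test_size=0.3, random_state=0, stratify=protected)
    clf = LogisticRegression().fit(X_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```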

Efficient AI doesn’t refer to building a model quickly; it means building it right the first time. To be truly efficient, models must be designed and developed from inception to run within an operational environment, one that will change.

The model development process in Efficient AI produces assets during the model build process, allowing on-demand analysis of how the model may respond to rapidly changing conditions later when it’s in production.

Machine learning models are complicated. As I’ve alluded to already, achieving Efficient AI means that models must be built according to a company-wide model development standard.

This includes shared code repositories, approved model architectures, sanctioned variables, and established bias testing and stability standards for models.

Strictly adhering to a detailed model development standard dramatically reduces errors in model development. Ultimately, these errors would get exposed in production, cutting into anticipated business value and negatively impacting customers.
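As an editorial illustration, such a standard can itself be encoded as a machine-checkable artifact; the architectures, variable names, and thresholds below are hypothetical, not FICO’s.

```python
# Minimal sketch of encoding a corporate model development standard as a
# machine-checkable artifact (illustrative names and thresholds).
from dataclasses import dataclass, field

@dataclass
class ModelStandard:
    approved_architectures: set = field(default_factory=lambda: {
        "interpretable_neural_net", "scorecard"})
    sanctioned_variables: set = field(default_factory=lambda: {
        "utilization", "delinquency_count", "tenure_months"})
    max_bias_gap: float = 0.02   # max allowed approval-rate gap between groups
    max_psi: float = 0.10        # population stability index alert level

    def check(self, architecture: str, variables: set,
              bias_gap: float, psi: float) -> list:
        """Return a list of violations; an empty list means the model complies."""
        issues = []
        if architecture not in self.approved_architectures:
            issues.append(f"architecture '{architecture}' not approved")
        for v in variables - self.sanctioned_variables:
            issues.append(f"variable '{v}' not sanctioned")
        if bias_gap > self.max_bias_gap:
            issues.append(f"bias gap {bias_gap:.3f} exceeds {self.max_bias_gap}")
        if psi > self.max_psi:
            issues.append(f"PSI {psi:.2f} exceeds stability alert level")
        return issues
```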

Algorithm X: Is responsible AI getting enough attention at C-level?

Scott: No—most Boards of Directors and CEOs don’t have a deep enough understanding of analytics and its impact on the business, especially the harm that AI can do. There is enthusiasm at the Board level for AI, but a distinct gap between attention paid to AI hype versus Responsible AI. A focus on Responsible AI is required to get meaningful, safe, and sustainable business results.

On the flip side, most data scientists aren’t thinking of model development governance, either. Analytics teams need to be led in a way that embraces a single corporate model governance standard, and its audits, and integrates them into the way they develop models.

Algorithm X: Who should be responsible for responsible AI at a company?

Scott: The chief analytics officer is in the position to meld model governance and model development, so it’s the CAO’s responsibility to set Responsible AI standards across the company. And if it’s not the CAO, it should be someone else at the C-level because, ultimately, Responsible AI is the CEO’s responsibility.

Algorithm X: AI is still evolving; is it still too early to define responsible AI?

Scott: The components of Responsible AI have existed for some time now—the challenge is in promulgating widespread acceptance and use. The good news is, we have a clear path, and the pandemic has drawn much attention to the criticality of that journey. We should see AI grow up and, with that, model development governance standards put into place that allow the benefits to be realized safely.