
AI Model Development isn’t the End; it’s the Beginning

AI model development isn’t the end; it’s the beginning.

Like children, successful models need continuous nurturing and monitoring throughout their lifecycle.

Parenting is exhilarating and, if we’re being honest, a lot of hard work. From conception through birth, simply “preparing” for a new life can be all-consuming. Yet, as most parents soon realize, that’s just a teaser.

The real work lies ahead—the constant monitoring, nurturing, and steering them down a steady path to becoming mature, productive adults. In my mind, responsible AI model development mirrors this lifecycle, or at least it should.

Given the intense complexities of model development, it’s tempting to think “my work here is done” and push away from the table after a model build. Whether intentional or not, it happens.

According to a 2020 Corinium report, 67 percent of AI leaders admit to not monitoring their models to ensure continued accuracy and prevent drift, let alone evaluating bias that develops as market conditions change.

The truth is, building the model is just the start. Once the model is ‘live’ it can add increased value to the organization or, in some cases, cause considerable harm. Just as AI model development is methodically planned, continuous monitoring strategies should also be anticipated.

Assets should be gathered for that journey during the early stages of model development—it shouldn’t be an afterthought. Further, validating and revalidating models to ensure expected performance and discourage drifts to discrimination as recommended by the Federal Trade Commission is critical to responsible AI.

In short, even more important than model development are model deployment, monitoring, testing, evaluation, and more monitoring. Much like raising kids, there is much work to be done after birth—a lot of work.

Now, let’s step back and examine the lifecycle of a single, high-performing AI model within the context of human development to identify opportunities for continuous improvement and ongoing model optimization.

The first step of AI model development is “conception,” or idea initiation. Preparing for a new model project is an exciting time, full of planning and possibilities. Careful evaluation of everything from data sets and machine learning tools to algorithm selection takes place at this point.

Performance objectives, historical data, and best features and algorithms are also defined to form the machine learning model.

This time is also essential for recording decisions made, bias tests, variable sensitivity simulations, and thresholds of model degradation. Strategies around humble AI, specifying when the model should not be used at all in a changed data environment, should also be recorded now.

Everything should be done within the context of a unified, enterprise-standard approach to developing the machine learning model.

The resulting model development governance asset should be immutable and often persisted in the blockchain.
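As a minimal illustration of what makes such an asset tamper-evident (the record fields and hashing scheme below are my own assumptions for the sketch, not a description of any specific governance platform), the governance record can be canonically serialized and fingerprinted, with the digest persisted immutably:

```python
import hashlib
import json

# Hypothetical governance record captured at development time.
governance_record = {
    "model_id": "fraud-model-v1",
    "training_data_snapshot": "2020-06-01",
    "features": ["txn_amount", "merchant_category", "velocity_7d"],
    "bias_tests": {"four_fifths_rule": "passed"},
    "degradation_threshold": {"psi": 0.25},
}

# Canonical serialization (sorted keys) + SHA-256 gives a stable
# fingerprint; a blockchain entry would persist this digest so any
# later change to the record is detectable.
digest = hashlib.sha256(
    json.dumps(governance_record, sort_keys=True).encode()
).hexdigest()
print(digest[:16])  # short fingerprint of the governance asset
```

Any edit to a single field produces a different digest, which is what makes the persisted asset effectively immutable.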

1. Birth: it’s “go live” time. Congratulations! After weeks (or, more likely, months) of sleepless nights and hard work wrangling the data, testing, training, auditing, and documenting, your new model is live!

You should feel proud because it’s a big deal. Roughly 43 percent of AI projects aligned with a legitimate business need have yet to be fully deployed, according to a recent survey of data and analytics leaders.

Next, come immediate operational changes and constant monitoring of the model and the business environment. Much like bringing a new baby home from the hospital, as soon as your model goes live, things change at home—and a great deal.

Is the operational data the same as when the model was developed? How is it different?

Does it align with the model governance blockchain’s acceptable data parameters?

Is the score distribution reasonable?

Are the reason codes from the explainable AI algorithm being produced adequately?

What are the error codes?

How do I address immediate data issues and adjust operations to this change of life, our new model?
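Several of the questions above can be automated as routine checks. One common drift check compares the live score distribution against the development distribution using a population stability index (PSI); the bin count, synthetic scores, and thresholds below are illustrative assumptions, not prescribed values:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between two score samples.

    A common reading: PSI < 0.1 is stable, 0.1-0.25 is a moderate
    shift, and > 0.25 is a significant shift worth investigating.
    """
    # Bin edges come from the development (expected) distribution.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # capture out-of-range live scores
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    # Floor the fractions to avoid log(0) on empty bins.
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
dev_scores = rng.normal(600, 50, 10_000)   # scores at development time
live_scores = rng.normal(600, 50, 10_000)  # stable production scores
shifted = rng.normal(630, 60, 10_000)      # drifted production scores

print(psi(dev_scores, live_scores))  # small value: distribution stable
print(psi(dev_scores, shifted))      # larger value: investigate drift
```

The same pattern extends to individual input features, so a shift in the operational data is caught before it degrades scores.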

2. Childhood: model performance takes center stage. Ahh, the glory days. High hopes and expectations accompany the young model, as it performs well with a little guidance. The data is typically under control, score distributions have settled, and reason codes are adequately generated.

Next, you try to understand if it’s providing the return on investment expected. Is it delivering the value envisioned and attached to the model governance blockchain?

The longer it’s utilized, the more valuable strategies you learn. You better understand how it responds and where it makes mistakes. Slight adjustments may be required as you “teach it right from wrong” or adjust operations in areas where it’s still learning.

3. Adolescence: avoiding bias, prejudice, and bad decisions. Like a know-it-all teenager, the model, and those who use it, gain confidence; eventually it begins over-extending, making mistakes, and taking on too much risk.

It’s easy to relax the reins at this point; however, remain vigilant and always watch for hidden bugs, bias, the impacts of innovations, and sometimes radical reactions to changes in data. Here the model governance blockchain is of paramount importance.

Those who built the model may no longer work in the organization, and which features can carry bias may not be understood unless they were persisted to the blockchain for testing. At this stage, challenge the model to ensure that the features and model do not show bias.

This will be a difficult time: the model will be performing, but make sure it’s doing so ethically, and understand any bias. Much like an adolescent falling in with the wrong crowd, drift toward bias occurs gradually when we let our guard down.
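One common way to challenge a model for bias, sketched here under illustrative assumptions (the group labels, synthetic decisions, and the 80 percent threshold of the well-known "four-fifths rule" are all stand-ins, not a specific regulatory test), is to compare favorable-outcome rates across groups:

```python
from collections import defaultdict

def adverse_impact_ratios(decisions):
    """decisions: iterable of (group, approved) pairs.

    Returns each group's approval rate divided by the highest group's
    rate. A ratio under 0.8 (the common "four-fifths rule") flags
    potential disparate impact for further investigation.
    """
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical model decisions: (group, approved?)
sample = (
    [("A", True)] * 80 + [("A", False)] * 20
    + [("B", True)] * 55 + [("B", False)] * 45
)
ratios = adverse_impact_ratios(sample)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios)   # group B's rate is 55/80 = 0.6875 of group A's
print(flagged)  # ['B']
```

Run periodically against production decisions, a check like this catches bias that creeps in over time rather than bias present at launch.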

4. Adulthood: prime productivity—ready for improvement and innovation. Independent and high-performing, the model is well utilized, as lessons learned on its proper use—and misuse—have led to expert predictability.

The operational environment is stable. The model is unbiased, continuously tested, and is performant. At this stage, enhancements are considered, possibly a retrain. Innovations in machine learning, our understanding of the problem, and improved data have enabled a path to improve the model, strengthen it, and return more value.

The model governance blockchain provides all parameters of the original model build, so we can do A/B testing of the original model versus the one retrained and enhanced with innovation.
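A minimal sketch of the routing side of such an A/B test, assuming a hash-based split (the 10 percent challenger share and record-ID scheme are hypothetical, not any particular platform's design):

```python
import hashlib

def route(record_id: str, challenger_share: float = 0.10) -> str:
    """Deterministically route a fixed share of traffic to the
    challenger model, so the same record always hits the same model
    across runs and the two arms can be compared fairly."""
    bucket = int(hashlib.sha256(record_id.encode()).hexdigest(), 16) % 100
    return "challenger" if bucket < challenger_share * 100 else "champion"

# Simulate routing 10,000 hypothetical transactions.
counts = {"champion": 0, "challenger": 0}
for i in range(10_000):
    counts[route(f"txn-{i}")] += 1
print(counts)  # roughly a 90/10 split
```

Hashing the record ID (rather than randomizing per request) keeps assignments stable, which matters when outcomes arrive long after the scoring decision.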

Areas of weakness at the time of original construction or learned in production are addressed and the model incrementally improves with a strong basis of understanding of stability, bias, operations, and proper use of the model.

5. Midlife: a consistent, dependable workhorse. No midlife crisis here, as the model becomes the standard by which all others are measured.

Due to the thoughtful, pre-production processes established and documented to the model governance blockchain early in the lifecycle—think: model enhancements focused on model weakness and proactive model monitoring—the model continues to perform and be well understood.

Yet, perhaps it doesn’t keep up with newer technology. As a result, serious discussions are initiated about changing the model in its entirety.

Is there a better model or technology for the job? Before considering a different model, evaluate and compare other technologies to gauge the possible return of value against the risks of restarting the entire lifecycle with untested and unproven technology.

Often the midlife model becomes the workhorse. It may no longer get “oohs and ahhs,” but it continues to provide tremendous consistent outcomes.

6. Senior: passing down knowledge, as age takes its toll. Over time, even with judicious retraining, adjustments, and continuous monitoring, the once-standard model begins to dull.

Bakeoffs may show stronger performance from other technologies, years of enhancements may show their age, and it becomes increasingly clear that something new is needed. Perhaps the “something new” involves new data sources that the model does not consume, or algorithms that it can’t easily accommodate.

Or, maybe changing business conditions have made the model less relevant. During this stage, the blockchain records the model’s shift under changing conditions, allowing an understanding of why features that were once highly performant now have reduced effectiveness.

This all feeds into decisions on how to build the new model as we prepare for new model conception. This conception is defined in the context of the older model and its blockchain. Specifically, where can the new technology make the largest improvements in areas where the senior model showed the greatest weakness?

7. End of life: reflection and imparting wisdom. Much like our lives, this stage is inevitable. As the model is eventually retired, it is noted that it left the world a better place by enhancing the state of the art, providing a solid foundation upon which all future models will be built.

The model has been chronicled since its conception, and now makes its last logs to the model governance blockchain. The new model is on its way—everyone is anticipating its birth—and it’s time to remove the old model from production. The blockchain now imparts proven knowledge and insight acquired throughout the model lifecycle and its value cannot be overstated.

The documented history it provides will serve as a continuous source of wisdom around the problems faced, addressed, solved, and repeated in different contexts. It offers efficient and potentially competitive shortcuts to familiar issues. Likewise, mistakes made throughout its lifetime are also remembered, so they’re not repeated in future projects.

No doubt, the circle of life is as real in AI model development as it is in the human lifecycle. Given the significant time, energy, and resources invested in our AI projects, it’s easy to feel like a proud parent after a successful model build.

However, it’s far more important that we adopt an “engaged parent” mindset from start to finish.

By mindfully approaching the model development lifecycle, you can learn from—and ultimately, operationalize—the “life lessons” acquired through continuous model management and produce more responsible and efficient AI models over the long haul.


Scott Zoldi
Dr Scott Zoldi is chief analytics officer at FICO, responsible for the analytic development of FICO's product and technology solutions. While at FICO, Scott has been responsible for authoring 80 analytic patents, with 40 patents granted and 40 in process.
