Shadowing an adaptive model
When new potential predictors become available in Pega Customer Decision Hub™ but have not yet been approved by the governance team, you can shadow the model that drives a Prediction with a copy that has access to the new predictors. In shadow mode, the challenger model has no impact on the Business Outcomes but learns from the production data.
Hi, I'm Hugo, a Data Scientist new to Pega. I'm working on a Customer Decision Hub project in financial services, responsible for monitoring the Adaptive Models that support next-best-action decisions.
Recently, the System Architect made a new set of potential predictors available in the Application.
They haven't been approved by the governance team yet, so I want to measure the predictive power of these new predictors without influencing the Business Outcomes.
I'm planning to ask Bella, the Lead System Architect on the project, how to set that up.
Hi Hugo, great to see you again!
Hi Bella! How can I safely evaluate newly available predictors of Adaptive Models?
To measure the predictive power of the potential predictors for Adaptive Models, you can shadow the model that drives the Prediction with a copy that has access to these new predictors.
Got it! Here's my use case: U+ Bank uses Customer Decision Hub to personalize the credit card offers that customers see on its website.
The Predict Web Propensity prediction calculates the propensity, or likelihood, that a customer will click on the banner displaying the credit card offer.
Customer Decision Hub uses these propensities to determine which offer to show.
Let's see. The Adaptive Model Configuration that drives the Predict Web Propensity prediction is the Web Click Through Rate Customer configuration.
So first, you'll want to introduce a challenger model and use a copy of the active model configuration for the candidate models.
Next, add the new predictors that you want to monitor to the challenger model.
I'm looking to add the Financial Services Clickstream, a summary that aggregates clickstream data from financial services web pages.
Yes, recent web browsing information can be highly relevant and, therefore, very predictive.
So, in shadow mode, the challenger model has no impact on the Business Outcomes but learns from the production data.
This allows you to monitor the predictive power of the new predictors without violating company policies.
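To make the shadow-mode idea concrete, here is a minimal, generic sketch (not Pega's implementation — the `Model` class and function names are hypothetical illustrations): the champion alone drives the decision, while both champion and challenger learn from the same production outcome.

```python
class Model:
    """Toy adaptive model: tracks clicks and impressions per offer.

    Illustration only -- real Adaptive Models use Bayesian learning
    over many predictors, not a simple click counter.
    """
    def __init__(self):
        self.clicks = {}
        self.shows = {}

    def propensity(self, offer):
        # Laplace-smoothed click-through estimate
        return (self.clicks.get(offer, 0) + 1) / (self.shows.get(offer, 0) + 2)

    def learn(self, offer, clicked):
        self.shows[offer] = self.shows.get(offer, 0) + 1
        if clicked:
            self.clicks[offer] = self.clicks.get(offer, 0) + 1


def decide_with_shadow(champion, challenger, offers):
    # Only the champion's propensities influence the Business Outcome...
    return max(offers, key=champion.propensity)


def record_outcome(champion, challenger, offer, clicked):
    # ...but both models learn from the same production response,
    # so the challenger's predictive power can be monitored risk-free.
    champion.learn(offer, clicked)
    challenger.learn(offer, clicked)
```

The key property of shadow mode is visible in `decide_with_shadow`: the challenger is never consulted for the decision, yet `record_outcome` feeds it the same data the champion sees.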
Perfect! So, I'll need a change request, correct? I work in a Business Operations Environment.
You won't need to create one manually. When you submit your changes for deployment, Customer Decision Hub automatically creates a change request in the current revision.
The Revision Manager ensures that all change requests are completed and then deploys the revision to all other environments, including production.
Great! The candidate models are new, so I'd assume they start learning from scratch.
Exactly, the challenger models begin learning when customers respond to the credit card offers after deployment of the Shadowing pattern.
Let's say that, after a while, I notice that the new predictors significantly contribute to the predictive power of the models.
When the governance team approves the new predictors, how do I put them to good use?
When the challenger models outperform the active models, you can promote them.
A Champion Challenger Pattern lets you use the challenger model for a limited number of decisions.
Alternatively, you can replace the active model and make the new predictors available for all decisions.
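The Champion Challenger routing that Bella describes can be sketched generically as a traffic split (a hypothetical illustration, not Pega code — the function name and the 10% share are assumptions):

```python
import random


def route_decision(champion_score, challenger_score,
                   challenger_share=0.1, rng=random):
    """Champion-challenger split: a configurable share of decisions
    (here an assumed 10%) uses the challenger model's propensity;
    the remainder still uses the champion's."""
    if rng.random() < challenger_share:
        return ("challenger", challenger_score)
    return ("champion", champion_score)
```

Replacing the active model outright corresponds to setting `challenger_share=1.0`; shadow mode corresponds to `0.0`, where the challenger scores but never decides.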
Crystal clear! So, to summarize, to safely introduce new predictors into the Adaptive Models, I create a copy of the active model configuration and use it in a Shadowing pattern.
When the new predictors are approved, I can use the challenger models for a percentage of the decisions in a Champion Challenger Pattern, or completely replace the active model.
Yes, that sums it up.
Thanks for all the info, Bella!
Anytime, Hugo! Let's keep innovating!