
Using machine learning services


Pega Platform™ provides out-of-the-box support for Amazon SageMaker and Google Vertex AI machine learning services. This is useful when you need to use highly specialized algorithms or frameworks that are not built into Pega by default, giving you much greater flexibility.


Transcript

Hi there and welcome. My name is Iris and I am an intern with the Pega Enablement Team. I am excited to be here today to walk you through configuring and using external machine learning services within the Pega Platform™. In this session, we will cover supported cloud providers, detailed configuration steps, performance considerations, and best practices.

So let's start with the basics. When we talk about external machine learning services, we're referring to the ability to integrate Pega with powerful AI and ML models hosted on cloud platforms. Unlike Pega's native models that run directly on the platform, this approach sends data to a service like Amazon SageMaker or Google Vertex AI for processing. The results are then sent back to Pega. This is incredibly useful when you need to use highly specialized algorithms or frameworks that are not built into Pega by default, giving you much greater flexibility.

Pega Platform provides out-of-the-box support for Amazon SageMaker and Google Vertex AI, two of the biggest players in the cloud ML space. SageMaker is Amazon's comprehensive service. Vertex AI unifies Google's ML offerings into a single platform. Both allow you to tap into a vast ecosystem of tools, and we will look at the specific frameworks Pega supports for each one next.

Diving into Amazon SageMaker, you will see that Pega supports a variety of popular frameworks. This includes deep learning with TensorFlow, gradient boosting with XGBoost, and several algorithms for clustering, classification, and anomaly detection. For most of these, you will be working with CSV for the input and JSON for the output predictions.
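To make that CSV-in, JSON-out contract concrete, here is a minimal Python sketch of how a caller might serialize predictor rows as a CSV request body and parse a JSON prediction response. The endpoint name and the response shape (`{"predictions": [{"score": ...}]}`) are assumptions for illustration; the actual invocation would go through the SageMaker runtime's InvokeEndpoint operation (shown as a comment), and in Pega this is all handled by the platform behind the scenes.

```python
import json

def build_csv_body(rows):
    """Serialize predictor rows as the text/csv request body most
    SageMaker built-in algorithms expect (no header row)."""
    return "\n".join(",".join(str(v) for v in row) for row in rows)

def parse_predictions(response_body):
    """Parse a JSON prediction response of the common
    {"predictions": [{"score": ...}, ...]} shape (illustrative)."""
    payload = json.loads(response_body)
    return [p["score"] for p in payload["predictions"]]

body = build_csv_body([[34, 1, 2750.0], [51, 0, 180.5]])
# The real call would look something like:
# boto3.client("sagemaker-runtime").invoke_endpoint(
#     EndpointName="churn-xgboost",   # hypothetical endpoint name
#     ContentType="text/csv", Body=body)
scores = parse_predictions('{"predictions": [{"score": 0.82}, {"score": 0.13}]}')
```

The point of the sketch is the data-format boundary: your model endpoint only ever sees flat CSV rows going in and structured JSON coming out.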

For Google Vertex AI, the support is slightly different. Pega integrates well with Google's AutoML capabilities, which is great for rapidly developing models. You can also connect to custom-built models and those using the popular scikit-learn library, and XGBoost is available here as well. I want to call out one important limitation: currently, Pega does not support PyTorch or TensorFlow models within the Google Vertex AI integration. This is a key difference to remember when choosing your platform.
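Where SageMaker typically takes CSV, Vertex AI prediction endpoints take a JSON body of the form `{"instances": [...]}`, one object per record to score. The field names in the records below are hypothetical; only the outer `instances` wrapper reflects Vertex AI's documented request shape.

```python
import json

def build_vertex_request(instances):
    """Build (not send) the JSON body a Vertex AI prediction
    endpoint expects: {"instances": [...]}."""
    return json.dumps({"instances": instances})

# Two hypothetical records for a tabular AutoML or scikit-learn model
req = build_vertex_request([
    {"age": "34", "tenure_months": "12"},
    {"age": "51", "tenure_months": "48"},
])
```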

Before you can configure anything in Prediction Studio, you must set up the proper authentication in Dev Studio. For Amazon SageMaker, you will create an AWS authentication profile, providing it with credentials that have the necessary permissions to invoke your SageMaker model endpoints. For Google Vertex AI, the process uses an OAuth 2.0 profile to securely connect to your GCP service account, using standard grant types like client credentials.
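As a rough sketch of the client-credentials grant mentioned above, here is how such a token request is assembled: a form-encoded POST to the provider's token endpoint carrying the grant type and the client's credentials. The client ID and secret are placeholders; in Pega, the OAuth 2.0 authentication profile builds and sends this request for you.

```python
from urllib.parse import urlencode
from urllib.request import Request

TOKEN_URL = "https://oauth2.googleapis.com/token"  # GCP token endpoint

def build_token_request(client_id, client_secret):
    """Build (not send) an OAuth 2.0 client-credentials token request."""
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode()
    return Request(TOKEN_URL, data=body,
                   headers={"Content-Type": "application/x-www-form-urlencoded"})

req = build_token_request("my-client-id", "my-secret")  # hypothetical credentials
```

Sending the request (and caching and refreshing the returned access token) is the part the authentication profile abstracts away from you.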

Once authentication is handled, the configuration process in Prediction Studio follows these four main steps. First, you establish and test the connection to the service. Second, you create a new predictive model rule in Pega. The third and most technical step is configuring the model metadata via a JSON file, which we will discuss in more detail. Finally, you map the model's inputs to the corresponding properties in your Pega application.

We will walk through these steps now. In Prediction Studio settings, you'll create the machine learning service connection. This is where you link the authentication profile you made earlier and provide the endpoint details. The test connection button is your best friend here. Always use it to confirm everything is working before you move on. Once the connection is green, you can proceed to create the actual predictive model rule, selecting your new service from the list.

Step three is arguably the most important and technical part of the configuration, the model metadata file. This JSON file acts as a contract, telling Pega exactly how to interact with the external model. It defines the model's objective, what kind of outcome to expect, like a binary yes-no or a continuous number, and a list of all the input predictors. My biggest piece of advice here is to have your data science team generate this file automatically as part of their model building pipeline. This prevents typos and mismatches that can be very difficult to debug later.
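To give a feel for what such a contract contains, here is an illustrative sketch. The field names below are assumptions, not Pega's actual metadata schema; as the advice above says, generate the real file from your model-building pipeline rather than writing it by hand.

```python
import json

# Illustrative sketch only — these field names are assumptions, not
# Pega's actual metadata schema. The structure mirrors what the
# contract must convey: objective, outcome type, and input predictors.
metadata = {
    "objective": "ChurnPropensity",
    "outcomeType": "binary",          # e.g. binary yes/no vs. continuous
    "predictors": [
        {"name": "Age",          "type": "numeric"},
        {"name": "TenureMonths", "type": "numeric"},
        {"name": "PlanType",     "type": "symbolic"},
    ],
}
metadata_json = json.dumps(metadata, indent=2)
```

Because Pega validates incoming data against this file, a single misspelled predictor name here can silently break scoring, which is exactly why generating it automatically beats typing it.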

After all that configuration, how do you actually use the model? The good news is it's very straightforward. Within a Next Best Action strategy, you use the standard predictive model shape and select the model you just configured. From a strategy designer's perspective, it feels just like a native Pega model. Behind the scenes, Pega handles the entire data flow, collecting the necessary data, securely sending it to the external service, and then integrating the returned prediction back into your strategy execution.

While external models are powerful, there are practical trade-offs to consider. The biggest one is latency: every call has to travel over the network, get processed, and travel back, which takes more time than a native model. You also need to be mindful of cost, as these cloud services are typically pay-as-you-go. These services are therefore ideal for scenarios where the need for a highly complex or specialized model, such as churn prediction or credit risk, justifies the potential latency and cost.
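When weighing that latency trade-off, it helps to measure it. A minimal sketch, assuming a stand-in scoring function in place of a real external call:

```python
import time

def timed_call(fn, *args):
    """Measure the round-trip time of a scoring call in milliseconds —
    useful when checking whether an external model fits your SLA."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return result, elapsed_ms

# Stand-in for a real external scoring call (hypothetical)
score, ms = timed_call(lambda row: 0.42, [34, 1, 2750.0])
```

Run the same measurement against a comparable native model and the difference tells you what the external call is really costing each interaction.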

To wrap up, let's cover some common issues and best practices. If you run into problems, the issue is often in one of three areas: the connection itself, the data format, or authentication. Systematically check your authentication profiles, network settings, and that critical metadata file. For long-term success, always enforce security best practices like using encrypted connections and rotating credentials. And finally, treat this as a living integration. Regularly monitor its performance and cost, and be sure to update the metadata in Pega whenever your data science team retrains or changes the external model.

That's it for me on using external machine learning services within Pega. Thanks for watching, see you next time.

