Default background processors
Default queue processors
Pega Platform™ provides many default queue processors. Use the following common processors in your application where applicable.
If your use case does not generate a large volume of queue items and does not have high scaling and throughput requirements, queue the page to the existing standard queue processor and specify the activity that you want this queue processor to run.
pyProcessNotification
The pyProcessNotification queue processor sends notifications to customers and runs the pxNotify activity to calculate data such as the list of recipients, the message, or the channel. The possible channels include email, a gadget notification, or a push notification.
pzStandardProcessor
You can use the pzStandardProcessor queue processor for standard asynchronous processing when:
- Processing does not require high throughput, or processing resources can be slightly delayed.
- Default and standard queue behaviors are acceptable.
Use this queue processor for tasks such as submitting each status change to an external system, or for running bulk processes in the background. When the queue processor resolves all the items from the queue, you receive a notification with the number of successful and failed attempts.
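As an illustrative sketch only (not taken from this article), an activity typically queues an item to pzStandardProcessor with the Queue-For-Processing activity method, naming the activity that the processor runs for each item. The field labels below are approximate, and MyAsyncActivity and MyQueuePage are hypothetical placeholders:

```text
Activity step (pseudocode sketch; field names approximate, names hypothetical)
Method: Queue-For-Processing
  Queue processor : pzStandardProcessor   -- standard queue processor
  Activity        : MyAsyncActivity       -- activity to run for each queued item
  Step page       : MyQueuePage           -- page queued as the item's content
```

Because pzStandardProcessor is shared, this pattern fits the low-volume, delay-tolerant use cases described above; high-throughput workloads warrant a dedicated queue processor instead.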
pyFTSIncrementalIndexer
The pyFTSIncrementalIndexer queue processor performs incremental indexing in the background. This queue processor posts rule, data, and work objects into the search subsystem as soon as you create or change them, which keeps search data current so that it closely reflects the content of the database.
Default job schedulers
Pega Platform provides many default job schedulers that can be useful in your application.
Node cleaner
The node cleaner cleans up expired locks and outdated module version reports.
By default, the node cleaner job scheduler (pyNodeCleaner) runs the Code-.pzNodeCleaner activity on all the nodes in the cluster.
Cluster and database cleaner
By default, the cluster and database job scheduler (pyClusterAndDBCleaner) runs the Code-.pzClusterAndDBCleaner activity on only one node in the cluster, once every 24 hours for housekeeping tasks. This job purges the following items:
- Older records from log tables
- Requestors that have been idle for 48 hours
- Passivation data (clipboard pages) for expired requestors
- Expired locks
- Cluster state data that is older than 90 days
Persist node and cluster state
The pyPersistNodeState job scheduler saves node state at node startup.
The pyPersistClusterState job scheduler saves cluster state data once a day.
Cluster state data is retained for 90 days and is then purged by the pyClusterAndDBCleaner job scheduler.
Default agents
When Pega Platform is initially installed, many default agents are configured to run in the system (similar to services configured to run in a computer OS). Review and tune the agent configurations on a production system because there are default agents that:
- Are unnecessary for most applications because the agents implement legacy or seldom-used features
- Should not run in production
- Run at inappropriate times by default
- Run more frequently than needed, or not frequently enough
- Run on all nodes by default but should run on only one node
For example, by default, several agents in the Pega-DecisionEngine ruleset are configured to run in the system. Disable these agents if decisioning does not apply to your applications. Enable some agents, such as the Pega-AutoTest agents, only in a development or QA environment. Some agents are designed to run on a single node in a multinode configuration.
A complete review of agents and their configuration settings is available in the Pega Community article Agents and agent schedules. Because these agents are in locked rulesets, they cannot be modified directly. To change the configuration for these agents, update the agent schedules generated from the agent rules.
Bulk processing
Bulk actions are available in the Case Manager and Caseworker portals to process more than one case at a time. Bulk processing saves time and is less error-prone than processing cases individually.
By default, you can perform the following major tasks with bulk processing:
- Create a case.
- Update case information.
- Run a flow action to update a case.
Bulk processing is handled by the pzStandardProcessor queue processor, which runs the pzBulkProcessCases activity.