Pega Cloud cloning limitations for stream data sets and environment stream tiers

Updated on December 20, 2021

This content applies only to Pega Cloud environments

Pega Platform stores Stream data sets on Kafka servers. Pega Cloud currently supports Kafka either as a server managed internally by Pega or as a managed service provided to Pega Platform. In both cases, the Kafka data is not copied during the cloning operation. Instead, Kafka topics created by Stream data sets are re-created in the cloned environment on first use.

All standard product features will continue to work using these newly created data sets.


Queue Processors

All custom and internal queue processors automatically start on the cloned environment. The required topics are created in Kafka on first use.

Visual Business Director / Real Time Data Grid (RTDG) service

All Visual Business Director (VBD) topics are re-created on first use.

Client-defined Stream data sets

Client-defined Stream data sets are created when the cloned environment starts, but they are empty. If required, clients can ingest new events through the exposed endpoints to see data in these Stream data sets; for example, by posting a few sample Clicks events to the exposed endpoint of the data set, as shown in the sketch below.
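The following is a minimal sketch of how such an ingestion might look, assuming a hypothetical client-defined Stream data set named Clicks and the conventional Stream data set REST path; the host name, URL path, credentials, and event fields are placeholders that depend on your environment and the data set's class definition.

```python
import requests

# Placeholder values: replace with your cloned environment's host,
# credentials, and the name of your client-defined Stream data set.
HOST = "https://your-clone.example.com"
DATASET = "Clicks"
ENDPOINT = f"{HOST}/prweb/api/stream/v1/{DATASET}"  # assumed Stream data set REST path

# A few sample click events; the property names must match the
# data set's class definition in your application.
sample_events = [
    {"CustomerID": "C-1001", "PageID": "home", "ClickTime": "2021-12-20T10:15:00Z"},
    {"CustomerID": "C-1002", "PageID": "offers", "ClickTime": "2021-12-20T10:16:00Z"},
]

for event in sample_events:
    response = requests.post(
        ENDPOINT,
        json=event,
        auth=("operator.id", "password"),  # placeholder credentials
        timeout=10,
    )
    response.raise_for_status()
    print(f"Posted event, status {response.status_code}")
```

After a few events are posted, the data appears in the Stream data set on the cloned environment, because the underlying Kafka topic is created on first use.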

