Published Release Notes

Changes to the architecture of the Data Flow service

Valid from Pega Version 8.4

In Pega Platform™ 8.4, the architecture of batch and real-time data flows uses improved node handling to increase the stability of data flow runs. Nodes interact less with the database and with each other, which makes the Data Flow service more resilient.

If you upgrade from a previous version of Pega Platform, see the following list for an overview of the changes in the behavior of the Data Flow service compared to previous versions:

Responsiveness

Nodes no longer communicate and trigger each other, but run periodic tasks instead. As such, triggering a new run does not cause the service nodes to immediately start the run. Instead, the run starts a few seconds later. The same applies to user actions such as stopping, starting, and updating the run. The system also processes topology changes as periodic tasks, so it might take a few minutes for new nodes to join runs, or for partitions to redistribute when a node leaves a run.
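The sketch below is a minimal illustration of this periodic-task model in plain Java, with hypothetical names; it is not Pega Platform code. It shows why a lifecycle request takes effect on the next scheduled tick rather than immediately.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative only: hypothetical names, not Pega Platform internals.
public class PeriodicRunManager {

    // Lifecycle requests (start, stop, update) are queued instead of
    // being pushed to other service nodes immediately.
    private final Queue<String> pendingActions = new ConcurrentLinkedQueue<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void start() {
        // Each node periodically picks up pending actions and topology changes,
        // so a request is applied on the next tick, a few seconds after it was made.
        scheduler.scheduleAtFixedRate(this::processPendingActions, 0, 5, TimeUnit.SECONDS);
    }

    public void requestAction(String action) {
        pendingActions.add(action); // returns immediately; no node-to-node trigger
    }

    private void processPendingActions() {
        String action;
        while ((action = pendingActions.poll()) != null) {
            System.out.println("Applying lifecycle action: " + action);
        }
        // Topology changes (nodes joining or leaving a run) would be detected
        // here as well, which is why partition redistribution can take minutes.
    }
}
```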

Updates to lifecycle actions

To make lifecycle actions more intuitive, the Stop action consolidates both the Stop and Pause actions. The Start action consolidates both the Resume and Start actions.

You can resume or restart stopped and failed runs with the Start and Restart actions. The Start action is only available for resumable runs and continues the run from where it stopped. The Restart action causes the run to process from the beginning. Completed runs can only be restarted. If a run completes with failures, you can restart it from the beginning, or process only the errors by using the Reprocess failures action.
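As a summary of these rules, the following sketch in plain Java uses hypothetical type names (not Pega Platform rules or APIs) to show which consolidated action applies to which run state:

```java
// Illustrative only: hypothetical types summarizing the consolidated actions.
enum RunStatus { IN_PROGRESS, STOPPED, FAILED, COMPLETED, COMPLETED_WITH_FAILURES }

final class LifecycleRules {

    /** Start resumes a resumable run from where it stopped. */
    static boolean canStart(RunStatus status, boolean resumable) {
        return (status == RunStatus.STOPPED || status == RunStatus.FAILED) && resumable;
    }

    /** Restart reprocesses a run from the beginning; completed runs can only be restarted. */
    static boolean canRestart(RunStatus status) {
        return status == RunStatus.STOPPED
                || status == RunStatus.FAILED
                || status == RunStatus.COMPLETED
                || status == RunStatus.COMPLETED_WITH_FAILURES;
    }

    /** Reprocess failures handles only the records that failed. */
    static boolean canReprocessFailures(RunStatus status) {
        return status == RunStatus.COMPLETED_WITH_FAILURES;
    }
}
```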

Starting a run

New data flow runs have the Initializing status and start automatically. You no longer need to manually start a new run, so the New status has been removed.

If there are no nodes available to process a run, the run gets the Queued status and waits for an available node.
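A minimal sketch of this status flow, in plain Java with hypothetical names (not a Pega Platform API):

```java
// Illustrative only: hypothetical helper, not a Pega Platform API.
enum DataFlowRunStatus { INITIALIZING, QUEUED, IN_PROGRESS }

final class RunStartup {
    /** New runs begin as Initializing and start automatically; there is no New status. */
    static DataFlowRunStatus nextStatus(DataFlowRunStatus current, int availableNodes) {
        if (current == DataFlowRunStatus.INITIALIZING) {
            // With no nodes available to process the run, it queues and waits for a node.
            return availableNodes == 0 ? DataFlowRunStatus.QUEUED : DataFlowRunStatus.IN_PROGRESS;
        }
        if (current == DataFlowRunStatus.QUEUED && availableNodes > 0) {
            return DataFlowRunStatus.IN_PROGRESS;
        }
        return current;
    }
}
```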

Triggering pre- and post-activities

The system now triggers pre-activities on a random service node, rather than on the node that triggered the run.

The system triggers post-activities only for runs that complete, fail, or complete with failures. If you manually stop a run with the Stop action, the post-activity does not trigger. However, restarting the run with the Restart action triggers the post-activity first, and then the pre-activity.

You can no longer choose to run pre- and post-activities on all nodes.
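The triggering rules above can be summarized with the following sketch, written in plain Java with hypothetical names (not a Pega Platform API):

```java
// Illustrative only: hypothetical helper summarizing the triggering rules,
// not a Pega Platform API.
enum RunOutcome { COMPLETED, FAILED, COMPLETED_WITH_FAILURES, STOPPED_BY_USER }

final class ActivityTriggerRules {

    /** Post-activities run only when a run completes, fails, or completes with failures. */
    static boolean triggersPostActivity(RunOutcome outcome) {
        return outcome != RunOutcome.STOPPED_BY_USER;
    }

    /** Restarting a stopped run triggers the post-activity first, then the pre-activity. */
    static void onRestart(Runnable postActivity, Runnable preActivity) {
        postActivity.run();  // close out the previous execution
        preActivity.run();   // pre-activities run on a random service node
    }
}
```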

Selecting a node fail policy

For resumable runs, you can no longer select a node fail policy. If a node fails, the partitions assigned to that node automatically continue the run on different nodes.

For non-resumable runs, you can choose to restart the partitions assigned to the failed node on different nodes, or to fail the partitions assigned to the failed node.
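To illustrate the difference, here is a minimal sketch in plain Java with hypothetical names; it is not the Pega rule form or API:

```java
// Illustrative only: hypothetical types, not the Pega rule form or API.
enum NodeFailPolicy { RESTART_PARTITIONS_ON_OTHER_NODES, FAIL_PARTITIONS }

final class NodeFailureHandling {

    static void onNodeFailure(boolean resumable, NodeFailPolicy configuredPolicy) {
        if (resumable) {
            // No policy to choose: partitions always continue on the remaining nodes.
            reassignPartitions();
            return;
        }
        // Non-resumable runs honor the configured policy.
        if (configuredPolicy == NodeFailPolicy.RESTART_PARTITIONS_ON_OTHER_NODES) {
            restartPartitionsFromBeginning();
        } else {
            failPartitions();
        }
    }

    private static void reassignPartitions() { /* continue from the last processed position */ }
    private static void restartPartitionsFromBeginning() { /* reprocess the partition on another node */ }
    private static void failPartitions() { /* mark the affected partitions as failed */ }
}
```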

No service nodes and active runs

If the last data flow node for an in-progress run fails, the run remains in the In Progress state, even though no processing takes place. This behavior occurs because the data flow architecture now prevents unrelated nodes from affecting runs.

Decision Data Store data sets can be used only on DNodes

Valid from Pega Version 7.1.8

Data flows that contain Decision Data Store data sets as the primary or secondary source must be created and executed only on DNodes.

Data flows are not restarted automatically after application server restart

Valid from Pega Version 7.1.8

When you restart the application server or the Pega 7 server, you stop the execution of your data flows. The interrupted batch and real-time runs are marked as Failed.

Recommendation:

  • Go to Designer Studio > Decisioning > Decisions > Data Flows > Batch processing tab and start the failed batch runs.
  • Go to Designer Studio > Decisioning > Decisions > Data Flows > Real-time processing tab and activate the failed real-time runs.

 

Value list and value group properties are not supported inside data flows

Valid from Pega Version 7.1.8

Value list and value group properties are not supported inside data flows. Use other property types instead.

See the Pega 7 developer help to check all available property types.

Data flow preview size is fixed to 10

Valid from Pega Version 7.1.8

The Preview option for each shape in data flows returns the first 10 records. This value is fixed and currently cannot be changed.

Data Flow transformation shapes cannot be used in combination with the Compose or Merge shapes

Valid from Pega Version 7.1.8

When you reference another data flow from a data flow that contains the Compose or Merge shape, the referenced data flow cannot contain transformation shapes (EventStrategy, Decision strategy, Convert).

Data flow validation does not currently prevent you from designing a data flow that goes against this design pattern. Make sure that your data flows follow this design pattern by checking the referenced and referencing data flows.

 

Data flow destination is resolved at assembly time

Valid from Pega Version 7.1.8

Data flow execution always uses the design-time destination data sets or destination data flows, regardless of the input records at run time.

Example:

You create a data flow and set the destination to a specific data set. In the data flow, you use a strategy that produces some results. Regardless of these results, the execution of your data flow always uses the destination data set that you specified when you designed the data flow.    

The same applies when you set the destination to a data flow.

 

 

Problem with truncating Decision Data Store data set

Valid from Pega Version 7.1.9

The Truncate operation for the Decision Data Store data set may cause timeout exceptions. This problem occurs because the Apache Cassandra database waits until compaction tasks finish before it can truncate the data set.

Recommendation:

Repeat the Truncate operation until it is successful.
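A minimal retry sketch in plain Java, assuming a hypothetical truncateDecisionDataStore() placeholder for however you invoke the Truncate operation; it is not a Pega or Cassandra API:

```java
import java.util.concurrent.TimeUnit;

// Illustrative only: retry loop around a hypothetical truncate call.
final class TruncateWithRetry {

    static void truncateUntilSuccessful(int maxAttempts) throws InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                truncateDecisionDataStore();
                return; // success
            } catch (RuntimeException timeout) {
                // Cassandra may still be running compaction tasks; wait and retry.
                TimeUnit.SECONDS.sleep(30);
            }
        }
        throw new IllegalStateException("Truncate did not succeed after " + maxAttempts + " attempts");
    }

    private static void truncateDecisionDataStore() {
        // Hypothetical placeholder for the Decision Data Store Truncate operation.
    }
}
```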
