
Resolved Issues

View the resolved issues for a specific Platform release.


INC-126129 · Issue 569666

PropertyToColumnMap made more robust

Resolved in Pega Version 8.1.9

The DF_ProcessEmails dataflow was intermittently failing with a StageException error. This was traced to schema changes being propagated asynchronously by system pulse, which appeared to cause PropertyToColumnMap to cache a stale schema. To resolve this, if the property mapping is not found on the first attempt, the system makes a second attempt to fetch the mapping. Additional logging has also been added for better diagnostics.
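As a rough illustration of the retry-on-miss approach, the sketch below refreshes a cached schema mapping and tries once more before logging a failure. The SchemaLoader interface and class names are hypothetical stand-ins, not the actual Pega PropertyToColumnMap implementation.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of the retry-on-miss idea, not the actual Pega implementation.
// SchemaLoader and loadColumns() are hypothetical stand-ins for the component
// that reads the current schema from the database.
public class PropertyToColumnMapSketch {

    interface SchemaLoader {
        Map<String, String> loadColumns(String className);
    }

    private final SchemaLoader loader;
    private final Map<String, Map<String, String>> cache = new ConcurrentHashMap<>();

    PropertyToColumnMapSketch(SchemaLoader loader) {
        this.loader = loader;
    }

    Optional<String> columnFor(String className, String propertyName) {
        Map<String, String> columns =
                cache.computeIfAbsent(className, loader::loadColumns);
        String column = columns.get(propertyName);
        if (column == null) {
            // Mapping not found: the cached schema may be stale because schema
            // changes are propagated asynchronously. Refresh once and retry.
            columns = loader.loadColumns(className);
            cache.put(className, columns);
            column = columns.get(propertyName);
            if (column == null) {
                // Extra diagnostics for the case where the retry also fails.
                System.err.printf("No column mapping for %s.%s after refresh%n",
                        className, propertyName);
            }
        }
        return Optional.ofNullable(column);
    }
}
```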

INC-128385 · Issue 564521

Behavior made consistent between SSA and legacy engines

Resolved in Pega Version 8.1.9

There was a behavioral disparity between the legacy execution engine and the SSA engine where the latter was not creating a new page when the index was one above the size of the page list. This has now been corrected in order to make the SSA behavior fully backward compatible with the legacy engine, i.e. a new blank page will be added to the list if the index is one above the size of the list.
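The sketch below illustrates the restored legacy behavior with plain Java collections: accessing an index exactly one above the current size appends a new blank page. It is illustrative only and does not use the real clipboard page classes.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the backward-compatible page-list behavior,
// using plain Java collections rather than the real clipboard classes.
public class PageListAccessSketch {

    // Returns the page at the given 1-based index; if the index is exactly one
    // above the current size, a new blank page is appended (legacy behavior).
    static Map<String, Object> getOrCreate(List<Map<String, Object>> pageList, int index) {
        if (index == pageList.size() + 1) {
            pageList.add(new HashMap<>());        // append a new blank page
        } else if (index < 1 || index > pageList.size()) {
            throw new IndexOutOfBoundsException("Index " + index
                    + " is more than one past the end of the list");
        }
        return pageList.get(index - 1);
    }

    public static void main(String[] args) {
        List<Map<String, Object>> pages = new ArrayList<>();
        getOrCreate(pages, 1).put("Name", "first");   // creates page 1
        getOrCreate(pages, 2).put("Name", "second");  // creates page 2
        System.out.println(pages);
    }
}
```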

INC-129222 · Issue 568530

Handling improvements for commit logs

Resolved in Pega Version 8.1.9

The ADM commit log records the number of unconsumed messages that are going to expire. In certain circumstances, it could also count unconsumed messages that were not going to expire. Because those messages were never expired and removed, the ADM commitlogs table grew larger than expected, causing the environment to run out of disk space and exhibit performance issues. To resolve this, a new adm_commitlog.adm_responses_commit_log_date_tiered table has been created with a default_time_to_live of 24 hours, and DateTieredCompactionStrategy has been configured with max_window_size_seconds and tombstone_compaction_interval both set to 24 hours.
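For illustration, the following sketch issues an approximation of that DDL through the DataStax Java driver. The column layout is a placeholder (the note does not describe the table's columns), and the connection details are assumptions; only the TTL and compaction options follow the values stated above.

```java
import com.datastax.oss.driver.api.core.CqlSession;

// Approximate DDL sketch based on the release note; the column definitions
// below are placeholders, not the real ADM commit log schema.
public class AdmCommitLogDdlSketch {

    private static final String CREATE_TABLE =
        "CREATE TABLE IF NOT EXISTS adm_commitlog.adm_responses_commit_log_date_tiered ("
      + "  partition_key text, message_id timeuuid, payload blob,"
      + "  PRIMARY KEY (partition_key, message_id))"
      + " WITH default_time_to_live = 86400"              // 24 hours
      + " AND compaction = {"
      + "   'class': 'DateTieredCompactionStrategy',"
      + "   'max_window_size_seconds': '86400',"          // 24 hours
      + "   'tombstone_compaction_interval': '86400'}";   // 24 hours

    public static void main(String[] args) {
        // Connects to a local Cassandra node by default; adjust as needed.
        try (CqlSession session = CqlSession.builder().build()) {
            session.execute(CREATE_TABLE);
        }
    }
}
```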

INC-132976 · Issue 580685

Performance improvements for Test Strategy data flow

Resolved in Pega Version 8.1.9

In the Test Strategy panel under Single case -> "Settings", selecting the "Data flow" option and choosing CustomerData dataflow was taking an excessive amount of time to run on a system with an extremely large database. To improve performance, two areas have been addressed: 1) the default behavior for record key suggestions in the test panel has been modified to collect only the ID as the additional data is not necessary at that time; 2) a DSS has been added that will opt out of reading and collecting the customer IDs in order to minimize data stored on the clipboard.
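A rough sketch of the two changes, using hypothetical types and a boolean in place of the actual DSS (whose name is not given in the note): by default only the IDs are collected for key suggestions, and the opt-out skips collection entirely.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Generic sketch of the two changes described above; the setting flag and the
// Record type are hypothetical, not the actual Pega DSS or APIs.
public class TestPanelKeySuggestionsSketch {

    record Record(String id, String fullPayload) {}

    static List<String> suggestKeys(List<Record> source,
                                    boolean skipCustomerIdCollection,
                                    Function<Record, String> idOnly) {
        if (skipCustomerIdCollection) {
            // Opt-out (controlled by a DSS in the product): collect nothing,
            // keeping the clipboard footprint minimal on very large databases.
            return List.of();
        }
        // Default behavior: collect only the ID, not the full record payload.
        List<String> ids = new ArrayList<>();
        for (Record r : source) {
            ids.add(idOnly.apply(r));
        }
        return ids;
    }
}
```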

INC-138037 · Issue 586593

Strategy handling updated for very large systems using IH summary

Resolved in Pega Version 8.1.9

When a Strategy in a Real-time dataflow used IH Summary on a system with more than 5000 groups for one eventKey, the message "Error retrieving aggregates from Cassandra KVS" intermittently appeared. Investigation showed that when the number of result rows exceeded the FETCH_SIZE (set to 5000), another read to Cassandra was required, which generated an exception. To resolve this, the system now returns iterators instead of fully materialized maps and converts them to maps on the calling thread.
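The sketch below shows the general idea: the reader hands back a lazy iterator, and the aggregate map is built on the calling thread as it consumes the rows, so additional pages are fetched where they can be handled. Row and the aggregation logic are simplified stand-ins for the IH summary and Cassandra driver types.

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Sketch of the "return an iterator, build the map on the calling thread" idea.
public class AggregateReadSketch {

    record Row(String group, long count) {}

    // Previously: the reader drained everything into a Map up front, which broke
    // when the result exceeded the driver fetch size (5000 rows).
    // Now: the caller consumes the lazy iterator itself, so further pages are
    // fetched on the calling thread as iteration proceeds.
    static Map<String, Long> toAggregateMap(Iterator<Row> rows) {
        Map<String, Long> aggregates = new HashMap<>();
        while (rows.hasNext()) {
            Row row = rows.next();
            aggregates.merge(row.group(), row.count(), Long::sum);
        }
        return aggregates;
    }
}
```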

SR-D92734 · Issue 553412

Simulation can take Data flow type as destination

Resolved in Pega Version 8.1.9

Support has been added for using a Data flow as a simulation target and for using a data transform as simulation input.

SR-B52067 · Issue 311178

Handling improved for extremely large data flow run statistics page

Resolved in Pega Version 7.3.1

A data flow monitoring page with 1000+ data flow shapes in total was hanging while loading the statistics. This happened because the component statistics table had no pagination enabled, and displaying all 1000+ shapes on one screen caused the browser to hang. This has been remedied with:
- Pagination on the component statistics table in the data flow monitoring screen
- Filtering on the component statistics table, allowing monitoring of only the relevant data flow components
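A generic sketch of the pagination-plus-filtering idea, with a placeholder ComponentStat type rather than the actual monitoring model:

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Generic sketch of paginating and filtering a large statistics table before
// rendering; ComponentStat is a placeholder, not the actual monitoring model.
public class ComponentStatsViewSketch {

    record ComponentStat(String shapeName, long recordsProcessed) {}

    static List<ComponentStat> page(List<ComponentStat> all,
                                    Predicate<ComponentStat> filter,
                                    int pageIndex, int pageSize) {
        return all.stream()
                .filter(filter)                        // show only relevant components
                .skip((long) pageIndex * pageSize)     // 0-based page index
                .limit(pageSize)
                .collect(Collectors.toList());
    }
}
```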

SR-B69080 · Issue 318226

Local PegaAPI instance added for Monte-Carlo dataset

Resolved in Pega Version 7.3.1

A dataflow running from a Monte-Carlo data set was failing with a stale thread exception. This was caused by the PegaAPI variable 'pega' not being defined as a local variable in the Monte-Carlo dataset generator, and has been resolved by creating a local PegaAPI instance variable.
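A generic illustration of the pattern behind the fix: the thread-bound API handle is obtained as a local variable on each call instead of being cached and reused across threads. ApiHandle and the supplier are stand-ins, not the real Pega engine API.

```java
import java.util.function.Supplier;

// Generic illustration of the fix: obtain the thread-bound API object locally
// on each call instead of caching it across threads.
public class MonteCarloGeneratorSketch {

    interface ApiHandle {
        void log(String message);
    }

    private final Supplier<ApiHandle> apiForCurrentThread;

    MonteCarloGeneratorSketch(Supplier<ApiHandle> apiForCurrentThread) {
        this.apiForCurrentThread = apiForCurrentThread;
    }

    void generateBatch(int size) {
        // Local variable: resolved on the thread that actually runs the batch,
        // so a handle created on another (possibly stale) thread is never reused.
        ApiHandle pega = apiForCurrentThread.get();
        pega.log("Generating " + size + " Monte-Carlo records");
        // ... record generation would go here ...
    }
}
```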

SR-B53147 · Issue 310299

HDFS configuration updated to support KMS server

Resolved in Pega Version 7.3.1

A field has been added to the HDFS configuration to allow a KMS server to be configured. This setting is propagated to all places where the Hadoop configuration is used, including HDFS and Parquet data sets.
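As an example of what such a setting maps to on the Hadoop side, the sketch below points a Hadoop client at a KMS. The host, port, and exact property name (newer Hadoop versions use hadoop.security.key.provider.path, older ones dfs.encryption.key.provider.uri) are assumptions to verify against your Hadoop distribution.

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch of pointing a Hadoop client at a KMS; host, port, and property name
// are assumptions, not values taken from the release note.
public class HdfsKmsConfigSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("hadoop.security.key.provider.path",
                 "kms://http@kms.example.com:9600/kms");

        try (FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf)) {
            // Reads from encryption zones will now fetch keys via the KMS.
            System.out.println(fs.exists(new Path("/secure/data")));
        }
    }
}
```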

SR-B74689 · Issue 324632

Marketing data flow run made more robust

Resolved in Pega Version 7.3.1

After upgrade, Pega Marketing Campaigns were failing when there were no customers in the Audience configured on the Campaign, generating the error message "The run failed, because it exceeds the maximum number of failed records, which is currently set to 0". The cause was executing a distributed data flow with a database as its primary source against an empty table: because the table had no partitions, the run failed. The Database data set has now been updated to differentiate the case where no partition is available from the case where there is a single partition for every record. The data set now returns 'all' records when no partition key is defined, and the data flow handles the absence of partition values more robustly.
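A simplified sketch of that partitioning decision: when no partition key is defined (or the table is empty), a single unbounded partition is returned so the run reads everything instead of failing on an empty partition list. The Partition type is illustrative only, not the actual data set implementation.

```java
import java.util.List;

// Simplified sketch of the partitioning decision described above.
public class DbPartitionerSketch {

    record Partition(String lowerKey, String upperKey) {}

    // When no partition key is defined (or the table is empty), return a single
    // unbounded partition so the data flow reads "all" records instead of
    // failing on an empty partition list.
    static List<Partition> partitions(List<String> partitionKeys) {
        if (partitionKeys == null || partitionKeys.isEmpty()) {
            return List.of(new Partition(null, null));   // full-table scan
        }
        // Otherwise build one partition per key, as before.
        return partitionKeys.stream()
                .map(key -> new Partition(key, key))
                .toList();
    }
}
```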
