
Performance guidance for production applications - Database

Updated on September 10, 2021

This article collects recommendations from Pegasystems Global Customer Support for improving the performance of database operations and settings for your Process Commander application in a production environment.

Because individual situations vary, you should conduct performance testing before and after you implement any of the suggestions in this article.

Vendor-Independent

Database Maintenance

  • Perform daily statistics gathering on the database. If self-tuning can be automated, enable the feature. A sketch of a nightly statistics job appears after this list.
  • Correctly size the initial database and its growth rate to prevent a slow, fragmented database. For general size recommendations, see the “Creating a PRPC Database” section in the install guide appropriate to your application server. The install guides are found on the PDN under the Guides section. For more detailed sizing guidance specific to your application, request a Resource Sizing Estimate from your Pegasystems representative.
  • Reexamine the size and growth rate weekly to keep them well matched to production needs.
  • Co-locate the database and application server, preferably in the same data center or local network. Ensure that the database and application servers are in the same time zone, especially for upgrades.
  • Ensure that the connection pool is properly sized. Start with a minimum pool size of 10 connections, and adjust based on the number of users per JVM.
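
As a minimal sketch of the daily statistics-gathering recommendation above, assuming an Oracle database and a schema named PEGADATA (the schema name is an assumption; substitute your own), a nightly job might run:

    BEGIN
      -- Gather optimizer statistics for the schema, including its indexes.
      DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'PEGADATA', cascade => TRUE);
    END;
    /

Other databases provide equivalent facilities (for example, RUNSTATS on DB2 and UPDATE STATISTICS on SQL Server); ask your DBA for the vendor-specific command and scheduling.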

Tables

  • Use dbTrace to identify commonly used and resource-intensive queries, and give this information to your DBA. Adding indexes based on more than one column (composite indexes) may improve the performance of these queries; composite indexes are generally more efficient than single-column indexes.
  • Avoid creating an index for a single column. Instead, examine the most common and the most time-consuming queries and create multiple-column indexes for them (see the composite-index sketch after this list).
  • Rebuild indexes regularly. Since index rebuilding needs vary by application and by database table within each application, analyze the indexes monthly to determine how often your application requires index rebuilds.
  • Avoid using the AS IS tag in Connect SQL rules; values passed with AS IS are inserted into the SQL statement literally, so the query cannot take advantage of bind variables.
  • Create Declare Indexes for embedded properties if they need to be exposed for lookup and reporting. For details see:
    How to create declarative indexes for embedded properties
  • The following tables are small and volatile, and thus vulnerable to fragmentation. Defragment them regularly by re-writing the table and rebuilding the indexes, or cache them in a separate database buffer pool. Share the following list with your DBA for tuning, and provide your DBA with a list of any additional custom tables likely to be candidates for partitioning:
    • pr_sys_locks
    • pr_sys_updatescache
    • pr_sys_context
    • pr_page_store
    • pc_data_unique_id
  • The following tables are likely to grow over time as work items are created and their history is saved. These tables may be candidates for their own table spaces in unique buffer pools. Share the following list with your DBA for tuning, and provide your DBA with a list of any additional custom tables likely to grow rapidly:
    • pc_assign_worklist
    • pc_assign_workbasket
      These two tables hold open assignments and have high turnover. Ordinarily, assignments that are created are eventually completed, so the row count should remain stable or grow with overall workload. If these tables grow disproportionately, some assignments may be neglected, or "orphaned" - an application issue.
      If one workbasket grows to contain a very large number of assignments, consider a redesign that splits it into two or more workbaskets.
    • pc_history_work
      In production, each application should have its own history table. Use this table as a template; copy and customize it for each application.
    • pc_work
      In production, each application should have its own work table. Use this table as a template; copy and customize it for each application.
    • pr_perf_stats
      This table grows to a particular size and then stops growing because an agent trims it, so it never contains more than 30 days of data. If this table grows continuously without bound, research the issue.
    • pr_other
      Process Commander places items in this table as a last resort, only if there is no other designated place for them in the database. Usually, this indicates an error - a missing database table record. This table should be empty in a production system.
  • Split storage for work item data, work indices, and work item BLOBs. If an application has multiple work types, the individual work types should have separate work and history tables. This addresses issues of table size as well as which columns to expose and index, as different work types have different database columns. For database-specific instructions on determining the utility of this adjustment and instructions for performing the table set-up, see the following:
    PDN article: Database LOB Sizing and Performance Optimization
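
The composite-index guidance earlier in this list can be illustrated with a short sketch. It assumes a report that frequently filters the pc_work table on work status and creation date; the column names shown (pystatuswork, pxcreatedatetime) are assumptions, so confirm the exposed columns in your schema with your DBA:

    -- Hypothetical composite index supporting a query that filters on both columns;
    -- compare the explain plan before and after creating it.
    CREATE INDEX ix_pc_work_status_created
        ON pc_work (pystatuswork, pxcreatedatetime);

Because the database can use the leading column of a composite index on its own, an index like this can also serve queries that filter only on pystatuswork, which is one reason composite indexes are generally preferred over several single-column indexes.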

Queries

  • Wherever possible, construct queries in ways that do not require full table scans. Any query which requires a full table scan is resource intensive, and can seriously impact performance when regularly used.
  • Review queries that use leading wildcards, as these require full table scans. The percent sign wildcard matches zero or more characters; a LIKE pattern that begins with it cannot use an index.
    Example: SELECT * FROM table_name WHERE field_name LIKE '%example%'
  • Review queries that do not have WHERE clauses, as these require full table scans.
  • Review queries for search conditions which cannot use an index and therefore perform full table scans:
    • != (not equal to)
    • !> (not greater than)
    • NOT EXISTS
    • NOT IN
    • NOT LIKE
    • OR
  • Review queries to identify and correct any which are missing a needed index. Examine the properties in SQL queries which exceed the database operation threshold, triggering a PEGA0005 alert. For these queries, consult with your DBA to determine whether properties in the WHERE clause are indexed.
  • Use each query's explain plan to identify performance issues and tune the query accordingly.
  • Identify database queries that could make better use of indexes or that need functional indexes. Functional indexes, unlike standard indexes, are not created against a column in a table, but against a function applied to a column. If a query currently uses a functional index in its WHERE clause, consider instead a design that eliminates the need for one. For example, if the query tests against upper(last_name_string), consider storing two columns in the database: the mixed-case original entry and an uppercase version of the same entry (see the sketch after this list). Additionally, where possible, apply the function to the string before it is stored on the clipboard.
  • Use bind variables to reduce the time required to construct and execute an SQL query against the database when using Rule-Connect-SQL rules:
    External link: Bind variables - The key to application performance
    See also the following PDN guide on writing efficient SQL, specifically the section on Bind Variables.
    PDN article: Writing SQL
  • Work history table queries: As Process Commander systems are used over time, work items are created and their history is recorded, so queries against the history table take correspondingly longer. In production, reduce the PEGA0005 alert threshold for database operation time from 500 milliseconds to 200 milliseconds, and tune queries that frequently trigger this alert.
    Understanding the PEGA0005 alert - Query time exceeds limit
    How to change prconfig.xml file settings
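
The two-column alternative to a functional index described above can be sketched as follows; the table and column names (customer_data, last_name, last_name_upper) are hypothetical, and the syntax shown is Oracle:

    -- Store both the mixed-case original and an uppercase copy, and index the copy.
    ALTER TABLE customer_data ADD (last_name_upper VARCHAR2(64));
    UPDATE customer_data SET last_name_upper = UPPER(last_name);
    CREATE INDEX ix_customer_lastname_upper ON customer_data (last_name_upper);

    -- The query can then use a plain index and a bind variable instead of applying
    -- UPPER() in the WHERE clause at run time:
    SELECT customer_id FROM customer_data WHERE last_name_upper = :search_name;

    -- If the design cannot be changed, an Oracle functional index is the fallback:
    CREATE INDEX ix_customer_lastname_fn ON customer_data (UPPER(last_name));

Keeping the uppercase value in its own indexed column means the application must populate that column whenever last_name changes, which is one reason to apply the function before the value is stored on the clipboard.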

Oracle

  • Ensure that the number of connection pool processes matches the application. The default is 200 processes, which the DBA may increase if necessary.
  • Process Commander is a logging-intensive application. Minimize the impact of logging by writing log files to a disk that is separate from the disk(s) containing database files. To size the database log files appropriately, tune the number and size of the log files so that log switches occur no more often than every 15 to 20 minutes.
  • Use Oracle AWR reports to identify the top twenty longest running database queries for tuning.
  • Work with your DBA to compare the average BLOB size to the database block size. If the average BLOB size is larger than 4096 bytes, disable storage in the row and use the CHUNK parameter to allocate contiguous blocks for the data (a sketch follows this list).
  • Consider setting the parameter optimizer_index_cost_adj to a value between 20 and 25 (see the sketch after this list).
  • Confirm that System Global Area (SGA) is large enough. At minimum, it should be greater than 1GB.
  • Alter SQL queries which use functions in the WHERE clause to use functional indexes instead. For further details on creating and using Oracle functional indexes, refer to the following links:
    External link: Oracle® SQL Developer User's Guide, Section 5.26, Create/Edit Index
  • Note the following issue if you are using Oracle 10.2.0.3 with JDBC driver 10.2.0.3:
    External link: Troubleshooting: ORA-01460 and ORA-01461 errors for database operations with Oracle 10.2.0.3
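
The parameter and LOB storage suggestions in this list can be sketched as follows. The statements assume Oracle, DBA privileges, and an spfile; pzpvstream is assumed to be the BLOB (Storage Stream) column of the table being tuned, and any change should be reviewed with your DBA and validated in performance testing:

    -- Inspect the current optimizer and SGA settings.
    SELECT name, value
      FROM v$parameter
     WHERE name IN ('optimizer_index_cost_adj', 'sga_target');

    -- Example of applying the suggested optimizer setting.
    ALTER SYSTEM SET optimizer_index_cost_adj = 25 SCOPE = BOTH;

    -- Example of moving a table's BLOB column out of the row with a larger chunk size.
    ALTER TABLE pc_work MOVE LOB (pzpvstream)
      STORE AS (DISABLE STORAGE IN ROW CHUNK 8192);

Note that ALTER TABLE ... MOVE rebuilds the segment and leaves the table's indexes unusable, so plan the operation for a maintenance window and rebuild the indexes afterward.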

MS SQL Server and DB2 for Linux, Unix, and Windows (UDB)

DB2 for z/OS
