
This content has been archived and is no longer being updated. Links may no longer function, but the content may still be relevant to earlier versions of the product.

Importing adaptive models to another environment

Updated on April 5, 2022

This content applies only to On-premises and Client-managed cloud environments.

Note: In Pega Cloud environments, you can import adaptive models by using the data migration pipeline.

You can import trained Adaptive Decision Manager (ADM) models from your production environment to a simulation environment. Synchronizing both environments is useful when you want to run scenarios in your simulation environment and apply the most up-to-date models. Adaptive models in the production environment are constantly processing data and self-learning. By importing these models to your simulation environment, you ensure that the scenarios that you run yield relevant and accurate results.

Note: By following this procedure, you replace existing ADM data in the target environment with the ADM data that you import from the source environment.
Before you begin:

Ensure that the target environment meets the following requirements:

  1. The Cassandra cluster is not set up as Active-Active across multiple data centers. The following procedure does not apply to Active-Active setups.
  2. The adaptive rule versions are in sync with the versions in the source environment.

  1. In the source environment, export the pyADMFactory data set.
    This data set is a database table that contains all the adaptive model instances in your system. For more information, see Exporting data into a data set.
  2. Log on to the target environment and perform the remaining steps there.
  3. On the Services landing page, on the Adaptive Decision Manager tab, decommission all ADM nodes by selecting Decommission from the Action menu for each node.
    [Figure: Decommissioning a node — the Decommission action is selected for an ADM node.]
  4. Open the pyADMFactory data set, and then from the Actions menu, select Run.
  5. In the Run Data Set dialog box, from the Operation list, select Truncate.
    [Figure: Running a data set — the Truncate operation is selected in the Run Data Set dialog box.]
  6. If the target system has any models that report data in the following tables, manually truncate these tables to prevent inaccurate reports:
    • pr_data_dm_admmart_mdl_fact
    • pr_data_dm_admmart_pred
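    For example, if you connect directly to the Pega database, the manual truncation can be sketched as follows (a minimal sketch: whether you must qualify the table names with a schema, such as the data schema name, depends on your installation, and your database user needs truncate privileges):

    ```sql
    -- Clear the ADM model and predictor reporting tables on the target system
    -- so that stale rows do not skew reports after the import.
    -- Schema qualification (if any) depends on your installation.
    TRUNCATE TABLE pr_data_dm_admmart_mdl_fact;
    TRUNCATE TABLE pr_data_dm_admmart_pred;
    ```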
  7. Connect to a Cassandra database on a Decision Data Store node.
  8. Remove any ADM (response) data that may cause a conflict with the source data by using the following CQL commands:
    • DROP KEYSPACE adm_commitlog;
    • DROP KEYSPACE IF EXISTS null_adm;
    • DROP KEYSPACE IF EXISTS adm;
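    For example, steps 7 and 8 can be performed in a single cqlsh session on a Decision Data Store node (a sketch, not a definitive procedure: the host is a placeholder, 9042 is the default Cassandra port, and your cluster may require credentials via the cqlsh -u and -p options):

    ```sql
    -- Run in cqlsh on a Decision Data Store node, for example: cqlsh <dds-node-host> 9042
    -- Remove ADM response data that could conflict with the imported source data.
    DROP KEYSPACE adm_commitlog;
    DROP KEYSPACE IF EXISTS null_adm;  -- "if present" in this procedure
    DROP KEYSPACE IF EXISTS adm;       -- "if present" in this procedure
    ```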
  9. Import the pyADMFactory data set from the source environment.
    For more information, see Importing data into a data set.
  10. Recommission all ADM nodes.
    The first node that you recommission creates scoring models from the imported factory data. For more information, see Connecting to an external Cassandra database through the Decision Data Store service.
What to do next: When the status of all ADM nodes is NORMAL, access the Model management landing page and verify that the model data matches the source data.