Configuring a machine learning service connection for topic detection models by using a REST API

To run your custom models in Prediction Studio through an API, configure the connection between Pega Platform and your web server.

Before you begin: In Dev Studio, create an authentication profile to which you map the new service configuration. For more information, see Creating an authentication profile.
  1. In the navigation pane of Prediction Studio, click Settings > Machine learning services.
  2. In the header of the Machine learning services area, click New.
  3. In the New machine learning service dialog box, in the Service type list, select Custom model.
  4. Enter the service name, and then select the authentication profile that you want to map to the new service.
  5. Click Connection type, and then select the type of API connection that you want to use:
    Choices and actions:
    - Service discovery (Open API):
      1. In the Service discovery endpoint field, enter the HTTP address of your web service discovery endpoint. To inspect what the discovery endpoint returns, see the discovery sketch after this procedure.
      2. In the Prediction service list, select the endpoint to which you want to connect.
    - Standalone API:
      1. In the Prediction API URL field, enter the HTTP address of your prediction endpoint.
      2. Select the GET or POST request method.
  6. In the Request parameters section, map the parameters obtained from the endpoint to field types in Prediction Studio.

    The following field types are available for mapping:

    - Model identifier (mandatory): The parameter that specifies the name of the model to run when the prediction service hosts multiple models.
    - Text (mandatory): The parameter through which the text passes to the model for analysis.
    - Default (optional): Map a default parameter to define a constant value that applies to all the models that you create in Prediction Studio with the same machine learning service.
    - Prompt (optional): Map a prompt parameter to define a value that is different for each topic model that you create in Prediction Studio with the same machine learning service. When you create a model, provide a value for the prompt parameter that is specific to that model.
    For example: For a topic detection model, you can map the parameters as follows (the request sketch after this procedure shows the JSON body that this mapping implies):

    - data#modelName: Model identifier
    - data#text: Text
    - data#modelType: Default
    - data#language: Prompt
  7. Click Back.
  8. In the Define output mapping field, select a data transform that maps the JSON output that the API returns back to the topic detection model. For an example of an assumed response shape, see the endpoint sketch after this procedure.
    If you deployed a custom model by using sample containers, select DTInhouseMLService. For more information, see Configuring sample containers to use Python models for topic detection.
    Tip: To configure the selected data transform or to create a new one, click the Open icon. For more information, see Configuring a data transform for a JSON output mapping.
  9. Save the new service connection configuration by clicking Submit.
  10. Optional: To test the service connection, select the More icon, and then select Test connection.
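
If you choose the Service discovery (Open API) connection type in step 5, the discovery endpoint serves an OpenAPI document from which Prediction Studio populates the Prediction service list. The following is a minimal sketch of how to inspect that document outside Pega Platform; the discovery URL is a placeholder for your own endpoint:

```python
import requests

# Placeholder discovery endpoint; replace with your own service URL.
DISCOVERY_URL = "https://ml.example.com/openapi.json"

# Fetch the OpenAPI document that Prediction Studio reads during discovery.
spec = requests.get(DISCOVERY_URL, timeout=10).json()

# Print the paths and HTTP methods that the document declares; these are the
# candidates that appear in the Prediction service list.
for path, operations in spec.get("paths", {}).items():
    print(path, sorted(operations))
```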
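
The request sketch below illustrates the example mapping from step 6, assuming that the `#` separator in a parameter name denotes nesting in the JSON request body. The endpoint URL, model name, and field values are placeholders, not part of any Pega API:

```python
import requests

# Placeholder prediction endpoint; replace with your own service URL.
PREDICTION_API_URL = "https://ml.example.com/v1/predict"

# Body implied by the step 6 mapping: data#modelName -> Model identifier,
# data#text -> Text, data#modelType -> Default, data#language -> Prompt.
payload = {
    "data": {
        "modelName": "TopicModelEN",  # selects a model on a multi-model service
        "text": "I want to cancel my subscription.",  # text to analyze
        "modelType": "topic",  # constant value shared by all models (Default)
        "language": "en",  # value that differs per model (Prompt)
    }
}

# Sending the request manually is also a quick way to verify the endpoint
# before you click Test connection in step 10.
response = requests.post(PREDICTION_API_URL, json=payload, timeout=10)
response.raise_for_status()
print(response.json())
```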
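
The data transform that you select in step 8 must match the JSON that your service actually returns. The endpoint sketch below shows one assumed response shape, a list of topics with confidence scores; it is an illustration only, not the format that DTInhouseMLService expects:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/v1/predict", methods=["POST"])
def predict():
    body = request.get_json(force=True)
    text = body["data"]["text"]
    # Placeholder inference: a real service would run the model named in
    # body["data"]["modelName"] against the text.
    topics = [{"topic": "Cancellation", "confidence": 0.92}] if text else []
    # Assumed output shape; configure your output-mapping data transform to
    # parse whatever shape your service returns.
    return jsonify({"predictions": topics})

if __name__ == "__main__":
    app.run(port=8080)
```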
What to do next: Run your custom model through the new service connection.

For more information, see Creating a text categorization model to run topic detection models through an API.