Creating a Kafka data set

You can create an instance of a Kafka data set in Pega Platform to connect to a topic in the Kafka cluster. You can also create a new topic in the Kafka cluster from Pega Platform, and then connect to that topic.

Topics are categories where the Kafka cluster stores streams of records. Each record in a topic consists of a key, value, and a time stamp. Use a Kafka data set as a source of events (for example, customer calls or messages) that are used as input for Event Strategy rules that process data in real time.
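Conceptually, each record in a topic is a key/value/timestamp triple. A minimal Python sketch of that shape, using hypothetical names (the Record class and the eventType field are illustrations, not a Pega or Kafka API):

```python
import time
from dataclasses import dataclass
from typing import Optional

# Illustrative only: a Kafka record pairs an optional key with a value
# and a timestamp. Nothing here is a real Pega or Kafka class.
@dataclass
class Record:
    key: Optional[str]
    value: dict
    timestamp_ms: int

# A customer-call event of the kind an Event Strategy rule might consume:
event = Record(
    key="customer-42",
    value={"eventType": "call", "durationSec": 180},
    timestamp_ms=int(time.time() * 1000),
)
```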

You can connect to an Apache Kafka cluster version 0.10.0.1 or later.

Before you begin: 

If you want to use Schema Registry with your Kafka data set, download the Schema Registry component provided by Pega, and install and configure the component by following the instructions that are available in the Pega GitHub repository.

Note: This Schema Registry component is supported by Pega Platform 8.2.x and later.
  1. In Dev Studio, click Create > Data Model > Data Set.
  2. Provide the data set label and identifier.
  3. From the Type list, select Kafka.
  4. Provide the ruleset, Applies to class, and ruleset version of the data set.
  5. Click Create and open.
  6. In the Connection section, in the Kafka configuration instance field, select an existing Kafka cluster record (Data-Admin-Kafka class) or create a new one (for example, when no records are present) by clicking the Open icon.
  7. Check whether Pega Platform is connected to the Kafka cluster by clicking Test connectivity.
  8. In the Topic section, perform one of the following actions:
    • Select the Create new check box and enter the topic name to define a new topic in the Kafka cluster.
    • Select the Select from list check box to connect to an existing topic in the Kafka cluster.
    Note: By default, the name of the topic is the same as the name of the data set. If you enter a new topic name, that topic is created in the Kafka cluster only if the ability to automatically create topics is enabled on that Kafka cluster.
  9. Optional: In the Partition Key(s) section, define the data set partitioning by performing the following actions:
    1. Click Add key.
    2. In the Key field, press the Down Arrow key to select a property to be used by the Kafka data set as a partitioning key.
      Note: By default, the available properties to be used as keys correspond to the properties of the Applies To class of the Kafka data set.
    By configuring partitioning, you ensure that related records are sent to the same partition. If no partition keys are set, the Kafka data set assigns records to partitions randomly.
  10. Optional: If you want to use a record format other than JSON, in the Record format section, select Custom and configure the record settings:
    Note:

    If you use Schema Registry with your Kafka data set, configure these settings according to the instructions that are provided with the Schema Registry component in the Pega GitHub repository.

    For information about writing and configuring custom Kafka serialization, see Kafka custom serializer/deserializer implementation.

    1. In the Serialization implementation field, enter a fully qualified Java class name for your PegaSerde implementation.
      For example: com.pega.dsm.kafka.CsvPegaSerde
    2. Optional: Expand the Additional configuration section and define additional configuration options for the implementation class by clicking Add key value pair and entering properties in the Key and Value fields.
  11. Click Save.
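The partition keys configured in step 9 determine how records map to partitions: Kafka hashes the key, so records that share a key always land in the same partition. A minimal sketch of the idea, using CRC32 in place of Kafka's actual murmur2 partitioner to keep the example self-contained:

```python
import zlib

# Illustrative sketch, not Pega's or Kafka's implementation: a configured
# partition key is hashed, and the hash modulo the partition count picks
# the partition. Records without a key are spread across partitions.
def partition_for(key: str, num_partitions: int) -> int:
    return zlib.crc32(key.encode("utf-8")) % num_partitions

# The same customer ID always maps to the same partition:
assert partition_for("customer-42", 6) == partition_for("customer-42", 6)
```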
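The custom serialization in step 10 is implemented as a Java class (for example, com.pega.dsm.kafka.CsvPegaSerde) that fulfills the PegaSerde contract: convert a record to bytes on write and bytes back to a record on read. The Python sketch below is only a hypothetical analogue of that contract, not the Pega API:

```python
# Hypothetical analogue of a PegaSerde-style CSV serializer pair; the real
# implementation is a Java class. The "columns" option stands in for the
# kind of setting you might pass via the Additional configuration key-value
# pairs in step 10.
class CsvSerde:
    def __init__(self, columns):
        self.columns = columns  # fixed column order for the CSV payload

    def serialize(self, record: dict) -> bytes:
        return ",".join(str(record[c]) for c in self.columns).encode("utf-8")

    def deserialize(self, payload: bytes) -> dict:
        values = payload.decode("utf-8").split(",")
        return dict(zip(self.columns, values))

# Round trip: a record survives serialize -> deserialize unchanged.
serde = CsvSerde(["customerId", "eventType"])
raw = serde.serialize({"customerId": "42", "eventType": "call"})
assert serde.deserialize(raw) == {"customerId": "42", "eventType": "call"}
```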