File storage

Updated on March 11, 2021

Configure local and remote file storage to use as data sources for your decision strategies.

To read, write, and apply data stored in files, create HDFS and File data sets.

  • Creating an HDFS data set record

    You must configure each instance of the HDFS data set rule before it can read data from and save it to an external Apache Hadoop Distributed File System (HDFS).

  • Creating a File data set record for embedded files

    To read data from an uploaded file in CSV or JSON format, you must configure an instance of the File data set rule. A minimal sample file is shown after this list.

  • Creating a File data set record for files on repositories

    To enable a parallel load from multiple CSV or JSON files located in remote repositories or on the local file system, create a File data set that references a repository. This feature enables remote files to function as data sources for Pega Platform data sets.

  • Requirements for custom stream processing in File data sets

    Standard File data sets support reading and writing compressed .zip and .gzip files. To extend these capabilities with encryption, decryption, and other compression methods for files in repositories, implement custom stream processing as Java classes on the Pega Platform server classpath. A sketch of such a class is shown after this list.
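
For embedded File data sets, the uploaded file supplies the records that the data set exposes. As an illustration only, the sample below assumes the common configuration in which the first row supplies column names that you map to properties in the target data model; the column names here are placeholders, not fields required by Pega Platform.

    CustomerID,Name,Country
    C-1001,Ann Marsh,US
    C-1002,Raj Patel,IN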
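
The linked article defines the exact Java interfaces and method signatures that custom stream processing classes must implement; the sketch below is not that contract. It only illustrates the general shape of such a class, using hypothetical method names (processInputStream, processOutputStream) and standard JDK cipher streams to add AES decryption on read and encryption on write.

    import javax.crypto.Cipher;
    import javax.crypto.CipherInputStream;
    import javax.crypto.CipherOutputStream;
    import javax.crypto.spec.IvParameterSpec;
    import javax.crypto.spec.SecretKeySpec;
    import java.io.InputStream;
    import java.io.OutputStream;

    public class AesStreamProcessing {

        // Demo key and IV only (all zeros); production code must load
        // these from a secure store, never hard-code them.
        private static final byte[] KEY = new byte[16];
        private static final byte[] IV  = new byte[16];

        private static Cipher newCipher(int mode) throws Exception {
            Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
            cipher.init(mode, new SecretKeySpec(KEY, "AES"), new IvParameterSpec(IV));
            return cipher;
        }

        // Hypothetical read hook: wrap the raw repository stream so that
        // the data set's CSV/JSON parsing sees decrypted bytes.
        public InputStream processInputStream(InputStream raw) throws Exception {
            return new CipherInputStream(raw, newCipher(Cipher.DECRYPT_MODE));
        }

        // Hypothetical write hook: encrypt bytes on their way out to the
        // repository file.
        public OutputStream processOutputStream(OutputStream raw) throws Exception {
            return new CipherOutputStream(raw, newCipher(Cipher.ENCRYPT_MODE));
        }
    }

Because both hooks return a wrapped java.io stream, the same pattern extends to other transformations, such as an alternative compression codec, by swapping the cipher streams for the corresponding stream wrappers.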
