During file ingestion, the file listener picks up the file and the file's status becomes In Progress. If the ingest file is not corrupted, the X12 de-batch process is initiated.
The intent of the X12 de-batch process is to split the batched claims in the ingest file into individual N837 objects, one per claim.
If any errors or exceptions are found during this conversion, the file status is updated based on the DSS setting:
- If the file has exceptions and the DSS setting is Ignore, the file status is updated to Processed with errors.
- If the file has exceptions and the DSS setting is Fail, the file status is updated to Partially processed with errors.
- If all the claims in the file have failed, the file status is Failed.
- If all the claims in the file are processed successfully, the file status is Success.
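The decision rules above can be sketched as a small function. This is an illustrative sketch only: the DSS setting values and status labels come from this document, while the function name, parameters, and the precedence of the all-failed/all-succeeded checks over the DSS branches are assumptions.

```python
def file_status(total_claims: int, failed_claims: int, dss_setting: str) -> str:
    """Return the ingest file's status after X12 de-batch.

    Assumes the all-succeeded and all-failed outcomes take precedence,
    and the DSS setting only decides the label for a mixed outcome.
    """
    if failed_claims == 0:
        return "Success"
    if failed_claims == total_claims:
        return "Failed"
    # Mixed outcome: some claims failed, some succeeded.
    if dss_setting == "Ignore":
        return "Processed with errors"
    return "Partially processed with errors"
```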
The X12 de-batch process creates an individual N837 object for each claim in the file.
Each new N837 being created is compared against existing N837 records on a set of unique keys (EDI control number, group ID, transaction ID, claim type, and claim ID) to check for duplicates.
- If a duplicate instance already exists for a new N837 being created, the status of that new N837 is updated to -2, indicating that it is a duplicate.
- If a record that was successfully processed, or is yet to be processed (X12ClaimStatus = 0 or 1), already exists in PegaX12-Data-N837, and the same file containing the same record is dropped again, the system creates a record with X12ClaimStatus = -2. Records with X12ClaimStatus = -2 are ignored across all processing.
- If the N837 is created successfully without any exceptions, the status of the record is updated to 0.
- If the work object is created successfully from the N837, the status of the record is updated to 1.
- If any exceptions are found during N837-to-work-object creation, the status of the record is updated to -1.
- Records with a -2 status are skipped, and the files containing them are updated to reflect the skipped records.
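The duplicate check described above can be sketched as follows. The five keys and the status codes (0 = created and pending, 1 = work object created, -2 = duplicate) come from this document; the in-memory dictionary and function name are illustrative stand-ins for the actual PegaX12-Data-N837 lookup.

```python
# Illustrative stand-in for the PegaX12-Data-N837 store:
# composite key -> X12ClaimStatus of the existing record.
SEEN: dict[tuple, int] = {}

def debatch_claim(edi_ctrl_no, group_id, txn_id, claim_type, claim_id) -> int:
    """Create an N837 record, returning its X12ClaimStatus."""
    key = (edi_ctrl_no, group_id, txn_id, claim_type, claim_id)
    existing = SEEN.get(key)
    # A record already processed (1) or awaiting processing (0)
    # makes the incoming record a duplicate.
    if existing in (0, 1):
        return -2  # duplicate; ignored by all downstream processing
    SEEN[key] = 0  # N837 created without exceptions, awaiting work object creation
    return 0
```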
Because two different threads can process these files concurrently, the query against the 5 keys for each record can return zero results in both threads: neither thread's insert is yet visible to the other when the existence check runs, so both threads conclude the record is new. This results in the creation of duplicate data.
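One common way to close this check-then-insert race is to enforce the five-key uniqueness in the database itself, so that when two threads insert the same record concurrently, one insert fails instead of both succeeding. A minimal sketch of that approach using SQLite; the table and column names are illustrative, not the actual Pega schema, and this is a suggested mitigation rather than the product's documented behavior.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE n837 (
        edi_ctrl_no TEXT, group_id TEXT, txn_id TEXT,
        claim_type TEXT, claim_id TEXT, x12_claim_status INTEGER,
        UNIQUE (edi_ctrl_no, group_id, txn_id, claim_type, claim_id)
    )
""")

def insert_claim(keys: tuple) -> int:
    """Insert a new N837 row; the UNIQUE constraint arbitrates races."""
    try:
        conn.execute("INSERT INTO n837 VALUES (?, ?, ?, ?, ?, 0)", keys)
        conn.commit()
        return 0   # first writer wins
    except sqlite3.IntegrityError:
        return -2  # concurrent or repeated insert is rejected as a duplicate
```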