Batch Processing

Batch processing runs a single workflow against many input rows, one run per row.

CSV schema design

Use one row per case and one column per variable. Example headers:

  • case_id
  • member_id
  • date_of_birth
  • payer_name
  • service_date

Keep values normalized before upload: consistent date formats, trimmed whitespace, and no empty header cells.
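As a minimal sketch of that normalization pass, the snippet below checks the example headers and trims whitespace before upload. The column names are the ones listed above; the function name is illustrative.

```python
import csv
import io

# Example headers from the schema above; adjust to your workflow's variables.
EXPECTED = ["case_id", "member_id", "date_of_birth", "payer_name", "service_date"]

def normalize_rows(raw_csv: str) -> list[dict]:
    """Parse CSV text, verify expected columns, and trim each value."""
    reader = csv.DictReader(io.StringIO(raw_csv))
    missing = [c for c in EXPECTED if c not in (reader.fieldnames or [])]
    if missing:
        raise ValueError(f"missing columns: {missing}")
    # Strip stray whitespace so values compare cleanly downstream.
    return [{k: (v or "").strip() for k, v in row.items()} for row in reader]
```

A date-format check (for example, enforcing ISO 8601) would slot naturally into the same loop.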

Batch workflow

  1. Upload CSV.
  2. Map each column to a workflow variable.
  3. Validate mappings on a sample row.
  4. Launch canary batch.
  5. Expand to full volume when success thresholds are met.
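Steps 2 and 3 can be sketched as a mapping check against one sample row. The `mapping` shape (CSV header to workflow variable name) and the function are assumptions for illustration, not a product API.

```python
def validate_mapping(mapping: dict, sample_row: dict) -> list[str]:
    """Apply a column-to-variable mapping to one sample row;
    return a list of problems (empty means the mapping validates)."""
    problems = []
    for column, variable in mapping.items():
        if column not in sample_row:
            problems.append(f"column '{column}' not found in CSV")
        elif not sample_row[column]:
            problems.append(f"column '{column}' is empty for variable '{variable}'")
    return problems
```

Running this before the canary batch catches missing or misspelled columns while the cost of a fix is still one row.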

Operational guardrails

  • Start with 5 to 20 rows to validate field mapping.
  • Monitor completion and error rates after the first 50 runs.
  • Pause the rollout if the error taxonomy shifts unexpectedly.
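The canary gate implied by these guardrails reduces to a threshold check. The 50-run minimum matches the guideline above; the 10 percent error ceiling is an illustrative assumption, not a product default.

```python
def should_expand(completed: int, errored: int,
                  min_runs: int = 50, max_error_rate: float = 0.10) -> bool:
    """Expand to full volume only after enough runs with a low error rate."""
    total = completed + errored
    if total < min_runs:
        return False  # not enough evidence yet; keep the canary running
    return errored / total <= max_error_rate
```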

Pilot metrics to monitor

  • Completion rate (target commonly >= 90 percent)
  • Error category distribution
  • Mean and p95 runtime
  • Support minutes per completed case
  • Cost per workflow run
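The first three metrics above can be computed directly from exported run records. The record fields (`status`, `runtime_s`) are assumptions about the export schema; p95 here uses the nearest-rank method.

```python
import math
import statistics

def pilot_metrics(runs: list[dict]) -> dict:
    """Compute completion rate plus mean and p95 runtime over completed runs."""
    done = [r for r in runs if r["status"] == "completed"]
    runtimes = sorted(r["runtime_s"] for r in done)
    p95_rank = max(1, math.ceil(0.95 * len(runtimes)))  # nearest-rank p95
    return {
        "completion_rate": len(done) / len(runs),
        "mean_runtime_s": statistics.mean(runtimes),
        "p95_runtime_s": runtimes[p95_rank - 1],
    }
```

Support minutes and cost per run usually come from outside the jobs table, so they are left to a separate join.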

Export and reconciliation

Use pilot export endpoints and jobs tables to reconcile:

  • Total submitted rows
  • Terminal run counts
  • Flagged vs completed outcomes
  • Missing outputs requiring reprocessing
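A reconciliation over those four counts might look like the sketch below. The field names (`case_id`, `status`) and the set of terminal statuses are assumptions about the export schema.

```python
def reconcile(submitted_ids: set, runs: list[dict]) -> dict:
    """Compare submitted row IDs against exported run records."""
    terminal = {"completed", "flagged", "failed"}  # assumed terminal statuses
    seen = {r["case_id"] for r in runs if r["status"] in terminal}
    return {
        "submitted": len(submitted_ids),
        "terminal": len(seen),
        "flagged": sum(r["status"] == "flagged" for r in runs),
        "completed": sum(r["status"] == "completed" for r in runs),
        "missing": sorted(submitted_ids - seen),  # rows needing reprocessing
    }
```

The `missing` list feeds directly into the recovery strategy below: those are the rows to export and re-run.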

Recovery strategy

If a batch fails partially:

  • Export failed rows.
  • Fix root cause in workflow or mapping.
  • Re-run only failed rows using the same batch ID convention.
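The recovery steps above can be sketched as a helper that extracts only the failed rows as CSV text and derives a retry batch ID from the original. The `status` field and the `-retryN` suffix convention are assumptions for illustration.

```python
import csv
import io

def failed_rows_csv(rows: list[dict], batch_id: str, attempt: int) -> tuple[str, str]:
    """Return (retry_batch_id, csv_text) containing only the failed rows."""
    retry_id = f"{batch_id}-retry{attempt}"  # keeps re-runs traceable to the original
    failed = [r for r in rows if r.get("status") == "failed"]
    buf = io.StringIO()
    if failed:
        writer = csv.DictWriter(buf, fieldnames=list(failed[0]))
        writer.writeheader()
        writer.writerows(failed)
    return retry_id, buf.getvalue()
```

Fix the root cause first; re-uploading the same failing rows under a new ID only multiplies the error count.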