Batch Processing
Batch processing executes one workflow across many input rows.
CSV schema design
Use one row per case and one column per variable. Example headers:
case_id,member_id,date_of_birth,payer_name,service_date
Normalize values before upload: consistent date formats, trimmed whitespace, and stable identifiers.
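A pre-upload validation pass catches most mapping failures early. The sketch below checks the headers and date columns from the example schema; the header names and ISO-8601 date requirement are assumptions to adapt to your workflow.

```python
import csv
import io
import re

# Headers from the example schema above (illustrative, not a fixed contract).
EXPECTED_HEADERS = ["case_id", "member_id", "date_of_birth", "payer_name", "service_date"]
ISO_DATE = re.compile(r"^\d{4}-\d{2}-\d{2}$")  # assumed normalized date format

def validate_rows(csv_text: str) -> list[str]:
    """Return a list of validation errors; an empty list means the CSV is clean."""
    errors = []
    reader = csv.DictReader(io.StringIO(csv_text))
    if reader.fieldnames != EXPECTED_HEADERS:
        errors.append(f"header mismatch: {reader.fieldnames}")
        return errors
    for i, row in enumerate(reader, start=2):  # row 1 is the header
        if not row["case_id"].strip():
            errors.append(f"row {i}: empty case_id")
        for col in ("date_of_birth", "service_date"):
            if not ISO_DATE.match(row[col]):
                errors.append(f"row {i}: {col} not ISO 8601: {row[col]!r}")
    return errors

sample = (
    "case_id,member_id,date_of_birth,payer_name,service_date\n"
    "C-001,M-123,1980-04-02,Acme Health,2024-06-15\n"
)
print(validate_rows(sample))  # → []
```

Running this on every file before upload turns silent mapping errors into explicit row-level messages.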
Batch workflow
- Upload CSV.
- Map each column to a workflow variable.
- Validate mappings on a sample row.
- Launch canary batch.
- Expand to full volume when success thresholds are met.
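The mapping and canary steps above can be sketched as a small helper. The column-to-variable names here are hypothetical; substitute the variable names your workflow actually defines.

```python
# Hypothetical mapping from CSV columns to workflow variables.
COLUMN_MAP = {
    "case_id": "caseId",
    "member_id": "memberId",
    "date_of_birth": "dob",
    "payer_name": "payer",
    "service_date": "serviceDate",
}

def map_row(row: dict) -> dict:
    """Translate one CSV row into workflow variables, failing fast on gaps."""
    missing = [col for col in COLUMN_MAP if not row.get(col, "").strip()]
    if missing:
        raise ValueError(f"unmapped or empty columns: {missing}")
    return {var: row[col].strip() for col, var in COLUMN_MAP.items()}

def canary_batch(rows: list[dict], size: int = 10) -> list[dict]:
    """Validate the mapping on every row up front, then return the canary slice."""
    mapped = [map_row(r) for r in rows]  # raises before anything launches
    return mapped[:size]
```

Validating every row before launching even the canary means a bad mapping fails the whole step instead of producing a partially processed batch.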
Operational guardrails
- Start with 5 to 20 rows to validate field mapping.
- Monitor completion and error rates after the first 50 runs.
- Pause rollout if error taxonomy shifts unexpectedly.
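One way to make "error taxonomy shifts unexpectedly" operational is to compare the error-category distribution of recent runs against a baseline. The total-variation distance and the 0.2 tolerance below are assumptions, not a prescribed threshold; tune them against your own pilot data.

```python
from collections import Counter

def taxonomy_shift(baseline: Counter, current: Counter) -> float:
    """Total variation distance between two error-category distributions (0..1)."""
    categories = set(baseline) | set(current)
    b_total = sum(baseline.values()) or 1
    c_total = sum(current.values()) or 1
    return 0.5 * sum(
        abs(baseline[c] / b_total - current[c] / c_total) for c in categories
    )

def should_pause(baseline: Counter, current: Counter, tolerance: float = 0.2) -> bool:
    """Flag the rollout for a pause when the error mix drifts past tolerance."""
    return taxonomy_shift(baseline, current) > tolerance
```

A shift of 0.0 means the error mix is unchanged; 1.0 means the categories no longer overlap at all.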
Pilot metrics to monitor
- Completion rate (target commonly >= 90 percent)
- Error category distribution
- Mean and p95 runtime
- Support minutes per completed case
- Cost per workflow run
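The metrics above can be computed from exported run records. The field names (`status`, `runtime_s`, `cost`, `support_min`) are assumptions about the export shape; rename them to match your jobs table.

```python
import statistics

def pilot_metrics(runs: list[dict]) -> dict:
    """Summarize pilot runs: completion rate, runtimes, support load, and cost."""
    completed = [r for r in runs if r["status"] == "completed"]
    runtimes = sorted(r["runtime_s"] for r in runs)
    # Nearest-rank p95 (simple and fine for pilot-sized batches).
    p95_index = max(0, int(len(runtimes) * 0.95) - 1)
    return {
        "completion_rate": len(completed) / len(runs),
        "mean_runtime_s": statistics.mean(runtimes),
        "p95_runtime_s": runtimes[p95_index],
        "support_min_per_completed": (
            sum(r.get("support_min", 0) for r in runs) / max(len(completed), 1)
        ),
        "cost_per_run": sum(r["cost"] for r in runs) / len(runs),
    }
```

Comparing `completion_rate` against the >= 90 percent target after each expansion step gives a concrete go/no-go signal.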
Export and reconciliation
Use pilot export endpoints and jobs tables to reconcile:
- Total submitted rows
- Terminal run counts
- Flagged vs completed outcomes
- Missing outputs requiring reprocessing
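The reconciliation checklist above reduces to a set comparison between submitted row IDs and terminal run records. The statuses (`completed`, `flagged`, `failed`) and the `case_id` key are assumptions about your export format.

```python
def reconcile(submitted_ids: set[str], runs: list[dict]) -> dict:
    """Compare submitted rows against terminal run records from the jobs table."""
    terminal = {
        r["case_id"]: r["status"]
        for r in runs
        if r["status"] in ("completed", "flagged", "failed")  # assumed terminal states
    }
    return {
        "submitted": len(submitted_ids),
        "terminal": len(terminal),
        "completed": sum(1 for s in terminal.values() if s == "completed"),
        "flagged": sum(1 for s in terminal.values() if s == "flagged"),
        "missing_needing_reprocess": sorted(submitted_ids - terminal.keys()),
    }
```

Any ID in `missing_needing_reprocess` never reached a terminal state and should be queued for reprocessing.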
Recovery strategy
If a batch fails partially:
- Export failed rows.
- Fix root cause in workflow or mapping.
- Re-run only failed rows using the same batch ID convention.
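The re-run step can be automated by filtering the original CSV down to failed rows. The `case_id` join key and the `completed` status are assumptions; the retry-naming comment illustrates one possible batch ID convention, not a required one.

```python
import csv
import io

def failed_rows_csv(original_csv: str, runs: list[dict]) -> str:
    """Return a CSV containing only rows whose runs did not complete.

    Upload the result under a name derived from the original batch ID
    (e.g. "batch-042-retry-1" -- a hypothetical convention) so retries
    stay traceable to the source batch.
    """
    failed_ids = {r["case_id"] for r in runs if r["status"] != "completed"}
    reader = csv.DictReader(io.StringIO(original_csv))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row["case_id"] in failed_ids:
            writer.writerow(row)
    return out.getvalue()
```

Because only failed rows are resubmitted, completed cases are never double-processed.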