Jobs API
Jobs represent workflow executions on agents.
Core endpoints
| Endpoint | Purpose |
| --- | --- |
| `POST /api/jobs/launch` | queue one job |
| `POST /api/jobs/batch` | queue many jobs from uploaded mappings |
| `GET /api/job-status` | fetch status and result metadata |
| `POST /api/job-webhook` | ingest worker status callbacks |
| `POST /api/jobs/{id}/claim-results` | ingest claim-level outcome batches (`x-mimic-signature` required) |
| `POST /api/job-logs` | ingest batched worker logs and refresh agent work activity |
| `POST /api/pilot/run-summary` | ingest Dentrix run summaries into pilot batch/claim/stop/ROI tables |
| `GET /api/pilot/roi` | compute ROI snapshot for a date range using pilot claim outcomes plus persisted `run_cost` totals |
| `GET /api/jobs/{id}` | fetch normalized job status/progress |
| `DELETE /api/jobs/{id}` | hard-delete a job and dependent telemetry (authenticated UI sessions only) |
| `POST /api/schedules/execute` | execute due schedules for one organization session/API key context |
| `GET\|POST /api/cron/scheduled-runs` | execute due schedules via cron secret auth |
| `GET\|POST /api/cron/stale-jobs` | mark timed-out runs failed and kill hung agent processes (`x-cron-key` or `Authorization: Bearer`, backed by `CRON_SECRET`) |
Typical status lifecycle
queued -> running -> completed | failed | cancelled
Treat callbacks as at-least-once delivery and keep transitions idempotent.
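Because delivery is at-least-once, a small guard over allowed transitions keeps replayed callbacks harmless. A minimal sketch (the transition map and function names are illustrative, not the service's actual code):

```typescript
// Allowed forward transitions; anything else (including a replayed
// callback repeating the current status) is treated as a no-op.
const ALLOWED: Record<string, string[]> = {
  queued: ["running", "cancelled"],
  running: ["completed", "failed", "cancelled"],
};

// Returns the new status to persist, or null when the callback
// should be ignored (replay or out-of-order delivery).
function applyCallback(current: string, incoming: string): string | null {
  return ALLOWED[current]?.includes(incoming) ? incoming : null;
}
```

Terminal states have no outgoing transitions, so a duplicated webhook arriving after `completed` simply returns `null` instead of corrupting the record.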
Runtime timeout model
- Default timeout is 12 hours per job.
- Per-run override is supported via `max_runtime_ms` in worker config.
- Worker sends periodic processing heartbeats.
- Stale-job watchdogs fail stuck runs, reset agent state, and trigger retry flow.
- Retry metadata (`maxRetries`, `retryDelayMs`) is carried on the job record for requeue flow.
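The timeout and requeue rules above can be sketched as follows. The `RunState` shape and helper names are hypothetical; only the 12-hour default, the `max_runtime_ms` override, and the `maxRetries` counter come from this document:

```typescript
// Default per-job timeout when the worker config gives no override.
const DEFAULT_TIMEOUT_MS = 12 * 60 * 60 * 1000; // 12 hours

interface RunState {
  startedAt: number;     // epoch ms when the run began
  maxRuntimeMs?: number; // per-run override from worker config
  retryCount: number;    // retries already consumed
  maxRetries: number;    // retry budget carried on the job record
}

// A run is stale once it has exceeded its effective timeout.
function isStale(run: RunState, nowMs: number): boolean {
  const timeoutMs = run.maxRuntimeMs ?? DEFAULT_TIMEOUT_MS;
  return nowMs - run.startedAt > timeoutMs;
}

// After the watchdog fails a stale run, requeue only while budget remains.
function shouldRequeue(run: RunState): boolean {
  return run.retryCount < run.maxRetries;
}
```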
Screenshot evidence delivery
- Job detail loaders normalize legacy screenshot references (raw S3 URLs, `/api/assets/view` URLs, and plain keys).
- Evidence is served as short-lived pre-signed S3 URLs (1 hour TTL) instead of long-lived proxy URLs.
- If signing fails, screenshot fields are returned as `null` and the run timeline remains available.
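The null-on-failure behavior can be sketched like this. The `Signer` type and function names are hypothetical; the point is that a signing error degrades one field rather than the whole response:

```typescript
// Hypothetical signing hook: returns a short-lived pre-signed URL
// for a storage key, or throws when signing is unavailable.
type Signer = (key: string) => string;

// Resolve one screenshot reference; on signing failure, null the
// evidence field so the rest of the run timeline still renders.
function resolveScreenshot(key: string | null, sign: Signer): string | null {
  if (!key) return null;
  try {
    return sign(key); // e.g. pre-signed S3 URL with a 1-hour TTL
  } catch {
    return null; // evidence absent, timeline intact
  }
}
```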
Claim outcomes ingestion
POST /api/jobs/{id}/claim-results accepts up to 100 claim outcomes per request and updates aggregate counters on the parent job:
- attempted
- EOB uploaded
- no-EOB found
- failed
Use this endpoint for healthcare RCM reconciliation and pilot export reporting.
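Since the endpoint caps each request at 100 outcomes, a client submitting a large reconciliation run has to chunk its payload. A minimal sketch (the helper name is illustrative):

```typescript
// Split claim outcomes into batches of at most `size` items,
// matching the endpoint's 100-per-request limit.
function chunk<T>(items: T[], size = 100): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}
```

Each batch is then POSTed to `/api/jobs/{id}/claim-results` with the `x-mimic-signature` header, and the parent job's aggregate counters accumulate across requests.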
Job logs ingestion
POST /api/job-logs accepts batched worker logs:
- `jobId`
- `level`
- `message`
- `timestamp` (unix seconds)
Behavior:
- Inserts logs into `jobLogs`.
- Resolves `agentId` from the first log's `jobId`.
- Touches agent activity for idle-stop accounting (`lastActivityAt`).
This keeps the idle monitor aligned with real execution work, including long-running runs that emit logs between step callbacks.
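A worker assembling a batch for this endpoint needs to emit unix-second timestamps, which is easy to get wrong when the runtime clock is in milliseconds. A sketch of the entry shape (the `WorkerLog` type and helper are illustrative; only the four fields and the unix-seconds unit come from this document):

```typescript
// One entry in a batched POST /api/job-logs payload.
interface WorkerLog {
  jobId: string;
  level: "info" | "warn" | "error";
  message: string;
  timestamp: number; // unix seconds, not milliseconds
}

// Build an entry from an epoch-millisecond clock reading.
function toLogEntry(
  jobId: string,
  level: WorkerLog["level"],
  message: string,
  nowMs: number,
): WorkerLog {
  return { jobId, level, message, timestamp: Math.floor(nowMs / 1000) };
}
```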
Dentrix run-summary ingestion
POST /api/pilot/run-summary accepts Dentrix full-loop summaries and persists:
- `pilotBatches`
- `pilotClaimResults`
- `pilotStopEvents`
- `pilotRoiMetrics`
Supported payload envelopes:
- top-level camelCase summary (`batchSummary`, `claimResults`, `stopEvents`)
- nested `dentrixRunSummary`
- nested snake_case `dentrix_run_summary` (`batch_summary`, `claim_results`, `stop_events`)
Key behavior:
- Normalizes camelCase/snake_case to one persistence contract.
- Accepts job-backed and jobless summaries.
- Uses deterministic internal claim row ids (`{batchId}:claim:{index}`) to avoid collisions when client `claimId` repeats.
- Re-ingest deletes existing batch-linked rows in deterministic FK-safe order before insert: `process_library_session_timeline`, `process_library_session_summary`, `pilot_stop_event`, `pilot_claim_result`, `pilot_batch`, `pilot_roi_metric`.
- Triggers process-library batch session summary generation after ingest.
- Returns structured diagnostic error codes and shape metadata on invalid payloads or ingest failures.
- Upserts computed Dentrix run-cost breakdowns for job-backed summaries and writes normalized cost fields back to the job record.
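The envelope normalization and deterministic row ids can be sketched as below. This is a minimal illustration of the documented shapes, not the service's actual implementation; field fallbacks beyond the three listed envelopes are assumptions:

```typescript
interface NormalizedSummary {
  batchSummary: unknown;
  claimResults: unknown[];
  stopEvents: unknown[];
}

// Deterministic internal claim row id, per the documented
// `{batchId}:claim:{index}` scheme.
const claimRowId = (batchId: string, index: number) =>
  `${batchId}:claim:${index}`;

// Accept the three documented envelopes (top-level camelCase,
// nested dentrixRunSummary, nested snake_case dentrix_run_summary)
// and emit one camelCase persistence contract; null on bad shape.
function normalizeRunSummary(body: any): NormalizedSummary | null {
  const src = body?.dentrixRunSummary ?? body?.dentrix_run_summary ?? body;
  const batchSummary = src?.batchSummary ?? src?.batch_summary;
  if (!batchSummary) return null;
  return {
    batchSummary,
    claimResults: src?.claimResults ?? src?.claim_results ?? [],
    stopEvents: src?.stopEvents ?? src?.stop_events ?? [],
  };
}
```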
Common error codes:
- `INVALID_RUN_SUMMARY_PAYLOAD`
- `MISSING_BATCH_ID`
- `JOB_NOT_FOUND`
- `DENTRIX_RUN_SUMMARY_INGEST_FAILED`
- `RUN_SUMMARY_REQUEST_FAILED`
Dentrix ROI endpoint
GET /api/pilot/roi supports date-scoped ROI snapshots.
Query params:
- `from` (optional): ISO date (YYYY-MM-DD) start boundary.
- `to` (optional): ISO date (YYYY-MM-DD) end boundary.
Behavior:
- Reads batch/job context from pilot tables.
- Aggregates claim throughput and processing-time metrics from `pilotClaimResults`.
- Aggregates variable compute cost from persisted `runCosts.totalCostUsd` for job IDs in scope.
- Returns `source: "computed"` when deriving ROI from raw rows.
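A client building a date-scoped request only needs to pass well-formed boundaries. A sketch of request construction (the helper name and the choice to drop malformed dates client-side are assumptions; the param names and YYYY-MM-DD format come from this document):

```typescript
// Loose ISO-date shape check for the from/to query params.
const ISO_DATE = /^\d{4}-\d{2}-\d{2}$/;

// Build the GET /api/pilot/roi path; both boundaries are optional,
// and malformed values are omitted rather than sent.
function roiQuery(from?: string, to?: string): string {
  const params = new URLSearchParams();
  if (from && ISO_DATE.test(from)) params.set("from", from);
  if (to && ISO_DATE.test(to)) params.set("to", to);
  const qs = params.toString();
  return qs ? `/api/pilot/roi?${qs}` : "/api/pilot/roi";
}
```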
Launch request example
```json
{
  "workflowId": "uuid-workflow",
  "agentId": "uuid-agent",
  "variables": {
    "member_id": "A12345",
    "date_of_birth": "1988-02-14"
  }
}
```

Response example

```json
{
  "jobId": "uuid-job",
  "status": "queued"
}
```

Metering and billing behavior
- Usage counters are incremented when jobs reach terminal states.
- Prevent duplicate billing with a `billingCountedAt`-style guard.
- Keep webhook replay handling idempotent.
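A `billingCountedAt`-style guard can be sketched as below. The `BillableJob` shape and function are hypothetical; the idea is that the first terminal transition stamps the record, so webhook replays find the stamp and skip the increment:

```typescript
interface BillableJob {
  status: string;
  billingCountedAt: number | null; // epoch ms of first usage count
}

// Increment usage exactly once per job, on its first observed
// terminal state; replays and non-terminal callbacks are no-ops.
function countUsageOnce(
  job: BillableJob,
  nowMs: number,
  increment: () => void,
): void {
  const terminal = ["completed", "failed", "cancelled"].includes(job.status);
  if (!terminal || job.billingCountedAt !== null) return;
  job.billingCountedAt = nowMs; // stamp first, so a replay sees it
  increment();
}
```

In a real service the stamp and the counter update would share one transaction so a crash between them cannot double-bill.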
Deletion semantics
DELETE /api/jobs/{id} removes:
- execution events
- step records and metrics
- AI usage rows
- job logs
- cost and pilot artifacts
It also nulls nullable references (`schedules.lastRunJobId`, `pilotBatches.jobId`) before deleting the job row.
Operational guidance
- Poll `job-status` for near-real-time progress.
- Use webhook callbacks for production-grade event processing.
- Record agent logs and screenshots for failure triage.
- Keep timeouts explicit for long-running RPA workloads to preserve deterministic failure handling.
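A polling client following the guidance above can be sketched as follows. The fetcher is injected so the terminal-state logic is testable without a server; the function name, interval, and attempt cap are assumptions:

```typescript
// Terminal states from the documented lifecycle.
const TERMINAL = new Set(["completed", "failed", "cancelled"]);

// Poll a status source until the job reaches a terminal state,
// giving up after a bounded number of attempts.
async function pollUntilTerminal(
  fetchStatus: () => Promise<string>, // e.g. wraps GET /api/job-status
  intervalMs = 2000,
  maxAttempts = 30,
): Promise<string> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await fetchStatus();
    if (TERMINAL.has(status)) return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("job did not reach a terminal state in time");
}
```

Production consumers should still prefer the webhook path; polling is a fallback for interactive progress views.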