API reference

POST /v1/review-runs
Create a review run from an AI output and supplied sources.

Send Kivo the output, intended use, citations, and supplied source documents. The API returns a review-run record you can poll or subscribe to through webhooks.
```json
{
  "intended_use": "customer_facing",
  "ai_output": "According to source 1...",
  "sources": [
    {
      "title": "Refund policy",
      "media_type": "text/plain",
      "content": "Refunds may be offered..."
    }
  ]
}
```

The API keeps capture boring and explicit. Kivo is not a prompt playground or trace database.
POST /v1/review-runs: Create a review run from an AI output and supplied sources.
/v1/review-runs/{reviewRunId}: Read run status, claims, sources, and review metadata.
/v1/review-runs/{reviewRunId}/claims: List extracted atomic claims for a run.
/v1/review-runs/{reviewRunId}/brief: Return the compiled review brief when available.
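If you poll rather than subscribe to webhooks, the read endpoint above fits a simple wait loop. A sketch follows; the `"status"` values (`"completed"`, `"failed"`) and the `fetch_run` callable are assumptions for illustration, so substitute the fields your run records actually carry.

```python
import time


def wait_for_brief(fetch_run, review_run_id: str, timeout: float = 60.0,
                   interval: float = 2.0) -> dict:
    """Poll GET /v1/review-runs/{reviewRunId} until the run finishes.

    `fetch_run` is any callable that returns the run record as a dict,
    e.g. a thin wrapper around your HTTP client. The status values
    checked here are assumed, not confirmed by the API docs.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        run = fetch_run(review_run_id)
        status = run.get("status")
        if status == "completed":
            return run
        if status == "failed":
            raise RuntimeError(f"review run {review_run_id} failed")
        time.sleep(interval)
    raise TimeoutError(f"review run {review_run_id} did not finish in {timeout}s")


# Usage with a fake client that completes on the second poll:
responses = iter([{"status": "running"}, {"status": "completed", "claims": []}])
run = wait_for_brief(lambda rid: next(responses), "rr_123", interval=0.01)
```

Injecting `fetch_run` keeps the loop testable without network access; in production it would wrap an authenticated GET to the run endpoint.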
Use scoped API keys or organization-authenticated requests depending on the workflow.
Inputs stay explicit: output, sources, intended use, strictness, and optional workflow metadata.
Every response should include or propagate a request ID for debugging and audit trails.