How it works
The flow is intentionally narrow. Kivo reviews an existing AI output against supplied evidence, then compiles a brief reviewers can inspect with confidence.
Capture
A review run starts with the AI output, intended use, supplied sources, citations, and workflow metadata. Capture can come from the API or structured import.
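The capture inputs above can be sketched as a single request object. This is a minimal illustration, not Kivo's published API schema; every field name here is an assumption.

```python
from dataclasses import dataclass, field

# Hypothetical shape of a review-run request; field names are
# illustrative assumptions, not Kivo's actual schema.
@dataclass
class ReviewRun:
    ai_output: str                  # the AI-generated text under review
    intended_use: str               # e.g. "board memo", "external publication"
    sources: dict[str, str]         # source_id -> supplied source text
    citations: list[dict]           # claim-to-source pointers, if supplied
    metadata: dict = field(default_factory=dict)  # workflow context

run = ReviewRun(
    ai_output="Revenue grew 12% in Q3.",
    intended_use="board memo",
    sources={"10q": "Quarterly revenue increased 12% year over year."},
    citations=[{"span": (0, 23), "source_id": "10q"}],
    metadata={"requested_by": "analyst@example.com"},
)
print(run.intended_use)
```

Whether the run arrives via the API or a structured import, the same fields would need to be present before review begins.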
Kivo extracts atomic claims and keeps offsets where possible, so each material assertion can be inspected independently.
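One way to picture an extracted claim is a small record carrying the assertion text plus character offsets into the original output. The `Claim` type below is an assumed sketch, not Kivo's schema; the offsets are what let each assertion be inspected independently.

```python
from dataclasses import dataclass

# Illustrative shape for an extracted atomic claim; the type and its
# fields are assumptions, not a published schema. Offsets index into
# the original AI output.
@dataclass(frozen=True)
class Claim:
    text: str    # the atomic assertion, verbatim where possible
    start: int   # character offset into the AI output (-1 if unknown)
    end: int

output = "Revenue grew 12% in Q3. Margins were flat."
claims = [Claim(output[0:23], 0, 23), Claim(output[24:42], 24, 42)]
for c in claims:
    # Kept offsets let a reviewer jump straight to the claim in context.
    assert output[c.start:c.end] == c.text
print(len(claims))
```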
Claims are checked against source spans. Unsupported, partially supported, contradicted, and insufficient-context claims stay visible.
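The four statuses named above can be modeled as an enum, with anything short of full support kept visible. The naming and the `needs_attention` helper are assumptions for illustration.

```python
from enum import Enum

# The four claim statuses, sketched as an enum; naming is assumed.
class Verdict(Enum):
    SUPPORTED = "supported"
    PARTIALLY_SUPPORTED = "partially_supported"
    CONTRADICTED = "contradicted"
    INSUFFICIENT_CONTEXT = "insufficient_context"

def needs_attention(v: Verdict) -> bool:
    # Everything short of full support stays visible to the reviewer.
    return v is not Verdict.SUPPORTED

print(needs_attention(Verdict.PARTIALLY_SUPPORTED))
```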
The reviewer gets a source-linked brief, not a generic model score. Humans remain the decision authority.
Reviewers do not need another trace viewer. They need a structured packet that names the claims, points to source spans, and separates evidence problems from policy decisions.
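A structured packet of that kind might look like the sketch below: every claim with its source span and verdict, evidence problems surfaced first, and the policy decision left empty for the human. The structure is an assumption, not Kivo's actual brief format.

```python
# Hypothetical reviewer-facing packet: claims, the source spans they
# were checked against, and verdicts. Field names are assumptions.
def build_brief(findings: list[dict]) -> dict:
    flagged = [f for f in findings if f["verdict"] != "supported"]
    return {
        "claims": findings,   # every claim, with its evidence pointer
        "flagged": flagged,   # evidence problems, surfaced first
        "decision": None,     # policy call left to the human reviewer
    }

findings = [
    {"claim": "Revenue grew 12% in Q3.",
     "source_span": ("10q", 0, 45), "verdict": "supported"},
    {"claim": "Margins were flat.",
     "source_span": None, "verdict": "insufficient_context"},
]
brief = build_brief(findings)
print(len(brief["flagged"]))
```

Keeping `decision` empty in the packet is the point: the brief separates what the evidence shows from what the organization chooses to do about it.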