Your AI Agent Can't Fact-Check. Now It Can.
Every AI agent has the same blind spot: it can retrieve information, but it can't tell you if that information is true. /verify closes that gap.
Ask an agent to research a company and it'll pull ten articles. But if three of those articles contradict each other, it has no mechanism to weigh the evidence, flag the conflict, or give you a confidence level. It just picks one and moves on.
That's the gap we built /verify to close.
How it works
Send any claim to POST /api/v1/verify:
{
"claim": "OpenAI acquired Windsurf for $3 billion",
"context": "AI industry acquisitions 2025"
}

You get back a structured verdict:
{
"verdict": "true",
"confidence": 0.92,
"supporting_briefs": [...],
"contradicting_briefs": [...],
"nuances": [
"The reported price varied between $2.9B and $3.2B across sources",
"The deal included both cash and equity components"
]
]
}

Every verdict is grounded in Polaris's corpus of thousands of analyzed intelligence briefs — each one already scored for confidence, checked for bias, and sourced from multiple outlets.
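For agents that aren't using an SDK, the raw endpoint is easy to wrap. Here is a minimal sketch using only the Python standard library; the host and bearer-token auth header are assumptions (check the API docs for the exact auth scheme), and the summarize helper is purely illustrative:

```python
import json
import urllib.request

API_URL = "https://api.thepolarisreport.com/api/v1/verify"  # host assumed from the badge URL below


def verify_claim(claim, context=None, api_key="your-key"):
    """POST a claim to /verify and return the parsed verdict dict."""
    payload = {"claim": claim}
    if context:
        payload["context"] = context
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # auth header name is an assumption
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def summarize(result):
    """Turn a /verify response into a one-line summary an agent can log."""
    line = f"{result['verdict']} ({result['confidence']:.0%})"
    if result.get("nuances"):
        line += f", {len(result['nuances'])} nuance(s)"
    return line
```

With the example response above, `summarize` would produce something like `true (92%), 2 nuance(s)` — compact enough to drop into an agent's reasoning trace.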
Why this matters for agents
The agent ecosystem has a verification problem. Retrieval-augmented generation gets you relevant context, but relevance isn't the same as reliability. An agent building a financial report needs to know whether a claimed earnings figure is corroborated or disputed. An agent monitoring regulatory changes needs to distinguish between a proposed rule and an enacted one.
/verify gives agents something they've never had: epistemic awareness. Instead of treating every retrieved fact as equally valid, an agent can now check claims against a continuously updated, bias-audited intelligence layer.
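In practice, that check is a filter step in the agent's retrieval loop. The sketch below shows one way to wire it in; `verify` is any callable returning a /verify-shaped dict (in a real agent it would wrap an SDK call such as `client.verify(claim=...)`), and the threshold policy is an illustrative choice, not part of the API:

```python
def keep_corroborated(facts, verify, min_confidence=0.8):
    """Split retrieved facts into corroborated and flagged.

    A fact is kept only if /verify returns verdict "true" with
    confidence at or above the threshold; everything else is flagged
    with its verdict so the agent can surface the conflict.
    """
    kept, flagged = [], []
    for fact in facts:
        result = verify(fact)
        if result["verdict"] == "true" and result["confidence"] >= min_confidence:
            kept.append(fact)
        else:
            flagged.append((fact, result["verdict"]))
    return kept, flagged
```

The point is that contradictions stop being silent: instead of picking one of three conflicting articles, the agent gets an explicit flagged list to reason about or escalate.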
What's in the response
Every verification call returns:
Verdict: true, false, misleading, or unverified
Confidence score: how strongly the evidence supports the verdict (0–1)
Supporting briefs: Polaris intelligence briefs that corroborate the claim, each with its own confidence score and source list
Contradicting briefs: briefs that present conflicting evidence
Nuances: edge cases, caveats, and context that a binary true/false would miss
The nuances field is what separates this from a simple boolean check. Real-world claims are rarely cleanly true or false — they're true with caveats, or true in one context but misleading in another. /verify captures that.
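An agent can encode that distinction directly in its decision logic. Here is a sketch of one possible policy (the action names and thresholds are illustrative, not part of the API): a clean "true" is accepted outright, a nuanced "true" is accepted but surfaces its caveats, and everything weaker is routed for review:

```python
def decide(result, min_confidence=0.75):
    """Map a /verify response to an agent action (illustrative policy)."""
    verdict, conf = result["verdict"], result["confidence"]
    if verdict == "true" and conf >= min_confidence and not result.get("nuances"):
        return "accept"
    if verdict == "true":
        return "accept_with_caveats"  # surface the nuances to the user
    if verdict == "misleading":
        return "needs_review"
    if verdict == "false":
        return "reject"
    return "escalate"                 # unverified: fall back to a human
```

Under this policy the OpenAI/Windsurf example above, with its two nuances, would come back as accept_with_caveats rather than a bare accept.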
Available everywhere
The endpoint is live across all six Polaris SDKs:
Python
from veroq import PolarisClient

client = PolarisClient(api_key="your-key")
result = client.verify(claim="The EU AI Act took effect in August 2025")
print(result.verdict, result.confidence)
TypeScript
import { VeroqClient } from '@veroq/sdk';
const client = new VeroqClient({ apiKey: 'your-key' });
const result = await client.verify({ claim: 'Tesla recalled 2M vehicles in Q4 2025' });
LangChain
from langchain_veroq import PolarisVerifyTool
tool = PolarisVerifyTool(api_key="your-key")
result = tool.invoke({"claim": "Meta shut down Threads in 2025"})

Also available as a CrewAI tool, Vercel AI SDK tool, and via the MCP server — so it works natively in Claude, Cursor, and any MCP-compatible client.
3 credits per call. Available on every plan including Free.
Embed trust on any article
We also shipped universal trust badges. Paste any article URL — not just Polaris briefs — and get an embeddable SVG badge that shows the verification verdict and confidence score.
Every badge links back to the full Polaris analysis. Drop it in your blog, dashboard, or app:
<a href="https://thepolarisreport.com/analysis?url=YOUR_ARTICLE_URL"> <img src="https://api.thepolarisreport.com/api/v1/badge/url?url=YOUR_ARTICLE_URL" alt="Verified by Polaris" height="36" /> </a>
Generate one at thepolarisreport.com/badge.
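If you're generating badges programmatically, the article URL needs to be percent-encoded before it goes into the query string. A small helper (a sketch; the two endpoints are taken from the snippet above, the encoding is standard query-string practice):

```python
from urllib.parse import quote


def badge_html(article_url):
    """Build the embeddable Polaris badge snippet for an article URL."""
    encoded = quote(article_url, safe="")  # percent-encode :, /, ?, = etc.
    return (
        f'<a href="https://thepolarisreport.com/analysis?url={encoded}">'
        f'<img src="https://api.thepolarisreport.com/api/v1/badge/url?url={encoded}"'
        f' alt="Verified by Polaris" height="36" /></a>'
    )
```

Without the encoding step, any article URL containing its own query string would break the badge URL.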
The bigger picture
We built Polaris because the agent economy needs a trust layer. Agents are making decisions based on retrieved information, and nobody is checking whether that information holds up under scrutiny.
/verify is one piece of that. Combined with confidence scoring on every brief, bias detection on every generation, counter-argument analysis, and source transparency — it's a stack designed from the ground up for agents that need to be right, not just fast.