Built for developers shipping AI to production
Stop shipping hallucinations.
One function call. Every claim extracted, verified against live evidence, and corrected if wrong. Works with any LLM — OpenAI, Anthropic, Google, Meta, Mistral. From prototype to enterprise, cloud to air-gapped.
pip install veroq
npm install @veroq/sdk
docker run veroq/shield

// shield()
Three lines to verify any LLM output.
pip install veroq

from veroq import shield

result = shield(any_llm_output)
print(result.trust_score)  # 0.94
npm install @veroq/sdk
import { shield } from "@veroq/sdk";
const result = await shield(llmOutput);
console.log(result.trustScore);

// zero-config middleware
Wrap your LLM provider. Every response verified automatically.
from veroq.middleware import openai_shield
import openai

client = openai_shield(openai.OpenAI())
# Every response now has .veroq_shield
from veroq.middleware import anthropic_shield
import anthropic

client = anthropic_shield(anthropic.Anthropic())
# Every response now verified
// high-volume caching
Identical text returns instantly from local cache. Zero API calls. Zero credits.
from veroq import CachedShield
cached = CachedShield(max_cache=1000, ttl_seconds=3600)
result = cached("NVIDIA reported $22B in Q4 revenue") # API call
result = cached("NVIDIA reported $22B in Q4 revenue") # instant, 0 credits
print(cached.stats())  # {'hits': 1, 'misses': 1, 'hit_rate': 0.5}

// ci/cd shield
Verify AI outputs before they ship. Claims below the trust threshold fail the build.
- uses: veroq-ai/shield-action@v1
  with:
    api-key: ${{ secrets.VEROQ_API_KEY }}
    threshold: 0.7
    fail-on-contradiction: true

npx @veroq/cli test prompts.json \
  --threshold 0.7
# Fails if any output drops
# below the trust threshold
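The gating logic the CI step applies can be illustrated with a short self-contained sketch. Everything here is a stand-in for illustration — the `gate` helper, the example outputs, and their scores are invented; the real action obtains trust scores from the VeroQ API:

```python
def gate(scored_outputs, threshold=0.7):
    """Return the (text, score) pairs that fall below the trust threshold."""
    return [(text, score) for text, score in scored_outputs if score < threshold]

# Hypothetical trust scores for three prompt outputs.
scored_outputs = [
    ("Revenue grew 12% year over year.", 0.91),
    ("The CEO resigned in 2019.", 0.42),   # below threshold
    ("Headquarters are in Santa Clara.", 0.88),
]

failures = gate(scored_outputs, threshold=0.7)
for text, score in failures:
    print(f"FAIL {score:.2f}: {text}")

# A non-zero exit code is what actually fails the CI job.
exit_code = 1 if failures else 0
print(f"{len(failures)} output(s) below threshold; exit code {exit_code}")
```

Any single low-scoring output is enough to fail the build, which matches the `threshold` input above.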
// framework integrations
Shield plugs into the tools you already use.
// private knowledge base
Upload your company's documents. Shield verifies against your knowledge base first, then live evidence. Corrections cite your docs.
# Upload your docs
client.knowledge_upload(content, "earnings.md", agent_id="my-bot")
# Shield checks YOUR docs first
result = shield("NVIDIA reported $22B revenue",
agent_id="my-bot", knowledge_base=True)
# → contradicted by nvidia-earnings.md: "Revenue was $39.3B"

// enterprise self-hosted
Run Shield inside your VPC. Your models. Your data. Nothing leaves your network.
docker run -p 3000:3000 \
  -e LLM_BASE_URL=http://localhost:11434/v1 \
  -e LLM_MODEL=llama3 \
  -e LLM_API_KEY=none \
  veroq/shield
- Groundedness: verify claims against your documents. Fully local. Zero external calls.
- Factual: verify claims against real-world evidence via the VeroQ API. Opt-in.
- Your models: OpenAI, Ollama, vLLM, NVIDIA NIM, Groq — any OpenAI-compatible API.
// two verification modes
Most guardrail tools check whether LLM output follows a contract. Shield does that, and also checks whether it's actually true.
Groundedness
“Did it follow the rules?”
Pass your documents, contracts, or expected output alongside the LLM response. Shield extracts every claim and verifies it against your context. Catches hallucinations, format violations, and off-task drift.
- ✓ Contract enforcement
- ✓ RAG grounding validation
- ✓ Task alignment checking
- ✓ Runs locally — zero external calls
Factual
“Is it actually true?”
Verifies every claim against live real-world evidence — web search, financial data, public records. Catches errors that contract enforcement misses: stale data, wrong numbers, fabricated facts.
- ✓ Real-time evidence chains
- ✓ Corrections with sources
- ✓ Permanent verification receipts
- ✓ Works with any LLM output
Run both together for maximum coverage. A RAG system can be perfectly grounded — the LLM faithfully cited the document — but the document itself can be wrong. Groundedness catches hallucinations; Factual catches errors the documents themselves contain. Read more →
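The distinction can be sketched with a toy example. Nothing here is the Shield API — `grounded` and `factually_true` are illustrative stand-ins (a substring match and a lookup table) — but they show how a claim can pass a groundedness check while failing a factual one:

```python
def grounded(claim: str, context: str) -> bool:
    """Toy groundedness check: is the claim supported by the supplied context?"""
    return claim.lower() in context.lower()

def factually_true(claim: str, evidence: dict) -> bool:
    """Toy factual check against an evidence table standing in for live sources."""
    return evidence.get(claim, False)

context = "Internal memo: NVIDIA reported $22B in Q4 revenue."
claim = "NVIDIA reported $22B in Q4 revenue"

# Stand-in for real-world evidence: the memo's number is wrong.
evidence = {claim: False}

print(grounded(claim, context))         # True  -- faithfully cites the document
print(factually_true(claim, evidence))  # False -- but the document is wrong
```

The claim is perfectly grounded in its source, yet false: exactly the failure mode that only running both modes together can catch.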
// shield vs guardrails
| Capability | VeroQ Shield | Typical Guardrails |
|---|---|---|
| Factual accuracy verification | ✓ | — |
| Evidence chains with sources | ✓ | — |
| Contract/schema enforcement | ✓ | ✓ |
| RAG groundedness checking | ✓ | ✓ |
| Real-time web verification | ✓ | — |
| Corrections with citations | ✓ | — |
| Permanent verification receipts | ✓ | — |
| Self-hosted / air-gapped | ✓ | Some |
| Works with any LLM | ✓ | ✓ |
| One function call | ✓ | Config files |
// what shield returns
| Property | Description |
|---|---|
| trust_score | Overall confidence (0-1) |
| is_trusted | True if no claims contradicted |
| corrections | Corrections for wrong claims |
| verified_text | Text with corrections inline |
| claims | All extracted claims with verdicts |
| receipt_ids | Permanent verification receipt IDs |
| credits_used | API credits consumed |
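One common way to consume these properties is a publish-or-correct gate. The `ShieldResult` dataclass below is a stand-in mirroring the table above, not the SDK's actual class, and the threshold is an arbitrary example value:

```python
from dataclasses import dataclass, field

@dataclass
class ShieldResult:
    """Stand-in mirroring the documented return properties."""
    trust_score: float      # overall confidence (0-1)
    is_trusted: bool        # True if no claims contradicted
    corrections: list = field(default_factory=list)
    verified_text: str = ""

def publish_or_correct(result: ShieldResult, threshold: float = 0.7) -> str:
    """Ship the original text if trusted; otherwise fall back to the corrected version."""
    if result.is_trusted and result.trust_score >= threshold:
        return "publish"
    return "use verified_text"

r = ShieldResult(trust_score=0.94, is_trusted=True)
print(publish_or_correct(r))  # publish
```

Checking both `is_trusted` and `trust_score` lets you enforce a stricter bar than "no contradictions" alone.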