
Built for developers shipping AI to production

Stop shipping hallucinations.

One function call. Every claim extracted, verified against live evidence, and corrected if wrong. Works with any LLM — OpenAI, Anthropic, Google, Meta, Mistral. From prototype to enterprise, cloud to air-gapped.

pip install veroq
npm install @veroq/sdk
docker run veroq/shield

// shield()

Three lines to verify any LLM output.

Python
pip install veroq

from veroq import shield
result = shield(any_llm_output)
print(result.trust_score)  # 0.94
TypeScript
npm install @veroq/sdk

import { shield } from "@veroq/sdk";
const result = await shield(llmOutput);
console.log(result.trustScore);
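Once you have a result, a common pattern is to gate on it before the text reaches users. The sketch below mocks the return object with the fields listed under "what shield returns" (trust_score, is_trusted, verified_text, corrections); the gate() helper is our own illustration, not part of the SDK:

```python
from dataclasses import dataclass, field

@dataclass
class ShieldResult:
    """Mock of the documented shield() return fields."""
    trust_score: float
    is_trusted: bool
    verified_text: str
    corrections: list = field(default_factory=list)

def gate(raw_output: str, result: ShieldResult, threshold: float = 0.7) -> str:
    """Pass the raw output through when trusted; otherwise serve the corrected text."""
    if result.is_trusted and result.trust_score >= threshold:
        return raw_output
    return result.verified_text

# Trusted output passes through unchanged
ok = ShieldResult(trust_score=0.94, is_trusted=True, verified_text="...")
print(gate("NVIDIA reported $39.3B in Q4 revenue", ok))

# A contradicted claim is replaced by the corrected text
bad = ShieldResult(trust_score=0.31, is_trusted=False,
                   verified_text="NVIDIA reported $39.3B in Q4 revenue",
                   corrections=["$22B → $39.3B"])
print(gate("NVIDIA reported $22B in Q4 revenue", bad))
```

Whether you serve the corrected text, block the response, or retry the model is an application decision; the threshold of 0.7 here matches the CI example further down the page.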

// zero-config middleware

Wrap your LLM provider. Every response verified automatically.

Python — OpenAI
from veroq.middleware import openai_shield
import openai

client = openai_shield(openai.OpenAI())
# Every response now has .veroq_shield
Python — Anthropic
from veroq.middleware import anthropic_shield
import anthropic

client = anthropic_shield(anthropic.Anthropic())
# Every response now verified
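The wrappers above intercept the provider client and attach a verification result to every response. A minimal sketch of that proxy pattern, assuming a client with a single create() method; this is one way such a wrapper can be built, not VeroQ's actual middleware, and FakeClient and the verify stub are ours:

```python
from types import SimpleNamespace

class ShieldedClient:
    """Illustrative proxy: forwards calls to the wrapped client and
    attaches a verification result to every response."""
    def __init__(self, client, verify):
        self._client = client
        self._verify = verify

    def create(self, *args, **kwargs):
        response = self._client.create(*args, **kwargs)
        # Mirror the documented .veroq_shield attribute
        response.veroq_shield = self._verify(response.text)
        return response

# Demo with a fake provider client
class FakeClient:
    def create(self, prompt):
        return SimpleNamespace(text=f"echo: {prompt}")

client = ShieldedClient(FakeClient(), verify=lambda text: {"trust_score": 0.9})
resp = client.create("hello")
print(resp.veroq_shield)  # {'trust_score': 0.9}
```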

// high-volume caching

Identical text returns instantly from local cache. Zero API calls. Zero credits.

from veroq import CachedShield

cached = CachedShield(max_cache=1000, ttl_seconds=3600)
result = cached("NVIDIA reported $22B in Q4 revenue")  # API call
result = cached("NVIDIA reported $22B in Q4 revenue")  # instant, 0 credits
print(cached.stats())  # {'hits': 1, 'misses': 1, 'hit_rate': 0.5}
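The caching semantics above (exact-text keying, LRU eviction, TTL expiry, hit/miss stats) can be sketched in plain Python. This is an illustration of the behavior, not VeroQ's implementation; the verify stub stands in for the real API call:

```python
import time
from collections import OrderedDict

class CachedVerifier:
    """LRU + TTL cache keyed on the exact input text (illustrative)."""
    def __init__(self, verify, max_cache=1000, ttl_seconds=3600):
        self._verify = verify
        self._max = max_cache
        self._ttl = ttl_seconds
        self._cache = OrderedDict()   # text -> (timestamp, result)
        self.hits = 0
        self.misses = 0

    def __call__(self, text):
        entry = self._cache.get(text)
        if entry is not None and time.time() - entry[0] < self._ttl:
            self.hits += 1
            self._cache.move_to_end(text)      # refresh LRU position
            return entry[1]
        self.misses += 1
        result = self._verify(text)            # the real API call happens here
        self._cache[text] = (time.time(), result)
        if len(self._cache) > self._max:
            self._cache.popitem(last=False)    # evict least recently used
        return result

    def stats(self):
        total = self.hits + self.misses
        return {"hits": self.hits, "misses": self.misses,
                "hit_rate": self.hits / total if total else 0.0}

cached = CachedVerifier(verify=lambda t: {"trust_score": 0.94})
cached("NVIDIA reported $22B in Q4 revenue")   # miss: calls verify
cached("NVIDIA reported $22B in Q4 revenue")   # hit: served from cache
print(cached.stats())  # {'hits': 1, 'misses': 1, 'hit_rate': 0.5}
```

Note the trade-off the TTL controls: a longer ttl_seconds saves more credits but serves staler verdicts for claims whose evidence may have changed.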

// ci/cd shield

Verify AI outputs before they ship. Claims below the trust threshold fail the build.

GitHub Action
- uses: veroq-ai/shield-action@v1
  with:
    api-key: ${{ secrets.VEROQ_API_KEY }}
    threshold: 0.7
    fail-on-contradiction: true
CLI
npx @veroq/cli test prompts.json \
  --threshold 0.7

# Fails if any output drops
# below the trust threshold
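The same gate can be reproduced in a custom pipeline: collect per-output results, then fail the build if any score drops below the threshold. The result shape here (a list of dicts with trust_score and contradicted fields) is an assumption for illustration, not the CLI's actual output format:

```python
import sys

def ci_gate(results, threshold=0.7, fail_on_contradiction=True):
    """Return the failing results; a CI wrapper would exit non-zero on any."""
    failing = [
        r for r in results
        if r["trust_score"] < threshold
        or (fail_on_contradiction and r.get("contradicted", False))
    ]
    for r in failing:
        print(f"FAIL [{r['id']}] trust_score={r['trust_score']}", file=sys.stderr)
    return failing

results = [
    {"id": "prompt-1", "trust_score": 0.91, "contradicted": False},
    {"id": "prompt-2", "trust_score": 0.42, "contradicted": True},
]
failing = ci_gate(results)
print(f"{len(failing)} of {len(results)} outputs failed")
# In CI you would then: sys.exit(1) if failing else sys.exit(0)
```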

// framework integrations

Shield plugs into the tools you already use.

  • LangChain: pip install langchain-veroq (40 tools)
  • CrewAI: pip install crewai-veroq (40 tools)
  • Haystack: pip install haystack-veroq (Verifier + Shield)
  • Vercel AI: npm install @veroq/ai (53 tools)
  • MCP: npm install -g veroq-mcp (62 tools)
  • n8n: n8n-nodes-veroq (38 operations)

// private knowledge base

Upload your company's documents. Shield verifies against your knowledge base first, then live evidence. Corrections cite your docs.

# Upload your docs
client.knowledge_upload(content, "earnings.md", agent_id="my-bot")

# Shield checks YOUR docs first
result = shield("NVIDIA reported $22B revenue",
    agent_id="my-bot", knowledge_base=True)
# → contradicted by earnings.md: "Revenue was $39.3B"
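The lookup order is the key design point: the private knowledge base is consulted first, and live evidence only when the KB has nothing to say. A sketch of that fallback logic with stand-in functions of our own (not the SDK's internals):

```python
def verify_claim(claim, kb_lookup, live_check):
    """Check the private knowledge base first; only fall back to live
    evidence when the KB has nothing to say about the claim."""
    verdict = kb_lookup(claim)
    if verdict is not None:
        return verdict            # correction cites your own docs
    return live_check(claim)      # otherwise verify against live evidence

# Toy knowledge base: one claim, contradicted by an uploaded document
kb = {"NVIDIA reported $22B revenue":
      {"verdict": "contradicted", "source": "earnings.md"}}

result = verify_claim(
    "NVIDIA reported $22B revenue",
    kb_lookup=kb.get,
    live_check=lambda c: {"verdict": "unknown", "source": "web"},
)
print(result)  # {'verdict': 'contradicted', 'source': 'earnings.md'}
```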

// enterprise self-hosted

Run Shield inside your VPC. Your models. Your data. Nothing leaves your network.

docker run -p 3000:3000 \
  -e LLM_BASE_URL=http://localhost:11434/v1 \
  -e LLM_MODEL=llama3 \
  -e LLM_API_KEY=none \
  veroq/shield
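For teams that manage services declaratively, the same settings can be expressed as a docker-compose sketch; this assumes it behaves identically to the docker run command above (same image, port, and environment variables):

```yaml
# docker-compose.yml (illustrative, mirrors the docker run command)
services:
  shield:
    image: veroq/shield
    ports:
      - "3000:3000"
    environment:
      LLM_BASE_URL: http://localhost:11434/v1
      LLM_MODEL: llama3
      LLM_API_KEY: none
```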
Groundedness

Verify claims against your documents. Fully local. Zero external calls.

Factual

Verify claims against real-world evidence via VeroQ API. Opt-in.

Any LLM

OpenAI, Ollama, vLLM, NVIDIA NIM, Groq — any OpenAI-compatible API.

// two verification modes

Most guardrail tools check whether LLM output follows a contract. Shield does that and also checks whether it's actually true.

📋

Groundedness

“Did it follow the rules?”

Pass your documents, contracts, or expected output alongside the LLM response. Shield extracts every claim and verifies it against your context. Catches hallucinations, format violations, and off-task drift.

  • Contract enforcement
  • RAG grounding validation
  • Task alignment checking
  • Runs locally — zero external calls
🔍

Factual

“Is it actually true?”

Verifies every claim against live real-world evidence — web search, financial data, public records. Catches errors that contract enforcement misses: stale data, wrong numbers, fabricated facts.

  • Real-time evidence chains
  • Corrections with sources
  • Permanent verification receipts
  • Works with any LLM output

Run both together for maximum coverage. A RAG system can be perfectly grounded (the LLM faithfully cited the document) while the document itself is wrong. Groundedness catches deviations from your context; Factual catches errors against reality.

// shield vs guardrails

| Capability | VeroQ Shield | Typical Guardrails |
|---|---|---|
| Factual accuracy verification | ✓ | — |
| Evidence chains with sources | ✓ | — |
| Contract/schema enforcement | ✓ | ✓ |
| RAG groundedness checking | ✓ | ✓ |
| Real-time web verification | ✓ | — |
| Corrections with citations | ✓ | — |
| Permanent verification receipts | ✓ | — |
| Self-hosted / air-gapped | ✓ | Some |
| Works with any LLM | ✓ | — |
| Integration | One function call | Config files |

// what shield returns

| Property | Description |
|---|---|
| trust_score | Overall confidence (0-1) |
| is_trusted | True if no claims contradicted |
| corrections | Corrections for wrong claims |
| verified_text | Text with corrections inline |
| claims | All extracted claims with verdicts |
| receipt_ids | Permanent verification receipt IDs |
| credits_used | API credits consumed |
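A typical consumer walks the claims list and logs the corrections. This sketch mocks the documented fields with plain dicts; the exact shape of each claim object (here a text plus a verdict string) is an assumption, since the real objects come from shield():

```python
# Mocked result using the documented field names
result = {
    "trust_score": 0.62,
    "is_trusted": False,
    "claims": [
        {"text": "NVIDIA reported $22B in Q4 revenue", "verdict": "contradicted"},
        {"text": "NVIDIA is headquartered in Santa Clara", "verdict": "supported"},
    ],
    "corrections": ["Q4 revenue was $39.3B, not $22B"],
    "credits_used": 2,
}

# Surface the contradicted claims and their corrections
contradicted = [c for c in result["claims"] if c["verdict"] == "contradicted"]
print(f"{len(contradicted)} contradicted claim(s); "
      f"{result['credits_used']} credits used")
for fix in result["corrections"]:
    print("correction:", fix)
```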