
Why Your RAG Pipeline Needs Two Verification Layers

Your RAG system can be perfectly grounded and still completely wrong. Groundedness catches hallucinations. Factual verification catches reality. You need both.

Most teams building RAG pipelines think about hallucination as a single problem: did the LLM make something up? So they add a groundedness check — does the response match the retrieved documents?

That's half the problem.

The Two Types of RAG Errors

Type 1: Hallucination

The LLM invents claims that aren't in the retrieved documents.

Context: “Q3 revenue was $2.1B, up 12% YoY”

LLM says: “Revenue was $2.4B, up 15%”

Groundedness check catches this.

This is what most eval frameworks test for. Ragas, DeepEval, TruLens — they all measure faithfulness to the retrieved context. Important, but incomplete.

Type 2: Grounded But Wrong

The LLM faithfully cites the document, but the document itself is outdated, incomplete, or incorrect.

Internal note (3 months old): “Apple Q1 FY2025 revenue was $124.3B”

LLM says: “Apple had $124B in Q1 2025 revenue”

Groundedness check: PASS

Real-world data: Apple Q1 2025 revenue was $144B

The LLM did nothing wrong. It accurately cited the document. But the document was stale, and the answer shipped to production with a $20 billion error.

No groundedness check catches this. You need a second layer.

One Layer Isn't Enough

Here's what we saw when we ran both verification modes on the same claims:

| Claim | Groundedness | Factual | Action |
| --- | --- | --- | --- |
| Revenue was $2.4B | Contradicted | Contradicted | Block (hallucination) |
| Revenue was $124B | Supported | Contradicted | Flag (stale source) |
| CEO is Tim Cook | Supported | Supported | Pass |
| IPO was March 2025 | Unverifiable | Contradicted | Block (fabrication) |

Row 2 is the dangerous one. A single groundedness check says “all good.” Only the second layer catches the error.
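
Wiring that matrix into a pipeline takes very little code. Here's a minimal shell sketch of the routing logic; the function name and verdict strings are illustrative, not part of VeroQ Shield's API:

```bash
# Hypothetical helper: map one claim's two verdicts to an action.
# Verdict strings are assumed; adapt them to what your verifier returns.
decide() {
  local grounded="$1" factual="$2"
  case "$grounded:$factual" in
    supported:supported)       echo "pass" ;;
    supported:contradicted)    echo "flag: stale source" ;;
    unverifiable:contradicted) echo "block: fabrication" ;;
    contradicted:*)            echo "block: hallucination" ;;
    *)                         echo "flag: needs review" ;;
  esac
}

decide supported contradicted   # row 2 -> "flag: stale source"
```

Note the asymmetry: block when the claim fails against your own documents, but only flag when it passes locally and fails externally. In that second case the retrieval layer, not the LLM, is at fault, so the fix is a source review, not a blocked response.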

How to Add Both Layers

Layer 1: Groundedness (local)

After your LLM generates a response, extract the claims and cross-reference each one against the retrieved context. This runs inside your infrastructure:

```bash
curl -X POST http://localhost:3000/shield \
  -H "Content-Type: application/json" \
  -d '{
    "text": "The company reported $2.4B in Q3 revenue, up 15% YoY.",
    "context": "Q3 2024 Earnings: Revenue was $2.1B, up 12% YoY.",
    "mode": "groundedness"
  }'
```
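
What comes back is a verdict per extracted claim. The response shape below is illustrative; the field names are assumptions, not the documented schema:

```json
{
  "claims": [
    {
      "text": "The company reported $2.4B in Q3 revenue, up 15% YoY.",
      "groundedness": "contradicted",
      "evidence": "Q3 2024 Earnings: Revenue was $2.1B, up 12% YoY."
    }
  ]
}
```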

Layer 2: Factual (external evidence)

For claims that pass groundedness, verify against real-world data. This catches the “grounded but wrong” cases:

```bash
curl -X POST http://localhost:3000/shield \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Apple had $124B in Q1 2025 revenue.",
    "context": "Internal note: Apple Q1 FY2025 revenue $124.3B.",
    "mode": "both"
  }'
```

The "both" mode returns two verdicts per claim: one from your documents, one from the real world. When they disagree, you know something is stale or wrong.
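
For the stale-revenue example, a two-verdict result might look like this (again an illustrative shape with assumed field names):

```json
{
  "claims": [
    {
      "text": "Apple had $124B in Q1 2025 revenue.",
      "groundedness": "supported",
      "factual": "contradicted",
      "action": "flag"
    }
  ]
}
```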

When to Use Each Mode

| Scenario | Recommended |
| --- | --- |
| Internal knowledge base QA | Groundedness only |
| Customer-facing financial data | Both |
| Legal document analysis | Groundedness + manual review |
| Real-time market intelligence | Factual only |
| Healthcare / compliance | Both + human-in-the-loop |
| Air-gapped / classified | Groundedness only |
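
The first two modes appear in the examples above. For the "Factual only" row, the same endpoint presumably accepts a factual mode with no context document; the mode value here is an assumption inferred from the table, so check it against your Shield version:

```bash
# Assumed mode name ("factual") inferred from the table above; verify
# against your VeroQ Shield docs. No "context" field: the claim is
# checked against external evidence only.
curl -X POST http://localhost:3000/shield \
  -H "Content-Type: application/json" \
  -d '{
    "text": "The IPO was in March 2025.",
    "mode": "factual"
  }'
```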

The Enterprise Problem

The reason most teams only run groundedness is practical, not technical: sending internal documents to a third-party API is a non-starter for most enterprises.

VeroQ Shield (Self-Hosted) solves this by running inside your VPC. Groundedness mode is fully local. Factual mode is opt-in and only sends the extracted claim text — never your documents. For air-gapped environments, use a local model:

```bash
# If Ollama runs on the Docker host, localhost inside the container won't
# reach it; use host.docker.internal (on Linux, add the host-gateway mapping).
docker run -p 3000:3000 \
  --add-host=host.docker.internal:host-gateway \
  -e LLM_BASE_URL=http://host.docker.internal:11434/v1 \
  -e LLM_MODEL=llama3 \
  -e LLM_API_KEY=none \
  veroq/shield
```

No internet required. Full verification. Your models, your data.
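
Once the container is up, a quick smoke test confirms the loop end to end; this request stays entirely on your machine:

```bash
# Smoke test: a claim that contradicts its context should come back flagged.
curl -X POST http://localhost:3000/shield \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Revenue was $2.4B.",
    "context": "Q3 revenue was $2.1B.",
    "mode": "groundedness"
  }'
```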

The Bottom Line

One verification layer catches hallucinations. Two layers catch reality.

If your RAG pipeline serves anything where accuracy matters — financial data, legal analysis, healthcare, compliance — you need both. Groundedness tells you the LLM was faithful. Factual verification tells you the truth.