
How to Add Real-Time World Knowledge to Your LangChain Agent

Step-by-step guide to adding verified intelligence to LangChain agents. Tools, RAG retriever, and trending entity monitoring with confidence scoring.


Your LangChain agent can answer questions about code, documents, and databases. But ask it “What happened in AI this week?” and it's stuck at its training cutoff.

Here's how to give it real-time, verified world knowledge in under 5 minutes using the langchain-polaris package.

Install

```bash
pip install langchain-polaris langchain-anthropic
```

Option 1: Tools (Agent Can Search News)

Give your agent 7 intelligence tools it can call on demand:

```python
from langchain_polaris import (
    PolarisSearchTool,
    PolarisFeedTool,
    PolarisCompareTool,
    PolarisEntityTool,
    PolarisBriefTool,
    PolarisExtractTool,
    PolarisResearchTool,
)
from langchain_anthropic import ChatAnthropic
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate

# Initialize all seven tools with the demo API key
tools = [
    PolarisSearchTool(api_key="demo"),
    PolarisFeedTool(api_key="demo"),
    PolarisCompareTool(api_key="demo"),
    PolarisEntityTool(api_key="demo"),
    PolarisBriefTool(api_key="demo"),
    PolarisExtractTool(api_key="demo"),
    PolarisResearchTool(api_key="demo"),
]

# Create the agent
llm = ChatAnthropic(model="claude-sonnet-4-20250514")
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a news analyst. Use your tools to find and analyze current events."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Ask it anything
result = executor.invoke({
    "input": "What's happening in AI regulation this week? "
             "Compare how different outlets are covering it."
})
print(result["output"])
```

The agent will search for AI regulation briefs, then use the compare tool to analyze source coverage — all automatically.

Option 2: RAG Retriever (News as Context)

Use PolarisRetriever to inject relevant news into any chain as context:

```python
from langchain_polaris import PolarisRetriever
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

retriever = PolarisRetriever(
    api_key="demo",
    min_confidence=0.7,
    limit=5,
)

llm = ChatAnthropic(model="claude-sonnet-4-20250514")

prompt = ChatPromptTemplate.from_template("""
Based on these verified news briefs:

{context}

Answer this question: {question}

Cite confidence scores and sources.
""")

chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

answer = chain.invoke("What are the biggest developments in AI this week?")
print(answer)
```

Every document returned by the retriever carries confidence scores, bias analysis, and source provenance in its metadata field, so your chain can reason about trustworthiness.
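As a rough sketch of what that enables (using plain dicts in place of LangChain Document objects, and assuming metadata keys like "confidence" and "source", which are illustrative, not Polaris's documented schema), a post-retrieval confidence filter might look like:

```python
# Stand-ins for retrieved Documents: page_content plus a metadata dict.
# The metadata keys ("confidence", "source") are assumptions for illustration.
briefs = [
    {"page_content": "Brief A", "metadata": {"confidence": 0.92, "source": "Reuters"}},
    {"page_content": "Brief B", "metadata": {"confidence": 0.55, "source": "Unknown blog"}},
    {"page_content": "Brief C", "metadata": {"confidence": 0.81, "source": "AP"}},
]

def filter_by_confidence(docs, threshold=0.7):
    """Keep only documents whose metadata confidence meets the threshold."""
    return [d for d in docs if d["metadata"].get("confidence", 0.0) >= threshold]

trusted = filter_by_confidence(briefs)
print([d["metadata"]["source"] for d in trusted])  # → ['Reuters', 'AP']
```

Dropping low-confidence briefs before they reach the prompt keeps weak sourcing out of the model's context entirely, rather than asking the LLM to discount it.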

Option 3: Entity Lookup for Proactive Agents

Build an agent that looks up news coverage for specific entities:

```python
from langchain_polaris import PolarisEntityTool

entity_tool = PolarisEntityTool(api_key="demo")
result = entity_tool.invoke({"name": "OpenAI"})
print(result)
```

Use this in a scheduled job to monitor coverage of specific companies, people, or technologies across verified news sources.
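The entity tool's exact return shape isn't shown here, but the monitoring logic is independent of it: keep the previous run's snapshot and alert on anything new. A minimal sketch, assuming each scheduled lookup yields a set of brief identifiers (a hypothetical structure for illustration):

```python
def new_briefs(previous_ids, current_ids):
    """Return brief IDs present in the current lookup but not the previous one."""
    return sorted(set(current_ids) - set(previous_ids))

# Two hypothetical snapshots from consecutive scheduled runs
yesterday = {"brief-101", "brief-102"}
today = {"brief-101", "brief-102", "brief-207"}

for brief_id in new_briefs(yesterday, today):
    print(f"New coverage detected: {brief_id}")  # e.g. post to Slack or email
```

Persist the snapshot between runs (a file or small table is enough) and the job stays idempotent: re-running it without new coverage produces no alerts.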

Why Not Just Use Web Search?

LangChain has Tavily and other web search tools. Why add Polaris?

Web search returns raw content — no confidence scoring, no bias detection, no counter-arguments. Your agent has no way to evaluate whether a source is trustworthy or how it's framed.

Polaris returns verified intelligence. Every result includes a confidence score (0–1), bias rating, counter-arguments, and full source provenance. Your agent can filter by confidence, compare source framing, and present balanced analysis. Use Tavily for general web search. Use Polaris when your agent needs structured world knowledge.
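To make "compare source framing" concrete: once each result carries a structured bias rating, the comparison is ordinary aggregation. A sketch with invented data (the field names and the bias scale are assumptions, not Polaris's documented schema):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical verified results: each carries a source and a bias rating
# (negative = one framing direction, positive = the other, 0 = neutral).
results = [
    {"source": "Outlet A", "bias": -0.4, "confidence": 0.9},
    {"source": "Outlet A", "bias": -0.2, "confidence": 0.8},
    {"source": "Outlet B", "bias": 0.3, "confidence": 0.85},
]

def framing_by_source(items):
    """Average the bias rating per source to compare how outlets frame a story."""
    grouped = defaultdict(list)
    for item in items:
        grouped[item["source"]].append(item["bias"])
    return {source: round(mean(vals), 2) for source, vals in grouped.items()}

print(framing_by_source(results))  # {'Outlet A': -0.3, 'Outlet B': 0.3}
```

With raw web search results, none of these fields exist, so this kind of analysis has to be delegated back to the LLM, with no ground truth to check it against.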

What's Next
