LangChain + Veto
Runtime authorization for LangChain agents. Validate every tool call from ReAct agents, AgentExecutor, and LangGraph ToolNodes before execution. Python SDK with zero changes to your agent architecture.
The problem with LangChain agents today
LangChain makes it trivial to give agents tools. Decorate a function with @tool, pass it to an agent, and the LLM can call it. But LangChain provides no authorization layer between the model's decision and the tool's execution. The framework "cheerfully parses the JSON and executes whatever parameters the LLM outputs."
The security track record makes this worse. In December 2025, CVE-2025-68664 (CVSS 9.3) disclosed a serialization injection vulnerability in langchain-core that enabled secret exfiltration and arbitrary code execution. In March 2026, additional CVEs exposed SQL injection in LangGraph's checkpoint system (CVE-2025-67644) and path traversal in LangChain's prompt loader (CVE-2026-34070). These are framework-level vulnerabilities. Your tools are the application-level attack surface that needs its own protection.
LangChain's ToolNode and AgentExecutor execute tool calls with no built-in authorization check. The LLM picks the tool. Your code runs it.
Serialization injection in langchain-core (CVSS 9.3). Attacker steers an agent via prompt injection to craft outputs that extract secrets from environment variables.
LangChain doesn't log tool calls with authorization context. For SOC2, HIPAA, or financial compliance you need to know what was attempted and what was blocked.
Before and after Veto
The first example shows a standard LangChain agent: the LLM picks tools and LangChain executes them unconditionally. The second adds Veto inside each tool function. Same agent, same tools, every call evaluated against policy first.
from langchain_openai import ChatOpenAI
from langchain.agents import create_react_agent, AgentExecutor
from langchain_core.tools import tool
@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email to a recipient."""
    return email_service.send(to, subject, body)

@tool
def delete_records(table: str, condition: str) -> str:
    """Delete records from a database table."""
    return db.execute(f"DELETE FROM {table} WHERE {condition}")

@tool
def process_payment(amount: float, recipient: str) -> str:
    """Process a payment transaction."""
    return payment_service.charge(amount, recipient)

llm = ChatOpenAI(model="gpt-5.4")
tools = [send_email, delete_records, process_payment]
agent = create_react_agent(llm, tools)
executor = AgentExecutor(agent=agent, tools=tools)

# The LLM decides which tools to call and with what arguments.
# LangChain executes them. No authorization. No limits.
# A prompt injection could trigger delete_records on your users table.
result = executor.invoke({"input": user_message})

LangGraph ToolNode integration
LangGraph's create_react_agent builds a state machine with a ToolNode that executes tool calls. Veto validates inside each tool function, so the graph topology and state management stay unchanged.
from langgraph.prebuilt import create_react_agent
from langchain_core.tools import tool
from veto import Veto

veto = Veto(api_key="veto_live_xxx")

@tool
def query_database(query: str, table: str) -> str:
    """Execute a database query."""
    decision = veto.guard(
        tool="query_database",
        arguments={"query": query, "table": table},
    )
    if decision.decision != "allow":
        return f"Blocked: {decision.reason}"
    return db.execute(query, table)

@tool
def write_file(path: str, content: str) -> str:
    """Write content to a file."""
    decision = veto.guard(
        tool="write_file",
        arguments={"path": path, "content": content},
        context={"environment": "production"},
    )
    if decision.decision != "allow":
        return f"Blocked: {decision.reason}"
    return fs.write(path, content)

tools = [query_database, write_file]

# create_react_agent builds the full LangGraph with a ToolNode.
# Veto validates inside each tool before side effects occur.
agent = create_react_agent(
    model="openai:gpt-5.4",
    tools=tools,
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "Query the users table"}]},
    config={"configurable": {"thread_id": "session-123"}},
)

Policy configuration
Define what your LangChain agents can and cannot do. Declarative YAML, version controlled, no prompt engineering.
rules:
  - name: block_destructive_queries
    description: Prevent DELETE, DROP, TRUNCATE on any table
    tool: delete_records
    when: context.environment == "production"
    action: deny
    message: "Destructive operations blocked in production"

  - name: read_only_database
    description: Only allow SELECT queries
    tool: query_database
    when: "!args.query.upper().startswith('SELECT')"
    action: deny
    message: "Only read-only queries are permitted"

  - name: block_sensitive_tables
    description: Block access to credentials and PII tables
    tool: query_database
    when: args.table in ["credentials", "passwords", "ssn"]
    action: deny
    message: "Access to sensitive tables is prohibited"

  - name: approve_external_email
    description: Require approval for non-company emails
    tool: send_email
    when: "!args.to.endswith('@yourcompany.com')"
    action: require_approval
    approvers: [compliance-team]

  - name: payment_limits
    description: Cap payments at $5,000
    tool: process_payment
    when: args.amount > 5000
    action: deny
    message: "Payments over $5,000 require manual processing"

  - name: block_system_paths
    description: Prevent writes to system directories
    tool: write_file
    when: args.path.startswith("/etc") || args.path.startswith("/sys")
    action: deny
    message: "Cannot write to system directories"

Quickstart
Install
pip install veto langchain langchain-openai

Define policies
Create veto/policies.yaml with rules for each tool. Match on name, constrain arguments, set actions.
Add veto.guard() to each tool
Call veto.guard() at the top of each tool function. Check the decision. Execute only if allowed. Your agent, executor, and graph stay untouched.
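If several tools repeat the same guard-then-execute pattern, it can be factored into a reusable decorator. A minimal sketch with a local stand-in for the SDK (the `Decision` class and `toy_guard` below are illustrative, not Veto's real API):

```python
from dataclasses import dataclass
from functools import wraps
from typing import Callable

# Hypothetical stand-in for the SDK's decision object.
@dataclass
class Decision:
    decision: str  # "allow", "deny", or "require_approval"
    reason: str = ""

def guarded(tool_name: str, guard: Callable[[str, dict], Decision]):
    """Wrap a tool function so every call is checked before execution."""
    def wrap(fn):
        @wraps(fn)
        def inner(**kwargs):
            d = guard(tool_name, kwargs)
            if d.decision != "allow":
                # Return the denial to the LLM instead of raising,
                # so the agent can see why and recover.
                return f"Blocked: {d.reason}"
            return fn(**kwargs)
        return inner
    return wrap

# Toy guard standing in for veto.guard: deny writes outside /tmp.
def toy_guard(tool: str, args: dict) -> Decision:
    if tool == "write_file" and not args["path"].startswith("/tmp"):
        return Decision("deny", "path outside /tmp")
    return Decision("allow")

@guarded("write_file", toy_guard)
def write_file(path: str, content: str) -> str:
    return f"wrote {len(content)} bytes to {path}"
```

In a real integration the guard callable would delegate to `veto.guard()`; the tool bodies stay one-liners either way.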
What Veto covers for LangChain agents
Works with every agent type
ReAct agents, Plan-and-Execute, create_react_agent, custom AgentExecutor subclasses. Veto validates inside the tool function, so agent type is irrelevant.
Argument constraints
Block SQL containing DELETE/DROP. Restrict file paths to safe directories. Cap payment amounts. Enforce at the data level, not the prompt level.
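Data-level enforcement means each constraint is an ordinary predicate over the arguments. A hypothetical Python mirror of the query rules from the policy file above (the rule logic comes from the YAML; the function itself is illustrative):

```python
SENSITIVE_TABLES = {"credentials", "passwords", "ssn"}

def check_query(query: str, table: str) -> tuple[bool, str]:
    """Allow only SELECT statements against non-sensitive tables."""
    # Mirrors the read_only_database rule.
    if not query.strip().upper().startswith("SELECT"):
        return False, "Only read-only queries are permitted"
    # Mirrors the block_sensitive_tables rule.
    if table in SENSITIVE_TABLES:
        return False, "Access to sensitive tables is prohibited"
    return True, ""
```

Because the check runs on concrete arguments rather than on the prompt, it holds even when a prompt injection steers the model.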
Human-in-the-loop
Route sensitive tool calls to human approval queues. Works alongside LangGraph's interrupt() for graph-level pause/resume. Different mechanisms, complementary goals.
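A denied call and an approval-gated call surface differently to the agent. A sketch of the three-way branch a tool might implement (the return strings are illustrative; `require_approval` matches the policy action shown above):

```python
from typing import Callable

def apply_decision(decision: str, reason: str, execute: Callable[[], str]) -> str:
    """Route a guard decision: run the tool, park it for approval, or block it."""
    if decision == "allow":
        return execute()
    if decision == "require_approval":
        # The call is parked; a human approver resolves it out of band.
        return f"Pending approval: {reason}"
    return f"Blocked: {reason}"
```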
Full audit trail
Every tool call logged with name, arguments, decision, reason, and timestamp. Queryable via API. Export for compliance. LangSmith traces show what happened; Veto logs show what was authorized.
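The fields named above suggest a log-entry shape like the following. This is a hypothetical model for illustration, not Veto's actual schema:

```python
from dataclasses import dataclass, asdict, field
import json
import time

@dataclass
class AuditEntry:
    tool: str
    arguments: dict
    decision: str   # "allow", "deny", or "require_approval"
    reason: str
    timestamp: float = field(default_factory=time.time)

entry = AuditEntry(
    tool="process_payment",
    arguments={"amount": 9000, "recipient": "acme"},
    decision="deny",
    reason="Payments over $5,000 require manual processing",
)
record = json.dumps(asdict(entry))  # export-ready for compliance review
```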
Frequently asked questions
Does Veto replace LangChain's middleware system?
No. Veto validates inside the tool function itself, so it composes with LangChain callbacks and middleware rather than replacing them.
Does it work with LangGraph workflows?
Yes. Because validation happens inside each tool function, LangGraph's ToolNode, graph topology, and state management stay unchanged.
What happens when a tool call is denied?
The tool returns a "Blocked" message with the policy's reason instead of executing, so the LLM can see why and adjust. The attempt is logged with its decision and reason.
Can I use different policies for different users?
Yes. Pass user attributes in the context argument to veto.guard() and write rules that match on context fields, the same way the examples above match on context.environment.
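Per-user decisions hinge on the context argument shown in the write_file example earlier. A self-contained sketch of a context-aware rule (the rule itself is illustrative, not a built-in):

```python
def guard_for(tool: str, args: dict, context: dict) -> str:
    """Illustrative context-aware rule: only admins may delete records."""
    if tool == "delete_records" and context.get("role") != "admin":
        return "deny"
    return "allow"
```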
Guardrails for your LangChain agents in minutes.