LangGraph + Veto

Runtime authorization for LangGraph agents. State-aware policies for ToolNode, create_react_agent, and multi-agent graph workflows. Validate every tool call without changing your graph topology.

The problem with LangGraph agents today

LangGraph gives agents state machines, checkpointing, and multi-agent coordination. But its ToolNode executes tool calls with no authorization check. The LLM picks a tool, the ToolNode runs it. In a multi-agent graph, once one agent hands work to another, there is no built-in mechanism for scoped delegation or tool-level enforcement.

This matters because LangGraph is where the high-stakes agents live. Financial workflows, customer operations, infrastructure automation. CVE-2025-67644 demonstrated SQL injection through LangGraph's own SQLite checkpoint system. If the framework's internal state management had injection vulnerabilities, your tools are certainly an attack surface. LangGraph's interrupt() provides human-in-the-loop at the graph level, but it doesn't evaluate tool arguments against policies.

ToolNode has no auth

LangGraph's prebuilt ToolNode executes any tool call the LLM produces. It parses JSON arguments and runs the function. No policy. No validation. No approval.

No scoped delegation

In multi-agent graphs, agents share tools without boundaries. A research agent could call a payment tool if the LLM decides to. There is no role-based isolation.

CVE-2025-67644

SQL injection in LangGraph's SQLite checkpoint (CVSS 7.3). If the framework's own state management had injection flaws, tools exposed to LLM-generated arguments need external protection.

Before and after Veto

Below is a standard LangGraph agent built with create_react_agent; the ToolNode it builds internally executes everything unconditionally. Adding Veto means calling veto.guard() inside each tool function, as the multi-agent example in the next section shows.

from langgraph.prebuilt import create_react_agent
from langchain_core.tools import tool

@tool
def process_payment(amount: float, customer_id: str) -> str:
    """Process a payment for a customer."""
    return payment_service.charge(amount, customer_id)

@tool
def query_database(query: str, tables: list[str]) -> str:
    """Query the customer database."""
    return db.execute(query, tables)

@tool
def delete_records(table: str, condition: str) -> str:
    """Delete records matching a condition."""
    # LLM-controlled table and condition interpolated straight into SQL
    return db.execute(f"DELETE FROM {table} WHERE {condition}")

# create_react_agent builds a ToolNode internally
# Every tool call from the LLM is executed without authorization
agent = create_react_agent(
    model="openai:gpt-5.4",
    tools=[process_payment, query_database, delete_records],
)

# The LLM decides what to call. LangGraph's ToolNode executes it.
# Prompt injection could trigger delete_records on the users table.
result = agent.invoke(
    {"messages": [{"role": "user", "content": user_message}]},
)

Multi-agent graph with per-agent policies

LangGraph's power is multi-agent coordination. Veto adds role-based authorization: a researcher agent gets read-only policies while an executor agent gets write access with approval requirements. Same tools, different policies per agent context.

multi_agent_graph.py
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.prebuilt import create_react_agent
from langchain_core.tools import tool
from veto import Veto

veto = Veto(api_key="veto_live_xxx")

@tool
def search_web(query: str) -> str:
    """Search the web for information."""
    decision = veto.guard(
        tool="search_web",
        arguments={"query": query},
        context={"agent": "researcher"},
    )
    if decision.action != "allow":
        return f"Blocked: {decision.reason}"
    return web_search.run(query)

@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email to a recipient."""
    decision = veto.guard(
        tool="send_email",
        arguments={"to": to, "subject": subject, "body": body},
        context={"agent": "executor"},
    )
    if decision.action == "require_approval":
        return f"Email requires approval (ID: {decision.approval_id})"
    if decision.action != "allow":
        return f"Blocked: {decision.reason}"
    return email_service.send(to, subject, body)

researcher = create_react_agent(model="openai:gpt-5.4", tools=[search_web])
executor = create_react_agent(model="openai:gpt-5.4", tools=[send_email])

def router(state: MessagesState):
    last = state["messages"][-1].content
    if "send" in last.lower() or "email" in last.lower():
        return "executor"
    return "researcher"

workflow = StateGraph(MessagesState)
workflow.add_node("researcher", researcher)
workflow.add_node("executor", executor)
workflow.add_conditional_edges(START, router)
workflow.add_edge("researcher", END)
workflow.add_edge("executor", END)

graph = workflow.compile()

Policy configuration

Policies can condition on agent identity, user role, environment, and tool arguments. Define different rules for different agents in the same graph.

veto/policies.yaml
rules:
  - name: block_destructive_writes
    description: Prevent DELETE in production
    tool: delete_records
    when: context.environment == "production"
    action: deny
    message: "Destructive writes blocked in production"

  - name: approve_large_payments
    description: Human approval for payments over $1,000
    tool: process_payment
    when: args.amount > 1000
    action: require_approval
    approvers: [finance-team]
    timeout: 30m

  - name: viewer_payment_block
    description: Viewers cannot process any payments
    tool: process_payment
    when: context.user_role == "viewer"
    action: deny
    message: "Viewers cannot process payments"

  - name: restrict_sensitive_tables
    description: Block access to credentials tables
    tool: query_database
    when: '"credentials" in args.tables || "passwords" in args.tables'
    action: deny
    message: "Access to sensitive tables is prohibited"

  - name: executor_external_email_approval
    description: Require approval for external emails
    tool: send_email
    when: 'context.agent == "executor" && !args.to.endswith("@yourcompany.com")'
    action: require_approval
    approvers: [compliance-team]

  - name: researcher_no_actions
    description: Researcher agent cannot take actions
    tool: send_email
    when: context.agent == "researcher"
    action: deny
    message: "Researcher agent cannot send emails"

Quickstart

1. Install

pip install veto langgraph langchain-openai

2. Define policies

Create veto/policies.yaml with rules per tool and agent context.

3. Add veto.guard() to each tool

Call veto.guard() at the top of each tool function. Your graph topology, checkpointing, and state management stay untouched.

What Veto covers for LangGraph agents

Per-agent policies

Different agents in the same graph get different authorization rules. Pass agent identity as context. A researcher gets read-only; an executor gets write with approval.

Works with checkpointing

Veto's authorization decisions are stateless, so they compose cleanly with LangGraph's checkpointing and persistence: resume a thread from a checkpoint and every guarded tool call is re-evaluated against current policy.

Complements interrupt()

LangGraph's interrupt() pauses the graph for human review. Veto's require_approval pauses a single tool call. Use interrupt() for graph-level decisions, Veto for tool-level authorization.

Full audit trail

Every tool call is logged with agent context, arguments, decision, and timestamp, so you can see which agent attempted which call and whether it was authorized.

Frequently asked questions

How does Veto integrate with LangGraph's state machine?
Veto validates inside tool functions, not at the graph level. The graph topology, state transitions, conditional edges, and checkpointing are all unaffected. When the ToolNode calls a tool function, Veto evaluates the call against policies before the side effect occurs.
Can I use different policies for different graph nodes?
Yes. Pass the agent or node name as context in veto.guard(). Policies can condition on context fields, so the same tool can have different rules depending on which agent in the graph is calling it.
How do approval workflows work with LangGraph's execution model?
When veto.guard() returns require_approval, the tool function returns a message to the LLM explaining the pause. The approval ID can be stored in graph state. Your application polls for approval status or receives webhooks. On approval, invoke the graph again to retry.
Does Veto work with LangGraph's streaming mode?
Yes. Veto validates at the tool function boundary, not on streamed tokens. When the ToolNode invokes a tool during a streaming run, Veto evaluates the complete call synchronously. The streaming response continues after the decision.

Related integrations

Add guardrails to your LangGraph agents in minutes.