
PydanticAI + Veto

Type-safe runtime authorization for Python AI agents. Wrap PydanticAI tools with policy-based guardrails using Veto's Python SDK. Full compatibility with dependency injection, tool preparation, and streaming.

What is PydanticAI?

PydanticAI is the Python agent framework from the creators of Pydantic. It brings type safety to AI agents through validated response models, dependency injection via RunContext, and tool definitions with @agent.tool. It supports OpenAI, Anthropic, Gemini, and local models through a unified interface. Veto adds the runtime authorization layer that PydanticAI's type system cannot express.

The problem: type safety is not security

PydanticAI validates that tool arguments match your schema. If you declare amount: float, Pydantic ensures it's a float. But Pydantic cannot express "amounts over $10,000 require approval" or "only admin users can delete accounts." That's authorization, not validation.

PydanticAI also supports human-in-the-loop tool approval, but it's a binary gate. Veto provides the policy engine that decides whether to approve, deny, or escalate based on the tool name, arguments, user context, and any custom rules.

Valid but dangerous

DELETE FROM users is a perfectly valid string. Pydantic will happily pass it through.

Context-blind

Type validation doesn't know who is calling the tool, what role they have, or whether this operation needs approval.

No audit trail

Pydantic logs nothing about tool calls. When something goes wrong, you have no record of what the agent tried to do or why it was allowed.

Quickstart

1. Install

pip install veto pydantic-ai

2. Add authorization to your agent tools

Veto integrates with PydanticAI's RunContext and dependency injection. Pass user context through deps and authorize inside the tool body.

agent.py
from pydantic_ai import Agent, RunContext
from dataclasses import dataclass
from veto import Veto

veto = Veto(api_key="veto_live_...")

@dataclass
class Deps:
    db: DatabaseClient
    current_user: str
    current_role: str

agent = Agent(
    "openai:gpt-5.4",
    deps_type=Deps,
    system_prompt="You are a database administrator.",
)

@agent.tool
async def run_query(ctx: RunContext[Deps], query: str) -> str:
    """Execute a SQL query against the database."""
    decision = await veto.guard(
        tool="run_query",
        arguments={"query": query},
        context={
            "user": ctx.deps.current_user,
            "role": ctx.deps.current_role,
        },
    )

    if decision.decision == 'deny':
        return f"Query blocked: {decision.reason}"

    if decision.decision == 'require_approval':
        return f"Query requires approval: {decision.approval_id}"

    rows = await ctx.deps.db.fetch(query)
    return f"Returned {len(rows)} rows"

@agent.tool
async def create_backup(ctx: RunContext[Deps], table: str) -> str:
    """Create a backup of a database table."""
    decision = await veto.guard(
        tool="create_backup",
        arguments={"table": table},
        context={
            "user": ctx.deps.current_user,
            "role": ctx.deps.current_role,
        },
    )

    if decision.decision == 'deny':
        return f"Blocked: {decision.reason}"

    await ctx.deps.db.execute(
        f"CREATE TABLE {table}_backup AS SELECT * FROM {table}"
    )
    return f"Backup created: {table}_backup"

result = await agent.run(
    "Back up the users table, then show me users who haven't logged in for 90 days",
    deps=Deps(db=db, current_user="alice", current_role="dba"),
)
print(result.output)
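Repeating the guard-then-check block in every tool gets tedious. One way to factor it out is a small decorator. The sketch below is self-contained, so it replaces the Veto client with a local fake_guard stub and uses a plain dict in place of RunContext (both are illustrative stand-ins, not part of either API):

```python
import asyncio
from dataclasses import dataclass
from functools import wraps

@dataclass
class Decision:
    decision: str          # "allow" | "deny" | "require_approval"
    reason: str = ""
    approval_id: str = ""

# Stand-in for veto.guard(): denies any query that starts with DROP.
async def fake_guard(tool: str, arguments: dict, context: dict) -> Decision:
    query = arguments.get("query", "")
    if query.upper().startswith("DROP"):
        return Decision("deny", reason="Destructive SQL operations are never allowed")
    return Decision("allow")

def guarded(tool_name: str, guard=fake_guard):
    """Wrap an async tool so every call is authorized before it runs."""
    def decorator(fn):
        @wraps(fn)
        async def wrapper(ctx, **arguments):
            decision = await guard(
                tool=tool_name,
                arguments=arguments,
                context={"user": ctx["user"]},
            )
            if decision.decision == "deny":
                return f"Blocked: {decision.reason}"
            if decision.decision == "require_approval":
                return f"Pending approval: {decision.approval_id}"
            return await fn(ctx, **arguments)
        return wrapper
    return decorator

@guarded("run_query")
async def run_query(ctx, query: str) -> str:
    return f"Executed: {query}"

ctx = {"user": "alice"}
print(asyncio.run(run_query(ctx, query="SELECT * FROM users")))  # Executed: ...
print(asyncio.run(run_query(ctx, query="DROP TABLE users")))     # Blocked: ...
```

In a real agent you would pass veto.guard (and read user context from ctx.deps) instead of the stub; the decorator shape stays the same.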

3. Define policies

veto/policies.yaml
version: "1.0"
name: PydanticAI database agent policies

rules:
  - id: block-destructive-queries
    tools: [run_query]
    action: deny
    conditions:
      - field: arguments.query
        operator: matches
        value: "^(DROP|TRUNCATE|DELETE FROM)\\s"
    reason: "Destructive SQL operations are never allowed via agent"

  - id: approve-schema-changes
    tools: [run_query]
    action: require_approval
    conditions:
      - field: arguments.query
        operator: matches
        value: "^(ALTER|CREATE|RENAME)\\s"
    approval:
      timeout_minutes: 30
      notify: [dba-team@company.com]

  - id: restrict-backup-to-dba
    tools: [create_backup]
    action: deny
    conditions:
      - field: context.role
        operator: not_equals
        value: "dba"
    reason: "Only DBA role can create backups"

  - id: block-external-emails
    tools: [send_email]
    action: deny
    conditions:
      - field: arguments.to
        operator: not_matches
        value: "^.+@company\\.com$"
    reason: "Agent can only email internal addresses"

  - id: approve-user-deletion
    tools: [delete_user]
    action: require_approval
    conditions:
      - field: context.caller_role
        operator: not_equals
        value: "admin"
    approval:
      timeout_minutes: 60
      notify: [security@company.com]
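The matches operator in these rules is assumed to behave like an anchored regular expression over the field value. As a quick sanity check, the destructive-query pattern can be exercised in plain Python:

```python
import re

# Same pattern as the block-destructive-queries rule above.
DESTRUCTIVE = re.compile(r"^(DROP|TRUNCATE|DELETE FROM)\s")

assert DESTRUCTIVE.match("DROP TABLE users")
assert DESTRUCTIVE.match("DELETE FROM users WHERE last_login < now()")
assert not DESTRUCTIVE.match("SELECT * FROM users")   # reads pass through
assert not DESTRUCTIVE.match("drop table users")      # case-sensitive as written
```

Note that the pattern as written is case-sensitive: drop table users would slip through, so a production rule would likely want a case-insensitive match (for example a (?i) prefix, if Veto's matcher supports it).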

Before and after

Your agent definition stays the same. Authorization wraps tool execution, not agent logic. The LLM never sees Veto. It just sees tools that sometimes return "blocked" responses.

Without Veto
before.py
from pydantic_ai import Agent, RunContext
from pydantic import BaseModel
from dataclasses import dataclass

@dataclass
class Deps:
    db: DatabaseClient
    mailer: EmailClient

agent = Agent("openai:gpt-5.4", deps_type=Deps)

@agent.tool
async def delete_user(ctx: RunContext[Deps], user_id: str) -> str:
    """Delete a user account and all associated data."""
    await ctx.deps.db.execute(
        "DELETE FROM users WHERE id = $1", user_id
    )
    return f"Deleted user {user_id}"

@agent.tool
async def send_email(
    ctx: RunContext[Deps],
    to: str,
    subject: str,
    body: str,
) -> str:
    """Send an email to a recipient."""
    await ctx.deps.mailer.send(to=to, subject=subject, body=body)
    return f"Sent email to {to}"

result = await agent.run(
    "Delete inactive users and notify the team",
    deps=Deps(db=db, mailer=mailer),
)

With Veto
after.py
from pydantic_ai import Agent, RunContext
from pydantic import BaseModel
from dataclasses import dataclass
from veto import Veto

veto = Veto(api_key="veto_live_...")

@dataclass
class Deps:
    db: DatabaseClient
    mailer: EmailClient
    user_role: str
    user_id: str

agent = Agent("openai:gpt-5.4", deps_type=Deps)

@agent.tool
async def delete_user(ctx: RunContext[Deps], user_id: str) -> str:
    """Delete a user account and all associated data."""
    decision = await veto.guard(
        tool="delete_user",
        arguments={"user_id": user_id},
        context={
            "caller_role": ctx.deps.user_role,
            "caller_id": ctx.deps.user_id,
        },
    )

    if decision.decision == 'deny':
        return f"Blocked: {decision.reason}"

    if decision.decision == 'require_approval':
        return f"Pending approval (id: {decision.approval_id})"

    await ctx.deps.db.execute(
        "DELETE FROM users WHERE id = $1", user_id
    )
    return f"Deleted user {user_id}"

@agent.tool
async def send_email(
    ctx: RunContext[Deps],
    to: str,
    subject: str,
    body: str,
) -> str:
    """Send an email to a recipient."""
    decision = await veto.guard(
        tool="send_email",
        arguments={"to": to, "subject": subject, "body": body},
        context={"caller_role": ctx.deps.user_role},
    )

    if decision.decision == 'deny':
        return f"Blocked: {decision.reason}"

    await ctx.deps.mailer.send(to=to, subject=subject, body=body)
    return f"Sent email to {to}"

result = await agent.run(
    "Delete inactive users and notify the team",
    deps=Deps(
        db=db,
        mailer=mailer,
        user_role="admin",
        user_id="usr_123",
    ),
)

Advanced: tool preparation with Veto

PydanticAI's prepare function lets you conditionally hide tools from the LLM based on runtime context. Combined with Veto, you can remove tools entirely for users who can never call them, reducing the LLM's attack surface.

prepare_example.py
from pydantic_ai import Agent, RunContext
from pydantic_ai.tools import ToolDefinition
from veto import Veto

veto = Veto(api_key="veto_live_...")

async def veto_prepare(
    ctx: RunContext[Deps],  # Deps as defined in the quickstart above
    tool_def: ToolDefinition,
) -> ToolDefinition | None:
    """
    PydanticAI prepare function that checks Veto policies
    before the tool is even shown to the LLM.
    If the user's role can never call this tool, hide it entirely.
    """
    can_use = await veto.check_access(
        tool=tool_def.name,
        context={
            "user": ctx.deps.current_user,
            "role": ctx.deps.current_role,
        },
    )
    if not can_use:
        return None
    return tool_def

@agent.tool(prepare=veto_prepare)
async def drop_table(ctx: RunContext[Deps], table: str) -> str:
    """Drop a database table. Requires admin role."""
    decision = await veto.guard(
        tool="drop_table",
        arguments={"table": table},
        context={
            "user": ctx.deps.current_user,
            "role": ctx.deps.current_role,
        },
    )

    if decision.decision == 'deny':
        return f"Blocked: {decision.reason}"

    if decision.decision == 'require_approval':
        return f"Pending approval: {decision.approval_id}"

    await ctx.deps.db.execute(f"DROP TABLE {table}")
    return f"Dropped table {table}"

This is defense in depth: the prepare function hides the tool so the LLM never even considers calling it. If the LLM somehow still tries (via prompt injection or a different tool path), the veto.guard() call inside the tool body blocks execution.

How Pydantic validation and Veto interact

1. LLM generates tool call

The model decides to call run_query with arguments {"query": "DROP TABLE users"}.

2. Pydantic validates the schema

PydanticAI checks that query is a string. It is. Validation passes. The argument is well-formed.

3. Veto evaluates the policy

Veto sees the tool name, the validated arguments, and the user context. The policy matches block-destructive-queries and returns deny with the reason "Destructive SQL operations are never allowed."

4. Agent receives the denial

The tool returns "Query blocked: Destructive SQL operations are never allowed." The agent can inform the user, try a different approach, or request elevated permissions.
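Condensed into code, those four steps look roughly like this, with Pydantic's type check and Veto's policy evaluation reduced to local functions for illustration:

```python
import re

def validate(query) -> str:
    # Step 2: schema validation only checks the type, not the content.
    if not isinstance(query, str):
        raise TypeError("query must be a string")
    return query

def evaluate_policy(query: str) -> tuple[str, str]:
    # Step 3: the policy looks at content, not just shape.
    if re.match(r"^(DROP|TRUNCATE|DELETE FROM)\s", query):
        return "deny", "Destructive SQL operations are never allowed"
    return "allow", ""

def run_query(query) -> str:
    query = validate(query)                    # well-formed?
    decision, reason = evaluate_policy(query)  # permitted?
    if decision == "deny":
        return f"Query blocked: {reason}"      # step 4: agent sees the denial
    return "Returned 0 rows"

print(run_query("DROP TABLE users"))
# Query blocked: Destructive SQL operations are never allowed
```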

Frequently asked questions

Does Veto work with PydanticAI's streaming mode?

Yes. Veto intercepts tool calls before execution, independent of how the response is delivered. Streaming responses work normally. Authorization completes inside the tool body before any side effects occur.

Can I use Veto with any LLM provider?

Absolutely. PydanticAI is model-agnostic and so is Veto. Whether you're using OpenAI, Anthropic, Gemini, Groq, or a local model, the authorization layer works identically. Switch models without changing your policies.

How does this differ from PydanticAI's built-in tool approval?

PydanticAI's human-in-the-loop tool approval is a binary gate: require approval or don't. Veto evaluates arguments, user context, and policy rules to make fine-grained decisions. You can allow SELECTs but block DROPs, allow internal emails but require approval for external ones, and rate-limit expensive operations.

What's the performance impact?

Minimal. Veto's Python SDK evaluates policies in-process, typically in under 10ms. The LLM inference that triggers the tool call takes 100-2000ms, so authorization overhead is negligible. Cloud mode for audit logging and approval workflows adds a network hop but doesn't block the critical path.

Can I use Veto with PydanticAI toolsets?

Yes. PydanticAI's toolset pattern composes tools from multiple sources. Wrap each tool's implementation with veto.guard() inside the tool body. The toolset composition layer doesn't need to change.

Related integrations

Ship PydanticAI agents that respect boundaries.