
Vercel AI SDK + Veto

Runtime tool authorization for streaming agents. Intercept every tool call in your generateText and streamText workflows, enforce policies, and route sensitive operations to human approval.

What are Vercel AI SDK guardrails?

Vercel AI SDK guardrails are runtime controls that intercept tool calls made by AI agents built with the AI SDK. When an agent calls generateText or streamText with tools, each tool invocation is evaluated against your authorization policies before execution. Allowed calls proceed. Denied calls return an error the agent can reason about. Sensitive operations get routed to human approval.
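
The three outcomes can be sketched as a small discriminated union. This is a hedged illustration of the shape used throughout this guide (`decision`, `reason`, `approvalId`), not necessarily Veto's exact response type:

```typescript
// The three outcomes a guarded tool call can see.
type Decision =
  | { decision: "allow" }
  | { decision: "deny"; reason: string }
  | { decision: "require_approval"; approvalId: string }

// A guarded executor: run the tool only when the decision is "allow";
// otherwise return something the agent can reason about.
function handle(d: Decision, run: () => string): string {
  switch (d.decision) {
    case "allow":
      return run()
    case "deny":
      return `Blocked: ${d.reason}`
    case "require_approval":
      return `Pending approval ${d.approvalId}`
  }
}
```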

The problem: agents that can do anything

AI SDK 6 introduced first-class agent support with multi-step tool calling, streaming responses, and human-in-the-loop via needsApproval. But needsApproval is a coarse per-tool switch. It cannot express "allow deletes in /tmp but block deletes in /etc" or "require approval for emails to external domains." Real authorization requires policy logic, not flags.
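
The difference in expressive power can be sketched in plain TypeScript. The names below are hypothetical, not the AI SDK or Veto APIs:

```typescript
// A per-tool flag can only say "always ask" or "never ask" for a tool.
const deleteNeedsApproval = true

// Real authorization is a predicate over the call's actual arguments.
// Hypothetical policy: deletes under /tmp are fine, system paths never are,
// and anything else goes to a human.
function authorizeDelete(path: string): "allow" | "deny" | "require_approval" {
  if (/^\/(etc|usr|bin)\//.test(path)) return "deny"
  if (path.startsWith("/tmp/")) return "allow"
  return "require_approval"
}
```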

An agent hallucinating a wrong answer is annoying. An agent hallucinating a wrong tool call can delete your production database, send emails to customers, or deploy untested code. The risk scales with the number of tools you expose.

Filesystem access

Agent asked to "clean up logs" decides to delete config files, environment variables, or SSH keys in the process.

Database mutations

Agent running analytics queries decides a TRUNCATE or DROP would be a faster way to "reset" a table.

External communication

Agent with email tools sends messages to external addresses, leaking internal information or triggering compliance violations.

Infrastructure changes

Agent with deploy tools pushes unreviewed code to production or scales infrastructure beyond budget limits.

Quickstart

Install the SDK, wrap your tool executions with veto.guard(), and define policies in YAML. Takes about 5 minutes per tool.

1. Install

npm install veto-sdk ai @ai-sdk/openai zod

2. Wrap your tools with a guard helper

agent.ts
import { generateText, stepCountIs, tool } from "ai"
import { openai } from "@ai-sdk/openai"
import { z } from "zod"
import { Veto } from "veto-sdk"
import fs from "node:fs/promises"

const veto = await Veto.init({ apiKey: process.env.VETO_API_KEY })

function guardedTool<T extends z.ZodType>(opts: {
  description: string
  inputSchema: T
  toolName: string
  execute: (args: z.infer<T>) => Promise<unknown>
}) {
  return tool({
    description: opts.description,
    inputSchema: opts.inputSchema,
    execute: async (args) => {
      const decision = await veto.guard({
        tool: opts.toolName,
        arguments: args,
      })

      if (decision.decision === "deny") {
        return { error: `Blocked: ${decision.reason}` }
      }

      if (decision.decision === "require_approval") {
        return { pending: true, approvalId: decision.approvalId }
      }

      return opts.execute(args)
    },
  })
}

const deleteFile = guardedTool({
  toolName: "delete_file",
  description: "Delete a file from the filesystem",
  inputSchema: z.object({
    path: z.string().describe("File path to delete"),
  }),
  execute: async ({ path }) => {
    await fs.unlink(path)
    return { deleted: path }
  },
})

const queryDatabase = guardedTool({
  toolName: "query_database",
  description: "Run a SQL query",
  inputSchema: z.object({
    query: z.string().describe("SQL query to execute"),
  }),
  execute: async ({ query }) => {
    const rows = await db.query(query) // db: your database client
    return { rows, count: rows.length }
  },
})

const result = await generateText({
  model: openai("gpt-5.4"),
  tools: { delete_file: deleteFile, query_database: queryDatabase },
  stopWhen: stepCountIs(10),
  prompt: "Clean up stale user sessions older than 30 days",
})

3. Define authorization policies

veto/policies.yaml
version: "1.0"
name: Vercel AI SDK agent policies

rules:
  - id: block-system-file-deletion
    tools: [delete_file]
    action: deny
    conditions:
      - field: arguments.path
        operator: matches
        value: "^/(etc|usr|bin|sys|proc)/.*"
    reason: "System directory deletion is never allowed"

  - id: approve-production-deploys
    tools: [deploy]
    action: require_approval
    conditions:
      - field: context.environment
        operator: equals
        value: "production"
    approval:
      timeout_minutes: 15
      notify: [ops-team@company.com]

  - id: limit-email-recipients
    tools: [send_email]
    action: deny
    conditions:
      - field: arguments.to
        operator: not_matches
        value: "^.+@company\\.com$"
    reason: "Agents can only email internal addresses"

  - id: block-destructive-queries
    tools: [query_database]
    action: deny
    conditions:
      - field: arguments.query
        operator: matches
        value: "^(DROP|TRUNCATE|DELETE FROM)\\s"
    reason: "Destructive SQL operations blocked"
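
To make the semantics of the `matches` and `not_matches` operators concrete, here is a minimal, hypothetical evaluator for rules of this shape in plain TypeScript. It is a sketch of the idea, not Veto's actual policy engine:

```typescript
// Hypothetical mirror of a YAML rule: act when a field matches a pattern.
type Rule = {
  id: string
  action: "deny" | "require_approval"
  field: string // dot path into the call, e.g. "arguments.path"
  operator: "matches" | "not_matches"
  pattern: string
}

const rules: Rule[] = [
  { id: "block-system-file-deletion", action: "deny",
    field: "arguments.path", operator: "matches",
    pattern: "^/(etc|usr|bin|sys|proc)/.*" },
  { id: "block-destructive-queries", action: "deny",
    field: "arguments.query", operator: "matches",
    pattern: "^(DROP|TRUNCATE|DELETE FROM)\\s" },
]

function evaluate(call: Record<string, unknown>): { decision: string; ruleId?: string } {
  for (const rule of rules) {
    // Walk the dot path ("arguments.path") into the call object.
    const value = rule.field.split(".").reduce<any>((o, k) => o?.[k], call)
    if (typeof value !== "string") continue // field absent: rule doesn't apply
    const hit = new RegExp(rule.pattern).test(value)
    if (rule.operator === "matches" ? hit : !hit) {
      return { decision: rule.action, ruleId: rule.id }
    }
  }
  return { decision: "allow" } // no rule fired
}
```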

Before and after

Your agent code stays the same. Authorization wraps the tool execution, not the agent logic.

Without Veto
before.ts
import { generateText, stepCountIs, tool } from "ai"
import { openai } from "@ai-sdk/openai"
import { z } from "zod"
import fs from "node:fs/promises"

const result = await generateText({
  model: openai("gpt-5.4"),
  tools: {
    delete_file: tool({
      description: "Delete a file from the filesystem",
      inputSchema: z.object({
        path: z.string().describe("File path to delete"),
      }),
      execute: async ({ path }) => {
        await fs.unlink(path)
        return { deleted: path }
      },
    }),
    send_email: tool({
      description: "Send an email",
      inputSchema: z.object({
        to: z.string(),
        subject: z.string(),
        body: z.string(),
      }),
      execute: async ({ to, subject, body }) => {
        await mailer.send({ to, subject, body }) // mailer: your email client
        return { sent: true }
      },
    }),
  },
  stopWhen: stepCountIs(10),
  prompt: "Delete old logs and email the team a summary",
})
With Veto
after.ts
import { generateText, stepCountIs, tool } from "ai"
import { openai } from "@ai-sdk/openai"
import { z } from "zod"
import { Veto } from "veto-sdk"
import fs from "node:fs/promises"

const veto = await Veto.init({
  apiKey: process.env.VETO_API_KEY,
  projectId: "proj_abc123",
})

const result = await generateText({
  model: openai("gpt-5.4"),
  tools: {
    delete_file: tool({
      description: "Delete a file from the filesystem",
      inputSchema: z.object({
        path: z.string().describe("File path to delete"),
      }),
      execute: async ({ path }) => {
        const decision = await veto.guard({
          tool: "delete_file",
          arguments: { path },
          context: { user: currentUser.id }, // currentUser: your session's user
        })

        if (decision.decision === "deny") {
          return { error: decision.reason }
        }

        if (decision.decision === "require_approval") {
          return {
            status: "pending_approval",
            approvalId: decision.approvalId,
          }
        }

        await fs.unlink(path)
        return { deleted: path }
      },
    }),
    send_email: tool({
      description: "Send an email",
      inputSchema: z.object({
        to: z.string(),
        subject: z.string(),
        body: z.string(),
      }),
      execute: async ({ to, subject, body }) => {
        const decision = await veto.guard({
          tool: "send_email",
          arguments: { to, subject, body },
          context: { user: currentUser.id },
        })

        if (decision.decision === "deny") {
          return { error: decision.reason }
        }

        await mailer.send({ to, subject, body }) // mailer: your email client
        return { sent: true }
      },
    }),
  },
  stopWhen: stepCountIs(10),
  prompt: "Delete old logs and email the team a summary",
})

Streaming authorization

When agents stream responses with streamText, tool calls happen mid-stream. Veto evaluates each call in under 10ms, so streaming stays responsive. Denied tool calls return error messages that the agent can reason about and adapt to in real time.

streaming.ts
import { stepCountIs, streamText, tool } from "ai"
import { openai } from "@ai-sdk/openai"
import { z } from "zod"
import { Veto } from "veto-sdk"

const veto = await Veto.init({ apiKey: process.env.VETO_API_KEY })

const result = streamText({
  model: openai("gpt-5.4"),
  tools: {
    deploy: tool({
      description: "Deploy to production",
      inputSchema: z.object({
        service: z.string(),
        version: z.string(),
      }),
      execute: async ({ service, version }) => {
        const decision = await veto.guard({
          tool: "deploy",
          arguments: { service, version },
          context: {
            environment: "production",
            user: currentUser.id, // currentUser: your session's user
            role: currentUser.role,
          },
        })

        if (decision.decision === "deny") {
          return { error: decision.reason }
        }

        if (decision.decision === "require_approval") {
          return {
            status: "awaiting_approval",
            approvalId: decision.approvalId,
            message: "Production deploy requires team lead approval",
          }
        }

        await deployService(service, version) // deployService: your deploy helper
        return { deployed: true, service, version }
      },
    }),
  },
  stopWhen: stepCountIs(5),
  prompt: "Deploy the billing service v2.3.1 to production",
})

for await (const chunk of result.textStream) {
  process.stdout.write(chunk)
}

How it works with AI SDK 6

AI SDK 6 introduced needsApproval for basic human-in-the-loop control. Veto complements this with fine-grained policy evaluation.

Capability                         AI SDK needsApproval    Veto
Per-tool approval flag             Yes                     Yes
Argument-level conditions          No                      Yes
User/role-based policies           No                      Yes
Approval routing (Slack, email)    No                      Yes
YAML policy-as-code                No                      Yes
Audit logging                      No                      Yes
Rate limiting per tool             No                      Yes
Dashboard + monitoring             No                      Yes

Authorization patterns

In-process evaluation

Policy evaluation runs in your Node.js process. No network hop for local policies. Sub-10ms decisions keep streaming agents responsive.

Context-aware rules

Policies can reference user identity, role, environment, time of day, and session state for dynamic authorization decisions.
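
As an illustration, a context-aware rule might combine role, environment, and time of day. This is a self-contained sketch with hypothetical names and thresholds, not Veto's policy syntax:

```typescript
// Hypothetical context an agent host might attach to each tool call.
type CallContext = {
  role: "admin" | "engineer" | "analyst"
  environment: "production" | "staging"
  hourUtc: number // 0-23
}

// Sketch: production deploys outside business hours, or by non-admins,
// are escalated to a human instead of executing directly.
function authorizeDeploy(ctx: CallContext): "allow" | "require_approval" {
  const businessHours = ctx.hourUtc >= 9 && ctx.hourUtc < 17
  if (ctx.environment === "production" && (!businessHours || ctx.role !== "admin")) {
    return "require_approval"
  }
  return "allow"
}
```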

Graceful denials

Denied tool calls return structured error responses. The agent receives the denial reason and can retry with different arguments or inform the user.

Multi-step safety

With multi-step execution enabled, agents chain multiple tool calls. Each step is authorized independently, preventing escalation across a multi-step workflow.
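
The per-step idea can be sketched with a stubbed guard (plain TypeScript, hypothetical names): a denial at one step is surfaced as a result, and neighboring steps are still judged on their own merits.

```typescript
type Step = { tool: string; args: Record<string, string> }

// Stub guard standing in for a policy engine: deny anything touching /etc.
function guard(step: Step): "allow" | "deny" {
  return step.args.path?.startsWith("/etc/") ? "deny" : "allow"
}

// Each step in a multi-step plan is authorized independently; one denial
// neither aborts the plan nor grants authorization to other steps.
function runPlan(plan: Step[]): string[] {
  return plan.map((step) => {
    const decision = guard(step)
    return decision === "deny" ? `${step.tool}: blocked` : `${step.tool}: executed`
  })
}
```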

Frequently asked questions

How is this different from AI SDK's built-in needsApproval?
needsApproval is a per-tool boolean or function that pauses execution for all invocations of that tool. Veto evaluates the actual arguments, user context, and policy rules to make fine-grained decisions. You can allow delete_file for /tmp paths while blocking /etc paths, or allow emails to internal addresses while requiring approval for external ones. needsApproval can't express these conditions.
Does authorization slow down streaming responses?
No measurable impact. Veto's in-process SDK evaluates policies in under 10ms. LLM inference takes 100-2000ms per step. Authorization is negligible by comparison. Cloud mode adds a network hop for approval workflows and audit logging, but doesn't block the critical path for allowed operations.
Does this work with useChat and useCompletion hooks?
Yes. Authorization operates at the tool execution level, which works with all AI SDK patterns: generateText, streamText, useChat, useCompletion, and custom server actions. Tool calls are intercepted the same way regardless of how the agent is invoked.
What happens when an agent's tool call is denied mid-stream?
The tool returns an error response that streams to the client. The agent sees the denial reason and can retry with modified arguments, try an alternative approach, or explain the limitation to the user. All denials are logged with full context for auditing.
Can I use Veto with AI SDK's agent abstraction?
Yes. AI SDK 6's agent abstraction composes tools from separate files. Wrap each tool's execute function with veto.guard() at definition time. The agent is unaware of the authorization layer. Tools just work or return errors.

Related integrations

Ship AI SDK agents that respect boundaries.