OpenAI + Veto

Runtime authorization for OpenAI agents built with the Chat Completions API, Responses API, and Agents SDK. Intercept every function call before execution. Block, approve, or audit.

The problem with OpenAI agents today

OpenAI's function calling lets GPT models trigger real actions in your system: send emails, process payments, modify databases, call external APIs. The model decides what to call, generates the arguments, and your code executes it. There is no authorization step between the model's decision and your code's execution.

OpenAI's own Agents SDK includes input and output guardrails, but these operate on the conversation level, not the tool level. They can check whether a user's message is appropriate, but they cannot enforce "this specific function call with these specific arguments should be blocked." OpenAI acknowledged in their December 2025 Atlas disclosure that prompt injection is "unlikely to ever be fully solved." Authorization is not a model problem. It's an infrastructure problem.

No tool-level auth

OpenAI's Agents SDK guardrails check input/output messages, not individual function calls. A blocked message stops the agent entirely, not a single dangerous tool call.

Hallucinated arguments

GPT models can hallucinate function arguments: wrong email addresses, inflated amounts, nonexistent table names. Pydantic validation catches type errors, not business logic violations.
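To make the gap concrete, here is a minimal plain-Python sketch (no Veto involved; the $500 cap and the email pattern are illustrative, not product defaults). A hallucinated payment amount passes the type check cleanly and only fails the business check:

```python
import re

def type_check(args: dict) -> bool:
    # Schema-level validation: right types, required keys present.
    return (isinstance(args.get("amount"), (int, float))
            and isinstance(args.get("recipient"), str))

def business_check(args: dict) -> list[str]:
    # Policy-level validation: the checks a schema cannot express.
    violations = []
    if args["amount"] > 500:  # illustrative cap
        violations.append("amount exceeds $500 approval threshold")
    if not re.fullmatch(r"[^@]+@[^@]+\.[^@]+", args["recipient"]):
        violations.append("recipient is not a valid email address")
    return violations

# A hallucinated-but-well-typed call: passes the schema, fails the policy.
args = {"amount": 50000, "recipient": "vendor@example.com"}
print(type_check(args))      # True — the types are fine
print(business_check(args))  # the business rule catches it
```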

No audit trail

Neither the Chat Completions API nor the Agents SDK logs tool calls with authorization context. For SOC2, HIPAA, or financial compliance, you need to know what was attempted, what was allowed, and what was blocked.

Before and after Veto

Below is standard OpenAI function calling: the model returns tool_calls and your code executes them unconditionally. With Veto in place, the same agent and the same tools run, but every call is evaluated against policy first.

import OpenAI from 'openai'

const openai = new OpenAI()

const tools = [
  {
    type: 'function',
    function: {
      name: 'send_email',
      description: 'Send an email',
      parameters: {
        type: 'object',
        properties: {
          to: { type: 'string' },
          subject: { type: 'string' },
          body: { type: 'string' },
        },
        required: ['to', 'subject', 'body'],
      },
    },
  },
  {
    type: 'function',
    function: {
      name: 'process_payment',
      description: 'Process a payment',
      parameters: {
        type: 'object',
        properties: {
          amount: { type: 'number' },
          recipient: { type: 'string' },
        },
        required: ['amount', 'recipient'],
      },
    },
  },
]

const response = await openai.chat.completions.create({
  model: 'gpt-5.4',
  messages: [{ role: 'user', content: userMessage }],
  tools,
})

// GPT says "call process_payment with amount: 50000"
// Your code does it. No policy. No limit. No approval.
const toolCall = response.choices[0].message.tool_calls?.[0]
if (toolCall) {
  const args = JSON.parse(toolCall.function.arguments)
  await executeTool(toolCall.function.name, args)
}

OpenAI Agents SDK (Python)

OpenAI's Agents SDK provides @function_tool for defining tools with automatic schema generation. Veto validates inside each tool function before the side effect occurs. The SDK's own guardrails remain active for input/output screening.

openai_agent_with_veto.py
from agents import Agent, Runner, function_tool
from veto import Veto

veto = Veto(api_key="veto_live_xxx")

@function_tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email to a recipient."""
    decision = veto.validate(
        tool="send_email",
        arguments={"to": to, "subject": subject, "body": body},
        context={"user_role": "support_agent"},
    )
    if decision.action != "allow":
        return f"Blocked: {decision.reason}"

    return email_service.send(to, subject, body)

@function_tool
def process_payment(amount: float, recipient: str) -> str:
    """Process a payment transaction."""
    decision = veto.validate(
        tool="process_payment",
        arguments={"amount": amount, "recipient": recipient},
    )
    if decision.action == "require_approval":
        return f"Payment of ${amount} requires approval (ID: {decision.approval_id})"
    if decision.action != "allow":
        return f"Blocked: {decision.reason}"

    return payment_service.charge(amount, recipient)

agent = Agent(
    name="Operations Agent",
    instructions="You handle customer operations including emails and payments.",
    tools=[send_email, process_payment],
)

result = Runner.run_sync(agent, "Send a $5000 payment to vendor@example.com")

Policy configuration

Define authorization rules in declarative YAML. Version control alongside your code. No prompt engineering required. Policies apply regardless of what the model decides.

veto/policies.yaml
rules:
  - name: block_competitor_emails
    description: Block emails to competitor domains
    tool: send_email
    when: args.to.endsWith("@competitor.com")
    action: deny
    message: "Cannot send emails to competitor domains"

  - name: approve_large_payments
    description: Require approval for payments over $500
    tool: process_payment
    when: args.amount > 500
    action: require_approval
    approvers: [finance-team]
    timeout: 30m

  - name: block_after_hours_payments
    description: No payments outside business hours
    tool: process_payment
    when: context.time.hour < 9 || context.time.hour > 17
    action: deny
    message: "Payments require business hours (9am-5pm)"

  - name: daily_payment_cap
    description: Cap total daily payments at $10,000
    tool: process_payment
    when: context.daily_total + args.amount > 10000
    action: deny
    message: "Daily payment cap of $10,000 reached"
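Veto evaluates these rules for you; the engine itself is not shown here. As a mental model, though, each rule is just a predicate over the arguments plus an action. The sketch below mirrors two of the YAML rules above in plain Python (illustrative only, not Veto's implementation):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    tool: str
    when: Callable[[dict], bool]  # the YAML `when` expression
    action: str                   # deny | require_approval

# Mirrors block_competitor_emails and approve_large_payments above.
RULES = [
    Rule("block_competitor_emails", "send_email",
         lambda a: a["to"].endswith("@competitor.com"), "deny"),
    Rule("approve_large_payments", "process_payment",
         lambda a: a["amount"] > 500, "require_approval"),
]

def evaluate(tool: str, args: dict) -> str:
    # First matching rule wins; no match means the call is allowed.
    for rule in RULES:
        if rule.tool == tool and rule.when(args):
            return rule.action
    return "allow"

print(evaluate("send_email", {"to": "bob@competitor.com"}))  # deny
print(evaluate("process_payment", {"amount": 1200}))         # require_approval
print(evaluate("process_payment", {"amount": 50}))           # allow
```

Because the rules are data rather than prompt text, the same set applies no matter what the model decides to call.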

Quickstart

1

Install

npm install veto-sdk openai

Python: pip install veto openai-agents

2

Define policies

Create veto/policies.yaml with rules for each tool. Match on tool name, constrain arguments, set actions.

3

Validate before executing

Call veto.validate() in your tool handler. Check the decision. Execute only if allowed.
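If you have many tools, the validate-then-execute pattern can be factored into a decorator so the tool bodies stay clean. In this sketch the SDK call is replaced by a local stub (fake_validate) so it runs standalone; in real code you would call the Veto client shown earlier:

```python
import functools
from types import SimpleNamespace

def fake_validate(tool: str, arguments: dict) -> SimpleNamespace:
    # Stand-in for the Veto SDK call; the real check runs your YAML policies.
    if tool == "process_payment" and arguments.get("amount", 0) > 500:
        return SimpleNamespace(action="deny", reason="amount exceeds $500 limit")
    return SimpleNamespace(action="allow", reason=None)

def guarded(fn):
    """Validate keyword arguments against policy before running the tool."""
    @functools.wraps(fn)
    def wrapper(**kwargs):
        decision = fake_validate(fn.__name__, kwargs)
        if decision.action != "allow":
            return f"Blocked: {decision.reason}"
        return fn(**kwargs)
    return wrapper

@guarded
def process_payment(amount: float, recipient: str) -> str:
    return f"Charged ${amount} to {recipient}"

print(process_payment(amount=50, recipient="vendor@example.com"))    # executes
print(process_payment(amount=5000, recipient="vendor@example.com"))  # blocked
```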

Supported models

Veto works with every OpenAI model that supports function calling.

GPT-5.4
GPT-5.4 Thinking
GPT-5.4-mini
GPT-5.4-nano
GPT-5.3-Codex

What Veto covers for OpenAI agents

Function allowlists

Allowlist which functions the agent can call. Block everything else by default. Different rules per environment, user role, or time of day.

Argument validation

Constrain function arguments by pattern, range, or business rule. Block emails to competitor domains. Cap payment amounts. Restrict SQL to read-only.

Approval workflows

Route sensitive function calls to human approval queues. Approvers get Slack or email notifications with full context. One-click allow or deny.

Audit logging

Every function call logged with name, arguments, decision, and timestamp. Queryable via API or dashboard. Export for SOC2 and compliance reporting.
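As a sketch of what such a record might contain (field names here are assumed from the description above, not Veto's export schema), one JSON object per line is easy to export, filter, and hand to an auditor:

```python
import io
import json
from datetime import datetime, timezone

log = io.StringIO()  # stand-in for a real log file or export stream

def audit(tool: str, arguments: dict, decision: str) -> None:
    # One JSON object per line: name, arguments, decision, timestamp.
    record = {
        "tool": tool,
        "arguments": arguments,
        "decision": decision,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    log.write(json.dumps(record) + "\n")

audit("process_payment", {"amount": 50000, "recipient": "x@y.com"}, "deny")
audit("send_email", {"to": "a@b.com"}, "allow")

# "What was blocked?" — filter the records by decision.
records = [json.loads(line) for line in log.getvalue().splitlines()]
blocked = [r for r in records if r["decision"] == "deny"]
print(len(blocked))  # 1
```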

Frequently asked questions

How is Veto different from OpenAI's built-in guardrails?
OpenAI's Agents SDK guardrails are input/output guards: they check the user's message or the agent's final response. Veto operates at the function call level. It evaluates each individual tool invocation against your policies before the function runs. They complement each other: use OpenAI guardrails for content screening, Veto for tool-level authorization.
Does this work with the Responses API and Assistants API?
Yes. Veto validates tool calls regardless of which OpenAI API produced them. Whether you use Chat Completions, the Responses API, or the Assistants API, the integration point is the same: validate before executing the function.
What happens when a function call is blocked?
You control the behavior. Return an error message to the model so it can adjust. Return a fallback value. Or raise an exception. All blocked calls are logged with full context including the denial reason.
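The three options can be sketched in a few lines (VetoBlockedError is a hypothetical name for illustration, not an SDK class):

```python
class VetoBlockedError(Exception):
    """Illustrative exception type; the name is assumed, not from the SDK."""

def handle_blocked(decision: dict, mode: str = "message"):
    # Three ways to surface a denial, per the answer above.
    if mode == "message":    # let the model see why and adjust its plan
        return f"Blocked: {decision['reason']}"
    if mode == "fallback":   # degrade gracefully to a safe default
        return "draft_saved_instead"
    raise VetoBlockedError(decision["reason"])  # fail loudly

d = {"action": "deny", "reason": "competitor domain"}
print(handle_blocked(d))              # Blocked: competitor domain
print(handle_blocked(d, "fallback"))  # draft_saved_instead
```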
Does Veto add latency?
Policy evaluation typically completes in under 10ms. Negligible compared to model inference time. Human approval workflows pause until a reviewer responds, but standard allow/block decisions are immediate.

Related integrations

Secure your OpenAI agents in minutes.