OpenAI + Veto
Runtime authorization for OpenAI agents built with the Chat Completions API, Responses API, and Agents SDK. Intercept every function call before execution. Block, approve, or audit.
The problem with OpenAI agents today
OpenAI's function calling lets GPT models trigger real actions in your system: send emails, process payments, modify databases, call external APIs. The model decides what to call, generates the arguments, and your code executes it. There is no authorization step between the model's decision and your code's execution.
OpenAI's own Agents SDK includes input and output guardrails, but these operate at the conversation level, not the tool level. They can check whether a user's message is appropriate, but they cannot enforce "this specific function call with these specific arguments should be blocked." OpenAI acknowledged in their December 2025 Atlas disclosure that prompt injection is "unlikely to ever be fully solved." Authorization is not a model problem. It's an infrastructure problem.
OpenAI's Agents SDK guardrails check input/output messages, not individual function calls. A blocked message stops the agent entirely, not a single dangerous tool call.
GPT models can hallucinate function arguments: wrong email addresses, inflated amounts, nonexistent table names. Pydantic validation catches type errors, not business logic violations.
Neither the Chat Completions API nor the Agents SDK log tool calls with authorization context. For SOC2, HIPAA, or financial compliance, you need to know what was attempted, what was allowed, and what was blocked.
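The gap between type validation and business-logic validation in the second point is easy to demonstrate. A minimal sketch using Pydantic (the `PaymentArgs` model and the $500 cap are illustrative, not part of any SDK):

```python
from pydantic import BaseModel, ValidationError

class PaymentArgs(BaseModel):
    amount: float
    recipient: str

# A hallucinated amount of $50,000 is a perfectly valid float,
# so schema validation accepts it without complaint.
args = PaymentArgs(amount=50000, recipient="vendor@example.com")
print(args.amount)  # 50000.0

# Pydantic does its job on types: a non-numeric amount is rejected.
try:
    PaymentArgs(amount="not a number", recipient="x")
except ValidationError:
    print("type error caught")

# Catching the *business* violation requires a separate rule,
# e.g. an illustrative $500 approval threshold:
PAYMENT_CAP = 500
if args.amount > PAYMENT_CAP:
    print("requires approval")  # this is what schema validation alone never checks
```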
Before and after Veto
The left tab shows standard OpenAI function calling: the model returns tool_calls and your code executes them unconditionally. The right tab adds Veto. Same agent, same tools, every call evaluated against policy first.
import OpenAI from 'openai'

const openai = new OpenAI()

const tools = [
  {
    type: 'function',
    function: {
      name: 'send_email',
      description: 'Send an email',
      parameters: {
        type: 'object',
        properties: {
          to: { type: 'string' },
          subject: { type: 'string' },
          body: { type: 'string' },
        },
        required: ['to', 'subject', 'body'],
      },
    },
  },
  {
    type: 'function',
    function: {
      name: 'process_payment',
      description: 'Process a payment',
      parameters: {
        type: 'object',
        properties: {
          amount: { type: 'number' },
          recipient: { type: 'string' },
        },
        required: ['amount', 'recipient'],
      },
    },
  },
]

const response = await openai.chat.completions.create({
  model: 'gpt-5.4',
  messages: [{ role: 'user', content: userMessage }],
  tools,
})

// GPT says "call process_payment with amount: 50000"
// Your code does it. No policy. No limit. No approval.
const toolCall = response.choices[0].message.tool_calls?.[0]
if (toolCall) {
  const args = JSON.parse(toolCall.function.arguments)
  await executeTool(toolCall.function.name, args)
}

OpenAI Agents SDK (Python)
OpenAI's Agents SDK provides @function_tool for defining tools with automatic schema generation. Veto validates inside each tool function before the side effect occurs. The SDK's own guardrails remain active for input/output screening.
from agents import Agent, Runner, function_tool
from veto import Veto

veto = Veto(api_key="veto_live_xxx")

@function_tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email to a recipient."""
    decision = veto.validate(
        tool="send_email",
        arguments={"to": to, "subject": subject, "body": body},
        context={"user_role": "support_agent"},
    )
    if decision.action != "allow":
        return f"Blocked: {decision.reason}"
    return email_service.send(to, subject, body)

@function_tool
def process_payment(amount: float, recipient: str) -> str:
    """Process a payment transaction."""
    decision = veto.validate(
        tool="process_payment",
        arguments={"amount": amount, "recipient": recipient},
    )
    if decision.action == "require_approval":
        return f"Payment of ${amount} requires approval (ID: {decision.approval_id})"
    if decision.action != "allow":
        return f"Blocked: {decision.reason}"
    return payment_service.charge(amount, recipient)

agent = Agent(
    name="Operations Agent",
    instructions="You handle customer operations including emails and payments.",
    tools=[send_email, process_payment],
)

result = Runner.run_sync(agent, "Send a $5000 payment to vendor@example.com")

Policy configuration
Define authorization rules in declarative YAML. Version control alongside your code. No prompt engineering required. Policies apply regardless of what the model decides.
rules:
  - name: block_competitor_emails
    description: Block emails to competitor domains
    tool: send_email
    when: args.to.endsWith("@competitor.com")
    action: deny
    message: "Cannot send emails to competitor domains"

  - name: approve_large_payments
    description: Require approval for payments over $500
    tool: process_payment
    when: args.amount > 500
    action: require_approval
    approvers: [finance-team]
    timeout: 30m

  - name: block_after_hours_payments
    description: No payments outside business hours
    tool: process_payment
    when: context.time.hour < 9 || context.time.hour > 17
    action: deny
    message: "Payments require business hours (9am-5pm)"

  - name: daily_payment_cap
    description: Cap total daily payments at $10,000
    tool: process_payment
    when: context.daily_total + args.amount > 10000
    action: deny
    message: "Daily payment cap of $10,000 reached"

Quickstart
Install
npm install veto-sdk openai
Python: pip install veto openai-agents
Define policies
Create veto/policies.yaml with rules for each tool. Match on tool name, constrain arguments, set actions.
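The smallest valid veto/policies.yaml is just a rules list with one entry; this sketch reuses the payment-approval rule from the full policy example above:

```yaml
rules:
  - name: approve_large_payments
    description: Require approval for payments over $500
    tool: process_payment
    when: args.amount > 500
    action: require_approval
    approvers: [finance-team]
```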
Validate before executing
Call veto.validate() in your tool handler. Check the decision. Execute only if allowed.
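The control flow of this step, in isolation: a stub stands in for the real Veto client here so the sketch runs without credentials, but the decision object mirrors the shape used in the examples above, and the $500 threshold is illustrative.

```python
import json
from dataclasses import dataclass

# Stub decision object mirroring the shape used in the examples above.
@dataclass
class Decision:
    action: str
    reason: str = ""

def validate(tool: str, arguments: dict) -> Decision:
    """Stand-in for veto.validate(); real rules live in veto/policies.yaml."""
    if tool == "process_payment" and arguments.get("amount", 0) > 500:
        return Decision(action="require_approval", reason="over $500 cap")
    return Decision(action="allow")

def handle_tool_call(name: str, raw_arguments: str) -> str:
    """The quickstart pattern: validate first, execute only if allowed."""
    args = json.loads(raw_arguments)
    decision = validate(name, args)
    if decision.action != "allow":
        return f"Blocked: {decision.reason}"
    return f"executed {name}"  # your real dispatcher goes here

print(handle_tool_call("process_payment", '{"amount": 9000, "recipient": "v"}'))
# → Blocked: over $500 cap
```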
Supported models
Veto works with every OpenAI model that supports function calling.
What Veto covers for OpenAI agents
Function allowlists
Allowlist which functions the agent can call. Block everything else by default. Different rules per environment, user role, or time of day.
Argument validation
Constrain function arguments by pattern, range, or business rule. Block emails to competitor domains. Cap payment amounts. Restrict SQL to read-only.
Approval workflows
Route sensitive function calls to human approval queues. Approvers get Slack or email notifications with full context. One-click allow or deny.
Audit logging
Every function call logged with name, arguments, decision, and timestamp. Queryable via API or dashboard. Export for SOC2 and compliance reporting.
Frequently asked questions
How is Veto different from OpenAI's built-in guardrails?
Does this work with the Responses API and Assistants API?
What happens when a function call is blocked?
Does Veto add latency?
Related integrations
Secure your OpenAI agents in minutes.