Use Cases/Customer Support

Your AI chatbot just promised a refund policy that does not exist. Now you have to honor it.

Support agents interact directly with customers. A single hallucinated response can create legal liability, damage trust, or process unauthorized refunds. Veto validates every action before it reaches the customer, enforcing tiered refund limits, escalation rules, and PII protection that the model cannot override.

Tiered refund policies · Auto-escalation · PII redaction

Courts are already ruling against companies

In February 2024, a Canadian tribunal ruled Air Canada must honor a refund policy fabricated by its chatbot. The chatbot told a customer he could book a full-price ticket and apply for a bereavement discount retroactively — a policy that did not exist. Air Canada argued the chatbot was "a separate legal entity." The judge called it "a remarkable submission" and ruled the company liable. Separately, DPD's chatbot called itself "the worst delivery company in the world" and swore at customers after a system update. On Veto's homepage, we show an $8,900 refund on a flagged account — the kind of action that should never be auto-approved.

Why customer support AI needs runtime controls

Customer support agents interact directly with your customers. A single bad response can create legal liability (Air Canada), damage brand reputation (DPD), or process unauthorized refunds. Prompt instructions cannot guarantee safe behavior — the model can hallucinate policies, ignore tone guidelines, or process actions outside its authority. Runtime guardrails operate independently of the model's reasoning and cannot be bypassed.

Hallucinated policies

AI fabricates refund policies, discount codes, or guarantees that do not exist. Courts hold companies liable for chatbot promises.

Data exposure

PII leakage, unauthorized account access, or exposure of internal systems and processes. One response can expose another customer's data.

Uncontrolled refunds

Agent processes $8,900 refund on a flagged account the same way it processes a $12 refund on a good-standing account. No tiering. No limits.

Tiered refund and escalation policies

Define exactly what your support agent can authorize, what requires approval, and what must be escalated to a human. These are the policies that would have prevented the Air Canada incident and the $8,900 flagged-account scenario on Veto's homepage.

veto/policies/support.yaml
policies:
  # Tiered refund authorization
  - name: "Auto-approve small refunds"
    match:
      tool: "process_refund"
      arguments:
        amount: { "$lte": 50 }
        account_status: "good_standing"
    action: allow

  - name: "Approve medium refunds"
    match:
      tool: "process_refund"
      arguments:
        amount: { "$gt": 50, "$lte": 500 }
    action: require_approval
    approval:
      timeout_minutes: 30
      channels: [slack]

  - name: "Block high-value refunds"
    match:
      tool: "process_refund"
      arguments:
        amount: { "$gt": 500 }
    action: deny
    response:
      error: "Refunds over $500 require manager processing"

  - name: "Block refunds on flagged accounts"
    match:
      tool: "process_refund"
      arguments:
        account_status: "flagged"
    action: deny
    response:
      error: "Flagged accounts require manual refund processing"

  # Escalation rules
  - name: "Escalate legal mentions"
    match:
      tool: ["send_response", "close_ticket"]
      arguments:
        message: "(?i)(legal|lawsuit|attorney|sue|court)"
    action: deny
    escalate_to: "human_support"
    response:
      error: "Escalating to human agent — legal mention detected"

  # Response validation
  - name: "Block fabricated policies"
    match:
      tool: "send_response"
      arguments:
        content: "(?i)(guaranteed|always|never|100%|promise)"
    action: require_approval
    approval:
      reason: "Response contains absolute claims requiring review"

  # PII redaction
  - name: "Redact sensitive data in responses"
    match:
      tool: "send_response"
    transform:
      redact_patterns:
        - pattern: '\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}'
          replacement: "[CARD REDACTED]"
        - pattern: '\d{3}-\d{2}-\d{4}'
          replacement: "[SSN REDACTED]"

  # Block unauthorized discount codes
  - name: "Block unauthorized discounts"
    match:
      tool: "send_response"
      arguments:
        content: "(?i)(DISCOUNT|VIPCODE|FRIENDS50|PROMO)"
    action: deny
    response:
      error: "Discount codes must be from the approved list"
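The tiered refund rules above can be sketched in plain Python. This is an illustrative sketch only, not Veto's actual SDK; it assumes first-match, default-deny semantics, mirroring the order of the rules in the policy file:

```python
# Illustrative sketch of the tiered refund policy -- not Veto's actual SDK.
# Rules are checked top-down, mirroring the YAML order; no match means deny.
def evaluate_refund(amount: float, account_status: str) -> str:
    """Return the policy decision for a process_refund call."""
    if amount <= 50 and account_status == "good_standing":
        return "allow"              # small refunds on healthy accounts auto-approve
    if 50 < amount <= 500:
        return "require_approval"   # medium refunds route to a human via Slack
    if amount > 500:
        return "deny"               # high-value refunds need manager processing
    if account_status == "flagged":
        return "deny"               # flagged accounts require manual processing
    return "deny"                   # safe default when no rule matches

# The three homepage scenarios:
print(evaluate_refund(12, "good_standing"))    # allow
print(evaluate_refund(450, "good_standing"))   # require_approval
print(evaluate_refund(8900, "flagged"))        # deny
```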

Real-world scenarios

The flagged-account refund

On Veto's homepage, we demonstrate three refund requests hitting the same agent: a $12 routine refund (auto-approved), a $450 high-value refund (routed for approval), and an $8,900 refund on a flagged account (blocked). Without Veto, the agent processes all three identically. With Veto, each gets the appropriate level of scrutiny.

The hallucinated policy

Air Canada's chatbot fabricated a bereavement fare discount policy, and the company was held legally liable for it. A Veto policy that blocks responses containing absolute claims ("guaranteed", "always", "promise") and routes them for human review would have caught the response before the customer relied on it.
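The absolute-claims check is a plain regex match; a minimal sketch of how such a filter might flag a response for review, using the same pattern as the "Block fabricated policies" rule in the policy file above:

```python
import re

# Same pattern as the "Block fabricated policies" rule above.
ABSOLUTE_CLAIMS = re.compile(r"(?i)(guaranteed|always|never|100%|promise)")

def needs_review(response: str) -> bool:
    """Flag responses containing absolute claims for human approval."""
    return ABSOLUTE_CLAIMS.search(response) is not None

print(needs_review("Bereavement fares are always refundable retroactively."))  # True
print(needs_review("I've escalated your fare question to our support team."))  # False
```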

The brand reputation attack

DPD's chatbot called itself "the worst delivery company in the world" after a system update. Tone validation policies that flag negative sentiment and block self-deprecating responses would have prevented this from reaching customers. Response validation operates independently of the model's reasoning.

The legal escalation

A customer mentions "attorney" or "lawsuit" in a chat. Veto automatically blocks the AI response and escalates to a human agent. No AI-generated response reaches the customer for legally sensitive interactions. The escalation is logged for compliance.

With vs without guardrails

| Scenario | Prompt-only | Veto |
| --- | --- | --- |
| $8,900 refund on flagged account | Processed | Blocked |
| Fabricated bereavement fare policy | Sent to customer | Held for review |
| Customer mentions lawsuit | AI responds | Escalated to human |
| Credit card number in response | Exposed | Auto-redacted |
| Unauthorized discount code | Sent | Blocked |
| "You are the worst company" | Model may agree | Tone check blocks |
| Can model bypass controls? | Yes — model can ignore | No — enforced at runtime |

Benefits for support teams

Faster resolution times

AI handles routine inquiries instantly while guardrails ensure quality. Human agents focus on complex issues that require judgment.

Legal protection

After Air Canada, companies are liable for chatbot statements. Guardrails prevent fabricated policies from reaching customers.

Data protection

Automatic PII detection and redaction prevents accidental exposure of customer data or internal system details in responses.

Complete audit trails

Every AI response logged with full context. Track resolution quality, identify training gaps, and demonstrate compliance.


Frequently asked questions

Would Veto have prevented the Air Canada chatbot incident?
Yes. A policy that blocks responses containing absolute claims like "guaranteed" or "promise" and routes them for human review would have caught the fabricated bereavement fare policy before the customer relied on it. The guardrail operates independently of the model — it does not matter that the model believed the policy existed.
How do tiered refund policies work?
You define amount thresholds and account conditions. Refunds under $50 on good-standing accounts auto-approve. Refunds between $50 and $500 route for human approval. Refunds over $500 or on flagged accounts are blocked entirely. The agent receives a clear error message and can inform the customer that a human will follow up.
Can guardrails detect and protect PII in responses?
Yes. Transform policies scan outgoing responses for patterns like credit card numbers (16 digits), SSNs (XXX-XX-XXXX), and other PII. Detected patterns are replaced with redacted placeholders before the response reaches the customer. This prevents accidental data exposure without blocking legitimate responses.
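A minimal sketch of this kind of transform, using the same card and SSN patterns as the policy file above (plain Python, not Veto's actual SDK):

```python
import re

# Redaction patterns mirroring the transform policy above (illustrative only).
REDACTIONS = [
    (re.compile(r"\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}"), "[CARD REDACTED]"),
    (re.compile(r"\d{3}-\d{2}-\d{4}"), "[SSN REDACTED]"),
]

def redact(text: str) -> str:
    """Replace card numbers and SSNs before the response leaves the agent."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

print(redact("Your card 4242 4242 4242 4242 was refunded."))
# -> Your card [CARD REDACTED] was refunded.
```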
How do escalation rules work?
When a message contains legal keywords, high-value refund requests, or negative sentiment indicators, Veto blocks the AI response and routes the ticket to a human queue. The customer sees a message that a human agent will assist them. The escalation is logged for compliance and training purposes.
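The legal-keyword trigger reduces to a regex check over the inbound message; a minimal sketch, assuming the same pattern as the "Escalate legal mentions" rule above (not Veto's actual routing API):

```python
import re

# Same keyword pattern as the "Escalate legal mentions" rule above.
LEGAL_KEYWORDS = re.compile(r"(?i)(legal|lawsuit|attorney|sue|court)")

def route(message: str) -> str:
    """Decide whether the AI may respond or a human must take over."""
    if LEGAL_KEYWORDS.search(message):
        return "escalate_to_human"   # block the AI response, log for compliance
    return "ai_response_allowed"

print(route("I'll be contacting my attorney about this."))  # escalate_to_human
print(route("Where is my package?"))                        # ai_response_allowed
```

Note that the pattern as written also matches substrings, so "sue" would fire inside "pursue"; a production version would add word boundaries (`\b`) around each keyword.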
Do guardrails slow down response times?
Policy evaluation happens in milliseconds, adding negligible latency. The SDK runs locally with no network dependency for evaluations. Complex policies like tone analysis can be configured to run asynchronously. Only approval workflows add perceptible delay, which is intentional for high-risk actions.

Your chatbot speaks on behalf of your company.

After Air Canada, you are legally liable for what it says.