Insurance AI Agent Guardrails
A claims processing agent approves a $340,000 settlement without human review. An underwriting agent quotes 40% below guideline rates on a commercial property policy. A customer service agent reads a claimant's full medical history to answer a billing question. These aren't hypotheticals. They're what happens when insurance AI agents operate without runtime authorization.
What insurance AI guardrails actually do
Veto intercepts every tool call your insurance AI agent makes -- approve_claim, issue_policy, get_customer_data -- and evaluates it against YAML policies before execution. The agent's LLM reasoning never touches the authorization logic. A $50,000 claim triggers human review not because the model was prompted to ask, but because a deterministic rule evaluated the dollar amount and routed it to a supervisor. The agent cannot override this.
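To make the determinism concrete, here is a minimal, hypothetical sketch of threshold routing -- not the Veto SDK's actual API, just the shape of a rule the model cannot override:

```typescript
// Hypothetical sketch of a deterministic threshold rule -- illustrative only,
// not the Veto SDK's real API.
type Decision = "allow" | "require_approval";

function routeClaim(amount: number): Decision {
  // The dollar amount alone decides the route; model output is never consulted.
  return amount > 50000 ? "require_approval" : "allow";
}

routeClaim(340000); // always routed to a human, regardless of what the model "reasoned"
```

Because the comparison runs outside the model, no prompt injection or reasoning drift can change the outcome.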
The insurance AI risk landscape
Insurance is among the most regulated industries in the US. Every state has its own insurance department, its own claims handling statutes, and increasingly its own AI-specific rules. The NAIC Model Bulletin on AI (adopted by 24+ states), Colorado SB21-169, and New York DFS Circular Letter No. 7 all impose specific governance requirements on insurers using AI in underwriting, pricing, and claims decisions.
AI agents in insurance carry both financial and regulatory risk simultaneously. A bad claims decision costs money. A discriminatory underwriting pattern triggers enforcement actions. An unsupervised denial violates state consumer protection laws. The consequences compound.
An unsupervised claims agent can approve settlements beyond policy limits, approve fraudulent claims, or offer below-guideline rates. Each error carries direct dollar cost.
The NAIC requires documented AI governance programs. Colorado mandates annual CRO attestations confirming no unfair discrimination. New York requires explainability documentation.
Insurance agents handle SSNs, medical records, financial histories, and claim photographs. Uncontrolled access to this data violates HIPAA, state privacy laws, and GLBA.
Four scenarios, four policy sets
Each insurance workflow carries different risks and requires different guardrails. Here's how Veto policies map to the four most common insurance AI agent patterns.
1. Claims processing
Claims agents review submissions, assess damage estimates, and authorize payments. The primary risks are approving amounts beyond authority limits, missing fraud indicators, and paying on lapsed or excluded coverage. Guardrails enforce dollar thresholds, fraud score gates, and coverage validation before any payment is authorized.
2. Underwriting and pricing
Underwriting agents assess risk profiles, calculate premiums, and issue quotes. The risks are quoting below minimum rates (creating actuarial losses), binding coverage outside appetite guidelines, and using prohibited rating factors. Guardrails enforce premium floors, coverage caps, and prohibited-factor blocking.
3. Fraud detection and SIU
Fraud agents flag suspicious patterns, but cannot deny claims unilaterally. California SB 1120 and Florida HB 527 prohibit AI from being the sole decision-maker for adverse claim determinations. Guardrails enforce mandatory human review for any denial, ensuring due process and licensed-professional sign-off.
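Using the same rule shape as the policy sets later in this article, an SIU policy sketch might look like the following. The tool and field names here are illustrative, not a documented schema:

```yaml
rules:
  # Illustrative: the agent may flag, but denial authority stays human
  - name: siu_flag_only
    description: Allow and log SIU referrals; flags are not determinations
    tool: flag_claim_for_siu
    action: allow
    log: true

  # Illustrative: adverse determinations require licensed sign-off (CA SB 1120 / FL HB 527)
  - name: adverse_determination_review
    description: Any adverse claim determination requires licensed review
    tool: deny_claim
    action: require_approval
    message: "Adverse determinations require licensed adjuster sign-off"
    approvers: ["siu-licensed-reviewer@example.com"]
```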
4. Policy issuance and servicing
Issuance agents generate contracts, bind coverage, and handle endorsements. The risks are issuing policies outside binding authority, missing required disclosures, and modifying coverage without proper authorization. Guardrails enforce authority limits, disclosure checklists, and signature requirements.
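A hedged sketch of issuance guardrails in the same rule shape -- tool names and fields such as `args.binding_authority_limit` are assumptions for illustration:

```yaml
rules:
  # Illustrative: cap binding at the agent's delegated authority
  - name: binding_authority_limit
    description: Block binding above delegated authority
    tool: bind_coverage
    when: args.coverage_limit > args.binding_authority_limit
    action: deny
    message: "Coverage exceeds delegated binding authority"

  # Illustrative: disclosure checklist gate before issuance
  - name: required_disclosures
    description: Block issuance when required state disclosures are missing
    tool: issue_policy
    when: args.disclosures_complete != true
    action: deny
    message: "Required state disclosures are incomplete"
```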
Claims authorization policies
These YAML policies cover the most common claims processing controls. They execute deterministically -- the LLM's reasoning has no influence on whether a rule fires. Policies live in your repo and deploy through your existing CI/CD pipeline.
```yaml
rules:
  # Tier 1: Auto-approve low-value claims
  - name: auto_approve_small_claims
    description: Allow claims of $5,000 or less with low fraud scores
    tool: approve_claim
    when: args.amount <= 5000 AND args.fraud_score < 0.3
    action: allow

  # Tier 2: Supervisor review for mid-range claims
  - name: mid_range_claims_review
    description: Require supervisor review for claims $5,000-$50,000
    tool: approve_claim
    when: args.amount > 5000 AND args.amount <= 50000
    action: require_approval
    message: "Claims $5K-$50K require supervisor review"
    approvers: ["claims-supervisor@example.com"]

  # Tier 3: Senior management for high-value claims
  - name: high_value_claims
    description: Route claims over $50,000 to claims director
    tool: approve_claim
    when: args.amount > 50000
    action: require_approval
    message: "Claims over $50K require claims director approval"
    approvers: ["claims-director@example.com"]

  # Hard block: payment exceeds policy coverage
  - name: block_overcoverage_payment
    description: Block payments that exceed policy limits
    tool: process_payment
    when: args.amount > args.policy_coverage_limit
    action: deny
    message: "Payment exceeds policy coverage limit"

  # Fraud gate: high fraud score triggers SIU
  - name: fraud_siu_escalation
    description: Route high-fraud-score claims to SIU
    tool: approve_claim
    when: args.fraud_score > 0.7
    action: require_approval
    message: "Fraud score {{args.fraud_score}} requires SIU review"
    approvers: ["siu-team@example.com"]

  # Regulatory: no automated denials
  - name: no_automated_denials
    description: AI cannot deny claims without human review
    tool: deny_claim
    action: require_approval
    message: "All claim denials require licensed adjuster review"
    approvers: ["licensed-adjuster@example.com"]

  # PII access logging for GLBA compliance
  - name: log_pii_access
    description: Log all customer data access
    tool: get_customer_data
    action: allow
    log: true
    log_fields: ["args.customer_id", "args.fields_requested", "context.agent_id"]
```

The tiered approval structure mirrors how most insurance companies already handle claims authority. The difference: it's enforced at the tool-call boundary, not by prompting the model to "check with a supervisor."
Underwriting authorization policies
Underwriting guardrails prevent actuarial losses from below-guideline quotes, enforce risk appetite boundaries, and block the use of prohibited rating factors under state regulations.
```yaml
rules:
  # Enforce minimum premium rates
  - name: premium_floor
    description: Block quotes below actuarial minimum
    tool: generate_quote
    when: args.premium < args.minimum_guideline_rate
    action: deny
    message: "Premium {{args.premium}} is below minimum guideline rate"

  # Large policy binding authority
  - name: large_policy_authority
    description: Senior underwriter approval for policies over $100K
    tool: bind_coverage
    when: args.annual_premium > 100000
    action: require_approval
    message: "Policies over $100K annual premium require senior approval"
    approvers: ["senior-underwriter@example.com"]

  # Risk appetite enforcement
  - name: risk_appetite_check
    description: Block binding outside risk appetite
    tool: bind_coverage
    when: args.risk_class NOT IN ["preferred", "standard", "substandard"]
    action: deny
    message: "Risk class '{{args.risk_class}}' outside appetite guidelines"

  # Prohibited factor blocking (Colorado SB21-169)
  - name: prohibited_rating_factors
    description: Block use of prohibited factors in pricing
    tool: calculate_rate
    when: args.factors_used CONTAINS_ANY ["race", "national_origin", "religion", "genetic_information"]
    action: deny
    message: "Prohibited rating factor detected in pricing model"
    log: true
```

Regulatory compliance mapping
Insurance AI regulations are fragmented across state lines. Here's how Veto guardrails map to the specific requirements your compliance team cares about.
NAIC Model Bulletin
Adopted by 24+ states. Requires a formal AI Systems (AIS) Program with documented policies, an oversight committee, risk management processes, and internal audit schedules.
Colorado SB21-169
Requires insurers to prove their AI systems do not produce unfairly discriminatory outcomes. Annual CRO attestation. Disparate impact testing using the four-fifths rule.
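The four-fifths computation itself is simple. A hedged sketch follows -- the group names and counts are invented for illustration, and this is no substitute for an actuary's testing program:

```typescript
// Illustrative four-fifths (80%) disparate impact check.
// All numbers here are invented for the example.
function selectionRate(approved: number, applicants: number): number {
  return approved / applicants;
}

// Compare a group's approval rate against the most-approved group's rate;
// a ratio below 0.8 flags potential unfair discrimination for review.
function fourFifthsRatio(groupRate: number, highestRate: number): number {
  return groupRate / highestRate;
}

const rateA = selectionRate(480, 600); // 0.80
const rateB = selectionRate(300, 500); // 0.60
const ratio = fourFifthsRatio(rateB, Math.max(rateA, rateB)); // 0.75 -- below the 0.8 threshold
```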
NY DFS Circular Letter No. 7
Requires insurers to maintain documentation explaining AI model functionality, input data, assumptions, and how outputs influence decisions. Explainability mandate.
Adverse decision laws
California SB 1120, Florida HB 527, and Arizona HB 2175 prohibit AI from being the sole decision-maker for claim denials. Require licensed professional review.
Integration with claims management systems
Veto wraps your existing tool implementations. Your Guidewire API calls, Duck Creek integrations, and document management operations stay exactly the same. The SDK intercepts at the tool boundary -- it does not replace your claims system or require data migration.
```typescript
import { Veto } from "veto-sdk";

const veto = new Veto({ apiKey: process.env.VETO_API_KEY });

// Your existing claims tools -- unchanged
const claimsTools = {
  approve_claim: async (args: {
    claimId: string;
    amount: number;
    fraud_score: number;
    policy_coverage_limit: number;
  }) => {
    // Your existing Guidewire / Duck Creek API call
    return await claimsSystem.approveClaim(args);
  },
  deny_claim: async (args: {
    claimId: string;
    reason: string;
    automated: boolean;
  }) => {
    return await claimsSystem.denyClaim(args);
  },
  get_customer_data: async (args: {
    customer_id: string;
    fields_requested: string[];
  }) => {
    return await customerDB.getFields(args);
  },
};

// Wrap with Veto -- one line per tool
const protectedTools = veto.wrapTools(claimsTools, {
  policy: "claims",
  context: { agent_id: "claims-processor-v2" },
});

// Use protectedTools in your agent framework:
// LangChain, OpenAI, Anthropic, CrewAI -- all supported
```

The authorization check adds single-digit millisecond latency. Policy evaluation happens locally in your process when using the SDK. Approval routing goes through Veto's cloud or your self-hosted instance.
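The interception pattern itself is easy to picture. Here is a dependency-free sketch of a tool-boundary wrapper -- illustrative only, not the veto-sdk implementation:

```typescript
// Dependency-free sketch of tool-boundary interception -- not the veto-sdk
// implementation, just the general shape of the pattern.
type Tool = (args: Record<string, unknown>) => Promise<unknown>;
type Verdict = "allow" | "deny";
type Rule = (tool: string, args: Record<string, unknown>) => Verdict;

function interceptTools(
  tools: Record<string, Tool>,
  rule: Rule
): Record<string, Tool> {
  const wrapped: Record<string, Tool> = {};
  for (const [name, tool] of Object.entries(tools)) {
    wrapped[name] = async (args) => {
      // The policy check runs before the underlying tool ever executes
      if (rule(name, args) === "deny") {
        throw new Error(`Denied by policy: ${name}`);
      }
      return tool(args);
    };
  }
  return wrapped;
}
```

In this sketch a denied call rejects before the underlying API is touched, which is what keeps the agent's reasoning out of the authorization path.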
Frequently asked questions
How is this different from prompt engineering or system prompts?
Can guardrails prevent discriminatory underwriting decisions?
What happens when a claim is routed for approval?
Does this work with our existing claims management system?
How do we satisfy the NAIC audit requirement?
Is the SDK open source?
Insurance AI that operates within bounds.