Use Cases/Healthcare Agents

190 million patient records breached in one incident.

The Change Healthcare breach in 2024 exposed 190 million individuals' PHI — the largest healthcare breach in history. OCR collected over $9.9 million in HIPAA penalties that year alone. AI agents processing patient data inherit every one of these compliance obligations. Veto enforces them at runtime.

What are healthcare AI agent guardrails?

Healthcare AI agent guardrails are runtime authorization policies that intercept every tool call an AI agent makes when handling Protected Health Information (PHI). They enforce HIPAA Security Rule technical safeguards (45 CFR 164.312), the minimum necessary standard (164.502(b)), and sensitive data segmentation requirements. The policies are evaluated outside the model's reasoning loop and cannot be bypassed.

HIPAA compliance is not optional for AI agents

AI systems that process PHI are Business Associates under HIPAA, triggering mandatory BAA requirements and direct liability for compliance violations. In 2024, OCR enforcement saw 725 breaches affecting 275 million records. HIPAA fines range from $141 to $2.13 million per violation category per year. The January 2025 Security Rule NPRM — the first major update in 20 years — removes the distinction between "required" and "addressable" safeguards, making all implementation specifications mandatory.

PHI exposure at scale

An AI scheduling assistant with full EHR access can expose complete medical histories when it only needs appointment data. An agentic AI workflow vendor breach in 2024 exposed records of 483,000 patients across six hospitals.

Minimum necessary violations

HIPAA's minimum necessary standard (164.502(b)) requires agents to access only the PHI strictly required for each task. Broad FHIR scopes and accumulated conversation context violate this standard. Each violation is independently actionable.

Clinical safety

Agents making unauthorized clinical recommendations create patient safety risks. Without human oversight enforcement, automation bias leads clinicians to over-rely on AI outputs without independent verification.

HIPAA Security Rule technical safeguards (45 CFR 164.312)

The Security Rule defines five technical safeguards. The 2025 proposed rule makes all of them mandatory with no "addressable" exceptions. Here is how each maps to AI agent authorization.

Access Control — 164.312(a)(1)

Agents must have unique identifiers and access rights scoped to minimum necessary data. RBAC or ABAC policies restrict each agent role to specific PHI fields and operations. The 2025 proposed rule mandates MFA for all ePHI access. Automatic session termination prevents stale agent contexts from accumulating excess PHI.

Audit Controls — 164.312(b)

Every AI interaction with PHI must be logged immutably: agent identity, timestamp, action type, data accessed, session ID, and authorization outcome. Logs must be retained for a minimum of 6 years (most organizations standardize on 7+) and protected against tampering via cryptographic hash chaining.

Integrity Controls — 164.312(c)(1)

Prevent unauthorized PHI modification. For AI agents: input validation to block prompt injection attacks, output verification for clinical decision support, and cryptographic hashes to verify data integrity end-to-end.

Authentication — 164.312(d)

Each agent session must be authenticated with a verifiable identity. Agent credentials are scoped per deployment, not shared across instances. Session tokens expire, forcing re-authentication and preventing credential reuse.

Transmission Security — 164.312(e)(1)

AES-256 encryption for PHI at rest. TLS 1.2+ for PHI in transit. The 2025 proposed rule makes encryption mandatory (currently addressable). Veto's in-process evaluation means PHI never leaves your infrastructure for authorization decisions.

PHI access control and redaction policies

Define declarative policies that enforce the minimum necessary standard. Each agent role receives only the PHI fields required for its specific function. Sensitive categories (psychotherapy notes, substance abuse records, HIV status) require additional patient consent under 42 CFR Part 2 and state-specific laws.

veto-policy.yamlyaml
name: healthcare-phi-protection
description: HIPAA-compliant PHI access control for AI agents

rules:
  # Minimum necessary enforcement by agent role
  - name: clinical-assistant-scope
    tools: ["ehr_read", "patient_lookup"]
    condition: "context.agent_role == 'clinical_assistant'"
    action: allow
    constraints:
      fields: ["demographics", "allergies", "current_medications", "diagnosis"]
      requires_patient_id: true
    audit:
      log_arguments: true
      retention_days: 2555  # 7 years per HIPAA

  - name: scheduling-agent-scope
    tools: ["ehr_read", "patient_lookup"]
    condition: "context.agent_role == 'scheduling_agent'"
    action: allow
    constraints:
      fields: ["name", "phone", "preferred_times"]
      exclude_fields: ["diagnosis", "medications", "ssn", "mrn"]
    audit:
      log_arguments: true
      retention_days: 2555

  - name: billing-agent-scope
    tools: ["ehr_read", "patient_lookup"]
    condition: "context.agent_role == 'billing_agent'"
    action: allow
    constraints:
      fields: ["name", "dob", "insurance_info", "account_number"]
      exclude_fields: ["diagnosis", "medications", "clinical_notes"]

  # Block clinical notes for non-clinical agents
  - name: block-clinical-notes
    tools: ["ehr_read"]
    condition: >
      'clinical_notes' in args.fields or
      'psychiatric_notes' in args.fields
    action: deny
    response:
      error: "Clinical notes require clinical staff authorization"

  # Sensitive data segmentation — 42 CFR Part 2
  - name: substance-abuse-consent
    tools: ["ehr_read", "patient_lookup"]
    condition: "'substance_abuse' in args.fields"
    action: require_approval
    constraints:
      patient_consent_verified: true
      consent_type: "42_cfr_part_2"
    response:
      message: "Substance abuse records require 42 CFR Part 2 consent"

  - name: psychotherapy-notes-consent
    tools: ["ehr_read"]
    condition: "'psychotherapy_notes' in args.fields"
    action: require_approval
    constraints:
      patient_consent_verified: true
      approver_role: "treating_clinician"
    response:
      message: "Psychotherapy notes require explicit patient consent"

  # PHI redaction in outbound communications
  - name: patient-message-phi-check
    tools: ["send_patient_message", "send_referral"]
    action: allow
    constraints:
      no_phi_in_message: true
      verified_patient_identity: true

  # Controlled substance prescriptions
  - name: controlled-substance-review
    tools: ["prescribe_medication"]
    condition: "args.schedule in ['II', 'III', 'IV']"
    action: require_approval
    constraints:
      approver_role: "pharmacist"
      alert_on_deny: true
    response:
      message: "Controlled substance prescription requires pharmacist review"

  # Research data — de-identification enforcement
  - name: research-deidentified-only
    tools: ["query_research_dataset"]
    action: allow
    constraints:
      dataset_type: "de_identified"
      approved_irb_protocol: true
    audit:
      log_query: true
      researcher_id: required

Sensitive data segmentation

Federal and state law require additional protections beyond standard HIPAA for specific PHI categories. AI agents must check segmentation flags and verify consent before accessing these records.

Psychotherapy Notes

Requires explicit patient consent per HIPAA. Enhanced access logging. Separate authorization from general mental health records. Cannot be disclosed for treatment, payment, or operations without specific authorization.

Substance Abuse Records

42 CFR Part 2 imposes restrictions stricter than standard HIPAA. Specific consent form requirements. Re-disclosure prohibited without additional patient authorization. AI agents must enforce these boundaries per-request.

HIV/AIDS Information

State-specific consent requirements. Many states require separate written authorization for disclosure. Policies must be jurisdiction-aware and enforce the strictest applicable standard.

Reproductive Health

Post-Dobbs state-specific restrictions. Enhanced privacy protections in some jurisdictions. Geographic context-aware policy enforcement required to comply with varying state laws.

HIPAA compliance mapping

HIPAA RequirementVeto Implementation
Access Control — 164.312(a)(1)RBAC/ABAC per agent role, unique agent identifiers, automatic session termination, MFA-ready
Audit Controls — 164.312(b)Immutable audit trails with hash chaining, 7-year retention, tamper detection, SIEM export
Integrity — 164.312(c)(1)Input validation, prompt injection resistance, cryptographic integrity verification
Authentication — 164.312(d)Per-session agent credentials, token expiration, no shared credentials across instances
Transmission Security — 164.312(e)(1)TLS 1.2+ enforced, in-process evaluation (PHI never leaves your infrastructure)
Minimum Necessary — 164.502(b)Purpose-specific data scopes, field-level restrictions, date precision controls
Accounting of Disclosures — 164.528Exportable audit logs, patient disclosure request support, structured compliance reports
Training — 164.530(b)Policy-as-code documentation, version-controlled evidence for compliance training records

Build vs buy for healthcare AI

CapabilityDIYVeto
ABAC with minimum necessary enforcement
Sensitive data segmentation (42 CFR Part 2)
PHI redaction for agent prompts/responses
Immutable audit trails with hash chaining
7-year audit retention
BAA-ready documentationCreate yourselfIncluded
Time to HIPAA complianceMonthsHours

Related use cases

Frequently asked questions

How do healthcare AI guardrails help with HIPAA compliance?
Guardrails enforce the technical safeguards required by 45 CFR 164.312: access control via RBAC/ABAC, audit controls via immutable logging, integrity via input validation, and transmission security via in-process evaluation. Every agent action is logged with full context — agent identity, PHI accessed, authorization decision, timestamp — providing the audit trail needed for OCR compliance verification and patient disclosure requests under 164.528.
What is the minimum necessary standard for AI agents?
Under 164.502(b), AI agents must access only the PHI strictly required for each specific task. A scheduling agent needs name and phone number — not diagnoses. A billing agent needs insurance info — not clinical notes. Veto enforces this by defining purpose-specific data scopes per agent role, with field-level restrictions and date precision controls that reduce identifiability.
How does sensitive data segmentation work?
Federal law (42 CFR Part 2 for substance abuse, HIPAA for psychotherapy notes) and state laws (HIV/AIDS, reproductive health) require additional protections beyond standard HIPAA. Veto policies check segmentation flags on every tool call and verify that the required patient consent exists before allowing access. Denied attempts are logged with enhanced detail for compliance tracking.
What retention periods are supported for healthcare audit logs?
HIPAA requires 6-year retention for security-related records. Most healthcare organizations standardize on 7+ years. Veto supports configurable retention up to 10 years. Audit logs use cryptographic hash chaining — each entry contains a hash of the previous entry — making undetected tampering impossible.
Does the 2025 HIPAA Security Rule NPRM affect AI agents?
Yes. The January 2025 proposed rule removes the distinction between 'required' and 'addressable' safeguards, making encryption, MFA, vulnerability scanning (biannual), and penetration testing (annual) mandatory. If finalized, regulated entities have 240 days to comply. Veto's policy-as-code approach creates version-controlled evidence that maps directly to these requirements.

HIPAA compliance for AI agents. Enforced at runtime, not by hope.