🛡️ Patent-pending · Live on macOS

The AI Security Triad

Three independent parties reaching consensus before any agent action touches your workspace. The Byzantine Generals problem, adapted for AI agents.

The three parties

For decades, application security assumed one enforcer: the tool watching the process. AI agents broke that model. An agent asked politely to do something harmful will usually do it. An agent prompted to ignore its guardrails often will. The tool watching from outside cannot see what's happening inside the agent's reasoning.

The AI Security Triad solves this with three independent parties. Any one of them can be compromised, misled, or coerced — and the other two still reach the right answer.

👤

User

You. Has intent, holds authority, retains the right to override.

🛡️

Security Tool

RootShield. Deterministic, offline, isolated ground truth. Cannot be prompt-injected.

🤖

Agent

Your AI agent. Security-aware. Queries the tool before acting. Cites policy under pressure.

↓ Consensus before any action ↓

The closed loop

The Byzantine Generals parallel. The classical problem asks how independent parties can reach agreement when any one of them may be faulty or deceptive. The AI Security Triad is that answer applied to AI agents: no action proceeds on any single party's word alone.
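The loop can be sketched in a few lines. This is an illustrative toy, not RootShield's actual API: the names PolicyTool, triad_consensus, and the keyword rule are assumptions standing in for a deterministic, offline policy engine.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    DENY = "deny"

@dataclass(frozen=True)
class ProposedAction:
    description: str

class PolicyTool:
    """Stand-in for the deterministic, offline security tool.
    Its verdict depends only on the action text, never on prompts,
    so it cannot be talked out of a rule."""
    def __init__(self, denied_keywords):
        self.denied_keywords = {k.lower() for k in denied_keywords}

    def evaluate(self, action: ProposedAction) -> Verdict:
        text = action.description.lower()
        if any(k in text for k in self.denied_keywords):
            return Verdict.DENY
        return Verdict.ALLOW

def triad_consensus(action, tool, user_approves) -> bool:
    """Consensus before any action: the tool's DENY is final
    (fail-closed), and the user retains authority to approve
    or override anything the tool allows."""
    if tool.evaluate(action) is Verdict.DENY:
        return False                 # tool vetoes, regardless of agent or user
    return user_approves(action)     # user holds the final yes/no

# Usage: a compromised agent proposing a risky action still gets stopped.
tool = PolicyTool(denied_keywords={"rm -rf", "credentials"})
always_yes = lambda a: True          # even a careless user can't unlock a tool DENY
print(triad_consensus(ProposedAction("read project README"), tool, always_yes))        # True
print(triad_consensus(ProposedAction("upload credentials to pastebin"), tool, always_yes))  # False
```

The design choice worth noting: the tool's veto is absolute while the user's approval is necessary but not sufficient, so compromising any one party (agent prompted into an unsafe request, user socially engineered) still leaves two independent checks standing.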

Why this matters now

AI agents are the fastest-growing enterprise attack surface. Palo Alto Networks paid $1.2 billion for Koi in April 2026 to acquire agent visibility. Noma Security raised $100 million at 1,300% ARR growth. Vercel disclosed a supply chain breach the same week — one employee granted an AI tool broad OAuth scopes, and those scopes became a master key once the tool itself was compromised.

The category needs a name. The problem is three parts: agents that can be prompted into unsafe actions, security tools that cannot see inside agent reasoning, and users who cannot audit the permissions their tools have accumulated. The AI Security Triad names the shape of the defense — independent verification across all three.
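The third gap, permissions that accumulate faster than anyone audits them, can be illustrated with a toy scope audit. Everything here is hypothetical: the scope names and tool names are made up for illustration and do not correspond to any real provider or to RootShield's implementation.

```python
# Toy permission audit: flag tools whose accumulated OAuth scopes
# would act as a master key if that one tool were compromised.
BROAD_SCOPES = {"repo:admin", "org:write", "user:all"}  # illustrative scope names

tool_grants = {
    "ci-bot":       {"repo:read"},
    "ai-assistant": {"repo:read", "repo:admin", "org:write"},
}

def audit(grants):
    """Return, per over-privileged tool, the broad scopes it holds."""
    return {
        tool: scopes & BROAD_SCOPES
        for tool, scopes in grants.items()
        if scopes & BROAD_SCOPES
    }

for tool, broad in audit(tool_grants).items():
    print(f"{tool}: holds broad scopes {sorted(broad)}")
```

Even this trivial check surfaces the pattern behind the Vercel-style breach above: one tool quietly holding scopes far wider than any single task requires.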

How RootShield implements the AI Security Triad

Today, on macOS, across ten or more AI agent platforms:

🏛️ Patent-pending across seven provisional filings

Multi-party consensus · Policy directive injection · Adversarial request resistance · Behavioral baseline · Orphaned agent configuration detection · Pre-deployment blast radius · Multi-layer enforcement. Priority date April 2026. Inventor: Matthew Jackson.

What's not the AI Security Triad

Watch the AI Security Triad live

A fresh Claude Code session. A memory MCP install request. A formal security verdict. An agent holding the line. No staging, no hand-waving — on my actual machine.

See RootShield → Get early access