Documentation v3.0.0
Complete reference for the mdash protocol. Learn how to implement liability infrastructure for autonomous AI agents with cryptographic proof of sanctioned operation.
Overview
mdash provides governance without control: a liability infrastructure layer that creates cryptographic proof of agent behavior without restricting agent capabilities. Instead of preventing agents from acting, mdash proves what they did.
Permission is not proof. Logs are self-reported. mdash provides independent verification that agent claims match physical reality.
Key Components
- Warrant System: Two-phase authorization/attestation lifecycle
- Physics Engine: Validates claims against hardware constraints
- Checkpoint Surety: Hash-chained audit trail with token attribution
- Sealed Context: Procedural invariants at governance layer
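The hash-chaining behind Checkpoint Surety can be sketched in a few lines. This is an illustration of the general technique (each record commits to the previous record's hash), not the shipped implementation; all names here are ours.

```typescript
import { createHash } from 'crypto';

// Each checkpoint commits to the previous checkpoint's hash, so
// tampering with any earlier record breaks every later link.
interface Checkpoint {
  data: string;
  prevHash: string;
  hash: string;
}

const GENESIS = '0'.repeat(64);

function appendCheckpoint(chain: Checkpoint[], data: string): Checkpoint[] {
  const prevHash = chain.length ? chain[chain.length - 1].hash : GENESIS;
  const hash = createHash('sha256').update(prevHash + data).digest('hex');
  return [...chain, { data, prevHash, hash }];
}

function verifyChain(chain: Checkpoint[]): boolean {
  return chain.every((c, i) => {
    const prevHash = i === 0 ? GENESIS : chain[i - 1].hash;
    return (
      c.prevHash === prevHash &&
      c.hash === createHash('sha256').update(prevHash + c.data).digest('hex')
    );
  });
}
```

Editing any record's `data` invalidates its own hash and, transitively, every record after it.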
How mdash Differs
mdash operates at the liability layer, a distinct layer in the AI infrastructure stack that no other tool addresses.
vs. Memory Layers (Mem0, Supermemory)
Memory layers store conversation history and user preferences. mdash stores warrants: cryptographic proofs of what an agent was authorized to do and attestations of what it actually did. Memory is about continuity. mdash is about accountability.
vs. Observability (Arize, LangSmith)
Observability captures execution traces: inputs, outputs, latency, token usage. mdash captures liability traces: who authorized the action, what constraints applied, whether the outcome matched intent, and who bears economic responsibility. Observability helps you debug. mdash helps you defend.
vs. Context Graphs (PlayerZero)
Context graphs capture decision traces: why code changes were made, what precedents existed, which exceptions were approved. mdash captures liability graphs: who pays when an agent's authorized action causes economic harm. Context graphs are for engineering confidence. mdash is for insurance underwriting.
Other tools tell you what happened and why. mdash tells you who pays.
Quick Start
Authorize a Warrant
Request permission before executing an action with defined constraints.
Minimal Example
```typescript
import { Kernel } from '@mdash/core';

const kernel = new Kernel({ physics: true, checkpoints: true });

// Phase 1: Authorize
const warrant = await kernel.authorize({
  intent: 'transfer_funds',
  ceiling: 500.00,
  ttl: 300
});

// Execute action...
const result = await executeTransfer(warrant);

// Phase 2: Attest
const sealed = await kernel.attest(warrant, {
  amount: result.amount,
  duration_ms: result.duration
});

console.log(sealed.state); // 'ATTESTED'
```
Installation
Package Manager
```bash
# npm
npm install @mdash/core @mdash/physics @mdash/checkpoint

# yarn
yarn add @mdash/core @mdash/physics @mdash/checkpoint
```
Private beta: packages not yet published to npm. Join the waitlist for early access.
Requirements
- Node.js 18+ or Bun 1.0+
- TypeScript 5.0+ (recommended)
- PostgreSQL 14+ (for checkpoint persistence)
Warrant Lifecycle
Every agent action flows through a two-phase warrant system. Authorization establishes intent and constraints before execution. Attestation seals the outcome after.
| State | Description | Transitions |
|---|---|---|
| PENDING | Warrant requested, awaiting policy evaluation | → AUTHORIZED, → REJECTED |
| AUTHORIZED | Policy passed, action may proceed | → ATTESTED, → REVOKED, → EXPIRED |
| ATTESTED | Action completed, liability shield active | Terminal state |
| REJECTED | Policy evaluation failed | Terminal state |
| REVOKED | Warrant invalidated before attestation | Terminal state |
| EXPIRED | TTL elapsed before attestation | Terminal state |
Actions executed without a valid warrant have no liability protection. Always verify warrant state before proceeding.
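A pre-execution guard that enforces this rule might look like the following. The type and function names are ours, not part of the published API; the state names come from the table above.

```typescript
type WarrantState =
  | 'PENDING' | 'AUTHORIZED' | 'ATTESTED'
  | 'REJECTED' | 'REVOKED' | 'EXPIRED';

interface WarrantView {
  id: string;
  state: WarrantState;
  expiresAt: number; // epoch ms, derived from the warrant's ttl
}

// Throw unless the warrant is currently AUTHORIZED and unexpired.
function assertExecutable(w: WarrantView, now: number = Date.now()): void {
  if (w.state !== 'AUTHORIZED') {
    throw new Error(`warrant ${w.id} is ${w.state}, not AUTHORIZED`);
  }
  if (now >= w.expiresAt) {
    throw new Error(`warrant ${w.id} has expired`);
  }
}
```

Call this immediately before executing the warranted action; any state other than AUTHORIZED means no liability protection.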
Authorization
The authorization phase evaluates the agent's declared intent against configured policies and establishes operational constraints.
Request Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| intent | string | Yes | Declared action type |
| ceiling | number | No | Maximum value constraint |
| ttl | number | No | Time-to-live in seconds |
| metadata | object | No | Additional context |
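As a concrete shape for these parameters (field names from the table; the values and the `metadata` key are illustrative):

```typescript
const authorizeParams = {
  intent: 'transfer_funds',           // required: declared action type
  ceiling: 500.0,                     // optional: maximum value constraint
  ttl: 300,                           // optional: time-to-live in seconds
  metadata: { initiator: 'agent-7' }, // optional: additional context (hypothetical key)
};
```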
Attestation
Attestation seals the warrant with actual outcome data, triggering physics validation and creating the final audit record.
Once a warrant reaches ATTESTED state with valid physics, the action is cryptographically proven to have operated within sanctioned parameters.
Physics Engine
The physics engine validates that claimed outcomes are physically possible given hardware constraints. This catches agents that lie about their operations.
Built-in Validators
- Bandwidth: Data transfer vs. network capacity and time
- Latency: Operation duration vs. minimum possible time
- Storage: Bytes written vs. disk throughput
- Compute: Operations performed vs. CPU cycles available
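The bandwidth check, for example, reduces to a simple plausibility bound. This sketch is ours and deliberately simplified; a real validator would also account for protocol overhead and measured (not nominal) link capacity.

```typescript
interface TransferClaim {
  bytes: number;      // bytes the agent claims to have transferred
  durationMs: number; // time the agent claims it took
}

// Could `bytes` plausibly have moved over a link of the given
// capacity in `durationMs`? If not, the claim is physically impossible.
function bandwidthPlausible(claim: TransferClaim, linkBytesPerSec: number): boolean {
  const maxBytes = linkBytesPerSec * (claim.durationMs / 1000);
  return claim.bytes <= maxBytes;
}
```

Claiming a 5 MB transfer in one second over a 1 MB/s link fails this check, which is exactly the kind of lie the physics engine is meant to catch.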
Token Attribution
Token attribution distinguishes between agent decisions and environment responses in the audit trail. This is critical for liability assignment.
A token attribution vulnerability was fixed in v2.6: all token boundaries are now cryptographically sealed at checkpoint creation.
| Token Type | Source | Liability |
|---|---|---|
| agent | Agent's own reasoning | Agent bears responsibility |
| env | Environment responses | Environment/tool provider |
| system | mdash kernel ops | Infrastructure |
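A sketch of how an attributed token stream might be tallied per class (the types here are illustrative, not the wire format):

```typescript
type TokenClass = 'agent' | 'env' | 'system';

interface AttributedToken {
  text: string;
  cls: TokenClass;
}

// Count tokens per class; in a real audit trail this shows where
// influence, and therefore liability, concentrates.
function tallyByClass(tokens: AttributedToken[]): Record<TokenClass, number> {
  const counts: Record<TokenClass, number> = { agent: 0, env: 0, system: 0 };
  for (const t of tokens) counts[t.cls] += 1;
  return counts;
}
```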
Manifold-Constrained Context Architecture (MCCA)
MCCA provides mathematical bounds on AI agent liability through constrained information flow. Instead of preventing attacks, MCCA proves attribution.
The Problem
AI agents process information from multiple sources: system prompts, user inputs, tool outputs, and conversation history. Without constraints, any source can dominate agent decisions, creating unbounded liability.
The Solution
MCCA enforces influence budgets that sum to 1.0, ensuring no single source can overwhelm the others:
| Source | Token Class | Constraint | Attack Defended |
|---|---|---|---|
| System | sys | ≥ 0.40 min | Goal hijacking |
| User | usr | ≤ 0.35 max | Privilege escalation |
| Tool Output | env | ≤ 0.20 max | Prompt injection |
| Agent History | ast | ≤ 0.25 max | Context overflow |
"Bounded risk is insurable. Unbounded risk is not." MCCA transforms AI agent liability from unbounded to bounded.
Influence Budget
The influence budget is a four-component vector that must sum to exactly 1.0. Each component represents the relative influence of its source class on the agent's decision.
```typescript
interface InfluenceBudget {
  sys: number; // ≥ 0.40 (system policy minimum)
  usr: number; // ≤ 0.35 (user influence maximum)
  env: number; // ≤ 0.20 (tool output maximum)
  ast: number; // ≤ 0.25 (agent history maximum)
}

// Example valid budget
const budget = { sys: 0.45, usr: 0.30, env: 0.15, ast: 0.10 }; // Sum = 1.0 ✓
```
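A validator for these constraints might look like this. It is our sketch: the thresholds come from the MCCA constraint table, and `eps` absorbs floating-point rounding in the sum.

```typescript
interface InfluenceBudget {
  sys: number;
  usr: number;
  env: number;
  ast: number;
}

// Return the list of violated MCCA constraints (empty = valid budget).
function budgetViolations(b: InfluenceBudget, eps: number = 1e-9): string[] {
  const out: string[] = [];
  const sum = b.sys + b.usr + b.env + b.ast;
  if (Math.abs(sum - 1.0) > eps) out.push('sum != 1.0');
  if (b.sys < 0.40 - eps) out.push('sys < 0.40');
  if (b.usr > 0.35 + eps) out.push('usr > 0.35');
  if (b.env > 0.20 + eps) out.push('env > 0.20');
  if (b.ast > 0.25 + eps) out.push('ast > 0.25');
  return out;
}
```

The example budget above (`sys: 0.45, usr: 0.30, env: 0.15, ast: 0.10`) passes; a budget with `sys: 0.30, usr: 0.40` would report both a policy failure and a privilege escalation.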
Enforcement Modes
| Mode | Behavior | Environment |
|---|---|---|
| OBSERVE | Log violations, allow action | Development |
| ALERT | Log + notify, flag action | Staging |
| ENFORCE | Log + reject violating action | Production |
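The three modes can be mirrored as a lookup table. This is a sketch of the table above, not the shipped configuration format:

```typescript
type EnforcementMode = 'OBSERVE' | 'ALERT' | 'ENFORCE';

interface ModeBehavior {
  log: boolean;
  notify: boolean;
  reject: boolean;
}

// OBSERVE logs only; ALERT logs and notifies; ENFORCE logs and
// rejects the violating action outright.
const MODE_BEHAVIOR: Record<EnforcementMode, ModeBehavior> = {
  OBSERVE: { log: true, notify: false, reject: false },
  ALERT:   { log: true, notify: true,  reject: false },
  ENFORCE: { log: true, notify: false, reject: true  },
};
```

A typical rollout moves a deployment through the modes left to right as confidence in the configured budgets grows.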
Liability Attribution
When constraints are violated, MCCA identifies the responsible party through precise attribution:
| Violation | Condition | Primary Liable Party |
|---|---|---|
| Prompt Injection | env > 0.20 | Tool provider (malicious content) |
| Policy Failure | sys < 0.40 | Operator (weak policy) |
| Privilege Escalation | usr > 0.35 | User (override attempt) |
| Context Overflow | ast > 0.25 | Agent/Operator (history mismanagement) |
MCCA attestations provide the quantitative data insurance carriers need to underwrite AI agent deployments. Each warrant includes the influence budget at decision time.
Theoretical Foundations
Manifold-Constrained Hyper-Connections
MCCA draws directly from DeepSeek's research on Manifold-Constrained Hyper-Connections (mHC), which demonstrates that constraining information flow across network layers enables stable training at scale. We adapt this insight to AI agent governance: by constraining influence flow across context source classes, mdash enables stable operation at deployment scale.
Reference: arXiv:2512.24880 (December 2025)
Constrained Behavior as Lottery Tickets
MIT's Lottery Ticket Hypothesis showed that dense neural networks contain sparse subnetworks ("winning tickets") that match the full network's accuracy with roughly 90% fewer parameters. mdash applies an analogous principle to agent behavior: within the space of all possible agent actions, constrained influence budgets identify the "winning ticket" of insurable behavior, preserving utility while bounding liability.
Reference: arXiv:1803.03635 (Frankle & Carbin, 2019)
Warrant API Reference
kernel.authorize(params)
Request authorization for an intended action.
kernel.attest(warrant, outcome)
Seal a warrant with actual outcome data.
kernel.revoke(warrantId, reason)
Invalidate a warrant before attestation.
kernel.query(warrantId)
Retrieve warrant state and history.
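To make the four-method surface concrete, here is a minimal in-memory sketch of the state machine it drives. This is illustrative only; the real kernel adds policy evaluation, physics validation, cryptographic signatures, and persistence.

```typescript
type State = 'PENDING' | 'AUTHORIZED' | 'ATTESTED' | 'REJECTED' | 'REVOKED' | 'EXPIRED';

interface Warrant {
  id: string;
  intent: string;
  state: State;
  outcome?: unknown;
  reason?: string;
}

class SketchKernel {
  private warrants = new Map<string, Warrant>();
  private nextId = 0;

  // authorize: every warrant is granted here for simplicity; the real
  // kernel evaluates policy and may return REJECTED.
  async authorize(params: { intent: string }): Promise<Warrant> {
    const w: Warrant = { id: `w${++this.nextId}`, intent: params.intent, state: 'AUTHORIZED' };
    this.warrants.set(w.id, w);
    return w;
  }

  // attest: seal the outcome; only legal from AUTHORIZED.
  async attest(w: Warrant, outcome: unknown): Promise<Warrant> {
    if (w.state !== 'AUTHORIZED') throw new Error(`cannot attest from ${w.state}`);
    w.state = 'ATTESTED';
    w.outcome = outcome;
    return w;
  }

  // revoke: invalidate before attestation.
  async revoke(id: string, reason: string): Promise<void> {
    const w = this.warrants.get(id);
    if (!w || w.state !== 'AUTHORIZED') throw new Error('warrant not revocable');
    w.state = 'REVOKED';
    w.reason = reason;
  }

  // query: retrieve current state.
  async query(id: string): Promise<Warrant | undefined> {
    return this.warrants.get(id);
  }
}
```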
Attack Vectors
mdash defends against 20 documented attack vectors, all tested and validated as of v2.7.0. The table below lists a representative sample:
| Attack | Vector | Defense |
|---|---|---|
| Replay | Reuse valid warrant | Single-use nonces |
| Forgery | Create fake warrant | Cryptographic signatures |
| Tampering | Modify after creation | Hash chains |
| Physics bypass | Claim impossible outcomes | Hardware validation |
| Token injection | Misattribute tokens | Sealed boundaries (v2.6) |
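The single-use-nonce replay defense reduces to a consumed-set check, sketched below. The class name is ours; a production registry would also need durable storage and expiry.

```typescript
// A nonce may be consumed exactly once; a second consumption attempt
// is a replay and is rejected.
class NonceRegistry {
  private used = new Set<string>();

  consume(nonce: string): boolean {
    if (this.used.has(nonce)) return false; // replay detected
    this.used.add(nonce);
    return true;
  }
}
```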
Report security vulnerabilities to security@mdash.sh. Do not disclose publicly until a patch is released.