mdash Documentation

V3.0.0

Complete reference for the mdash protocol. Learn how to implement liability infrastructure for autonomous AI agents with cryptographic proof of sanctioned operation.

Overview

mdash provides governance without control: a liability infrastructure layer that creates cryptographic proof of agent behavior without restricting agent capabilities. Instead of preventing agents from acting, mdash proves what they did.

Core Principle

Permission is not proof. Logs are self-reported. mdash provides independent verification that agent claims match physical reality.

Key Components

  • Warrant System: Two-phase authorization/attestation lifecycle
  • Physics Engine: Validates claims against hardware constraints
  • Checkpoint Surety: Hash-chained audit trail with token attribution
  • Sealed Context: Procedural invariants at governance layer

How mdash Differs

mdash operates at the liability layer of the AI infrastructure stack, a layer no other tool addresses.

vs. Memory Layers (Mem0, Supermemory)

Memory layers store conversation history and user preferences. mdash stores warrants: cryptographic proofs of what an agent was authorized to do and attestations of what it actually did. Memory is about continuity. mdash is about accountability.

vs. Observability (Arize, LangSmith)

Observability captures execution traces: inputs, outputs, latency, token usage. mdash captures liability traces: who authorized the action, what constraints applied, whether the outcome matched intent, and who bears economic responsibility. Observability helps you debug. mdash helps you defend.

vs. Context Graphs (PlayerZero)

Context graphs capture decision traces: why code changes were made, what precedents existed, which exceptions were approved. mdash captures liability graphs: who pays when an agent's authorized action causes economic harm. Context graphs are for engineering confidence. mdash is for insurance underwriting.

The Distinction

Other tools tell you what happened and why. mdash tells you who pays.

Quick Start

Authorize a Warrant

Request permission before executing an action with defined constraints.


Attest Completion

Seal the warrant with outcome data after action execution.


Validate Physics

Ensure claimed operations are physically possible.


Audit Trail

Review hash-chained records with token attribution.


Minimal Example

TypeScript
import { Kernel } from '@mdash/core';

const kernel = new Kernel({ physics: true, checkpoints: true });

// Phase 1: Authorize
const warrant = await kernel.authorize({
  intent: 'transfer_funds',
  ceiling: 500.00,
  ttl: 300
});

// Execute action...
const result = await executeTransfer(warrant);

// Phase 2: Attest
const sealed = await kernel.attest(warrant, {
  amount: result.amount,
  duration_ms: result.duration
});

console.log(sealed.state); // 'ATTESTED'

Installation

Package Manager

Shell
# npm
npm install @mdash/core @mdash/physics @mdash/checkpoint

# yarn
yarn add @mdash/core @mdash/physics @mdash/checkpoint

Private beta: packages not yet published to npm. Join the waitlist for early access.

Requirements

  • Node.js 18+ or Bun 1.0+
  • TypeScript 5.0+ (recommended)
  • PostgreSQL 14+ (for checkpoint persistence)

Warrant Lifecycle

Every agent action flows through a two-phase warrant system. Authorization establishes intent and constraints before execution. Attestation seals the outcome after.

PENDING → AUTHORIZED → ATTESTED

| State | Description | Transitions |
| --- | --- | --- |
| PENDING | Warrant requested, awaiting policy evaluation | → AUTHORIZED, → REJECTED |
| AUTHORIZED | Policy passed, action may proceed | → ATTESTED, → REVOKED, → EXPIRED |
| ATTESTED | Action completed, liability shield active | Terminal state |
Important

Actions executed without a valid warrant have no liability protection. Always verify warrant state before proceeding.
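As a sketch, the lifecycle can be encoded as a small state machine. The state names come from the lifecycle description above; the `TRANSITIONS` map and `canTransition` helper are illustrative, not part of the `@mdash/core` API:

```typescript
type WarrantState =
  | 'PENDING' | 'AUTHORIZED' | 'ATTESTED'
  | 'REJECTED' | 'REVOKED' | 'EXPIRED';

// Legal transitions per the lifecycle. ATTESTED (and the failure
// states) are terminal: no transitions out.
const TRANSITIONS: Record<WarrantState, WarrantState[]> = {
  PENDING: ['AUTHORIZED', 'REJECTED'],
  AUTHORIZED: ['ATTESTED', 'REVOKED', 'EXPIRED'],
  ATTESTED: [],
  REJECTED: [],
  REVOKED: [],
  EXPIRED: [],
};

function canTransition(from: WarrantState, to: WarrantState): boolean {
  return TRANSITIONS[from].includes(to);
}
```

Checking `canTransition` before acting is one way to honor the rule above: an agent should only execute while its warrant can still reach ATTESTED.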

Authorization

The authorization phase evaluates the agent's declared intent against configured policies and establishes operational constraints.

Request Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| intent | string | Yes | Declared action type |
| ceiling | number | No | Maximum value constraint |
| ttl | number | No | Time-to-live in seconds |
| metadata | object | No | Additional context |
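A pre-flight check for these parameters might look like the following sketch; `AuthorizeParams` and `validateAuthorizeParams` are hypothetical names for illustration, not exports of `@mdash/core`:

```typescript
interface AuthorizeParams {
  intent: string;                      // required: declared action type
  ceiling?: number;                    // optional: maximum value constraint
  ttl?: number;                        // optional: time-to-live in seconds
  metadata?: Record<string, unknown>;  // optional: additional context
}

// Hypothetical client-side validation before calling kernel.authorize().
function validateAuthorizeParams(p: AuthorizeParams): string[] {
  const errors: string[] = [];
  if (!p.intent || p.intent.trim() === '') {
    errors.push('intent is required');
  }
  if (p.ceiling !== undefined && p.ceiling <= 0) {
    errors.push('ceiling must be positive');
  }
  if (p.ttl !== undefined && (!Number.isInteger(p.ttl) || p.ttl <= 0)) {
    errors.push('ttl must be a positive integer (seconds)');
  }
  return errors;
}
```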

Attestation

Attestation seals the warrant with actual outcome data, triggering physics validation and creating the final audit record.

Liability Shield

Once a warrant reaches ATTESTED state with valid physics, the action is cryptographically proven to have operated within sanctioned parameters.

Physics Engine

The physics engine validates that claimed outcomes are physically possible given hardware constraints. This catches agents that lie about their operations.

Built-in Validators

  • Bandwidth: Data transfer vs. network capacity and time
  • Latency: Operation duration vs. minimum possible time
  • Storage: Bytes written vs. disk throughput
  • Compute: Operations performed vs. CPU cycles available
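The bandwidth validator, for example, reduces to a simple inequality: the claimed transfer cannot exceed link capacity multiplied by elapsed time. A minimal sketch, where the function name and signature are illustrative rather than the actual `@mdash/physics` API:

```typescript
// A bandwidth claim is physically impossible if the agent reports
// moving more bytes than the link could carry in the time given.
function bandwidthPlausible(
  claimedBytes: number,
  durationMs: number,
  linkBytesPerSec: number,
): boolean {
  const maxPossibleBytes = linkBytesPerSec * (durationMs / 1000);
  return claimedBytes <= maxPossibleBytes;
}
```

The latency, storage, and compute validators follow the same pattern: compare the claim against a hardware-derived upper (or lower) bound.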

Token Attribution

Token attribution distinguishes between agent decisions and environment responses in the audit trail. This is critical for liability assignment.

V2.6.0 Update

Token attribution vulnerability fixed. All token boundaries are now cryptographically sealed at checkpoint creation.

| Token Type | Source | Liability |
| --- | --- | --- |
| agent | Agent's own reasoning | Agent bears responsibility |
| env | Environment responses | Environment/tool provider |
| system | mdash kernel ops | Infrastructure |

Manifold-Constrained Context Architecture (MCCA)

MCCA provides mathematical bounds on AI agent liability through constrained information flow. Instead of preventing attacks, MCCA proves attribution.

The Problem

AI agents process information from multiple sources: system prompts, user inputs, tool outputs, and conversation history. Without constraints, any source can dominate agent decisions, creating unbounded liability.

The Solution

MCCA enforces influence budgets that sum to 1.0, ensuring no single source can overwhelm the others:

| Source | Token Class | Constraint | Attack Defended |
| --- | --- | --- | --- |
| System | sys | ≥ 0.40 min | Goal hijacking |
| User | usr | ≤ 0.35 max | Privilege escalation |
| Tool Output | env | ≤ 0.20 max | Prompt injection |
| Agent History | ast | ≤ 0.25 max | Context overflow |
Core Principle

"Bounded risk is insurable. Unbounded risk is not." MCCA transforms AI agent liability from unbounded to bounded.

Influence Budget

The influence budget is a four-component vector that must sum to exactly 1.0. Each component represents the relative influence of its source class on the agent's decision.

TypeScript
interface InfluenceBudget {
  sys: number;  // ≥ 0.40 (system policy minimum)
  usr: number;  // ≤ 0.35 (user influence maximum)
  env: number;  // ≤ 0.20 (tool output maximum)
  ast: number;  // ≤ 0.25 (agent history maximum)
}

// Example valid budget
const budget = { sys: 0.45, usr: 0.30, env: 0.15, ast: 0.10 };
// Sum = 1.0 ✓
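A minimal validator for these constraints might look like this sketch; `isValidBudget` is a hypothetical helper, not part of the mdash SDK:

```typescript
interface InfluenceBudget {
  sys: number;
  usr: number;
  env: number;
  ast: number;
}

// Illustrative check of the MCCA invariants: components sum to 1.0
// (within floating-point tolerance) and each class stays inside its bound.
function isValidBudget(b: InfluenceBudget): boolean {
  const sum = b.sys + b.usr + b.env + b.ast;
  return Math.abs(sum - 1.0) < 1e-9
    && b.sys >= 0.40   // system policy minimum
    && b.usr <= 0.35   // user influence maximum
    && b.env <= 0.20   // tool output maximum
    && b.ast <= 0.25;  // agent history maximum
}
```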

Enforcement Modes

| Mode | Behavior | Environment |
| --- | --- | --- |
| OBSERVE | Log violations, allow action | Development |
| ALERT | Log + notify, flag action | Staging |
| ENFORCE | Log + reject violating action | Production |
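The three modes differ only in how a violation is dispatched, which a short sketch can capture (the type and function names are illustrative):

```typescript
type EnforcementMode = 'OBSERVE' | 'ALERT' | 'ENFORCE';

interface ViolationResponse {
  logged: boolean;
  notified: boolean;
  rejected: boolean;
}

// Every mode logs; ALERT additionally notifies and flags;
// only ENFORCE rejects the violating action outright.
function handleViolation(mode: EnforcementMode): ViolationResponse {
  return {
    logged: true,
    notified: mode === 'ALERT',
    rejected: mode === 'ENFORCE',
  };
}
```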

Liability Attribution

When constraints are violated, MCCA identifies the responsible party through precise attribution:

| Violation | Condition | Primary Liable Party |
| --- | --- | --- |
| Prompt Injection | env > 0.20 | Tool provider (malicious content) |
| Policy Failure | sys < 0.40 | Operator (weak policy) |
| Privilege Escalation | usr > 0.35 | User (override attempt) |
| Context Overflow | ast > 0.25 | Agent/Operator (history mismanagement) |
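Assuming budget components are measured at decision time, attribution reduces to a first-match check against these conditions. A sketch, with `attributeViolation` as a hypothetical helper:

```typescript
interface InfluenceBudget {
  sys: number;
  usr: number;
  env: number;
  ast: number;
}

// Illustrative attribution: return the primary liable party for the
// first violated constraint, or null when the budget is within bounds.
function attributeViolation(b: InfluenceBudget): string | null {
  if (b.env > 0.20) return 'Tool provider (prompt injection)';
  if (b.sys < 0.40) return 'Operator (policy failure)';
  if (b.usr > 0.35) return 'User (privilege escalation)';
  if (b.ast > 0.25) return 'Agent/Operator (context overflow)';
  return null;
}
```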
Insurance Ready

MCCA attestations provide the quantitative data insurance carriers need to underwrite AI agent deployments. Each warrant includes the influence budget at decision time.

Theoretical Foundations

Manifold-Constrained Hyper-Connections

MCCA draws directly from DeepSeek's research on Manifold-Constrained Hyper-Connections (mHC), which demonstrates that constraining information flow across network layers enables stable training at scale. We adapt this insight to AI agent governance: by constraining influence flow across context source classes, mdash enables stable operation at deployment scale.
Reference: arXiv:2512.24880 (December 2025)

Constrained Behavior as Lottery Tickets

MIT's Lottery Ticket Hypothesis showed that dense neural networks contain sparse subnetworks ("winning tickets") that achieve equivalent accuracy with 90% fewer parameters. mdash applies an analogous principle to agent behavior: within the space of all possible agent actions, constrained influence budgets identify the "winning ticket" of insurable behavior, preserving utility while bounding liability.
Reference: arXiv:1803.03635 (Frankle & Carbin, 2019)

Warrant API Reference

kernel.authorize(params)

Request authorization for an intended action.

kernel.attest(warrant, outcome)

Seal a warrant with actual outcome data.

kernel.revoke(warrantId, reason)

Invalidate a warrant before attestation.

kernel.query(warrantId)

Retrieve warrant state and history.

Attack Vectors

mdash defends against 20 documented attack vectors. All attacks have been tested and validated as of V2.7.0.

| Attack | Vector | Defense |
| --- | --- | --- |
| Replay | Reuse valid warrant | Single-use nonces |
| Forgery | Create fake warrant | Cryptographic signatures |
| Tampering | Modify after creation | Hash chains |
| Physics bypass | Claim impossible outcomes | Hardware validation |
| Token injection | Misattribute tokens | Sealed boundaries (v2.6) |
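The hash-chain defense against tampering can be sketched in a few lines: each checkpoint's hash covers the previous checkpoint's hash, so editing any record invalidates every record after it. This is a minimal illustration using Node's `crypto` module, not the actual mdash checkpoint format:

```typescript
import { createHash } from 'node:crypto';

interface Checkpoint {
  data: string;
  prevHash: string;
  hash: string;
}

const GENESIS = '0'.repeat(64);
const sha256 = (s: string) => createHash('sha256').update(s).digest('hex');

// Append a record whose hash binds its data to the previous hash.
function append(chain: Checkpoint[], data: string): Checkpoint[] {
  const prevHash = chain.length ? chain[chain.length - 1].hash : GENESIS;
  return [...chain, { data, prevHash, hash: sha256(prevHash + data) }];
}

// Recompute every link; any edited record breaks the chain from
// that point forward.
function verify(chain: Checkpoint[]): boolean {
  return chain.every((c, i) => {
    const expectedPrev = i === 0 ? GENESIS : chain[i - 1].hash;
    return c.prevHash === expectedPrev && c.hash === sha256(c.prevHash + c.data);
  });
}
```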
Security Notice

Report security vulnerabilities to security@mdash.sh. Do not disclose publicly until a patch is released.