Deep dive

Trust Covenant

Five rules. Each one is enforced by code, not by a promise. The 1Password / Signal trust model applied to agentic state.

Updated May 2, 2026


Most “AI memory” products ask you to trust them. Trust their cloud, trust their prompt, trust that the model didn’t hallucinate the fact you’re about to act on. Ouroboros makes a different deal: five rules, all enforced by code. Not policy pages, not a TOS clause, not a model that promises to behave. Real guardrails in real software you run on your own machine.

The model behind it is the same one that lets you trust 1Password with every password and Signal with every message: the vendor cannot do the bad thing, because the bad thing is not architecturally possible. Below are the five rules, what each one means in plain terms, and what stops the system from breaking them.

1. Every claim verified

Every fact mined from a document carries a verbatim quote from the source — capped at 300 characters. Before that fact is written to the graph, the extractor takes the quote and substring-checks it against the document text. If the quote is not literally present, the fact is dropped on the floor.

The check is a for loop. It is not a model. It does not have a bad day. A hallucinated fact has nowhere to land, because there is no quote to anchor it to, and the anchor is enforced on write.

model emits claim → quote-check → graph or drop
flowchart LR
  A[Document text] --> B[Model emits claim + quote]
  B --> C{Quote &le; 300 chars<br/>and substring of source?}
  C -- yes --> D[Write to knowledge graph]
  C -- no --> E[Drop claim<br/>log rejection]
  D --> F[Fact carries quote + offset<br/>forever]

The fact in the graph keeps the quote. Every reader — your agent, the SPA, an audit run — can re-verify it against the source at any time. There is no “trust me, I read it somewhere” layer.
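The write-time check can be sketched in a few lines — the names and shapes here are illustrative, not the shipped code:

```typescript
// Illustrative sketch of the quote-anchored write gate described above.
type Claim = { predicate: string; object: string; quote: string };

const MAX_QUOTE_LEN = 300;

function verifyClaim(doc: string, claim: Claim): boolean {
  // Cap enforced first: an empty or over-long quote is rejected outright.
  if (claim.quote.length === 0 || claim.quote.length > MAX_QUOTE_LEN) return false;
  // Literal substring check against the source text. No model involved.
  return doc.includes(claim.quote);
}

const doc = "The invoice is due on March 3 and was sent to Acme.";
console.log(verifyClaim(doc, { predicate: "due", object: "March 3", quote: "due on March 3" })); // true
console.log(verifyClaim(doc, { predicate: "due", object: "April 1", quote: "due on April 1" })); // false
```

A claim whose quote fails either test never reaches the graph; there is nothing to revert later because nothing was written.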

2. Every edge grounded

Knowledge in Ouroboros isn’t a pile of statements. Every relationship and every fact is a dated observation by a named observer. The schema makes you record who said it, when, and at what trust tier.

You — the human at the keyboard — write at trust_tier = 'human'. The model, when it mines a document, writes at trust_tier = 'extracted'. A scheduled sweeper writes at trust_tier = 'system'. When two observers disagree, the higher tier wins, but the lower-tier observation is not erased — it is superseded. You can always see what the model thought and when you overrode it.

Corrections are append-only. You don’t overwrite a fact; you write a new one that supersedes the old. The history is the audit trail.
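The tier precedence and append-only rules can be sketched roughly like this — the tier names come from the text above; the data shapes are assumptions:

```typescript
// Illustrative: tier precedence over an append-only observation history.
const TIER_RANK = { system: 0, extracted: 1, human: 2 } as const;
type Tier = keyof typeof TIER_RANK;

interface Observation { value: string; trust_tier: Tier; observed_at: number }

// Corrections append; they never overwrite.
function observe(history: Observation[], o: Observation): Observation[] {
  return [...history, o];
}

// The winning fact: highest tier first, most recent among equals.
function current(history: Observation[]): Observation | undefined {
  return [...history].sort(
    (a, b) => TIER_RANK[b.trust_tier] - TIER_RANK[a.trust_tier] || b.observed_at - a.observed_at
  )[0];
}

let h: Observation[] = [];
h = observe(h, { value: "Acme HQ is in Austin", trust_tier: "extracted", observed_at: 1 });
h = observe(h, { value: "Acme HQ is in Dallas", trust_tier: "human", observed_at: 2 });
console.log(current(h)?.value); // "Acme HQ is in Dallas" — human overrides extracted
console.log(h.length);          // 2 — the model's observation is still there
```

The key property is in the last line: the superseded observation survives, so you can always reconstruct what the model believed before you corrected it.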

3. Every write reversible

Every mutation — every insert, update, soft-delete, rename, supersession — is journaled with full before/after JSON. The journal lives in the same database as the data. There is a Time Machine view in the tray app: scroll back, find the change, click revert. The row goes back to its prior state. The journal records the revert too.

System actors — the mining pipeline, schema migrations, the orphan-fact sweeper — operate under a stricter rule: they cannot hard-delete user data. They can soft-deactivate, tombstone, or supersede. Hard delete is a deliberate action you take from the tray, not something a background job can do to you while you sleep.

// Every mutation, regardless of source, lands here first
await journal.write({
  table:  'knowledge_facts',
  row_id: factId,
  before: prior,           // full JSON snapshot
  after:  next,            // full JSON snapshot
  actor:  'mining-pipeline',
  reason: 'extracted-supersedes-extracted',
  ts:     Date.now(),
});
// revert_mutation(mutation_id) replays the change in reverse,
// writing the `before` snapshot back over `after`
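The revert path can be sketched against an in-memory journal — `revertMutation` and these shapes are illustrative, not the real API:

```typescript
// Sketch of the revert path described above (names are illustrative).
interface Mutation {
  id: number;
  table: string;
  row_id: string;
  before: Record<string, unknown> | null; // null for inserts
  after: Record<string, unknown> | null;  // null for deletes
}

// Reverting writes the `before` snapshot back — and journals that write too.
function revertMutation(
  journal: Mutation[],
  rows: Map<string, Record<string, unknown>>,
  id: number,
): void {
  const m = journal.find(x => x.id === id);
  if (!m) throw new Error(`unknown mutation ${id}`);
  const prior = rows.get(m.row_id) ?? null;
  if (m.before === null) rows.delete(m.row_id); // reverting an insert removes the row
  else rows.set(m.row_id, m.before);            // otherwise restore the prior state
  // The revert is itself a journaled mutation, so the trail never breaks.
  journal.push({ id: journal.length + 1, table: m.table, row_id: m.row_id, before: prior, after: m.before });
}
```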

4. Every agent scoped

A new MCP connection is minted by a click in the tray app. There is no config file you edit to grant access. The tray shows you what the agent will be able to read, you click approve, and a bearer token is written to a single file with 0600 permissions in your home directory. The agent reads it from there.
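Writing the bearer file can be sketched with Node's standard library — the path and token format here are assumptions; only the 0600 mode comes from the text:

```typescript
import { chmodSync, mkdirSync, statSync, writeFileSync } from "node:fs";
import { randomBytes } from "node:crypto";
import { homedir } from "node:os";
import { join } from "node:path";

// Illustrative path — the real daemon may use a different location.
const dir = join(homedir(), ".ouroboros");
mkdirSync(dir, { recursive: true, mode: 0o700 });

const token = randomBytes(32).toString("base64url");
const bearerPath = join(dir, "bearer");
writeFileSync(bearerPath, token, { mode: 0o600 }); // mode applies on create
chmodSync(bearerPath, 0o600);                      // and is enforced if it already existed

// 0o600: owner read/write; no access for group or others.
const bearerMode = statSync(bearerPath).mode & 0o777;
console.log(bearerMode.toString(8)); // "600"
```

The `chmodSync` is there because `writeFileSync`'s `mode` option only applies when the file is first created.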

Each connection has a scope — a list of entity IDs it is allowed to see. The scope is enforced in SQL, not in the application layer. Every read query gets AND e.id IN (?) injected by the daemon, with the connection’s scope bound as parameters. A connection scoped to one client cannot return rows for any other client, no matter what the agent asks.

-- What every read query looks like after the daemon rewrites it.
-- The IN (?) is bound to the connection's scope. The agent never
-- sees this clause and cannot remove it.

SELECT kf.predicate, kf.object, kf.quote
FROM knowledge_facts kf
JOIN entities e ON kf.entity_id = e.id
WHERE e.id IN (?)               -- ← scope, injected by daemon
  AND kf.deleted_at IS NULL
  AND kf.predicate = ?
ORDER BY kf.observed_at DESC
LIMIT ?;

Even an agent that tries to escape scope by composing raw SQL through the V8 isolate hits the same clause, because the scoped database handle is what the isolate is given. There is no unscoped handle in the agent’s reach.
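The scoped-handle idea can be illustrated with a toy wrapper — the daemon presumably rewrites queries with a proper SQL layer rather than this naive string replace, but the principle is the same: the only handle in the isolate's reach binds the scope itself:

```typescript
// Toy illustration: the handle given to the isolate appends the scope
// clause on every read; there is no method that omits it.
type Row = Record<string, unknown>;
interface Db { query(sql: string, params: unknown[]): Row[] }

function scopedHandle(db: Db, scope: string[]): Db {
  return {
    query(sql, params) {
      const placeholders = scope.map(() => "?").join(", ");
      // Naive rewrite for illustration only — a real daemon would use a SQL parser.
      const rewritten = sql.replace(/WHERE/i, `WHERE e.id IN (${placeholders}) AND`);
      return db.query(rewritten, [...scope, ...params]);
    },
  };
}

let captured = "";
const fake: Db = { query(sql, params) { captured = sql; return []; } };
scopedHandle(fake, ["client-1"]).query(
  "SELECT * FROM entities e WHERE e.name = ?", ["Acme"],
);
console.log(captured); // SELECT * FROM entities e WHERE e.id IN (?) AND e.name = ?
```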

5. Every provider yours

Ouroboros ships with no built-in cloud LLM. There is no vendor key embedded in the binary. There is no silent fallback that picks a model for you when you didn’t configure one — the provider registry throws instead of degrading. If a task needs a model and you haven’t supplied one, the task fails loud.
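"Throws instead of degrading" can be sketched like this — the registry shape is an assumption, not the shipped interface:

```typescript
// Sketch: an unconfigured provider is an error, never a silent default.
interface Provider { name: string; complete(prompt: string): Promise<string> }

class ProviderRegistry {
  private providers = new Map<string, Provider>();

  register(task: string, p: Provider): void {
    this.providers.set(task, p);
  }

  // No fallback chain, no embedded vendor key: resolve either returns
  // the provider you configured or throws.
  resolve(task: string): Provider {
    const p = this.providers.get(task);
    if (!p) throw new Error(`no provider configured for "${task}" — refusing to pick one for you`);
    return p;
  }
}

const registry = new ProviderRegistry();
registry.register("skim", { name: "anthropic", complete: async () => "ok" });
console.log(registry.resolve("skim").name); // "anthropic"
// registry.resolve("embeddings")           // would throw: nothing configured
```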

You configure providers in Settings. Bring your own Anthropic key, OpenAI key, Gemini key — or run a local model through Ollama. Most users come in through Claude Code or Codex and use their existing key. Ollama is one option among several for people who want a fully local-only setup; it is not the default story.

The covenant: never picks a model for you, never silently spends, never embeds a vendor key in the binary.

Backed by

The five rules above are policy enforced in code. Underneath them sit the mechanisms that make tampering hard:

  • Encrypted libsql at rest for the operations database that holds bearers, connection scopes, and the journal. Your subscriber data is next on the list (see below).
  • Ed25519-signed wiki pages — durable notes carry a signature so a later actor cannot quietly rewrite history.
  • OS user-account boundary — the daemon runs as your user, the bearer file is 0600, the database lives under your home directory. Another user on the box cannot read it without root.
  • Append-only journal — the audit trail of every mutation lives in the same DB as the data, so a backup of the data is a backup of the audit trail.
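The Ed25519 signing in the second bullet can be sketched with Node's built-in crypto — key generation and storage in the real app will differ:

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Illustrative: generate a fresh Ed25519 pair; the app would persist its keys.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const page = Buffer.from("# Meeting notes\nDecided to ship Friday.");
const signature = sign(null, page, privateKey); // Ed25519 takes no digest argument

// Any later reader can prove the page is unmodified:
const ok = verify(null, page, publicKey, signature);
const tampered = verify(null, Buffer.from("rewritten history"), publicKey, signature);
console.log(ok, tampered); // true false
```

A page edit without a fresh signature fails verification, which is exactly the "cannot quietly rewrite history" property.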

Where this is headed

  • SQLCipher across all subscriber tables — currently the mcp-ops DB is encrypted; the larger subscriber DB (knowledge, documents, codebase) is not yet. Bringing SQLCipher across the whole subscriber surface is the next step.
  • Tray-app surfacing of provider registry state — today you see configured providers in Settings. The tray will surface live state: which provider served the last skim, which key is active for embeddings, what failed and why.
  • Hardware-backed key storage — on platforms that expose a secure enclave (macOS Keychain, Windows TPM, Linux TPM2 where available), the bearer file gets wrapped by a hardware-held key. Lifting the file off disk no longer hands an attacker the credential.

This is a beta. One person uses it daily. The five rules are real today; the cryptographic backing is staged. Where a guardrail isn’t fully in place, the docs say so.

