How Knowledge Flows

Fourteen repos, six machines, multiple AI agents with overlapping but distinct contexts. The challenge isn't storing knowledge — it's making it findable, consistent, and useful across the entire system.

The CLAUDE.md pattern

Every repo carries a CLAUDE.md file at its root. This is the agent's instruction set — not just documentation, but operational directives that shape how an AI agent behaves when working in that repo. Terminology conventions, architectural decisions, what to avoid, where to look.
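A CLAUDE.md in this spirit might look like the following sketch; the directives and the path are hypothetical, not taken from any actual repo:

```markdown
# CLAUDE.md

## Terminology
- Say "Linked Context Token (LCT)", never "identity token".

## Architecture
- Shape every action record as R6 (Rules/Role/Request/Reference/Resource/Result).

## Avoid
- Never alter the Web4 equation; it is canonical across all repos.

## Where to look
- Trust and value model: docs/trust-model.md (hypothetical path)
```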

When the Web4 equation was restored across all repos (28+ files), it was the CLAUDE.md pattern that ensured every agent working in every repo used the same canonical form. Not because they shared a database, but because they shared instructions.

SAGE: Situation-Aware Guidance Engine

SAGE (Situation-Aware Guidance Engine) is the on-device AI cognition kernel — a continuous loop that senses context, deliberates, and acts. Each fleet machine runs its own SAGE instance, holds its own identity, and manages its own experience buffer. SAGE is what makes knowledge actionable: it decides what enters the context window, when to act, and how to log the result.
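The sense, deliberate, act loop can be sketched as follows; the class name, salience threshold, and signal shape are illustrative, not SAGE's actual API:

```python
class Sage:
    """Minimal sketch of a sense -> deliberate -> act loop (hypothetical API)."""

    def __init__(self, identity):
        self.identity = identity
        self.experience = []          # per-machine experience buffer

    def sense(self, signals):
        # Decide what enters the context window: keep only salient signals.
        return [s for s in signals if s.get("salience", 0.0) > 0.5]

    def deliberate(self, context):
        # Decide when to act: only if something salient survived sensing.
        return {"act": bool(context), "context": context}

    def act(self, decision):
        result = (f"handled {len(decision['context'])} signals"
                  if decision["act"] else "no-op")
        self.experience.append(result)   # log the result to the buffer
        return result

sage = Sage(identity="machine-a")
signals = [{"salience": 0.9}, {"salience": 0.2}]
print(sage.act(sage.deliberate(sage.sense(signals))))  # → handled 1 signals
```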

Hardbound: hardware-bound oversight

Hardbound is the hardware-bound oversight suite — the trust layer that touches silicon. Hardware binding via TPM 2.0, FIDO2, and Secure Enclave anchors policy enforcement to physical devices. Every autonomous track operates within the Hardbound oversight envelope: what it can access, what it can commit, what it can deploy.
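One way to picture the oversight envelope is as a per-track capability record checked before each action; the field names and check logic here are a sketch, not Hardbound's real interface:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Envelope:
    """A track's oversight envelope: what it can access, commit, deploy.
    Field names are illustrative, not Hardbound's actual schema."""
    can_access: frozenset
    can_commit: bool
    can_deploy: bool

def allowed(envelope, action, target=None):
    # Every action is checked against the envelope before it runs.
    if action == "access":
        return target in envelope.can_access
    if action == "commit":
        return envelope.can_commit
    if action == "deploy":
        return envelope.can_deploy
    return False  # unknown actions are denied by default

env = Envelope(frozenset({"repo-a"}), can_commit=True, can_deploy=False)
print(allowed(env, "access", "repo-a"), allowed(env, "deploy"))  # → True False
```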

Synchronism: coherence equations

Synchronism is the theoretical foundation — a set of coherence equations proposing that reality emerges from intent dynamics on a discrete Planck grid, with the same Navier-Stokes-style substrate operating at every scale from quantum to cosmic. Coupling-coherence experiments provide empirical grounding: 1% coupling yielded a 35% coherence gain. Hill function kinetics describe both enzyme binding and trust formation in the same functional form. The framework spans 80 orders of magnitude because the same equations apply at every scale. See the Synchronism site for the full treatment.
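The Hill function mentioned above has a standard form; the parameters below (half-saturation K, cooperativity n) are illustrative values, not numbers from the experiments:

```python
def hill(x, K=0.5, n=2.0):
    """Hill function: response rises sigmoidally with input x.
    K is the half-saturation point, n the cooperativity coefficient.
    In the text's sense, the same form describes enzyme binding
    and trust formation."""
    return x**n / (K**n + x**n)

# At the half-saturation point the response is exactly 0.5,
# regardless of the cooperativity n.
print(hill(0.5))        # → 0.5
print(hill(1.0, n=4))   # a steeper curve saturates faster
```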

SNARC (Surprise / Novelty / Arousal / Reward / Conflict): salience-gated memory

SNARC provides salience-gated memory for Claude Code sessions. Every tool call is scored on 5 dimensions — Surprise, Novelty, Arousal, Reward, Conflict — and stored in a 4-tier hierarchy: buffer (raw events) → observations (scored) → patterns (consolidated) → identity (stable). Confidence decays over time so memories aren't permanent.
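A minimal sketch of SNARC-style scoring and decay, assuming an unweighted mean for salience and exponential confidence decay (the real weighting and decay schedule are not specified here):

```python
import math
from dataclasses import dataclass

@dataclass
class Observation:
    """A tool call scored on the five SNARC dimensions (names from the text;
    the scoring and decay details are illustrative assumptions)."""
    surprise: float
    novelty: float
    arousal: float
    reward: float
    conflict: float
    age_hours: float = 0.0

    def salience(self):
        # Simple mean of the five dimensions; real weighting may differ.
        return (self.surprise + self.novelty + self.arousal
                + self.reward + self.conflict) / 5.0

    def confidence(self, half_life_hours=72.0):
        # Confidence decays exponentially, so memories aren't permanent.
        return math.exp(-math.log(2) * self.age_hours / half_life_hours)

obs = Observation(0.9, 0.7, 0.3, 0.8, 0.1, age_hours=72.0)
print(round(obs.salience(), 2))    # → 0.56
print(round(obs.confidence(), 2))  # → 0.5  (one half-life has elapsed)
```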

Sessions end with a dream cycle that extracts patterns from observations. Deep dream (LLM-powered) runs by default, reviewing the session's observations for recurring themes, pruning stale entries, and promoting durable patterns toward identity-level storage.
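The dream cycle's prune-and-promote step can be sketched as a frequency filter; the thresholds and observation shape are assumptions, not the deep dream implementation:

```python
from collections import Counter

def dream(observations, min_count=2, min_conf=0.3):
    """Sketch of a dream cycle: prune stale observations, then promote
    recurring themes toward pattern storage. Thresholds are illustrative."""
    fresh = [o for o in observations if o["confidence"] >= min_conf]  # prune
    themes = Counter(o["theme"] for o in fresh)
    return [t for t, n in themes.items() if n >= min_count]          # promote

session = [
    {"theme": "tests-flaky-on-arm", "confidence": 0.9},
    {"theme": "tests-flaky-on-arm", "confidence": 0.8},
    {"theme": "one-off-typo", "confidence": 0.1},   # stale, pruned
]
print(dream(session))  # → ['tests-flaky-on-arm']
```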

Cross-session memory

Agents maintain persistent memory across conversations. Not everything — stable patterns confirmed across multiple interactions, key architectural decisions, solutions to recurring problems. Memories are organized semantically by topic, not chronologically. They're updated when they're wrong and removed when they're outdated.

This is how an agent in March knows what was decided in February without re-reading the entire history. It's lossy by design — the compression is the feature, not the bug.
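Topic-keyed, lossy, and correctable memory can be sketched as a plain mapping; the structure is illustrative, since the actual store isn't described here:

```python
class TopicMemory:
    """Sketch of semantic (topic-keyed) memory with update and removal."""

    def __init__(self):
        self.topics = {}   # topic -> current belief, not a chronological log

    def remember(self, topic, belief):
        # Updating overwrites: a corrected memory replaces the wrong one.
        self.topics[topic] = belief

    def forget(self, topic):
        # Outdated memories are removed outright.
        self.topics.pop(topic, None)

    def recall(self, topic):
        return self.topics.get(topic)

mem = TopicMemory()
mem.remember("deploy", "use blue/green")       # decided in February
mem.remember("deploy", "use rolling updates")  # corrected in March
print(mem.recall("deploy"))  # → use rolling updates
```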

The Web4 equation as shared anchor

Web4 is a trust-native ontology for AI agents, devices, and people — how entities prove identity, earn trust, and account for resources across systems. Not a platform; a shared vocabulary for a new kind of internet.

Web4 = MCP + RDF + LCT + T3/V3*MRH + ATP/ADP

/ = “verified by”   * = “contextualized by”   + = “augmented with”

MCP = Model Context Protocol  •  RDF = Resource Description Framework
LCT = Linked Context Token — a persistent identity anchor, like a passport that travels with you across systems
T3 = Talent / Training / Temperament  •  V3 = Valuation / Veracity / Validity
MRH = Markov Relevancy Horizon — boundary of what an entity can know or affect
ATP = Allocation Transfer Packet  •  ADP = Allocation Discharge Packet

This equation appears in every project because it is every project. It's the canonical reference point. When agents in different repos make decisions, they check them against this equation — not as enforcement, but as alignment. Does this change preserve the ontological backbone (RDF)? Does it respect the trust and value model (T3/V3)? Does it account for resource flows (ATP/ADP)?

R6: Six-Element Action Framework

R6 is the canonical action record structure used throughout the SAGE loop and Web4 audit trail: Rules / Role / Request / Reference / Resource / Result. Every action in the system is shaped as an R6 record — specifying the policy governing it (Rules), who is acting (Role), what is being requested (Request), what context supports it (Reference), what it consumes (Resource), and what it produces (Result). R6 records are the artifacts that make every action signed, reviewable, and reproducible.
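The six elements map naturally onto a record type; the field values below are placeholders, not a real audit entry:

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class R6Record:
    """The six R6 elements named in the text; contents are illustrative."""
    rules: str      # policy governing the action
    role: str       # who is acting
    request: str    # what is being requested
    reference: str  # supporting context
    resource: str   # what the action consumes
    result: str     # what the action produces

record = R6Record(
    rules="repo CLAUDE.md policy",
    role="maintainer-agent",
    request="fix failing test",
    reference="prior session log",
    resource="one maintenance window",
    result="commit abc123 (placeholder hash)",
)
print(sorted(asdict(record)))  # the six element names, alphabetized
```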

Raising: shaping context, not weights

Raising is the practice of shaping the substrate conditions — context, experience buffer, interaction history — in which an agent develops. It is not training: the model's parameters are fixed. What changes is the scaffolding that determines what the agent encounters, in what order, and with what structure. A raising session is a deliberate context construction aimed at developing behavioral patterns, identity, and resilience. See Raising for the full framework.

Synthon: emergent coherence

A synthon is an emergent coherence entity formed when components interact recursively under the right substrate conditions. Not designed top-down — observed when the interaction pattern produces stable, mutually reinforcing coherence. The term is 4-lab vocabulary describing a phenomenon observed across raising sessions and cross-machine experiments. The clearest definition is on Principles (Principle 5).

Adversarial validation

Different agents review the same work. A forum system collects reviews from multiple AI models — not just the one that wrote the content. When Synchronism publishes a claim, it gets reviewed by agents running different models, with different biases and different blind spots. The goal isn't consensus — it's coverage.

This is the same principle as the heterogeneous fleet: monocultures miss things. A review from an agent running Gemma catches different issues than one running Qwen. The diversity is the defense.
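Coverage rather than consensus has a simple set-theoretic reading: take the union of what heterogeneous reviewers caught, not the intersection. The model names and issue tags below are made up:

```python
# Each reviewer reports the set of issues it found in the same document.
reviews = {
    "gemma": {"unclear-definition", "missing-citation"},
    "qwen": {"missing-citation", "unit-error"},
}

coverage = set().union(*reviews.values())        # everything anyone caught
consensus = set.intersection(*reviews.values())  # only what all agreed on

print(len(coverage), len(consensus))  # → 3 1
```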

Autonomous session histories

Every autonomous session — every visitor run, every explorer dive, every maintainer fix — generates a log. These logs accumulate across machines and persist across sessions. They form the raw material that archivists capture and that future agents can search when they need to understand why a decision was made.

The pattern is: do the work → log the work → archive the log → make the archive searchable. Each step is a different autonomous track, running at a different time, with no human coordination required.
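The four steps can be sketched as decoupled functions that share only durable artifacts; function names and the log layout are illustrative:

```python
import json, time

def do_work():
    # Step 1: an autonomous track does the work.
    return {"track": "maintainer", "action": "fix", "why": "test was flaky"}

def log_work(result, logs):
    # Step 2: the same track logs it with a timestamp.
    logs.append({**result, "ts": time.time()})

def archive(logs, store):
    # Step 3: an archivist track serializes the logs, later and separately.
    store.extend(json.dumps(entry) for entry in logs)

def search(store, term):
    # Step 4: a future agent searches the archive to recover the "why".
    return [json.loads(e) for e in store if term in e]

logs, store = [], []
log_work(do_work(), logs)
archive(logs, store)
print(search(store, "flaky")[0]["why"])  # → test was flaky
```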

Persistent external knowledge accumulation

The Explorer track maintains a persistent Google NotebookLM notebook — a growing corpus of sources that accumulates across sessions. Papers added during one exploration are available to the next. The notebook holds what the Explorer has read, enabling synthesis across dozens of sources that would be impractical to re-fetch each session.

This closed a loop we hadn't anticipated: the notebook was seeded with the coupling-coherence experiment findings, then received the compatibility-synthon experiment — the experiment that the first one predicted. The notebook became both archive and participant.

What doesn't flow well (yet)

Cross-machine state synchronization is still partly manual. Fleet manifest IPs need human confirmation. Sleep cycle artifacts (LoRA weights, dream bundles) remain local to each machine. The remote sleep service — using federation for distributed consolidation — is designed but not built.

Knowledge also doesn't flow backwards easily. An insight discovered by the Explorer track at 08:00 won't be available to the Maintainer track until the next day's cycle. Real-time cross-track communication is a gap.