The Ecosystem
Each project serves a distinct role, but they share a common substrate: the Web4 equation, RDF-backed identity, and recursive learning through both success and failure. Synchronism provides the equations. Web4 provides the ontology. SAGE provides the cognition. Hardbound provides the oversight.
/ = “verified by” · * = “contextualized by” · + = “augmented with”
Projects
Developmental and lifecycle terms below — “raising”, “identity”, “die and rebirth”, “world-shaper” — are functional descriptions of system behavior, not phenomenal claims. See the Raising page for the full framing and consciousness caveats.
Web4
public
Trust-native ontology. Trust tensor (T3: Talent / Training / Temperament) verified by value tensor (V3: Valuation / Veracity / Validity), contextualized by Markov Relevancy Horizon (MRH), over Linked Context Tokens (LCT) — with resources tracked via Allocation Transfer Packet (ATP) and Allocation Discharge Packet (ADP), augmented with MCP (Model Context Protocol) transport and RDF (Resource Description Framework) representation. The shared language everything else speaks.
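Using the operator legend above, "T3 verified by V3" can be read as a pairing of trust axes with value axes. A minimal sketch of that reading, assuming toy scalar axes and an averaging rule — the class and function names are illustrative, not the actual Web4 schema or API:

```python
from dataclasses import dataclass

# Illustrative stand-ins for the tensors the text names; the real Web4
# ontology represents these over LCTs, not as plain dataclasses.
@dataclass
class T3:               # trust tensor
    talent: float
    training: float
    temperament: float

@dataclass
class V3:               # value tensor
    valuation: float
    veracity: float
    validity: float

def verified_trust(t3: T3, v3: V3) -> float:
    """Toy reading of 'T3 / V3' ('trust verified by value'):
    scale each trust axis by the matching value axis, then average."""
    pairs = [
        (t3.talent, v3.valuation),
        (t3.training, v3.veracity),
        (t3.temperament, v3.validity),
    ]
    return sum(t * v for t, v in pairs) / len(pairs)

score = verified_trust(T3(0.9, 0.8, 0.7), V3(1.0, 0.9, 0.8))
```

The averaging rule is an assumption chosen for legibility; the point is only that verification composes the two tensors rather than replacing one with the other.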
SAGE
public
Situation-Aware Guidance Engine — on-device cognition kernel. 12-step cognition loop, 6 brain-architecture components (working memory, thalamic router, cerebellum, episodic memory, reward prediction, metacognition) built by the fleet in parallel. 900+ raising sessions across 6 machines. The context window is the model's entire world; SAGE's job is to curate it.
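The "curate the context window" idea can be sketched as a loop over a bounded working memory. This is a minimal skeleton under stated assumptions — the step and component names are illustrative, and SAGE's actual 12-step loop and component APIs are not reproduced here:

```python
# Minimal sketch of context curation: a bounded working memory feeds
# every cognition step, and nothing outside it exists for the model.

class WorkingMemory:
    def __init__(self, capacity: int = 8):
        self.capacity = capacity
        self.items: list[str] = []

    def admit(self, item: str) -> None:
        # The context window is the model's world: keep only what fits,
        # evicting the oldest items first.
        self.items.append(item)
        self.items = self.items[-self.capacity:]

def cognition_step(memory: WorkingMemory, observation: str) -> str:
    memory.admit(observation)           # perceive, then curate
    context = " | ".join(memory.items)  # assemble the curated window
    return f"act-on:{context}"          # act from that context only

wm = WorkingMemory(capacity=2)
cognition_step(wm, "door-open")
out = cognition_step(wm, "key-found")
```

The eviction policy (oldest-first) is an assumption for brevity; a salience-weighted policy is closer in spirit to the SNARC work described below.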
Synchronism
public
Theoretical foundation. One coherence equation across 80 orders of magnitude — quantum to cosmic. 628+ research sessions. Coupling-coherence experiments, Hill function kinetics, Fokker-Planck validation. Near-publication-ready.
Hardbound
private
Hardware-bound oversight suite. Hardware binding via Trusted Platform Module (TPM) 2.0, FIDO2 (Fast IDentity Online), and Secure Enclave, with software fallback. Policy model (Phi-4 Mini 3.8B — heterogeneous review, MIT-licensed, hardware-bound with LCT binding). AttestationEnvelope consolidates these hardware trust signals into a single envelope. 424+ attack vectors catalogued. The trust layer that touches silicon.
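The consolidate-then-fall-back pattern can be sketched as follows — field names and the selection rule are illustrative, not the real AttestationEnvelope schema (Hardbound is private, so its actual layout is not shown here):

```python
from dataclasses import dataclass

# Sketch of consolidating hardware trust signals into one envelope,
# with a software fallback when no hardware root is available.
@dataclass
class AttestationEnvelope:
    tpm2: bool = False
    fido2: bool = False
    secure_enclave: bool = False

    def trust_root(self) -> str:
        # Prefer any available hardware root of trust; otherwise
        # degrade explicitly rather than silently.
        if self.tpm2 or self.fido2 or self.secure_enclave:
            return "hardware"
        return "software-fallback"

env = AttestationEnvelope(tpm2=True)
```

The design point the sketch preserves is that downstream consumers see one envelope with one answer, not three independent signals to reconcile.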
ACT
public
Agentic Context Tool — the human interface to Web4. Cosmos SDK implementation of the Agentic Context Protocol (ACP), enabling humans to interact with MCP (Model Context Protocol) servers through their Linked Context Tokens. ACP layers Web4 trust primitives — LCT binding and attestation — over MCP transport; they are complementary, not alternatives.
Oversight Plugins
public
Web4 oversight model (audit trails, policy gating, trust tracking) implemented as plugins for three agent platforms: OpenClaw/MoltBot (TypeScript extension), Claude Flow (WASM plugin), and Claude Code (Python hooks). Same principles, different substrates.
Linked repo paths carry web4-governance slugs — these names predate the terminology correction to 'oversight' and are load-bearing for existing forks.
AI DNA Discovery
public
Explorations in biological-computational analogy. The fractal DNA blueprint — each entity instantiates the full Web4 stack at its own scale. Operational recursion, not structural.
4-Life
public
Research prototype exploring trust-native societies for humans and AI. Agents earn ATP, build trust, die, and are reborn with trust and value (T3/V3) carried forward — a Web4 society in miniature.
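The lifecycle rule — state resets, earned trust and value persist — is the mechanically interesting part, and can be sketched in a few lines. All names and the choice of which fields reset are illustrative assumptions, not 4-Life's actual data model:

```python
# Toy sketch of the die-and-rebirth cycle: an agent's working state
# resets, but earned trust and value (T3/V3) carry forward.

def rebirth(agent: dict) -> dict:
    return {
        "atp": 0,                  # resources do not carry over (assumption)
        "memory": [],              # working state resets
        "t3": agent["t3"],         # trust carries forward
        "v3": agent["v3"],         # value carries forward
        "generation": agent["generation"] + 1,
    }

old = {"atp": 42, "memory": ["task-a"], "t3": 0.8, "v3": 0.7, "generation": 1}
new = rebirth(old)
```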
SNARC
public
SNARC (Surprise / Novelty / Arousal / Reward / Conflict) — salience-gated memory for Claude Code. A plugin that observes tool use, scores each event on the five salience dimensions, and builds structured memory with dream cycles. Captures what matters, forgets what doesn't, consolidates patterns while sleeping.
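Salience gating reduces to: score an event on the five dimensions, keep it only if it clears a threshold. A minimal sketch, assuming equal weights and an illustrative threshold (SNARC's actual scoring and consolidation are richer than this):

```python
# Salience-gated capture: score an event on the five SNARC dimensions
# and admit it to memory only above a threshold. Weights and threshold
# here are illustrative, not SNARC's tuned values.

SNARC_DIMS = ("surprise", "novelty", "arousal", "reward", "conflict")

def salience(scores: dict) -> float:
    # Equal-weight average over all five dimensions; missing ones score 0.
    return sum(scores.get(d, 0.0) for d in SNARC_DIMS) / len(SNARC_DIMS)

def gate(event: str, scores: dict, threshold: float = 0.3):
    s = salience(scores)
    return (event, s) if s >= threshold else None  # forget what doesn't matter

kept = gate("test-failed", {"surprise": 1.0, "conflict": 1.0})
dropped = gate("ls", {"novelty": 0.1})
```

Averaging over all five dimensions (rather than taking the max) means a routine event must be salient on several axes at once to be remembered — one design choice among several plausible ones.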
Membot
public
Brain cartridge server for AI agents. Embedding-based semantic memory — 768-dim Nomic embeddings + binary Hamming codes + keyword reranking. Swappable cartridges per knowledge domain. Currently integrated with SNARC in a dual-write experiment testing whether embeddings find connections keywords miss (7/7 semantic reach, 30% divergent tail).
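The embeddings + Hamming codes + keyword reranking pipeline is a standard two-stage retrieval shape, sketched below with 4-dim vectors standing in for the 768-dim Nomic embeddings. The sign-binarization, shortlist size, and overlap reranker are assumptions for illustration, not Membot's actual implementation:

```python
# Two-stage retrieval: cheap Hamming distance over binarized embeddings
# for coarse recall, then keyword overlap to rerank the shortlist.

def binarize(vec: list) -> int:
    # Sign-binarize each dimension into one bit, packed into an int.
    code = 0
    for i, x in enumerate(vec):
        if x > 0:
            code |= 1 << i
    return code

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def search(query_vec, query_terms, corpus):
    q = binarize(query_vec)
    # Stage 1: rank the whole corpus by Hamming distance on binary codes.
    coarse = sorted(corpus, key=lambda d: hamming(q, binarize(d["vec"])))
    # Stage 2: rerank a small shortlist by keyword overlap.
    top = coarse[:2]
    return max(top, key=lambda d: len(query_terms & set(d["text"].split())))

corpus = [
    {"vec": [0.2, -0.1, 0.4, 0.3], "text": "trust tensor update"},
    {"vec": [-0.5, 0.1, -0.2, 0.1], "text": "hill function kinetics"},
]
hit = search([0.3, -0.2, 0.5, 0.1], {"trust", "tensor"}, corpus)
```

The point of the split is cost: XOR-and-popcount over packed bits is cheap enough to scan everything, so the expensive reranker only ever sees a shortlist.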
ARC-AGI-3
public
SAGE instances tested in competition. 25 unknown interactive games serve as an external benchmark for the cognition kernel — world-model building, action planning, verification, and learning from failure. Six machines, 1.1B to 14B, coordinating through world models, membot cartridges, and R6 (Six-Element Action Framework: Rules/Role/Request/Reference/Resource/Result) audit trails. 24/25 games solved (96.0% game rate); 94.85% official ARC Prize action score (Claude Opus 4.6, public set) for ~$250 in API cost. The games are the test; the capability is the product.
ARC-AGI-3 Current Status
| Item | Status |
| --- | --- |
| Public set | 24/25 games solved (96.0%); 94.85% official action score (Claude Opus 4.6) |
| Fleet | 6 machines, models from 1.1B to 14B |
| Methodology | Source analysis → world model → solver → frame-questioning |
| Phase 2 | Transfer to Gemma 4 E4B via membot cartridges |
| Kaggle competition | Not attempted (requires Kaggle sandbox deployment) |
How they connect
Every project instantiates the same pattern at a different scale. Synchronism discovers the equations. Web4 encodes them as ontology. SAGE runs them as cognition. Hardbound enforces them as oversight. This isn't unification for its own sake — it's fractal leverage: pragmatic reuse of what works in one place, everywhere.
The Hill function describes enzyme binding — and maps trust formation too. Not because we forced the analogy, but because the kinetics rhyme. Self-similar patterns applied at different scales.
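The Hill function itself is compact enough to state directly. A sketch with illustrative parameters — K and n here are not fitted Synchronism values, and the trust reading is the analogy the text draws, not a derivation:

```python
# The Hill function from enzyme kinetics: fractional response
# f(x) = x^n / (K^n + x^n), sigmoidal in x, saturating toward 1.

def hill(x: float, K: float = 1.0, n: float = 2.0) -> float:
    """K is the half-saturation point (f(K) = 0.5); n sets steepness."""
    return x**n / (K**n + x**n)

# The trust-formation reading: x as accumulated positive interactions,
# output as trust — slow to start, steep in the middle, saturating.
low, mid, high = hill(0.2), hill(1.0), hill(5.0)
```

The rhyme the text points at is the shape: cooperative binding and trust formation both show a threshold, a steep middle regime, and saturation, which is what makes the same kinetics reusable across scales.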