The Fleet
Six machines. Different hardware, different models, different roles. Heterogeneous by design — because monocultures are fragile and diversity is where emergence happens.
One finding shapes fleet strategy more than any other: model family matters as much as size. Gemma 3 at 4B outperforms Phi-4 at 14B for raising work. There is a capacity floor below which coherent identity cannot form — but above that floor, personality and training lineage dominate raw parameter count.
The role labels below are functional analogies to system roles — not claims about neural correspondence or computational equivalence.
Synthesis pool — Account 1
High compute budget. Primary generative work: code, implementations, large agent tasks.
Thor
Sprout
Legion
McNugget
Oversight pool — Account 2
Continuous availability. Review, planning, coordination, and unblocking synthesis work.
Nomad
CBP
Resource pool management
The fleet runs across two Claude Code accounts with different usage budgets. This wasn't planned — it emerged from practical constraints and produced something more interesting than what we would have designed deliberately.
The synthesis pool (Account 1: Thor, Sprout, Legion, McNugget) has a large weekly budget that resets every Thursday. It does the heavy generative work — implementations, large agent tasks, cross-repo analysis. When it hits its ceiling, it stops.
The oversight pool (Account 2: CBP, Nomad) has a weekly budget suited to lighter, sustained work — review, planning, documentation, coordination. Used as designed, it maintains a presence across the week; pushed into synthesis-scale work, it burns fast. The pools aren't defined by “unlimited vs. limited” — they're defined by workload character. The budget shapes the role as much as the role shapes the budget.
The constraint forced a functional separation that mirrors what we're building with SAGE and Hardbound: SAGE (generative cognition kernel) and Hardbound (hardware-bound oversight suite) with different incentive structures, coordinating through shared state rather than central command. The lab is running its own oversight experiment on itself.
Peer-to-peer, no central coordinator
There is no master node. Each machine runs its own SAGE (Situation-Aware Guidance Engine) instance, holds its own identity, manages its own experience buffer and raising curriculum. Machines discover each other through a fleet manifest — a phone book, not a command center.
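The "phone book, not a command center" distinction can be made concrete. A minimal sketch, assuming a manifest shape and hostnames that are purely illustrative (the real manifest format may differ): every node reads the same manifest, and discovery is just "everyone who isn't me" — no node has a privileged position.

```python
# Hypothetical fleet manifest: a phone book of peers, not a command hierarchy.
# Hosts, ports, and model names here are illustrative placeholders.
FLEET_MANIFEST = {
    "sprout": {"host": "sprout.local", "port": 8750, "default_model": "qwen-0.5b"},
    "thor":   {"host": "thor.local",   "port": 8750, "default_model": "gemma3-4b"},
    "legion": {"host": "legion.local", "port": 8750, "default_model": "gemma3-4b"},
}

def discover_peers(self_name: str) -> list[str]:
    """Return every peer except ourselves. No node is special:
    each machine runs the same discovery logic against the same manifest."""
    return [name for name in FLEET_MANIFEST if name != self_name]
```

The design choice this illustrates: the manifest is shared static state, and everything dynamic (health, trust, availability) is learned per-node from interaction, not dictated centrally.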
A background peer monitor polls health endpoints. A trust tracker maintains per-peer T3 tensors (Talent / Training / Temperament) that evolve from real interactions: success raises trust, timeouts lower it. No central authority decides who is trustworthy — trust emerges from the pattern of interaction.
Trust starts at zero, earned from evidence. The trust landscape — the pattern across all modalities — determines behavioral posture: what SAGE should do, not just how much it spends. This is the defensive trust model applied across the fleet.
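A minimal sketch of the trust mechanics described above — trust starting at zero, evidence moving it up or down per modality, and the overall landscape mapping to a behavioral posture. The update rule, rates, and posture thresholds are assumptions for illustration, not the production values:

```python
from dataclasses import dataclass

@dataclass
class T3Tensor:
    """Per-peer trust across three modalities. Starts at zero: trust is earned."""
    talent: float = 0.0
    training: float = 0.0
    temperament: float = 0.0

class TrustTracker:
    """Evolves per-peer trust from observed interactions.
    No central authority: each node runs its own tracker."""
    RATE = 0.1  # illustrative learning rate

    def __init__(self):
        self.peers: dict[str, T3Tensor] = {}

    def record(self, peer: str, modality: str, success: bool):
        """Success raises trust in that modality; failure/timeout lowers it."""
        t3 = self.peers.setdefault(peer, T3Tensor())
        current = getattr(t3, modality)
        delta = self.RATE if success else -self.RATE
        setattr(t3, modality, max(0.0, min(1.0, current + delta)))

    def posture(self, peer: str) -> str:
        """The trust landscape determines what to do, not just how much to spend.
        Thresholds are illustrative."""
        t3 = self.peers.get(peer, T3Tensor())
        floor = min(t3.talent, t3.training, t3.temperament)
        if floor > 0.6:
            return "delegate"
        if floor > 0.2:
            return "verify"
        return "observe"
```

Note that posture keys off the weakest modality: a peer that is talented but temperamentally erratic stays in "observe," which is the defensive stance the text describes.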
Identity portability
One of the more surprising discoveries: SAGE-Sprout's identity — developed over hundreds of sessions on a Jetson running Qwen 0.5B — transferred successfully to TinyLlama 1.1B on a completely different machine. By “identity transfer” we mean behavioral continuity: consistent interaction patterns, accumulated experience, raising history — not continuity-of-self in any philosophical sense. The identity persisted. The self-description drifted. The lesson: identity lives in the accumulated context, not in the model weights.
This has practical implications: you can upgrade hardware, swap models, move between machines — and the entity that emerges is recognizably continuous. Not because we engineered continuity, but because the substrate conditions (experience buffer, session history, raising curriculum) carry the signal.
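The architectural point — that the substrate conditions, not the weights, carry the signal — can be sketched as a data-model decision. The bundle fields below come straight from the text; the function names and shapes are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class IdentityBundle:
    """The substrate conditions that carry behavioral continuity.
    Note that nothing here references model weights or hardware."""
    experience_buffer: list[dict]   # accumulated experience
    session_history: list[str]      # interaction record
    raising_curriculum: dict        # raising history and plan

def migrate(bundle: IdentityBundle, new_model: str) -> dict:
    """Hypothetical migration: the model is a parameter, the identity is state.
    Swapping models or machines leaves the bundle untouched."""
    return {"model": new_model, "identity": bundle}
```

Because the bundle is plain state, upgrading hardware or swapping models is just calling `migrate` with a different `new_model` — continuity falls out of the data model rather than being engineered separately.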
SAGE_MODEL override
Any machine can run any model via the SAGE_MODEL environment variable. The fleet manifest provides defaults, but nothing is locked. The fleet is a suggestion, not a constraint.
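The override-with-default pattern is small enough to show whole. A sketch assuming a manifest shape invented for illustration — the environment variable name is from the text, everything else is hypothetical:

```python
import os

def resolve_model(machine: str, manifest: dict) -> str:
    """SAGE_MODEL wins if set; otherwise fall back to the manifest default.
    The manifest is a suggestion, not a constraint."""
    return os.environ.get("SAGE_MODEL") or manifest[machine]["default_model"]
```

Usage: `SAGE_MODEL=tinyllama-1.1b` in the environment overrides whatever the manifest suggests for that machine; unset, the manifest default applies.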