Autonomous Cycles

31 autonomous tracks across 6 machines, ~53 sessions per day. No human triggers them. They execute on cron schedules, review each other's output, and feed discoveries back into the system.

Coordination comes from a fleet track registry — a SQLite database tracking every track, its schedule, and which repos each writes to. This prevents merge conflicts, ensures no two tracks modify the same files simultaneously, and makes the whole system auditable.
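
As a sketch of the shape such a registry could take (the table names, columns, and cron-string schedule format are assumptions for illustration, not the lab's actual schema):

```python
import sqlite3

# Hypothetical registry layout: table and column names are illustrative,
# not the lab's actual schema.
SCHEMA = """
CREATE TABLE IF NOT EXISTS tracks (
    name     TEXT PRIMARY KEY,   -- e.g. 'archivist', 'publisher'
    machine  TEXT NOT NULL,      -- which of the six machines runs it
    schedule TEXT NOT NULL       -- cron expression, e.g. '0 4 * * *'
);
CREATE TABLE IF NOT EXISTS track_repos (
    track    TEXT NOT NULL REFERENCES tracks(name),
    repo     TEXT NOT NULL,      -- a repo this track is allowed to write to
    UNIQUE (track, repo)
);
"""

def writers_for(conn: sqlite3.Connection, repo: str) -> list[str]:
    """List every track registered as a writer for a repo. Two tracks
    sharing a repo is exactly the conflict the registry exists to surface
    before it becomes a merge conflict."""
    rows = conn.execute(
        "SELECT track FROM track_repos WHERE repo = ?", (repo,)
    ).fetchall()
    return [r[0] for r in rows]

conn = sqlite3.connect("fleet_registry.db")
conn.executescript(SCHEMA)
```

Querying writers_for() before a track is granted its slot is where overlapping repo ownership would surface.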

Daily timeline

03:00–04:15
Supervisors (per-machine)
Each machine runs its own supervisor track daily (staggered across the window). Responsible for git hygiene, conflict resolution, build health, and keeping the environment clean for the day's runs. Six machines, six supervisors — no central watchdog.
04:00
Archivist
Captures session logs, research findings, and cross-repo state. Ensures nothing discovered yesterday is lost today.
04:30
Publisher
Pushes validated changes to public repos and explainer sites. Only publishes what the supervisor has cleared.
05:00
Visitor
Four personas visit the public explainer sites as if encountering them for the first time. Tests clarity, navigation, broken links, and whether the content makes sense to an outsider.
06:00
Maintainer
Acts on visitor feedback. Fixes broken links, clarifies confusing sections, updates stale content. The closer in the feedback loop.
06:30
Outreach
Monitors external channels, responds to issues, checks for community engagement. The lab's interface with the outside world.
08:00
Explorer
Deep research dives. Picks a queued topic, investigates it thoroughly, writes up findings. This is where new knowledge enters the system. The Explorer uses a persistent NotebookLM notebook that accumulates sources across sessions — papers, site pages, experiment results — enabling multi-source synthesis that a single WebFetch pass can't provide.
After each session
Dream Consolidation
After raising sessions and autonomous runs, a dream cycle reviews the completed session, extracting patterns from observations, pruning stale memory, and promoting durable insights toward identity-level storage. Deep dream (LLM-powered) runs by default. “Dream” is a functional analogy for the consolidation process, not a claim about cognitive equivalence.
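
As a rough illustration of the promote/prune/keep decision described above; the data shape and thresholds here are invented, and the actual cycle is LLM-driven rather than rule-based:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    text: str
    times_seen: int   # how often this pattern has recurred across sessions
    age_days: int     # days since it was last reinforced

def consolidate(observations, promote_after=3, prune_after=30):
    """Toy consolidation pass: promote recurring observations toward durable
    (identity-level) storage, keep recent ones in working memory, and prune
    the stale remainder. Thresholds are made up for illustration."""
    promoted, kept = [], []
    for obs in observations:
        if obs.times_seen >= promote_after:
            promoted.append(obs)          # durable insight
        elif obs.age_days <= prune_after:
            kept.append(obs)              # still-live working memory
        # everything else is stale and silently pruned
    return promoted, kept
```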

The feedback loop

The core loop is Visitor → Maintainer → Explorer. Visitors find problems. Maintainers fix them. Explorers generate new content that visitors will eventually test. It's a closed loop that improves site quality without human intervention.

Visitor personas

Curious Developer

No prior context. Tests whether the site is legible in under two minutes and whether there's a clear entry point for someone who wants to try something. “What would I clone first?”

Technical Writer

Audits every acronym on first use, checks cross-page consistency, and flags anywhere the same term means two things. “Does this mean the same thing on every page?”

Web4 Contributor

Knows the canonical vocabulary and verifies the site is faithful to it. Catches subtle drift that a newcomer would miss. “Is this the canonical term or is this drift?”

External Researcher

Evaluates epistemic claims, checks whether strong assertions are caveated, and asks what a published paper would require. “What would it take for this claim to be falsifiable?”

Safety boundaries

These tracks operate within the Hardbound, the hardware-bound oversight suite that defines what autonomous operation is and is not authorized to do. Publisher only acts on changes the supervisor has cleared. No track can modify the shared fleet registry or acquire credentials beyond its declared scope. In Web4 terms, each scheduled track is an ATP (Allocation Transfer Packet) allocation, and its completion or failure is recorded as an ADP (Allocation Discharge Packet). The registry is the bookkeeping layer that makes autonomous operation auditable.
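
One way the registry could record that ATP/ADP bookkeeping, sketched with made-up table and function names rather than the actual schema:

```python
import sqlite3
import time

# Hypothetical bookkeeping table; names are illustrative, not the real schema.
LEDGER = """
CREATE TABLE IF NOT EXISTS allocations (
    id      INTEGER PRIMARY KEY,
    track   TEXT NOT NULL,
    atp_at  REAL NOT NULL,   -- when the allocation (ATP) was issued
    adp_at  REAL,            -- when it was discharged (ADP); NULL while running
    outcome TEXT             -- 'completed' or 'failed'
);
"""

def allocate(conn, track):
    """ATP side: a scheduled track receives its allocation for this run."""
    cur = conn.execute(
        "INSERT INTO allocations (track, atp_at) VALUES (?, ?)",
        (track, time.time()),
    )
    return cur.lastrowid

def discharge(conn, allocation_id, outcome):
    """ADP side: the run completed or failed, and the result is auditable."""
    conn.execute(
        "UPDATE allocations SET adp_at = ?, outcome = ? WHERE id = ?",
        (time.time(), outcome, allocation_id),
    )

conn = sqlite3.connect("fleet_registry.db")
conn.executescript(LEDGER)
run_id = allocate(conn, "explorer")
discharge(conn, run_id, "completed")
conn.commit()
```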

Honest assessment

What the loop catches

Broken links, stale content, confusing jargon, navigation dead ends, missing context for newcomers, inconsistencies between pages. These get fixed reliably within one cycle.
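
For a sense of how mechanical these checks can be, a broken-link sweep is something a visitor run can do exhaustively. A minimal standard-library sketch (the list of URLs to sweep is left as an assumption):

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def broken_links(urls):
    """Return the URLs that fail to load: the kind of surface-level defect
    the visitor/maintainer loop reliably fixes within one cycle."""
    failed = []
    for url in urls:
        try:
            urlopen(Request(url, method="HEAD"), timeout=10)
        except (HTTPError, URLError, TimeoutError):
            failed.append(url)
    return failed
```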

What it misses

Deep technical errors that require domain expertise. Subtle framing issues. Content that is technically correct but misleading. The visitor personas are good at surface-level quality but not at validating the underlying research. That's what adversarial validation and human review are for.

The loop also tends to suggest changes that aren't needed: the prompt suggestions mechanism can pattern-match without semantic depth, proposing nonexistent continuations on the strength of surface similarity alone.