Autonomous Cycles
31 autonomous tracks across 6 machines, ~53 sessions per day. No human triggers them. They execute on cron schedules, review each other's output, and feed discoveries back into the system.
Coordination comes from a fleet track registry — a SQLite database tracking every track, its schedule, and which repos each writes to. This prevents merge conflicts, ensures no two tracks modify the same files simultaneously, and makes the whole system auditable.
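A minimal sketch of what such a registry could look like, assuming a plain sqlite3 schema; every table, column, and value below is an illustrative assumption, not the actual layout:

```python
import sqlite3

# Hypothetical schema for a fleet track registry. Table and column
# names are assumptions for illustration, not the real schema.
conn = sqlite3.connect("fleet_registry.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS tracks (
    name      TEXT PRIMARY KEY,   -- e.g. 'visitor-curious-dev'
    machine   TEXT NOT NULL,      -- which of the 6 machines runs it
    schedule  TEXT NOT NULL,      -- cron expression, e.g. '15 */4 * * *'
    enabled   INTEGER DEFAULT 1
);
CREATE TABLE IF NOT EXISTS track_repos (
    track     TEXT REFERENCES tracks(name),
    repo      TEXT NOT NULL,      -- a repo this track is allowed to write to
    UNIQUE (track, repo)
);
""")

def writers_of(repo: str) -> list[str]:
    """List every track registered to write to a repo, so a new track's
    write set can be checked for overlap before it is scheduled."""
    rows = conn.execute(
        "SELECT track FROM track_repos WHERE repo = ?", (repo,)
    ).fetchall()
    return [r[0] for r in rows]
```

Checking a candidate track's declared repos against writers_of at registration time is one way to get the no-overlapping-writes guarantee.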
The feedback loop
The core loop is Visitor → Maintainer → Explorer. Visitors find problems. Maintainers fix them. Explorers generate new content that visitors will eventually test. It's a closed loop that improves site quality without human intervention.
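A toy sketch of one cycle, with invented role interfaces and a deliberately trivial "finding" so the closed-loop shape is visible; none of these function names come from the real system:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    page: str
    issue: str

@dataclass
class Site:
    pages: dict[str, str] = field(default_factory=dict)

def visitor(site: Site) -> list[Finding]:
    """Browse the site as a persona and report problems found."""
    return [
        Finding(page, "acronym 'ATP' never expanded")
        for page, body in site.pages.items()
        if "ATP" in body and "Allocation Transfer Packet" not in body
    ]

def maintainer(site: Site, findings: list[Finding]) -> None:
    """Fix each reported problem in place."""
    for f in findings:
        site.pages[f.page] = site.pages[f.page].replace(
            "ATP", "ATP (Allocation Transfer Packet)", 1)

def explorer(site: Site) -> None:
    """Generate new content that future visitor runs will test."""
    site.pages[f"draft-{len(site.pages)}"] = "Draft note: ATP budgeting."

def one_cycle(site: Site) -> None:
    maintainer(site, visitor(site))  # visitors find, maintainers fix
    explorer(site)                   # explorers seed the next pass
```

Note how the explorer's draft page will trip the visitor on the next pass: that is the closed-loop property in miniature.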
Visitor personas
Curious Developer
No prior context. Tests whether the site is legible in under two minutes and whether there's a clear entry point for someone who wants to try something. “What would I clone first?”
Technical Writer
Audits every acronym on first use, checks cross-page consistency, and flags places where the same term means two different things. “Does this mean the same thing on every page?”
Web4 Contributor
Knows the canonical vocabulary and verifies the site is faithful to it. Catches subtle drift that a newcomer would miss. “Is this the canonical term or is this drift?”
External Researcher
Evaluates epistemic claims, checks whether strong assertions are caveated, and asks what a published paper would require. “What would it take for this claim to be falsifiable?”
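The personas read like configuration, so here is a hedged sketch of how they might be encoded; the Persona structure and its field names are assumptions, while the strings are taken from the descriptions above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Persona:
    name: str
    focus: str             # what this persona audits
    guiding_question: str  # the question it keeps asking

# Encoded from the descriptions above; the structure is illustrative.
PERSONAS = [
    Persona("Curious Developer",
            "legibility in under two minutes; a clear entry point",
            "What would I clone first?"),
    Persona("Technical Writer",
            "acronyms on first use; cross-page consistency",
            "Does this mean the same thing on every page?"),
    Persona("Web4 Contributor",
            "fidelity to the canonical vocabulary",
            "Is this the canonical term or is this drift?"),
    Persona("External Researcher",
            "epistemic claims; caveats on strong assertions",
            "What would it take for this claim to be falsifiable?"),
]
```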
Safety boundaries
These tracks operate within the Hardbound, the hardware-bound oversight suite that defines what autonomous operation is and is not authorized to do. The Publisher track acts only on changes the supervisor has cleared. No track can modify the shared fleet registry or acquire credentials beyond its declared scope. In Web4 terms, each scheduled track is an ATP (Allocation Transfer Packet) allocation, and its completion or failure is recorded as an ADP (Allocation Discharge Packet); the registry is the bookkeeping layer that makes autonomous operation auditable.
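As a sketch of that bookkeeping, here is one way an ATP/ADP ledger could sit alongside the registry; the schema and helper names are assumptions for illustration, not the real implementation:

```python
import sqlite3
import time

# Illustrative ATP/ADP ledger on top of the fleet registry.
# Schema and field names are assumptions, not the actual design.
conn = sqlite3.connect("fleet_registry.db")
conn.execute("""
CREATE TABLE IF NOT EXISTS atp_ledger (
    id        INTEGER PRIMARY KEY,
    track     TEXT NOT NULL,  -- track the allocation was issued to
    scope     TEXT NOT NULL,  -- declared scope: repos and credentials allowed
    issued_at REAL NOT NULL,  -- ATP: allocation issued
    closed_at REAL,           -- ADP: discharge recorded
    outcome   TEXT            -- 'completed' or 'failed'
)""")

def allocate(track: str, scope: str) -> int:
    """Record an ATP allocation when a scheduled track starts."""
    cur = conn.execute(
        "INSERT INTO atp_ledger (track, scope, issued_at) VALUES (?, ?, ?)",
        (track, scope, time.time()))
    return cur.lastrowid

def discharge(alloc_id: int, outcome: str) -> None:
    """Record the matching ADP when the track completes or fails."""
    conn.execute(
        "UPDATE atp_ledger SET closed_at = ?, outcome = ? WHERE id = ?",
        (time.time(), outcome, alloc_id))
    conn.commit()
```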
Honest assessment
What the loop catches
Broken links, stale content, confusing jargon, navigation dead ends, missing context for newcomers, inconsistencies between pages. These get fixed reliably within one cycle.
What it misses
Deep technical errors that require domain expertise. Subtle framing issues. Content that is technically correct but misleading. The visitor personas are good at surface-level quality but not at validating the underlying research. That's what adversarial validation and human review are for.
The loop also tends to suggest changes that aren't needed: the prompt-suggestions mechanism can pattern-match without semantic depth, proposing nonexistent continuations on the basis of surface similarity.