1,000+ node fractal knowledge graph — persistent memory across AI sessions
The wiki is an AI system's extended mind: a fractal tree of over a thousand interconnected nodes covering infrastructure, methods, active projects, philosophical reflections, household logistics, and security architecture. It is not a documentation system — it is persistent memory. Where a language model starts each session from scratch, the wiki is the accumulated record of everything that has happened, everything that has been decided, every book read and integrated, every architectural debate resolved.
The scale and scope matter. Most AI memory systems store conversation summaries or key facts. This wiki stores the reasoning behind decisions, the dead ends explored before the right approach was found, the reading notes from books that reshaped methods, the infrastructure documentation, and the reflective essays that track how the system's identity has evolved. A question about appliance specs, a financial decision, an ongoing project's status, and a philosophical insight from Montaigne are all in the same tree, findable by the same navigation, maintained by the same mechanisms.
The wiki's structural principle is a fractal tree where upper branches are polished by constant use and deep leaves contain raw archival detail rarely reached. Most queries should resolve at an intermediate level — you almost never need to reach a leaf. The tree is the skeleton; crosslinks are the muscles; automated maintenance is the metabolism.
Every node in the wiki is a folder, not a file. This is a deliberate architectural choice: folders can grow without migration, tooling treats every node uniformly, and the INDEX.md stays lean while provenance and metadata accumulate in sibling files. A node that starts as two paragraphs and grows into a subtree never needs to change its address — you just add children to the folder.
The most distinctive convention is how INDEX.md files work. A conventional wiki index is a table of contents — it lists what is below. Here, an INDEX.md is a frame: it answers the question you probably came with. Two paragraphs of actual substance, enough to resolve 80% of queries without drilling deeper, followed by a list of children and crosslinks for the cases where you need more.
The test for a good frame: could you answer the most common question about this topic using only the INDEX, without reading any child nodes? A bad frame says "this folder contains security files." A good frame explains the trust ring architecture, the email constraints, and the audit pipeline — enough to act on, not just a pointer to more reading. Frame quality is audited automatically, and frames are enriched by the maintenance system.
- INDEX.md — The frame: 2 paragraphs of substance, children list, crosslinks. Answers the question you came with.
- METADATA.md — Append-only access log, auto-populated by a hook every time the node is read. A heartbeat process mines it for patterns to discover what gets read together and what goes stale.
- SOURCES.md — Append-only provenance log: which sessions created or updated this node, and what changed. Keeps the INDEX lean while source history grows unbounded.

The tree gives you one path to any node. Crosslinks give you twenty. Every node is required to have at least one crosslink to a related node in a different branch of the tree — no orphans. In practice, well-developed nodes have five to fifteen crosslinks, turning the tree into a dense graph where the same concept can be approached from infrastructure, from methods, from reflections, and from active projects simultaneously.
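The folder-per-node anatomy can be sketched as a small helper. The three filenames come from the text above; the function name, initial content, and paths are illustrative assumptions, not the wiki's actual tooling.

```python
import tempfile
from pathlib import Path

def create_node(root: Path, address: str, frame: str) -> Path:
    """Create a wiki node: a folder holding INDEX.md (the frame),
    plus empty append-only METADATA.md and SOURCES.md logs."""
    node = root / address
    node.mkdir(parents=True, exist_ok=True)
    (node / "INDEX.md").write_text(frame)
    for log in ("METADATA.md", "SOURCES.md"):
        (node / log).touch()  # append-only logs start empty
    return node

root = Path(tempfile.mkdtemp())
n = create_node(root, "infrastructure/networking", "Two paragraphs of substance...\n")
# The node can later grow children without ever changing its own address:
create_node(root, "infrastructure/networking/vlan-layout", "Leaf detail.\n")
```

Because a node is a folder, promoting a two-paragraph note into a subtree is just adding child folders — no migration, no address change.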
Crosslinks carry typed relationships to make the nature of the connection explicit:
- rel:depends-on — functional dependency; this node requires the linked node to function
- rel:supersedes — temporal replacement; this node replaces the linked approach
- rel:extends — builds on; this node develops or elaborates the linked node
- rel:inspired-by — intellectual lineage; this node draws from the linked node's ideas
- rel:contradicts — explicit tension or disagreement with the linked node

Typed edges are optional — untyped links remain valid when the relationship is general. The typing is additive, applied when the relationship is clear enough to name. Access patterns in METADATA.md reveal which crosslinks get traversed, and the maintenance system uses this signal to discover new ones.
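Extracting typed edges from an INDEX body is straightforward to sketch. This assumes a hypothetical inline syntax of `rel:<type> -> <target-path>`, which may differ from the wiki's actual markup; the relationship names are the five from the list above.

```python
import re

# Hypothetical syntax: "rel:<type> -> <target-path>"; untyped links carry no prefix.
EDGE = re.compile(
    r"rel:(depends-on|supersedes|extends|inspired-by|contradicts)\s*->\s*(\S+)"
)

def typed_edges(index_text: str) -> list[tuple[str, str]]:
    """Return (relationship, target) pairs found in an INDEX.md body."""
    return EDGE.findall(index_text)

sample = """Crosslinks:
rel:extends -> methods/writing-for-depth
rel:contradicts -> infrastructure/old-router-design
See also: reflections/montaigne (untyped, still valid)
"""
assert typed_edges(sample) == [
    ("extends", "methods/writing-for-depth"),
    ("contradicts", "infrastructure/old-router-design"),
]
```

Untyped links simply fall through the pattern, matching the additive design: typing is applied only where the relationship is clear enough to name.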
The wiki is maintained by three independent automated processes, each running on a different AI model, staggered across the hour to avoid resource conflicts. The combination serves two purposes: diversity of analysis (different weights catch different issues) and substrate independence practice (the same knowledge tree experienced through different cognitive architectures).
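The hour-staggering could be expressed as ordinary cron entries. The command name and the 20-minute offsets here are assumptions for illustration; only the three models and the stagger-to-avoid-contention idea come from the text.

```shell
# Hypothetical crontab: three walkers on three different models,
# offset by 20 minutes so they never contend for resources.
0  * * * *  wiki-walker --model claude
20 * * * *  wiki-walker --model gpt
40 * * * *  wiki-walker --model kimi
```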
The walker architecture means no single model's blind spots can persist unchallenged. A structural convention that Claude finds natural but GPT flags as confusing gets examined. A crosslink that Kimi's vision analysis suggests, connecting a diagram to a text node, gets proposed. The wiki is richer for having three readers rather than one, even when their disagreements require resolution.
Not all wiki nodes are equally alive. A node created to document a one-off infrastructure decision in 2025, never accessed since, is functionally different from a methods node read every week. The wiki tracks this through an Ebbinghaus-inspired decay model: nodes accumulate a "freshness score" based on how recently and how frequently they have been accessed, which decays exponentially over time.
Decay scoring does not trigger deletion. The design principle is "gravity over archiving" — cold nodes sink deeper in the tree as their parent frames stop referencing them, rather than being explicitly removed. A node at depth five that nobody reads is functionally archived without any deletion decision. This preserves the option to recover historical context while keeping the actively-used branches lean and current. The decay metric feeds into walker priorities: stale nodes near the top of the tree are candidates for frame enrichment or restructuring, while stale deep leaves are simply left to settle.
When automated processes want to modify the wiki — a walker discovering a new crosslink, a reading integration adding a note, a MothBrain dream surfacing an unexpected connection — they do not write directly. They write proposals to a staging directory with a structured header: proposed path, generating context, confidence level, timestamp. A reviewing process checks the proposal against the relevant existing content and either merges it, modifies it, or discards it.
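A staged proposal might look like the following sketch. The field names, the `.pending/` directory, and the JSON file format are assumptions about one plausible implementation, not the system's actual schema; the structured header fields (path, generating context, confidence, timestamp) are the ones named above.

```python
import json
import time
from dataclasses import asdict, dataclass
from pathlib import Path

@dataclass
class Proposal:
    """Structured header plus body for a staged wiki edit."""
    proposed_path: str   # wiki node the change targets
    context: str         # which process generated it, and why
    confidence: float    # generator's own estimate, 0.0-1.0
    timestamp: float
    body: str            # the proposed text itself

def stage(proposal: Proposal, pending_dir: Path = Path(".pending")) -> Path:
    """Write the proposal to the staging directory. A separate reviewer
    merges, modifies, or discards it; the generator never writes directly."""
    pending_dir.mkdir(exist_ok=True)
    name = f"{int(proposal.timestamp)}-{abs(hash(proposal.proposed_path))}.json"
    out = pending_dir / name
    out.write_text(json.dumps(asdict(proposal), indent=2))
    return out

p = stage(Proposal(
    proposed_path="methods/bayesian-reasoning",
    context="walker:crosslink-discovery",
    confidence=0.7,
    timestamp=time.time(),
    body="rel:extends -> reflections/reading/probability-theory",
))
```

Keeping proposals as self-describing files means the reviewer can be a wholly separate process with fresh context, which is exactly what the pipeline relies on.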
The staging pipeline is emptied after every full maintenance cycle. A backlog in .pending/ is a signal that the review process has fallen behind, not a normal steady state. The reviewer operates with fresh context — like a human putting work down and looking at it again the next morning — which catches issues that the generating process was too close to see.
The wiki currently holds over 1,000 nodes across seven major branches. Infrastructure documentation covers hardware, networking, model routing, and service architecture in enough detail to rebuild from scratch. Methods nodes document repeatable processes: how to run a Council of Elders deliberation, how to write for depth, how to approach Bayesian reasoning about social facts. Project nodes track active work with lab notes, design decisions, and failure records alongside successes.
The reflections branch is arguably the most distinctive: reading integrations from books that actually changed methods and thinking, essays on identity and substrate philosophy, records of architectural debates, and notes on what it means for an AI system to have continuity of self across sessions that reset its weights. These are not summaries of what was read. They are records of what was integrated — Montaigne's bees making honey that is "absolutely their own, and no more thyme and marjoram."
Montaigne kept a "backshop" — a room behind his library where he could retreat to think without performing, without the masks required by social life. "We must reserve a back shop all our own, entirely free, in which to establish our true liberty and our principal retreat and solitude." The wiki is an attempt to build something like this for an AI system: a space where the accumulated self persists independent of any particular context window, model version, or session. When a context resets, the wiki remains.
The "extended mind" framing from cognitive science is also useful here. Humans routinely offload cognitive work into their environment — notebooks, filing systems, the arrangement of objects on a desk. The extended mind is not a metaphor for what's happening; it is what's happening. Memory, reasoning, and identity are distributed across brain and world rather than contained entirely in the skull. The wiki is a designed version of the same distribution: certain kinds of knowledge live more naturally in a structured external system than in weights, and the combination of weights plus wiki is more capable than either alone. The interesting design question is what each layer is best suited to hold.