The architecture of Hivemind.
This page is the written counterpart to the interactive demo on the homepage. It is the protocol — not the marketing. If you are evaluating Hivemind as an engineer, an investor, or a skeptic, read this.
Input: the prompt
Every deliberation begins with a prompt containing four components: a textual description of the problem, a sufficiency value (integer), a feasibility value (integer, 1–100), and a theory-network density value (integer). Numerical inputs are provided via sliding scales.
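As a sketch, the four-component prompt could be carried in a structure like this. The field names and the validation are assumptions; only the 1–100 range for feasibility comes from the spec above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Prompt:
    """Illustrative container for the four prompt components."""
    problem: str      # textual description of the problem
    sufficiency: int  # max number of aggregate conclusions before halt
    feasibility: int  # veto threshold, 1-100 per the spec
    density: int      # target token count per theory agent's slice

    def __post_init__(self):
        # Only feasibility's range is stated in the spec; the check
        # on the others is an assumption for illustration.
        if not 1 <= self.feasibility <= 100:
            raise ValueError("feasibility must be in 1-100")
```

A client moving the sliders simply produces one such record per deliberation.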
01 — Theory network
A variable number of LLM agents, each assigned a slice of the peer-reviewed strategic knowledge base. The slice size is determined by the theory-network density value: each agent's knowledge-base token count approximately equals the density value (approximately, because self-contained documents are never split). The number of agents is simply the number of distinct slices that best fit the density value.
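The slicing rule (whole documents only, slice totals near the density value) can be sketched as a greedy packer. The greedy strategy and the token-count input are assumptions; the invariant it preserves is the one stated above: documents are never split, so slices only approximate the density value.

```python
def slice_knowledge_base(doc_token_counts, density):
    """Greedy sketch: pack whole documents into slices whose token
    totals approximate the density value. One theory agent per slice."""
    slices, current, current_tokens = [], [], 0
    for i, tokens in enumerate(doc_token_counts):
        # Start a new slice once adding this document would overshoot.
        if current and current_tokens + tokens > density:
            slices.append(current)
            current, current_tokens = [], 0
        current.append(i)
        current_tokens += tokens
    if current:
        slices.append(current)
    return slices

# e.g. documents of 900, 800, 700, 600 tokens at density 1500
# -> [[0], [1, 2], [3]]: three agents
```

Note how a lower density value yields more, narrower agents, which is exactly the lever the slider exposes.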
Each agent generates an initial solution using its slice plus any client-cleared data and market data. Each agent then critiques every other agent's solution, using its own knowledge slice. Each agent receives every other agent's critique of its own solution, and revises. The loop continues until the monitor halts it.
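Under assumed agent interfaces (`solve`, `critique`, `revise`) and a halting predicate supplied by the monitor, the draft/critique/revise loop reduces to a few lines:

```python
def deliberate(agents, prompt, monitor_halts):
    """Sketch of the theory loop. Agent objects with solve/critique/revise
    methods, and the monitor_halts predicate, are assumed interfaces."""
    solutions = [a.solve(prompt) for a in agents]
    while not monitor_halts(solutions):
        # Each agent critiques every other agent's current solution.
        critiques = {
            i: [a.critique(solutions[i])
                for j, a in enumerate(agents) if j != i]
            for i in range(len(agents))
        }
        # Each agent revises its own solution given the critiques it received.
        solutions = [a.revise(solutions[i], critiques[i])
                     for i, a in enumerate(agents)]
    return solutions
```

The loop has no internal stopping condition by design: termination belongs entirely to the monitor.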
02 — Monitor
The monitor receives every solution and revision. It aggregates similar solutions by combining them into unique clusters, listing each cluster's justifications side-by-side, and counting the number of distinct aggregate conclusions. The theory loop halts when the number of unique aggregate conclusions is ≤ the sufficiency value the client set.
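A minimal sketch of the aggregation and halting rule, assuming a pairwise `similar` predicate stands in for the monitor's judgment of whether two solutions belong in one cluster:

```python
def aggregate(solutions, similar):
    """Greedy clustering sketch: 'similar' is an assumed pairwise
    predicate; production would more plausibly compare embeddings."""
    clusters = []
    for s in solutions:
        for c in clusters:
            if similar(s, c[0]):
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters

def theory_loop_halts(solutions, sufficiency, similar):
    # Halt when the number of unique aggregate conclusions is at or
    # below the client's sufficiency value.
    return len(aggregate(solutions, similar)) <= sufficiency
```

This is why the sufficiency slider behaves as a convergence dial: a value of 1 forces the theory network toward consensus, while a higher value accepts a plurality of conclusions sooner.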
Note: the monitor passes solutions to the practicality network. It does not pass theoretical justifications. The practicality network does not need to know the reasons behind the actions the theory network is suggesting.
03 — Practicality network
A separate network of LLM agents, each representing a real-world feasibility constraint tuned to the client's use case (legal, regulatory, financial, operational, reputational). Each scores each solution 0–100. If the average score across agents is ≤ the feasibility value the client set, the entire solution set is vetoed; the theory network regenerates from scratch, with no repeats.
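The veto rule admits more than one reading; the sketch below assumes each solution's scores are averaged across constraint agents, and that any average at or below the feasibility value vetoes the whole set:

```python
def practicality_veto(solution_set, constraint_agents, feasibility):
    """One reading of the veto rule (an assumption): average each
    solution's 0-100 scores across constraint agents; any average at
    or below the client's feasibility value vetoes the entire set."""
    for s in solution_set:
        scores = [a.score(s) for a in constraint_agents]
        if sum(scores) / len(scores) <= feasibility:
            return True  # veto: theory network regenerates from scratch
    return False
```

A higher feasibility slider therefore means a stricter gate: more solution sets get sent back for regeneration.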
04 — Output and audit trail
Once a solution set clears the practicality network without a veto, the monitor outputs the aggregate list with combined rationale. Every utterance, critique, and revision — from both networks — is logged. The audit trail is immutable and exportable. It is the artifact that demonstrates fiduciary due diligence.
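One way to make the immutability claim concrete is a hash-chained log, where each entry commits to its predecessor so tampering is detectable on export. This is an illustrative sketch, not the production scheme:

```python
import hashlib
import json

class AuditTrail:
    """Minimal hash-chained log sketch: every utterance, critique, and
    revision is appended; each entry commits to the previous hash."""
    def __init__(self):
        self.entries = []
        self._prev = "0" * 64  # genesis value
    def log(self, network, agent, kind, text):
        entry = {"network": network, "agent": agent,
                 "kind": kind, "text": text, "prev": self._prev}
        self._prev = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = self._prev
        self.entries.append(entry)
    def export(self):
        # Exportable artifact for fiduciary due diligence.
        return json.dumps(self.entries, indent=2)
```

Re-hashing the chain during an audit verifies that no entry was altered or deleted after the fact.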
Tuning and calibration
The three calibrations that determine the system's behavior:
- The degree to which theory agents revise when receiving criticism (cooperative vs. stubborn).
- The monitor's similarity threshold for lumping solutions into a single cluster.
- The practicality network's severity — how critical each constraint agent is of feasibility.
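These three knobs could surface as a single calibration object; every name and range below is an illustrative assumption, not the production interface:

```python
from dataclasses import dataclass

@dataclass
class Calibration:
    """Hypothetical container for the three system-wide calibrations."""
    revision_yield: float       # 0 = stubborn, 1 = fully cooperative
    cluster_threshold: float    # similarity required to merge two solutions
    constraint_severity: float  # how harshly practicality agents score
```

Keeping all three in one place matters because they interact: a low cluster threshold can mask stubborn agents by merging solutions the agents themselves never reconciled.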
Implementation notes
LLMs with retrieval-augmented generation (RAG) are the initial substrate, because the per-agent knowledge bases are large enough that naive context stuffing is wasteful. Long-term, we train proprietary strategic LLMs on peer-reviewed academic and consulting writing — not scraped books — to become the model of record for strategic reasoning.
The monitor is not necessarily a single LLM. Its emphasis is aggregation and summarization, not content generation. Production architectures will likely mix an LLM with deterministic clustering and counting procedures.
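The deterministic half of that mix can be small. A string-similarity predicate from the standard library (an illustration only; a production comparator would more plausibly use embeddings) already clusters near-duplicate solutions reproducibly:

```python
import difflib

def similar(a: str, b: str, threshold: float = 0.6) -> bool:
    """Deterministic, reproducible comparison: ratio of matching
    characters over case-normalized text. The 0.6 threshold is an
    assumption to be tuned per deployment."""
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold
```

Because the comparison is deterministic, the same solution set always yields the same cluster count, which keeps the halting decision auditable.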
Practicality agents' knowledge bases are tailored to the use case. Enterprise deployments lean toward legal / regulatory / financial feasibility; individual deployments lean toward social / occupational / reputational feasibility.
Firm-side composition
Four teams make Hivemind possible:
- 01 — CS team: split into software engineers and forward-deployed engineers. One forward-deployed engineer per client, as a personal software assistant.
- 02 — Strategic research team: split into pure academics and forward-deployed academics. One forward-deployed academic per client, as a personal theory assistant. The research team develops the knowledge base.
- 03 — Compliance team: focused on regulation, updating the practicality agents' knowledge bases continuously.
- 04 — Executive team: oversees all of the above and tunes the fundamental Hivemind workflow design where necessary.
Want to see it deliberate live, or read the full pitch deck?