Public artefact · v1.0
The methodology behind DomanskiAI

Headcount
Zero.

The methodology behind two solo-built AI products. Production AI without an engineering team — the operating system, codified.

// FOUR LAYERS · BREAK ONE · BREAK THE SYSTEM
§ 01 · Thesis

Why this exists.

Software businesses are built around the assumption that engineers are the labour. Headcount is the lever. Hiring is the bottleneck. Scaling output means scaling team size, which means scaling capital, which means scaling fundraising or bootstrapping pain.

That assumption broke in 2024–2025.

AI agents can now do most of what an engineering team does — write, review, test, deploy, monitor, iterate — when they are orchestrated by one technically literate operator who understands the system. The bottleneck is no longer hiring. It is direction.

Headcount Zero is the methodology that operationalises that shift. It is how one person runs the work of a team without the team.

It is not a tool. It is not a stack. It is the operating system.

§ 02 · Four layers

Every system has four. Each one breaks the whole.

01
// The human

Direction

Human · 1

The technical director. One person. Sets scope, reviews output, kills bad work, decides architecture. Doesn't write code line-by-line; does write the briefs. The director is the only human in the loop.

02
// The brief layer

Orchestration

Artefact

Every unit of work is a brief. A structured document: acceptance criteria · dependencies · deliverables · evidence required to close. Sized to a single agent execution. Briefs are the contract between the director and the labour layer — the artefact that scales.
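The shape of a brief can be sketched as a small schema. A minimal illustration in Python — field names are assumptions for illustration, not the production brief format:

```python
from dataclasses import dataclass, field

@dataclass
class Brief:
    """One unit of work, sized to a single agent execution."""
    title: str
    acceptance_criteria: list[str]                          # what "done" means, verifiably
    dependencies: list[str] = field(default_factory=list)   # briefs that must close first
    deliverables: list[str] = field(default_factory=list)   # artefacts the agent must produce
    evidence_required: list[str] = field(default_factory=list)  # proof needed to close

    def ready(self, closed: set[str]) -> bool:
        # A brief is executable only once every dependency has closed.
        return all(dep in closed for dep in self.dependencies)
```

The point of the schema is the contract: an agent receives nothing outside the brief, and the brief cannot close without the evidence it names.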

03
// The agents

Labour

Fungible

Multi-agent stacks, currently Claude Code. Different agent specialisations: investigation · building · testing · QA · content · evidence-gathering. Parallel where independent. Sequential where dependent. The labour layer is fungible — agents improve every six months; the methodology must not depend on which model is current.
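"Parallel where independent, sequential where dependent" is a dependency-ordered scheduler. A minimal sketch — brief names and the wave structure are illustrative assumptions, not the production stack:

```python
def schedule(briefs: dict[str, set[str]]) -> list[list[str]]:
    """Group briefs into waves: briefs within a wave run in parallel,
    waves run sequentially, and no brief runs before its dependencies close."""
    waves: list[list[str]] = []
    closed: set[str] = set()
    while len(closed) < len(briefs):
        # A brief is runnable when all of its dependencies have closed.
        wave = [b for b, deps in briefs.items()
                if b not in closed and deps <= closed]
        if not wave:
            raise ValueError("circular dependency between briefs")
        waves.append(sorted(wave))
        closed.update(wave)
    return waves

# Investigation has no dependencies; building depends on it;
# testing and QA both depend on the build and can run in parallel.
print(schedule({
    "investigate": set(),
    "build": {"investigate"},
    "test": {"build"},
    "qa": {"build"},
}))
```

Because the scheduler only sees brief names and dependencies, swapping the underlying model changes nothing here — which is the fungibility claim in miniature.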

04
// The gate

Evidence

Mechanical

Before any output ships, evidence gates verify it meets the threshold. Evidence is not "the agent says it's done." Evidence is: the test passed, the deploy succeeded, the metric moved, the artefact passed validation. Gates are mechanical, not aspirational. This layer is what separates production Headcount Zero from "I let Claude write some code."
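"Mechanical, not aspirational" translates directly into code: each gate is a boolean fact, never an agent's self-report, and missing evidence fails closed. A minimal sketch with assumed check names:

```python
def evidence_gate(evidence: dict[str, bool]) -> bool:
    """Ship only when every required signal is mechanically true.
    Absent evidence counts as failure, never as 'probably fine'."""
    required = ("tests_passed", "deploy_succeeded", "artefact_validated")
    return all(evidence.get(check, False) for check in required)

# All signals verified: the output ships.
assert evidence_gate({"tests_passed": True,
                      "deploy_succeeded": True,
                      "artefact_validated": True})
# An agent claiming completion without evidence does not pass the gate.
assert not evidence_gate({"tests_passed": True})
```

The `evidence.get(check, False)` default is the whole design: the gate cannot be talked past, only satisfied.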

§ 03 · Economics

What changes when this works.

A product team of ten ships maybe one significant feature per fortnight. A Headcount Zero operation ships one to three meaningful changes per day.

// CONVENTIONAL · 10-PERSON ENG TEAM
R10–14m /yr
Salaries, benefits, equipment, management overhead — all-in, SA market.
// HEADCOUNT ZERO · MZANSIEDGE SCALE
~R10–80k /mo
Token spend + one director. No coordination tax, no hiring cycle, no holiday cover, no notice periods.

Output is not 10× lower because there are no engineers. It is comparable or higher on the surfaces where the methodology is applied.

This is not a marginal efficiency claim. It is structural.

And the team stays perpetually current. New models, frameworks, and APIs ship weekly. Most engineering orgs are 6–12 months behind on integrations because eng cycles are slow. A Headcount Zero operation integrates new capabilities at the speed they ship — investigation brief on Tuesday, integration shipped by Friday. The methodology absorbs the model churn instead of being slowed by it.

§ 04 · Boundary

What this is NOT.

  • NOT "AI-assisted development." That is engineers using Copilot. Headcount Zero replaces the engineers on the surface where it is applied.
  • NOT low-code / no-code. That is buying a tool. Headcount Zero is operating a system.
  • NOT a vendor stack. Claude is the dominant labour layer right now; that will change. The methodology survives the change.
  • NOT applicable to every product surface. Some workloads still need humans. The methodology applies where the work is iterative, well-bounded, and verifiable.
  • NOT training. The director learns by operating, not by being lectured at.
§ 05 · Worked example

MzansiEdge.

The receipt.

Product. Sports-betting intelligence platform. Generates evidence-backed picks, narratives, social content, and publishes across web, Telegram, and email.

Headcount. 1 (Paul).

// The four layers in practice

  • Direction · Paul. ~3 hrs/day on the platform.
  • Orchestration · Brief library in Notion. ~40 active brief types.
  • Labour · Multi-agent Claude Code stack. 6+ specialised agents.
  • Evidence · Composite framework (Edge V2, 7-signal). Threshold gates per tier.

// What got replaced

  • Writers · 2–3 → narrative-gen + verdict agents
  • Analysts · 1–2 → evidence-pack pipelines
  • Editor / QA · 1 → validator + arbiter loops
  • Publishing · 1 → channel automation
  • Designer · 0.5 → skill-prompted image gen

~5–6 FTEs replaced. Total cost: token spend + one director. This is the proof, not the claim.

§ 06 · Install

Want this installed in your business?

Three engagement shapes — Workshop (half-day), Diagnostic (3 weeks), Retainer (monthly). Pricing on the discovery call.

Speak to Paul