Unit Economics and Enterprise Value

Workshop economics — value-based pricing via UVP formula, code leverage, enterprise value framing, Universe Health Score, Universe Portfolio Layer, and the guarantee as a value bond.

Tags: economics, workshop, finance, unit-economics, leverage, uvp, portfolio

See also: universe | genesis-protocol | constitution

“The universe runs on physics. The Workshop runs on unit economics. Both are systems. Both have rules. Both reward people who understand them.” — Sal


The Foundation — Universe Value Potential as the Pricing Engine

The commercial side of Sector 137 operates by the same principle that governs the product side: everything maps onto fundamental equations.

The Workshop fee is not a fixed rate. It is a percentage of the value the Workshop will demonstrably deliver to the candidate’s universe. The measurement instrument is the Universe Value Potential (UVP) — a score calculated by the Observatory before Commission closes.

UVP = Projected ARR × Valuation Multiple × Likelihood Coefficient

Workshop Fee = 15–25% of UVP

Where Dream Outcome = Projected ARR × Valuation Multiple (the first factor in Hormozi’s value-equation numerator, now measured), and Perceived Likelihood = the Likelihood Coefficient (the second numerator factor, calculated from real signal rather than assumed).
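The UVP formula and fee band can be sketched directly. A minimal sketch, using the worked numbers from this section ($500k projected ARR, a 4× multiple, a 0.85 Likelihood Coefficient); function names are illustrative, not from a real pricing system.

```python
def universe_value_potential(projected_arr: float,
                             valuation_multiple: float,
                             likelihood: float) -> float:
    """UVP = Projected ARR x Valuation Multiple x Likelihood Coefficient."""
    assert 3.0 <= valuation_multiple <= 5.0, "doc assumes a 3-5x ARR multiple"
    assert 0.5 <= likelihood <= 0.95, "stated Likelihood Coefficient range"
    return projected_arr * valuation_multiple * likelihood

def workshop_fee(uvp: float, pct: float = 0.15) -> float:
    """Workshop fee = 15-25% of UVP."""
    assert 0.15 <= pct <= 0.25, "stated fee band"
    return uvp * pct

# Worked example from this section: $500k ARR, 4x multiple, 0.85 coefficient.
uvp = universe_value_potential(500_000, 4.0, 0.85)
fee = workshop_fee(uvp, 0.15)
print(uvp, fee)  # 1700000.0 255000.0
```

The assertions encode the bands stated in this section, so an out-of-range input fails loudly rather than producing a plausible-looking but off-model number.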

The Observatory produces UVP from four signal streams: Harlan’s customer willingness-to-pay conversations, Margot’s Intel Mode market reading (TAM × competitive pricing), Kano mission results (feature value weighting), and synthetic persona pricing scenarios. Full methodology: observatory.

Harlan presents the full calculation to the candidate. Transparency is deliberate — the candidate sees that the fee tracks their upside, not the crew’s cost.

The Universe Mapping (Detailed)

| Hormozi Term | Universe Concept | Physics Implementation | How We Measure It Now |
| --- | --- | --- | --- |
| Dream Outcome | The Other Side — the client’s business as it should be | Margot’s Intel Mode defines outcome in Days 1–3 | Projected ARR (Margot’s Year 1 capture estimate) × Valuation Multiple (3–5× ARR) |
| Perceived Likelihood | Risk perception → U_global health indicator | Higher U_global = perceived likelihood of success | Likelihood Coefficient (0.5–0.95) from Harlan’s Rolodex signal + synthetic persona price sensitivity |
| Time Delay reduction | The 21-Day Window compressed from 90 days | Sal’s pipeline discipline operates at maximum efficiency | Ruthless shaping, parallel tracks, WIP limits |
| Effort & Sacrifice elimination | The crew carries everything the domain expert can’t | Zero friction on client participation | Crew expertise deployed at 100%, client only decides and validates |

The guarantee is credible because every term of the equation is tuned by proven operational mechanics — and the fee is credible because Dream Outcome and Perceived Likelihood are now measured, not assumed.


The Workshop Unit Economics

Value-Based Revenue per Engagement

| Metric | Range | Driver |
| --- | --- | --- |
| UVP (typical mid-market SaaS) | $120,000 – $400,000+ | Projected ARR × 3–5× multiple × Likelihood Coefficient |
| Workshop fee (15–25% of UVP) | $24,000 – $100,000+ | Higher conviction candidates justify higher fee percentage |
| Typical fee (Phase 1 engagements) | $35,000 – $55,000 | Conservative 15–18% of UVP for first engagements |
| Engagement duration | 21 days | Fixed — compressed by opinionated stack |

Why the range is wide: A domain expert with a $500k ARR projection, a 4× valuation multiple, and a 0.85 Likelihood Coefficient produces a UVP of ~$1.7M. At 15%, the fee is $255k. The crew would never have charged this under fixed-fee pricing. Value-based pricing captures what the engagement actually delivers.

Crew Cost per Engagement

The crew cost is the sum of fully-loaded crew salaries divided across the engagement duration plus equipment overhead.

| Resource | Cost Model | Detail |
| --- | --- | --- |
| Sal | Full allocation × 21 days | Pipeline conductor, guarantee bearer, system owner |
| Kael | Full allocation × 21 days | Architecture + implementation, opinionated stack deployment |
| Margot | Full allocation × 21 days | Intel Mode at full intensity, outcome definition, customer signal |
| Wren | Full allocation × 21 days | Design system deployment, user experience quality gate |
| Harlan | ~80% allocation × 21 days | Commission, customer relationship activation, deployment prep |
| Mira | ~50% allocation × 21 days | Daily retrospectives, pattern observation, operational data |
| Infrastructure | ~$2,000 | Hosting, monitoring, deployment, data storage |

Typical crew cost per engagement: $60,000 – $90,000 (varying by crew utilization assumptions)
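The cost model above is a straightforward sum of allocation × daily rate × duration, plus infrastructure. A sketch with placeholder daily rates chosen so the total lands inside the stated $60k–$90k band; the rates are assumptions, not real salary figures.

```python
DAYS = 21       # fixed engagement duration
INFRA = 2_000   # hosting, monitoring, deployment, data storage

# (member, allocation, assumed fully-loaded daily rate -- illustrative only)
crew = [
    ("Sal",    1.00, 700),
    ("Kael",   1.00, 700),
    ("Margot", 1.00, 650),
    ("Wren",   1.00, 600),
    ("Harlan", 0.80, 650),
    ("Mira",   0.50, 500),
]

labor = sum(alloc * rate * DAYS for _, alloc, rate in crew)
total = labor + INFRA
print(round(total))  # 73820
```

With these assumed rates the total is ~$74k, near the middle of the stated band; the real number moves with the utilization assumptions, which is why the section quotes a range.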

The Gap — Why This Works

Wait: crew cost ($60,000–$90,000) can exceed typical Phase 1 revenue ($35,000–$55,000). How is this profitable?

The answer is in the leverage multiplier.


The Leverage Multiplier — Why Code Makes This Possible

The Workshop is only economically viable because of code leverage.

Recall from constitution the Four Types of Leverage:

| Type | Definition | Workshop Application |
| --- | --- | --- |
| Labor leverage | People working for you | Not scalable in Workshop (crew is already full) |
| Capital leverage | Money working for you | Not primary in Workshop model |
| Code leverage | Software running without marginal cost | THE MOAT |
| Media leverage | Content and brand scaling without effort | Secondary (reputation, track record) |

Why Code Leverage Makes the Workshop Economically Viable

The opinionated stack is pre-built code leverage.

| Component | Code Leverage Mechanism | Reuse Value |
| --- | --- | --- |
| Framework + base architecture | Kael doesn’t build from scratch; he deploys proven architecture | Saves 30% of timeline, compounds with each engagement |
| Design system + components | Wren doesn’t design from zero; she adapts a battle-tested system | Saves 40% of design cycle time |
| Deployment pipeline + monitoring | Sal doesn’t create operational systems from scratch | Deploy-to-production becomes routine, not risky |
| API templates + data schemas | Kael’s schemas are proven across three prior workshops | Design work is validation, not invention |

Engagement 1: High crew cost, high labor input, good learning

Engagement 2: Same crew cost, but 15% faster because code leverage is proven

Engagement 3: Same crew cost, but 25% faster because patterns are documented

Engagement 4: Same crew cost, but 30% faster because the team operates by muscle memory

Revenue per engagement stays roughly flat in the early phase (~$40k under conservative Phase 1 pricing). The crew cost stays roughly fixed (same salaries, same duration). But each engagement becomes more efficient, margins improve over time, and the learning compounds.
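The compounding-efficiency claim can be sketched numerically under one added assumption not stated in the source: crew time freed by faster delivery is redeployed elsewhere, so the effective cost attributable to an engagement scales with the fraction of the 21-day window actually consumed. The speedups are the 0/15/25/30% figures above; the base cost is the midpoint of the stated band.

```python
FEE = 40_000        # early-engagement fee used in this section
BASE_COST = 75_000  # assumed midpoint of the $60k-$90k crew-cost band

speedups = [0.00, 0.15, 0.25, 0.30]  # engagements 1-4, as stated above

for n, s in enumerate(speedups, start=1):
    # Hypothetical: freed time is redeployed, so attributable cost shrinks.
    effective_cost = BASE_COST * (1 - s)
    print(f"Engagement {n}: effective cost {effective_cost:,.0f}, "
          f"direct margin {FEE - effective_cost:,.0f}")
```

Note that direct margin stays negative in every row, which is consistent with this section's claim that the real profit center is the data moat rather than the fee.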

The Real Margin — The Data Moat

The actual profit center isn’t the $40k fee. It’s Mira’s retrospective data.

What Mira observes during each Workshop:

  • How does the crew behave under maximum stress?
  • Which handoffs are frictionless?
  • Where does the bottleneck emerge?
  • What patterns repeat across different domain experts?
  • How does crew performance degrade as fatigue sets in?
  • What architectural decisions prove fastest?
  • What design patterns work across domains?

This data is the most valuable operational asset the crew owns. It informs:

  1. Product roadmap — What features do Workshops consistently ask for that the Product should have?
  2. Crew process improvements — What handoffs are sticky? How can we reduce friction?
  3. Operational scaling — Can we run Workshops in parallel? How would the crew need to adapt?
  4. Pricing strategy — As efficiency compounds, where should revenue share increase?

The Workshop is a research lab that pays for itself.


Enterprise Value Framing — Day 21 Valuation

This is Margot’s calculation during the commission phase.

The client arrives at the Workshop thinking about software they need to build. They leave with something more valuable: a business with enterprise value.

The Valuation Math

Scenario: SaaS company targeting mid-market SMBs

Before the Workshop:

  • Domain expertise: $X
  • Rolodex: $Y
  • Customer relationships: $Z
  • Total asset value: Low (intangible, non-transferable)

After the Workshop (Day 21):

  • Domain expertise: Still $X (unchanged)
  • Rolodex: Still $Y (unchanged)
  • + Operational software system: $A
  • + Deployed infrastructure: $B
  • + Customer signal validation: $C
  • + Month 1 revenue run rate: $D
  • + Repeatable unit economics: $E
  • Total asset value: MUCH HIGHER

The ARR Multiplier

Typical SaaS valuations: 3–5x ARR for growth-stage companies.

If the client ships to their first three customers and lands $15k ARR:

  • 3x ARR multiple: $45,000 enterprise value
  • 5x ARR multiple: $75,000 enterprise value

The client paid $40,000 for the software. The software immediately becomes worth $45,000–$75,000 (depending on growth assumptions).

Day 21 valuation delta: +$5,000 to +$35,000 beyond the cost of the engagement.

This is why Harlan emphasizes: “You’re not buying software. You’re buying a better version of your business.”

The Workshop doesn’t just deliver a product. It delivers enterprise value growth.


The Guarantee as Value Bond

The 21-day guarantee is not raw risk the crew absorbs. It is a value promise, not a timeline promise.

The guarantee has evolved with value-based pricing. Under fixed-fee pricing, the guarantee protected the timeline: “Ship in 21 days or no fee.” Under UVP pricing, the guarantee protects the value delivered: “We priced this at 20% of the UVP we calculated. If the Universe Health Score shows we missed the value projection by more than 30%, the crew works Month 1 at cost until the gap closes.”

This is stronger than a timeline guarantee. It says: “We stand behind the number we calculated. Not just the ship date.”
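The trigger condition above reduces to a single comparison. A sketch, assuming the 30% tolerance stated here; the function name is illustrative.

```python
def guarantee_triggered(priced_uvp: float, measured_value: float,
                        tolerance: float = 0.30) -> bool:
    """True when measured value falls more than `tolerance` below the
    UVP the engagement was priced against (the 30% band stated here)."""
    shortfall = (priced_uvp - measured_value) / priced_uvp
    return shortfall > tolerance

# ~24% short of a $1.7M UVP: inside tolerance, no trigger.
print(guarantee_triggered(1_700_000, 1_300_000))  # False
# ~41% short: trigger -- crew works Month 1 at cost until the gap closes.
print(guarantee_triggered(1_700_000, 1_000_000))  # True
```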

How the Guarantee Works

Statement: “We priced the Workshop against the value we calculated. We deliver that value.”

Reality: This isn’t altruism. This is Sal’s operational discipline expressed as a commercial constraint — and Harlan’s UVP calculation expressed as a standing bet on their own accuracy.

Why the guarantee is safe:

  1. We only take candidates we can deliver for

    • Harlan’s ruthless filter (Days 1–3)
    • Margot’s outcome validation
    • If the candidate doesn’t fit the model, we reject them
    • We don’t take the money and hope to deliver
  2. The opinionated stack removes timeline risk

    • Same foundation every time
    • Proven architecture = predictable build time
    • No architectural debates that delay shipping
    • No “we’ll figure it out during development”
  3. Kael’s confidence is data-backed

    • He’s deployed this stack five times
    • Engagement #2 is faster than Engagement #1
    • By Engagement #3, he could probably ship in 18 days
    • He overestimates by one day per engagement for safety
  4. Sal’s pipeline discipline enforces the timeline

    • Four explicit pipeline states
    • WIP limits prevent scope creep
    • Shaped intake means no unplanned work
    • Circuit breaker: if it doesn’t ship this cycle, it gets killed

The guarantee is mathematically safe because the crew’s operational system can deliver it. The guarantee is commercially powerful because it shifts risk perception entirely to the crew.

From the client’s perspective: “If they don’t deliver, they don’t get paid. They wouldn’t offer that if they weren’t sure.” Perceived Likelihood moves from 60% to 95% instantly.

From the crew’s perspective: “We’re not taking risk we can’t manage. The guarantee is proof that our system works.”


The Refund Scenario — When the Guarantee Is Tested

Hypothetically: What if the crew misses the 21-day window?

This has not happened. (The Workshops described are fictional; this is theoretical.)

If it did happen, what would trigger it?

  1. Candidate wasn’t actually real — They said they had a Rolodex and they didn’t. By Day 10, it’s obvious the customer signal isn’t there. Sal calls it.
  2. Outcome wasn’t actually aligned — The domain expert and the crew have different visions of success. By Day 7, it’s clear. Margot catches it.
  3. Unexpected technical constraint — The client’s existing system has a dependency nobody disclosed. By Day 12, it’s blocking deployment. Kael escalates.
  4. Crew member becomes unavailable — Someone on the team has an emergency and can’t continue. Sal routes around it if possible, or calls it if he can’t.

If the crew truly cannot deliver:

Sal makes the call: “We’re not shipping on Day 21. Here’s what I recommend instead.”

Options:

  • Refund and keep working — Pay back 100% of the fee. Client keeps the software. Crew continues at $X/hour until launch. Risk transferred.
  • Renegotiate scope — Reduce scope so it ships on Day 21. Some features move to Month 1.
  • Refund and disengage — Full refund, crew disengages, client finds another path.

The guarantee’s power is that it forces these conversations to happen early, not on Day 20.


Universe Health Score — Post-Workshop Metrics

The Workshop doesn’t end on Day 21. The relationship with the universe continues into Month 3 through Harlan’s Partner Mode. The Universe Health Score is the measurement system that tracks whether the value delivered matches the UVP that was priced.

Metrics Tracked

| Metric | Day 30 Target | Day 60 Target | Day 90 Target |
| --- | --- | --- | --- |
| ARR | ≥25% of Projected ARR | ≥50% of Projected ARR | ≥75% of Projected ARR |
| Customer retention | First customers still active | No churn from launch cohort | <10% churn |
| NPS (from client) | N/A (too early) | ≥40 | ≥50 |
| Feature adoption | Core Must-be features used daily | Attractive features engaged | New feature requests emerging |
| Revenue/customer | First invoice paid | Second invoice cycle complete | Expansion signals visible |
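The ARR row of the health-score table is the only fully numeric checkpoint, so it can be sketched as a lookup. A minimal sketch; names and the example figures are illustrative.

```python
# Day -> required fraction of Projected ARR, per the table above.
ARR_TARGETS = {30: 0.25, 60: 0.50, 90: 0.75}

def arr_on_track(day: int, actual_arr: float, projected_arr: float) -> bool:
    """True if actual ARR meets the checkpoint target for that day."""
    return actual_arr >= ARR_TARGETS[day] * projected_arr

# Hypothetical universe: $500k projected ARR, $150k actual.
print(arr_on_track(30, 150_000, 500_000))  # needs >= $125k -> True
print(arr_on_track(60, 150_000, 500_000))  # needs >= $250k -> False
```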

Who Reads It

Harlan reviews at Month-1 check-in. Mira logs a retrospective that captures: did the UVP calculation prove accurate? Did Margot’s Projected ARR land within tolerance? Did the Likelihood Coefficient reflect actual friction or was it over/underestimated?

The feedback loop: Universe Health Score data feeds back into the UVP Methodology’s calibration. If the Observatory has consistently overestimated Likelihood Coefficients for a certain candidate profile, that gets corrected. The Portfolio’s data makes each subsequent UVP calculation more accurate.


Universe Portfolio Layer — The Accumulating Moat

Sector 137 is not a project shop that runs Workshops. It is building a portfolio of universes — each delivered, each monitored through Month 3, each feeding intelligence back to the Observatory.

The portfolio accumulation is the product’s deepest structural advantage:

| Engagement | What Gets Added to the Portfolio |
| --- | --- |
| Engagement 1 | Base UVP calibration data. First Likelihood Coefficient accuracy check. First pattern in Mira’s retrospective log. |
| Engagement 3 | Cluster patterns emerge — which domain archetypes convert, which Rolodex depths correlate with high Likelihood Coefficients. |
| Engagement 5 | The Observatory can predict UVP with <15% variance from Day -7 through Day 3 signal alone. Code leverage is compounding. |
| Engagement 10+ | Pattern recognition across industries and domain archetypes is the moat. The crew isn’t just executing — they’re betting on outcomes with calibrated confidence that no first-engagement competitor can match. |

Why this matters commercially: At Engagement 10, the crew doesn’t need to offer a 15% fee percentage. They can offer 20–25% because their Likelihood Coefficient accuracy has proven itself across a portfolio. The candidate pays more because the crew’s record reduces their perceived risk.

The Portfolio as Brand: Each completed universe is a publicly visible proof point. The Record is the portfolio — every universe the crew has delivered, available as evidence. Harlan uses it in every Commission conversation. “Here’s the third engagement that looked like yours. Here’s what shipped. Here’s what the ARR hit at Day 90.”


Pricing Strategy — The Arc

Current (Phase 1): Value-based, 15–18% of UVP. UVP calibration is early. We price conservatively to validate the methodology.

The progression:

| Phase | Engagements | Pricing Mechanism | Expected ASP |
| --- | --- | --- | --- |
| Phase 1 (Now) | 1–3 | 15–18% of UVP — conservative, methodology validation | $35k–$55k |
| Phase 2 (Year 2) | 4–6 | 18–22% of UVP — pattern clusters emerging, accuracy improving | $55k–$90k |
| Phase 3 (Year 3+) | 7+ | 20–25% of UVP — Portfolio moat proven, Likelihood Coefficients accurate to <15% variance | $80k–$200k+ |

As code leverage improves and the Observatory’s UVP accuracy strengthens, the percentage can increase without increasing crew cost. The fee grows because the crew’s proven track record makes each engagement worth more — not because the crew works harder.
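The pricing arc is a simple mapping from engagement count to fee band. A sketch of the table's bands; the function is an illustration, not a pricing policy.

```python
def pricing_phase(engagement: int) -> tuple[str, float, float]:
    """Return (phase, min fee pct, max fee pct) for an engagement number,
    using the band edges from the pricing-arc table."""
    if engagement <= 3:
        return ("Phase 1", 0.15, 0.18)
    if engagement <= 6:
        return ("Phase 2", 0.18, 0.22)
    return ("Phase 3", 0.20, 0.25)

print(pricing_phase(2))   # ('Phase 1', 0.15, 0.18)
print(pricing_phase(10))  # ('Phase 3', 0.2, 0.25)
```

The fee percentage rises with engagement count while crew cost stays flat, which is the mechanism the paragraph above describes: the track record, not extra labor, is what gets repriced.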

The moat articulation: Traditional agencies charge for time. We charge for value delivered, measured before we start. The first agency to do this well owns the category. See constitution for The Execution Moat.


The Economic Thesis — Why This Model Works

Sector 137’s core economic thesis:

  1. The Product scales horizontally — Infinite users, fixed crew, code leverage
  2. The Workshop scales through mastery — Same crew, better efficiency, higher margins with each engagement
  3. Together they form a moat — Workshop data improves the Product. Product features enable more sophisticated Workshops.

This is not SaaS + Services. This is one system with two commercial expressions:

  • Product: Efficient, scalable, serving the 80% of humans who fit the model
  • Workshop: High-touch, high-margin, serving the 20% who need bespoke application

The Workshop proves the Product’s architecture works at scale. The Product data informs Workshop strategy. Neither works without the other.


Risk Transfer — The Economic Safety

The guarantee transfers risk from client to crew. But the crew’s risk is bounded and manageable.

| Risk | Bounded By | Magnitude |
| --- | --- | --- |
| Timeline risk | Candidate filter + opinionated stack | Low if candidate is real |
| Technical risk | Proven architecture + Kael’s track record | Near-zero for in-scope work |
| Scope risk | Sal’s intake shaping + circuit breaker | Controlled by process |
| Market risk | Margot’s intel + customer signal validation | Medium, but visible by Day 7 |

The guarantee isn’t infinite risk. It’s bounded risk managed through process.


The Product Connection — Why Workshops Make the Product Better

Every Workshop teaches the crew something the Product should do differently:

Engagement 1:

  • “Design system needs better form component flexibility” → Product roadmap item
  • “Deployment pipeline needs staging environment option” → Product feature
  • “Client repeatedly asked for bulk operations” → Product capability gap

Engagement 2:

  • “The same component needed adaptation. Let’s build abstraction.” → Product infrastructure improvement
  • “This is the third Workshop with this pattern. Let’s formalize it.” → Product feature

By Engagement 5:

  • The Product has absorbed the best practices from five real engagements
  • The Product’s architecture is battle-tested
  • The Product’s roadmap is validated by real commercial use
  • The Workshop is faster because the Product’s capabilities have matured

The Workshop is the Product’s most rigorous QA environment.


Status: Active

Workshop economics are real. The unit economics work. The margin model is sound.

The guarantee is mathematically safe because Sal’s operational discipline makes it so.

The pricing is strategically positioned to attract the right clients while preserving crew sustainability.

The data moat is the real profit center — every engagement produces learnings that compound.

Together, the Product and Workshop form one economic system with two commercial outlets, both profitable, both feeding each other.


“The Workshop isn’t a side business. It’s how we prove the system works. The economics are clean. The risk is bounded. The learning is infinite. This is how a small crew scales without becoming a big company.” — Sal