Unit Economics and Enterprise Value
Workshop economics — value-based pricing via the UVP formula, code leverage, enterprise value framing, the Universe Health Score, the Universe Portfolio Layer, and the guarantee as a value bond.
See also: universe | genesis-protocol | constitution
“The universe runs on physics. The Workshop runs on unit economics. Both are systems. Both have rules. Both reward people who understand them.” — Sal
The Foundation — Universe Value Potential as the Pricing Engine
The commercial side of Sector 137 operates by the same principle that governs the product side: everything maps onto fundamental equations.
The Workshop fee is not a fixed rate. It is a percentage of the value the Workshop will demonstrably deliver to the candidate’s universe. The measurement instrument is the Universe Value Potential (UVP) — a score calculated by the Observatory before Commission closes.
UVP = Projected ARR × Valuation Multiple × Likelihood Coefficient
Workshop Fee = 15–25% of UVP
Where Dream Outcome = Projected ARR × Valuation Multiple and Perceived Likelihood = the Likelihood Coefficient — the two numerator terms of Hormozi’s value equation, here measured from real signal rather than assumed.
The Observatory produces UVP from four signal streams: Harlan’s customer willingness-to-pay conversations, Margot’s Intel Mode market reading (TAM × competitive pricing), Kano mission results (feature value weighting), and synthetic persona pricing scenarios. Full methodology: observatory.
Harlan presents the full calculation to the candidate. Transparency is deliberate — the candidate sees that the fee tracks their upside, not the crew’s cost.
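The fee arithmetic can be sketched in a few lines. This is a minimal sketch, not the Observatory’s actual tooling; the 4× multiple and the specific inputs are illustrative, chosen to match the mid-market example later in this section:

```python
def universe_value_potential(projected_arr: float,
                             valuation_multiple: float,
                             likelihood_coefficient: float) -> float:
    """UVP = Projected ARR x Valuation Multiple x Likelihood Coefficient."""
    assert 3 <= valuation_multiple <= 5, "typical SaaS multiple range"
    assert 0.5 <= likelihood_coefficient <= 0.95, "Observatory coefficient range"
    return projected_arr * valuation_multiple * likelihood_coefficient

def workshop_fee(uvp: float, fee_pct: float) -> float:
    """Workshop fee = 15-25% of UVP."""
    assert 0.15 <= fee_pct <= 0.25, "fee percentage band"
    return uvp * fee_pct

# Illustrative: $500k projected ARR, 4x multiple (assumed), 0.85 likelihood
uvp = universe_value_potential(500_000, 4, 0.85)  # ~$1.7M
fee = workshop_fee(uvp, 0.15)                     # ~$255k
```

The transparency Harlan practices is exactly this: every input in the call above is shown to the candidate, not hidden behind a rate card.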
The Universe Mapping (Detailed)
| Hormozi Term | Universe Concept | Physics Implementation | How We Measure It Now |
|---|---|---|---|
| Dream Outcome | The Other Side — the client’s business as it should be | Margot’s Intel Mode defines outcome in Days 1–3 | Projected ARR (Margot’s Year 1 capture estimate) × Valuation Multiple (3–5× ARR) |
| Perceived Likelihood | Risk perception → U_global health indicator | Higher U_global = perceived likelihood of success | Likelihood Coefficient (0.5–0.95) from Harlan’s Rolodex signal + synthetic persona price sensitivity |
| Time Delay reduction | The 21-Day Window compressed from 90 days | Sal’s pipeline discipline operates at maximum efficiency | Ruthless shaping, parallel tracks, WIP limits |
| Effort & Sacrifice elimination | The crew carries everything the domain expert can’t | Zero friction on client participation | Crew expertise deployed at 100%, client only decides and validates |
The guarantee is credible because every term of the equation is tuned by proven operational mechanics — and the fee is credible because Dream Outcome and Perceived Likelihood are now measured, not assumed.
The Workshop Unit Economics
Value-Based Revenue per Engagement
| Metric | Range | Driver |
|---|---|---|
| UVP (typical mid-market SaaS) | $120,000 – $400,000+ | Projected ARR × 3–5× multiple × Likelihood Coefficient |
| Workshop fee (15–25% of UVP) | $24,000 – $100,000+ | Higher conviction candidates justify higher fee percentage |
| Typical fee (Phase 1 engagements) | $35,000 – $55,000 | Conservative 15–18% of UVP for first engagements |
| Engagement duration | 21 days | Fixed — compressed by opinionated stack |
Why the range is wide: A domain expert with a $500k ARR projection, a 4× valuation multiple, and a 0.85 Likelihood Coefficient produces a UVP of $1.7M. At 15%, the fee is $255k. The crew would never have charged this under fixed-fee pricing. Value-based pricing captures what the engagement actually delivers.
Crew Cost per Engagement
The crew cost is the sum of fully-loaded crew salaries divided across the engagement duration plus equipment overhead.
| Resource | Cost Model | Detail |
|---|---|---|
| Sal | Full allocation × 21 days | Pipeline conductor, guarantee bearer, system owner |
| Kael | Full allocation × 21 days | Architecture + implementation, opinionated stack deployment |
| Margot | Full allocation × 21 days | Intel Mode at full intensity, outcome definition, customer signal |
| Wren | Full allocation × 21 days | Design system deployment, user experience quality gate |
| Harlan | ~80% allocation × 21 days | Commission, customer relationship activation, deployment prep |
| Mira | ~50% allocation × 21 days | Daily retrospectives, pattern observation, operational data |
| Infrastructure | ~$2,000 | Hosting, monitoring, deployment, data storage |
Typical crew cost per engagement: $60,000 – $90,000 (varying by crew utilization assumptions)
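A back-of-envelope version of the cost model, using the allocations from the table above. The daily rates are placeholders invented for illustration, not real figures; only the allocations, the 21-day duration, and the ~$2,000 infrastructure line come from the table:

```python
# (crew member, allocation, assumed fully-loaded daily rate in USD — placeholder)
CREW = [
    ("Sal",    1.00, 800),
    ("Kael",   1.00, 800),
    ("Margot", 1.00, 800),
    ("Wren",   1.00, 700),
    ("Harlan", 0.80, 800),
    ("Mira",   0.50, 700),
]
ENGAGEMENT_DAYS = 21
INFRASTRUCTURE = 2_000  # hosting, monitoring, deployment, data storage

def crew_cost() -> float:
    """Fully-loaded labor across the engagement plus equipment overhead."""
    labor = sum(alloc * rate * ENGAGEMENT_DAYS for _, alloc, rate in CREW)
    return labor + INFRASTRUCTURE

# With these placeholder rates: $87,890 — inside the $60k-$90k range
```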
The Gap — Why This Works
Wait. Crew cost exceeds revenue. How is this profitable?
The answer is in the leverage multiplier.
The Leverage Multiplier — Why Code Makes This Possible
The Workshop is only economically viable because of code leverage.
Recall from constitution the Four Types of Leverage:
| Type | Definition | Workshop Application |
|---|---|---|
| Labor leverage | People working for you | Not scalable in Workshop (crew is already full) |
| Capital leverage | Money working for you | Not primary in Workshop model |
| Code leverage | Software running without marginal cost | THE MOAT |
| Media leverage | Content and brand scaling without effort | Secondary (reputation, track record) |
Why Code Leverage Makes the Workshop Economically Viable
The opinionated stack is pre-built code leverage.
| Component | Code Leverage Mechanism | Reuse Value |
|---|---|---|
| Framework + base architecture | Kael doesn’t build from scratch; he deploys proven architecture | Saves 30% of timeline, compounds with each engagement |
| Design system + components | Wren doesn’t design from zero; she adapts a battle-tested system | Saves 40% of design cycle time |
| Deployment pipeline + monitoring | Sal doesn’t create operational systems from scratch | Deploy-to-production becomes routine, not risky |
| API templates + data schemas | Kael’s schemas are proven across three prior workshops | Design work is validation, not invention |
Engagement 1: High crew cost, high labor input, good learning
Engagement 2: Same crew cost, but 15% faster because code leverage is proven
Engagement 3: Same crew cost, but 25% faster because patterns are documented
Engagement 4: Same crew cost, but 30% faster because the team operates by muscle memory
The Workshop fee per engagement is set by the UVP, not by hours worked. The crew cost stays roughly fixed (same salaries, same duration). So each engagement becomes more efficient, margins improve over time, and the learning compounds.
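The compounding can be made concrete. A sketch that treats the speedups quoted above as effort savings against a roughly fixed cost base — the $87,500 base cost is illustrative (within the $60k–$90k range), and the interpretation of "speedup as redeployable effort" is an assumption:

```python
# Speedup per engagement from code leverage (the progression above)
SPEEDUP = {1: 0.00, 2: 0.15, 3: 0.25, 4: 0.30}

BASE_COST = 87_500  # illustrative crew cost per engagement

def delivery_effort_cost(engagement: int) -> float:
    """Cost of the effort actually consumed to deliver, as leverage compounds.

    The crew still commits the same 21-day window; the saved effort becomes
    slack for quality, parallel tracks, or Product work.
    """
    return BASE_COST * (1 - SPEEDUP[engagement])
```

By this sketch, Engagement 4 consumes roughly $61k of effort against the same cost base — the 30% that code leverage freed up is where the margin improvement lives.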
The Real Margin — The Data Moat
The actual profit center isn’t the engagement fee. It’s Mira’s retrospective data.
What Mira observes during each Workshop:
- How does the crew behave under maximum stress?
- Which handoffs are frictionless?
- Where does the bottleneck emerge?
- What patterns repeat across different domain experts?
- How does crew performance degrade as fatigue sets in?
- What architectural decisions prove fastest?
- What design patterns work across domains?
This data is the most valuable operational asset the crew owns. It informs:
- Product roadmap — What features do Workshops consistently ask for that the Product should have?
- Crew process improvements — What handoffs are sticky? How can we reduce friction?
- Operational scaling — Can we run Workshops in parallel? How would the crew need to adapt?
- Pricing strategy — As efficiency compounds, where should revenue share increase?
The Workshop is a research lab that pays for itself.
Enterprise Value Framing — Day 21 Valuation
This is Margot’s calculation during the commission phase.
The client arrives at the Workshop thinking about software they need to build. They leave with something more valuable: a business with enterprise value.
The Valuation Math
Scenario: SaaS company targeting mid-market SMBs
Before the Workshop:
- Domain expertise: $X
- Rolodex: $Y
- Customer relationships: $Z
- Total asset value: Low (intangible, non-transferable)
After the Workshop (Day 21):
- Domain expertise: Still $X (unchanged)
- Rolodex: Still $Y (unchanged)
- + Operational software system: $A
- + Deployed infrastructure: $B
- + Customer signal validation: $C
- + Month 1 revenue run rate: $D
- + Repeatable unit economics: $E
- Total asset value: MUCH HIGHER
The ARR Multiplier
Typical SaaS valuations: 3–5x ARR for growth-stage companies.
If the client ships to their first three customers and lands $15k ARR:
- 3x ARR multiple: $45,000 enterprise value
- 5x ARR multiple: $75,000 enterprise value
The client paid a $40,000 fee (an illustrative Phase 1 figure). The software immediately becomes worth $45,000–$75,000 (depending on growth assumptions).
Day 21 valuation delta: +$5,000 to +$35,000 beyond the cost of the engagement.
This is why Harlan emphasizes: “You’re not buying software. You’re buying a better version of your business.”
The Workshop doesn’t just deliver a product. It delivers enterprise value growth.
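The Day-21 math above can be checked in a few lines. The $15k ARR and $40,000 fee are the section’s own illustrative figures:

```python
def enterprise_value(arr: float, multiple: float) -> float:
    """Standard SaaS framing: enterprise value = ARR x revenue multiple."""
    return arr * multiple

ARR, FEE = 15_000, 40_000
low, high = enterprise_value(ARR, 3), enterprise_value(ARR, 5)  # $45k / $75k
delta_low, delta_high = low - FEE, high - FEE                   # +$5k / +$35k
```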
The Guarantee as Value Bond
The 21-day guarantee is not risk the crew absorbs blindly. It is a value promise, not a timeline promise.
The guarantee has evolved with value-based pricing. Under fixed-fee pricing, the guarantee protected the timeline: “Ship in 21 days or no fee.” Under UVP pricing, the guarantee protects the value delivered: “We priced this at 20% of the UVP we calculated. If the Universe Health Score shows we missed the value projection by more than 30%, the crew works Month 1 at cost until the gap closes.”
This is stronger than a timeline guarantee. It says: “We stand behind the number we calculated. Not just the ship date.”
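The value-bond trigger reduces to a simple rule. The 30% tolerance comes from the guarantee statement above; the function and argument names are illustrative:

```python
def guarantee_triggered(priced_value: float,
                        delivered_value: float,
                        tolerance: float = 0.30) -> bool:
    """True if measured value misses the priced UVP projection by more than
    the tolerance, obligating the crew to work Month 1 at cost."""
    return delivered_value < priced_value * (1 - tolerance)

# Priced at $100k of projected value: $69k delivered trips the bond,
# $71k does not.
```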
How the Guarantee Works
Statement: “We priced the Workshop against the value we calculated. We deliver that value.”
Reality: This isn’t altruism. This is Sal’s operational discipline expressed as a commercial constraint — and Harlan’s UVP calculation expressed as a standing bet on their own accuracy.
Why the guarantee is safe:
1. We only take candidates we can deliver for
   - Harlan’s ruthless filter (Days 1–3)
   - Margot’s outcome validation
   - If the candidate doesn’t fit the model, we reject them
   - We don’t take the money and hope to deliver
2. The opinionated stack removes timeline risk
   - Same foundation every time
   - Proven architecture = predictable build time
   - No architectural debates that delay shipping
   - No “we’ll figure it out during development”
3. Kael’s confidence is data-backed
   - He’s deployed this stack five times
   - Engagement #2 is faster than Engagement #1
   - By Engagement #3, he could probably ship in 18 days
   - He overestimates by one day per engagement for safety
4. Sal’s pipeline discipline enforces the timeline
   - Four explicit pipeline states
   - WIP limits prevent scope creep
   - Shaped intake means no unplanned work
   - Circuit breaker: if it doesn’t ship this cycle, it gets killed
The guarantee is mathematically safe because the crew’s operational system can deliver it. The guarantee is commercially powerful because it shifts risk perception entirely to the crew.
From the client’s perspective: “If they don’t deliver, they don’t get paid. They wouldn’t offer that if they weren’t sure.” Perceived Likelihood moves from 60% to 95% instantly.
From the crew’s perspective: “We’re not taking risk we can’t manage. The guarantee is proof that our system works.”
The Refund Scenario — When the Guarantee Is Tested
Hypothetically: What if the crew misses the 21-day window?
This has not happened. (The Workshops described are fictional; this is theoretical.)
If it did happen, what would trigger it?
- Candidate wasn’t actually real — They said they had a Rolodex and they didn’t. By Day 10, it’s obvious the customer signal isn’t there. Sal calls it.
- Outcome wasn’t actually aligned — The domain expert and the crew have different visions of success. By Day 7, it’s clear. Margot catches it.
- Unexpected technical constraint — The client’s existing system has a dependency nobody disclosed. By Day 12, it’s blocking deployment. Kael escalates.
- Crew member becomes unavailable — Someone on the team has an emergency and can’t continue. Sal routes around it if possible, or calls it if he can’t.
If the crew truly cannot deliver:
Sal makes the call: “We’re not shipping on Day 21. Here’s what I recommend instead.”
Options:
- Pay back 100% and continue working at standard rates — Client keeps the software. Crew continues at $X/hour until launch. Risk transferred.
- Renegotiate scope — Reduce scope so it ships on Day 21. Some features move to Month 1.
- Full refund and disengage — Full refund, crew disengages, client finds another path.
The guarantee’s power is that it forces these conversations to happen early, not on Day 20.
Universe Health Score — Post-Workshop Metrics
The Workshop doesn’t end on Day 21. The relationship with the universe continues into Month 3 through Harlan’s Partner Mode. The Universe Health Score is the measurement system that tracks whether the value delivered matches the UVP that was priced.
Metrics Tracked
| Metric | Day 30 Target | Day 60 Target | Day 90 Target |
|---|---|---|---|
| ARR | ≥25% of Projected ARR | ≥50% of Projected ARR | ≥75% of Projected ARR |
| Customer retention | First customers still active | No churn from launch cohort | <10% churn |
| NPS (from client) | N/A (too early) | ≥40 | ≥50 |
| Feature adoption | Core Must-be features used daily | Attractive features engaged | New feature requests emerging |
| Revenue/customer | First invoice paid | Second invoice cycle complete | Expansion signals visible |
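The ARR row of the table can be expressed as a simple check. The targets come straight from the table; the function and field names are illustrative, not the crew’s actual tooling:

```python
# Day -> minimum fraction of Projected ARR that must be realized
ARR_TARGETS = {30: 0.25, 60: 0.50, 90: 0.75}

def arr_on_track(day: int, actual_arr: float, projected_arr: float) -> bool:
    """Is realized ARR at or above the checkpoint's share of Projected ARR?"""
    return actual_arr >= ARR_TARGETS[day] * projected_arr
```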
Who Reads It
Harlan reviews at Month-1 check-in. Mira logs a retrospective that captures: did the UVP calculation prove accurate? Did Margot’s Projected ARR land within tolerance? Did the Likelihood Coefficient reflect actual friction or was it over/underestimated?
The feedback loop: Universe Health Score data feeds back into the UVP Methodology’s calibration. If the Observatory has consistently overestimated Likelihood Coefficients for a certain candidate profile, that gets corrected. The Portfolio’s data makes each subsequent UVP calculation more accurate.
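The calibration loop can be sketched as a running correction. The update rule below is an assumption — the document specifies that calibration happens, not how — so treat the learning rate and the formula as placeholders:

```python
def recalibrate_likelihood(prior_coefficient: float,
                           realized_ratio: float,
                           learning_rate: float = 0.2) -> float:
    """Nudge the Likelihood Coefficient toward what the portfolio realized.

    realized_ratio = Day-90 measured value / priced UVP value. A ratio below
    1 means the coefficient was optimistic for this candidate profile.
    """
    target = prior_coefficient * realized_ratio
    updated = prior_coefficient + learning_rate * (target - prior_coefficient)
    return max(0.5, min(0.95, updated))  # stay within the Observatory's range
```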
Universe Portfolio Layer — The Accumulating Moat
Sector 137 is not a project shop that runs Workshops. It is building a portfolio of universes — each delivered, each monitored through Month 3, each feeding intelligence back to the Observatory.
The portfolio accumulation is the product’s deepest structural advantage:
| Engagement | What Gets Added to the Portfolio |
|---|---|
| Engagement 1 | Base UVP calibration data. First Likelihood Coefficient accuracy check. First pattern in Mira’s retrospective log. |
| Engagement 3 | Cluster patterns emerge — which domain archetypes convert, which Rolodex depths correlate with high Likelihood Coefficients. |
| Engagement 5 | The Observatory can predict UVP with <15% variance from Day -7 through Day 3 signal alone. Code leverage is compounding. |
| Engagement 10+ | Pattern recognition across industries and domain archetypes is the moat. The crew isn’t just executing — they’re betting on outcomes with calibrated confidence that no first-engagement competitor can match. |
Why this matters commercially: At Engagement 10, the crew doesn’t need to offer a 15% fee percentage. They can offer 20–25% because their Likelihood Coefficient accuracy has proven itself across a portfolio. The candidate pays more because the crew’s record reduces their perceived risk.
The Portfolio as Brand: Each completed universe is a publicly visible proof point. The Record is the portfolio — every universe the crew has delivered, available as evidence. Harlan uses it in every Commission conversation. “Here’s the third engagement that looked like yours. Here’s what shipped. Here’s what the ARR hit at Day 90.”
Pricing Strategy — The Arc
Current (Phase 1): Value-based, 15–18% of UVP. UVP calibration is early. We price conservatively to validate the methodology.
The progression:
| Phase | Engagements | Pricing Mechanism | Expected ASP |
|---|---|---|---|
| Phase 1 (Now) | 1–3 | 15–18% of UVP — conservative, methodology validation | $35k–$55k |
| Phase 2 (Year 2) | 4–6 | 18–22% of UVP — pattern clusters emerging, accuracy improving | $55k–$90k |
| Phase 3 (Year 3+) | 7+ | 20–25% of UVP — Portfolio moat proven, Likelihood Coefficients accurate to <15% variance | $80k–$200k+ |
As code leverage improves and the Observatory’s UVP accuracy strengthens, the percentage can increase without increasing crew cost. The fee grows because the crew’s proven track record makes each engagement worth more — not because the crew works harder.
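The arc in the table reduces to a lookup from engagement count to fee band. A sketch, with the phase boundaries taken from the table:

```python
def fee_percentage_band(engagement_number: int) -> tuple[float, float]:
    """Fee band (min, max fraction of UVP) by where the portfolio stands."""
    if engagement_number <= 3:
        return (0.15, 0.18)  # Phase 1: conservative, methodology validation
    if engagement_number <= 6:
        return (0.18, 0.22)  # Phase 2: pattern clusters emerging
    return (0.20, 0.25)      # Phase 3: portfolio moat proven
```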
The moat articulation: Traditional agencies charge for time. We charge for value delivered, measured before we start. The first agency to do this well owns the category. See constitution for The Execution Moat.
The Economic Thesis — Why This Model Works
Sector 137’s core economic thesis:
- The Product scales horizontally — Infinite users, fixed crew, code leverage
- The Workshop scales through mastery — Same crew, better efficiency, higher margins with each engagement
- Together they form a moat — Workshop data improves the Product. Product features enable more sophisticated Workshops.
This is not SaaS + Services. This is one system with two commercial expressions:
- Product: Efficient, scalable, serving the 80% of humans who fit the model
- Workshop: High-touch, high-margin, serving the 20% who need bespoke application
The Workshop proves the Product’s architecture works at scale. The Product data informs Workshop strategy. Neither works without the other.
Risk Transfer — The Economic Safety
The guarantee transfers risk from client to crew. But the crew’s risk is bounded and manageable.
| Risk | Bounded By | Magnitude |
|---|---|---|
| Timeline risk | Candidate filter + opinionated stack | Low if candidate is real |
| Technical risk | Proven architecture + Kael’s track record | Near-zero for in-scope work |
| Scope risk | Sal’s intake shaping + circuit breaker | Controlled by process |
| Market risk | Margot’s intel + customer signal validation | Medium, but visible by Day 7 |
The guarantee isn’t infinite risk. It’s bounded risk managed through process.
The Product Connection — Why Workshops Make the Product Better
Every Workshop teaches the crew something the Product should do differently:
Engagement 1:
- “Design system needs better form component flexibility” → Product roadmap item
- “Deployment pipeline needs staging environment option” → Product feature
- “Client repeatedly asked for bulk operations” → Product capability gap
Engagement 2:
- “The same component needed adaptation. Let’s build abstraction.” → Product infrastructure improvement
- “This is the third Workshop with this pattern. Let’s formalize it.” → Product feature
By Engagement 5:
- The Product has absorbed the best practices from five real engagements
- The Product’s architecture is battle-tested
- The Product’s roadmap is validated by real commercial use
- The Workshop is faster because the Product’s capabilities have matured
The Workshop is the Product’s most rigorous QA environment.
Status: Active
Workshop economics are real. The unit economics work. The margin model is sound.
The guarantee is mathematically safe because Sal’s operational discipline makes it so.
The pricing is strategically positioned to attract the right clients while preserving crew sustainability.
The data moat is the real profit center — every engagement produces learnings that compound.
Together, the Product and Workshop form one economic system with two commercial outlets, both profitable, both feeding each other.
“The Workshop isn’t a side business. It’s how we prove the system works. The economics are clean. The risk is bounded. The learning is infinite. This is how a small crew scales without becoming a big company.” — Sal