Intel Missions — Kano Research
Structured feature prioritization through the Kano Model. The Observatory's most rigorous tool — create studies, collect responses, classify via the 5x5 matrix, compute Better/Worse coefficients, and prioritize with data instead of opinions.
See also: overview.md | crew/margot.md | crew/harlan.md | unit-economics.md
“Gut feel is not a system. The Kano Model is a system. I know which one I trust.” — Sal
| Field | Value |
|---|---|
| Owner | Margot |
| MCP | @sector137/mcp-observatory |
| Slash | /observatory |
| CLI | sector137 observatory intel |
| SDK | sdk.observatory().listMissions() |
| Status | active |
TL;DR
- Intel Missions are structured Kano Model studies that classify features by how much users actually care about them
- The process: Create Study, Collect Responses, Classify via 5x5 Matrix, Analyze Coefficients, Prioritize
- Six Kano categories: Must-be, One-dimensional, Attractive, Indifferent, Reverse, Questionable
- Better/Worse coefficients plot features on a scatter chart — satisfaction potential vs. dissatisfaction risk
- Missions feed the UVP Methodology for Workshop pricing and the PMF Validation Gate for deployment readiness
- Everything the Observatory produces flows into the Library as permanent record
What Intel Missions Are
Intel Missions are the Observatory’s most rigorous capability. They answer one question with mathematical precision: “Of all the things we could build, which ones do humans actually care about?”
The Kano Model classifies features based on two dimensions — how users feel when a feature IS present (functional) versus when it is NOT present (dysfunctional). The gap between those two responses reveals the feature’s true nature. Some features are table stakes. Some are delighters. Some are things nobody asked for and nobody wants. The model tells you which is which before you burn cycles building.
Status: LIVE. Full implementation: kanoStudies, kanoFeatures, kanoResponses, kanoResults tables. 5x5 classification matrix (evaluator.ts), Better/Worse coefficient computation (processor.ts), quadrant analysis. Survey collection via shareable link (/survey/[token]). UI: intel-missions.tsx (list), kano-study.tsx (detail + analysis with scatter chart).
How the Process Works
1. Create a Study
Define the features to evaluate. Each feature gets a name, a description, and an optional link to a roadmap item (issue). The study is a container — a research mission with a clear scope.
Margot typically initiates studies when the crew faces a prioritization question that opinion alone cannot settle. Harlan may request one when customer signal is contradictory. Wren may request one when two design approaches are equally defensible and user preference needs to break the tie.
2. Collect Responses
Share a survey link (/survey/[token]). Each feature gets two questions:
- Functional: “How would you feel if this feature IS present?”
- Dysfunctional: “How would you feel if this feature is NOT present?”
Responses use a 5-point Likert scale: Like, Must-be, Neutral, Tolerate, Dislike.
Real humans answer these via the shared link. Synthetic personas can also be run through the survey (see personas.md) — not a replacement for real signal, but useful for hypothesis testing before investing in distribution.
3. Classify
The 5x5 Kano Evaluation Matrix maps each (functional, dysfunctional) response pair to one of six categories. This is pure mathematics — no interpretation, no judgment calls. The matrix classifies.
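As a sketch of how such a lookup works — assuming the standard Kano matrix layout; the real table lives in lib/kano/evaluator.ts and this is not a copy of it:

```typescript
// One answer on the 5-point Likert scale used in step 2.
type Likert = "like" | "mustBe" | "neutral" | "tolerate" | "dislike";
// The six Kano category codes from the table below.
type KanoCategory = "A" | "O" | "M" | "I" | "R" | "Q";

const SCALE: Likert[] = ["like", "mustBe", "neutral", "tolerate", "dislike"];

// Rows = functional answer, columns = dysfunctional answer.
// This is the conventional Kano evaluation matrix, not necessarily
// byte-for-byte what evaluator.ts ships.
const MATRIX: KanoCategory[][] = [
  // dysfunctional:  like  mustBe neutral tolerate dislike
  /* like     */ ["Q",   "A",   "A",    "A",     "O"],
  /* mustBe   */ ["R",   "I",   "I",    "I",     "M"],
  /* neutral  */ ["R",   "I",   "I",    "I",     "M"],
  /* tolerate */ ["R",   "I",   "I",    "I",     "M"],
  /* dislike  */ ["R",   "R",   "R",    "R",     "Q"],
];

export function classify(functional: Likert, dysfunctional: Likert): KanoCategory {
  return MATRIX[SCALE.indexOf(functional)][SCALE.indexOf(dysfunctional)];
}

// classify("like", "dislike") -> "O" (one-dimensional)
// classify("like", "neutral") -> "A" (attractive)
```

A respondent who would like a feature present and dislike it absent lands on One-dimensional; a respondent who likes it present but is neutral about its absence lands on Attractive — exactly the "gap between the two responses" described above.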
4. Analyze
Compute Better/Worse coefficients for each feature:
- Better coefficient: (A + O) / (A + O + M + I) — satisfaction potential (range: 0 to 1). Higher means more upside when present.
- Worse coefficient: -(O + M) / (A + O + M + I) — dissatisfaction risk (range: -1 to 0). Lower means more pain when absent.
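A minimal sketch of those two formulas, computed from per-category response counts (processor.ts is the source of truth; the example counts are invented):

```typescript
// Counts of responses per Kano category for one feature.
// R and Q responses are excluded from both coefficients.
interface Counts { A: number; O: number; M: number; I: number }

export function betterWorse({ A, O, M, I }: Counts): { better: number; worse: number } {
  const denom = A + O + M + I;
  if (denom === 0) return { better: 0, worse: 0 }; // no classifiable responses
  return {
    better: (A + O) / denom,  // 0..1, satisfaction potential
    worse: -(O + M) / denom,  // -1..0, dissatisfaction risk
  };
}

// Hypothetical study: 12 Attractive, 8 One-dimensional, 15 Must-be, 5 Indifferent
// better = (12 + 8) / 40 = 0.5
// worse  = -(8 + 15) / 40 = -0.575
```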
Plot features on a scatter chart. The chart is a map of strategic reality — you can see at a glance which features are worth building, which are mandatory, and which are noise.
5. Prioritize
Features land in quadrants based on their coefficients:
- Must-be quadrant (Worse near -1, low Better) — Build these or lose users. Non-negotiable.
- Performance quadrant (Worse near -1, high Better) — Linear value. More effort = more satisfaction.
- Attractive quadrant (Worse near 0, high Better) — Delighters. The features that make people love you.
- Indifferent quadrant (Worse near 0, low Better) — Users don’t care. Don’t waste cycles here.
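The quadrant assignment above can be sketched as a small function. The 0.5 cutoff on each axis is a common Kano convention, not a documented Observatory threshold — treat it as an assumption:

```typescript
type Quadrant = "must-be" | "performance" | "attractive" | "indifferent";

// better is in [0, 1]; worse is in [-1, 0], so we compare its magnitude.
export function quadrant(better: number, worse: number): Quadrant {
  const highBetter = better >= 0.5;              // strong satisfaction potential
  const highWorse = Math.abs(worse) >= 0.5;      // strong dissatisfaction risk
  if (highWorse && highBetter) return "performance";
  if (highWorse) return "must-be";
  if (highBetter) return "attractive";
  return "indifferent";
}

// quadrant(0.2, -0.8) -> "must-be"     (build it or lose users)
// quadrant(0.7, -0.1) -> "attractive"  (delighter)
```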
Data, not opinions. That’s the entire philosophy.
Kano Categories
| Category | Code | Meaning | Sal’s Take |
|---|---|---|---|
| Must-be | M | Expected. Absence = dissatisfaction, presence = neutral | “Table stakes. Build it or lose them.” |
| One-dimensional | O | Linear. More = better, less = worse | “The honest ones. Effort in, satisfaction out.” |
| Attractive | A | Delighters. Surprise value, no penalty for absence | “The ones that make people love you.” |
| Indifferent | I | Users don’t care | “Don’t waste cycles here.” |
| Reverse | R | Users actively don’t want this | “Stop. You’re making it worse.” |
| Questionable | Q | Contradictory response — may indicate confusion | “Bad data. Re-ask or discard.” |
Better/Worse Coefficients
The coefficients are the quantitative output that turns qualitative survey responses into strategic signal.
Better coefficient measures satisfaction potential — how much satisfaction a feature can generate when present. Ranges from 0 (no satisfaction impact) to 1 (maximum satisfaction impact). Features with high Better scores are candidates for premium positioning and differentiation.
Worse coefficient measures dissatisfaction risk — how much dissatisfaction results when the feature is absent. Ranges from -1 (maximum dissatisfaction) to 0 (no dissatisfaction impact). Features with low Worse scores (close to -1) are mandatory — missing them means losing users.
The scatter chart maps Better (x-axis) against Worse (y-axis) for every feature in the study. Each feature becomes a point. The quadrant it falls in determines its strategic classification. This is not subjective — it’s computed from respondent data.
UVP Methodology
For Workshop engagements, Intel Missions feed directly into the Universe Value Potential (UVP) pricing calculation. Kano results provide one of four signal streams that determine the Workshop fee.
Signal Stream: Kano Results (Feature Value Weighting)
Which features in the candidate’s domain are Must-be (absence = churn) versus Attractive (presence = premium pricing)? Kano classifies each proposed feature into a pricing tier:
- Must-be features set the floor — clients won’t pay less than what’s required to function.
- Attractive features are the premium — they justify price expansion beyond the floor.
- The ratio of Attractive to Must-be features determines the Pricing Tier Multiplier (1.0 = commodity, 1.5 = differentiated, 2.0+ = category-defining).
The UVP Formula
UVP = Projected ARR x Valuation Multiple x Likelihood Coefficient
Workshop Fee = 15-25% of UVP

Where:
- Projected ARR — Margot’s Year 1 realistic capture estimate
- Valuation Multiple — SaaS multiple for this market tier (3-5x ARR, from Margot’s comparable set)
- Likelihood Coefficient — Probability of achieving Projected ARR (0.5-0.95, from Harlan + persona signals)
- Workshop Fee Range — 15% (conservative, first engagement) to 25% (proven pattern, high-conviction)
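A worked instance of the formula above, with entirely hypothetical inputs (none of these numbers come from a real engagement):

```typescript
// UVP = Projected ARR x Valuation Multiple x Likelihood Coefficient
function uvp(projectedArr: number, valuationMultiple: number, likelihood: number): number {
  return projectedArr * valuationMultiple * likelihood;
}

// Workshop Fee = 15-25% of UVP; feePct must fall in [0.15, 0.25]
function workshopFee(uvpValue: number, feePct: number): number {
  return uvpValue * feePct;
}

// Hypothetical: $400k projected ARR, 4x multiple, 0.7 likelihood,
// conservative 15% fee for a first engagement.
const value = uvp(400_000, 4, 0.7);   // 1,120,000
const fee = workshopFee(value, 0.15); // 168,000
```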
The other three signal streams — Harlan’s willingness-to-pay, Margot’s market reading, and persona price sensitivity — are documented in overview.md and unit-economics.md.
PMF Validation Gate
Before Days 19-21 Deployment in a Workshop engagement, the Observatory runs a final validation pass. Intel Mission data is one of the inputs.
What the Gate Checks
- Customer conversation confirmation: Have Harlan’s Rolodex conversations (Days 8-14) produced concrete interest? “Interesting” is not a green light. “When can I start using this?” is.
- Retention signal emergence: For any prototype or beta shared with customers — did they come back? Did they ask follow-ups? Did they refer anyone?
- Outcome alignment: Does the software being built match the outcome Margot defined in Commission? Scope drift is the primary PMF failure mode in compressed timelines.
Gate Outcomes
| Color | Signal | Action |
|---|---|---|
| Green | Buying intent confirmed, retention signal present, outcome intact | Deployment proceeds on schedule |
| Yellow | Mixed signals — some interest but no urgency, or minor scope drift | Deployment proceeds; Harlan schedules a Week-4 check-in instead of Month-1 |
| Red | No customer interest materialized, or outcome drifted beyond recognition | Sal escalates. Day 18 scope conversation with candidate. Rare — should be visible by Day 10. |
The PMF Validation Gate does not change what ships on Day 21. It changes how Harlan frames the handoff and what the Month-1 check-in targets.
MCP Tools
| Tool | Owner | Purpose |
|---|---|---|
| create_issue | Margot | Create a Kano study (Intel Mission) linked to roadmap items |
| list_issues | Margot | List active and completed Intel Missions |
| get_issue | Margot | Fetch a single mission with classification results |
| run_persona_survey | Margot | Run a synthetic Kano survey through a persona |
Kano-specific server operations (5x5 classification, coefficient computation, quadrant analysis) run within the core application engine, not as individual MCP tools. The tools above provide the interface for creating, inspecting, and feeding studies.
Implementation
| Package | Description |
|---|---|
| apps/app | Core system — Kano engine (lib/kano/evaluator.ts, lib/kano/processor.ts, lib/kano/types.ts), API routes (routes/api/v1/kano.ts), client pages (intel-missions.tsx, kano-study.tsx) |
| packages/sdk | TypeScript SDK — Kano study resources |
| packages/mcp | MCP server — mission tools |
| packages/research | Kano survey interceptor (KanoProvider, useKanoIntercept) — in-app research at journey touchpoints |