Transforming the Anti-Craft Machine - Part 1
The implementation guide: assess and decide.
You read the diagnostic. You checked boxes. You’re convinced your org is hostile to craft. Now what?
Here’s the cost of staying put: you’re burning $2–4 million annually shipping features less than 10% of customers use. Your retention trails craft-focused competitors by 15–20 points. Your best engineers leave every nine months because they can’t do their best work. Meanwhile, someone who values craft is taking share. Sometimes it’s a startup building something that works better. Sometimes it’s the incumbent in your space who never stopped caring. Either way, you’re losing ground.
The window closes. Every quarter you wait, technical debt compounds, good people leave, and the gap widens.
Now the bad news: you can’t incrementally improve your way to craft. You can’t “align incentives a little bit” or “try shared ownership on one project.” The changes required are structural and political. They threaten people’s positions. They force uncomfortable conversations about who owns what.
If you’re looking for permission to not change, here it is:
building for craft is hard, most orgs can’t do it, and you’re probably better off finding a company that already values it than trying to transform yours.
Still here? Good. Here’s how you actually do it.
Start With the Diagnostic That Matters
Before you change anything, you need to know exactly where you are. Not vibes. Numbers.
Pull the data on your last 10 shipped features
What percentage of customers actually use them? (Active users, not just “has access”)
Tool: Product analytics dashboard (Amplitude, Mixpanel, PostHog, or your own)
Query: Customers who performed the feature’s core action ≥1x in 30 days / Total customers with access
Benchmark: <20% adoption = crisis territory; 20–40% = mediocre; >60% = you might already value craft
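If your analytics tool can’t answer this directly, the calculation is simple enough to run on a raw event export. Here’s a minimal sketch in pandas, assuming an export with customer_id, event, and timestamp columns plus a list of customers with access; the file names and the core-action event are placeholders, not from any specific tool:

```python
import pandas as pd

events = pd.read_csv("events.csv", parse_dates=["timestamp"])
access = pd.read_csv("feature_access.csv")   # one row per customer with access to the feature

CORE_ACTION = "integration_run_completed"    # placeholder for the feature's core action
window_start = events["timestamp"].max() - pd.Timedelta(days=30)

# Customers who performed the core action at least once in the last 30 days
active_customers = events.loc[
    (events["event"] == CORE_ACTION) & (events["timestamp"] >= window_start),
    "customer_id",
].nunique()

adoption = active_customers / access["customer_id"].nunique()
print(f"Feature adoption: {adoption:.1%}")   # <20% crisis, 20-40% mediocre, >60% strong
```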
How long did it take customers to get value? (Time to first meaningful action)
Tool: Event tracking from activation to first core action
Calculation: Median days from first login to completing feature’s primary job
Benchmark: <7 days = good; 7–21 days = friction; >21 days = broken
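Same export, different question. A sketch that takes the median gap between each customer’s first login and their first completion of the feature’s primary job; the event names are again placeholders:

```python
import pandas as pd

events = pd.read_csv("events.csv", parse_dates=["timestamp"])

# First login and first completion of the feature's primary job, per customer
first_login = events.loc[events["event"] == "first_login"].groupby("customer_id")["timestamp"].min()
first_value = events.loc[events["event"] == "integration_run_completed"].groupby("customer_id")["timestamp"].min()

# Series subtraction aligns on customer_id; customers missing either event drop out
days_to_value = (first_value - first_login).dropna().dt.days
print(f"Median time to value: {days_to_value.median():.0f} days")  # <7 good, 7-21 friction, >21 broken
```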
What’s the satisfaction score for each feature?
(Single Ease Question on a 7-point scale, asked right after they complete the task or a similar moment of use, not overall NPS or CSAT)
Tool: In-app microsurvey triggered after feature completion
Question: “How easy was it to complete this task?” (1 = Very Difficult, 7 = Very Easy)
Benchmark: <5.0 = frustrating; 5.0–6.0 = acceptable; >6.0 = delightful
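Most survey tools will report the average for you. If all you have is a raw export of responses, the aggregation is a one-liner; the column names below are assumed, not from any particular tool:

```python
import pandas as pd

responses = pd.read_csv("seq_responses.csv")  # assumed columns: feature, customer_id, seq_score (1-7)

summary = responses.groupby("feature")["seq_score"].agg(["mean", "count"])
print(summary.sort_values("mean"))            # <5.0 frustrating, 5.0-6.0 acceptable, >6.0 delightful
```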
Did retention improve for customers who used it versus customers who didn’t?
Tool: Cohort analysis in retention dashboard
Calculation: 90-day retention rate for users who adopted the feature vs. baseline
Benchmark: No lift = wasted effort; +5–10 points = marginal; >10 points = meaningful
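If your retention dashboard can’t split cohorts by feature adoption, a rough version of the comparison looks like this, assuming a per-customer export with signup date, churn date, and an adoption flag (all names illustrative):

```python
import pandas as pd

# Assumed export: one row per customer with signup_date, churn_date (blank if retained),
# and adopted_feature (1 if they used the feature, 0 if not)
customers = pd.read_csv("customers.csv", parse_dates=["signup_date", "churn_date"])

# Retained at 90 days = no churn date, or churned after day 90
horizon = customers["signup_date"] + pd.Timedelta(days=90)
customers["retained_90d"] = customers["churn_date"].isna() | (customers["churn_date"] > horizon)

adopters = customers[customers["adopted_feature"] == 1]["retained_90d"].mean()
others = customers[customers["adopted_feature"] == 0]["retained_90d"].mean()
lift = (adopters - others) * 100

# Correlation, not proof of causation, but no lift at all is a strong signal of wasted effort
print(f"90-day retention lift for adopters: {lift:+.1f} points")
```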
If you can’t answer these questions, that’s your first problem. You’re not measuring customer outcomes. Start there.
Understand the Motivations
Then map those features back to why they were built. How many were built because:
Sales needed it to close a deal
A competitor launched it
An executive thought it was a good idea
You had actual evidence customers needed it and would use it
Be honest. The ratio will be brutal.
Now look at your team’s current metrics. What is product actually measured on? What does engineering get rewarded for? What gets design promoted?
Write it down. All of it.
Create a document titled “Current State - [Date]”:
Last 10 features: adoption %, time to value, SEQ scores, retention lift
Build rationale: sales-driven vs. competitor-driven vs. exec-driven vs. evidence-driven (count each)
Current metrics: what actually drives comp, promotions, performance reviews
This is your before state. You’ll need it when people claim nothing needs to change.
You’ll also need it six months from now when your pilot team shows 58% adoption versus the historical 12%, and someone says “we’ve always had good adoption.”
Archive it. Bookmark it. When the organizational antibodies attack, this is your immune response.
What to Do With All the Dead Features
You just documented 10 features with 8–15% adoption. They’re taking up space in your product, confusing customers, and burning engineering time on maintenance. Now comes the uncomfortable part: what do you do with them?
You have two choices for each feature. Only two.
Choice 1: Unship it
The feature has low adoption because it doesn’t solve a real problem. Maybe it was built to check a box on an RFP. Maybe it was a competitor response. Maybe an executive thought it sounded good. Either way, customers don’t use it, which means it’s providing zero value.
Unship it. Remove it from the product. Yes, this will cause fights. Sales will say “but we sold it to three customers.” Those three customers aren’t using it. Check the data.
The cost of keeping dead features:
Engineering maintains code nobody uses
QA tests flows nobody runs
Docs and support cover features nobody adopts
New customers get confused by clutter
Your best people work on maintenance instead of impact
Calculate it: annual maintenance cost per feature × number of low-adoption features. That’s millions of dollars maintaining things customers don’t want.
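A back-of-the-envelope version of that math; every figure below is a placeholder to be swapped for your own audit numbers:

```python
# Back-of-the-envelope carrying cost of dead features.
# Every number below is a placeholder; substitute figures from your own audit.
dead_features = 10                 # low-adoption features you just documented
fte_fraction_per_feature = 0.15    # share of one engineer per feature for bugs, QA, docs, support
loaded_cost_per_fte = 250_000      # fully loaded annual cost of one engineer, USD

annual_carrying_cost = dead_features * fte_fraction_per_feature * loaded_cost_per_fte
print(f"Annual carrying cost of dead features: ${annual_carrying_cost:,.0f}")  # $375,000 with these placeholders
```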
Choice 2: Commit to solving the actual problem
Maybe there’s a real customer problem buried under the failed feature. The integration wizard has 12% adoption not because customers don’t need integrations, but because the wizard doesn’t actually solve their problem.
If you choose this path, you’re committing to:
Talk to customers who tried the feature and stopped using it
Talk to customers who never tried it
Understand the actual problem (not the feature request)
Reconsider unshipping it, now that you understand more about it
Build a solution that drives 50%+ adoption
Measure whether it improves retention
This isn’t “make some tweaks.” This is “treat it like a new problem and build the right solution.” Same process as the pilot team: problem statement, customer research, prototyping, measuring outcomes.
If you’re not willing to commit to that level of work, unship it. Half-measures waste more time. And if you do unship a feature, celebrate it internally as much as you would a release, arguably more.
What you can’t do: leave them as-is
The worst option is keeping low-adoption features in the product while claiming you’ll “get to them later.” You won’t. They’ll sit there for years, accumulating technical debt, confusing customers, and burning maintenance time.
This is your forcing function. Leadership either commits to craft (unship the junk, rebuild what matters) or reveals they’re not serious (“we can’t unship anything, customers might need it someday”).
The decision framework
For each low-adoption feature, ask:
Is there a real customer problem here?
Talk to 5–10 customers. If 7+ say “yes, this is a top-3 problem,” there’s something real. If they shrug or struggle to articulate the problem, there isn’t.
Did the feature fail because of execution or because the problem doesn’t exist?
Execution failure: customers tried it and couldn’t figure it out. Problem failure: customers never tried it because they don’t have this problem.
If we rebuild this properly, will it drive retention?
Look at customers who have this problem versus those who don’t. Is there a retention difference? If no retention signal, the problem isn’t important enough to solve.
Do we have capacity to rebuild it in the next two quarters?
If your pilot team is focused on a different problem, you don’t. Be honest about capacity. Saying “we’ll get to it” is the same as saying “we won’t do it.”
If the answer to all four is yes, commit to rebuilding it properly. Assign it to the pilot team or an expansion team. Measure outcomes.
If the answer to any is no, unship it.
The RFP checkbox reality
You’re right to suspect most low-adoption features were built to check boxes on RFPs. Sales needed them to close deals. You built them. The deals closed. Then customers never used them because they really didn’t need them. They just thought they did.
Those customers would have closed anyway, or they would have churned after six months when they realized the product doesn’t actually work well.
RFP checkboxes are not a moat. Retention is a moat.
Features customers love and use daily are a moat. The checkbox features are costing you retention because they clutter the product and steal time from building things that matter.
When sales pushes back on unshipping, ask them: “What’s the renewal rate for customers who use this feature?” If it’s not higher than baseline, the feature isn’t helping sales. It’s hurting the company.
How to actually unship something
Week 1: Announce deprecation. Email affected customers (the ones with access, not the ones actually using it—there probably aren’t many). “We’re removing [feature] on [date]. Based on usage data, this doesn’t appear to be core to your workflow. If you are using it, please contact us.”
Week 2–3: Talk to anyone who responds. Find out what they’re actually doing. Often they’ll say “oh, we never used that” or “we found a better way.”
Week 4: Remove the feature. Update docs. Ship the change.
Week 5+: Watch for complaints. If usage really was 8%, you’ll hear from 8% of customers. Most will accept an explanation. Some will need a workaround. Almost none will churn over it.
If a customer threatens to churn over a feature with 8% adoption, they were looking for a reason to leave anyway. The product probably has bigger problems.
This is your clean-slate moment
Before you start the pilot, clean house. Unship the junk. Commit to rebuilding what matters. Don’t carry three years of failed features into your craft transformation.
If leadership won’t let you unship anything, you’ve learned something important: they’re not actually committed to change. They want the appearance of craft without the hard decisions.
That’s your signal to either escalate to the CEO or start looking for a company that’s serious.
Next: Part 2 covers how to set up your pilot team, protect them from organizational pressure, align incentives, and navigate the politics.

