The Failure Portfolio: Collecting smart misses as compounding assets
Ever had one of those weeks where your big bet falls flat? You pitch a client, get a polite “not this time”, then spend the rest of the afternoon avoiding your to‑do list. By Friday, the week feels wasted. You didn’t get the result, and you didn’t really learn anything. It’s frustrating—and it’s avoidable.
What if your misses were worth something? Not motivational-poster “worth something”, but real, compounding value. That’s the promise of a Failure Portfolio.
What is a Failure Portfolio?
A Failure Portfolio is a deliberate collection of your smart misses—attempts designed to teach you something valuable even if they don’t “work”. Instead of hiding failures in a drawer or deleting them from memory, you log them, tag the assumptions behind them, extract lessons, and apply those lessons to the next attempt.
Over time, the portfolio compounds. You get faster at spotting patterns, better at estimating risks, and more confident taking the right kinds of shots. The win rate improves not because you stop failing, but because you stop failing for the same reasons.
What counts as a “smart miss”?
Not all failures are created equal. Smart misses are:
- Cheap: They cost little money, time, or reputation.
- Reversible: You can roll back if it goes sideways.
- Hypothesis-led: You’re testing a specific assumption (“New pricing at $19 will lift conversion by 10%”).
- Measurable: Success and failure are clear upfront.
- Actionable: Whatever happens, you’ll know what to try next.
“I’ll just wing it and see” isn’t a smart miss. It’s a blind one.
How to build your Failure Portfolio (in one afternoon)
- Define your arenas. Pick 2–3 domains where you want to get better: e.g. sales outreach, public speaking, fitness, hiring, product experiments. Focusing your portfolio helps you recognise patterns faster.
- Set a risk budget. Allocate a fixed slice of time to experiments—say 10–20% of your week. This keeps you bold without being reckless. If you’re slammed, scale the size of experiments down, not the habit.
- Create a simple template. For each attempt, capture:
  - Title and date
  - Arena (e.g. “Sales”, “Fitness”)
  - Hypothesis and why it might work
  - Expected cost and time
  - Success metric and kill criteria
  - Result (win/miss) and evidence
  - Lessons and next move
- Use pre‑mortems and kill criteria. Before you start, ask: “If this fails, what will have gone wrong?” and “When do I stop?” This protects you from sunk-cost spirals and turns failure into information instead of drama.
- Track lesson yield, not just outcomes. Each week, tally: How many attempts? How many lessons? How many lessons reused? The goal is to increase “lessons reused”, because that’s where compounding kicks in.
- Schedule a 20‑minute weekly review. Mark what to repeat, what to retire, and what to remix. If an approach fails twice for the same reason, it’s either a bad tactic or a bad context—change one.
- Keep a “Shadow CV”. List grants not won, roles not landed, experiments that tanked, and the skill earned from each. It’s confronting at first; within a month it becomes your favourite cheat sheet.
Real-world examples
- The founder’s pricing flop that paid off: An Aussie SaaS founder tested a $29 to $19 price drop for new sign-ups. Conversion rose 8%—short of the 10% threshold—so they reverted. Miss? On paper, yes. But the test revealed that price sensitivity was lower than expected. Upsell experiments the following month—bundled add-ons at $9—drove a 14% revenue lift. The first miss “funded” a bigger win.
- The runner who stopped overtraining: A marathoner trialled three fuelling strategies across long runs. Two caused GI issues; one worked. Because he logged conditions and timing, he discovered caffeine tolerance was the actual lever. Result: a better race plan and fewer mid‑run disasters. The “failed” gels were tuition.
- The manager’s meeting redo: A team lead cut weekly stand-ups from 30 to 15 minutes with a new agenda. First week was chaos. Instead of abandoning it, she treated it as a test, added a pre‑read and a “blockers only” rule, and set a hard stop. By week three, the team reclaimed 2.5 hours per person per month.
Make it easy to stick with
The biggest reason people abandon this practice is admin overhead. Keep it light and live where you already work.
For instance, I keep a simple Failure Portfolio in Meloplan—one card per experiment—with tags like “Sales”, “Fitness”, “Hiring”, and statuses like “Planned”, “Running”, “Lesson banked”. Each card has:
- Hypothesis and success metric
- Kill criteria and timebox
- Outcome and evidence (screenshot or notes)
- One sentence: “This changes how I… [next action]”
Because it’s alongside my goals and weekly plan, I actually use it. The key is frictionless capture and regular review—no heroics required.
Metrics that make your misses valuable
- Experiment velocity: How many small bets per fortnight? Aim for a steady cadence over sporadic sprints.
- Lesson reuse rate: Of the lessons you captured, how many shaped your next actions? Target 50%+.
- Cheap-failure ratio: What proportion of failures stayed within your risk budget? Keep it high.
- Time-to-pivot: How quickly do you stop or adjust when a kill criterion hits? Shorter is better.
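All four metrics are simple arithmetic over your log, so they take only a few lines to compute. A minimal sketch, assuming each experiment is recorded with a handful of fields (the field names and sample numbers here are illustrative, not from any real log):

```python
# Hypothetical fortnight of logged experiments. Field names are assumptions:
# lessons captured, lessons later reused, whether the cost stayed within the
# risk budget, and days until a kill criterion triggered a stop or pivot.
experiments = [
    {"lessons": 2, "lessons_reused": 1, "within_budget": True,  "days_to_pivot": 2},
    {"lessons": 1, "lessons_reused": 1, "within_budget": True,  "days_to_pivot": 1},
    {"lessons": 1, "lessons_reused": 0, "within_budget": False, "days_to_pivot": 5},
]

velocity = len(experiments)  # experiment velocity: small bets per fortnight
total_lessons = sum(e["lessons"] for e in experiments)
reuse_rate = sum(e["lessons_reused"] for e in experiments) / total_lessons
cheap_ratio = sum(e["within_budget"] for e in experiments) / len(experiments)
avg_time_to_pivot = sum(e["days_to_pivot"] for e in experiments) / len(experiments)

print(f"Velocity: {velocity} per fortnight")
print(f"Lesson reuse rate: {reuse_rate:.0%}")        # target 50%+
print(f"Cheap-failure ratio: {cheap_ratio:.0%}")     # keep high
print(f"Avg time-to-pivot: {avg_time_to_pivot:.1f} days")  # shorter is better
```

With the sample numbers above, the reuse rate lands exactly on the 50% target, while the one over-budget experiment drags the cheap-failure ratio down to two in three, which is the kind of signal a weekly review should catch.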
These beat vanity metrics. They measure your learning engine.
Common pitfalls (and easy fixes)
- Vague hypotheses: If you can’t fail clearly, you can’t learn clearly. Write down the metric and threshold before you start.
- Too big, too early: If the test failing would wreck your week or your brand, it’s not a test—it’s a gamble. Shrink it until it’s reversible.
- Over‑indexing on novelty: New isn’t always better. Retire tactics that repeatedly fail; double down on the boring winners.
- Skipping the review: The gold is in the debrief. Put a 20‑minute calendar block on repeat and guard it.
Why this approach compounds
- Patterns emerge: You start seeing which levers reliably move your results.
- Reputation grows: Colleagues and clients trust people who test, learn, and improve in public.
- Courage increases: When you have a safe system for misses, you take smarter risks.
- Decisions speed up: Kill criteria and templates remove dithering.
The portfolio doesn’t just make you more resilient—it makes you more effective.
A 7‑day starter plan
- Today: Choose one arena and draft three tiny experiments.
- Tomorrow: Run the smallest one with a 48‑hour timebox.
- Day 3: Debrief in two sentences. Bank one lesson.
- Day 4–5: Run experiment two. Invite a peer to “red team” your hypothesis.
- Day 6: Debrief, reuse one lesson.
- Day 7: Weekly review: keep, kill, or change. Schedule next week’s experiments.
That’s it. You’ve started compounding.
Final thought
You don’t need fewer failures—you need better ones. Put them to work. Build your Failure Portfolio, make your misses cheap and instructive, and let the lessons stack. A year from now, you’ll look back and see a trail of “smart misses” that made the wins inevitable.

