Update Your Priors
Everyone's losing their minds over a recent MIT study showing 95% of AI projects fail.
The headlines are everywhere. The hot takes are flowing. "See? AI doesn't work!" "We told you so!" "What a waste of money!"
But here's my question: What exactly did you think the success rate was going to be?
Because if you're surprised that 95% of projects fail, you fundamentally don't understand innovation. Your reaction to this study is actually a litmus test—it reveals whether you're an innovator or just playing one on LinkedIn.
The Real Story Isn't the 95%
The real story is that you didn't already know innovation projects have abysmal success rates.
Innovation has always been a numbers game. Pharmaceutical R&D? Roughly 93% failure rate—only 6.7% of drugs entering clinical trials ultimately reach FDA approval. Venture capital? The top quartile of VC funds makes the majority of its returns on just 6% of its investments—one or two companies out of thirty. New product launches? Roughly 50% fail to meet their business targets. Even ERP deployments—relatively straightforward by comparison—fail roughly 75% of the time.
This isn't an AI problem. This is an innovation problem. Always has been.
The issue isn't that 95% of AI projects fail. The issue is that people are greenlighting AI projects with an implicit assumption that most such projects succeed. That's like going to the casino expecting to win. It reveals you don't understand the game you're playing.
Enter Dean Keith Simonton's Equal Odds Rule
Here's where it gets interesting. Psychologist Dean Keith Simonton studied the most prolific creators across history—composers, scientists, inventors, artists. His finding, known as the equal odds rule: any given work has roughly the same odds of being a hit, regardless of the creator's experience or track record.
Beethoven couldn't predict which symphonies would be masterpieces. Edison didn't know which experiments would work. Picasso had no idea which paintings would hang in museums.
The implication is profound: even experts can't reliably predict success, so the winning strategy is volume.
The best creators don't get better at picking winners. They get better at producing more attempts. More shots on goal. More experiments in the portfolio.
This is what the MIT study is actually telling us. Not "AI doesn't work," but "you need to completely recalibrate your expectations about how many attempts success requires."
Prior Probabilities Matter
There's a concept in statistics called Bayes' Theorem that helps explain why this matters. Without getting too technical: prior probabilities—your baseline expectations—fundamentally change how you should interpret new information.
Here's a simple example: A medical test is 99.9% accurate. You test positive for a rare disease. Should you panic?
It depends. If the disease affects 1 in a million people, then even with a 99.9% accurate test, you're probably fine. Why? Test a million people and the test will be wrong about 1,000 times. Nearly all of those errors are false positives, because only about one person in that million actually has the disease. So out of roughly 1,001 positive results, only one is real. Your chance of actually being sick is about 0.1%.
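Here's that arithmetic as a minimal Bayes' theorem sketch in Python, using the hypothetical numbers from the example above (not real test statistics):

```python
# Bayes' theorem: P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
# All numbers are the hypothetical ones from the example above.

prior = 1 / 1_000_000        # disease prevalence: 1 in a million
sensitivity = 0.999          # P(positive | disease), from "99.9% accurate"
false_positive_rate = 0.001  # P(positive | healthy)

# Total probability of a positive test: true positives plus false positives
p_positive = sensitivity * prior + false_positive_rate * (1 - prior)

posterior = sensitivity * prior / p_positive
print(f"P(disease | positive) = {posterior:.4%}")  # ~0.0999%, about 1 in 1,000
```

Same 99.9% accurate test, and a positive result still leaves you roughly 99.9% likely to be healthy. That's the prior doing the work.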
Prior probabilities matter. A lot.
What Your Priors Reveal About You
I was running office hours years ago for an accelerator at Stanford. New founder at the table, experienced founder sitting across from him. The new guy is agonizing over the perfect email to send to potential customers. He's going to send it to three people.
The experienced founder stops him: "Hang on. You're only sending it to three people?"
"Well, I want to make sure the message is right first."
"Just so you know," the experienced founder says, "my best campaigns have a 20% open rate."
Pause while that sinks in.
"So... what does that mean?"
"It means unless you send it to at least five people, statistically speaking, nobody will even open your email. Let alone respond."
This is what I mean by updating your priors. If you don't know that 4 out of 5 people won't even open your carefully crafted message, you'll spend three weeks perfecting an email for three people. And you'll wonder why innovation is so hard.
The new founder's prior was "my emails get read." The experienced founder's prior was "most people ignore most emails." Same action (sending an email), wildly different strategic implications.
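The experienced founder's math is worth seeing on paper. A quick sketch, assuming each recipient independently opens with probability 0.2 (a simplification, and remember that 20% was his best campaign, not his average):

```python
# Expected opens and the chance of total silence at a 20% open rate,
# assuming recipients open independently.

open_rate = 0.20

for n in [3, 5, 10, 20]:
    expected_opens = n * open_rate
    p_silence = (1 - open_rate) ** n  # probability nobody opens
    print(f"n={n:2d}: expected opens = {expected_opens:.1f}, "
          f"P(nobody opens) = {p_silence:.0%}")
```

At three recipients, the odds are better than even that nobody opens at all. Even at five, where the expected number of opens finally reaches one, there's still a one-in-three chance of total silence. Silence only becomes unlikely around twenty sends.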
The 95% Isn't the Problem—Your Strategy Is
So here's the framework that follows from all this:
The Equal Odds Playbook
1. Acknowledge the baseline: Innovation projects fail ~95% of the time. AI projects aren't special. This is the prior probability you're working with.
2. Adjust your expectations: If 95% fail, success requires 10-20x more attempts than you think. Not 10-20% more. Ten to twenty times more (see the sketch after this list).
3. Act on volume: Shift from "perfecting one project" to "running a portfolio of experiments." Your competitive advantage is cycle time and learning rate, not initial quality.
4. Amplify what works: When something beats the 95%, double down fast. Most people are so invested in their original idea they miss the signal when something else actually works.
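To see why "10-20x" isn't hyperbole, here's a small sketch of the volume math, assuming independent attempts at the 5% base rate (independence is generous, but it makes the point):

```python
import math

p_success = 0.05  # ~5% per-attempt success rate, per the MIT study

# Attempts needed for at least one success at a given confidence level:
# solve 1 - (1 - p)^n >= confidence for n.
for confidence in [0.50, 0.80, 0.90, 0.95]:
    n = math.ceil(math.log(1 - confidence) / math.log(1 - p_success))
    print(f"{confidence:.0%} chance of at least one success: {n} attempts")
```

One project gives you a 5% shot. Fourteen attempts only gets you to a coin flip; a 90% chance of at least one win takes about 45. That's the gap between one precious initiative and a real portfolio.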
The MIT study isn't telling us AI doesn't work. It's telling us that most people are running one AI project and expecting it to succeed. That's not an AI problem. That's a math problem.
Try This Now: The Portfolio Audit (15 Minutes)
Let's make this concrete. Grab a piece of paper or open a doc. Set a timer for 15 minutes.
Step 1 (5 min): List every AI initiative your team is currently running. Be honest—not "exploring" or "considering," but actually running.
Step 2 (3 min): For each initiative, write down: How many weeks or months has it been running? What are the measurable success criteria?
Step 3 (1 min): Count your list. Fewer than 10 active experiments? You're under-indexed on volume. You're playing the game with the wrong priors.
Step 4 (2 min): Block 2 hours this week to design 5 new rapid experiments you could launch in the next 10 days. Not perfect. Not comprehensive. Just 5 new attempts with clear success metrics.
Step 5 (4 min): Schedule at least 20 emails to A/B/C-test three messages this week (remember the open-rate math from earlier: at a 20% open rate, twenty sends is what it takes to make silence unlikely). Don’t let yourself take more than 4 minutes.
Success metric: By end of next week, you have 5 new experiments running. Not planned. Not socialized. Running.
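If you'd rather run the audit in code than on paper, here's a hypothetical sketch (the experiment names and fields are made up; the 10-experiment threshold is the one from Step 3):

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    name: str
    weeks_running: int
    success_metric: str  # must be measurable, or it doesn't count

# Hypothetical portfolio -- replace with your team's actual initiatives.
portfolio = [
    Experiment("support-ticket triage assistant", 6, "deflection rate > 20%"),
    Experiment("sales-email drafting assistant", 3, "reply rate vs. control"),
]

active = [e for e in portfolio if e.success_metric]
print(f"Active experiments with measurable criteria: {len(active)}")
if len(active) < 10:
    print("Under-indexed on volume. Design more rapid experiments this week.")
```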
The Culture Shift This Requires
Here's what most leaders miss: if 95% of projects fail, you can't punish failure. You have to punish inaction.
The new performance standard isn't "did your project succeed?" It's "how many legitimate attempts did you make?"
This is uncomfortable for a lot of organizations. We're used to rewarding outcomes. But when the prior probability of success is 5%, rewarding only outcomes means you'll accidentally punish the exact behavior (high-volume experimentation) that's most likely to produce breakthroughs.
You need to celebrate attempts and punish hesitation. Otherwise your best people will optimize for one perfect project that takes six months and probably fails, instead of twenty rapid experiments where you learn what actually works.
So What's the Litmus Test?
Your reaction to the MIT study reveals everything:
If you found it alarming: You don't understand innovation. Your implicit baseline is that projects succeed. You probably aren't running enough experiments. You're getting beat by people who know the real odds.
If you shrugged: You get it. You already knew innovation is a numbers game. The headline isn't news—it's confirmation. You're probably already running multiple experiments. You understand that success comes from intelligent volume, not perfect planning.
The Real A-Ha
The real insight from the MIT study isn't "95% of AI projects fail."
The real insight is: You didn't know that?
Update your priors. Then update your strategy.
Because if 95% of projects fail, the winning move isn't to make your one project better. It's to run twenty projects and learn faster than everyone else who's still trying to make their single precious initiative work.
The equal odds rule says even Beethoven couldn't pick his winners in advance. What makes you think you can?
Related:
Consider the Odds — Why prior probabilities should change how you run experiments
Punish Inaction — Shift the conversation about what’s permissible
Stop Measuring AI Usage — Why counting projects is less useful than measuring learning velocity
Self-Disruption Imperative — Take your own job before someone else does (with 20 experiments, not 1)
Your Team Just Quoted 8 Weeks… — A cautionary tale about priors in practice