The Dead Pareto Sketch
For decades, the Pareto Principle has been one of those mental models that people reach for without really examining it. Eighty percent of results from twenty percent of effort. It sounds right. It feels efficient. It tells you to focus on the vital few and not worry about the trivial many. Product teams, startup founders, and engineering managers have built entire prioritisation strategies on it.
It’s not resting. It’s not pining for the fjords. The 80/20 rule is dead, and people keep propping it up on its perch because they haven’t noticed the world changed around it.
The problem is that it’s wrong now. Not wrong in the sense that it was never true, but wrong in the sense that the ratio has shifted so dramatically that the old heuristic is actively misleading. If you’re still thinking in 80/20 terms, you’re making decisions based on a cost structure that no longer exists.
Here’s the updated version: you can now get 95% of your results for about 5% of the effort. The remaining 95% of effort goes on the last 5% of results. The polish, the edge cases, the production-readiness, the things that separate a demo from a product.
And almost nobody has updated their mental model to match.
The cost of “mostly working” has collapsed
The evidence for this isn’t subtle. Y Combinator reported that 25% of startups in its Winter 2025 batch had codebases that were 95% AI-generated. These aren’t weekend experiments. These are venture-backed companies going through the most competitive accelerator in the world, building real products on top of AI-generated foundations. I’ve been through a similar accelerator. They are brutal. Nobody survives one by shipping toys.
The vibe coding movement made this visible at scale. Tools like Cursor, Bolt.new, and Lovable let people go from idea to deployed prototype in hours. One YC-backed founder went from concept to working prototype in three days, work that would have taken weeks with a traditional development process. Base44, a solo-founded startup, grew to 250,000 users and roughly $200K per month in profit within six months before selling to Wix for $80 million. One founder.
Eric Ries told us to build only what we need to learn. AI has made the “build” part so cheap that code becomes genuinely disposable. You can test five ideas in the time it used to take to spec one. The cost of being wrong has collapsed, which means you should be wrong more often and faster.
The point isn’t that these tools are perfect. They aren’t. The point is that getting to “mostly working” is no longer the hard part.
The effort has moved, not disappeared
This is where the conversation usually goes sideways. People see the alarming stats: Veracode found that 45% of AI-generated code introduces security vulnerabilities, and GitClear’s analysis of 211 million lines of code shows duplication quadrupling while refactoring collapses. And they conclude that AI produces bad code.
That misses the point.
These are not failures of the tool. They’re failures of specification.
When a human developer writes code, security and quality feel like natural side effects of the craft. They’re not. They’re the product of years of internalised practices, code reviews, team culture, and hard-won experience. We just don’t notice because the specification is implicit, carried in the developer’s head and enforced by organisational habit.
AI has no implicit specifications. It builds exactly what you ask for. And most people aren’t yet asking for security, testability, maintainability, accessibility, or graceful error handling. They’re asking for “build me a dashboard” and being impressed when they get one.
The 95/5 split applies to the specification itself. Describing what you want built is the easy 5%. Describing all the dimensions of quality the output must satisfy (security, resilience, accessibility, compliance, maintainability) is the remaining 95% of the intellectual work.
The effort hasn’t disappeared. It’s changed shape. It’s moved from writing code to defining what “done” actually means across every dimension that matters.
A new discipline needs to emerge
There is a significant body of practice that needs to develop around working with AI agents. When you work with a human developer, decades of software engineering culture encode shared expectations. Testing happens. Code review catches things. Security isn’t an afterthought because someone on the team has been burned before and won’t let it happen again. None of that transfers automatically to an AI workflow.
The organisations that get this right will be the ones that build repeatable frameworks for specifying all expected output dimensions. Not just “does it work?” but “is it secure? accessible? maintainable? testable? compliant?” That’s the new craft. Not writing code, but writing specifications that leave nothing implicit.
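One way to leave nothing implicit is to make the definition of “done” an explicit, checkable artefact rather than a feeling. The sketch below is purely illustrative: the `DefinitionOfDone` class and the dimension names are assumptions for the sake of the example, not an established framework.

```python
# A hypothetical sketch of a multi-dimensional "definition of done",
# expressed as an explicit checklist that gates shipping. The class name
# and dimensions are illustrative assumptions, not a standard.

from dataclasses import dataclass, field


@dataclass
class DefinitionOfDone:
    """Quality dimensions a deliverable must satisfy before it ships."""
    dimensions: dict = field(default_factory=lambda: {
        "functional": False,     # does it work?
        "secure": False,         # inputs validated, secrets handled
        "tested": False,         # automated tests exist and pass
        "accessible": False,     # meets accessibility guidelines
        "maintainable": False,   # reviewed, documented, no duplication
    })

    def mark(self, dimension: str) -> None:
        # Refuse unknown dimensions: the whole point is that the
        # checklist is closed and explicit, not ad hoc.
        if dimension not in self.dimensions:
            raise KeyError(f"Unknown dimension: {dimension}")
        self.dimensions[dimension] = True

    def ready_to_ship(self) -> bool:
        return all(self.dimensions.values())

    def gaps(self) -> list:
        return [d for d, done in self.dimensions.items() if not done]


dod = DefinitionOfDone()
dod.mark("functional")
print(dod.ready_to_ship())  # False: a working demo alone is not "done"
print(dod.gaps())           # every dimension nobody asked the AI for
```

The design choice worth noticing is that “does it work?” is just one key among five. A team that only ever marks `functional` sees, in the gap list, exactly the dimensions it never handed to the AI agent.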
And this isn’t just an engineering problem. It’s a product problem, a management problem, and an organisational culture problem. If your definition of “done” doesn’t include security, and you hand that incomplete definition to an AI agent, you will ship insecure software. That was always true with human developers too. The difference is that human developers sometimes compensated for your incomplete specification with their own judgement. AI won’t.
So where do you draw the line?
If getting to 95% is nearly free, everything about how you think about product development should change.
Your minimum viable product can be much more ambitious if the build cost approaches zero. But “viable” now depends entirely on how well you’ve specified what viable means: across security, reliability, and user trust, not just features. You should be validating more ideas, faster, and spending less time agonising over whether something is worth building. Build it. Ship it. Find out. In the time it takes to perfect one thing, someone else has tested five.
But you also need to be honest about where perfectionism is killing you. If you’re spending 95% of your effort chasing the last 5% of polish, you need to be very sure that polish is what your market actually rewards. In many cases, it isn’t. Speed to market beats pixel-perfection more often than most product teams want to admit.
The scarce skill is no longer writing code. It’s knowing what to specify. The full, multi-dimensional definition of “done” that includes the things developers used to carry implicitly in their heads. The people who develop rigorous practices around that will dominate. The people who just say “build me an app” will ship fast and break things in ways that actually matter.
The uncomfortable questions
For every product you’re working on right now, ask yourself two things.
First: am I in the 5% of effort that gets me to 95% of results? Or am I in the 95% of effort chasing the last 5%? If the latter, does anyone actually care about that last 5%? Because if they don’t, you’re not polishing your product. You’re hiding from the market.
Second: when I use AI to build, am I specifying all the dimensions that matter? Or am I just asking for features and hoping that security, quality, and resilience happen by magic?
The old Pareto Principle told you where to focus your effort. The new one tells you something harder. The effort has changed shape entirely. Building is becoming free. Knowing what to build, in full, across every dimension, is the new expensive thing.
Ship the 95%. Specify the dimensions. Learn. Repeat.
Just go out and make something.