How to Measure Anything by Douglas Hubbard: A Revolutionary Guide to Quantifying Business Intangibles
Book Info
- Book name: How to Measure Anything: Finding the Value of Intangibles in Business
- Author: Douglas W. Hubbard
- Genre: Business & Economics
- Pages: 416
- Published Year: 2007
- Publisher: John Wiley & Sons
- Language: English
- Awards: Winner of the 2008 Best Business Book Award by the American Society for Competency in Healthcare (ASCH)
Synopsis
In “How to Measure Anything,” Douglas Hubbard challenges the common belief that certain business factors are simply too intangible to quantify. Drawing on decades of consulting experience and real-world applications, Hubbard demonstrates that everything from customer satisfaction to IT security risks can be measured using surprisingly simple methods. Through engaging examples like Enrico Fermi’s famous estimation techniques and practical business cases, the book reveals how to transform vague uncertainties into actionable data. Hubbard introduces readers to powerful tools like confidence intervals, calibration methods, and Monte Carlo simulations, making sophisticated measurement techniques accessible to business professionals. This groundbreaking approach has revolutionized decision-making across industries, proving that with the right methods, nothing is truly immeasurable.
Key Takeaways
- Everything can be measured if you understand what measurement really means—reducing uncertainty about a value, not achieving perfect precision
- The Fermi estimation method breaks down seemingly impossible questions into smaller, manageable components that can be reasonably approximated
- Calibration training significantly improves your ability to make accurate probability estimates and confidence intervals
- Monte Carlo simulations allow you to perform complex calculations using ranges rather than precise numbers, making uncertainty quantifiable
- Simple observations and basic calculations often provide more value than sophisticated but poorly applied measurement systems
My Summary
Why Everything You Think Is Immeasurable Actually Isn’t
I’ll admit it—when I first picked up “How to Measure Anything,” I was skeptical. Like many business professionals, I’d been conditioned to accept that certain things simply couldn’t be quantified. Company culture? Employee morale? The value of a new marketing initiative? These seemed destined to remain in the realm of gut feelings and educated guesses.
Douglas Hubbard’s book completely dismantled that assumption within the first few chapters. What struck me most wasn’t just his confidence that everything can be measured, but the elegant simplicity of his approach. This isn’t a book that drowns you in complex statistical formulas or requires a PhD to understand. Instead, Hubbard makes measurement accessible, practical, and dare I say it—enjoyable.
The core insight that transformed my thinking? Measurement doesn’t mean achieving perfect precision. It simply means reducing uncertainty about a value. If you can move from complete ignorance to having some idea of a range, you’ve successfully measured something. This reframing alone makes the seemingly impossible suddenly achievable.
The Genius of Fermi Estimation
One of the most compelling sections of the book revolves around Enrico Fermi, the Nobel Prize-winning physicist who had an almost supernatural ability to make accurate estimates with minimal information. The story of Fermi estimating the atomic bomb’s yield at the Trinity test by dropping scraps of paper and watching how far the blast wave carried them is the stuff of legend, but what makes it relevant to business professionals is how accessible his method actually is.
Fermi’s famous “piano tuners in Chicago” problem demonstrates this beautifully. At first glance, estimating the number of piano tuners in a city seems absurd. Where would you even start? But Fermi taught his students to break the problem into smaller questions: What’s Chicago’s population? How many households own pianos? How often do pianos need tuning? How many pianos can one tuner service per day?
Suddenly, an impossible question becomes a series of reasonable estimates. And here’s the kicker—when you multiply these approximations together, you often get remarkably close to the actual answer. I’ve used this technique in my own work countless times since reading this book, and it never fails to impress.
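To make the decomposition concrete, here’s a rough Python sketch of the piano-tuner arithmetic. Every input is a round-number assumption of my own, not a figure from the book; what matters is the structure of the estimate, not the exact answer.
```python
# Fermi-style estimate of piano tuners in Chicago.
# Every number below is a round assumption for illustration, not data.

population = 2_500_000              # rough population of Chicago
people_per_household = 2.5
households = population / people_per_household

piano_ownership_rate = 1 / 20       # assume ~1 in 20 households owns a piano
tunings_per_piano_per_year = 1      # assume each piano is tuned about once a year
tunings_needed = households * piano_ownership_rate * tunings_per_piano_per_year

tunings_per_tuner_per_day = 4       # travel plus roughly two hours per job
working_days_per_year = 250
capacity_per_tuner = tunings_per_tuner_per_day * working_days_per_year

print(f"Estimated piano tuners: about {tunings_needed / capacity_per_tuner:.0f}")
```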
The real-world example of Chuck McKay advising the insurance agent in Wichita Falls perfectly illustrates how this applies to business decisions. McKay didn’t need perfect data about the insurance market. By estimating the number of cars, average premiums, commission rates, and existing agencies, he could calculate whether opening a new office made financial sense. The answer? It didn’t. And that simple analysis potentially saved his client from a costly mistake.
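Here’s how that kind of back-of-the-envelope check might look in code. The figures below are hypothetical placeholders rather than McKay’s actual numbers, but the logic is the same: estimate the commission pool, split it among the competition, and compare against operating costs.
```python
# Back-of-the-envelope check: does a new insurance office pencil out?
# All figures are hypothetical placeholders, not the book's numbers.

cars_in_area = 60_000
avg_annual_premium = 900            # dollars per car per year
commission_rate = 0.12              # agency's share of premiums
competing_agencies = 40

commission_pool = cars_in_area * avg_annual_premium * commission_rate
likely_share = commission_pool / (competing_agencies + 1)   # assume an even split

annual_operating_cost = 200_000     # office, staff, marketing
print(f"Likely gross commission: ${likely_share:,.0f} vs cost ${annual_operating_cost:,}")
print("Worth opening?", likely_share > annual_operating_cost)
```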
What I love about this approach is that it democratizes decision-making. You don’t need access to expensive market research or proprietary databases. You need curiosity, basic arithmetic, and the willingness to break big questions into smaller ones. In today’s fast-moving business environment, where decisions often can’t wait for perfect information, this skill is invaluable.
Getting Comfortable with Uncertainty
One of the most practical concepts Hubbard introduces is the confidence interval (CI). Before reading this book, I thought of predictions in binary terms—either I knew something or I didn’t. The idea of expressing uncertainty as a range with an associated confidence level was revelatory.
Here’s how it works: Instead of saying “I think we’ll close 50 deals this quarter” (which you probably don’t believe with 100% certainty anyway), you say “I’m 90% confident we’ll close between 37 and 63 deals this quarter.” This simple shift accomplishes several things. First, it forces you to think more carefully about what you actually know. Second, it provides a more honest representation of uncertainty. Third, it gives you a measurable way to improve your estimation skills over time.
That last point is crucial. Hubbard emphasizes that most people are terrible at estimating their own uncertainty. We’re either overconfident (thinking we know more than we do) or underconfident (being too cautious). The beautiful thing about using confidence intervals is that you can track your accuracy over time. If you consistently say you’re 90% confident and you’re only right 60% of the time, you’re overconfident and need to widen your ranges.
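Tracking this doesn’t require anything fancy. Here’s a minimal sketch using a handful of made-up estimates of my own, just to show the bookkeeping:
```python
# Score your own calibration: log each 90% confidence interval as
# (low, high, actual) once the true value is known, then check the hit rate.
# The sample entries here are invented for illustration.

estimates = [
    (37, 63, 51),      # deals closed this quarter
    (10, 25, 28),      # hours a project actually took
    (2, 6, 4),         # weeks until a feature shipped
    (500, 1500, 920),  # first-week visits to a new post
]

hits = sum(low <= actual <= high for low, high, actual in estimates)
hit_rate = hits / len(estimates)

print(f"Hit rate: {hit_rate:.0%} (a well-calibrated 90% CI hits about 90% of the time)")
if hit_rate < 0.9:
    print("Likely overconfident: widen your ranges.")
```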
I’ve started applying this in my own life, both professionally and personally. When estimating how long a project will take, I give myself a range with a confidence level. When predicting blog traffic for a new post, I use confidence intervals. It feels awkward at first—we’re trained to give definitive answers—but the discipline of thinking in ranges has made me a significantly better estimator.
Calibration: Training Your Estimation Muscle
Perhaps the most actionable insight from the book is that estimation accuracy isn’t an innate talent—it’s a trainable skill. Hubbard walks readers through calibration exercises designed to improve their ability to quantify uncertainty accurately.
The process involves asking yourself questions where you can eventually verify the answer, making your estimate with a confidence interval, and then checking your accuracy. Did things you said had a 90% chance of occurring actually happen about 90% of the time? If not, you need to adjust.
Hubbard recommends treating each bound of your range as a separate question. For a 90% confidence interval, there should be only a 5% chance the true value exceeds your upper bound and a 5% chance it falls below your lower bound. By considering each boundary independently, you’re forced to really examine your assumptions.
One technique that particularly resonated with me is the practice of identifying pros and cons for each estimate. If you’re predicting sales for a new product, what evidence supports your optimistic scenario? What factors might lead to the pessimistic outcome? This balanced approach prevents you from anchoring too heavily on a single perspective.
In my experience running Books4soul.com, I’ve found that regularly practicing calibration has transformed how I make decisions about which books to feature, how much time to invest in different content types, and even which affiliate partnerships to pursue. The improvement isn’t overnight, but it’s steady and measurable—which, given the book’s thesis, is perfectly appropriate.
Beyond Gut Feelings: Quantifying Risk
One of Hubbard’s strongest critiques is directed at how most organizations handle risk assessment. How many times have you seen risks categorized as “high,” “medium,” or “low”? It’s ubiquitous in business, and it’s almost useless.
The problem with these qualitative labels is that they mean different things to different people. My “high risk” might be your “medium risk.” There’s no way to aggregate these assessments meaningfully or to make rational decisions based on them.
Hubbard advocates for expressing risk in specific numerical terms: “There’s a 15% chance of losing $200,000 on this project.” This precision enables much better decision-making. You can compare risks across different initiatives, calculate expected values, and make trade-offs based on actual numbers rather than vague feelings.
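Once risks are stated that way, comparing them becomes simple arithmetic. A small illustration with invented figures:
```python
# Compare risks as probability-of-loss times size-of-loss instead of
# "high/medium/low" labels. The figures are invented for illustration.

risks = {
    "Project A schedule slip":   (0.15, 200_000),  # 15% chance of losing $200,000
    "Vendor B contract failure": (0.40,  50_000),
    "System C security breach":  (0.05, 600_000),
}

for name, (probability, loss) in risks.items():
    print(f"{name}: expected loss ${probability * loss:,.0f}")
```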
I’ll be honest—this is one area where I’ve found implementation challenging. There’s significant organizational inertia around qualitative risk assessments, and pushing for numerical precision can feel uncomfortable. People worry about being held accountable for specific predictions in a way they aren’t for vague labels.
But that discomfort is precisely the point. If you’re not willing to put a number on your risk assessment, you probably haven’t thought it through carefully enough. And if you have thought it through, expressing it numerically provides far more value to decision-makers.
Monte Carlo Simulations Made Simple
When Hubbard got to Monte Carlo simulations, I expected the book to take a technical turn that would lose non-statisticians. Instead, he presents the concept in remarkably accessible terms.
The challenge many organizations face is that they have ranges for various inputs rather than precise numbers. Traditional calculation methods struggle with this. Do you use the optimistic estimate? The pessimistic one? The midpoint? Each approach has problems.
Monte Carlo simulations solve this by running thousands of calculations, each time randomly selecting values from within your specified ranges. The result is a distribution of possible outcomes that shows not just what might happen, but how likely different scenarios are.
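To see the mechanics, here’s a minimal sketch of the idea. The ranges, the uniform distributions, and the simple profit model are all assumptions I’ve made up for illustration; they aren’t from the book:
```python
import random

# Minimal Monte Carlo sketch: draw each uncertain input from a range,
# recompute the outcome thousands of times, and look at the distribution.
# The ranges and the profit model are assumptions for illustration.

TRIALS = 10_000
profits = []
for _ in range(TRIALS):
    units_sold = random.uniform(8_000, 15_000)
    price = random.uniform(18, 25)
    unit_cost = random.uniform(9, 14)
    fixed_costs = random.uniform(60_000, 90_000)
    profits.append(units_sold * (price - unit_cost) - fixed_costs)

profits.sort()
print(f"Median profit: ${profits[TRIALS // 2]:,.0f}")
print(f"Chance of losing money: {sum(p < 0 for p in profits) / TRIALS:.0%}")
```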
While Hubbard acknowledges that implementing Monte Carlo simulations typically requires software, he emphasizes that the underlying concept is straightforward. And in today’s world, tools like Excel and various online calculators have made these simulations accessible to anyone willing to invest a bit of time in learning.
I haven’t personally implemented full Monte Carlo simulations in my work (my business decisions rarely require that level of sophistication), but understanding the concept has changed how I think about uncertainty. When I’m weighing different strategic options, I mentally consider the range of possible outcomes rather than fixating on single-point estimates.
Real-World Applications That Actually Work
What sets “How to Measure Anything” apart from typical business books is its relentless focus on practical application. Hubbard isn’t interested in theoretical elegance; he wants methods that work in the messy real world of business decision-making.
The insurance agent example I mentioned earlier is just one of many case studies throughout the book. Hubbard draws on his extensive consulting experience to show how these techniques have been applied to everything from IT security investments to pharmaceutical research decisions to military logistics.
For my own work in book blogging and content creation, I’ve applied these principles in several ways. When deciding whether to invest in a new content management system, I estimated the time savings (with confidence intervals), the cost, and the probability of various outcomes. The analysis wasn’t perfect, but it was far better than going with my gut.
When evaluating which book genres to focus on, I used Fermi estimation to approximate potential audience sizes, engagement rates, and monetization possibilities. Again, the numbers weren’t precise, but they didn’t need to be. They reduced my uncertainty enough to make a more informed decision.
Even for something as simple as deciding how much time to spend on social media promotion versus creating new content, thinking in terms of measurable outcomes has been transformative. I track results, update my estimates, and continuously calibrate my predictions. It’s not glamorous, but it works.
Where the Book Falls Short
As much as I appreciate “How to Measure Anything,” it’s not without limitations. The book is heavily focused on business and organizational decision-making, which makes sense given Hubbard’s background. However, readers looking for applications in other domains might find the examples somewhat narrow.
Additionally, while Hubbard makes complex concepts accessible, there’s still a learning curve. Some readers might find the statistical concepts intimidating, even with his clear explanations. The book would benefit from even more step-by-step tutorials and perhaps some accompanying online resources or worksheets.
I also noticed that the book sometimes glosses over the organizational and political challenges of implementing these measurement approaches. It’s one thing to learn calibration techniques; it’s quite another to convince your entire management team to adopt them. Hubbard acknowledges these challenges but doesn’t always provide detailed strategies for overcoming them.
Finally, while the book’s core message—that everything can be measured—is powerful, it’s worth noting that Hubbard sometimes pushes this thesis to its limits. There are situations where the cost of measurement genuinely exceeds its value, or where the act of measuring changes what you’re trying to measure in problematic ways. A bit more nuance on these edge cases would strengthen the book.
How This Book Compares to Other Decision-Making Resources
In the crowded field of business decision-making books, “How to Measure Anything” occupies a unique space. It’s more practical than academic texts on statistics but more rigorous than typical business advice books.
Compared to Daniel Kahneman’s “Thinking, Fast and Slow,” which focuses on cognitive biases in decision-making, Hubbard’s book is more prescriptive. While Kahneman helps you understand why you make bad decisions, Hubbard gives you tools to make better ones.
Against “The Signal and the Noise” by Nate Silver, Hubbard’s work is less focused on prediction in complex systems and more concerned with practical business measurement. Silver’s book is broader in scope but less immediately actionable for most business professionals.
For readers interested in similar topics, I’d also recommend “How to Measure Anything in Cybersecurity Risk” (also by Hubbard) for those in IT, and “Superforecasting” by Philip Tetlock for a deeper dive into prediction accuracy.
Questions Worth Pondering
As I finished “How to Measure Anything,” several questions stayed with me. In your own work, what decisions are you making based on gut feeling that could benefit from even crude measurement? What would it take to start tracking your estimation accuracy systematically?
More broadly, what’s the cost of not measuring things we assume are immeasurable? How many bad decisions do organizations make because they’re operating on assumptions rather than data—even imperfect data?
These aren’t rhetorical questions. I genuinely believe that wrestling with them can transform how you approach business challenges. And that’s ultimately what makes this book valuable—not because it provides all the answers, but because it fundamentally changes the questions you ask.
Final Thoughts from My Reading Chair
Reading “How to Measure Anything” reminded me why I love books that challenge fundamental assumptions. Hubbard doesn’t just teach techniques; he changes how you see the world. After finishing this book, I found myself constantly asking “How could I measure that?” in situations where I previously would have shrugged and accepted uncertainty.
Is the book perfect? No. Does it require some mental effort to implement its lessons? Absolutely. But for anyone who makes decisions under uncertainty—which is to say, anyone in business—the investment is well worth it.
I’d love to hear from others who’ve read this book or tried to apply its principles. What measurement challenges have you tackled? Where have you succeeded, and where have you struggled? The beauty of Hubbard’s approach is that it’s continuously improvable, and I’m always curious to learn from others’ experiences.
Whether you’re a startup founder trying to decide on your next product feature, a manager allocating budget across competing priorities, or just someone who wants to make better decisions in your professional life, “How to Measure Anything” offers a practical framework that actually works. It’s earned its place on my shelf of books I return to regularly, and I suspect it will do the same for you.
Further Reading
https://www.goodreads.com/book/show/444653.How_to_Measure_Anything
https://hubbardresearch.com/shop/measure-anything-3-ed-signed-author/
https://www.professionalwargaming.co.uk/HowToMeasureAnythingEd2DouglasWHubbard.pdf
