This article was originally written for the folks over at UserVoice, as the first of my contributions to their Product Management blog…
To be subjective, or to be objective, that is the question, and the best product managers already know the correct answer is “both.”
As product managers, we constantly face situations where the unknowns outnumber the knowns that we can rely on. It’s our job to drive out that uncertainty and ensure that both people and efforts align toward a common objective. Sometimes these discussions flow smoothly, as the goalposts that we set can be quickly and easily agreed upon – things like providing a quality user experience, solving valuable problems for our customers and our market, and introducing competitively differentiating capabilities are hardly controversial.
What does become controversial, however, is how we go about those things as a team, what exactly we should do, and who we should be building those products for. When those discussions come up, it’s inevitable that everyone at the table will have different ideas about what those things are – and, unfortunately, the vast majority of those ideas will not be based on hard data. That is why we, as product managers, need to make it our business to bring data to the table as we represent and advocate for our customers and our market in those conversations; to do so, we must provide stakeholders with the right mix of qualitative insight and quantitative data, one that not only helps win them over to our preferred course of action but also minimizes the risk of later changes of course.
Great product managers use both qualitative and quantitative data to make the right decisions.
What’s the Difference and Why Does It Matter?
When we talk about the difference between qualitative and quantitative data, we’re talking about the difference between subjective, descriptive assessments and objective, concrete observations that we uncover in our research or analytical efforts. For example, one might describe the height of the Empire State Building qualitatively as “really tall” or “a skyscraper,” whereas the quantitative measurement would be “1,454 feet” or “443 meters.” Qualitative data tends to be descriptive in nature, following or leading to a narrative flow; quantitative data is precise, exact, and lends itself to concrete analysis.
Understanding the difference between these forms of data is essential to understanding the strengths and weaknesses of both, and knowing when one should be used over the other. Neither type of data is a “silver bullet” for ensuring that any decision is the “right” one – both can mislead in different ways and for different reasons. Rather, it’s important for product managers to use the right data in the right circumstances for the right reasons.
Strengths and Weaknesses of Qualitative Data Analysis
Qualitative data tends to appeal to our emotions, our feelings, and our egos; it is the kind of data that provokes a response in us. When someone tells us, “Your product sucks,” we tend to have a visceral, emotional reaction – and that’s okay. When thirty people tell us that and we don’t have an emotional reaction, that’s when we’re in trouble. But qualitative results don’t provide a good guide as to what specific actions we should take – that’s an inherent weakness of qualitative research.
While it’s great for discovering problems or opportunities, for exploring potential new directions, and for uncovering patterns or trends that might otherwise go overlooked, qualitative data is subjective and open to interpretation. It’s rare that qualitative research yields a clear, directive result that needs no further work to understand and act upon.
Strengths and Weaknesses of Quantitative Data Analysis
Quantitative data, on the other hand, appeals to our logical side, our rationality, and our need for concrete foundations on which to build action plans. When we look at a cross-section of customer complaints and rank them based on their frequency, the revenue associated with the customers, and any ascertainable financial risks related to those issues, we have clear and actionable data from which to create a plan. What we won’t uncover, however, are those issues that customers have decided are “too small” to complain about, that they’ve entirely given up reporting due to inaction on our part, or whether these issues represent trends in the market as a whole.
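To make that ranking concrete, here is a minimal sketch of what it might look like; the complaint records, field names, and scoring weights are all hypothetical, purely for illustration:

```python
# Hypothetical complaint records; the field names and weights below are
# illustrative, not a recommended scoring formula.
complaints = [
    {"issue": "Slow report export", "frequency": 42, "revenue_at_risk": 120_000, "risk_weight": 0.3},
    {"issue": "Intermittent login errors", "frequency": 17, "revenue_at_risk": 450_000, "risk_weight": 0.8},
    {"issue": "Confusing pricing page", "frequency": 63, "revenue_at_risk": 80_000, "risk_weight": 0.2},
]

def priority_score(complaint):
    # Combine frequency, revenue exposure, and risk into one comparable number.
    return (
        complaint["frequency"] * 1.0
        + (complaint["revenue_at_risk"] / 10_000) * 0.5
        + complaint["risk_weight"] * 10
    )

# Rank the complaints from highest to lowest priority.
for c in sorted(complaints, key=priority_score, reverse=True):
    print(f"{c['issue']}: score {priority_score(c):.1f}")
```

However you weight the inputs, the output is a defensible ordering to plan around, though only for the issues that actually made it into the data.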
The primary limitation of quantitative data is that, by its concrete, objective nature, it is necessarily limited to a specific domain of knowledge, and it is risky to extrapolate from such data outside that domain. It’s rare that quantitative data will produce many “Aha!” moments that lead to new and innovative ideas and concepts – rather, its primary use is in validating hypotheses that come from qualitative assessment, and in measuring the results of our specific and definable efforts.
When to Use Which Data Analysis Method?
One of my favorite quotes on the use of qualitative v. quantitative data in our efforts comes from the second season of House of Cards, spoken by Raymond Tusk:
“Decisions based on emotion aren’t decisions, at all. They’re instincts. Which can be of value. The rational and the irrational complement each other. Individually they’re far less powerful.”
When we focus on either form of data to the exclusion of the other, we’re doing ourselves and our stakeholders a disservice. Qualitative data on its own provides the appeal to emotion that hooks people in and delivers the kind of punch that you need to mobilize them to solve the problem. Quantitative data on its own provides the appeal to logic that triggers the rational response and makes people comfortable acting from a reliable level of certainty. Alone, neither qualitative nor quantitative feedback is sufficient, but together they provide both the hard data and the context necessary to create a compelling need to act.
When to Gather Qualitative Data
For the most part, you will wind up seeking out qualitative data when you’re beginning an exploratory process, where the unknowns far outnumber the knowns of the situation. Qualitative research is about rapidly developing and testing hypotheses about the situation – in some ways it is the more “agile” approach, as it is focused on disproving your hypotheses as quickly as possible. Qualitative research is often inexpensive in both time and money, as asking people their thoughts about an approach costs very little of either. But that’s not always the case; many companies underestimate the value of strong, focused user experience testing, which can get rather expensive but often provides qualitative data that you simply could not get anywhere else.
When to Gather Quantitative Data
You’ll be seeking out quantitative data primarily when you’re already fairly certain about the questions that need to be answered. Quantitative data is about building up the objective, concrete “truth” about a proposition or a problem. It’s about creating the measures that prove whether the qualitative conclusion is actually the right (or wrong) way to go. Quantitative research tends to be much more expensive in both time and money, as it requires a precision that can only come from diligent effort carefully focused on avoiding biases that might skew the results. In modern companies, however, this isn’t always the case – with the growth of “Big Data,” reliable, objective measures have become much more commonplace and significantly less costly to explore and uncover. Instrumenting your products from the beginning, or adding instrumentation as it becomes possible, provides you with opportunities for certainty that are otherwise lost entirely to the aether.
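As a sketch of what that instrumentation can look like, here is a hypothetical event-tracking helper; the function name, event fields, and output destination are assumptions rather than any particular analytics product’s API:

```python
import json
import time

def track_event(user_id, event_name, properties=None):
    """Record a structured product-usage event (hypothetical helper)."""
    event = {
        "ts": time.time(),
        "user_id": user_id,
        "event": event_name,
        "properties": properties or {},
    }
    # In a real product this would be sent to your analytics pipeline or log
    # store; printing the JSON stands in for that here.
    print(json.dumps(event))

# Example: record that a customer actually used the capability we keep debating.
track_event("user-123", "report_exported", {"format": "csv", "row_count": 1200})
```

Once events like these exist, the “more data” your stakeholders ask for is often already sitting in your logs, waiting to be counted.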
Know Your Audience & Use Appropriate Data
Ultimately, you’re probably collecting data for one of two possible reasons: (1) you’re trying to discover something new or confirm a hypothesis that you have about your product, your customers, or your market; or (2) you’re trying to convince someone that a particular course of action is the most attractive among a list of possible options. In the first case, you’ll use qualitative data to narrow your focus, and quantitative data to come to a conclusion – likely a relatively clear one.
If only convincing people of a particular direction were that simple – unfortunately, it’s not.
Different people respond to different forms of data and draw their conclusions from that data in a wide variety of ways. Your CFO, for example, is most likely to be a very objective, rational, and reasoning decision-maker – quantitative data is perfect for them. Your head of Sales, on the other hand, is just as likely to respond to qualitative data and emotional appeals to drive their sales team’s positioning and to energize them to go out and sell, sell, sell. It’s imperative as product managers that we take the time to understand the personality types of each of our key stakeholders, to understand what best convinces them of a particular direction, and to test out those efforts well ahead of any group meeting. More often than not, it’s going to require a combination of qualitative data to appeal to their emotions, and quantitative data to appeal to their logic, in order to give them the “push” they need to lean in the direction you want.
It’s also important to understand how these stakeholders present and defend their own ideas, so that you can predict what they’re likely to say in opposition to an idea they don’t agree with, or how they’re likely to present an option that differs from the one you prefer. It’s extremely powerful to counter a qualitative proposal with quantitative data, or vice versa, because it often re-casts the idea in a way the person hadn’t previously considered. It also provides both them and you with an “out” – the conclusion that further information of whichever type is needed in order to reach a decision. Providing this kind of face-saving out to both parties usually increases goodwill and social capital; rather than simply arguing that someone’s qualitative assessment is “better” than another’s, or that another person’s quantitative numbers are “wrong,” it allows us to challenge ideas without challenging the person expressing them, which is a key tool that great product managers use to build their social capital accounts.
Avoid Analysis Paralysis – Bias Toward Action
The problem with focusing on data of any kind is that there is always more data to be found, and don’t expect that to change anytime soon; there are always more customers to survey, more database reports to mine, more PivotTables to be created. There is literally no end to the amount of data that we can make available to our stakeholders in order to determine a direction we may wish to take, or to rule out a product initiative that we feel or find is of no real value. Therein lies the problem, and a tactic that’s often used to sink ideas that particularly powerful stakeholders don’t really believe in: the “tell me more” request, which almost inevitably leads to analysis paralysis.
As product managers, it’s our job to move things forward, to make things happen, to maintain that forward momentum in our products and our company, and to chase initiatives that continue to give us the impetus to do more, better, faster. Analysis paralysis is the opposite of this: it’s the need to be entirely secure about a decision or idea before committing any resources, to have “all the data” necessary to make a decision, to block forward movement until some ineffable gate is passed. Intentional or not, it’s a stalling tactic, and this is where no amount of data will work for us. Analysis paralysis is gut-check time; it is when you, as a product manager, need to take control of your own [product] destiny and move forward, even incrementally, while tangentially feeding that analysis monster.
How to Overcome Analysis Paralysis
Your CEO’s not sure if something is the right solution? Great – let’s do some light prototyping and see what people think of it. Your Sales Director doesn’t think people will want to buy some new capability? Okay, let’s put out a quick survey or do some phone calls to a short-list of prospects and customers. Your Director of Development is concerned that the scale of some offering is going to tip over the architecture? Then it’s time for a stress test to see where things do tip over. All of these approaches “feed the monster” that could result in analysis paralysis, but do so in a way that actually moves things forward while you’re collecting the data. If all you ever do is look back at what you have done, you’ll never be able to move forward with the things you should be doing. This is an important lesson, and one of the things that separates a great product manager from the rest of the pack. Even when people want more data, do something that gets you more and better data.
The TL;DR, or “How I Learned to Love the Data”
Hard data is both a product manager’s best friend and worst enemy, depending on how early you’re aware it’s been collected and what purpose it’s being used for. Great product managers know how to balance their stakeholders’ needs for safety, security, and reassurance with an agile approach to product development that requires us to fail fast, fail often, and fail cheaply. If you can demonstrate through your hypotheses and your tests that these “failures” add value by ruling out approaches that could have wasted years of time and energy, and instead let you focus on the things that customers actually want, you’ll separate yourself from the pack every single day and establish yourself as a reliable, responsible, and positive contributor to the success of the company. All of that depends on knowing when and how to use both qualitative and quantitative measures where they’re most powerful.