INSIGHT on INSIGHT: Controlling Interpretation Error
Beyond the potential errors in analysis (the process of manipulating the raw data), there are numerous opportunities to introduce error during the phase of interpretation (making sense of the information produced from the analysis).
These range from determining the meaning behind numeric results to the way two individuals can hear very different things in the same interview.
Sometimes, interpretation errors are a secondary consequence of underlying respondent errors, as when a participant answers a different question than the one that was asked.
Interpretation error is dangerous because the interpretation may be the only thing shared (not the underlying insight itself), or because the interpretation comes to be treated as the factual underlying insight.
As you or your team interpret the insights from a project, make sure that at least one person plays the role of the 10th man, keeping the group honest as it imposes its own meaning on the results. Consider how you need to control for each of the following potential sources of error in interpretation.
SOURCES OF INTERPRETATION ERROR:
Misinterpreting the context around the data: Accurate interpretation of data requires a clear understanding of how the data was collected and whom it was collected from. There are times when qualitative data is presented in a manner that makes it appear more quantitative in nature. Likewise, a project may have focused on a certain sub-segment (a certain geographic region, a certain age range, or only buyers of a certain brand). This context may not be readily obvious in the data, and failing to understand it can lead to very misleading conclusions.
Misinterpreting the meaning of the insight: Shortcomings in design can become obvious once initial insights appear. Information that says a product is priced too high doesn’t mean that lowering the price is the solution. It could reflect regional pricing differences, offering a size that is too large (dictating the higher price), or a need for better education or understanding of the value proposition. The initially obvious meaning isn’t always the right one.
Not discriminating between directional and significant differences: It is the differences in data, not the sameness, that we search for. This can create an unintended bias to call out differences that do not really exist. Accurate interpretation requires understanding the margin of error, or confidence level, in the data, and recognizing when a difference is truly significant versus merely directional. This applies both to percentage differences in quantitative results and to the amount of difference implied by qualitative results. For example, it can happen when emphasis is placed on the top three attributes while the fourth and fifth attributes are just a percentage point or two lower. It can also happen when greater meaning or significance is attached to an interview comment than the person who made it would agree with.
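As a rough illustration of the significant-versus-directional distinction, the sketch below applies a standard two-proportion z-test at roughly 95% confidence. The attribute percentages and sample sizes are hypothetical, and real survey work should account for design effects and multiple comparisons; this is only a minimal check of whether a gap clears the margin of error.

```python
import math

def is_significant(p1: float, p2: float, n1: int, n2: int,
                   z_crit: float = 1.96) -> bool:
    """Two-proportion z-test (normal approximation).

    p1, p2: observed proportions (e.g. 0.45 for 45%)
    n1, n2: sample sizes behind each proportion
    Returns True when the gap is statistically significant at ~95%
    confidence, False when it is merely directional.
    """
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    return z > z_crit

# Hypothetical survey: attribute A chosen by 45%, attribute B by 43%,
# with 400 respondents behind each. A two-point gap at this sample size
# is directional, not significant (z is well under 1.96).
print(is_significant(0.45, 0.43, 400, 400))  # False

# A ten-point gap (50% vs. 40%) at the same sample size is significant.
print(is_significant(0.50, 0.40, 400, 400))  # True
```

The point is not the formula itself but the habit: before ranking attributes or calling out a "winner," run the gap through whatever margin-of-error check fits the study design.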
Focusing attention on the wrong results: Attention tends to be drawn to obvious differences across data points. However, this can often distract from more important stories. This includes missing broader themes or trends in the data, failing to realize that other data points are more meaningful, or fixating on a third- or fourth-tier issue or opportunity while the primary one gets ignored.
Creating unclear or misleading visualization: Default Excel graphs are rarely the best presentation tool. At the very least, proper sorting, filtering and color-coding are essential to tell a cohesive story throughout a presentation. Thinking through the relationships that insights represent can reveal approaches that offer a more compelling and clear visual. Venn diagrams are great for showing the interaction between two or three variables. Stacked bar charts can display both the interaction between variables and trends across attributes in a single image. Bubble charts are great at visualizing three independent variables.
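As a minimal sketch of the sorting and color-coding step, the snippet below orders hypothetical attribute results descending and groups near-ties into shared colors before anything is drawn; the attribute names, scores, 5-point tier cutoff, and hex colors are all illustrative, and the output can be handed to any charting tool.

```python
# Hypothetical attribute-importance results from a survey (proportions).
results = {
    "Price": 0.62,
    "Quality": 0.58,
    "Availability": 0.31,
    "Packaging": 0.29,
    "Brand": 0.12,
}

# Sort descending so the bar chart tells one story left to right,
# rather than relying on the default (insertion or alphabetical) order.
ordered = sorted(results.items(), key=lambda kv: kv[1], reverse=True)

def tier_colors(items, gap=0.05):
    """Assign one color per tier so near-ties are not visually overstated.

    Adjacent attributes within `gap` of each other share a color; a drop
    larger than `gap` starts a new tier. The cutoff is illustrative.
    """
    colors, tier = [], 0
    palette = ["#1f77b4", "#ff7f0e", "#2ca02c"]
    for i, (_, value) in enumerate(items):
        if i > 0 and items[i - 1][1] - value > gap:
            tier += 1
        colors.append(palette[tier % len(palette)])
    return colors

print([name for name, _ in ordered])
print(tier_colors(ordered))
```

Here Price and Quality (four points apart) share a color, signaling a statistical near-tie, while the clear drops to the middle and bottom tiers each get their own.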
Making incorrect or unimportant comparisons: The value of insight almost always lies in its relative or comparative nature. One number is only meaningful in the context of related numbers. However, large data sets create the opportunity to call out differences that aren't meaningful or actionable. Be careful not to compare apples to oranges, and make sure the comparisons you draw are relevant to the decisions being made.
Implying causality that doesn't exist: It is human nature to want an explanation for how and why the things we observe change (like why sales are suddenly surging or dipping). It is easy to project our own understanding or assumptions as the answer even when there is no proven connection (assuming sales are dipping due to competitive activity when the real issue is in-stocks). One of the biggest goals of insight work is to uncover causality: understand the cause and you suddenly have more control over the effect. But incorrect conclusions can cause resources to be allocated against the wrong effort.
Assuming the answer to 'why': Qualitative research (like in-depth interviews) is particularly valuable for the opportunity to dive deep into the context or reasons beyond superficial attitudes or actions. The ability to ask participants to explain why they think or do something almost always reveals underlying root causes that are more powerful to focus on than the surface reasons. Quantitative data is far less effective at getting this deep. However, people are no less likely to propose possible answers to the 'why' behind quantitative data, and just as easily begin referring to those answers as facts produced from the data. It is important to search for the why, but not to prematurely accept answers before they can be substantiated.
Overlooking the 'what if': Much data and information is backward-looking. It provides context from the past in hopes of utilizing 'predictive analytics' to determine what the future will look like (the passive) or what can be done to shape a more appealing future (the active). The beauty of insights that engage directly with shoppers is the ability to lay out future scenarios and determine how appealing they are or what needs to be done to make them more appealing. However, poorly designed projects can leave people clueless about the 'what if' (because the questions weren't asked) or drawing conclusions about the 'what if' that aren't based directly on shopper input.
Wanting to believe a certain truth: Clients rarely participate in research without some amount of bias. They likely have pet projects, strategies or future plans that depend on certain things being true. Surprising insight can threaten all of these (and maybe even their reputation or credibility). While clients will most definitely have more category perspective, it is important to consider the possible implications of the conclusions drawn from research and to be careful with how the most dramatic or challenging ones are communicated and shared. A couple of unfavorable or unappealing conclusions can threaten how all the learning from the project is absorbed and applied. At the same time, bending the results to satisfy a client's desire could have greater long-term repercussions if those results lead to recommendations that contradict shopper desires.
In some ways, interpretation requires the greatest amount of artistry: making connections, looking at insights from different angles and incorporating category perspective beyond what is contained in the research. While the quality of interpretation is limited by the quality of the design, execution and analysis, it can also determine the difference between a project that produces mediocre nuggets (factual, perhaps, but largely useless) and one that uncovers exceptionally meaningful and powerful observations that significantly alter how issues or opportunities are approached.