Preliminary Thoughts – Guiding Principles of Market Research

We are privileged to live in interesting times. The old Chinese proverb implies a life of stress, anxiety and frustration…BUT I prefer to think of it as a life of challenges and excitement. The world we live in is in constant flux, with ever-improving technology and global communications. The Market Research industry in which we work is striving to keep up, evolving as we speak, and trying to take advantage of these new ways of connecting with consumers.

There have been many on-line discussions and books written about the death of the Market Research industry as we knew it earlier in the 21st century. Most of this is just hype. The way we market researchers conduct our consumer conversations has been changing since I got into the business in the 1970s – from door-to-door, mail, telephone, central location, in-store shop-alongs, in-person focus groups (without telling respondents that there were folks behind the mirror) and in-home ethnographies to on-line surveys, social media, bulletin boards, crowd sourcing, mobile, on-line qualitative, video ethnographies, big data, etc.

The Guiding Principles of Excellence in Conducting Consumer Studies remain the same.

  • Objective: There must be a clear and concise reason for doing the study. “Let’s do some qualitative so we can listen to our customers” does not cut the mustard. For what purpose? What is the business issue that needs to be answered? Agreeing on the objective is the most important step in the process. Don’t start a study without fully understanding the business issues involved.
  • Actions to be taken based on this market research: This is a critical question because it affects both the methodology selected and the analysis plan. Before any project is undertaken, the researcher and the client need to agree on how the data are to be used and, in the case of quantitative studies, how results are to be evaluated – what quantitative benchmarks or norms must be met to declare success.
    • Success criteria need to be concrete – on the positive end, “meets or exceeds the criteria” or “must not lose to,” with an appropriate confidence level for significance testing – and limited. If too many variables are selected you run the risk of a muddled result. It’s OK to require a positive result (purchase intent, preference, etc.) as a primary criterion and then have secondary backup standards regarding performance on attributes, for example.
    • In the case above, “let’s do some qualitative” is not appropriate if the client needs to evaluate an issue vs. some quantitative benchmark; for example, whether the new product formulation is preferred vs. the current.  However, “let’s do some qualitative” is appropriate if the objective is to look for guidance or direction based on consumer comments; for example, we need to refine communication surrounding the new formulation.
  • Sample: Choosing the wrong sample will lead to misleading results. If your goal is to qualify a new product for launch, you need to quantitatively survey the general population with a sufficiently large sample (a minimum of 250 respondents per idea/product tested, plus control) to be able to forecast sales. You can then augment your sample to see what your current customers are thinking.

Again, you need to go back to your intended actions and think through how best to achieve your objective. In the example above you might think about talking only to target-market or competitive users, but then you would miss the chance to understand whether you are alienating your loyal franchise. If you are trying to target families with children under 18, be sure you have enough of them in your sample (at least 100) to evaluate their results. No need to go overboard – if you are not going to specifically evaluate a particular demographic group, it is not necessary to bump up that subsample. Just try to make your sample as representative as possible given your objectives.
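As a rough sanity check on those sample-size guidelines, the worst-case margin of error for a proportion can be computed directly. The sketch below is a minimal illustration in plain Python, assuming a two-sided 90% confidence level (z = 1.645), the level this piece recommends for significance testing; the function name is my own, not from any library.

```python
import math

def margin_of_error(n, p=0.5, z=1.645):
    """Margin of error for an observed proportion p with sample size n.

    p=0.5 is the worst case (widest interval); z=1.645 corresponds to a
    two-sided 90% confidence level.
    """
    return z * math.sqrt(p * (1 - p) / n)

# What the guidelines above buy you, in percentage points on a 50% response:
print(round(margin_of_error(250) * 100, 1))  # 250 per cell -> about 5.2
print(round(margin_of_error(100) * 100, 1))  # 100 per subgroup -> about 8.2
```

In other words, a 250-respondent cell pins a 50% result down to roughly ±5 points at 90% confidence, while a 100-respondent subgroup is good to roughly ±8 points – tight enough for evaluation, which is why smaller subgroups quickly become unreliable.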

  • Methodology – Choose your methodology based on your criteria of judgment and your target sample and NOT on the latest hot techniques. Do you need hard numerical results? Then select a quantitative methodology. Where are you likely to find your consumers? Are they on-line, mobile, will they chat? If you want to talk to Baby Boomers about their TV habits, don’t use a mobile study. This group is not there yet. But they are on-line and will chat. Think about how you find Hispanic consumers. Do you need a Spanish language version of your questioning instrument? Common sense and knowledge of your intended target go a long way here.
  • Questionnaire – This is the point where advances in technology are a real benefit. These advances help make the study engaging, interactive and fun for your respondents so they will complete all the required tasks. Questions need to be direct and to the point. Only one thought to a question; otherwise results become difficult to evaluate. Keep it short, especially with newer techniques like mobile! Focus questions on the information you need to know based on your objective! Don’t throw in the kitchen sink with questions that are nice to know but not relevant to the key issues; you will run the risk of burning out your respondents and getting yourself into data overload. Remember to take the study yourself before it goes live, not only to ensure that the questionnaire is correct but also to make sure it isn’t too demanding. If you find a questionnaire tedious, so will your respondents. It is then time to rethink and revise.
  • Stimuli – Garbage in, garbage out: stimuli can make or break the study. The stimuli MUST be appropriate for the design of the study AND the methodology. For example, a shelf test done on-line needs to fit on a computer screen, while a similar study done in a central location can use either a real shelf or a life-size mockup; as of now, this type of stimulus wouldn’t fit on a mobile screen. Concepts in the same test need to be consistent in look and feel, yet with enough meaningful differences for consumers to be able to differentiate between them. Limit the number of stimuli to fit the type of study and its length. You don’t want to wear out your respondents before you get the answers you need. And the closer you get to in-market forecasting, the closer the stimuli need to be to launch material to be predictive.
  • Analysis – Analysis will differ based on the nature of the study: qualitative or quantitative, diagnostic or evaluative, custom or syndicated. The aim of the analysis is to answer the key questions laid out at the start of the study.
    • Qualitative studies are used for collecting, refining or giving direction to your ideas and for generating innovations. They are NOT evaluative tools. The goal in qualitative research is to look for consistencies and themes throughout the conversation. While the researcher is generally looking for consensus, “outlier” opinions are crucial to report as well. It is these divergent opinions that sometimes yield the most important and exciting insights.
    • For quantitative studies the analytic plan needs to be decided upon in advance, no matter the data source. This includes the key questions to focus on, agreed-upon benchmarks or normative data to compare results against (if evaluative), which subgroups are of importance (make sure each subgroup sample is robust enough – never fewer than 50, preferably closer to 100+), what multivariate techniques, if any, will be required and what confidence level for significance testing (usually 90%) is required. Remember to check for both positive AND negative response. The number of those rejecting an idea/product/service etc. and their reasons for rejection can tell as much as the number of those accepting. Just as with qualitative data, “naysayers” or “outliers” could provide the insight needed to move forward or the rationale for not proceeding.
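To make the 90% significance-testing convention concrete, here is a minimal sketch of a two-proportion z-test in plain Python. The sample counts are hypothetical, chosen only to match the 250-per-cell guideline above; the function name is illustrative, not from any particular toolkit.

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions.

    x1/n1 and x2/n2 are successes/sample size in each cell. Returns the
    z statistic; |z| > 1.645 is significant at the 90% confidence level.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)          # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical example: top-box purchase intent, test vs. control,
# 250 respondents per cell: 140/250 (56%) vs. 115/250 (46%).
z = two_proportion_z_test(140, 250, 115, 250)
print(f"z = {z:.2f}, significant at 90%: {abs(z) > 1.645}")
```

Deciding the test, the comparison cells and the confidence level before fielding, as the bullet above urges, prevents the temptation to shop for a threshold after the numbers come in.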
    • Analysis is simply good judgment if you bear in mind your objective, understand who you really need to speak to and utilize a methodology appropriate for your target.
  • Report Writing – Report writing has changed over the last few decades. In the old days researchers would deliver volumes of data with paragraphs detailing responses to every single question. End users had to wade through pages to get to results. Today the rule of thumb is “less is more.”
    • Above all else, the data need to tell Management the story of the findings of the study. If a question doesn’t provide desired learning, leave it for the appendix.
      • Ask yourself how the information on the page relates to the business question. Most managers can read the data. They are looking for you to interpret it.
    • Start with the data that most directly proves or disproves your hypothesis. Management wants the answer upfront.
    • The headline of each chart should be one or two sentences – what would you put on a T-shirt or say in your elevator speech? If you can’t find a succinct point, put the chart in the appendix.
    • Use videos and/or pictures to tell a story wherever possible but don’t forget the evidence. Display the important numbers. Managers need to be convinced of what you are telling them.
    • Write the upfront pieces (objective, background, methodology) as if your audience knows nothing about the project. Remember, a year from now a new team may be reviewing the work, and your background material will be key to their understanding. The document you provide is part of the historical record.

My father, a high school accounting teacher, used to say that “Figures lie and liars figure.” And in a real sense that applies to Market Research. Getting the right results all comes down to following the principles stated above:

  • Having a clear and agreed upon objective and action plan
  • Talking to the right people using the right methodology, whether innovative or tried and true, that fits your target and provides the answers you need
  • And finally, carefully considering what data to use in your analysis to report the most compelling story your consumers have to tell.

For example, think of the recent story of a senator who used tweets from his statewide constituents lauding his policy. He MAY be getting the proper feedback for an upcoming statewide election, that is, assuming he lives in a state with a large enough demographic that is on Twitter. However, if he is looking towards a national election, using his Twitter account as a measure of approval is misleading. He has not heard the opinions of people out of state who may not know his Twitter handle. Nor has he heard from those people, no matter where they live, who are not on Twitter, particularly the elderly and the poor. And he hasn’t taken into consideration what the negative tweets are trying to tell him. The resulting actions he takes may be appropriate for his state but would lead him astray with a broader, national audience – a lesson we should all learn and take to heart.

The conclusion: Market Research Best Practices are as critical in 2014 as they were in 1984 and although times have changed and the industry has evolved, guiding principles remain the same.

