Thursday 25 June 2009

ARF: The industry is not served by several competing research quality standards

Back in September 2006, almost three years ago, the Dutch NOPVO initiative was born: an industry-wide study investigating panel effects across all Dutch online panels.
In early June, the ARF's Online Research Quality Council presented detailed findings from a similar US-based research-on-research project on online data quality, called “Foundations of Quality” (FoQ). The FoQ study compiles results from 17 online panel providers, together representing around 75% of the online panel sample available in the United States.
The Background
Increasingly, buyers of market research are asking about panel effects. To what extent do different panel strategies affect the data that is collected, and who are the respondents that participate in such panels? Both studies produced a number of similar and comparable findings; topics include:
  • Effects of multi-panel membership on survey results;
  • Effects of respondent motivations and engagement on survey results; and,
  • Connections between proposed or commonly used metrics and data quality.

Let’s take a look at how these findings bear on some strongly held beliefs about panels, particularly regarding professional respondents:

Professional Respondents

The idea of the professional respondent originates from the assumption that online research data really comes from a very small number of people who respond to online surveys for the money, the points, the rewards; people who have figured out how to game the system. The ARF results show that this is not at all the case.

The picture that emerged from the findings wasn’t what many researchers would have expected: not only were most respondents not members of more than one panel, but the so-called professionals, the ones doing most of the surveys, were actually the ones giving the most thoughtful and reliable answers. This conclusion from the ARF confirms what the Dutch NOPVO study found over two years ago.

But for the ARF, duplication and professional respondents are not the biggest issue here; its findings highlight that researchers should pay attention to other questions too.

Sample Source

The number one thing that buyers and suppliers should be talking about right now is that panels are not interchangeable. According to the ARF, buyers need to start having conversations with suppliers about the sample sources that they use, which is not a conversation they’re having today.

In a recent Research podcast, ARF’s Joel Rubinson explained how

“…operations people within the sample suppliers need to start monitoring and managing how they source sample for a given study. Not just based on sample availability and productivity, but also based on data consistency..."
This, according to Rubinson, should be the number one area to attend to in order to establish comparability across studies.

90 Day Deadline

The ARF has several more sets of results from this study to release in the coming weeks and has given itself 90 days to come up with recommendations on metrics, business practices, definitions and training. The ARF seems to be taking this self-imposed 90-day deadline seriously.

People are out of patience and out of time, and the ARF believes it must come up with solutions, or chaos may ensue as people devise their own proprietary ones. This is such an interrelated ecosystem that individual solutions, where one buyer has its own approach and another has found a different one, will simply not serve the research industry.

Friday 19 June 2009

The "third way" of research: Bigger, Better, Cheaper and Faster!

Two recent blog posts inspired me a lot. The first is Ray Poynter's post on "the New MR" and how community research is taking quantitative budgets to deliver qualitative benefits. He writes:

"Head of Synovate, Adrian Chedore, has described communities as the fastest growing aspect of market research, and the reason for his deal with Vision Critical. However, unlike online data collection, online communities are a true category destroyer. Communities compete for quantitative research budgets, but deliver qualitative research benefits."

I am not convinced communities will prove to be a category destroyer; on the contrary, they may be a whole new category in their own right. Here's the thing: different research community solution providers position their community solutions differently. KL Communications and CommuniSpace are at one end of the "size" equation, advocating smaller research communities. These should be much more productive, and the insights will be much more "qualitatively focused".
In traditional research companies, it will be the qualitative department taking care of such a community; there will be many qualitative insights that need interpretation. Indeed, a smaller-sized research community requires above-average moderators and the application of specialist techniques.
At the other end of the 'size' equation we find providers like Jive and Lithium, allowing for several thousand members per community and clearly skewed towards more quantitative research results. With that many members, communities allow for coverage across multiple target segments and have huge potential for quantitative feedback. Why shouldn't we take advantage of the opportunity to contact much larger samples than was possible in the past and provide more reliable and comprehensive data?
So what will it be? Are research communities the domain of qualitative or quantitative researchers?
I think you'll agree: neither of the two and both of them! Right, I almost forgot the "in-between" solutions: those providers promoting mid-sized communities, like Passenger, Vovici and of course Angus Reid's Vision Critical.
I am most confident in this positioning: in the middle. It's a bit like Bill Clinton's centrism (a.k.a. the "third way"), advocating a mix of left-wing and right-wing policies. This third method of market research may help us overcome the fears of the more traditionally oriented researchers, both the qual and quant teams who are afraid it may cannibalise their research. It will be another method of market research, leveraging the strengths of both approaches combined with the benefits of the available technology.

It may be bigger, faster, and cheaper. And this brings me to the second post that inspired me: Tom Ewing's post on the same topic, in which he also writes the following:
"The cry in online research for the last five years or so has been “simpler! quicker! easier!”. Most online communities are none of these."

I'd argue the right research community should deliver faster, more flexible, cheaper and better research insights. The clients I've been presenting our community solution to love it precisely because it is all of the above:

  • Fast and Flexible: Collect insights quickly; a community is "always on" and directly accessible.
  • Better: Given the longitudinal nature of research communities, it is possible to go much deeper on a given topic than in an ad-hoc research project. Respondents will be much more engaged, which should result in better quality data, more reliable if you will: less straight-lining, more thoughtful answers, higher response rates (we see an average of just under 50%).
  • More for Less: Supplemental research becomes available at little extra cost. Research communities are fundamentally changing the cost structure of research from a variable-cost, per-project basis to a fixed-cost “all you can eat” basis.

But it is true: with communities comes the need to reduce our dependence on evaluative research data and learn to trust listening to these new sources of consumer insight. I believe that anything our clients do to get more in touch with consumers is a positive.

Or as per Adrian Chedore:

“We don’t see the connection through social media as any more “risky” than relying on traditional qualitative research approaches. Social media are a great way to gauge consumer reactions to trends and often provide a fast return on research investment. It is the joint task of researcher and client to come up with solutions, not that of the respondent.”

Monday 8 June 2009

The world's biggest small research company

I have asked myself before whether big research firms can act small (see the post here), and back then I was already convinced we could, as we all are here at Synovate. Now here are some corporate Synovate arguments I simply have to share:
Synovate's new corporate video! Make sure not to miss the final 15 seconds... here it is, I think it's great:
Well, there you had the original and official corporate video. But wait, there's more: the outtakes! Will they start a fight? Check for yourself, and do mind the final 20 seconds or so...: