Monday 31 March 2008

Reality Check: what you see isn't what you get...

Several marketing blogs linked to these brilliant examples of how today's consumers are deceived. If you haven't seen them, check them out!

The original German site Pundo3000 has a comparison, Werbung gegen Realität (advertising versus reality), based on photos of food packages and the food actually inside them. They’ve taken 100 (sometimes incredibly odd-looking) German food products and put each package side by side with what was actually inside.
Of course nobody expects their microwave meal to be as generously filled with fresh food as portrayed on the box, but shouldn't it at least have approximately the same texture?


The video gives you the idea, obviously.




Wednesday 12 March 2008

Help, my data doesn't align...!

Have you ever been there...? You did everything right: you convinced your client that online data collection is a valid method for the study, and he seems ready to move his -- let's say -- offline tracker to an online method. You've proposed a parallel study to identify changes directly resulting from the shift in data collection method, so that these findings can be assessed when considering the move to online research. And now the data between the offline and online research does not line up. What to do? Here are four steps that will help you and your client:

1) Understanding the type of difference


The first step in dealing with the differing results is to identify the type of difference:

  • Type 1 difference: a shift in results but no fundamental change in the overall conclusions and recommendations.
  • Type 2 difference: a shift in results and a fundamental change in the overall conclusions and recommendations (e.g. a different concept ranking, significant differences in brand awareness between brands, etc.).

So as a first step we must check whether the conclusions and recommendations differ between the offline and online results. Remember, it's all about supporting the client in their business decisions; if the decision will be the same, the difference in data collection method does not matter, right?



2) Explain the difference


You now have to interpret the differences found. In case of Type 1 differences, you should reassure the user of the research that several reasons exist to explain the differences, but that they are relatively small and do not influence the business decisions. In case of Type 2 differences, we should distinguish between those Type 2 results we can easily "live with" (they are not key indicators and are not hugely important for business decisions) and those Type 2 results that really differ and do influence business decisions. Once we have divided the Type 2 differences into these two groups, we can move to the next step:

3) Which data is closer to reality?


For those Type 2 differences that are really key for the project, we must now check which results (the offline or the online results) are closer to reality: which results are actually a better representation of the market? This could be based on a combination of variables: general ones such as age, gender, education or region, or ones more specific to the category, such as category use in the past three months (P3M). Perhaps you have actual market shares to compare? Perhaps sales figures? If none of this is available, you can evaluate together with the client which figures are more likely to represent reality.
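To make this a bit more tangible, here is a rough sketch (in TypeScript, with made-up age bands and figures) of how you could score each method against an external benchmark such as census data or known market shares; whichever sample deviates less from the benchmark is the better representation of reality. The variable names and numbers are purely illustrative.

```typescript
// Compare the offline and online sample distributions against an external
// benchmark (e.g. census age bands or known market shares).
// All figures below are illustrative placeholders.

type Distribution = Record<string, number>; // category -> proportion (0..1)

// Sum of absolute deviations from the benchmark: lower means closer to reality.
function totalDeviation(sample: Distribution, benchmark: Distribution): number {
  return Object.keys(benchmark).reduce(
    (sum, key) => sum + Math.abs((sample[key] ?? 0) - benchmark[key]),
    0,
  );
}

// Hypothetical example: age distribution among category users.
const benchmark: Distribution = { "18-34": 0.35, "35-54": 0.40, "55+": 0.25 };
const offlineSample: Distribution = { "18-34": 0.28, "35-54": 0.42, "55+": 0.30 };
const onlineSample: Distribution = { "18-34": 0.41, "35-54": 0.39, "55+": 0.20 };

const offlineError = totalDeviation(offlineSample, benchmark);
const onlineError = totalDeviation(onlineSample, benchmark);
console.log(offlineError < onlineError ? "offline is closer" : "online is closer");
```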

4) Calibration of results


We now have to decide how to calibrate the results between the two methods. Generally speaking, the approach towards the existing data is the following:

  1. Before or during the transition from offline to online data collection, parallel testing is done to measure the discrepancies caused by the change of method.
  2. For a limited amount of time (a couple of months, up to one or two years, depending on the historic data available), the new online data is weighted towards the existing offline data for those variables where the parallel test showed a change in results (a small sketch of this weighting idea follows the list).
  3. Once enough online data exists, and once your client feels comfortable and has gotten used to the new level of the online data, the exercise is repeated, this time weighting the historic offline data towards the new online data. This should allow the historic offline data to still be used in the future.
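As a very rough sketch of what such a calibration could look like in practice (TypeScript, with hypothetical tracker variables and figures): the parallel test gives you, per key variable, an offline and an online reading, and the ratio between the two becomes a calibration factor that is applied to new online waves until enough online history has been built up.

```typescript
// Minimal calibration sketch: derive per-variable factors from a parallel
// test and apply them to a new online wave. All figures are hypothetical.

interface ParallelResult {
  variable: string;
  offline: number; // e.g. % brand awareness in the offline parallel cell
  online: number;  // the same measure in the online parallel cell
}

// A factor above 1 means the online method under-reports vs. the offline benchmark.
function calibrationFactors(results: ParallelResult[]): Record<string, number> {
  const factors: Record<string, number> = {};
  for (const r of results) {
    factors[r.variable] = r.offline / r.online;
  }
  return factors;
}

// Weight a new online wave towards the offline benchmark, variable by variable.
function calibrate(
  wave: Record<string, number>,
  factors: Record<string, number>,
): Record<string, number> {
  const calibrated: Record<string, number> = {};
  for (const [variable, value] of Object.entries(wave)) {
    calibrated[variable] = value * (factors[variable] ?? 1); // unchanged if no factor
  }
  return calibrated;
}

// Hypothetical usage with two tracker variables:
const factors = calibrationFactors([
  { variable: "spontaneousAwareness", offline: 42, online: 35 },
  { variable: "purchaseIntent", offline: 18, online: 20 },
]);
console.log(calibrate({ spontaneousAwareness: 33, purchaseIntent: 21 }, factors));
```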

Drop me a line to let me know if the above is helpful!

Thursday 6 March 2008

Please secure my concept!



All of you who are dealing with online concept testing have heard the question: "how do you secure my stimulus material in your online survey?". I am sure you've all come up with the appropriate answer for your client: either you use specific software tools to secure graphic material in online surveys, or you have designed such a tool yourself within your company.

Using JavaScript, it is common practice to discourage respondents from copying the images within a survey (a minimal sketch follows the list below). Such techniques typically offer the following:
  • Disabling the right-click button to prevent the "copy", "save" and "print" functions.
  • Restricting the ability to print from the browser's toolbar.
  • Ensuring that an attempt to print a page results in a blank page.
  • Delivering this security without requiring any specific viewer program.


But these techniques have limitations: there simply is no way to completely prevent a respondent from saving images. A savvy internet user may still save them. The respondent may use "Print Screen" to get a picture of the image, although the entire screen is then saved, including template headers and footers, forcing the respondent to edit the saved image. The image may also be saved using "Save As" from the browser toolbar. The tools do not prevent the respondent from viewing the source code, which contains the link to the image; however, this function is disabled if the respondent takes the survey in Internet Explorer, due to the use of a secure survey link.

So what to do?

Other, non-software-related measures to secure graphics and video include:

  • Excluding certain cities / zip codes / regions from your sample (perhaps those where competitors' employees live and work). E.g. should you be doing an online Kraft concept test in the US, why not consider excluding P&G's home town?


  • Obviously you should always include the "security screening": excluding certain professions (e.g. marketing, press, certain industries related to the product).


  • Excluding certain e-mail accounts (e.g. no panellists that enrolled in the panel using an e-mail address of a certain company will be invited).


  • Showing graphics and images for only a couple of seconds, to limit the time available for taking a picture of the screen with a mobile device or photo camera.
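For that last measure, a tiny sketch of a timed exposure (TypeScript again; the element id and the five-second window are arbitrary choices for the example): the concept is shown for a fixed number of seconds and then hidden, which limits the window for photographing the screen.

```typescript
// Show a concept image for a limited time, then hide it.
// "concept-image" is a hypothetical element id; 5000 ms is an arbitrary exposure.
const EXPOSURE_MS = 5000;

const stimulus = document.getElementById("concept-image");
if (stimulus) {
  window.setTimeout(() => {
    stimulus.style.display = "none"; // hide the stimulus once the exposure time is up
  }, EXPOSURE_MS);
}
```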



I am not in favour of sending out non-disclosure agreements to panellists prior to such research: this will only attract more attention!

In the end, even if you have found the perfect software, it is still possible for malicious respondents to simply take a picture of the screen with their camera or mobile phone!

The fact remains that no 100% guarantee can be given, but remember: the same is true for other fieldwork modes, right? Any ideas?



Monday 3 March 2008

Three Essential Panel Quality Checks

In GMI's latest email newsletter, three essential panel quality checks are given: a handy guide for any user of online panels as a source for market research. In fact, the three checks are probably easier to work with than the more complete, yet bulky, 25 questions ESOMAR has defined for online research buyers on panel quality. On the other hand, I do think the GMI document covers all the basic checks regarding panel / data quality, but I'll dedicate a future post to that topic... Here are their three checks:

CHECK 1: PANEL RECRUITMENT AND INCENTIVES
  • Ensure minimum response rates. Your panel provider should require their recruitment partners to provide willing, active panelists who will secure a minimum response rate for your research studies.
  • Pay only for good panelists. Your panel provider should not compensate their recruitment partners for fraudulent panelist registrations. They should have systems in place to track which recruiting partners provide fraudulent panelists, and strictly enforce contractual measures to demand a refund of the recruiting fees paid.
  • Review panelist redemption files. Your panel provider should regularly create and thoroughly review panelist redemption files to detect any suspicious member account activity.
  • Engage panelists with profiling surveys. Your panel provider should regularly revamp each of their profiling surveys, and provide an incentive to their panelists for completing each one. This not only contributes to keeping panel profiling information up-to-date for future research studies, but also helps continuously engage the panel.

CHECK 2: PANEL REGISTRATION

  • Enforce panelist login restrictions. Your panel provider should require their panelists to log in using their email address, and to create a strong password with a minimum length and mix of character types.
  • Prevent panelists from changing personal information during registration. Your panel provider should not allow their panelists to change some of the personal information they provide during the registration process. This will prevent fraudulent panelists from creating accounts in multiple geographic regions with varying demographic attributes.
  • Store and compare account creation and changes. Your panel provider should be in a position to record a snapshot of every account created upon registration, and store that information in a secure database. Subsequent account changes made by panelists to their personal information should prompt another snapshot, so information can be compared between the two steps.
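A rough sketch of what such a snapshot comparison could look like (TypeScript, with hypothetical field names): the registration snapshot is stored once, and any later profile update is diffed against it, so that suspicious jumps, say a different country or birth year, can be flagged for review.

```typescript
// Compare a panelist's registration snapshot with a later profile update and
// report the fields that changed. Field names are hypothetical.

interface ProfileSnapshot {
  email: string;
  country: string;
  gender: string;
  birthYear: number;
}

function changedFields(
  registration: ProfileSnapshot,
  update: ProfileSnapshot,
): (keyof ProfileSnapshot)[] {
  return (Object.keys(registration) as (keyof ProfileSnapshot)[]).filter(
    (field) => registration[field] !== update[field],
  );
}

// A change of country or birth year would be a candidate for manual review.
const suspicious = changedFields(
  { email: "jane@example.com", country: "DE", gender: "F", birthYear: 1975 },
  { email: "jane@example.com", country: "US", gender: "F", birthYear: 1988 },
);
console.log(suspicious); // ["country", "birthYear"]
```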
CHECK 3: OVERALL PANEL HEALTH
  • Strictly enforce a panelist account removal policy. Your panel provider should decrease the rating of panel members who have incurred an infraction, and remove those fraudulent panelists from the panel after a set number of infractions.
  • Block fraudulent panelists without warning. Your panel provider should delete panelists who act in an egregious manner from the panel, and block them from opening a subsequent account.
  • Remove suspicious email addresses, domains and IPs. Your panel provider should permanently block a panelist suspected of using multiple email addresses, domains or IPs from accessing the panel.
  • Detect speedsters. Your panel provider should have checks in place to filter out responses provided by speedsters – panelists who complete a survey faster than the established minimum time threshold (a small sketch follows this list).
  • Keep bots and scripts at bay. Your panel provider should have technology solutions in place to prevent bots, scripts or other programs from creating or editing panelist accounts. This technology should also be used during account registration, account login and before a panelist submits changes to the personal information stored in their account.
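To illustrate the speedster check mentioned above, a minimal sketch in TypeScript: completes faster than a minimum duration threshold are flagged for removal. Using a fraction of the median interview length as that threshold is my own illustrative assumption, not necessarily how GMI defines it.

```typescript
// Flag "speedster" completes: interviews finished implausibly fast.
// The threshold (a third of the median duration) is an illustrative choice.

interface Complete {
  respondentId: string;
  durationSeconds: number;
}

function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

function flagSpeedsters(completes: Complete[], fraction = 1 / 3): Complete[] {
  const threshold = median(completes.map((c) => c.durationSeconds)) * fraction;
  return completes.filter((c) => c.durationSeconds < threshold);
}

// Hypothetical usage: "r3" finished in 90 seconds against a ~10 minute median.
console.log(
  flagSpeedsters([
    { respondentId: "r1", durationSeconds: 640 },
    { respondentId: "r2", durationSeconds: 575 },
    { respondentId: "r3", durationSeconds: 90 },
    { respondentId: "r4", durationSeconds: 700 },
  ]),
);
```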