

The Maximum Difference (aka Maximum Discrimination or Max-Dif) Methodology: a Questionable Solution in Search of a Problem

This is a new methodology designed to help researchers improve the level of discrimination in studies where respondents are asked to rate a set of attributes and benefits in terms of importance.  Sometimes in such studies people rate all the characteristics the same, usually high: every characteristic may get a 4 or 5 on a five-point scale.
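
A tiny, hypothetical illustration of that problem in Python (the attribute names and the 1-to-5 scores below are invented purely for this sketch):

    # Hypothetical rating data: every attribute ends up with roughly the same
    # high mean importance, so the scale barely discriminates among them.
    ratings = {
        "price":       [5, 4, 5, 5, 4],
        "quality":     [4, 5, 5, 4, 5],
        "convenience": [5, 5, 4, 4, 4],
        "brand name":  [4, 4, 5, 5, 5],
    }

    for attribute, scores in ratings.items():
        mean = sum(scores) / len(scores)
        print(f"{attribute:12s} mean importance = {mean:.1f}")
    # Every mean falls between 4.4 and 4.6, so the ratings tell you almost
    # nothing about which attribute actually matters most.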

The Max-Dif methodology is simple.  It calls for giving the respondent the list of attributes and benefits (A/Bs) and asking them to pick the one they deem most important, then the one they think is least important.

Then go back and repeat the process over and over again until all of the A/Bs have been evaluated.
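
Here is a minimal sketch of that procedure, with a stand-in "respondent" represented by a dict of invented preference scores rather than a live interview (attribute names and scores are assumptions for illustration):

    # The Max-Dif task as described above: from the remaining A/Bs, take the one
    # deemed most important and the one deemed least important, remove both, and
    # repeat until the list is exhausted.
    hidden_preference = {
        "price": 9, "quality": 7, "convenience": 5,
        "brand name": 3, "packaging": 2, "warranty": 6,
    }

    remaining = list(hidden_preference)
    best_order, worst_order = [], []

    while remaining:
        best = max(remaining, key=hidden_preference.get)
        remaining.remove(best)
        best_order.append(best)
        if remaining:
            worst = min(remaining, key=hidden_preference.get)
            remaining.remove(worst)
            worst_order.append(worst)

    # Stitching the picks together yields a full rank order and nothing more.
    ranking = best_order + worst_order[::-1]
    print(ranking)
    # ['price', 'quality', 'warranty', 'convenience', 'brand name', 'packaging']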

It has one advantage.  It will give you maximum differentiation at the level of the individual respondent if, in fact, you have a small number of attributes and benefits, let's say fewer than 12.  So if you have a small number of A/Bs and you're concerned about discrimination, this is a way to go.

A close alternative to this methodology is to ask people to pick the three most important (we would say desirable) attributes and benefits, followed by the three least important or desirable.  You would then ask about the next three that are most important, followed by the next three that are least important, and so on.  This approach is faster than the single-item analog and can handle more attributes and benefits.
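
To see why the grouped version is faster, a back-of-the-envelope comparison (the list length of 20 A/Bs is an assumption for illustration):

    import math

    # Each round of the basic task disposes of 2 items (one most, one least
    # important), while the pick-three variant disposes of 6.
    n_items = 20
    rounds_single = math.ceil(n_items / 2)   # 10 rounds of most/least picks
    rounds_triples = math.ceil(n_items / 6)  # 4 rounds of three-most/three-least picks
    print(rounds_single, rounds_triples)     # 10 4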

But back to the basic Max-Dif methodology and its problems:

  1. If it's a long list of A/Bs, the task takes too much time to administer and doesn't yield reliable data.  Respondents are overwhelmed by the task and begin to answer randomly.
  2. You can overcome this problem by embedding the attributes in smaller sets using an orthogonal design; let's call each set a scenario or a frame (see the sketch after this list).  But this creates more problems than it's worth.  You lose the ability to look at the data on an individual-respondent basis.  The problem is identical to conjoint measurement, where each respondent may see only 8 or 12 scenarios while the effects of perhaps fifty or even one hundred features are captured.  Each individual feature is seen by only a small number of respondents.
  3. The data that are generated provide only rank-order information.  You don't have any sense of whether the attributes are great or whether they're awful.
  4. The scale that appears to be most commonly used is an "importance" scale, so it has the same problems as any importance scale: it overstates the value of rational, tangible traits and understates the value of emotional, intangible ones.
  5. Although this approach improves discrimination at the individual-respondent level, it doesn't necessarily improve discrimination at the aggregate level.  This is particularly true if you're forcing respondents through an exercise in which they're asked to make fine distinctions between characteristics they've hardly thought about before.  The result is discrimination, but unreliable discrimination.
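
Point 2 above refers to frames; here is a deliberately oversimplified sketch of the idea (the attribute names, the 30-item list, the frame size, and the naive round-robin assignment are all assumptions; a real study would use an orthogonal or balanced incomplete block design):

    import itertools

    # A long attribute list is carved into small subsets ("frames"), and each
    # respondent sees only a couple of them. This naive split is only meant to
    # show why no single respondent ever evaluates every attribute, which is
    # what destroys individual-level analysis.
    attributes = [f"attribute_{i:02d}" for i in range(1, 31)]  # 30 A/Bs, invented
    frame_size = 5
    frames = [attributes[i:i + frame_size]
              for i in range(0, len(attributes), frame_size)]

    # Rotate respondents through pairs of frames.
    frame_pairs = itertools.cycle(itertools.combinations(range(len(frames)), 2))
    for respondent, pair in zip(range(1, 7), frame_pairs):
        seen = [a for f in pair for a in frames[f]]
        print(f"respondent {respondent} sees frames {pair}: "
              f"{len(seen)} of {len(attributes)} attributes")
    # Each respondent evaluates only 10 of the 30 attributes.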

As stated at the beginning, Max-Dif is a questionable solution in search of a problem. It is a methodology that should be avoided rather than adopted.
