

What Truly Drives Your Business?

Decision Sciences & Innovation empowers our clients with end-to-end quantitative research solutions and custom advanced analytics. We deliver insights that are both easy to understand and actionable, enabling our clients to make better decisions for their business. Our comprehensive approach equips our team to solve complex business problems with an extensive, continuously evolving analytical toolbox and to translate data-driven insights into business solutions.


What is going to move my business in a positive direction? Every organization has asked itself this question at some point. At KS&R, we frequently receive requests to test product (or service) features or attributes to determine which are preferred, which ultimately make a difference, and which should be included in a product or service roll-out. While there are many ways to accomplish this, one that has proven particularly useful is Maximum Difference Scaling (MaxDiff). MaxDiff is easy to design, straightforward for respondents, and enables simple analysis and interpretation of results.

Using MaxDiff, customers are asked to select the most and least important attributes across several different lists; a minimal sketch of how those picks can be turned into scores follows the list below. The analysis uses these responses to calculate the derived importance of each attribute. A stand-alone MaxDiff is often compared to traditional rating or ranking exercises, where customers state importance directly; however, rating and ranking exercises can come with some disadvantages such as:

  • cultural biases
  • not a pure ratio scale (e.g., a 5-point scale)
  • lack of differentiation among attributes
  • not usable with a large number of attributes (MaxDiff is well suited for more than 10 attributes)
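
To make the most/least mechanism concrete, here is a minimal counting-based sketch (best-minus-worst counts). This is a quick first pass rather than the Hierarchical Bayes estimation described later, and the response-data layout here is an assumption for illustration:

```python
from collections import Counter

def best_worst_counts(responses):
    """Best-minus-worst counting scores for a MaxDiff exercise.

    `responses` is assumed to be a list of completed tasks, each a dict like
    {"shown": [...attributes...], "best": attribute, "worst": attribute}.
    Score = (times picked best - times picked worst) / times shown.
    """
    best, worst, shown = Counter(), Counter(), Counter()
    for task in responses:
        shown.update(task["shown"])      # each attribute counted once per appearance
        best[task["best"]] += 1
        worst[task["worst"]] += 1
    return {a: (best[a] - worst[a]) / n for a, n in shown.items()}

# Tiny hypothetical example, for illustration only.
responses = [
    {"shown": ["WiFi", "USB ports", "Navigation system", "Back-up camera"],
     "best": "Back-up camera", "worst": "USB ports"},
    {"shown": ["WiFi", "Cruise control", "Back-up camera", "Built-in vacuum"],
     "best": "Back-up camera", "worst": "Built-in vacuum"},
]
print(best_worst_counts(responses))
```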

Anchored MaxDiff, on the other hand, allows researchers to combine derived and stated importance, helping to further differentiate the attributes tested. First, let’s look at stand-alone MaxDiff, and then see how Anchored MaxDiff takes it a step further.

STAND-ALONE MAXDIFF

In a MaxDiff exercise, respondents are asked to trade off attributes across several randomly generated tasks. Attributes are shown to respondents in a balanced fashion through the use of an experimental design. Note that MaxDiff is appropriate when product bundles and price are not being tested; if product bundles or price are to be tested, one should consider an alternative method like discrete choice.

Each MaxDiff task includes a subset of the attributes in the full group. Respondents complete several tasks, with the actual number of tasks dependent on the number of attributes being tested. An example MaxDiff task card is shown below (let’s assume these 4 attributes are from a list of 12 total attributes):

An example of a MaxDiff task card.
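
As an illustration of the design step, here is a minimal sketch of how tasks like the one above might be generated. A real study would use a formal balanced experimental design produced by specialized design software; this greedy version merely keeps attribute frequencies roughly even and is offered only as a sketch:

```python
import random

def generate_maxdiff_tasks(attributes, n_tasks=9, items_per_task=4, seed=None):
    """Generate one respondent's MaxDiff tasks with roughly balanced
    attribute frequencies (a greedy stand-in for a formal design)."""
    rng = random.Random(seed)
    counts = {a: 0 for a in attributes}
    tasks = []
    for _ in range(n_tasks):
        # Prefer the least-shown attributes so each appears a similar number of times.
        pool = sorted(attributes, key=lambda a: (counts[a], rng.random()))
        task = pool[:items_per_task]
        rng.shuffle(task)  # randomize on-screen order within the task
        for a in task:
            counts[a] += 1
        tasks.append(task)
    return tasks

attributes = [f"Attribute {i}" for i in range(1, 13)]  # e.g., 12 total attributes
for i, task in enumerate(generate_maxdiff_tasks(attributes, seed=42), start=1):
    print(f"Task {i}: {task}")
```

With 12 attributes, 4 items per task, and 9 tasks, each attribute is shown exactly 3 times per respondent under this scheme.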

MaxDiff data is analyzed using Hierarchical Bayes (HB) estimation. Because not all respondents see the same combinations of attribute subsets in a MaxDiff exercise, HB calculates individual estimates across all attributes by borrowing information from the entire base of respondents. Through the use of HB, individual-level data is generated for each attribute, enabling users to analyze the data as though each respondent had reacted to all possible combinations. The data resulting from a MaxDiff exercise is often presented similarly to rating or ranking data. For example, it can be reported as relative importance via a share of preference out of 100% (aka chip allocation), or as the percentage of respondents ranking each item first, ranking each item in their top 3, and so on.

A bar graph of data showing how each respondent ranked each item.
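
As a sketch of that reporting step, the code below converts individual-level HB utilities into shares of preference summing to 100% and into percent-ranked-top-N figures. The exp(u) / (exp(u) + k − 1) probability rescaling is a common convention rather than something specified above, and the utilities here are random stand-ins:

```python
import numpy as np

def shares_of_preference(utilities, items_per_task=4):
    """Rescale individual-level HB utilities (respondents x attributes) to
    shares of preference that sum to 100 per respondent, then average.
    The exact transform is an assumption; reporting conventions vary."""
    exp_u = np.exp(utilities)
    prob = exp_u / (exp_u + items_per_task - 1)
    shares = 100 * prob / prob.sum(axis=1, keepdims=True)
    return shares.mean(axis=0)  # average share per attribute

def pct_ranked_top(utilities, top_n=1):
    """Percent of respondents ranking each attribute in their top N,
    based on each respondent's utility ordering."""
    ranks = (-utilities).argsort(axis=1).argsort(axis=1)  # 0 = highest utility
    return 100 * (ranks < top_n).mean(axis=0)

# Hypothetical utilities for 500 respondents x 12 attributes.
rng = np.random.default_rng(0)
utils = rng.normal(size=(500, 12))
print(shares_of_preference(utils).round(1))   # sums to ~100 across attributes
print(pct_ranked_top(utils, top_n=3).round(1))
```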

A stand-alone MaxDiff provides the relative importance of each attribute, which is useful on its own; taking it a step further by adding an anchor question enables us to understand which attributes are the must-haves.

ANCHORED MAXDIFF

An enhancement to MaxDiff is to add an anchor question tied to a threshold (e.g., pricing, action, or preference). While a stand-alone MaxDiff can provide the relative importance of each attribute tested, it does not enable end users of the research to say, for example, “These 3-4 attributes are important enough to command extra dollars when consumers are considering purchasing a new car.”

Adding a multi-response follow-up question facilitates that next level of analysis: determining which of the tested attributes actually matter or make an impact. When crafting the anchor question, the threshold should relate to the study and be understandable to respondents. In the new car example above, one might ask:

Considering the list of new car attributes below, which attributes are so important that you would pay extra?  (SELECT ALL THAT APPLY)

  1. Back-up camera
  2. Custom exterior color
  3. Driver fatigue recognition
  4. Navigation system
  5. USB ports
  6. Self-cleaning windows
  7. Cruise control
  8. Auto-open trunk
  9. WiFi
  10. Built-in vacuum
  11. Forward collision warning system
  12. Rear seat reminder system
  13. None of the above

In essence, the anchor question is used when there is a need to determine whether items in a list exceed an action or attitudinal threshold, so those items can be classified as impactful or not. The anchor question is built into the HB estimation and subsequently, the analysis phase, so that in the end, a literal line can be drawn to show which attributes are impactful (and would be worth investing in because consumers will pay more for them) and which attributes are considered “nice to have.”

A bar graph showing data of how respondents ranked the features that were nice to have.
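
As a sketch of how that line might be drawn analytically, the code below classifies attributes by comparing their mean utilities to the anchor. Fixing the anchor’s utility at 0 during estimation is a common convention that we assume here; the labels and utilities are hypothetical:

```python
import numpy as np

def classify_against_anchor(utilities, labels, anchor_utility=0.0):
    """Split attributes into 'impactful' vs 'nice to have'.

    `utilities` is a respondents x attributes array of anchored HB utilities,
    where the anchor threshold is assumed to have been fixed at 0.
    An attribute is 'impactful' when its mean utility clears the anchor.
    """
    mean_u = utilities.mean(axis=0)
    impactful = [lab for lab, u in zip(labels, mean_u) if u > anchor_utility]
    nice_to_have = [lab for lab, u in zip(labels, mean_u) if u <= anchor_utility]
    return impactful, nice_to_have

# Hypothetical anchored utilities for 500 respondents x 4 example attributes.
rng = np.random.default_rng(1)
labels = ["Back-up camera", "Forward collision warning system", "WiFi", "Built-in vacuum"]
utils = rng.normal(loc=[0.8, 0.5, -0.3, -0.9], scale=1.0, size=(500, 4))
impactful, nice = classify_against_anchor(utils, labels)
print("Impactful:", impactful)
print("Nice to have:", nice)
```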

Because the anchor question takes up little real estate in the survey, we recommend always considering this valuable add-on, which provides greater insight. While it’s nice to know how attributes test against each other (as seen in a stand-alone MaxDiff exercise), the ability to measure the attributes against a meaningful threshold makes the anchor question a true asset.