Fresh insights on making the most of the powerful Kano model for digital product development
By Kelly Moran
Principal Experience Researcher
Building up a product over time in a way that provides the right features at the right time is a huge challenge for businesses of all types and maturity levels. The Kano model, a method that gauges and maps emotional reactions to individual features for the purposes of product roadmapping, can be a great answer to this challenge.
Developed originally for use in physical products, the Kano model is commonly leveraged for digital product development, to identify and prioritize product features that would most satisfy users.
We have also seen many groups struggle to get value out of the model. Both designing the Kano study carefully and tailoring the analysis are critical to success with Kano. You could argue this is true of any research method (and I’d agree with you), but the Kano is so often misapplied and misunderstood that it bears restating the obvious: put real effort into test design with your analysis plan already in mind.
With several years of experience using the Kano model on a variety of projects, we have some fresh perspectives to share on how to make the most of this powerful tool.
As a very quick primer on this method, in a Kano study we ask users a specific pair of questions about each potential product feature: how they feel both with and without that feature present.
Next, we map each pair of responses to a table to determine the overall emotional reaction category for that feature.
Ultimately, the team can calculate the potential for satisfaction and dissatisfaction associated with including or leaving out that feature.
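The response-pair mapping described above is usually done with the standard Kano evaluation table. Here is a minimal Python sketch of that step, using the commonly published table layout and the usual category codes (A = Attractive, O = One-Dimensional, M = Must-Have, I = Indifferent, R = Reverse, Q = Questionable); your own worksheet may label the options differently.

```python
# Standard Kano evaluation table: rows are the functional ("feature present")
# response, columns are the dysfunctional ("feature absent") response.
EVAL_TABLE = {
    "like":     {"like": "Q", "expect": "A", "neutral": "A", "tolerate": "A", "dislike": "O"},
    "expect":   {"like": "R", "expect": "I", "neutral": "I", "tolerate": "I", "dislike": "M"},
    "neutral":  {"like": "R", "expect": "I", "neutral": "I", "tolerate": "I", "dislike": "M"},
    "tolerate": {"like": "R", "expect": "I", "neutral": "I", "tolerate": "I", "dislike": "M"},
    "dislike":  {"like": "R", "expect": "R", "neutral": "R", "tolerate": "R", "dislike": "Q"},
}

def classify(functional: str, dysfunctional: str) -> str:
    """Return the Kano category for one participant's response pair."""
    return EVAL_TABLE[functional][dysfunctional]

print(classify("like", "dislike"))    # O: one-dimensional
print(classify("expect", "dislike"))  # M: must-have
```

The Questionable (Q) cells catch contradictory answers, such as liking a feature both when it is present and when it is absent; those responses are typically excluded from analysis.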
This allows teams to make less politically-fueled and more user-informed decisions about what to include in a product launch, a minimum viable product, or a new release. You can also find the basics of the Kano model further explained in this projekt202 blog post.
Of course, we have many methods for identifying user needs and motivations. One typical process is to gather upfront understanding using observational, in-context sessions with users. The Kano method, however, is a handy technique when we’re helping clients prioritize the multiple feature ideas that emerged from user understanding or from a client’s team imagining, or both.
Once you’ve decided to conduct a Kano study, there are nuances to getting solid results. Let’s walk through these from setup through execution and analysis.
Setting Up the Test
Setting up a Kano model test for success means selecting the right things to test, crafting the session purposefully and collecting important supplementary data. Remember to bring in as much context as possible by setting up a realistic scenario and phrasing your feature descriptions and additional conversation in ways that reflect the user’s perspective, not the company’s.
1. Applying Kano the Right Way
Remember, the point of the Kano model is to figure out where to put your limited resources. It is intended to test features for potential inclusion, not to evaluate delivery and execution of those features. It tests the response to feature presence and absence; how those features work isn’t something you should bring to your users at this exploratory phase.
Instead of asking if customers would like to view information in a table view as opposed to in a pie chart, ask how they feel about having access to that information, period. It may be that they don’t care at all and you can put your brainpower somewhere else.
Discover which features are important first; work out the details later. Usability or A/B testing will settle execution.
2. Phrasing from the User’s POV
When presenting the features to test participants, remember to describe them from the user’s, not the company’s, point of view.
Wrong: How would you feel if this app sent your information to our database so we can know where you are, which will help us find better recommendations to send you?
Right: How would you feel if this app used your location to provide useful tips about the local area?
Visuals or mockups are very helpful, but words frame the way test participants consider the concept. A brief description provides context and helps them focus on the specific attribute you’re asking about. Phrase it in terms of benefit to, or impact on, the user.
3. Divide Features into Domains
Help the user identify the context of use by grouping features by use case or topical area.
When working on a consumer healthcare application, I grouped features into the areas of:
- Doctor Recommendations
- Costs and Medical Bills
- Advice and Education
- The Mobile Experience
Each of these domains included three to five features.
By grouping into these domains, I could introduce each set of features with some grounding discussion on how participants currently experience the problems each feature claims to solve. This situates the user and allows for more reality-based responses. This is key to distinguishing delight factors from more basic requirements.
4. Establish a Realistic Scenario
Help your users visualize themselves encountering your product in the wild. Set up a usage scenario for the entire experience and tie this to the individual features.
I like to set up the scenario by providing a description of what the product is intended to accomplish. So I might say: This application will help you understand your health insurance coverage, costs, and benefits, as well as monitor your overall health and well-being by giving you anytime access to all your medical and insurance records. As I get to each feature, I would set up an actual scenario of when that feature would be used.
Bonus tip: To make the most of the initial setup description, I also use this as a baseline measurement of interest (typically on a 1-7 scale), which I repeat at the end of the session. It’s interesting to note if interest levels increase or decrease once participants have been exposed to feature details and possibilities.
5. Tailor the Follow-Up
As mentioned, a Kano model study includes a pair of questions targeting a response to both feature presence and feature absence. These are often followed up by a third question, which is used as supplementary data and provides some ability to validate the veracity of the participant’s previous two responses. A 7- or 10-point scale of self-stated importance is commonly used here. I recommend taking the opportunity to customize this to the unique needs of each project. For instance, when a team had a high level of interest in consumers’ willingness to purchase, I developed a Willingness to Buy follow-up.
After each pair of questions, I asked, “How would inclusion of this feature impact your willingness to buy this product?” The response options were:
- I would pay more to buy this product if it had this feature over a product that did not.
- I would not pay more but I would choose to buy this product over another product that did not have this feature.
- This feature does not impact my decision to buy.
- I would prefer to buy a product that does not have this feature.
The responses here helped us nail down features that would make a huge difference and allowed for some finer-grained recommendations. (Real World Hint: People hate to admit they’d pay more for something, so when they do, they typically really like it. For this reason, I aggregate the “would pay more” and “wouldn’t pay more but would choose” responses together during analysis, but make a side note of which features had a high number for “would pay more.”)
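The aggregation described in the hint above can be sketched in a few lines. The option labels here are invented shorthand for the four response options listed earlier; the pooling of the two positive responses, with a side note of the pay-more count, follows the approach just described.

```python
from collections import Counter

# Hypothetical labels for the four Willingness to Buy options.
OPTIONS = ["pay_more", "would_choose", "no_impact", "prefer_without"]

def summarize_wtb(responses):
    """Pool the two positive options, keeping the pay-more count as a side note."""
    counts = Counter(responses)
    return {
        "positive": counts["pay_more"] + counts["would_choose"],
        "pay_more_note": counts["pay_more"],   # high values flag strong favorites
        "no_impact": counts["no_impact"],
        "prefer_without": counts["prefer_without"],
    }

responses = ["pay_more", "would_choose", "would_choose", "no_impact", "pay_more"]
print(summarize_wtb(responses))
# {'positive': 4, 'pay_more_note': 2, 'no_impact': 1, 'prefer_without': 0}
```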
Pick a careful follow-up question to shed a little more light on how users respond to each feature and how that relates to the project’s overall goals. I strongly suggest keeping this follow-up the same for all features in the study, to avoid confusion.
Conducting the Test
The Kano model is not your typical “how much do you like this” survey. The response options (like, expect, neutral, tolerate, and dislike) are not familiar to most test participants, and understanding the language being used is paramount to getting useful data. If your test participants assume the “like” response has to mean they like it a whole lot because it’s higher up than the “expect” response, or that liking a feature being present means they must therefore choose “dislike” when the feature is absent, you’ll miss all the important nuances of their true reactions. There is no strength of liking or not liking in this method; it’s not a 1-5 scaled response.

Avoid this potential confusion in two ways: first, calibrate the participants’ responses by walking them through two example features; second, always moderate the test (do not conduct it as an online or mail-in survey).
6. Provide Response Examples
The major differences within the Kano framework occur when a participant Likes versus Expects or Tolerates/Feels Neutral to versus Dislikes. Help them grasp these nuances with a couple of (non-app related) example questions. I use the ones below because the almost silly level of obviousness makes a clear and memorable point:
To demonstrate a feature that a participant might expect as opposed to like when present, and then feel dislike as opposed to tolerance or neutrality when absent, I ask:
- How would you feel if your car had a steering wheel? / How would you feel if your car did not have a steering wheel?
I follow this with a feature that is typically liked, not expected when present, and tolerated (some even say expected) when absent, as opposed to disliked:
- How would you feel if your car got 1,000 miles to the gallon? / How would you feel if your car did not get 1,000 miles to the gallon?
Providing these examples and talking over Like versus Expect and Dislike versus Tolerance sets the stage for clearer categorization of the tested features.
7. Moderate the Session and Probe for Rationale
Because you need to walk participants through the examples above, you’ll need to moderate the session. That’s not the only value of moderating, however. By sitting with your participants (or sticking with them on the phone), you can probe for the rationale behind their responses, which allows you to make more nuanced recommendations later.
Keep watch for when participants say they expect a feature, and follow up. Ask them to tell you about a time when this feature was, or would have been, useful in the recent past. This can provide helpful direction when it comes to executing this potential Kano “Must-Have” feature.
Try to inject this type of occasional discussion at the end of a feature’s three questions (the pair of questions dealing with feature presence and absence, and the custom follow-up). Once they have provided their responses to these, that is the best time to dig into the details of why they responded that way.
Analyzing the Results
The beautiful thing about using the Kano model is that, after testing, you have some immediate results. The basic Kano reaction categories come right out after entering the data into your calculation sheet (or plotting them on your table if you’re doing analysis manually). To extract even more value from spending an hour with 12-24 different end users, you should do a little further analysis.
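That “calculation sheet” step can be sketched as follows. This uses the widely cited Better/Worse (satisfaction/dissatisfaction) coefficients from Berger et al., computed from each participant’s Kano category for a single feature; the category codes are the usual convention, and Reverse and Questionable responses are conventionally excluded from the denominator.

```python
from collections import Counter

def kano_coefficients(categories):
    """Return (satisfaction, dissatisfaction) coefficients for one feature.

    categories: one Kano category code per participant
    (A=Attractive, O=One-Dimensional, M=Must-Have, I=Indifferent).
    """
    counts = Counter(categories)
    a, o, m, i = counts["A"], counts["O"], counts["M"], counts["I"]
    total = a + o + m + i  # R and Q are conventionally left out
    satisfaction = (a + o) / total       # 0..1: potential to delight if present
    dissatisfaction = -(o + m) / total   # -1..0: potential to upset if absent
    return satisfaction, dissatisfaction

# Example: 12 participants' categories for one feature
cats = ["A"] * 5 + ["O"] * 3 + ["M"] * 2 + ["I"] * 2
sat, dissat = kano_coefficients(cats)
print(round(sat, 2), round(dissat, 2))  # 0.67 -0.42
```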
8. Test the Strength of Trends with Data Segmentation
Checking to see if the reaction categories hold true after splicing the population different ways is always a good idea. Remember, however, that you are basically reducing your sample size when you do this, so keep focused on big changes in the Kano reaction categories if you split your sample.
In one healthcare study, for example, a feature that provided transparency into a person’s individual fitness level in comparison to that of their peers was an Attractive feature overall, but became One-Dimensional (an even more strongly positive reaction in the Kano methodology) for very fit users, while trending Reverse (a negative reaction) for those with chronic medical conditions.
Try other forms of segmentation using your screening criteria or using data collected in an initial interview. You may end up not finding any differences, and that’s fine. I consider it part of due diligence to at least try.
9. Use Triangulation
The Kano is a strong tool, but make sure your analysis is robust enough to stand up to scrutiny with some good old-fashioned data triangulation. The customized follow-up used after each question pair is a good starting point. Feel free to be creative here. Remember, this will supplement the Kano data, so it’s an opportunity to get at participants’ responses from another angle.
As a final data point, I like to add a “Top Three” component. Near the end of the testing session, I show a screen or a printout listing all the features we covered. I ask participants to pick their three favorites.
It’s always interesting and powerful to see a feature come out as a Kano Must-Have, be noted 25% of the time as a feature they’d pay more for, and get mentioned nine times out of 12 as a Top Three.
10. Leverage All That Discussion to Make Your Final Recommendations
You just spent a dozen or more hours talking to people. Utilize that good qualitative data to supplement and fine-tune the Kano results. This is particularly helpful in providing finer-grained analysis of the features which provoked an Attractive reaction. Remember that Attractive features have a somewhat ambiguous call to action: people delight in having them but tend to tolerate their absence.
The philosophy behind the Kano model states that Attractive features have a tendency to transition over time into Must-Haves. Look carefully at the qualitative commentary, as well as the individual responses which feed into the potential for satisfaction and dissatisfaction (the satisfaction and dissatisfaction coefficients), to identify what I call Rising Attractives: features that are on their way to Must-Have status.
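One way to operationalize that Rising Attractive idea is to flag features whose overall category is Attractive but whose dissatisfaction coefficient is already unusually strong. This is a hedged sketch of that heuristic; the -0.4 cutoff is an illustrative assumption, not part of the method, and the qualitative commentary should still make the final call.

```python
def rising_attractives(features, threshold=-0.4):
    """features: name -> (category, dissatisfaction_coefficient).

    Flags Attractive features whose dissatisfaction coefficient has
    already crossed the (illustrative) threshold toward Must-Have.
    """
    return [name for name, (cat, dissat) in features.items()
            if cat == "A" and dissat <= threshold]

# Invented example data
features = {
    "peer_fitness_compare": ("A", -0.45),
    "local_tips":           ("A", -0.10),
    "bill_history":         ("M", -0.70),
}
print(rising_attractives(features))  # ['peer_fitness_compare']
```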
The tip here is to use everything you’ve got to interpret the data set and extract the most useful insight.
Working with the Kano model has allowed the team at projekt202 to make immediately useful and actionable recommendations for both MVP launches and product updates. The tool is receptive to combination with both qualitative and quantitative approaches, and provides output which stands up well in stakeholder meetings. When applied correctly, the Kano is reliable and even flexible. Remember to plan for your analysis in advance, set yourself up for successful interpretation, and reach deep into your data set to make the most of your recommendations.
projekt202 is the leader in experience-driven software design and development. We are passionate about improving the experiences that people have with enterprise and consumer digital touchpoints.