
Opportunity Scores Provide Insight Into Creating Successful New Products

The opportunity algorithm discussed in last week’s article provides a systematic method to rank and prioritize the desired outcomes that have the highest impact on defining solutions customers will embrace and choose over competing alternatives. If an outcome is important and unsatisfied, it is a good indication of an innovation area to explore and exploit.
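For readers who want the arithmetic in front of them, here is a minimal sketch of the calculation in Python. It assumes the commonly cited form of the opportunity algorithm, Opportunity = Importance + max(Importance − Satisfaction, 0), with importance and satisfaction each expressed on a 0–10 scale; the function name and sample values are illustrative, not taken from last week’s article.

```python
# Minimal sketch of the opportunity score calculation (assumed form:
# Opportunity = Importance + max(Importance - Satisfaction, 0), 0-10 scales).

def opportunity_score(importance: float, satisfaction: float) -> float:
    """Return an opportunity score in the 0-20 range."""
    return importance + max(importance - satisfaction, 0.0)

# Illustrative values (not survey data): an important but poorly satisfied
# outcome scores high; an already satisfied outcome scores low.
print(opportunity_score(9.2, 2.8))  # 15.6 -> underserved, worth pursuing
print(opportunity_score(6.0, 8.0))  # 6.0  -> overserved
```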

On the other hand, outcomes that are unimportant or already satisfied represent little opportunity for improvement and consequently should receive minimal design resources. Making additional improvements in areas that are already overserved wastes resources and is likely to add cost without adding value.

Determining Underserved and Overserved Outcomes Using Opportunity Bands

When we plot each outcome on an X/Y graph, where the X axis represents “importance” and the Y axis represents “satisfaction,” we can see graphically how outcomes with similar index scores cluster along specific bands. See Figure 1 below.

Figure 1: Opportunity Bands

Generally speaking, outcomes with an index score of 15 or greater represent outstanding areas of innovation opportunity and should be explored and exploited as much as possible. Scores this high are most often found in new and evolving markets.

Outcomes with scores between 12 and 15 occur in both established and new markets, since products and services rarely execute a job perfectly. They represent the low-hanging fruit the development team should focus its innovation efforts on.

Scores between 10 and 12 are worthy of consideration, though they may not be unique enough to establish a highly differentiated solution. Nevertheless, improvements along these dimensions can yield a differentiated solution that separates you from the competitor pack.

Opportunities ranked below 10 are generally considered overserved and won’t provide a discriminating competitive advantage no matter how much we invest in improving them. However, because they are overserved, these outcomes might represent an opportunity to reduce overall cost by lowering the performance level of the targeted feature, assuming the feature remains “good enough.”

For example, increasing processor speed for a laptop computer may offer very little performance advantage from the user’s point of view. But what if you could reduce the speed while extending the battery life of the laptop? Perhaps that would be a better tradeoff to pursue.
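To make the banding concrete, here is a rough sketch that sorts scored outcomes into the bands described above. The thresholds (15, 12, 10) come from this article; the labels, function name, and the use of the two index scores cited later in this piece (16.4 and 4) are simply for illustration.

```python
# Sketch: group opportunity scores into the bands described above.
# Thresholds are from the article; labels and structure are illustrative.

def opportunity_band(score: float) -> str:
    if score >= 15:
        return "outstanding - explore and exploit"
    if score >= 12:
        return "low-hanging fruit - focus innovation here"
    if score >= 10:
        return "worth considering - limited differentiation"
    return "overserved - candidate for cost reduction"

# The two index scores discussed later in this article.
for name, score in [
    ("Increase likelihood of finding the right tool", 16.4),
    ("Increase likelihood of knowing if a tool is missing", 4.0),
]:
    print(f"{name}: {score} -> {opportunity_band(score)}")
```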

Positioning Strategy Using Outcome Index Scores and Satisfaction Levels

As we gather quantitative data in our research to determine our outcome ranking index, we are also gathering specific information on the customers’ “current solutions” and the corresponding satisfaction levels for each outcome. We can use this competitor data to help shape a differentiated solution and a positioning strategy.

Figure 2 is an example of index scores for a small subset of desired outcomes, along with the relative satisfaction scores of three competitors. For illustrative purposes, let’s say that brand “Z” represents you, and brands “X” and “Y” are the top two other competitors in our target market.

Figure 2: Outcome-Based Competitive Analysis

We can see that “Increasing the likelihood of finding the right tool” is an outcome with an index score of 16.4 and relatively low satisfaction levels across all three brands. This is therefore an outcome we would want to focus our development resources on solving and improving.

If we succeed in creating a solution that moves the customer’s satisfaction level toward the ideal product (total satisfaction), we will achieve a competitive advantage we can enjoy until the competitors follow suit with a competitive response of their own.

“Decreasing the time it takes to find the right tool” also looks like a promising opportunity to focus on. Here, though, we see that competitor X has a leg up on the rest of the brands. We could attempt to create a new approach that betters competitor X’s solution. Alternatively, we could simply emulate brand “X’s” solution and achieve competitive parity.

As long as we have sufficient differentiation along other opportunity vectors, this could be a sound choice that reduces our time to market in launching the new product. NOTE: We certainly don’t want to emulate competitor Y’s solution, since it has the worst score among the three; emulating their design would set us back in the minds of customers.

Finally, the opportunity “Increase the likelihood of knowing if a tool is missing” receives a 4 on the opportunity index. This suggests that the outcome is overserved and not worth focusing development efforts on. Let competitor X continue to spend development resources on improving this outcome vector – money spent on an overserved outcome is wasted effort.
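For teams that want to work with this kind of data programmatically, the sketch below shows one way the competitive comparison might be organized. The two opportunity indexes (16.4 and 4) are the ones cited above; every satisfaction value is a made-up placeholder rather than Figure 2 data, and the field names and the 0–10 “ideal” scale are assumptions of this example.

```python
# Hypothetical sketch of an outcome-based competitive comparison.
# Opportunity indexes 16.4 and 4 appear in the article; all satisfaction
# values below are invented placeholders, not the actual Figure 2 data.

outcomes = [
    {
        "outcome": "Increase likelihood of finding the right tool",
        "opportunity": 16.4,
        "satisfaction": {"X": 3.0, "Y": 2.5, "Z": 3.5},  # placeholders
    },
    {
        "outcome": "Increase likelihood of knowing if a tool is missing",
        "opportunity": 4.0,
        "satisfaction": {"X": 9.0, "Y": 8.5, "Z": 8.0},  # placeholders
    },
]

IDEAL = 10.0  # "total satisfaction" on an assumed 0-10 scale
OUR_BRAND = "Z"

for o in outcomes:
    leader = max(o["satisfaction"], key=o["satisfaction"].get)
    gap = IDEAL - o["satisfaction"][OUR_BRAND]
    print(f"{o['outcome']}: opportunity {o['opportunity']}, "
          f"satisfaction leader is brand {leader}, "
          f"our gap to ideal is {gap:.1f}")
```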

What’s next?

Using the jobs-to-be-done innovation method, an innovation and NPD team can discover what customers are really trying to get done when executing specific “jobs,” how they measure success, and the circumstances, challenges, and obstacles that stand in the way of achieving 100% satisfaction in executing the job.

We now know that an important job-to-be-done will typically have 50 to more than 150 desired outcomes involved in executing its “job chain.” By using the “opportunity algorithm,” we can identify the best opportunities (the most important outcomes with the least satisfaction, i.e., underserved outcomes), set aside the marginal ones (outcomes that are unimportant and/or already satisfied), and direct our development efforts toward more fruitful design innovations.

As we will see in future articles, the jobs-to-be-done innovation approach also provides us with a “lens” for defining new markets and product categories that result in breakthrough innovation. I look forward to exploring this topic with you.

Kevin
