Relative Importance

“Key Driver” Analysis Tool

Ever surveyed your clients to find out what is important to them, then focused on those aspects only to find it didn’t move the needle?

A basic analysis of the survey responses may be the problem. Taking the same data and applying Key Driver analysis methods can yield more accurate insights into what really drives client satisfaction.

JumpData’s online Key Driver tool is an easy-to-use way of allowing you to perform this analysis on your data. It requires no statistical knowledge or third-party stats software. Simply upload your data to the tool and download the results, typically in a few seconds.

A video of the tool in action is available on this page.

About Key Driver Analysis

Key Driver Analysis, or Relative Importance Analysis, is the generic name given to a family of regression- and correlation-based techniques used to discover which of a set of independent variables has the greatest influence on a given dependent variable, i.e. which of them matter most in determining its value. For example, in market research surveys the dependent variable could be a measure of overall satisfaction, whilst the independent variables are measures of other aspects of satisfaction, e.g. efficiency, value for money, customer service, etc.
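
To make the setup concrete, here is a minimal sketch (in Python, not JumpData’s implementation) of the naive approach: regress overall satisfaction on the other satisfaction measures and read the coefficients as importances. The file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey.csv")  # hypothetical survey export
drivers = ["efficiency", "value_for_money", "customer_service"]

# Ordinary least squares: overall satisfaction regressed on the drivers
X = sm.add_constant(df[drivers])
fit = sm.OLS(df["overall_satisfaction"], X).fit()

# Naive "importances" -- unreliable when the drivers are correlated
print(fit.params[drivers])
```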

One of the main problems with analysing this type of data is that the independent variables tend to be highly correlated with one another, a phenomenon known as multicollinearity. It can make the importance values derived from simple regression or correlation analysis inaccurate and potentially highly misleading (see https://en.wikipedia.org/wiki/Multicollinearity).
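
A common way to check whether multicollinearity is a concern before trusting raw regression coefficients is to compute variance inflation factors (VIFs); values above roughly 5–10 are a widely used warning sign. A minimal sketch, reusing the same hypothetical data:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("survey.csv")  # hypothetical survey export
drivers = ["efficiency", "value_for_money", "customer_service"]

# VIF for each driver: how much its coefficient's variance is inflated
# by its correlation with the other drivers
X = sm.add_constant(df[drivers])
for i, name in enumerate(X.columns):
    if name != "const":
        print(name, variance_inflation_factor(X.values, i))
```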

Our easy-to-use, web-based tool simultaneously applies three of the most well-respected statistical techniques for overcoming multicollinearity in regression analysis, producing easily interpretable results in a very short space of time.

The three techniques the tool uses are Shapley Value Analysis, Kruskal’s Relative Importance Analysis, and Ridge Regression.

Shapley Value Analysis and Kruskal’s Relative Importance Analysis are similar in concept. For each independent variable, we derive a measure of the strength of its correlation with the dependent variable after “stripping out” its correlations with the other independent variables. The final measure of importance for each variable is the mean of these derived correlations, taken over all possible regression models between the dependent variable and the different possible subsets of the independents. Ridge Regression, on the other hand, in effect penalises the regression coefficients in an attempt to neutralise the effect of multicollinearity; in this case, the importance values returned depend on the penalising factor.
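
As an illustration, here is a minimal sketch of the Shapley approach, assuming the standard decomposition of R² in which each predictor is credited with its average contribution over all subsets of the other predictors. It is not necessarily JumpData’s exact implementation, and the data and column names are hypothetical; note that it fits a model for every subset, so it is only practical for a modest number of predictors.

```python
from itertools import combinations
from math import factorial

import pandas as pd
from sklearn.linear_model import LinearRegression, Ridge

df = pd.read_csv("survey.csv")  # hypothetical survey export
drivers = ["efficiency", "value_for_money", "customer_service"]
y = df["overall_satisfaction"]

def r2(subset):
    """R-squared of regressing y on the given subset of drivers."""
    if not subset:
        return 0.0
    X = df[list(subset)]
    return LinearRegression().fit(X, y).score(X, y)

p = len(drivers)
for j in drivers:
    others = [d for d in drivers if d != j]
    shapley = 0.0
    # Average the R^2 gain from adding j over all subsets of the others,
    # with the usual Shapley weighting by subset size
    for k in range(p):
        for S in combinations(others, k):
            weight = factorial(k) * factorial(p - k - 1) / factorial(p)
            shapley += weight * (r2(S + (j,)) - r2(S))
    print(j, round(shapley, 4))  # these importances sum to the full-model R^2

# Ridge Regression alternative: alpha is the penalising factor, and the
# coefficients (hence any importance ranking) depend on its value.
ridge = Ridge(alpha=1.0).fit(df[drivers], y)
print(dict(zip(drivers, ridge.coef_)))
```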

For a more detailed explanation of the product, please either watch the video of the tool in action or read the technical document linked on this page.