Why you should assess your organization’s innovation performance (part 3)

And how you should do it

What do we measure when we try to measure the innovation performance?
As we have mentioned before, the most crucial part of assessing your performance is knowing what to measure. And the key is to measure the performance of your organization's innovation management system. So how do you know if you cover the right things, if you cover them the right way, or if you are missing something? Well, you can never really know, because too many factors are fluid and context dependent. But you can look at best practices that accumulate lessons learned and draw from those insights. With 15 years of experience in this field, including 7 years of innovation management standardization, I will share some of my own lessons learned and hope they may be useful to you.

Create a structure that is as inclusive as possible
Coming partially from a computer science background, I am always keen to ensure that nothing falls through the cracks. And with assessment methodologies it is critically important to ensure that the methodology of choice covers all aspects of your innovation management system. So this is the logic I propose for an assessment structure that should suit most organizations.

Overall areas

  1. Strategic innovation management
  2. Operational innovation management
  3. Innovation management support
The logic behind the structure of innovation management

Let me explain why I have chosen to divide the structure into these areas. Strategic innovation management is all about establishing, managing and maintaining the innovation management system, its governance, and all of its components.

Operational innovation management is mainly about managing the innovation process and its core components in each phase. This is where innovation projects happen. Once the innovation process is established all operational activities happen within its structure.

Innovation management support includes components that are not part of the innovation process but that ensure its functionality, like innovation tools, methods and skills. Examples are strategic intelligence, ideation tools, or innovation funding. Together they support all aspects of the innovation management system, especially the innovation process.

Examples of area components
But what components am I actually referring to in each of these areas? And what are we actually going to measure within them? Let me give you a few simple examples as a brief overview of the contents. Strategic innovation management includes defining and establishing an innovation vision, an innovation policy, an innovation strategy, etc., but also setting up a governance structure for, and tools to improve, the innovation management system. Operational innovation management mainly concerns the configuration, the structure and the phases – including input and output – of the innovation process. Innovation management support, finally, comprises all those components that ensure that the innovation process runs smoothly and that complement the process to make it into a system; it can contain components such as funding, infrastructure, competence, strategic intelligence, etc. When you put all of these components together you will ensure that you have a complete innovation management system in place.
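The structure above can be sketched as a simple data structure. This is a minimal, illustrative Python sketch; the component names are the examples from this article, not an exhaustive or standardized list:

```python
# Three assessment areas, each with example components drawn from the
# article (illustrative, not a complete checklist).
assessment_structure = {
    "Strategic innovation management": [
        "Innovation vision",
        "Innovation policy",
        "Innovation strategy",
        "Governance structure",
        "System improvement tools",
    ],
    "Operational innovation management": [
        "Process configuration",
        "Process structure",
        "Process phases (input and output)",
    ],
    "Innovation management support": [
        "Funding",
        "Infrastructure",
        "Competence",
        "Strategic intelligence",
    ],
}

# A complete assessment covers every component in every area.
total_components = sum(len(c) for c in assessment_structure.values())
print(total_components)  # 12
```

A structure like this makes it easy to check coverage: any component of your system that does not fit one of the three areas is a sign the structure, or your system, has a gap.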

Maturity levels
The next issue we run into is the levels of measurement. A very common way to measure levels is by using a Likert scale. Likert scales typically allow respondents to rate their level of agreement with a statement on a scale from one to five (or any number of choice). So for a statement saying, for instance, "Your organization has an innovation process in place", you get the options 1) "strongly disagree", 2) "disagree", 3) "neither agree nor disagree", 4) "agree", and 5) "strongly agree". Traditionally the same rating system is applied to all questions in an assessment. The biggest hurdle with this evaluation model is that "agreeing" or "disagreeing" with a statement may mean completely different things to many – or even all – engaged respondents. I have experienced firsthand how people who had no innovation process whatsoever in place claimed that they did, and chose "agree". Since they were not very familiar with what an innovation process looks like (in the eyes of an innovation manager), they accepted anything that could be viewed as an innovation process. So for a Likert evaluation we would technically need to ensure that all respondents share the same frame of reference, which in practice is almost impossible.
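A traditional Likert assessment can be modeled as one shared one-to-five scale applied to every statement, with results aggregated per area. A minimal sketch, where the ratings are invented for illustration:

```python
# Classic Likert setup: the same five options apply to every statement.
LIKERT = {
    1: "strongly disagree",
    2: "disagree",
    3: "neither agree nor disagree",
    4: "agree",
    5: "strongly agree",
}

def average_per_area(responses: dict) -> dict:
    """Average the 1-5 ratings given to the statements in each area."""
    return {area: round(sum(r) / len(r), 2) for area, r in responses.items()}

# Hypothetical ratings for statements grouped by the three areas.
scores = average_per_area({
    "Strategic innovation management": [4, 3, 2],
    "Operational innovation management": [2, 2, 1],
    "Innovation management support": [3, 4, 3],
})
print(scores["Operational innovation management"])  # 1.67
```

Note that the numeric average inherits the frame-of-reference problem described above: a "4" from one respondent may not mean the same thing as a "4" from another.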

A more comprehensive way of measuring maturity levels is to provide respondents with a scale that is adapted to the statement at hand and that actually reflects the current maturity level for that statement. Using a tailored Likert model, the options could be 1) "no, we haven't even considered establishing an innovation process", 3) "yes, we have started, but very few are yet contributing ideas", 5) "yes, and we launch several new innovation projects every month thanks to ideas submitted". The predicament with tailored ratings is that they too can vary too much in perception. What if your organization doesn't match any maturity level exactly and you fall between two options: how should you rate yourself? So if you can't find adapted options that are either accurate enough or general enough to apply to your assessment, you are better off with the simpler scale. My recommendation is to be as accurate and detailed as possible, but if you are not completely certain that your options match reality, go for the basics.
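A tailored scale can be expressed as an explicit mapping from score to description, which also makes the "falling between two options" problem visible: if the observed situation matches none of the defined levels, no score can be assigned. A minimal sketch, where the level descriptions paraphrase the example above:

```python
# Tailored maturity scale for one statement: each score has its own
# statement-specific description instead of generic (dis)agreement.
process_maturity = {
    1: "We have not even considered establishing an innovation process",
    3: "We have started, but very few are yet contributing ideas",
    5: "We launch several new innovation projects every month from ideas",
}

def score_for(scale: dict, observation: str):
    """Return the score matching the observed situation, or None when the
    organization falls between the defined maturity levels."""
    for score, description in scale.items():
        if description == observation:
            return score
    return None  # the "between two options" predicament

print(score_for(process_maturity,
                "We have started, but very few are yet contributing ideas"))  # 3
```

The `None` case is exactly the situation the article warns about: a tailored scale only works if its levels are either accurate enough or general enough to cover every real situation.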

This was the final article in a series of three. You are more than welcome to read the first article or the second article as well, since they are written as a sequence. Next I will write a series on why you need an innovation management strategy.
