Measuring the Practice of Collaborative Innovation

Developing and supporting the practice of collaborative innovation takes time and money. What do we assess to weigh its value? In this article innovation architect Doug Collins proposes focusing on strategic alignment of the program, relative advantage of the ideas, and engagement of the community members.

People measure the return on their practice of collaborative innovation for many reasons. They want to show the extent to which the practice supports the organization’s strategy. They want to speak to employee engagement and cultural transformation. They want to increase the resources the organization applies to the practice. They want to advance their careers.

In this article I offer a simple framework by which people who lead the practice can measure and convey its value. I explore the critical questions that open at each level.

This discussion assumes the organization practices the enquiry-led form of collaborative innovation.

Three-Level Framework

I suggest measuring the practice by the following three levels: strategic alignment of the program, relative advantage of the ideas, and engagement of the community members (figure 1). I suggest this approach for the following reasons.

  • The levels comprise the critical factors that determine the success of the program and the associated campaigns.
  • The levels can be measured.
  • The levels reflect the critical conversations that the program leader should have with their advisory board and campaign sponsors before launching campaigns (i.e., the metrics bookend important decisions that the program leader makes on the front end of their work).

Figure 1: measuring the practice by the levels of program, ideas, and engagement


The Program

Organizations pursue and support the practice of collaborative innovation by chartering and funding a formal program. The program guides and supports the individuals who want to explore the possibilities that the practice offers by way of sponsoring and participating in innovation campaigns.

The leader of the program, along with the program’s advisory board and—potentially—the campaign sponsors, has one key measure to consider at this level.

  • To what extent do the campaigns support the strategic intent of the organization?

Some may wonder whether the question fits under the general heading of common sense. My experience says no. What I find, often in the early days of the program, is that the participants take a first-come-first-served approach. Wanting to secure the first set of sponsors, the program defers depicting how the campaigns support the organization’s strategy.

The organization pays a high opportunity cost when the program pursues “low-hanging fruit,” however. People have a limited number of hours to devote to the practice. External pressures compel the organization to resolve critical business challenges in the near term. Inviting people to engage on non-core enquiries wastes their intellectual capital and defers the possibility of transforming the organization in authentic ways. Measuring strategic alignment helps the organization avoid paying this price, which far outweighs the costs of the inefficiencies I sometimes see at the level of the individual campaign.

To this end, the program leader can map their campaign plan for the year to their firm’s balanced scorecard—or whatever capstone approach the organization uses to describe strategy (figure 2). They use this simple, visual depiction to engage in dialogue with their advisory board: the people who provide higher-level perspective on the program’s direction. The measurement at this level is qualitative and subjective: a product of consultation between the program leader, the advisory board, and, potentially, the campaign sponsors.
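
The mapping itself can be as simple as a lookup. The sketch below, a hypothetical illustration only (the campaign names and scorecard themes are assumptions, not from the article), tags each planned campaign with the scorecard themes it supports and flags campaigns with no strategic anchor—the "first-come-first-served" entries that the program should question before launch.

```python
# Hypothetical sketch: map planned campaigns to balanced scorecard themes
# and flag any campaign with no strategic anchor. All names illustrative.

SCORECARD_THEMES = {"financial", "customer", "internal process", "learning"}

campaigns = [
    {"name": "Reduce onboarding friction", "themes": {"customer", "internal process"}},
    {"name": "Name the holiday party", "themes": set()},  # no strategic anchor
]

def alignment_report(campaigns):
    """Split campaigns into those that map to a scorecard theme and those that do not."""
    aligned, unaligned = [], []
    for campaign in campaigns:
        bucket = aligned if campaign["themes"] & SCORECARD_THEMES else unaligned
        bucket.append(campaign["name"])
    return aligned, unaligned

aligned, unaligned = alignment_report(campaigns)
```

The point is not the tooling but the conversation the flagged list forces with the advisory board.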

Figure 2: measures at the program level


People may observe that engaging sponsors in dialogue around strategic intent does not lend itself to a dashboard view of the program’s health. My response: allow the measure to drive the presentation, not vice versa.

The Ideas

Every collaborative innovation campaign has in mind an end user: the person or organization that the campaign team anticipates will benefit from the innovations that the practice delivers. The organization itself can serve as the internal client at times.

To this end, the practice should be able to answer one or both of the following questions in the affirmative.

  • To what extent did the campaign generate ideas that enable the client to complete a new task (e.g., through an idea about a new product or service innovation)? And, how important is this task to the client?
  • To what extent did the campaign generate ideas that enable the client to complete an existing task in an improved way (e.g., through an idea about improving an existing workflow)? And, how important is this task to the client?

The question of timing arises. The collaborative innovation practice begins at the front end of innovation. Organizations may need weeks or months to trial the ideas that the campaign produces in order to gain an inkling of the potential benefits from the resulting innovation on the back end.

Here, I recommend the program ask the sponsor to estimate the benefits at the time they decide to pursue the idea and then at the time they have trialed the resulting innovation. By way of mechanics, the program lead may, for example, be able to capture the client’s perspective by using a simple value stream map that depicts the AS IS (pre-innovation) and TO BE (post-innovation) flows that tie most closely to the critical question that the campaign addresses (figure 3). This approach can work for both the new and existing task scenarios, assuming the client has a substitute approach for completing the former.
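
As a rough illustration of the value-stream comparison, relative advantage can be expressed as the fractional reduction in total cycle time between the AS IS and TO BE flows. The step names and durations below are assumptions for the sketch, not figures from the article.

```python
# Hypothetical sketch: estimate an idea's relative advantage as the
# fractional reduction in total cycle time between the AS IS
# (pre-innovation) and TO BE (post-innovation) value streams.
# Step names and durations (in days) are illustrative.

as_is = {"intake": 2.0, "manual review": 5.0, "approval": 3.0}
to_be = {"intake": 2.0, "automated review": 0.5, "approval": 3.0}

def relative_advantage(as_is, to_be):
    """Fractional cycle-time reduction: 0.0 means no change, 0.5 means halved."""
    before = sum(as_is.values())
    after = sum(to_be.values())
    return (before - after) / before

# Here: (10.0 - 5.5) / 10.0 = 0.45, i.e. a 45% cycle-time reduction.
```

Capturing the same estimate at decision time and again after the trial, as recommended above, turns this single number into a before/after pair the sponsor can defend.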

Figure 3: a simple value stream to depict an idea’s relative advantage


Regardless of the approach taken, I encourage program leaders to focus on how the idea in question might improve the lives of the anticipated end users. Work with the sponsor and client to create well-structured, scripted scenarios that depict the relative advantage that the idea offers over current practice.

The Engagement

When a campaign sponsor invites a community of people to engage on a critical question facing the organization, they commit, in turn, to helping the individuals who contribute ideas pursue them.

To this end, I recommend the program leader develop a survey that asks community members, including those who contributed ideas, the following questions.

  • To what extent did the challenge question resonate with you?
  • To what extent did you explore the opportunity to engage fellow community members on your ideas?
  • To what extent did you explore the opportunity to pursue the ideas that you contributed—or that you found compelling?
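
If the three questions above are posed on a 1–5 Likert scale, summarizing them takes little machinery. The sketch below assumes hypothetical question keys and responses; only the three-question structure comes from the article.

```python
# Hypothetical sketch: summarize the three engagement questions from a
# 1-5 Likert survey as a mean score per question. Keys and responses
# are illustrative.
from statistics import mean

responses = [
    {"resonance": 4, "collaboration": 3, "pursuit": 2},
    {"resonance": 5, "collaboration": 4, "pursuit": 3},
    {"resonance": 3, "collaboration": 2, "pursuit": 4},
]

def summarize(responses):
    """Mean score per question across all community members."""
    questions = responses[0].keys()
    return {q: round(mean(r[q] for r in responses), 2) for q in questions}

summary = summarize(responses)
```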

Campaigns may also require a certain level of diversity of participation. I recommend the program leaders match the actual level of engagement to the segmentation scheme they developed in order to achieve a diverse audience. The campaign may, for example, have defined diversity as a function of brand, region, function, seniority, or personality type. The campaign may also have defined diversity as a function of expertise or familiarity with the subject at hand. Campaigns whose challenge questions require community members to delve into unpredictable environments may, for example, have chosen to minimize the level of expertise (figure 4).
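
One way to score how evenly participation spreads across a segmentation dimension is normalized Shannon entropy: 1.0 means a perfectly even spread across segments, 0.0 means a single segment dominated. The region labels below are assumptions for the sketch; the article does not prescribe this particular statistic.

```python
# Hypothetical sketch: score diversity of participation across one
# segmentation dimension (here, region) as normalized Shannon entropy.
# 1.0 = perfectly even spread, 0.0 = only one segment participating.
from collections import Counter
from math import log

participants = ["EMEA", "EMEA", "APAC", "Americas", "APAC", "EMEA"]

def diversity_score(segments):
    counts = Counter(segments)
    if len(counts) <= 1:
        return 0.0  # everyone came from a single segment
    total = sum(counts.values())
    entropy = -sum((n / total) * log(n / total) for n in counts.values())
    return entropy / log(len(counts))  # normalize to the [0, 1] range

score = diversity_score(participants)
```

Computing one such score per dimension (brand, region, function, seniority) gives the program leader a compact way to compare actual engagement against the segmentation scheme in figure 4.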

Figure 4: measuring diversity of engagement


Parting Thoughts

I offer two parting thoughts out of consideration for the program leader’s sanity when it comes to developing the measurement framework that makes sense to their organization and to them.

First, do not stray beyond the level of rigor that the organization as a whole applies to its current forecasting processes. I have, for example, witnessed programs attempt to estimate discounted cash flows associated with pursuing the idea in order to calculate the net present value of the associated project. Why? Someone within the organization asked—prematurely—for an ROI calculation.

Instead, focus on understanding—and then depicting—the relative advantage that the idea may offer the end user over the current environment. The practice of collaborative innovation begins in the front end of innovation. Ask the questions around relative advantage that people who work in the front end discipline themselves to ask during this part of the process. They know that speculating on the macroeconomic implications of an idea at this point is a waste of time.

Second, do not overcomplicate, or Rube Goldberg, your measurement framework. To this end, I propose in this article a structure based on the critical attributes of program alignment, relative advantage of the idea, and community engagement. These attributes tie back to conversations that the program leader should have with the campaign sponsors and advisory board members ahead of launching the challenge in question. That is, the framework enables the program leader to close the loop on decisions and expectations they set with the organization’s leaders around these attributes.

Figure 5: closing the loop by framing these preliminary questions


By Doug Collins

About the Author:


Doug Collins is an innovation architect who has specialized in the fuzzy front end of innovation for over 15 years. He has served in a variety of roles, helping organizations navigate the fuzzy front end by creating forums, venues, and approaches where the group can convene to explore the critical question.
As an author, Doug explores the critical questions relating to innovation in his book Innovation Architecture, Practical Approaches to Theory, Collaboration and Implementation. The book offers a blueprint for collaborative innovation. His bi-weekly column appears in the publication Innovation Management.

 
