SOMEWHERE BETWEEN SUCCESS & FAILURE: Assessing Implementation Performance

Have you ever gotten curious about average failure rates for implementation? So curious that you actually Googled it? Yes? Then you'll know that the often-quoted failure rates are startlingly high. It doesn't matter whether you search for rates related to change, strategy, or project implementation. They are all insanely high. So high that, on reading them, you probably (hopefully?) think something like, "This can't be right. The vast majority of all major initiatives cannot possibly fail. Nothing would be getting done, anywhere!"

The truth is...no one knows.

If that's your reaction, you are not alone. In an extensive literature review of published failure rates for strategy implementation, Cândido and Santos (2015) conclude (as have others) that:

 "...whilst it is widely acknowledged that the implementation of a new strategy can be a difficult task, the true rate of implementation failure remains to be determined. Most of the estimates presented in the literature are based on evidence that is outdated, fragmentary, fragile, or just absent."

By logical extension, if we have no idea what real failure rates are, we probably don’t know what success rates are either. 

Where does this leave practitioners and organizations that are looking for ways to assess and improve the relative performance of their project, change and strategy implementation efforts?  

It seems safe to say you are probably neither as bad nor as good as you think you are.

If you find that answer unsatisfying and truly want to understand how well you and your organization are implementing — where you are strong and where you could use improvement — why not put in place a simple mechanism to consistently evaluate your efforts?

One of the tenets of evidence-based management is to use the best available evidence. Sometimes, the best available evidence is what you systematically and credibly gather from within your own organization or team. Keep reading if you want to find out how.

Start with your definition of success.

As I wrote in a recent post, and as is reflected in the literature, definitions of implementation success are numerous and tend to differ based on how high they set the bar for success.

It may be useful, then, to start by thinking about successful implementation as multi-dimensional, rather than as a singular "thing" you attain. In her research on the implementation of strategic decisions, Susan Miller offers a helpful framework for evaluating implementation success, built on three dimensions: completion, achievement, and acceptability. I've adapted and summarized these dimensions below.

  • Completion: Did you complete all intended aspects of the implementation, within the anticipated timeframe?
  • Achievement: Did you achieve the intended performance and outcomes from the implementation?
  • Acceptability: How satisfied are stakeholders with the implementation process and outcomes? 

Second, given that we rarely reach perfection in all (or perhaps any!) of these areas, it may also be useful to think about success on a spectrum. Evaluating your efforts on these dimensions using a graduated scale may provide more useful learning than taking an all-or-nothing approach.
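To make this concrete, here is a minimal sketch (in Python) of what recording one effort against these three dimensions on a graduated scale might look like. The class, field names, and 1-5 scale are illustrative assumptions, not a prescribed instrument.

    from dataclasses import dataclass

    # Illustrative: a graduated 1-5 scale instead of a pass/fail verdict.
    SCALE = range(1, 6)  # 1 = far below expectations, 5 = fully met

    @dataclass
    class ImplementationAssessment:
        initiative: str
        completion: int     # delivered the intended scope on schedule?
        achievement: int    # realized the intended performance and outcomes?
        acceptability: int  # stakeholder satisfaction with process and results

        def __post_init__(self):
            # Reject scores outside the assumed graduated scale.
            for name in ("completion", "achievement", "acceptability"):
                score = getattr(self, name)
                if score not in SCALE:
                    raise ValueError(f"{name} must be in 1-5, got {score}")

        def summary(self) -> str:
            return (f"{self.initiative}: completion={self.completion}, "
                    f"achievement={self.achievement}, "
                    f"acceptability={self.acceptability}")

    # Example: a rollout that lands somewhere between success and failure.
    rollout = ImplementationAssessment("CRM rollout", completion=4,
                                       achievement=3, acceptability=2)
    print(rollout.summary())

A record like this captures the "spectrum" idea directly: the same effort can score well on completion and poorly on acceptability, which a single pass/fail label would hide.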

Lay the necessary groundwork.

As you begin to ask these questions about your implementation efforts, you'll quickly note that assessing performance against these dimensions requires information…which you will only have if you've laid a strong foundation for your implementation. For instance, you can't tell whether you've completed the implementation within the intended timeframe if you never defined the minimum requirements for completion and/or created an implementation plan with a clear schedule.

It can be quite easy to make assumptions about what completion is if you don't define it clearly. "All teams will be using the process by the end of the year" is one definition. However, you'll likely have a better sense of actual completion, and a better shared understanding, if you get more specific. For example: "25 teams will have documented completion of three required process steps by December 31st."
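As a sketch of the difference, here is what that more specific definition might look like as explicit, checkable criteria. The numbers mirror the example above; the year and field names are hypothetical.

    from datetime import date

    # Hypothetical, explicit completion criteria (year is illustrative).
    completion_criteria = {
        "teams_adopting": 25,            # teams with documented completion
        "required_process_steps": 3,     # steps each team must document
        "deadline": date(2025, 12, 31),  # anticipated timeframe
    }

    def is_complete(teams_documented: int, steps_per_team: int, as_of: date) -> bool:
        """True only if every explicit criterion is met on or before the deadline."""
        return (teams_documented >= completion_criteria["teams_adopting"]
                and steps_per_team >= completion_criteria["required_process_steps"]
                and as_of <= completion_criteria["deadline"])

    print(is_complete(teams_documented=25, steps_per_team=3,
                      as_of=date(2025, 11, 30)))  # True

The point is not the code itself but the discipline it forces: every term in the vague statement ("all teams", "using the process", "end of the year") has to become something you can actually check.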

Finally, it’s hard to remain objective if you create these definitions and targets at, or near, the end of your implementation; best to do so at the outset. 

Focus your improvement efforts.

Taking a multi-dimensional, graduated approach to assessing implementation performance also positions you to target areas for improvement more accurately. If you use this assessment framework across a number of implementation efforts, you may begin to see trends of low or high performance on particular dimensions. This insight can help you focus improvement efforts on the most relevant skill areas. For instance, project management skills may be needed if most implementations run behind schedule, or communications and engagement skills may need a boost if acceptability scores are low.
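Here is a small sketch of that aggregation step, under the same assumed 1-5 scale. The scores below are made up; in practice they would come from your own accumulated assessments.

    from statistics import mean

    # Hypothetical scores from three past implementation efforts.
    assessments = [
        {"completion": 4, "achievement": 3, "acceptability": 2},
        {"completion": 5, "achievement": 3, "acceptability": 2},
        {"completion": 3, "achievement": 4, "acceptability": 1},
    ]

    # Average each dimension across efforts to surface persistent weak spots.
    for dimension in ("completion", "achievement", "acceptability"):
        avg = mean(a[dimension] for a in assessments)
        print(f"{dimension}: {avg:.1f} / 5")

In this made-up data, acceptability averages well below the other two dimensions, which would point toward investing in communications and engagement rather than, say, project scheduling.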

Be consistent.

Creating new things is exciting. But when we change how we evaluate success each time we start implementing a new project or initiative, we miss the opportunity to build a body of knowledge about our organizational capacity for implementation. The simplicity of the three dimensions in the framework outlined above makes it ideal for building a rich understanding of your team's, department's, or organization's implementation skill set, without spending all of your time on evaluation.

Interested in learning more about good practices in implementation? Check out our free Implementation Playbook, a practical guide and checklist to help you better implement just about anything.


References

Cândido, Carlos J. F., and Sérgio P. Santos (2015). "Strategy implementation: What is the failure rate?" Journal of Management & Organization, 21, pp. 237-262.

Barends, Eric, Denise M. Rousseau, and Rob B. Briner (2014). Evidence-Based Management: The Basic Principles. Amsterdam: Center for Evidence-Based Management.

Miller, Susan (1997). "Implementing Strategic Decisions: Four Key Success Factors." Organization Studies, 18, pp. 577-602. Note: The findings of this research are based on a small number of case studies (11 decisions in 6 organizations).