Recently, while visiting a friend and former colleague, we came to the topic of our current work. I mentioned my book, The Implementer’s Starter Kit, and consulting in organizational change and implementation. Her response was one I’ve heard a fair amount. I’m paraphrasing here, but the gist of it was — “Processes, measures, and plans are great and all, Wendy, but my real challenge is with people! How do you get them to change?!”
She has a point. Even the best-planned and best-managed change implementation won’t succeed if you don’t convince people to adopt whatever it is you are embedding in the organization. However, I’d say it’s a mistake to think of it as an “either/or” proposition.
Instead, think about change as happening at various levels simultaneously, one of which is individual (others include group/team and organization). What’s more, all levels impact one another and are impacted by external and internal contextual factors. It’s a system in which no part operates in isolation from any other part.
That said, it can be overwhelming if you try to get a handle on all aspects of a system simultaneously. So, in this article, I focus on the individual level, and what you can do to support individual change as part of your broader change management efforts.
Specifically, I cover a few things:
- A basic model that explains why people choose to act (or not). Some evidence suggests you’ll be more successful if you align your efforts with theory, rather than going with your gut. (Who knew?!)
- How to identify specific barriers and enablers to individual change in your context. Evidence suggests methods that target a particular barrier are more effective than those that don't.
- How to select the “best” intervention and techniques to address the barriers you’ve identified. Hint: Be prepared to use your judgment informed, hopefully, by some research.
A caveat before we jump in. I intend to provide a general introduction to this topic and ideas to build on. Behavior change is complex — we’re just scratching the surface of it. Additionally, please know there are many theories of behavior and a multitude of behavior change frameworks that one can look to for guidance. To inform this article, I chose models that offered coherence and simplicity — although I often simplified them further. Additionally, I looked for those that have been verified in some way. For more about the sources that informed this article, and how I adapted them, please see the resources section at the end of the article.
Why do people choose to act (or not)?
Ability — Motivation — Opportunity (AMO) is a high-level framework that reflects the likely reality that our behavior is influenced by a variety of factors. This theory helps us understand why people who “know perfectly well” how to do something still may not do it.
Researchers define the aspects of this model differently, but to keep it simple you can think about them in this way:
- Ability: Information, skills, or capability necessary to perform the behavior
- Motivation: Drive to act or perform the behavior
- Opportunity: Contextual or situational constraints or enablers that affect performance of the behavior
Additionally, it can be helpful to understand how these factors might interact with one another to influence behavior. I’ve seen a few variants, but most suggest that motivation influences behavior and that ability and opportunity influence motivation, as illustrated by the arrows in the graphic below (Hughes, 2007; Michie et al., 2011).
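As a toy illustration only (the scores, multiplicative formula, and thresholds below are invented for this sketch, not drawn from the cited research), you can think of the interaction this way: if any one of ability, motivation, or opportunity is near zero, the behavior becomes unlikely regardless of the others.

```python
def likelihood_of_behavior(ability: float, motivation: float, opportunity: float) -> str:
    """Toy AMO diagnostic. Inputs are 0-1 scores; the multiplicative
    form and cutoffs are illustrative, not an empirically validated formula."""
    # Multiplying captures the idea that no component compensates fully
    # for the absence of another: one near-zero factor sinks the product.
    effective = ability * motivation * opportunity
    if effective >= 0.5:
        return "likely"
    elif effective >= 0.2:
        return "uncertain"
    return "unlikely"

# High motivation and ability aren't enough without opportunity:
print(likelihood_of_behavior(ability=0.9, motivation=0.9, opportunity=0.1))  # → "unlikely"
```

The point of the sketch is the shape of the model, not the numbers: training alone (raising ability) cannot rescue a change effort when opportunity is missing.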
How does this help us?
In change efforts, it's common to focus on ability — sharing information to build knowledge or awareness and providing training to develop skills. However, this model illustrates why it’s important to consider other things that may influence end users. Perhaps end users don’t believe the change will make a difference (motivation) or maybe they simply don’t have the time or resources necessary to participate (opportunity), even if they have the required knowledge and skills (ability).
Keep the AMO model in mind when planning how you’ll support (or influence) individuals directly involved in your change. This model may help you to explain to others (and remind yourself!) why just “getting the word out” about the change may not be sufficient.
You may be wondering — what is sufficient? To answer that, let your context and stakeholders lead the way!
How do you know what aspects of behavior to focus on?
Where you implement can have a big impact on your change efforts. For this reason, context is a key element of the implementation framework I use in my work with organizations. The importance of context is also borne out in research that suggests taking a “contingency” approach to change, matching chosen interventions to the situation at hand (e.g., Damanpour, 1991; Grol & Grimshaw, 2003; Stajkovic & Luthans, 1997). In fact, there is some evidence that suggests interventions targeted at specific change barriers are more effective than those that are not (Grol & Grimshaw, 2003).
Below, I review two ways you can glean insights from your context to inform how you support individual change as part of your organizational change implementation. For simplicity, I call them indirect and direct.
First, the indirect method. It can be helpful to scan your context to identify factors that might significantly impact your efforts. (I provide details on how to do this in my book.) In sum, gather input from a diverse group of stakeholders about what they think may influence the change at hand or your approach to it. You can collect this input through a brainstorming session, survey, focus groups, interviews, etc.
This input will inform various aspects of your change approach. Related to individual behavior change, some contextual factors that may have particular relevance include, (but are not limited to):
- The number of changes being simultaneously implemented in the organization. Some research suggests that when multiple changes are rolled out concurrently, individuals' commitment to change may be negatively impacted, particularly for those who have lower confidence in their ability to handle the change (self-efficacy) (Herold et al., 2007). This suggests two things: 1) you shouldn't view your change effort in isolation, and 2) efforts to help build individuals’ self-efficacy (e.g., small wins) may be beneficial.
- The magnitude and impact of the change at the group/team and individual levels. Employee commitment to an organizational change may be impacted by the degree to which the change is seen as favorable to their team and the amount of work the team and individual employees must bear to enact the change (Fedor et al., 2006). Thus, it may be important to communicate how the change is valuable for teams and individuals, not simply as “good for the organization”. Additionally, commitment may lag for those who feel they are being asked to shoulder a disproportionate burden for enacting the change. These factors may influence commitment, even for change efforts that are well managed.
- Organizational history with change. Past changes that have been poorly managed, misunderstood, or generally unsuccessful may be linked to employee cynicism about change (Reichers et al., 1997). If your organization has such a history, it can be important for leaders to take responsibility for past mistakes, and perhaps even to apologize for those that were particularly significant. Additionally, if your organization has a positive history with change, it can be helpful to publicize it. Too often the final evaluations of changes are not shared with employees — as such, they may be unaware of successes.
- The diversity of professions and skills amongst end users. Some research indicates that changes that require collaboration across different professional groups within an organization may be more challenging. Social norms, knowledge assumptions and frameworks, and professional identity may differ across groups; one should not assume uniformity (Ferlie et al., 2005). In such circumstances, cross-group collaboration may first require intentional efforts to support social interaction between groups or other trust-building interventions.
- Organizational and team climate. Climate refers to individuals’ perceptions about, and reactions to, characteristics of their work environment. Research on psychological safety has shown that a climate that supports interpersonal risk-taking is linked with team learning and performance. A study of German firms implementing process innovations suggests that a climate supportive of employees taking an active approach towards work (initiative) and interpersonal risk-taking (psychological safety) is linked to the success of such changes (Baer & Frese, 2003).
In addition to identifying contextual factors that may broadly impact individuals involved in your change, it can also be useful to get input “straight from the horse’s mouth.” The direct method involves asking actual end users about their perceptions and actions related to the change.
We can take some guidance on how to do this from work such as that of Susan Michie and colleagues. In that work, they align “domains” — such as knowledge, goals, identity, social influence — with (a variant of) the AMO framework (Michie et al., 2005; Cane et al., 2012). The idea is to ask questions about these domains to get a more nuanced understanding of what might be influencing end users’ actions.
For example, to home in on specific enablers and barriers in the realm of ability you may ask about:
- Knowledge: What do you know about X (the change)? What do you think you are being asked to do?
- Skills: Do you know how to do X? What aspects of it are easy or difficult?
To better understand what might be influencing motivation, you can ask about:
- Professional Identity: How compatible is the change with your professional values and responsibilities?
- Beliefs: How confident are you in your ability to do X? What do you think will result from doing X?
- Goals: How does X align or conflict with other things you want to do?
- Emotion: How do you feel about X? When you consider doing X, what feelings come up for you?
To gain insight into what might be at play related to opportunity you might inquire about:
- Social influences: What do your peers think about X? Who do you know who has done X successfully?
- Context and Resources: To what extent are the resources you need to do X available to you? How will you make time to do X?
You can ask these questions via focus groups, one-on-one interviews with a sample of end users, or a survey. (There are benefits and costs to each method; you’ll need to decide what is a good match for your need and available resources.) If your change implementation involves multiple groups of core users, be sure to include representatives of each group.
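If it helps to keep your interview or survey guide organized, the domain-aligned questions above can be captured in a small data structure. This is purely an organizational sketch; the domain names loosely follow Michie and colleagues' work as simplified in this article, and the helper function is my own invention.

```python
# Example questions from the article, grouped by AMO component and domain.
QUESTION_BANK = {
    "ability": {
        "knowledge": ["What do you know about X (the change)?"],
        "skills": ["Do you know how to do X? What aspects of it are easy or difficult?"],
    },
    "motivation": {
        "professional identity": ["How compatible is the change with your professional values and responsibilities?"],
        "beliefs": ["How confident are you in your ability to do X?"],
        "goals": ["How does X align or conflict with other things you want to do?"],
        "emotion": ["How do you feel about X?"],
    },
    "opportunity": {
        "social influences": ["What do your peers think about X?"],
        "context and resources": ["To what extent are the resources you need to do X available to you?"],
    },
}

def build_guide(components):
    """Flatten the bank into one ordered question list for the chosen
    AMO components (e.g., just "motivation" for a follow-up interview)."""
    return [q for c in components for qs in QUESTION_BANK[c].values() for q in qs]
```

For instance, `build_guide(["ability", "opportunity"])` yields a short guide when you already understand motivation and want to probe the other two components.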
What interventions might work “best” for the barriers you’ve identified?
Once you’ve identified specific barriers or enablers to target based on your context, you’ll need to decide which methods you’ll use to do so.
Broadly speaking, research indicates that some categories of interventions align with the different parts of the AMO model, as illustrated in the graphic below.
The good news (and bad news) is that within each one of these broader categories there are many, many different methods, often referred to as “behavior change techniques” (BCTs). (Here is a list of 93 BCTs, which also has a related app; here is another list of more than 90 methods; and, for those interested in behavioral economics, here's a list for you.)
It is also likely that some interventions may influence various barriers/enablers simultaneously.
How do you choose?
Because you know that what’s commonly done and what’s effective are not always the same thing, you might think – I’ll just identify the interventions that are most effective. Alas, "what's most effective?" is the million-dollar question!
It seems that not all techniques work well in all environments or for all situations. Also, even methods that have been shown to be consistently effective in research likely won’t work if you don’t execute them appropriately or if certain preconditions are not in place.
In sum, there is no list that says for “X” you should always do “Y”. Identifying what might work best will likely require additional research on what has been effective in situations similar to yours. You’ll also want to consider the feasibility and cost of potential interventions and how they align with the capacity and resources you have available to execute them. Thus, you should expect to use quite a bit of judgment when deciding what to do (Grol & Grimshaw, 2003).
If your head is spinning, I understand. So, let’s walk through some examples.
Example: I know what I need to do and I think I can do it, I just don’t think it’s going to make a difference. Perhaps in speaking with a variety of end users, you found they were generally clear on what they needed to do and felt comfortable they would learn how to do it in training (ability-knowledge-skills). So, it seems they felt confident in their ability to do what was required of them to enact the change (motivation-beliefs about ability). However, they weren’t convinced the change would lead to results (motivation-beliefs about consequences).
In this situation, one barrier seems to be beliefs about consequences.
A potential intervention to target this barrier might be to provide information on the verified benefits of the change as well as the costs of failure to change. There are a variety of ways you could do this, such as:
- Consistent messaging from leaders at all levels about benefits of the change (persuasive communications)
- Case studies that demonstrate the benefits of the change integrated into training (information about consequences)
- A talk from an influential person focused on providing evidence about change benefits (credible source)
- Progress updates to communicate interim results related to the goals of the change (goal monitoring)
Example: I know what you want me to do, but I'm not sure it’s going to make a difference, and I don’t have the time.
Perhaps, in addition to doubts about the results of the change, you also see a theme from your investigation related to opportunity — people already feel stretched for time and they aren’t sure how they will get this done.
In some efforts, it may be appropriate or feasible to make “structural” change at the organization or work unit level, such as shifting schedules with the aim of providing end users sufficient time to perform additional tasks (facilitation). However, this often isn’t possible.
Alternatively, you might help users plan how they’ll integrate the change into their regular schedule (action planning; time management), or aim to ensure necessary resources are available and conveniently accessible (environmental restructuring; facilitation).
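To make this judgment call concrete, here is a hypothetical sketch (the entries and feasibility notes are my own summary of the two examples above, not a validated "X implies Y" prescription) of a simple lookup pairing diagnosed barriers with candidate techniques:

```python
# Candidate BCTs per diagnosed barrier, each with a feasibility note to
# support the judgment call; illustrative only.
CANDIDATE_TECHNIQUES = {
    "beliefs about consequences": [
        ("persuasive communications", "consistent leader messaging; low cost"),
        ("information about consequences", "embed case studies in training"),
        ("credible source", "requires an influential, trusted speaker"),
        ("goal monitoring", "requires interim results worth sharing"),
    ],
    "context and resources (time)": [
        ("facilitation", "structural schedule change; often infeasible"),
        ("action planning", "help users slot the change into routines"),
        ("environmental restructuring", "make needed resources accessible"),
    ],
}

def shortlist(barrier):
    """Return candidate techniques for a diagnosed barrier, or an empty
    list when nothing is on file and more research is needed."""
    return CANDIDATE_TECHNIQUES.get(barrier, [])
```

An empty shortlist is a useful signal too: it tells you this barrier needs the additional research the article recommends before you commit to an intervention.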
Broadly applicable interventions
As noted previously, you’ll likely need to do a bit of research to inform your selection of interventions and techniques. To help get you started, I offer below links to additional resources on a few methods that may be useful in a variety of situations.
Goal-setting & monitoring: Goal setting and goal monitoring can help people build knowledge and skills, as well as influence their beliefs about their abilities (self-efficacy). Research indicates goals are effective when they are specific and appropriately challenging (hard, but not too hard). Additionally, documenting and sharing information on goal progress (monitoring) has been linked to improved performance. To learn more about these methods, check out my blog articles here and here, each of which offers additional resources.
Feedback: Feedback might be best thought of as a “double-edged sword” — although it has been linked with performance gains, it can also be detrimental to performance. There are many factors that might influence the effectiveness of feedback. In general, feedback has been positively linked to performance when it is offered in a timely fashion, related to a specific task (rather than the person, or general praise or punishment), and provided on a continual basis (Kluger & DeNisi, 1996). For an easy-to-read guide on how to give performance feedback, which includes key ideas from research, see this piece from the Association of American Medical Colleges (AAMC). For those who like to read journal articles, this meta-analysis by Kluger & DeNisi might be helpful; somewhat more accessible is this piece by Hattie and Timperley on feedback in education.
Training: Most change efforts include training of some kind, but how do you make sure training has its intended impact? My friends at ScienceforWork offer a few helpful summaries of evidence, here and here. If you want to go deeper, check out this piece by Rebecca Grossman and Eduardo Salas on how to better transfer what’s learned in training into behavior on the job.
A whole lot more: Finally, for an extensive list of behavior change interventions that includes additional information on effectiveness factors for each, see Kok and colleagues' Taxonomy of Behavior Change Methods. Although this taxonomy is designed for those aiming to influence health-related behavior change, you may find methods in the list that are applicable to other contexts.
Don’t forget to measure the success of your efforts!
To support your team’s learning and identify insights that you can apply to future implementations, be sure to assess your efforts.
You’ll likely have a variety of things underway that might simultaneously influence end users. As such, it could be difficult to pinpoint the precise influence a single intervention or technique may have had on a barrier to change. However, I’d offer it’s still worthwhile to identify how you’ll track changes in the barriers you identified and integrate those into the measurement plan for your implementation. (Hopefully, you’ve developed a way to measure the results of your change effort — if not, see my article on the topic for a simple method.)
If nothing else, it’s always a good idea to debrief with your team to identify learning about your experience with different interventions — learn more about how to do that in my article on debriefs, here.
Resources
There are many theories and frameworks that can inform your efforts to influence individual behavior change as part of broader organizational change planning.
To gain a more nuanced understanding of human behavior, you can explore a variety of theories, such as the Theory of Planned Behavior and Social Cognitive Theory on this site from Boston University.
Related to behavior change interventions, in this article I largely drew from healthcare research. I did so because I found it to be a more robust body of evidence than that available in the general management space. However, be aware that there are risks in applying findings from one context to another.
While I drew insights from a variety of sources, a major influence on this article was the work of Susan Michie and colleagues. I selected this because I found it to be more comprehensive and coherent than some others, and thus easiest to translate for a general audience. You should know, this work is not without its critics. Also, I simplified it quite a bit. For a full representation of this work, please see cited articles below, or visit the Behavior Change Wheel site.
Finally, your feedback and ideas are always appreciated. Have other sources you feel are useful? Think I misinterpreted something? Please share your thoughts in the comments section; we'll all benefit from them.
Aarons, G. A., & Sawitzky, A. C. (2006). Organizational culture and climate and mental health provider attitudes toward evidence-based practice. Psychological services, 3(1), 61.
Alexander, K. E., Brijnath, B., & Mazza, D. (2014). Barriers and enablers to delivery of the Healthy Kids Check: an analysis informed by the Theoretical Domains Framework and COM-B model. Implementation Science, 9(1), 60.
Baer, M., & Frese, M. (2003). Innovation is not enough: Climates for initiative and psychological safety, process innovations, and firm performance. Journal of organizational behavior, 24(1), 45-68.
Bragdon, T. (2017, May 25). The Persuasion Wheel – Persuasion at Work. Retrieved July 18, 201, from https://persuasionatwork.com/the-persuasion-wheel-125839dafe35
Cane, J., O’Connor, D., & Michie, S. (2012). Validation of the theoretical domains framework for use in behaviour change and implementation research. Implementation Science, 7(1), 37.
Damanpour, F. (1991). Organizational innovation: A meta-analysis of effects of determinants and moderators. Academy of Management Journal, 34(3), 555-590.
Ferlie, E., Fitzgerald, L., Wood, M., & Hawkins, C. (2005). The nonspread of innovations: the mediating role of professionals. Academy of Management Journal, 48(1), 117-134.
Fedor, D. B., Caldwell, S., & Herold, D. M. (2006). The effects of organizational changes on employee commitment: A multilevel investigation. Personnel Psychology, 59(1), 1-29.
Fylan, F. (2017). Using Behaviour Change Techniques: Guidance for the road safety community. RAC Foundation.
Grimshaw, J., Eccles, M., Thomas, R., MacLennan, G., Ramsay, C., Fraser, C., & Vale, L. (2006). Toward evidence‐based quality improvement. Journal of General Internal Medicine, 21(S2).
Grol, R., & Grimshaw, J. (2003). From best evidence to best practice: effective implementation of change in patients' care. The Lancet, 362(9391), 1225-1230.
Grol, R., & Wensing, M. (2004). What drives change? Barriers to and incentives for achieving evidence-based practice. Medical Journal of Australia, 180(6 Suppl), S57.
Grossman, R., & Salas, E. (2011). The transfer of training: what really matters. International Journal of Training and Development, 15(2), 103-120.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112.
Herold, D. M., Fedor, D. B., & Caldwell, S. D. (2007). Beyond change management: A multilevel investigation of contextual and personal influences on employees' commitment to change. Journal of Applied Psychology, 92(4), 942.
Hughes, J. (2007, January). The ability-motivation-opportunity framework for behavior research in IS. In System Sciences, 2007. HICSS 2007. 40th Annual Hawaii International Conference on (pp. 250a-250a). IEEE.
Ivers, N. M., Grimshaw, J. M., Jamtvedt, G., Flottorp, S., O’Brien, M. A., French, S. D., ... & Odgaard-Jensen, J. (2014). Growing literature, stagnant science? Systematic review, meta-regression and cumulative analysis of audit and feedback interventions in health care. Journal of General Internal Medicine, 29(11), 1534-1541.
Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254.
Kok, G., Gottlieb, N. H., Peters, G. J. Y., Mullen, P. D., Parcel, G. S., Ruiter, R. A., ... & Bartholomew, L. K. (2016). A taxonomy of behaviour change methods: an Intervention Mapping approach. Health Psychology Review, 10(3), 297-312.
Michie, S., Johnston, M., Abraham, C., Lawton, R., Parker, D., & Walker, A. (2005). Making psychological theory useful for implementing evidence based practice: a consensus approach. BMJ Quality & Safety, 14(1), 26-33.
Michie, S., Van Stralen, M. M., & West, R. (2011). The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implementation Science, 6(1), 42.
Peters, G. J. Y., & Kok, G. (2016). All models are wrong, but some are useful: a comment on Ogden (2016). Health Psychology Review, 10(3), 265-268.
Reichers, A. E., Wanous, J. P., & Austin, J. T. (1997). Understanding and managing cynicism about organizational change. The Academy of Management Executive, 11(1), 48-59.
Rousseau, D. M., & Gunia, B. C. (2016). Evidence-based practice: the psychology of EBP implementation. Annual Review of Psychology, 67, 667-692.
Stajkovic, A. D., & Luthans, F. (1997). A meta-analysis of the effects of organizational behavior modification on task performance, 1975–95. Academy of Management Journal, 40(5), 1122-1149.
Stajkovic, A. D., & Luthans, F. (2003). Behavioral management and task performance in organizations: conceptual background, meta‐analysis, and test of alternative models. Personnel Psychology, 56(1), 155-194.
Thøgersen, J. (1995). Understanding of consumer behaviour as a prerequisite for environmental protection. Journal of Consumer Policy, 18(4), 345-385.
Wilson, E. J., & Sherrell, D. L. (1993). Source effects in communication and persuasion research: A meta-analysis of effect size. Journal of the Academy of Marketing Science, 21(2), 101.