AI Adoption in Organizations: Unique Considerations for Change Leaders

Key Points:

  • Successful AI adoption efforts address familiar technology adoption factors while also accounting for unique trust-related challenges.

  • The effectiveness of AI adoption strategies can vary based on industry norms, cultural context, and the specific type of AI being implemented, highlighting the need for tailored approaches.

  • Social influence in AI adoption is nuanced: while its impact may diminish over time as people have more experience with AI, research suggests peer behavior often influences adoption decisions more than individuals realize or admit. 


AI promises a world of possibility, but for many organizations, the path from potential to actual remains unclear. While many principles of technology adoption apply to AI, there are unique factors that change leaders must consider when implementing AI solutions in their organizations. This article explores these distinctive aspects and provides insights for successfully navigating AI adoption challenges.

Understanding AI: More Than Just Another Technology

While many people say they have heard of AI, surveys suggest that more than half are unsure they understand what AI is or know when and how it is used.

Understanding AI influences how much people trust it and feel they will benefit from using it, both of which impact adoption decisions. That makes defining AI a crucial, if often overlooked, first step in managing the AI adoption process.  

AI generally refers to technology that can autonomously perform tasks that have traditionally required human intelligence. Some example definitions include:

  • "AI is the imitation of human behavior (ability to think, solve problems, learn, correct oneself, etc.) by computer."

  • "Artificial intelligence devices can simulate human behaviors (e.g., talk, walk, express emotions) and/or intelligence (e.g., learning, analysis, independent consciousness)."

  • "Artificial intelligence refers to computers and robots doing things that traditionally require using human intelligence." 

Beyond defining AI, it can also be useful to clarify the type of AI you are implementing and the specific capabilities people will experience when using it, in ways that are easy for them to grasp.

Usefulness and Ease of Use Are Top Influencers of AI Acceptance

The drivers of AI adoption align closely with those found to influence general technology adoption. Studies on AI adoption specifically find that two factors most influence people's intentions to use AI:

  1. Perceived usefulness: How will this AI system improve job performance? What specific tasks will be made easier or more efficient?

  2. Ease of use: How user-friendly is the AI system? What training or support will users need and receive?

These factors, which decades of research indicate are critical for successful technology adoption in general, remain relevant for AI adoption. However, AI introduces additional considerations that change leaders must also keep in mind. 

The Trust Factor: A Unique Challenge in AI Adoption

While traditional technology adoption models focus on factors like perceived usefulness and ease of use, AI introduces a critical new element: trust. Studies to date often examine trust in AI applications in a multifaceted way, encompassing both human-like attributes and functional aspects:

  1. Human-like trust factors: These relate to how we perceive the AI's intentions and character, similar to how we might trust another person.

    • Competence: The belief that the AI can perform its tasks effectively.

    • Integrity: The perception that the AI adheres to a set of acceptable principles or rules.

    • Benevolence: The belief that the AI has good intentions and acts in the user's best interest.

  2. Functional trust factors: These relate to how well the AI performs its job and how it operates:

    • Reliability: The consistency and dependability of the AI's performance over time.

    • Explainability: The degree to which the AI's decision-making process can be understood by users.

    • Transparency: The openness about the AI's capabilities, limitations, and the data it uses.

To date, studies indicate that functional trust is particularly crucial in AI adoption because it directly relates to how well the technology does its job. Users need to trust that the AI will consistently perform tasks accurately and efficiently. For example, in AI used for managerial decisions, functional trust would involve confidence in the system's ability to consistently provide accurate analyses based on the data provided. That said, studies suggest that in some industries, such as customer service, healthcare, and education, human-like aspects of trust may have greater influence.

In general, to build trust in AI, change leaders can consider demonstrating the AI's reliability through consistent performance, explaining its decision-making processes to enhance explainability, being transparent about its capabilities and limitations, and communicating how the organization is governing its AI use.

Context Matters: Tailoring Trust-building Strategies

Studies indicate that the importance of trust, and different trust factors, can vary based on the context (industry, national culture) and type of AI being implemented. For example:

  • Types of trust: For financial decision-making or customer service applications, human-like trust factors such as perceived integrity and benevolence may be more influential. For workplace applications, where people are highly dependent on a system to do their work, they may have no choice but to trust the AI; here, functional aspects like reliability and accuracy of results might take precedence.

  • National culture: People surveyed in emerging economies, such as India, China, and Brazil, generally seem to be more trusting of AI than those surveyed in Finland, the Netherlands, and Japan.

  • Domains: People have reported less willingness to trust AI used in human resources than AI used for healthcare diagnosis and treatment.

As a change leader, it’s worth investigating what’s known so far about implementing AI in your industry, function, and culture for nuanced insights that can sharpen your implementation strategies. Using Google Scholar to search for academic research is a good place to start. In addition, some reports suggest that the majority of AI research is now being conducted by industry. As such, reports from consulting and AI firms may provide the most up-to-date findings. (Keeping in mind, of course, that such studies may be biased due to the commercial interests of the authors.)

Building Institutional Trust to Support AI Adoption

A 2023 study by the University of Queensland highlighted the significance of institutional trust in AI adoption. The study found that "people often defer to, and expect, authoritative sources and institutional processes to provide assurance of the safety and reliability of new technologies and practices."

This includes:

  • Government regulations

  • Corporate ethics guidelines

  • Organizational governance practices 

However, the study also found that most people perceive these institutional safeguards as lacking. This presents an opportunity for organizations to differentiate themselves by developing and communicating robust AI governance frameworks.

The Complex Role of Social Influence

Social norms and peer influence appear to play a nuanced role in AI adoption, particularly in high-contact environments like healthcare, education, and customer service. Studies suggest that social influence seems to be more potent during the pre-adoption phase when opinions are being formed. Its impact on actual usage may diminish over time, as people become more familiar with the technology and form stronger personal views about it.

However, a counterpoint comes from research across various domains that suggests that people's actions are often more influenced by their peers' behavior than they realize or admit.

Studies in contexts as varied as doctors' prescribing patterns for antibiotics and people's adoption of water conservation measures have shown that while people say they are motivated by their own values or rational arguments, their actions indicate they are mainly swayed by information about the behavior of their peers. What's more, research indicates that changes that require shifts in behavior often spread most widely through networks with many connection points rather than through a single, well-connected individual. This suggests it is not the work of one "influencer" but the actions of a variety of peers that has real impact in swaying behavior.

For AI adoption strategies, this implies that creating visible examples of successful AI use among respected peers, and embedding AI into the work of groups, particularly those that operate cross-functionally, could be more effective than relying solely on rational arguments, motivating messages, or individual persuasion.

Bridging the Intention-Action Gap

As with other technologies, there can be a significant gap between stated intentions to use AI and actual adoption. It's essential to keep in mind that research on AI adoption usually focuses on intentions to use AI and does not gauge how those intentions turn into actual use decisions. While intentions do make a difference for adoption decisions, they don't always translate directly to actual use.

Literature on general technology adoption indicates that “facilitating conditions” can influence continued use, in addition to people’s intentions to use technology. Facilitating conditions refer to the organizational (leadership commitment, training, resources) and technological infrastructure in place to support the effective use of the technology. Some studies on AI suggest that facilitating conditions may be important in influencing continued use of AI, although findings to date have been mixed.

Given this, to bridge the gap between intention and action, consider:

  1. Provide ample hands-on experience with the AI solution, along with ongoing training and support.

  2. Identify and remove obstacles that are inhibiting adoption, such as lack of time, insufficient resources, or mixed messages from leadership.

  3. Showcase investments in the organizational and technological infrastructure necessary for effective outcomes from AI adoption.

  4. Clarify how successful adoption will be measured and communicate progress and learning. Amplify and celebrate early adopters and success stories.

  5. Address concerns and skepticism openly and transparently.

  6. Model the use of AI broadly, including amongst leaders.

Tips for Change Leaders to Overcome AI Adoption Challenges

By recognizing the aspects of AI implementation that are similar to, and different from, implementation of other types of technology, change leaders can develop more effective change management strategies. Remember, successful AI adoption goes beyond technical implementation: it requires a holistic approach that addresses the human factors of trust, understanding, and social dynamics.

Some actions to consider include:

  1. Define AI clearly: Provide specific definitions and examples of the types of AI being implemented in your organization. Consider creating brief videos to illustrate the AI's functionality and how users will experience it.

  2. Build Multi-Faceted Trust: Develop strategies that address both human-like and functional aspects of trust in AI. This might include demonstrating the AI's reliability, explaining its decision-making process, and showcasing its positive impact on work processes.

  3. Strengthen Institutional Trust: Develop and communicate clear ethics guidelines and governance practices for AI use in your organization. Consider partnering with respected external bodies to validate your approach.

  4. Tailor Your Approach: Research AI adoption specific to your industry, country context, and the type of AI you're implementing. Use these insights to customize your change management strategies to fit your change context.

  5. Leverage Social Influence: Identify and support early adopters, ideally groups of them, who can serve as visible examples of successful integration of AI into work practices. Create opportunities for peer-to-peer learning and sharing of AI experiences.

  6. Monitor and Bridge the Intention-Action Gap: Regularly assess both intended and actual AI usage. Follow-up to identify barriers to adoption and develop targeted change interventions to address them.

  7. Communicate Transparently: Be open about the capabilities and limitations of the AI solution. Address concerns proactively and create safe spaces for employees to express their doubts or reservations.

  8. Provide Ongoing Support: Recognize that AI adoption is a journey, not a destination. Model the use of AI, at all levels where possible, and offer continuous learning opportunities, technical support, and channels for feedback and improvement suggestions.


References 

Abrahamse, W., & Steg, L. (2013). Social influence approaches to encourage resource conservation: A meta-analysis. Global Environmental Change, 23(6), 1773-1785.

Bach, T. A., Khan, A., Hallock, H., Beltrão, G., & Sousa, S. (2024). A systematic literature review of user trust in AI-enabled systems: An HCI perspective. International Journal of Human–Computer Interaction, 40(5), 1251-1266.

Bankins, S., Ocampo, A. C., Marrone, M., Restubog, S. L. D., & Woo, S. E. (2024). A multilevel review of artificial intelligence in organizations: Implications for organizational behavior research and practice. Journal of Organizational Behavior, 45(2), 159-182.

Bedué, P., & Fritzsche, A. (2022). Can we trust AI? An empirical investigation of trust requirements and guide to successful AI adoption. Journal of Enterprise Information Management, 35(2), 530-549.

Cao, G., Duan, Y., Edwards, J. S., & Dwivedi, Y. K. (2021). Understanding managers' attitudes and behavioral intentions towards using artificial intelligence for organizational decision-making. Technovation, 106, 102312.

Centola, D. (2018, November 7). The Truth About Behavioral Change. MIT Sloan Management Review.

Chi, O. H., Gursoy, D., & Chi, C. G. (2022). Tourists' attitudes toward the use of artificially intelligent (AI) devices in tourism service delivery: Moderating role of service value seeking. Journal of Travel Research, 61(1), 170-185.

Choung, H., David, P., & Ross, A. (2022). Trust in AI and its role in the acceptance of AI technologies. International Journal of Human–Computer Interaction, 1–13.  

Gansser, O. A., & Reich, C. S. (2021). A new acceptance model for artificial intelligence with extensions to UTAUT2: An empirical study in three segments of application. Technology in Society, 65, 101535.

Gillespie, N., Lockey, S., Curtis, C., Pool, J., & Akbari, A. (2023). Trust in Artificial Intelligence: A Global Study. The University of Queensland and KPMG Australia.

Kelly, S., Kaye, S. A., & Oviedo-Trespalacios, O. (2023). What factors contribute to the acceptance of artificial intelligence? A systematic review. Telematics and Informatics, 77, 101925.

Kelley, S. (2022). Employee perceptions of the effective adoption of AI principles. Journal of Business Ethics, 178(4), 871-893.

World Economic Forum. (2022, January 5). 5 charts that show what people around the world think about AI.

Manning, C. (2020, September). Artificial Intelligence Definitions. Stanford University Human-centered Artificial Intelligence. 

Noonan, D. (2024, February 20). The 25% revolution--how big does a minority have to be to reshape society? Scientific American.

Russo, D. (2024). Navigating the complexity of generative AI adoption in software engineering. ACM Transactions on Software Engineering and Methodology.

Yokoi, T., & Wade, M. R. (2024, May 30). How organizations navigate AI ethics. I by IMD.


AI Statement: Consensus was used, in addition to Google Scholar and Google Search, to identify relevant research on this topic. Claude was used to develop initial drafts from an outline and notes provided by the author. Claude was used as an assistant to interpret statistical findings from sources.

Photo by Pavel Danilyuk