How to Identify AI Champions When Nobody Wants to Lead
Your organization needs AI champions to drive adoption, but what happens when nobody steps forward? This guide shows you how to find hidden potential leaders, motivate reluctant staff, and build grassroots momentum even when technology leadership feels like a burden nobody wants to carry.

You've read the reports. You know AI can transform nonprofit operations. Leadership has given cautious approval to explore AI tools. There's just one problem: when you ask who wants to be the AI champion for your organization, you're met with silence, downcast eyes, and suddenly urgent excuses about pressing work.
This scenario plays out in nonprofits across the country. According to recent research, one-third of nonprofits list employee resistance as a barrier to AI adoption. Many staff members are deeply suspicious of AI, worrying about accuracy, privacy, ethics, or job displacement. Others simply don't understand what AI does, making their default position cautious avoidance rather than enthusiastic leadership.
Yet the data is clear: AI champions make the difference between successful adoption and stalled initiatives. Even one or two champions can transform an organization's relationship with technology. But these champions aren't always obvious, they don't necessarily volunteer, and they might not look like traditional technology leaders.
This guide will show you how to identify potential AI champions even when nobody raises their hand, understand what motivates reluctant staff to step into leadership roles, and create conditions where grassroots AI adoption can flourish from the bottom up rather than being mandated from the top down. Whether you're an executive director frustrated by inertia or a middle manager trying to drive change, you'll learn practical strategies for finding and developing the AI leaders your organization desperately needs.
Understanding Why Nobody Wants to Lead
Before you can identify AI champions, you need to understand why the role feels unappealing. The reluctance isn't about laziness or resistance to innovation. It's about very real fears and constraints that your staff are experiencing.
The Burden of "Extra" Responsibility
In most nonprofits, becoming an AI champion isn't a formal role with reduced workload elsewhere. It's an add-on to an already overwhelming job. Your staff are already stretched thin, working evenings and weekends to serve your mission. When they hear "AI champion," they hear "more unpaid overtime learning technology that might fail anyway."
This perception is particularly acute in smaller organizations where staff wear multiple hats by necessity. The communications director who also manages social media, writes grant applications, and coordinates volunteers doesn't have spare capacity to become the organization's AI expert too, even if they're curious about the technology.
Fear of Looking Foolish
Many nonprofit professionals feel they lack the technical background to lead technology initiatives. They worry about making mistakes in front of colleagues, giving bad advice, or championing tools that ultimately don't work. According to research on AI adoption challenges, employees may view AI as complex and hard to understand, making them reluctant to take it on, especially if they lack technical backgrounds.
This fear is compounded by the hype cycle around AI. Staff read breathless articles about AI transformation and feel inadequate. They see the term "artificial intelligence" and think it requires a computer science degree, not realizing that being an effective AI champion is more about understanding workflows than understanding algorithms.
Skepticism About AI's Value
Some staff genuinely question whether AI will help or hurt their work. They've seen technology implementations fail before. They remember the expensive database that nobody uses, the donor management system that created more work than it solved, the communication platform that was abandoned after six months.
For staff who work directly with vulnerable populations, there are deeper concerns. Will AI depersonalize their relationships with clients? Will it introduce bias into decisions about who receives services? Will it compromise the privacy of people who have every reason to distrust institutions? These aren't idle concerns; they're ethical questions that deserve serious consideration.
Lack of Clarity About What "Champion" Means
When you ask for an AI champion without clearly defining the role, people imagine the worst. They envision being on call to troubleshoot technical problems, having to train everyone in the organization, or being blamed when AI tools don't deliver promised results. The ambiguity of the role becomes a reason to avoid it.
Understanding these barriers is the first step. Your job isn't to make these concerns disappear. It's to find people who, despite these concerns, have the curiosity, skills, and positioning to lead AI adoption, and then to address their concerns directly so they can step into the role successfully.
What Makes an Effective AI Champion (It's Not What You Think)
Here's the counterintuitive truth: the most effective AI champions often aren't the most technically skilled people in your organization. According to research on successful AI champions, technical expertise is less important than other characteristics.
Think of AI champions not as technology leaders but as bridge builders, translators, and scouts. They don't need to understand machine learning. They need to understand your organization's workflows, challenges, and culture well enough to identify where AI can help and to communicate that value to skeptical colleagues.
Cross-Functional Communication
The bridge between vision and reality
Effective champions spend their time translating AI concepts for non-tech teammates and, conversely, translating workflow details for tech staff. They can explain to your executive director why a particular AI tool matters without using jargon, and they can explain to a vendor why your data structure makes their proposed solution unworkable.
Look for staff who already play translation roles in your organization. The program manager who can explain complex service delivery to your board in plain language. The fundraiser who can articulate donor trends to your finance team. These communication skills transfer directly to AI championing.
Process Thinking and Pattern Recognition
Seeing opportunities others miss
AI champions identify repeatable patterns in work and spot opportunities for automation or augmentation. They notice when staff do the same kind of task repeatedly, when information gets requested in similar formats over and over, or when decisions follow predictable logic.
Your best champions might be the staff members who have already created informal systems to make work easier. The person who maintains that shared spreadsheet everyone relies on. The employee who wrote a guide for onboarding new volunteers. These are people who naturally systematize and improve processes.
Success Storytelling
Making wins visible and inspiring
Champions showcase success by identifying and sharing "wow" moments where AI delivers tangible benefits. They're not hypemasters; they're evidence gatherers. They can articulate how a tool saved three hours of work, helped identify five new donor prospects, or made a grant report more comprehensive.
Look for staff who already celebrate team successes informally. The person who recognizes colleagues' achievements in meetings. The employee who shares helpful resources in team channels. They have the instinct to make good work visible, a critical skill for normalizing AI adoption.
Appropriate Skepticism
Critical thinking, not blind enthusiasm
Effective champions aren't AI evangelists who think technology solves everything. They ask hard questions: Will this actually save time or just shift work around? Does this tool work with our existing systems? What are the privacy implications? Their skepticism makes their endorsements more credible.
Paradoxically, some of your best potential champions might be people who initially express concerns about AI. If they're willing to explore despite their skepticism, they'll ask the right questions and identify problems before they become failures. This critical perspective is invaluable.
Notice what's not on this list: formal authority, technical credentials, or even enthusiasm about AI. The most effective champions are often mid-level staff or even frontline employees who see AI's potential and are willing to lead the charge in their team. They don't need to be in charge of AI. They need to be credible resources that others trust.
Where to Look for Hidden Champions
If nobody volunteers to be an AI champion, look for people who are already acting like champions informally. These "secret cyborgs" are experimenting with AI tools on their own, solving their own problems, and occasionally helping colleagues do the same.
The Secret Experimenters
Research shows that 43% of US workers already use AI without waiting for permission. In your organization right now, some staff are using ChatGPT to draft emails, Claude to summarize documents, or other AI tools to make their work easier. They haven't announced it because they're not sure if it's allowed or because they don't want to be asked to teach everyone else.
Find these people through casual conversation. In meetings, ask: "Has anyone tried using AI tools for this kind of task?" You might be surprised who speaks up. Alternatively, notice who seems to produce work unusually quickly or whose documents have suddenly improved in quality. There might be an AI tool helping them.
These secret experimenters are ideal champion candidates because they've already overcome the initial learning curve, they've demonstrated self-directed curiosity, and they have firsthand experience with what works and what doesn't.
The Workflow Problem Solvers
Look for staff who habitually improve processes without being asked. The person who created a template to standardize program reports. The employee who reorganized the shared drive so files are actually findable. The colleague who figured out how to extract better data from your database.
These individuals have a natural orientation toward efficiency and systems thinking. They're frustrated by repetitive work and motivated by making things work better. AI tools are exactly what they need to scale their problem-solving instincts. Give them permission to explore AI in their domain, and they'll likely find valuable applications quickly.
The Peer Educators
Some staff naturally help colleagues learn new skills. They're the person everyone asks when they can't figure out how to do something in the database. They patiently explain processes to new hires. They create unofficial documentation when official documentation is lacking.
These peer educators might not be the first to adopt AI themselves, but once they learn, they're exceptionally effective at helping others adopt. Their teaching instinct and patience make them perfect for the mentoring aspect of being an AI champion.
The Respected Skeptics
Don't overlook staff who have expressed thoughtful concerns about AI. If someone raises questions about data privacy, algorithmic bias, or whether AI will actually help your clients, that's not opposition. That's critical thinking.
If you can address their concerns and show them AI applications that align with their values, they can become your most powerful champions. When a known skeptic endorses an AI tool, it carries far more weight than when a technology enthusiast does. Their colleagues think: "If they believe it's valuable and ethical, maybe I should pay attention."
The Generational Perspective
Younger staff who grew up with digital technology often have less intimidation around new tools, while experienced staff have deep organizational knowledge and credibility. Your ideal champion team might pair these strengths: a newer employee who quickly learns AI capabilities working alongside a veteran who understands where those capabilities can add value.
This intergenerational approach also models collaborative learning. It sends the message that AI adoption isn't about being young and tech-savvy or being experienced and traditional. It's about combining different kinds of knowledge to serve your mission better.
The Conversation That Recruits Champions
Once you've identified potential champions, you need to approach them thoughtfully. A clumsy recruitment conversation can scare off even willing participants. Here's how to have a conversation that makes the role feel manageable and meaningful rather than burdensome.
Start With What They Already Do
Don't lead with "We need you to be an AI champion." That's abstract and intimidating. Instead, connect to specific work they're already doing: "I noticed you created that volunteer scheduling template that everyone uses now. Have you thought about how AI might help with some of the repetitive parts of that process?"
This grounds the conversation in their actual work rather than abstract technology leadership. You're not asking them to become a different kind of professional. You're asking them to explore tools that might make their existing work easier.
Define the Role Narrowly
Be specific about what you're asking for and what you're not. A good framing might be: "I'm looking for someone to experiment with AI tools for grant writing and report what works. You wouldn't need to teach everyone or support the whole organization. Just explore in your own work and share what you learn with the development team."
Narrow scopes are easier to say yes to. Once someone succeeds in a limited domain, they often naturally expand their championing. But starting with "be the AI leader for the entire organization" overwhelms people before they begin.
Acknowledge the Learning Curve
Be honest that there will be trial and error. According to research on building team confidence with AI, nearly half of employees cite training as the most critical factor for successful adoption. Say something like: "This will involve some experimentation and learning. Some things won't work. That's expected. I'm asking you to explore and report back on what you find, successes and failures."
This permission to fail reduces the pressure. Your potential champions need to know that you're not expecting them to have all the answers immediately. You're asking them to be curious explorers, not instant experts.
Provide Real Support
If you want someone to champion AI, give them resources. This might mean:
- Budget for AI tool subscriptions to test different platforms
- Dedicated time during work hours for learning and experimentation
- Access to online courses or training resources
- Connection to other nonprofit AI practitioners for peer learning
- Recognition in their role description or performance evaluation
If you can't provide these supports, be honest about that too. Some people will still say yes out of personal interest. But asking someone to champion AI while giving them no resources or recognition is asking them to do unfunded volunteer work on top of their actual job.
Connect It to Their Values
The most powerful motivation isn't career advancement or learning new skills. It's mission impact. Help potential champions see how AI could advance work they already care about.
For a program manager frustrated by reporting burden: "What if you could generate first drafts of those quarterly reports in minutes instead of hours? You'd have more time for the program design work you actually love." For a fundraiser worried about donor relationships: "What if AI could help you personalize outreach at scale so every donor feels individually valued?"
When you connect AI tools to outcomes that matter to someone personally, you transform the ask from "do more work" to "work more effectively on things you care about." That's a fundamentally different proposition.
Building a Champion Network, Not a Single Hero
Here's a critical insight: you don't need one AI champion. You need several, each focused on different domains. Research on bottom-up AI adoption shows that successful organizations empower people closest to day-to-day work to experiment, evaluate, and evolve their processes organically.
Think of this as your informal AI adoption working group. You don't need to make it official with meetings and minutes. Just gather a few people who are exploring AI in their respective domains and give them permission to share what they learn.
Domain-Specific Champions
Consider recruiting champions for specific organizational functions:
- Communications champion: Explores AI for content creation, social media, email marketing, and storytelling
- Program champion: Tests AI for client services, outcome tracking, report generation, and service coordination
- Development champion: Experiments with AI for donor research, appeal writing, stewardship, and prospect identification
- Operations champion: Explores AI for documentation, process automation, data management, and knowledge capture
This distributed approach has multiple advantages. No single person feels responsible for all of AI adoption. Each champion develops deep expertise in their domain rather than surface knowledge across everything. And when champions share their domain-specific discoveries, they demonstrate concrete value rather than abstract possibilities.
Creating Informal Learning Structures
Once you have a few champions, create lightweight ways for them to share learning. This doesn't need to be formal. Consider:
- A shared document where champions note useful AI tools and applications they've found
- A Slack channel or Teams space for sharing tips and asking questions
- Five-minute "show and tell" segments in existing staff meetings where someone shares an AI win
- Quarterly lunch-and-learn sessions where champions demo tools they've found useful
The goal is to make AI exploration visible and normalized without creating bureaucracy. According to research, people adopt what their respected peers are using, not what management dictates. When staff see colleagues they respect using AI tools successfully, they become curious and willing to try themselves.
Celebrating Progress, Not Just Perfection
Make sure to celebrate not just successful AI implementations but also valuable failures. When a champion tries a tool that doesn't work, that's useful knowledge. It saves everyone else from wasting time on that tool. Create an environment where champions feel comfortable saying "I tried this and here's why it didn't work for us" without fear of criticism.
This psychological safety is essential for grassroots adoption. If champions feel they can only share successes, they'll stop experimenting with anything risky. The organization learns more slowly. But if "I tried this and learned something" is valued as much as "I found the perfect solution," champions will explore more courageously.
When Leadership Must Step Up First
Sometimes the reason nobody wants to champion AI is that leadership hasn't signaled that it's safe, valued, and aligned with organizational priorities. According to research on AI upskilling, the CEO and other C-level executives must function as AI's top advocates.
If you're in a leadership role and wondering why nobody is stepping forward, ask yourself:
- Have I personally experimented with AI tools so I can speak knowledgeably about their potential and limitations?
- Have I publicly shared examples of how I use AI in my own work?
- Have I allocated budget and time for AI exploration, or am I asking staff to do this on top of everything else?
- Have I addressed staff concerns about job security, privacy, and ethics directly and honestly?
- Have I created clear guidance on what kinds of AI use are encouraged versus prohibited?
If the answer to several of these questions is no, you can't expect staff to volunteer as champions. You need to model the behavior you want to see. This doesn't mean you need to become the organization's AI expert. It means you need to demonstrate that AI exploration is a legitimate use of work time, aligned with organizational values, and something you yourself are engaged with.
The AI Policy That Enables Champions
One concrete action leadership can take is developing clear AI usage guidelines. This doesn't need to be a comprehensive policy document initially. Even a simple set of principles helps:
- Staff are encouraged to experiment with AI tools for their work
- Never input confidential client information, donor financial data, or sensitive personal information into public AI tools
- AI-generated content should be reviewed and edited by humans before external use
- Share useful AI applications with colleagues to multiply the benefit
- Ask questions if you're unsure whether a particular AI use is appropriate
This kind of guidance removes ambiguity. Staff know they're allowed to experiment. They understand basic guardrails. And they have permission to ask questions rather than avoiding AI entirely out of caution. For more comprehensive guidance on developing AI policies, see our article on creating AI policies for small nonprofits.
The Long Game: Developing AI Champions Over Time
Not everyone will be ready to champion AI immediately. That's okay. Part of building AI capacity is developing people's skills and confidence over time. The staff member who is reluctant today might become an enthusiastic champion six months from now once they see peers using AI successfully.
Creating On-Ramps for the Cautious
For staff who are curious but nervous, create low-stakes ways to learn. This might include:
- Pairing them with an early champion for informal mentoring
- Starting with AI tools for low-risk tasks like brainstorming or first-draft writing
- Providing access to free AI training resources they can explore at their own pace
- Inviting them to observe how champions use AI without pressure to adopt immediately
Some of these cautious learners will develop into champions themselves once they build confidence. Others will become capable users even if they never champion publicly. Both outcomes are valuable.
Recognizing Different Champion Styles
Not all champions look the same. Some are vocal advocates who demo tools in team meetings. Others lead quietly by example, and colleagues notice and emulate them. Some champions love teaching and mentoring. Others champion primarily by making their own work better, which inspires colleagues to ask "how did you do that?"
Recognize and value these different styles. The quiet champion who simply does excellent work using AI tools might influence more people than the enthusiastic champion who tries to teach everyone. Both are valuable. Don't push people to champion in ways that don't fit their personality or working style.
When to Bring in External Support
Sometimes internal champions need external support to succeed. This might mean:
- Hiring a consultant for initial AI strategy and champion training
- Connecting with peer nonprofits who are further along in AI adoption
- Joining nonprofit technology communities where champions can learn from each other
- Participating in cohort-based learning programs focused on nonprofit AI
Internal champions shouldn't feel like they're figuring everything out alone. Connecting them to external resources and peer networks makes the role more sustainable and helps them bring better ideas back to your organization.
Conclusion
Finding AI champions when nobody volunteers isn't about convincing people to do something they don't want to do. It's about recognizing the champion qualities people already have, connecting AI exploration to work they already care about, and removing barriers that make championing feel burdensome rather than empowering.
The most effective champions aren't necessarily the most technically skilled or the most enthusiastic about AI. They're the bridge builders, the process improvers, the peer educators, and sometimes even the thoughtful skeptics. They're already in your organization. Your job is to notice them, invite them thoughtfully, support them genuinely, and create conditions where they can learn, experiment, and share without overwhelming pressure or responsibility.
Remember that AI adoption is a long game, not a sprint. You don't need perfect champions immediately. You need people willing to explore, fail safely, and share what they learn. Build that environment, and you'll find that champions emerge naturally. They might not call themselves champions. They might not even recognize that they're playing that role. But they'll be doing the essential work of translating AI potential into practical reality for your organization.
The alternatives, waiting for the perfect champion to volunteer or mandating AI adoption from the top down, almost never work. But finding and nurturing the hidden potential already present in your team? That's how sustainable AI adoption actually happens in nonprofits. Start looking, start inviting, and start supporting. Your champions are waiting to be found.
Ready to Develop AI Champions in Your Organization?
One Hundred Nights helps nonprofits identify potential AI champions, develop their capabilities, and create the conditions for sustainable AI adoption. Whether you're just starting to explore AI or working to scale existing initiatives, we can help you build the internal leadership capacity your organization needs.
