
    The 81% Problem: Why Most Nonprofits Use AI Individually and How to Build Shared Workflows

    Nearly all nonprofits have embraced AI tools, but the vast majority are using them in isolation. Individual staff members experiment on their own, develop personal shortcuts, and quietly improve their own productivity, but that knowledge never becomes part of how the organization operates. This article explores why this pattern persists and what you can do to change it.

Published: April 1, 2026 · 12 min read · Leadership & Strategy

    Here is a number worth sitting with: according to the 2026 Nonprofit AI Adoption Report from Virtuous and Fundraising.AI, 92% of nonprofits are now using AI in some capacity. Yet only 7% report that AI has produced major improvements in organizational capability. The gap between those two numbers is not a technology problem. It is a workflow problem, specifically the problem that most nonprofit AI use remains individual, ad hoc, and invisible to the rest of the organization.

    The same report found that 81% of nonprofits describe their AI use as individual and reactive, relying on one-off prompts and personal experimentation rather than documented, repeatable processes. Only 4% report having structured workflows that anyone on the team can use and improve. That is the 81% problem: the massive space between widespread adoption and genuine organizational capability.

This pattern is understandable. AI tools are designed to be immediately accessible. A staff member can open ChatGPT, ask a question, get an answer, and move on. There is no friction, no documentation required, no need to tell anyone. The individual experience is smooth and satisfying. But that same frictionlessness is precisely what makes AI so difficult to turn into shared organizational value. What feels like progress (faster drafts, quicker research, easier formatting) never compounds, because it lives entirely inside one person's head.

    The organizations that are closing this gap are not necessarily larger or better resourced. They have simply made a deliberate decision to treat AI as infrastructure rather than a personal productivity tool. This article explains why individual AI use is so persistent, what it actually costs your organization, and how to begin building the shared workflows that convert scattered experimentation into durable organizational capability.

    Why Individual AI Use Is So Persistent

    Before you can solve the 81% problem, you need to understand why it exists. The barriers are not primarily technical. Organizations that have failed to move from individual to shared AI use rarely lack the right software. They face a combination of structural, cultural, and governance challenges that make individual use feel like the path of least resistance, and sharing feel risky or burdensome.

    No Policy Framework

    Without a formal AI policy, staff have no shared understanding of what AI use is appropriate, what data can be included in prompts, or what quality standards apply to AI-generated outputs. This ambiguity makes sharing feel risky. People stick to personal use precisely because they are uncertain whether their approach would be sanctioned if visible to leadership.

    The Knowledge Hoarding Dynamic

    In many organizations, AI expertise has quietly become a form of personal professional capital. Staff who have developed effective prompts and workflows may not consciously decide to withhold them, but they have little incentive to share knowledge that gives them a visible productivity edge. When individual performance is rewarded more than team outcomes, knowledge stays individual.

    Status Anxiety

    Many professionals conceal their AI use out of fear that it signals incompetence or that colleagues will perceive their work as less valuable if AI was involved. This creates a paradox where the most productive AI users are also the least likely to advocate for shared adoption, because doing so would require admitting how extensively they already rely on these tools.

    Organic Adoption Without Structure

    Most nonprofit AI adoption has been driven from the bottom up, which is healthy for initial exploration but insufficient for systematization. When leadership is not actively sponsoring shared workflows, providing resources, or measuring AI impact at the organizational level, there is no structure to aggregate individual experiments into collective practice.

    There is also a practical barrier worth naming directly: building shared workflows takes time that individual AI use does not. When a staff member discovers that a particular prompt works well for grant writing, they can simply save it in a personal document and move on. Turning that insight into a shared workflow requires writing documentation, getting agreement on process, training colleagues, and maintaining the system over time. None of these steps are difficult, but together they represent an investment that feels optional when individual use is already delivering personal value.

    Middle managers may also resist, often for reasons that are difficult to articulate. When AI tools make their teams more efficient, the logical implication is that the same amount of work can be done with fewer people or that existing workloads should expand. Managers who have built their authority around managing the complexity of current processes may perceive shared AI adoption as a threat to their teams and to their own expertise.

    The Real Cost of Staying in Individual Mode

    It is tempting to frame individual AI use as progress, and in a narrow sense it is. Staff who use AI on their own are more productive than those who do not. But the opportunity cost of staying in individual mode is significant and often invisible, because you are measuring improvement against yesterday rather than against what is possible.

    The Productivity Gap Between Individual and Shared AI Use

Research from Atlassian's AI Collaboration Report illustrates the difference at stake.

People who use AI individually as a personal productivity tool save an average of 53 minutes per day. That is meaningful. But people who use AI collaboratively as part of shared team workflows save an average of 105 minutes per day, a full extra workday per week per person. The same Atlassian research found that 85% of people using AI in collaborative, team-supported ways report improved work quality, compared to 54% of individual users. The gap between individual and shared AI use is not marginal: time saved roughly doubles, and the share reporting quality improvements jumps by more than 30 percentage points.

    • Shared AI workflows save twice as much time as individual use (105 vs. 53 minutes daily)
    • 85% of shared AI users report quality improvement vs. 54% of individual users
    • Organizations with leadership support for shared AI adoption are 2.5x more likely to become strategic AI collaborators
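
    To make the arithmetic behind these figures concrete, here is a minimal sketch (Python, using only the numbers cited above) that converts daily savings into weekly hours, assuming a five-day workweek:

```python
# Convert the Atlassian figures cited above (minutes saved per day)
# into hours saved per week, assuming a five-day workweek.
individual_min_per_day = 53
shared_min_per_day = 105
workdays_per_week = 5

for label, minutes in [("individual", individual_min_per_day),
                       ("shared", shared_min_per_day)]:
    hours_per_week = minutes * workdays_per_week / 60
    print(f"{label}: {hours_per_week:.1f} hours saved per week")
# individual: 4.4 hours saved per week (a bit over half a workday)
# shared: 8.8 hours saved per week (roughly one full workday)
```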

    Beyond productivity, there is a knowledge retention problem. When AI expertise lives in individual staff members rather than organizational systems, it walks out the door when those staff members leave. Nonprofit turnover rates are notoriously high, and every departure takes with it that person's prompts, their understanding of which tools work for which tasks, and the institutional knowledge embedded in how they have configured their workflows. Organizations that have not systematized their AI use must essentially start over with each new hire.

    There is also a consistency problem. When different staff members are independently developing their own approaches to AI-assisted grant writing, donor communications, or program reporting, the outputs vary in quality, tone, and format. Funders and major donors notice inconsistency even when they cannot name it. Brand voice becomes fragmented. Quality becomes dependent on which staff member handled a given task rather than on organizational standards. Shared workflows solve this by ensuring that anyone on the team, regardless of their individual AI experience, can produce outputs that meet consistent organizational standards.

    The deepest cost, however, is strategic. The 92% adoption / 7% major impact gap identified in the Virtuous report is essentially the cost of staying individual. Organizations that have not moved to shared workflows are experiencing real productivity gains for individual staff, but those gains are not adding up to transformational organizational change. The efficiency plateau is real: you get individual improvements that never compound into organizational capability. This means that while you are experiencing incremental gains, peer organizations that make the leap to shared workflows are building compounding advantages that widen the gap over time.

    What Shared AI Workflows Actually Look Like

    The phrase "shared AI workflows" can sound abstract, but the practical reality is straightforward. A shared AI workflow has four distinguishing characteristics that separate it from individual AI use: shared knowledge inputs, shared prompts, documented review processes, and shared learning loops.

    1. Shared Knowledge Inputs

    Centralized organizational content that anyone can draw from when prompting AI

    In individual AI use, each person includes contextual information about the organization in their prompts from memory, which means that information varies based on what they happen to remember. In a shared workflow, this context lives in a central location that everyone draws from: the organization's mission statement, program descriptions, target populations, past grant language, donor segments, brand voice guidelines, key outcomes data. This shared knowledge base ensures that AI outputs are grounded in consistent, accurate organizational information regardless of who is doing the prompting.

    The most sophisticated version of this is what grant writing platforms like Grantable have built: a shared organizational knowledge base that captures all approved program language, financials, and outcomes data, and allows anyone on the grant team to draw from it when drafting proposals. But you can start simply with a shared Google Doc or Notion page that captures the essential organizational context your team uses repeatedly.
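
    To illustrate what drawing from shared knowledge inputs can look like once it moves beyond copy-and-paste, here is a minimal sketch that prepends a central context document to any task prompt. The file name and wording are assumptions, not a prescribed format; the point is that the organizational context comes from one shared source rather than each person's memory:

```python
from pathlib import Path

# Hypothetical export of the shared context doc (Google Doc, Notion page, etc.)
CONTEXT_FILE = Path("org_context.md")  # mission, programs, outcomes, brand voice

def build_prompt(task_instructions: str) -> str:
    """Ground a task-specific prompt in the shared organizational context."""
    context = CONTEXT_FILE.read_text(encoding="utf-8")
    return (
        "You are drafting on behalf of our organization. "
        "Use only the organizational facts provided below.\n\n"
        f"--- ORGANIZATIONAL CONTEXT ---\n{context}\n\n"
        f"--- TASK ---\n{task_instructions}"
    )

print(build_prompt("Draft a two-paragraph thank-you letter for a first-time donor."))
```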

    2. A Shared Prompt Library

    Tested, documented prompts organized by function that the whole team can use and improve

    The most practical first step toward shared AI workflows is building a prompt library: a documented collection of effective prompts organized by function. Instead of each staff member spending time figuring out how to prompt AI for a thank-you letter, an impact summary, a grant narrative section, or a social media post, anyone on the team can pull from a library of approaches that have already been tested and refined.

    A prompt library can start as simple as a shared Google Doc with sections for each major use case: grant writing, donor communications, social media, board materials, program reporting. Each entry includes the prompt itself, notes on when to use it, guidance on what to review before using the output, and space for team members to suggest improvements. The library becomes more valuable over time as the team refines prompts and adds new ones based on collective experience.

    The key distinction from individual prompt collections is that the library is a shared organizational asset, not a personal file. When a staff member discovers a better approach, they update the shared library. When someone new joins the team, they can immediately access the organization's collective AI knowledge rather than starting from scratch.
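
    If the library eventually outgrows a shared document, the same structure translates directly into data. Here is a hypothetical sketch of a single library entry, with illustrative field names mirroring the documentation described above:

```python
from dataclasses import dataclass, field

@dataclass
class PromptEntry:
    """One entry in a shared prompt library (field names are illustrative)."""
    name: str
    function: str            # e.g., "grant writing", "donor communications"
    prompt: str              # the tested prompt text
    when_to_use: str         # guidance on the right situations
    review_before_use: str   # what a human must verify in the output
    improvement_notes: list[str] = field(default_factory=list)

thank_you = PromptEntry(
    name="First-time donor thank-you",
    function="donor communications",
    prompt=("Using the organizational context provided, draft a warm, "
            "two-paragraph thank-you letter for a first-time donor."),
    when_to_use="Gifts under the major-donor threshold, within 48 hours of the gift.",
    review_before_use="Verify the donor's name, gift amount, and program reference.",
)
thank_you.improvement_notes.append("Shorter openings performed better in spring appeal.")
```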

    3. Documented Review Processes

    Clear standards for who reviews AI outputs and what they are looking for

    A shared workflow includes explicit guidance on how AI outputs should be reviewed before use. This is not about distrust of AI. It is about maintaining quality standards and organizational voice while capturing the efficiency benefits of AI-assisted drafting. A donor communication workflow might specify that AI-generated drafts be reviewed by the development director before sending. A grant writing workflow might require that program staff verify that the AI's description of program outcomes matches current data.

    These review standards serve another important function: they make AI use visible and sanctioned. When review is built into the process, staff are not making individual judgments about whether their AI use is appropriate. The organization has made that judgment at the policy level, and the review process is the mechanism for maintaining quality. This visibility is essential for moving AI from shadow practice to legitimate organizational tool.

    4. Shared Learning Loops

    Mechanisms for capturing what works and improving shared resources over time

    The final component of a mature shared AI workflow is a structured way for the team to share discoveries and improve the collective system. This can be as simple as a standing item on the monthly team meeting agenda to share "AI wins and fails," a Slack channel for sharing effective prompts, or a quarterly review of the prompt library where the team identifies what should be updated or expanded.

    Without this mechanism, even well-designed shared workflows stagnate. AI tools evolve rapidly, organizational needs shift, and individual staff members will continue discovering better approaches. The learning loop is what converts those discoveries from personal insights into organizational improvements. It is also what makes shared workflows self-reinforcing over time: each contribution from a team member makes the system more valuable, which increases the incentive to contribute.

    Shared AI Workflows in Practice: Function by Function

    The most convincing way to understand shared workflows is to see what they look like in concrete practice. Here are four of the highest-value use cases for nonprofit organizations, with enough detail to understand how the individual and shared versions differ.

    Grant Writing

    In individual mode, each grant writer has their own prompting approach, their own template phrases, and their own understanding of how to describe the organization's programs. In shared mode, a centralized knowledge base holds all approved program descriptions, outcome metrics, organizational narratives, and sample language from successful past proposals. Writers draw from this shared foundation, and the AI handles adapting content to word limits and funder-specific requirements.

    The impact can be substantial. When grant teams systematize their approach this way, they can significantly scale the number of proposals submitted without proportionally scaling staff time. The same program narrative language, quality-controlled and approved, flows into many more proposals with consistent accuracy.

    Donor Communications

    A shared donor communications workflow starts with approved message templates organized by donor segment and communication type: first-time donor thank-you, lapsed donor reengagement, major donor impact update, recurring donor acknowledgment. The shared knowledge base includes the organization's brand voice guidelines, approved impact statistics, and sample language that has performed well in past campaigns.

    The review process specifies that any AI-generated communications bound for major donors get reviewed by the development director before sending, while standard acknowledgment letters can be sent after a basic quality check by the development coordinator. This tiered review structure captures efficiency benefits while maintaining quality controls where they matter most.
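
    That tiered rule is simple enough to write down as a sketch. The threshold and role names below are assumptions standing in for whatever your own policy specifies:

```python
MAJOR_DONOR_THRESHOLD = 1_000  # dollars; hypothetical cutoff set by policy

def required_reviewer(communication_type: str, gift_amount: float) -> str:
    """Route an AI-drafted communication to the right human reviewer."""
    if communication_type == "major_donor_update" or gift_amount >= MAJOR_DONOR_THRESHOLD:
        return "development director"   # full review before sending
    return "development coordinator"    # basic quality check is sufficient

assert required_reviewer("acknowledgment", 50) == "development coordinator"
assert required_reviewer("major_donor_update", 0) == "development director"
```

    Even if no one ever runs this as code, writing the rule this precisely forces the team to agree on the threshold and the reviewer, which is the real point of a documented review process.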

    Program Reporting

    Program staff often spend significant time translating raw data and participant notes into narrative reports for funders. A shared AI workflow for program reporting includes prompts that help staff structure their observations, data entry templates that AI can convert into narrative format, and review checklists that ensure all required reporting elements are included.

    The shared component is the set of prompts and templates that any program staff member can use, regardless of their writing confidence. This levels the playing field across the program team and reduces dependence on a single staff member who might be a strong writer. The shared review process ensures that funder-specific requirements are met consistently.

    Internal Knowledge Management

    Meeting transcription, summary generation, and action item tracking are among the most immediately useful shared AI workflows. Tools like Microsoft Copilot in Teams or standalone transcription tools can automatically capture meeting notes, generate summaries, and extract action items. When this is configured as a shared team practice with consistent formatting, it dramatically reduces the time spent on meeting administration.

    More importantly, it creates a searchable organizational memory. When a staff member wants to find out what was decided about a program three months ago, they can search meeting summaries rather than asking around or reconstructing decisions from email threads. This kind of shared institutional memory is one of the most compelling arguments for moving from individual to shared AI use.
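
    As a minimal sketch of that searchable memory, suppose meeting summaries are saved as plain-text files in a shared folder (the folder name is hypothetical); a few lines then answer "what did we decide about X":

```python
from pathlib import Path

SUMMARY_DIR = Path("meeting_summaries")  # hypothetical shared folder of .txt files

def search_summaries(keyword: str) -> list[tuple[str, str]]:
    """Return (file name, matching line) pairs for a simple keyword search."""
    hits = []
    for f in sorted(SUMMARY_DIR.glob("*.txt")):
        for line in f.read_text(encoding="utf-8").splitlines():
            if keyword.lower() in line.lower():
                hits.append((f.name, line.strip()))
    return hits

for name, line in search_summaries("after-school program"):
    print(f"{name}: {line}")
```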

    How to Get Started: A Practical Roadmap

    The shift from individual to shared AI use does not require a large-scale initiative, expensive software, or a technology overhaul. The organizations making this shift successfully are starting small and building incrementally, creating visible wins that build momentum and demonstrate value before expanding.

    Phase 1: Surface and Aggregate (Weeks 1-4)

    Discover what already exists before building anything new

    Before you can build shared workflows, you need to know what individual workflows already exist. Ask staff informally what AI tools they are using and for what. You will likely discover that AI adoption is more widespread than leadership realized, and that some staff members have already developed sophisticated approaches that could benefit the whole team. This discovery phase has an important secondary effect: it signals that AI use is visible and sanctioned, which reduces the status anxiety that drives knowledge hoarding.

    • Survey staff anonymously about current AI tool use and primary use cases
    • Identify 2-3 staff members who are already using AI extensively and recruit them as early contributors
    • Identify the 1-2 use cases that create the most administrative burden across the most staff
    • Create a basic AI policy that establishes what is appropriate and what data should not be shared with AI tools

    Phase 2: Build the Foundation (Weeks 5-8)

    Create the shared resources that will anchor your workflows

    Start with two foundational assets: a shared knowledge base and a starter prompt library. The knowledge base does not need to be sophisticated. A shared Google Doc or Notion page with the organization's mission, program descriptions, key outcome data, and brand voice guidelines is enough to start. The prompt library can begin with the prompts that your early contributors are already using, organized by function and documented well enough that someone unfamiliar could follow them.

    • Create a shared organizational context document with mission, programs, outcomes, and brand voice
    • Document 5-10 high-value prompts in a shared library, organized by function
    • Define a simple review process for the first shared workflow you are piloting
    • Train a small pilot group of 3-5 staff members on using these shared resources

    Phase 3: Pilot and Measure (Weeks 9-16)

    Run one workflow with a small group and generate evidence of impact

The most common mistake in building shared AI workflows is trying to change everything at once. Start with one use case, one team, and a specific time window. The goal of the pilot is not perfection. It is generating enough concrete evidence of impact (time saved, quality improved, consistency increased) to make the business case for expanding the approach. This evidence is what changes the conversation from "should we do this" to "how do we scale this."

    • Run the pilot for 30-60 days with consistent documentation
    • Track time spent on AI-assisted tasks vs. equivalent tasks without AI assistance (a lightweight tracking sketch follows this list)
    • Collect qualitative feedback from pilot participants on what worked and what needs refinement
    • Document specific improvements to make before broader rollout
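
    To show how lightweight that time tracking can be, here is an illustrative sketch comparing baseline and AI-assisted task times. All task names and minutes are invented for the example; the pilot supplies the real numbers:

```python
# Minutes per task, logged during the pilot (illustrative values only).
baseline_minutes = {"thank-you letter": 45, "impact summary": 120, "grant section": 180}
ai_assisted_minutes = {"thank-you letter": 15, "impact summary": 50, "grant section": 90}

total_saved = 0
for task, before in baseline_minutes.items():
    after = ai_assisted_minutes[task]
    saved = before - after
    total_saved += saved
    print(f"{task}: {before} -> {after} min ({saved} saved, {saved / before:.0%})")

print(f"Total saved across tracked tasks: {total_saved} minutes")
```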

    Phase 4: Scale and Sustain

    Expand what works and build habits that keep the system alive

    Once you have evidence from a successful pilot, expansion becomes much easier. Staff who were skeptical become curious when they see peers saving time and producing better outputs. The shared library grows as more team members contribute. The organizational context document gets richer as more functions are added. The learning loop becomes a natural part of how the team operates. At this stage, you are no longer implementing a new system. You are maintaining and improving one that has already demonstrated value.

    • Share pilot results with the full team and use them to build enthusiasm for expansion
    • Designate an AI champion to maintain shared resources and onboard new staff
    • Schedule quarterly reviews of the prompt library to update and expand based on collective experience
    • Add new use cases incrementally rather than trying to cover everything at once

    The Role Leadership Must Play

    One finding from research on AI adoption is consistent and unambiguous: leadership behavior is the single most powerful predictor of whether individual AI use becomes shared organizational practice. Research from Atlassian found that people with active leadership support for AI adoption are 2.5 times more likely to become strategic AI collaborators rather than staying in individual experimentation mode. The inverse is equally true: where leadership is absent, passive, or skeptical, shared adoption stalls.

    Leadership support does not require technical expertise. It requires four things: visibility, resources, a clear signal about job security, and accountability. Visibility means that leaders openly use AI tools themselves, share what they are learning, and treat AI adoption as a genuine organizational priority rather than a staff-level experiment. When executives model transparent AI use, they remove the status anxiety that drives knowledge hoarding and shadow practice.

    Resources means giving staff the time and support to build shared systems, which looks very different from individual adoption. Building a prompt library, documenting workflows, running a pilot, and training colleagues all require time that is not available unless leadership explicitly carves it out. Treating shared workflow development as extra work piled on top of existing responsibilities is a reliable way to ensure it never happens.

    The job security signal is often the most overlooked. When staff perceive that AI efficiency gains might reduce headcount, they have a rational incentive to minimize visible productivity improvements and resist sharing knowledge that could make their role redundant. Leaders who make explicit and credible commitments that AI adoption is about augmenting capacity, not reducing staff, remove a significant barrier to genuine knowledge sharing. This commitment needs to be backed up by organizational behavior, not just stated in an all-hands meeting.

    Accountability means measuring AI adoption at the organizational level, not just waiting for individual staff to self-report wins. This might mean tracking the number of shared workflows documented, measuring time savings in piloted use cases, or including AI workflow development as a goal in performance reviews for relevant roles. What gets measured gets done. If shared AI workflow development is never measured, it will always lose to the immediate demands of daily work.

    The AI champion model is one effective way to create leadership accountability without requiring every executive to become an AI expert. Designating one or two people per team as internal AI champions, giving them explicit time and authority to build shared resources, and holding them accountable for measurable outcomes creates the organizational structure that organic, bottom-up adoption never produces. Champions who have clear mandates, not just enthusiasm, are the ones who actually move organizations from the 81% to the 7%.

    Overcoming the Most Common Obstacles

    Even well-designed shared workflow initiatives encounter predictable obstacles. Understanding these in advance makes them easier to navigate without losing momentum.

    "We don't have time to document workflows on top of doing the work."

    This is the most common objection, and it is not wrong. Documentation does require time. The answer is to make documentation the minimum viable version: a prompt that works, a brief note about when to use it, and a sentence about what to review in the output. Fifteen minutes of documentation can save a colleague hours of reinvention. You are not writing a manual. You are leaving a note for future colleagues and for yourself six months from now.

    "Staff are resistant to changing how they work."

    Resistance is usually about one of three things: uncertainty about job security, skepticism that AI produces good results, or lack of confidence in using new tools. Address these directly. Job security language needs to come from leadership, not middle management. Demonstrating actual outputs, not just explaining the concept, converts skeptics faster than any argument. Peer-to-peer training, where a colleague teaches rather than IT or leadership, reduces the confidence barrier significantly.

    "We tried sharing prompts before and it didn't stick."

    Prompt libraries that fail to stick usually have one of two problems: they were built by one person without broader input, so the team does not feel ownership over them, or they were not integrated into actual workflows, so using them required extra steps rather than fewer. The solution is co-creation and integration. Build the library with the team rather than for the team, and connect it directly to the tools and processes staff use daily.

    "We're worried about data privacy with shared AI tools."

    This is a legitimate concern that deserves a clear policy answer, not a workaround. The AI policy you create in Phase 1 should specify exactly what data can be included in prompts and what must be excluded: no donor names or contact information, no beneficiary identifiers, no financial data that is not already public. Establishing these rules clearly enables sharing rather than preventing it, because staff can participate confidently knowing they are operating within sanctioned boundaries. You can explore resources on AI knowledge management for nonprofits for deeper guidance on data governance within shared systems.
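
    For teams that want a guardrail beyond the written policy, even a very simple pre-prompt screen can flag obvious contact information before anything reaches an AI tool. The sketch below is deliberately crude, an illustration rather than a complete PII filter:

```python
import re

# Simple illustrative patterns; real PII filtering needs far more coverage.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def policy_flags(prompt_text: str) -> list[str]:
    """Return policy warnings to show a user before a prompt is sent."""
    flags = []
    if EMAIL.search(prompt_text):
        flags.append("possible email address: remove donor contact info")
    if PHONE.search(prompt_text):
        flags.append("possible phone number: remove donor contact info")
    return flags

print(policy_flags("Draft a note to jane.doe@example.org about her gift."))
# ['possible email address: remove donor contact info']
```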

    Connecting Shared Workflows to Your Broader AI Strategy

    Building shared AI workflows is not a standalone initiative. It is the operational foundation for every other AI ambition your organization might have. If you are incorporating AI into your strategic plan, the shared workflow is how strategy becomes practice. If you are trying to overcome staff resistance to AI adoption, the shared workflow gives skeptics a concrete, low-risk way to experience AI value rather than asking them to take it on faith.

    Shared workflows are also how you make AI adoption equitable within your organization. Individual AI use tends to benefit the staff members who are most confident with technology, most willing to experiment, and most connected to information about new tools. These are often, though not always, the people who already have structural advantages in the organization. Shared workflows democratize access: when the knowledge is documented and available, the benefit is no longer confined to the most tech-forward individuals.

    There is also a board and funder dimension worth considering. As funders increasingly expect organizations to demonstrate how they are using AI responsibly and effectively, shared workflows are a concrete answer to "what does your AI implementation look like?" A documented prompt library, a clear review process, and measurable outcomes from a pilot are far more compelling than vague claims about staff using ChatGPT. As you prepare board materials about your AI strategy, shared workflows give you something specific and verifiable to report.

    Conclusion: From 81% to Something Better

    The 81% problem is not a technology failure. Your staff are already using AI. The tools are accessible. The capabilities are real. What is missing is the organizational infrastructure to convert individual discovery into shared practice. The gap between 92% adoption and 7% major impact is almost entirely explained by the difference between AI that lives in individual staff members and AI that lives in the organization.

    Closing this gap is not about launching a major transformation initiative. It is about starting somewhere specific: one use case, one team, a shared document with ten prompts and some organizational context, a commitment to review outputs before they go out, and a monthly habit of sharing what is working. From that starting point, the system builds on itself. Each contribution makes the next one more valuable. Each efficiency gained creates space to document the next workflow. What begins as a small deliberate investment grows into organizational infrastructure that compounds over time.

    The organizations that will look back on 2026 as the year they made a genuine AI breakthrough are not those that deployed the most sophisticated tools. They are those that made a quiet, determined decision to stop letting AI expertise live exclusively in individual heads and start building it into how the organization operates. That decision is available to any nonprofit, regardless of size or budget. The only requirement is making it.

    Ready to Move Beyond Individual AI Use?

    We help nonprofit organizations build shared AI workflows, prompt libraries, and the governance structures that turn scattered experimentation into lasting organizational capability.