    Leadership & Strategy

    Embedded AI in Goals, Budgets, and KPIs: The 7% Standard Most Nonprofits Haven't Reached

    The 2026 Nonprofit AI Adoption Report from Virtuous and Fundraising.AI surveyed 346 organizations and found that 92 percent of nonprofits use AI, but only 7 percent describe it as embedded in their goals, budgets, and key performance indicators. This article walks through what "embedded" actually means in practice: how a strategic plan should reference AI, how an annual budget should account for it, what KPIs separate efficiency talk from mission impact, and who in the organization owns the answer when the board asks.

    Published: May 12, 2026 · 16 min read

    A useful way to read the 92 / 7 gap is as the difference between adoption and architecture. Adoption is having ChatGPT licenses, a few staff who write good prompts, and a vague sense that the technology is improving how the organization works. Architecture is the deliberate set of decisions that turn those tools into capability the board can defend, the auditor can trace, and the next executive director can inherit. The 7 percent have an architecture. The 92 percent have logins.

    Embedding does not require a chief AI officer, a six-figure technology budget, or a partnership with a major consulting firm. What it requires is that AI stop floating outside the organization's three primary planning instruments and start showing up inside them. Those instruments are the strategic plan that says where the organization is going, the budget that allocates the resources to get there, and the performance indicators that tell the board whether the plan is working. If AI is invisible in all three documents, no amount of staff enthusiasm or executive memos will translate into durable capability.

    This article is written for executive directors, COOs, CFOs, and board chairs at small and mid sized nonprofits who already know that ad hoc adoption has plateaued and are trying to figure out what formalizing AI actually looks like without overbuilding. We will work through what embedding means in each of the three instruments, what line items belong in a defensible AI budget, which KPIs to track at which stage of maturity, and where the accountability for AI strategy should sit on an org chart. The goal is to give you the artifacts a board would expect to see if it asked the question directly: tell us how AI fits into our plan, our money, and our results.

    A related piece, What Separates 7% from 92%: An Anatomy of the Nonprofits Actually Getting AI Right, looks at the broader habits of high impact organizations. This article focuses specifically on the three planning instruments and the work product they should contain.

    What "Embedded" Actually Means

    The word embedded gets used loosely. In the context of organizational maturity it has a specific meaning: AI is treated as a strategic investment category on par with technology infrastructure, process improvement, and leadership development rather than as a discretionary experiment funded out of operational slack. That promotion changes how the organization plans, budgets, governs, and reports.

    Three tests separate embedded from ad hoc adoption. First, can a new board member, looking only at the strategic plan, identify the organization's AI priorities for the next three years? Second, can a new CFO, looking only at the annual operating budget, see what AI costs, where the money goes, and what it is meant to produce? Third, can a new executive director, looking only at the most recent quarterly dashboard, tell whether AI investments are paying off in terms of mission outcomes? If the answer to any of these is no, the organization is in the 92 percent regardless of how many tools it has adopted or how often staff use them.

    Ad Hoc Adoption Looks Like

    • Individual staff using AI on personal or unmanaged accounts
    • No documented prompts, playbooks, or shared assets
    • No baseline metrics captured before adoption
    • AI spend buried in software or general administration
    • No named owner; "whoever is interested" runs the work

    Embedded Adoption Looks Like

    • AI objectives written into the strategic plan with timelines
    • Named owner accountable to the board for AI strategy
    • Discrete budget line items in the operating plan
    • KPIs with baselines, targets, and a reporting cadence
    • Documented workflows that survive staff turnover

    AI in the Strategic Plan

    A strategic plan that mentions AI in a single sentence under "technology" is not a plan that has embedded AI. It is a plan that has acknowledged AI. There is a difference, and the board members reviewing your next adoption report will notice. A plan that embeds AI treats it the way it treats any other capability the organization is intentionally building: with named objectives, target outcomes, and milestones the leadership team can be measured against.

    The cleanest approach is to weave AI through the existing strategic pillars rather than creating a separate AI pillar that competes with mission goals. If your plan has pillars for fundraising, programs, operations, and people, each pillar should contain at least one AI related objective. The fundraising pillar might commit to an AI assisted donor research workflow that lifts qualified prospect identification by a defined margin within eighteen months. The programs pillar might commit to AI assisted intake that shortens the time from inquiry to first appointment. The operations pillar might commit to documented prompt libraries covering the ten highest volume back office tasks. The people pillar might commit to an AI literacy benchmark for every staff member within a given timeframe.

    This approach forces a useful discipline. It requires the leadership team to answer the question that the 92 percent never answer: where, in our actual work, do we believe AI will make the mission go further? An AI pillar lets you defer that question and treat AI as a technology project. Distributing AI objectives across mission pillars forces every program leader to own a piece of the answer.

    The plan should also name the risks the organization is consciously accepting. The 2026 reporting on AI mental health liability, including the Gavalas litigation against Google over Gemini, has made clear that boards will be asked what they knew about AI risks and when. A strategic plan that lists AI objectives without listing the corresponding risks and controls reads as incomplete. See our coverage of state AI mental health laws nonprofits must track for the regulatory context that should inform this section.

    What Belongs in the AI Section of a Strategic Plan

    A short list that distinguishes embedded planning from acknowledgment.

    • Distributed objectives: at least one AI related goal in each strategic pillar, tied to a mission outcome rather than a tool adoption metric.
    • A named owner: one person on the leadership team accountable for the portfolio, with a clear escalation path to a board committee.
    • Risk acknowledgment: a candid list of AI risks the organization is accepting, with the corresponding controls.
    • A maturity reference point: a statement of where the organization sits today and where it intends to be at the end of the plan.
    • Sunset criteria: conditions under which an AI initiative will be retired rather than expanded.

    Our companion piece on the three stage AI maturity model can serve as the reference point a strategic plan uses to describe its starting and target positions.

    AI in the Annual Budget

    Budgets reveal what an organization actually values. If AI does not appear as a discrete category in your operating plan, it does not matter what your strategic plan says. Auditors, funders, and incoming finance staff will read your budget for evidence of priority, and a budget that bundles AI into "software" or "office supplies" is a budget that has not yet treated AI as strategic.

    The argument for a separate budget category is not about size. A small organization might allocate only a few thousand dollars total. The argument is about visibility. When AI costs are visible, they can be reviewed, defended, and optimized. When they are invisible, they grow without scrutiny and shrink without analysis. The CFO who can answer the question "what are we spending on AI this year and what are we getting" is in a different position from the CFO who has to dig through five general ledger codes to produce an estimate.

    The most useful structure is to break AI spend into six categories. Each category has a distinct purpose and a distinct way of being controlled. Together they account for the full economic footprint of an AI program rather than only the obvious software licenses.

    1. Platform and Licensing

    Managed subscriptions: ChatGPT Team or Enterprise, Claude for Work, Microsoft Copilot, and any AI features embedded in your CRM, fundraising platform, or productivity suite. The visible spend most nonprofits already track.

    2. Integration and Data Infrastructure

    Connectors, API usage, data cleanup, vector databases for retrieval, and the engineering or consulting time required to make AI tools talk to your CRM or case management system. Often the largest line item once an organization moves past basic chat use.

    3. Training and Capability Building

    Staff training time, external workshops, AI literacy programs, and certifications. A budget that funds tools but not training is a budget that has decided to depend on luck.

    4. Governance and Compliance

    Policy development, legal review, monitoring tools, third party assessments, and red team exercises. Particularly important for organizations subject to state AI mental health laws or the EU AI Act high risk system rules.

    5. Pilot and Innovation Reserve

    A dedicated experimentation budget for testing new tools and workflows without disrupting core operations. Common practice is to ring fence one to three percent of the operating budget, sized to the organization's risk tolerance.

    6. Measurement and Evaluation

    Analytics tooling, evaluation consultants, and the staff time required to baseline metrics before deployment and track them after. Without this line item, ROI is a story rather than a number.
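
    The six-category structure lends itself to a simple bookkeeping sketch. The category names, dollar figures, and function names below are illustrative assumptions, not a prescribed chart of accounts; the point is that a CFO working from a single structure can produce one total and one variance report on demand.

    ```python
    # Illustrative sketch: an AI budget broken into the six categories above.
    # All figures are hypothetical examples, not recommendations.
    AI_BUDGET = {
        "platform_and_licensing": 6_000,
        "integration_and_data": 8_000,
        "training_and_capability": 4_000,
        "governance_and_compliance": 2_500,
        "pilot_and_innovation_reserve": 3_000,
        "measurement_and_evaluation": 1_500,
    }

    def total_ai_spend(budget: dict) -> int:
        """Sum all category allocations so the CFO has one defensible number."""
        return sum(budget.values())

    def variance_report(budget: dict, actuals: dict, threshold: float = 0.15) -> dict:
        """Flag categories more than `threshold` off plan (mirrors a 15% board rule)."""
        flags = {}
        for category, planned in budget.items():
            actual = actuals.get(category, 0)
            if planned and abs(actual - planned) / planned > threshold:
                flags[category] = {"planned": planned, "actual": actual}
        return flags
    ```

    The same structure answers both of the CFO questions the article raises: "what are we spending on AI this year" is one function call, and the variance report surfaces exactly the categories that need an explanation in the board deck.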

    For organizations grappling with the rising cost of token based pricing, see AI as a metered utility: a nonprofit CFO's framework. The CFO who treats AI spend the way a utility CFO treats kilowatt hours is the CFO who can predict next year's bill.

    KPIs: Four Categories That Separate Efficiency Talk from Mission Impact

    The most common measurement mistake in nonprofit AI programs is reporting only on efficiency. Hours saved is the easiest number to produce and the least interesting one to a board. The board wants to know whether the hours saved have been converted into mission output, whether the quality of work has held or improved, and whether adoption is broad enough that the gains will survive the departure of any one staff member. Embedded measurement covers four categories, and a program reporting on only one is a program that has not yet thought through what success looks like.

    Category 1: Efficiency and Productivity

    How much faster, with how much less.

    These KPIs measure the immediate operational lift from AI assisted work. They are the easiest to capture and the most likely to be celebrated prematurely.

    • Hours saved per staff per week on tasks where AI is in active use
    • Time to first draft for grant proposals, donor letters, and appeals
    • Cost per task on AI assisted workflows such as donor research profiles or acknowledgment letters
    • Backlog reduction on case notes, intake forms, or other repetitive documentation

    Category 2: Adoption and Engagement

    How broadly the capability is in use across the organization.

    Efficiency without adoption is fragility. If three power users disappear, so does the lift. Adoption KPIs measure organizational breadth and durability.

    • Active AI users divided by total staff, tracked monthly
    • Number of documented, repeatable workflows in active use
    • Percentage of staff who have completed AI literacy training appropriate to their role
    • Tools in use per department, tracked to identify sprawl

    Category 3: Quality and Risk

    How well the AI assisted work performs against organizational standards.

    Speed without quality is debt. Quality KPIs track the rate at which AI assisted outputs need correction, the rate at which they meet policy, and the rate at which they create incidents.

    • Rework rate on AI assisted outputs (drafts that need significant editing before use)
    • Reviewer quality scores from a sampled audit of AI assisted work
    • Policy adherence rate, including disclosure where required
    • Number of AI incidents (hallucinations reaching constituents, data leakage, escalations) and time to remediation

    Category 4: Mission Outcomes

    Whether the efficiency converts into something the board cares about.

    The category that the 7 percent track and the 92 percent skip. Mission KPIs link AI investments directly to the outcomes in your theory of change.

    • Donor retention rate in AI assisted segments versus a held out control
    • Cost per dollar raised, tracked before and after AI deployment in fundraising workflows
    • Beneficiaries reached per full time staff equivalent in service delivery roles
    • Program throughput indicators such as clients served, cases closed, or applications processed per cycle

    Whichever KPIs you select, the critical move is establishing baselines before deployment. Without a baseline, lift is a claim you cannot defend. A board member who asks "compared to what?" should never be met with silence.
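
    A minimal sketch of the baseline discipline, using hypothetical numbers for a time-to-first-draft KPI: lift is only computable when a pre-deployment baseline was captured, and the function refuses to produce a number otherwise.

    ```python
    # Sketch: computing lift on a KPI only when a pre-deployment baseline exists.
    # The sample numbers are hypothetical.

    def lift(baseline, current):
        """Fractional improvement over baseline for a cost or time KPI.

        Raises if no baseline was captured -- the "compared to what?" guard.
        """
        if baseline is None or baseline <= 0:
            raise ValueError("No baseline captured before deployment; "
                             "lift is a claim, not a number.")
        return (baseline - current) / baseline  # positive means improvement

    # Example: time to first draft for a grant proposal, in hours.
    baseline_hours = 12.0   # measured for four weeks before the AI-assisted workflow
    current_hours = 7.5     # measured after rollout
    print(f"Time-to-first-draft lift: {lift(baseline_hours, current_hours):.0%}")
    ```

    The guard clause is the whole point: an organization that cannot supply the `baseline` argument discovers the gap at measurement time rather than in front of the board.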

    Linking AI Investments to Mission Outcomes

    The most rigorous approach to connecting AI spend to mission impact uses your theory of change as the scaffold. Inputs (AI tools, training, integration) lead to activities (specific AI augmented workflows), which produce outputs (more grants drafted, more clients screened, more donors qualified), which generate outcomes (more funding secured, faster service delivery, higher retention), which contribute to impact (mission advancement on the metrics your strategic plan promised).

    The discipline this imposes is useful. Every proposed AI investment has to answer a question: which outcome metric does this move? If the answer is unclear, the investment does not get approved. That single rule prevents the slow accumulation of disconnected tools that produces what the Virtuous report calls the efficiency plateau. It also forces a conversation about whether the saved hours are being redirected to mission critical work or simply absorbed back into ambient busyness.

    A useful frame here is to think about each AI initiative as having a redirect plan. If an AI assisted intake workflow saves a case worker five hours a week, the plan should state where those five hours go. Possible answers: more direct service hours, more follow up calls with at risk clients, more time spent on case complexity that previously got short shrift. An organization that cannot answer the redirect question is an organization that has converted AI gains into vague ambient relief rather than mission output, and that is precisely the efficiency plateau the 92 / 7 gap describes.

    Board Level Reporting on AI

    A board that hears about AI only when something goes wrong is a board that has been set up to be reactive. Embedded reporting puts AI on a quarterly cadence with a defined dashboard the board sees every meeting. The dashboard does not need to be elaborate. It needs to answer four questions a board would ask if it were paying attention.

    The Quarterly AI Board Dashboard

    Four questions, one page, ten minutes of meeting time.

    • Where are we spending? Total AI spend by category against budget, with variance explanations for any category over fifteen percent off plan.
    • What is the lift? Two or three efficiency, adoption, and quality KPIs against baseline, with the mission KPI most directly affected.
    • What is the risk? A short list of AI risks the organization is tracking, recent incidents and their resolution, and any new regulatory exposure (state mental health laws, EU AI Act, IRS disclosure rules).
    • What is next? What is in pilot, what is scaling, what is being retired, and what major decisions the board will be asked to approve in the coming quarter.
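
    The four questions can be assembled into a one-page text render. This is a hypothetical sketch rather than any standard reporting format; the section names mirror the bullets above and the fifteen percent variance threshold.

    ```python
    # Sketch: the four-question quarterly AI dashboard as a plain-text one-pager.
    # Field names and the 15% variance flag follow the four questions above.
    def render_dashboard(spend, budget, kpis, risks, pipeline):
        """Return a one-page text summary answering the four board questions."""
        lines = ["QUARTERLY AI DASHBOARD", ""]
        lines.append("1. Where are we spending?")
        for cat, planned in budget.items():
            actual = spend.get(cat, 0)
            over = planned and abs(actual - planned) / planned > 0.15
            flag = "  << over 15% off plan" if over else ""
            lines.append(f"   {cat}: ${actual:,} of ${planned:,}{flag}")
        lines.append("2. What is the lift?")
        for name, (baseline, current) in kpis.items():
            lines.append(f"   {name}: baseline {baseline} -> current {current}")
        lines.append("3. What is the risk?")
        lines.extend(f"   - {risk}" for risk in risks)
        lines.append("4. What is next?")
        lines.extend(f"   {stage}: {item}" for stage, item in pipeline.items())
        return "\n".join(lines)
    ```

    Whatever produces the page, the design choice to keep: one variance flag per overspent category, so every flagged line arrives at the board meeting with a prepared explanation.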

    Reporting at this cadence produces a different relationship between the board and the organization's AI work. The board moves from being a recipient of crisis updates to being an informed steward of an ongoing investment. The leadership team gets durable buy in for difficult decisions. The minutes create the documentation that protects the organization when an incident does eventually occur. For a deeper treatment, see seven ways AI can improve board communications.

    Who Owns AI Strategy

    Ownership is the question the 92 percent quietly avoid. AI gets used by everyone and owned by no one. That is a sustainable arrangement only while AI is genuinely experimental. The moment it becomes operationally meaningful, the absence of an owner becomes a structural risk. The board cannot hold management accountable for an outcome no one is named on, and management cannot prioritize work without a single neck on the line.

    The right answer varies by organization size. Below are the most common patterns, ordered roughly by organizational scale.

    Small Organization (Under 25 Staff)

    An AI working group of three to five people representing programs, fundraising, finance, and IT or operations. Chaired by a director level staffer with AI fluency. Reports to the executive director, who carries the AI agenda to the board.

    This pattern keeps the cost low and the conversation cross functional without requiring a new hire.

    Mid Sized Organization (25 to 100 Staff)

    A named AI lead, typically the COO or director of operations, owns the portfolio and is evaluated on AI specific objectives. A working group continues to provide cross functional input, but accountability is singular and visible on the org chart.

    The board sees one person at the table when AI is on the agenda.

    Larger Organization (100+ Staff)

    A dedicated chief innovation, chief digital, or chief AI officer role becomes economically defensible. Often paired with a small team and a formal board AI or technology committee.

    At this scale the cost of not having dedicated leadership exceeds the cost of the role.

    Federated or Affiliate Models

    National or umbrella organizations face the additional question of how much AI strategy is set centrally and how much sits with affiliates. The cleanest pattern is central policy and shared tooling, distributed implementation, with a regular forum for affiliate AI leads.

    See our writing on building an AI champions network for the operational mechanics.

    Common Pitfalls When Formalizing AI

    Even organizations committed to embedding AI run into predictable failure modes. Knowing them in advance is cheaper than living through them.

    Over Engineering Governance Before There Is Anything to Govern

    A twenty page AI policy at an organization with three AI users will be ignored and will reduce trust in subsequent governance work. Match the governance to the scale of the use. Start with a one page acceptable use policy and grow from there.

    Treating AI as IT

    If AI is owned by the technology team and reports through the IT budget, it will be optimized for stability and procurement, not mission outcomes. Embedded AI sits in operations or program leadership, not IT.

    No Baselines

    The single most preventable failure. Capturing baseline metrics for cost, time, and quality before deployment is the only way to credibly claim lift afterward. Build baselining into the pilot phase of every initiative.

    Tool Sprawl

    Every department picks its own AI vendor, none of them integrate, and the integration cost arrives all at once two years later. A modest review gate on new AI procurement saves real money. See how nonprofits end up running thirty bots for the runaway version of this problem.

    Measuring Only Efficiency

    The efficiency plateau is real. Hours saved that are not redirected toward mission critical work disappear into ambient busyness, and the organization gets nothing measurable from the investment. The redirect plan is part of the initiative, not an afterthought.

    Pilot Purgatory

    Endless pilots that never scale and never sunset. Every pilot needs a written decision date, a scale or kill threshold, and an owner empowered to make the call. Without those three elements, pilots become a comfortable substitute for difficult prioritization.

    Excluding the Board

    A board that is blindsided by an AI incident will respond by adding controls that constrain the program for years. A board that has been briefed quarterly has the context to respond proportionately. The cost of regular reporting is small compared with the cost of a surprise.

    A Sequence That Works

    Reading a list of practices can feel overwhelming. The practical sequence is shorter than the list suggests, because the items reinforce one another and most organizations only need to take the first three steps to break out of the 92 percent.

    A 12 Month Path to Embedded AI

    Designed for a small to mid sized nonprofit at the ad hoc stage.

    • Months 1 to 2: Name the owner. Identify a director level staffer accountable for AI strategy and announce the role internally and to the board.
    • Months 2 to 4: Inventory current AI use. Document where AI is in active use, on which platforms, by which staff, for what purposes. The inventory itself is often a revelation.
    • Months 3 to 5: Draft the AI section of the strategic plan, with one objective in each pillar and a candid risk acknowledgment.
    • Months 4 to 6: Restructure the budget. Move AI spend out of general administration and into the six categories outlined above. The total may not change; the visibility will.
    • Months 5 to 7: Establish baselines for two or three priority workflows. Time, cost, and quality, captured for at least four weeks before any AI assisted version of the workflow is rolled out.
    • Months 7 to 9: Build the quarterly board dashboard. Pilot it with the executive team before presenting to the board, then walk the board through the first quarter live.
    • Months 9 to 12: Run the first formal scale or kill decision on at least one pilot, using the data the dashboard now produces. The act of retiring something openly is what signals to staff that the new approach is real.

    Conclusion

    The 92 / 7 gap is not a gap of tools, talent, or budget. Every organization in the 92 percent has access to the same models, the same prompt techniques, and the same training resources as the 7 percent. The difference is architecture: whether AI lives inside the strategic plan, the budget, and the KPIs that the board reviews, or whether it floats outside those instruments as a fashionable but unaccounted for activity.

    The work of moving across that boundary is not glamorous. It is filing AI line items in the right category, writing one paragraph into each pillar of the strategic plan, asking program leaders which mission outcome a given AI investment is meant to move, and showing up at the board meeting with a one page dashboard that answers four questions honestly. Done quarterly, this work compounds. The organization becomes able to defend its AI spend, retire failed pilots without political pain, and absorb staff turnover without losing capability. The 7 percent built that ability with these unglamorous moves, not with a transformative platform or a heroic chief AI officer.

    The question to bring to your next board meeting is simple. If a new director joined our board tomorrow and asked to see how AI fits into our strategic plan, our annual budget, and our quarterly reporting, what would we hand them? If the answer is "we will pull something together," the organization is in the 92 percent. The path out is to build the artifacts before the next meeting, not as a polished governance program, but as honest, visible documentation that AI is a strategic investment this organization is making on purpose.

    Ready to Embed AI in Your Strategic Plan?

    One Hundred Nights helps nonprofits move AI out of ad hoc use and into goals, budgets, and KPIs the board can defend. We work alongside executive teams on strategic plan revisions, budget restructuring, KPI design, and the quarterly dashboards that make embedded AI real.