
    Agent Sprawl: How Nonprofits End Up Running 30 Bots and Knowing What None of Them Do

    The same structural problem that overwhelmed software teams when microservices exploded in the 2010s is arriving for AI. Organizations are deploying agents one at a time, each for a legitimate reason, until one day leadership realizes they have dozens of bots in production, most undocumented, many unmonitored, and a few no one can explain at all.

    Published: May 6, 2026 · 14 min read · AI Governance

    Consider a scenario that plays out across the nonprofit sector with increasing frequency. The fundraising director, eager to respond to donors faster, connects a chatbot to the CRM. A few weeks later, the programs team enables an AI assistant inside their project management tool. The communications staff spins up an automation on Zapier that uses an AI writing tool to draft social posts. A grant-funded initiative launches with its own AI component. The IT lead, pressed for time, enables the AI features bundled into the organization's email platform. By the end of the quarter, nobody has a full list of what is running.

    This is agent sprawl: the unmanaged accumulation of AI agents, bots, and automations across an organization, without coordination, documentation, or centralized oversight. It is not a failure of ambition. It is a success story gone ungoverned. Every individual agent was probably a good idea. The problem is that together, they form a shadow infrastructure that nobody fully controls.

    For nonprofits, the stakes are particularly high. The data flowing through these agents often includes donor relationships, beneficiary case records, financial information, and protected health data. The budgets governing these organizations leave little margin for surprise costs or security incidents. And the governance capacity, often spread thin across a small IT team or a single tech-savvy operations director, is rarely sized to monitor a growing fleet of autonomous agents.

    According to Gartner's April 2026 research on AI agent governance, only 13% of organizations believe they have the right governance in place for AI agents, and only 21% have mature governance models for autonomous agents. Given that the same research projects the average enterprise will have over 150,000 agents by 2028, the governance gap is widening faster than most organizations realize. This article explains how agent sprawl happens, what it costs, and how to build the oversight structures that keep it manageable.

    How Agent Sprawl Happens in Nonprofits

    Agent sprawl is not a single event. It is a slow accumulation driven by several overlapping forces that are especially pronounced in the nonprofit environment. Understanding these patterns is the first step toward interrupting them.

    The Low-Code Democratization Effect

    Platforms like Microsoft Copilot Studio, Zapier, Make, and n8n let any staff member deploy an AI-powered automation with near-zero technical skill. The friction of building agents has essentially disappeared. When creating an agent is as easy as configuring a form, the number of agents grows at the pace of staff problem-solving, which is to say, constantly.

    Platform Bundling and Default Activations

    AI capabilities are increasingly embedded inside tools organizations already pay for. A CRM upgrade enables an AI assistant by default. An email platform adds AI-powered send-time optimization in the background. A project management tool activates a workflow agent in the next billing cycle. Agents arrive without anyone explicitly choosing to deploy them.

    Pilot-to-Permanent Drift

    Organizations launch AI pilots with the intention of evaluating them formally before committing. But pilots rarely die. They quietly become permanent infrastructure while the formal evaluation is perpetually postponed. By the time anyone asks whether the pilot succeeded, the agent is already woven into daily operations and removing it would create disruption.

    Grant-Funded Agent Orphaning

    Grants often fund standalone AI implementations tied to specific programs. When the grant ends, the agent lives on. Nobody decommissions it because decommissioning would require acknowledging it exists and deciding who owns that responsibility. These orphaned agents continue running, accessing systems, and potentially making decisions, without any active owner or maintenance budget.

    Departmental Autonomy Without Coordination

    The fundraising team builds a donor-response bot. The programs team builds a volunteer scheduler. The communications team builds a social media assistant. Each decision makes sense in isolation. No one coordinates. By the time leadership realizes how many agents exist, there are a dozen in production, some doing overlapping work, some potentially accessing the same data through different credentials, and none of them fully documented.

    Staff Turnover and Knowledge Loss

    An employee who built or configured an agent leaves. Their institutional knowledge of what it does, why it was built, and which systems it connects to goes with them. The agent continues operating. When something eventually breaks or a question arises about its data access, nobody can answer it. Nonprofit sector turnover rates make this pattern especially common.

    What Agent Sprawl Actually Costs

    Agent sprawl is not merely a tidiness problem. It creates compounding risks across four dimensions that matter directly to nonprofit sustainability and mission delivery.

    Security Risks: Every Agent Is an Attack Surface

    Each agent creates new connection points that may be exploited

    Each agent deployed in your environment creates new connection points to APIs, SaaS applications, databases, and email systems. Every connection is a potential attack vector. Agents often inherit excessive permissions through OAuth tokens or API keys, enabling actions beyond their intended scope. These credentials are rarely reviewed or rotated, particularly for agents that operate in the background without regular human interaction.

    When agents are ungoverned, their behavior falls outside traditional security monitoring. Abnormal activity, data exfiltration, or credential compromise can go undetected for extended periods. For nonprofits whose funding depends on donor trust, and whose operations often involve protected client information under HIPAA or state privacy laws, a breach involving an ungoverned AI agent is not merely costly. It is existential.

    • Ungoverned OAuth connections persist after staff departures, leaving live access credentials that nobody monitors (a detection sketch follows this list)
    • Agents with case management access create HIPAA exposure when those agents are not documented in your risk assessment
    • Prompt injection attacks targeting ungoverned agents can exfiltrate data through normal-looking API responses
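    To make the first failure mode concrete, here is a minimal sketch of a departure check: cross-reference exported OAuth grants against the current staff roster and flag credentials that have outlived their owners. The data sources and names here are hypothetical placeholders; most identity platforms can export equivalent lists.

```python
# A minimal sketch: flag OAuth grants whose granting user has left.
# Both data sources below are hypothetical placeholder exports.
from datetime import date

# Hypothetical export of OAuth grants: (granting user, app name, grant date)
oauth_grants = [
    ("jordan@example.org", "DonorReplyBot", date(2024, 3, 1)),
    ("sam@example.org", "SocialDraftAgent", date(2025, 1, 15)),
]

# Hypothetical current-staff roster from HR
active_staff = {"sam@example.org", "alex@example.org"}

for user, app, granted in oauth_grants:
    if user not in active_staff:
        age_days = (date.today() - granted).days
        print(f"ORPHANED: {app} still holds access granted by departed "
              f"staff member {user} ({age_days} days old)")
```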

    Budget Risks: Silent Token Consumption

    Agents accumulate costs invisibly across billing cycles

    A nominally idle agent that polls an LLM API on a schedule can run up hundreds or thousands of dollars monthly before anyone notices. When agents are deployed across departments without centralized oversight, nobody has a complete view of total AI spend. Subscription fees for AI-enabled platforms appear under multiple department budget lines. API consumption charges arrive in a single invoice without workflow-level attribution.

    The problem compounds with agentic architectures. An agent stuck retrying a failed task or cycling through validation steps can burn thousands of tokens in minutes. Without spending caps and alerts on individual agents, there is no mechanism to detect runaway consumption before the invoice arrives. This connects directly to the token economics challenges explored in AI as a Metered Utility, where unpredictable consumption is identified as the primary budget failure mode for nonprofits adopting AI.
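    A basic mitigation is a per-agent spending cap with an alert threshold. The sketch below assumes you can pull month-to-date cost per agent from your provider's billing export; the agent names, caps, and 80% threshold are illustrative, not tied to any vendor's API.

```python
# A minimal sketch of per-agent spending caps with an alert threshold.
# Caps and agent names are illustrative policy choices.
monthly_caps_usd = {"donor-reply-bot": 50.0, "grant-summarizer": 20.0}

def check_spend(agent: str, month_to_date_usd: float) -> None:
    cap = monthly_caps_usd.get(agent)
    if cap is None:
        print(f"UNREGISTERED: {agent} has spend but no cap on file")
    elif month_to_date_usd >= cap:
        print(f"OVER CAP: {agent} at ${month_to_date_usd:.2f} (cap ${cap:.2f})")
    elif month_to_date_usd >= 0.8 * cap:
        print(f"WARNING: {agent} at 80% of its ${cap:.2f} monthly cap")

check_spend("donor-reply-bot", 43.10)   # -> warning at 80% of cap
check_spend("mystery-agent", 12.00)     # -> unregistered spend, itself a finding
```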

    Governance Risks: The Questions You Cannot Answer

    Ungoverned agents create accountability gaps with funders and regulators

    Consider what happens when a funder, auditor, or regulator asks: How many AI agents does your organization currently have deployed? What data do they access? Who is responsible for each? For most nonprofits with agent sprawl, the honest answer to all three questions is "we don't know." That answer is increasingly unacceptable. Foundation funders are adding AI governance requirements to grant agreements. Auditors are beginning to treat AI agent inventories as an audit scope item. State privacy regulators are asking about automated decision-making systems.

    Beyond external accountability, ungoverned agents create operational failures. A fundraising bot and a communications bot trained on different data can produce contradictory messages to donors. Two agents accessing the same calendar system can schedule conflicting appointments. An agent making grant compliance claims may be operating on outdated program data. These are not hypothetical edge cases. They are the predictable consequences of deploying agents without an orchestration layer or shared governance framework. The Agent Governance for Nonprofit Boards framework provides a useful starting point for board-level oversight structures.

    Warning Signs Your Nonprofit Has an Agent Sprawl Problem

    Agent sprawl rarely announces itself. It reveals itself gradually through operational friction, budget confusion, and governance questions that nobody can answer cleanly. These warning signs, drawn from enterprise AI governance research and adapted to the nonprofit context, indicate that the problem may already be present.

    Leadership cannot answer 'How many AI agents do we have deployed right now?' without significant investigation
    Different departments are using different AI tools for similar goals, with no awareness of each other's work
    An employee leaves and takes all institutional knowledge of an AI automation with them, leaving behind a running agent nobody fully understands
    AI-related subscription costs appear in the budget under multiple department line items with no consolidated view
    Staff are regularly using personal or free-tier AI accounts for work tasks because approved tools don't meet their needs
    A data governance question arises and nobody can quickly confirm which AI tools have access to what data
    Agents are producing conflicting outputs, such as two bots sending contradictory information to the same donor
    IT discovers new API connections or OAuth authorizations in core systems that nobody requested or approved
    Grant reports reference AI tools your organization can no longer locate or document
    Security or IT staff learn about agents only when something breaks

    The Shadow AI Connection

    IBM's 2025 research found that only 37% of organizations have policies to detect shadow AI. For nonprofits, where staff are resourceful and resource-constrained, and where free AI tiers are widely available, shadow AI usage is likely even higher than in enterprise environments. The Building an AI Governance Framework article examines this pattern in more depth. Shadow AI agents are particularly dangerous because they combine autonomous action with no organizational oversight at all.

    How to Audit Your Existing AI Agent Landscape

    Before you can govern what you have, you need to know what you have. An agent audit is not a one-time event. It is a discovery process that should become a recurring practice. The following steps provide a practical approach sized for nonprofits without dedicated AI governance staff.

    Step 1: Discovery Sweep

    Survey all departments with a structured questionnaire: What AI tools are you using? What platforms have AI features enabled? What automations are running? Avoid the instinct to rely on voluntary disclosure alone. People often do not realize that the tools they use contain AI components, or they assume that "AI features" in an existing tool do not count as an "AI agent."

    • Review OAuth app connections in Google Workspace, Microsoft 365, Salesforce, and other core platforms to see what third-party applications have been granted access (see the sketch after this list)
    • Review vendor invoices and subscription records for AI platform fees or "AI add-on" line items that may have been silently enabled
    • Check Zapier, Make, n8n, and Power Automate workflow lists, which often contain AI-powered automations created without IT involvement
    • Ask grant program staff specifically about AI components in grant-funded initiatives, since these frequently fall outside normal IT review
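    For the Google Workspace portion of that sweep, the Admin SDK Directory API exposes a tokens.list endpoint that reports which third-party apps hold OAuth grants for each user, and with what scopes. Below is a minimal sketch, assuming a service account with domain-wide delegation is already configured; the key file name, admin address, and user list are placeholders, and in practice you would page through the full user directory rather than hardcode addresses.

```python
# A minimal sketch of the OAuth review step using the Google Workspace
# Admin SDK Directory API (tokens.list). Credentials setup is assumed;
# the file path, admin address, and user list are placeholders.
from googleapiclient.discovery import build
from google.oauth2 import service_account

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user.security"]
creds = service_account.Credentials.from_service_account_file(
    "admin-sa.json", scopes=SCOPES
).with_subject("admin@example.org")  # hypothetical delegated admin account

service = build("admin", "directory_v1", credentials=creds)

for user in ["jordan@example.org", "sam@example.org"]:  # placeholder list
    tokens = service.tokens().list(userKey=user).execute().get("items", [])
    for t in tokens:
        # displayText is the app name; scopes show what it can reach
        print(f"{user}: {t.get('displayText')} -> {t.get('scopes')}")
```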

    Step 2: Build the Agent Inventory

    For each agent discovered, document a standard set of attributes. At its simplest, this inventory is a maintained spreadsheet; a structured sketch of the same record format follows the list below. The goal is not a perfect database. The goal is an authoritative list that can be referenced when governance questions arise.

    • Agent name and purpose: A one-sentence description of what it does and why it was deployed
    • Platform and vendor: What tool or platform hosts it, and who the vendor is
    • Owner: The staff member responsible for this agent. If no one is designated, that is itself a finding
    • Data access scope: What systems it connects to, and what types of data it touches
    • Authorization method: How it authenticates (OAuth, API key, service account), not the credentials themselves
    • Deployment and review dates: When it was created and when it was last formally reviewed
    • Status and risk classification: Active/inactive/deprecated, and a simple High/Medium/Low risk rating based on data sensitivity
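    If you prefer structured data over a spreadsheet, here is a minimal sketch of that record format; the field names mirror the attribute list above, and the sample agent is invented for illustration.

```python
# A minimal sketch of the agent inventory as structured records,
# written out as CSV. A shared spreadsheet works equally well.
from dataclasses import dataclass, asdict
from datetime import date
import csv

@dataclass
class AgentRecord:
    name: str
    purpose: str            # one-sentence description of what and why
    platform: str           # hosting tool and vendor
    owner: str              # responsible staff member ("" is itself a finding)
    data_access: str        # systems connected to and data types touched
    auth_method: str        # OAuth / API key / service account (no secrets!)
    deployed: date
    last_reviewed: date
    status: str             # active / inactive / deprecated
    risk: str               # High / Medium / Low, by data sensitivity

inventory = [AgentRecord(
    "DonorReplyBot", "Drafts first-pass replies to donor emails",
    "Copilot Studio (Microsoft)", "sam@example.org",
    "CRM contacts (donor PII), shared mailbox", "OAuth",
    date(2025, 6, 1), date(2026, 1, 10), "active", "High",
)]

with open("agent_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(inventory[0])))
    writer.writeheader()
    writer.writerows(asdict(r) for r in inventory)
```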

    Step 3: Risk-Classify and Rationalize

    Once you have an inventory, classify each agent by risk level. High-risk agents have access to donor PII, beneficiary case data, financial systems, or external communications. Medium-risk agents access internal workflow data or draft internal content. Low-risk agents operate on non-sensitive, public-facing content.

    Then ask four rationalization questions for each agent. Is it still actively used? Does someone own it and know how to maintain it? Does it duplicate another agent's functionality? Does its current data access scope still match its original purpose? Agents that fail these questions are candidates for immediate decommissioning or redesign.
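    Both passes are simple enough to script against the inventory records sketched earlier. In this illustrative sketch, the sensitivity keywords stand in for your own data categories, and the questions requiring human judgment are flagged rather than decided automatically.

```python
# A minimal sketch of the classification and rationalization pass over
# AgentRecord entries. Keywords are illustrative; tune to your own data.
SENSITIVE = ("donor pii", "case data", "financial", "external")
INTERNAL = ("workflow", "drafting")

def classify(data_access: str) -> str:
    text = data_access.lower()
    if any(k in text for k in SENSITIVE):
        return "High"
    if any(k in text for k in INTERNAL):
        return "Medium"
    return "Low"

def rationalize(agent) -> list[str]:
    """Return the rationalization questions this agent clearly fails."""
    failures = []
    if agent.status != "active":
        failures.append("not actively used")
    if not agent.owner:
        failures.append("no designated owner")
    # Duplication and scope drift need human judgment; route those two
    # questions to the quarterly review rather than deciding them here.
    return failures
```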

    Building a Governance Framework That Prevents Future Sprawl

    The audit addresses what already exists. Governance prevents the problem from recurring. For nonprofits without dedicated AI operations teams, a lightweight governance structure that builds on existing IT and data governance practices is more sustainable than a standalone AI oversight bureaucracy.

    Establish an Agent Registry as Official Policy

    The single most impactful governance practice is simple: no agent is considered authorized unless it appears in the official registry. This shifts the default from "agents proliferate unless stopped" to "agents require approval before deployment." The registry is not bureaucracy for its own sake. It is the mechanism that answers governance questions quickly and enables rationalization reviews.

    Registry governance rules should include: a formal intake process before any new agent is deployed, quarterly review of all active agents, more frequent review for high-risk agents, and a named registry owner who is accountable for maintaining it. The AI Champions framework suggests that this registry owner role can be fulfilled by a part-time AI Steward rather than requiring a dedicated hire.
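    The review cadence itself can be automated from the registry. Here is a minimal sketch, assuming the record format from the audit section; the intervals are illustrative policy choices, with high-risk agents reviewed more often than the quarterly baseline.

```python
# A minimal sketch of review-due checks driven by the registry.
# Intervals are illustrative: monthly for high-risk, quarterly otherwise.
from datetime import date, timedelta

REVIEW_INTERVAL = {
    "High": timedelta(days=30),
    "Medium": timedelta(days=90),
    "Low": timedelta(days=90),
}

def overdue(agent, today: date | None = None) -> bool:
    today = today or date.today()
    return today - agent.last_reviewed > REVIEW_INTERVAL[agent.risk]

# e.g. [a.name for a in inventory if overdue(a)] feeds the review agenda
```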

    Apply the Principle of Least Privilege at Deployment

    When a new agent is approved and deployed, it should be provisioned with only the minimum permissions required for its stated purpose. An agent that drafts donor thank-you letters does not need read access to your entire CRM contact database. An agent that schedules volunteer shifts does not need access to your financial systems. Overpermissioned agents are a primary driver of both security exposure and data governance failure.

    Each agent should also be assigned a unique identity within your systems. Agent actions should be attributable to a specific, documented identity, not a shared service account or a staff member's personal credentials. This makes behavior monitoring possible and credential revocation practical when an agent is retired.
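    One way to enforce least privilege at intake is to compare the scopes an agent requests against an allowlist for its stated purpose, before anything is provisioned. A minimal sketch follows; the purpose names and scope strings are illustrative placeholders, not any vendor's real scopes.

```python
# A minimal sketch of a least-privilege check at intake: anything an
# agent requests beyond the minimum for its purpose is flagged before
# provisioning. Purpose names and scope strings are placeholders.
ALLOWED_SCOPES = {
    "donor-thank-you-drafting": {"crm.contacts.read_assigned", "mail.draft"},
    "volunteer-scheduling": {"calendar.readwrite", "volunteers.read"},
}

def excessive_scopes(purpose: str, requested: set[str]) -> set[str]:
    """Return any requested scopes beyond the minimum for this purpose."""
    return requested - ALLOWED_SCOPES.get(purpose, set())

print(excessive_scopes("donor-thank-you-drafting",
                       {"crm.contacts.read_all", "mail.draft"}))
# -> {'crm.contacts.read_all'}: flag before provisioning, not after
```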

    Build Retirement Into the Deployment Process

    "Forgotten retirements" are the primary driver of long-term agent accumulation. The moment an agent is deployed, set a review date. Treat agent retirement as an explicit workflow step, not an afterthought. When a project ends, explicitly decommission its agents: revoke credentials, remove API access, document the retirement in the registry, and preserve audit trails for the required retention period.

    Organizations that treat retirement as a deliberate step see dramatically lower agent accumulation over time. This is especially important for grant-funded projects, where the natural project-end milestone provides an automatic trigger for agent decommissioning review.
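    Retirement works best as a checklist that must be completed before the registry status changes. Here is a minimal sketch, reusing the inventory records from the audit section; the step names are placeholders for your own platforms' admin actions.

```python
# A minimal sketch of retirement as a gated, checklisted workflow: the
# registry status cannot change until every step is recorded as done.
from datetime import date

RETIREMENT_STEPS = [
    "revoke OAuth tokens and API keys",
    "remove app connections from core systems",
    "archive or delete stored data per retention policy",
    "preserve audit logs for required retention period",
]

def retire(agent, reason: str, completed_steps: set[str]) -> None:
    missing = [s for s in RETIREMENT_STEPS if s not in completed_steps]
    if missing:
        raise RuntimeError(f"Cannot retire {agent.name}: incomplete {missing}")
    agent.status = "deprecated"
    print(f"{agent.name} retired {date.today()}: {reason}")
```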

    Align Agent Governance With Existing Policies

    Avoid creating a parallel governance structure that adds bureaucratic overhead without adding protection. Instead, integrate agent governance into your existing data governance policy, IT acceptable-use policy, and vendor management process. Agents that access donor data should go through the same data privacy review as any other tool accessing that data. Agents provided by external vendors should go through the same vendor risk assessment you apply to any software purchase.

    This integration approach is particularly important for small organizations where governance capacity is limited. One person cannot realistically maintain a fully separate AI governance track while also handling all other IT and operations responsibilities. Embedding agent oversight into existing review cycles makes it sustainable. The Building an AI Governance Framework article provides additional context on embedding governance into existing organizational structures.

    Agent Lifecycle Management: From Provisioning to Retirement

    Governance is not just about what happens at the moment an agent is deployed. It covers the complete arc from the initial idea through eventual retirement. Agent Lifecycle Management (ALM) provides a framework for managing that arc consistently; a sketch of the lifecycle as an explicit state machine follows the three stages below.

    Provisioning Stage

    Before deployment

    • Complete intake form documenting purpose, owner, and data access needs
    • Apply minimum necessary permissions
    • Assign unique identity, not shared credentials
    • Set automatic review date at deployment
    • Add to official registry before going live

    Operational Stage

    While active

    • Log what data was accessed and what actions were taken
    • Monitor for behavioral drift as models update
    • Conduct periodic rationalization reviews
    • Rotate API keys and credentials on schedule
    • Alert on anomalous consumption patterns

    Retirement Stage

    At end of life

    • Explicitly revoke all credentials and API access
    • Document retirement in registry with date and reason
    • Handle stored data per retention policy
    • Preserve audit trails for required retention period
    • Confirm no orphaned connections remain in core systems
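    Taken together, the three stages form a simple state machine: an agent may only move along documented transitions, which is what closes the "deployed but never registered" and "ended but never retired" gaps. A minimal sketch of that idea, with illustrative state names:

```python
# A minimal sketch of the agent lifecycle as an explicit state machine.
# Only documented transitions are legal; "retired" is terminal.
VALID_TRANSITIONS = {
    "proposed":    {"provisioned"},  # intake approved, registry entry created
    "provisioned": {"active"},       # minimum permissions and identity assigned
    "active":      {"retired"},      # credentials revoked, retirement logged
    "retired":     set(),            # terminal: only the audit trail remains
}

def transition(current: str, target: str) -> str:
    if target not in VALID_TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal lifecycle move: {current} -> {target}")
    return target

state = "proposed"
for step in ("provisioned", "active", "retired"):
    state = transition(state, step)  # any skipped step raises immediately
```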

    What Good Agent Governance Actually Enables

    It is worth being explicit about what an agent governance framework buys you, beyond avoiding the risks described above. Organizations that implement even a basic registry and lifecycle process consistently report tangible operational benefits that justify the modest time investment required.

    The most immediate benefit is the ability to answer governance questions quickly and confidently. When a foundation funder asks about your AI governance practices, when an auditor requests documentation of your automated decision-making systems, or when a state data privacy regulator inquires about tools accessing personal information, you have a coherent answer. This is not a minor benefit. AI governance inquiries from funders and regulators are increasing steadily, and organizations that cannot demonstrate basic oversight are facing growing scrutiny.

    A maintained registry also prevents the duplication that wastes resources in uncoordinated deployments. When a department wants to deploy a new agent, checking the registry first reveals whether a similar capability already exists. This avoids paying for two tools doing the same job, prevents the contradictory-output problem that emerges when parallel agents work from different data, and creates opportunities for cross-departmental coordination that often yields better outcomes than siloed deployments.

    Perhaps most importantly for resource-constrained nonprofits, basic agent governance reduces the cost of something going wrong. A governed agent has a documented owner who can act quickly when problems arise. It has documented data access that clarifies breach scope if there is a security incident. It has a clear decommissioning path when it is no longer needed. These properties do not prevent all failures, but they dramatically reduce the cost and disruption when failures occur. Building this foundation now, before agent count scales further, is substantially easier than attempting to retrofit governance onto a fully sprawled environment.

    The organizations that are successfully navigating the transition from AI experimentation to AI at scale, the same organizations referenced in research on building agent orchestration layers, share a common characteristic. They treated governance not as a constraint on AI adoption but as the prerequisite for it. You cannot safely scale what you cannot see.

    Conclusion

    Agent sprawl is not a failure of AI ambition. It is the predictable consequence of AI adoption outpacing governance development. Every agent in a sprawled environment was probably deployed for legitimate reasons. The problem is not individual decisions. The problem is the absence of a system that connects individual decisions into a coherent, visible, managed whole.

    For nonprofits, the combination of data sensitivity, limited technical oversight capacity, and high stakes around donor and beneficiary trust makes ungoverned agent proliferation a meaningful organizational risk. The good news is that the governance structures required to address it are not technically complex. A maintained registry, a lightweight intake process, clear ownership assignments, and a formal retirement workflow represent achievable practices for organizations of any size.

    Start with the audit. Know what you have. Then build the lightweight governance that prevents tomorrow's sprawl from compounding today's. The organizations doing AI well in 2026 are not necessarily the ones with the most agents. They are the ones who know what every agent does, who owns it, and why it exists.

    Ready to Get Your AI Agents Under Control?

    One Hundred Nights helps nonprofits design governance frameworks that make AI adoption sustainable, auditable, and safe.