    Leadership & Strategy

    How to Talk to Institutional Funders About Your AI Strategy

    As AI becomes central to nonprofit operations in 2026, communicating your AI strategy to institutional funders requires balancing enthusiasm with transparency, demonstrating responsible implementation, and addressing legitimate concerns while showcasing the impact potential. Foundations are simultaneously excited about AI's possibilities and worried about its risks—navigating this tension effectively can strengthen funder relationships and unlock new resources for your organization.

    Published: February 4, 2026 · 14 min read
    [Image: Nonprofit leaders presenting AI strategy to foundation board members]

    The relationship between nonprofits and their institutional funders is entering uncharted territory as artificial intelligence transforms how organizations operate, measure impact, and serve communities. A 2026 report from the Center for Effective Philanthropy reveals a striking gap: both foundation and nonprofit leaders agree that foundation staff lack a clear understanding of nonprofits' AI-related needs. Meanwhile, ten of America's most influential foundations have announced Humanity AI, a $500 million initiative aimed at ensuring AI delivers for people and communities, signaling that funders increasingly view AI as strategic infrastructure rather than optional technology.

    This creates both opportunity and risk for nonprofits. Organizations that can articulate clear, responsible AI strategies position themselves as forward-thinking leaders deserving continued investment. Those that avoid the conversation or implement AI quietly may find themselves unprepared when funders ask pointed questions about data governance, bias mitigation, or sustainability. In 2026, the question is no longer whether to talk to funders about AI, but how to do so effectively.

    The stakes are significant. Only 10% of grantmaking foundations currently accept or plan to accept grant applications created by generative AI, though most admit they can't actually detect AI-assisted proposals. Government agencies like the National Institutes of Health now mandate disclosure of AI-generated content in proposals. Meanwhile, funders express optimism about AI as a tool for streamlining operations, improving data quality, and strengthening grant applications—but worry about whether nonprofits have the capacity and governance structures to implement AI responsibly.

    Foundation leaders and nonprofit executives often talk past each other when discussing AI. Funders focus on risk management, data privacy, equitable access, and long-term sustainability. Nonprofits emphasize efficiency gains, expanded reach, and competitive necessity. Bridging this communication gap requires understanding what funders genuinely care about, what questions keep them up at night, and how to present your AI work in ways that address their priorities while advancing your mission.

    This article provides a comprehensive framework for communicating with institutional funders about your AI strategy. You'll learn how to frame AI initiatives in mission-aligned terms, proactively address concerns about ethics and equity, demonstrate governance structures that build confidence, show financial sustainability beyond initial implementation, and navigate disclosure requirements when applying for grants. Whether you're seeking funding specifically for AI projects or simply need to discuss how AI factors into your broader organizational strategy, these approaches help you build funder trust while maintaining the flexibility to innovate.

    Understanding What Funders Really Care About

    Foundations aren't opposed to AI—many are actively investing in AI initiatives and encouraging grantees to explore technology. However, they're navigating their own learning curve while managing fiduciary responsibilities to ensure grant dollars create positive impact without causing harm. Understanding the specific concerns driving funder questions helps you address them proactively rather than defensively.

    Research from the Center for Effective Philanthropy shows that foundation leaders worry about three primary areas: whether nonprofits have adequate capacity and expertise to implement AI effectively, whether AI systems will perpetuate or amplify existing inequities in program delivery, and whether organizations have thought through long-term sustainability and maintenance costs. These aren't abstract philosophical concerns—they're practical questions about stewardship of resources and responsible grantmaking.

    Funders also grapple with their own policies around AI use in the grantmaking process. Some foundations experiment with AI for reviewing proposals or identifying potential grantees, while others prohibit staff from using AI tools for certain tasks. This internal ambivalence means funders may ask questions they're still working through themselves. Approaching these conversations as shared learning opportunities rather than tests you must pass can create productive dialogue.

    Mission Alignment and Impact

    Does AI advance your core mission or distract from it?

    Funders want assurance that AI serves your mission rather than becoming a technology project for its own sake. They've seen too many organizations chase trends that don't align with their strengths or strategic priorities.

    • How does AI help you serve more people or serve them better?
    • What would you stop doing to make time for AI implementation?
    • How does this fit your strategic plan and organizational priorities?
    • What happens if the AI initiative doesn't work as planned?

    Responsible Implementation

    Are you implementing AI thoughtfully and ethically?

    With 82% of nonprofits using AI but only 10% having policies, funders worry about gaps between adoption and governance. They want to see deliberate frameworks, not ad hoc experimentation.

    • Do you have AI usage policies and governance structures?
    • How are you protecting sensitive beneficiary and donor data?
    • What processes detect and mitigate bias in AI systems?
    • Who's accountable when AI systems make mistakes?

    Organizational Capacity

    Does your team have the skills and bandwidth?

    Funders know that 69% of nonprofit AI users have no formal training. They question whether organizations rushing into AI have the foundational capacity to succeed or whether they're setting themselves up for failure.

    • What training and skill development supports implementation?
    • Do you have dedicated staff time for this work?
    • How will you maintain systems after initial implementation?
    • What's your plan if key staff leading this work leave?

    Financial Sustainability

    Can you afford AI beyond the pilot phase?

    Foundations worry about funding AI implementations that prove unsustainable after grant dollars end. They've seen organizations struggle when software subscriptions increase, data storage costs grow, or technical support needs exceed budgets.

    • What are the total costs including hidden and long-term expenses?
    • How will you fund ongoing maintenance and updates?
    • What's your exit strategy if costs become prohibitive?
    • Have you modeled costs under different growth scenarios?

    Framing Your AI Strategy in Mission-Aligned Terms

    The most effective way to discuss AI with funders is to start with mission and work backward to technology, not the other way around. Rather than leading with "We're implementing AI," begin with "We're expanding our capacity to serve 500 more families annually by automating our intake screening process." The technology becomes a means to achieve mission-driven ends, not the story itself.

    This approach immediately connects AI to outcomes funders care about. It demonstrates strategic thinking rather than technology chasing. It also makes clear that you're solving real organizational challenges, not implementing AI because it seems like what everyone else is doing. Funders support organizations with clear problems to solve, not organizations looking for problems their new technology might address.

    The Mission-First Communication Framework

    Step 1: Articulate the Mission Challenge

    Start by describing the gap between your mission aspirations and current capacity. Be specific about numbers, populations served, and desired outcomes. This establishes that you're focused on impact, not technology for its own sake.

    "Our after-school program can serve 200 students, but we have 600 on our waitlist. Our current intake process requires two hours of staff time per family to assess eligibility and needs. We're turning away families we could serve because we lack the administrative capacity to process applications efficiently."

    Step 2: Explain the Strategic Solution

    Describe how you evaluated different approaches to the challenge and why AI emerged as the best option. This demonstrates that you've done your homework and considered alternatives rather than jumping to technology first.

    "We explored hiring additional intake coordinators, but that would cost $120K annually and only increase capacity by 100 families. We also looked at simplifying our eligibility criteria, but that would mean serving families who don't truly need our services. AI-powered intake screening allows us to gather the same detailed information while reducing processing time from two hours to 20 minutes, freeing our staff to focus on relationship-building with families once enrolled."

    Step 3: Connect to Funder Priorities

    Link your AI strategy to specific outcomes or strategic themes the funder cares about. Reference their grantmaking priorities, recent communications, or sector-wide initiatives they support.

    "This aligns with your foundation's emphasis on scaling effective interventions and reducing administrative burden on frontline staff. By automating routine screening, our program coordinators can spend 80% of their time on direct family engagement rather than 50% as currently. This increased face time directly supports the relationship-centered approach your foundation has identified as key to successful youth development programs."

    Step 4: Address Implementation Thoughtfully

    Only after establishing mission context should you discuss technical implementation details. Even then, focus on governance, ethics, and process rather than algorithms and features. Show that you're implementing AI thoughtfully, not recklessly.

    "We're piloting the AI screening tool with a small cohort while maintaining our traditional process in parallel. This allows us to identify any bias in eligibility decisions before scaling. We've established an oversight committee including program staff, families we serve, and a data ethics advisor to review the AI system's decisions monthly. All families will have the right to request human review of AI-generated recommendations."

    Communication Pitfalls to Avoid

    Certain approaches to discussing AI trigger immediate skepticism from funders. Avoiding these communication patterns helps you maintain credibility and build confidence.

    • Overpromising Unrealistic Results: Claiming AI will "revolutionize" your work or solve all problems makes funders question your judgment. Be specific about expected improvements and honest about limitations.
    • Dismissing Legitimate Concerns: Responding to questions about bias or privacy with "we'll figure it out" or "that's not really an issue" suggests you haven't thought through risks carefully.
    • Technology Jargon Without Context: Talking about "neural networks," "machine learning models," or "natural language processing" without explaining why these matter to your mission creates distance rather than understanding.
    • Competitive Pressure Justifications: Leading with "everyone else is using AI so we need to as well" positions you as reactive rather than strategic. Focus on your specific needs, not keeping up with peers.
    • Vague Implementation Plans: Presenting AI as something you'll "explore" or "test out" without clear timelines, budgets, or success metrics suggests a lack of serious planning.

    Demonstrating Responsible AI Governance

    The governance gap—82% of nonprofits using AI while only 10% have policies—represents both a challenge and an opportunity. Organizations that can demonstrate thoughtful governance structures immediately differentiate themselves from peers who are implementing AI ad hoc. You don't need perfect, comprehensive policies before talking to funders, but you do need to show you're taking governance seriously.

    Funders increasingly reference frameworks like the NIST AI Risk Management Framework or the EU AI Act when thinking about responsible AI. Demonstrating familiarity with these standards (even if you're not formally implementing them) shows you understand the broader context and aren't operating in isolation. Organizations like United Way, Oxfam, and Save the Children have developed AI policies that can serve as models, and referring to how larger organizations approach these issues lends credibility to your own governance efforts.

    Essential Elements of AI Governance

    You don't need a 50-page AI policy document to satisfy funders, but you do need clear answers to fundamental governance questions. The following elements demonstrate that you're approaching AI thoughtfully.

    AI Usage Policy and Acceptable Use

    A clear policy outlining when staff can and cannot use AI tools, what types of information can be shared with AI systems, and approval processes for new AI applications. This doesn't need to be overly restrictive, but it should show you've thought through appropriate boundaries.

    When discussing this with funders, explain both the policy and the process you used to develop it. Did you involve staff from different departments? Did you consult with beneficiaries or community members? Did you review policies from similar organizations? The participatory process matters as much as the final document.

    Data Governance and Privacy Protection

    Clear protocols for what data gets used with AI systems, how you protect personally identifiable information, and compliance with regulations like HIPAA, FERPA, or GDPR if relevant to your work. Funders want to know you're not inadvertently exposing sensitive beneficiary information.

    Be prepared to discuss specific safeguards: Do you anonymize data before analysis? Do you use on-premise or local AI solutions for sensitive information? Have you reviewed vendor contracts to understand how they use your data? These concrete details build confidence more than general assurances about "taking privacy seriously."

    Bias Detection and Mitigation

    Processes for regularly reviewing AI system outputs to identify potential bias, mechanisms for affected individuals to report concerns, and clear remediation procedures when bias is detected. This is particularly critical for AI systems making recommendations about service eligibility or resource allocation.

    Funders appreciate honesty about the challenges of bias detection. Rather than claiming your systems are "bias-free," acknowledge that bias can emerge in unexpected ways and explain your ongoing monitoring approach. Referencing established research and guidance on AI bias shows you're engaging with the broader conversation.

    Human Oversight and Accountability

    Clear designation of who's accountable for AI system performance, oversight mechanisms ensuring humans review high-stakes decisions, and escalation procedures when AI systems produce questionable results. Funders worry about "AI autopilot" scenarios where no one takes responsibility for outcomes.

    Describe both the formal accountability structure (who reports to whom, board oversight, external advisors) and the practical decision-making process. How often do you review AI system performance? What triggers a deeper investigation? Who has authority to pause or discontinue an AI system that's not working as intended?

    Transparency and Disclosure

    Commitments to transparency about where and how you use AI, both internally with staff and externally with beneficiaries and stakeholders. This might include updating your privacy policy, creating plain-language explanations of AI systems for beneficiaries, or developing donor communications about AI use.

    When discussing transparency with funders, explain both what you disclose and your reasoning. Some organizations proactively share AI use in annual reports; others take a more targeted approach. The key is demonstrating intentionality rather than trying to hide AI implementation from public view.

    Creating a Governance Roadmap If You Don't Have Policies Yet

    If you're using AI but haven't formalized governance structures, don't hide this from funders. Instead, present a clear roadmap for developing appropriate policies and demonstrate that you're moving deliberately toward stronger governance.

    Honest Framing for Organizations Building Governance:

    "We began using AI tools for [specific applications] six months ago and have learned valuable lessons about what works in our context. We're now at the stage of formalizing governance structures to support responsible scaling. Over the next quarter, we're developing an AI usage policy in consultation with our leadership team, program staff, and a volunteer data ethics advisor. We're also conducting an audit of current AI use across the organization to ensure we have full visibility before expanding further."

    This framing acknowledges you don't have everything figured out while demonstrating you're taking a deliberate, thoughtful approach. It positions governance development as a sign of organizational maturity rather than a gap to apologize for.

    Include specific milestones and timelines in your governance roadmap: policy draft completion dates, staff training sessions, board approval processes, and implementation checkpoints. This concrete planning reassures funders that governance isn't just something you'll "get to eventually."

    Addressing Long-Term Financial Sustainability

    Funders have seen too many technology implementations that work brilliantly with grant funding but collapse when support ends. They worry about creating dependency on tools your organization can't sustain independently. Addressing these concerns upfront demonstrates strategic thinking and protects your relationship with funders who don't want to see their investments evaporate.

    Building a Complete Financial Picture

    When discussing AI costs with funders, go beyond initial implementation expenses to paint a complete picture of total cost of ownership. This thoroughness demonstrates financial sophistication and realistic planning.

    Implementation Phase Costs

    • Software licenses and setup fees
    • Consulting or technical assistance if needed
    • Staff training and capacity building
    • Data preparation and system integration
    • Change management and organizational adaptation

    Ongoing Operational Costs

    • Annual software subscription renewals (often increasing over time)
    • API usage fees that scale with organizational growth
    • Data storage costs as you accumulate more information
    • Ongoing training for new staff and skill refreshers
    • Technical support and troubleshooting assistance
    • System maintenance, updates, and occasional customization

    Hidden or Indirect Costs

    • Staff time for system administration and monitoring
    • Vendor relationship management and contract negotiations
    • Potential increased cybersecurity needs
    • Hardware upgrades if AI tools require more computing power
    • Audit and compliance costs if AI affects regulated activities

    Presenting Financial Sustainability to Funders:

    "We've modeled AI costs under three scenarios: current scale, 25% growth, and 50% growth over three years. At current scale, annual costs stabilize at $15K after year one. We're building this into our operating budget through a combination of efficiency savings from automated processes (estimated $20K in staff time annually) and allocating 3% of our technology budget specifically for AI tools. If we grow significantly, we'll seek additional capacity-building grants in year two, but the core system remains sustainable at our current size with existing resources."

    This type of detailed financial planning demonstrates that you've thought through sustainability realistically. It also signals that you're not expecting funders to support AI indefinitely—you have a path to organizational ownership of costs.
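
    To make scenario modeling concrete for a funder conversation, a simple spreadsheet-style calculation is usually enough. The sketch below uses only the hypothetical figures from the example statement above ($15K stabilized annual cost, $20K in estimated staff-time savings) plus assumed cost multipliers for each growth scenario; every number is illustrative rather than a benchmark.

    # Illustrative three-scenario sustainability check. All figures come from
    # the hypothetical example above or are assumptions, not benchmarks.

    BASE_ANNUAL_COST = 15_000    # stabilized year-two cost at current scale
    EFFICIENCY_SAVINGS = 20_000  # estimated annual staff-time savings

    # Assumed cost multipliers for each growth scenario (illustration only).
    SCENARIOS = {"current scale": 1.00, "25% growth": 1.25, "50% growth": 1.50}

    for name, multiplier in SCENARIOS.items():
        annual_cost = BASE_ANNUAL_COST * multiplier
        net = EFFICIENCY_SAVINGS - annual_cost
        verdict = (
            "covered by efficiency savings"
            if net >= 0
            else f"needs ${-net:,.0f}/year from other sources"
        )
        print(f"{name:>13}: ${annual_cost:,.0f}/year -> {verdict}")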

    Alternative Approaches When Resources Are Limited

    If comprehensive financial sustainability seems challenging, consider alternative approaches that reduce long-term costs while still advancing AI capabilities:

    • Open Source AI Tools: Explore open source alternatives that eliminate subscription costs while providing similar functionality to commercial tools.
    • Nonprofit-Specific Discounts: Leverage programs from TechSoup, Microsoft Nonprofits, Google for Nonprofits, and Salesforce.org that offer deep discounts or free access to AI-enabled platforms.
    • Shared Services Models: Partner with similar organizations to share AI infrastructure costs, similar to how some nonprofits share back-office systems or fiscal sponsorship arrangements.
    • Phased Implementation: Start with low-cost or free AI tools to build capacity and demonstrate value before investing in more expensive enterprise solutions.

    Navigating AI Disclosure in Grant Applications

    The question of whether and how to disclose AI use in grant applications has become more pressing in 2026. Government agencies like the NIH now mandate disclosure, while foundation policies vary widely. Navigating this landscape requires understanding both formal requirements and strategic communication considerations.

    Understanding Current Disclosure Expectations

    Government Funding Agencies

    Federal agencies increasingly require explicit disclosure of AI use in proposals. The NIH mandates that investigators disclose any AI-generated content including text, figures, or methodologies. This policy aims to ensure transparency while encouraging innovative use of AI tools.

    When writing government proposals, err on the side of over-disclosure. Clearly mark AI-assisted sections, explain how AI tools were used (e.g., "initial draft created with AI assistance, then substantially revised and fact-checked by research staff"), and ensure all factual claims are independently verified. Remember that AI should support your proposal development, not generate content you can't verify or stand behind.

    Foundation and Private Funder Policies

    Foundation policies vary dramatically. Only 10% currently accept or plan to accept AI-generated applications, though most admit they can't reliably detect AI use. Some foundations are experimenting with voluntary disclosure checkboxes, while others remain silent on the issue.

    When foundation guidelines don't explicitly address AI, consider the relationship context. For long-term funding partners who know your organization well, proactive transparency builds trust. For competitive first-time applications, focus on the strength of your proposal rather than the tools used to create it. If directly asked about AI use, always answer honestly—the risk to your reputation from being caught in misleading statements far outweighs any potential benefit.

    When to Disclose Proactively vs. When Asked

    Proactive disclosure makes sense when AI is central to your proposed work (e.g., requesting funding to implement AI systems), when the funder has expressed interest in AI innovation, or when you're highlighting AI use as an efficiency that allows more resources to go toward programmatic work.

    Disclosure upon request applies when AI played a minor supporting role in proposal development (editing, formatting, literature review assistance), when funders haven't indicated any policy or interest in the topic, or when the focus should remain squarely on your programmatic approach rather than tools used. The key principle: never hide AI use if directly questioned, but don't derail your narrative to discuss tools that aren't central to the story.

    Best Practices for AI-Assisted Grant Writing

    Whether or not you disclose AI use, following these practices ensures your proposals meet ethical standards and effectively represent your organization:

    • Never Fabricate Accomplishments or Data: AI should help articulate your genuine work, not create fictional achievements. Every claim in your proposal must reflect actual organizational capacity and results.
    • Customize Deeply, Don't Just Polish: Generic AI-generated boilerplate that doesn't reflect your organization's unique voice and community context will be obvious to experienced reviewers. Use AI as a starting point, then invest significant time customizing.
    • Verify All Factual Claims and Citations: AI systems can hallucinate statistics, citations, or research findings. Independently verify every fact, check every citation, and confirm every claim before submission.
    • Maintain Authentic Organizational Voice: Funders support relationships with organizations, not with well-written proposals. Ensure your application sounds like your team wrote it, reflecting your mission, values, and approach authentically.
    • Document Your Process: Keep records of how AI tools were used in proposal development. If questions arise later, you can provide transparent explanations of your process and the human judgment applied throughout.

    Sample Disclosure Language:

    "This proposal was developed with assistance from AI writing tools to improve clarity and organization. All programmatic content, data, and organizational descriptions reflect our actual work and were created and verified by staff. AI tools assisted with editing, formatting, and literature review, but all substantive decisions about program design, budget allocation, and evaluation approaches represent our team's professional judgment and organizational expertise."

    Building Long-Term Funder Relationships Around AI

    Conversations with funders about AI shouldn't be one-time events that happen only during proposal season. The organizations that build the strongest funder confidence treat these conversations as ongoing dialogues in which they share both successes and challenges, contributing to funders' own learning about effective AI support for nonprofits.

    Creating Opportunities for Ongoing Dialogue

    • Include AI Updates in Regular Reports: When submitting interim or annual reports, include brief sections on how AI tools are contributing to programmatic outcomes, what you're learning, and challenges you're navigating.
    • Invite Funders to See AI in Action: During site visits or virtual meetings, offer to demonstrate how AI tools work in your context. This demystifies the technology and shows funders the practical reality rather than abstract concepts.
    • Contribute to Sector Learning: Participate in funder-convened learning cohorts, contribute case studies to knowledge-sharing platforms, or present at sector conferences about your AI journey. This positions you as a thought partner rather than just a grantee.
    • Be Honest About Challenges and Failures: Funders appreciate honesty about what doesn't work. Sharing that you piloted an AI tool that didn't deliver expected results demonstrates a learning orientation and helps funders understand realistic expectations.
    • Seek Funder Input on Strategy: When facing significant AI decisions, consider consulting with key funders before finalizing plans. This isn't asking permission, but treating funders as strategic advisors who may offer valuable perspective.

    Positioning Your Organization as an AI Learning Partner

    Foundations are actively trying to understand how to support nonprofit AI adoption effectively. Organizations willing to share detailed implementation experiences become valuable learning partners. This can strengthen relationships and potentially unlock additional capacity-building support.

    Consider offering to participate in funder research on AI adoption, contributing to development of sector resources or toolkits, hosting peer learning sessions for other grantees, or reviewing draft guidelines or policies funders are developing. These contributions require time but position your organization as a sector leader and build goodwill that extends well beyond individual grant relationships.

    The Center for Effective Philanthropy's research showing that both nonprofits and foundations believe foundation staff lack understanding of AI needs suggests significant opportunity for organizations that can help bridge this gap through clear, practical communication about real-world implementation experiences.

    Building Funder Confidence in Your AI Strategy

    Talking to institutional funders about AI represents an opportunity to demonstrate organizational sophistication, strategic thinking, and commitment to responsible innovation. The funders making significant investments in nonprofit AI capacity—through initiatives like Humanity AI's $500 million commitment and OpenAI's $50 million fund—are looking for organizations that can articulate clear vision, demonstrate thoughtful governance, and show realistic understanding of both opportunities and challenges.

    Success in these conversations comes from leading with mission, addressing concerns proactively rather than defensively, demonstrating governance structures that build confidence, showing financial sustainability beyond initial implementation, and maintaining transparency about both successes and ongoing challenges. Organizations that can do this position themselves not just as grant recipients but as partners in advancing the sector's collective understanding of effective AI implementation.

    The communication gap identified by the Center for Effective Philanthropy—where both nonprofits and foundations acknowledge that foundation staff lack understanding of AI needs—presents both challenge and opportunity. Organizations that can translate their technical work into mission-focused narratives, explain governance approaches in accessible terms, and share honest implementation experiences help bridge this gap while strengthening their own funder relationships.

    Remember that funders want to see you succeed with AI. Their questions and concerns come from a desire to ensure their investments create lasting positive impact, not from skepticism about technology itself. Approaching these conversations as collaborative problem-solving rather than adversarial reviews creates space for authentic dialogue where both parties learn and where funders become true partners in your AI journey rather than just sources of funding to be managed.

    As AI becomes increasingly central to nonprofit operations in 2026 and beyond, the organizations that build the strongest funder relationships will be those that communicate openly, implement responsibly, share their learning generously, and maintain focus on mission impact above technological novelty. These principles guide effective communication whether you're seeking dedicated AI funding, discussing how AI factors into broader organizational strategy, or simply being transparent about tools you're already using.

    Need Help Developing Your AI Strategy?

    Whether you're preparing to discuss AI with funders or building the governance structures and strategic plans that give them confidence, we can help you develop approaches that balance innovation with responsibility. Our work focuses on helping nonprofits implement AI in ways that advance mission, protect stakeholders, and build long-term organizational capacity.