Transparency in Tech: Disclosing AI Use to Funders in Grant Applications
Navigate the evolving expectations around AI transparency in grant applications. Learn when disclosure is required, how to communicate AI use professionally, and why proactive transparency builds funder trust rather than creating barriers to funding.

The question of whether to disclose AI use in grant applications has moved from theoretical ethics debate to practical necessity. As AI tools become standard in nonprofit operations, funders are developing their own perspectives, expectations, and sometimes explicit requirements about how applicants use these technologies. Some foundations now include AI-related questions in their applications. Federal agencies like the NIH have established disclosure mandates. Program officers increasingly ask about proposal development processes during site visits and check-in calls. The landscape is evolving rapidly, and nonprofits that haven't developed clear policies around AI transparency risk being caught unprepared when funders inquire.
The underlying tension is understandable. Funders want to know that grants support genuine organizational capacity, not just sophisticated technology use. They worry about applications that obscure an organization's actual abilities behind AI-polished prose. They want proposals that authentically represent programs, outcomes, and institutional voices. At the same time, AI tools have become legitimate productivity enhancers that help under-resourced nonprofits compete with larger organizations. Drawing clear lines between acceptable tool use and problematic misrepresentation requires nuance that blanket policies often lack.
The good news is that most funders aren't opposed to AI use—they're opposed to AI misuse. When organizations deploy AI as a tool to enhance human work rather than replace it, when they maintain rigorous quality control and verification processes, when they're transparent about their methods, and when proposals genuinely reflect organizational capacity and voice, funders generally view AI assistance favorably. The challenge lies in communicating your AI practices in ways that build confidence rather than raise concerns, demonstrating that technology enhances rather than masks your capabilities.
This article explores the current landscape of funder expectations around AI disclosure, the ethical frameworks guiding transparency decisions, practical strategies for when and how to disclose AI use, and approaches for building organizational policies that position your nonprofit as both technologically capable and fundamentally trustworthy. Whether you're preparing for explicit disclosure requirements or developing proactive transparency practices, understanding this evolving terrain helps you navigate funder relationships with integrity while leveraging the efficiency benefits AI provides.
The organizations succeeding in this environment treat AI transparency not as a compliance burden but as an opportunity to demonstrate organizational values. By being thoughtful and proactive about disclosure, they differentiate themselves from nonprofits that hide or minimize AI use, building funder trust that transcends any individual proposal. This approach recognizes that grant relationships extend beyond single applications to ongoing partnerships where integrity matters more than any particular technical choice.
Understanding the Current Funder Landscape
Funder approaches to AI disclosure in 2026 span a wide spectrum, from explicit requirements to complete silence on the topic. Understanding where different funders fall on this spectrum helps you tailor disclosure decisions appropriately. More importantly, understanding the concerns driving funder policies—regardless of their specific requirements—helps you communicate in ways that address what funders actually care about rather than simply checking compliance boxes.
Federal Agency Requirements
Government funders have established the most explicit disclosure frameworks
Federal agencies have moved furthest toward explicit AI disclosure requirements, driven by concerns about research integrity, accountability, and responsible use of public funds. The National Institutes of Health (NIH) mandates that investigators disclose any AI-generated content in their proposals, including text, figures, or methodologies produced by tools like large language models. Applicants must note AI involvement, detail the extent of its role, and describe any human modifications applied. This requirement reflects NIH's broader emphasis on transparency in scientific inquiry.
The National Science Foundation (NSF) has focused particularly on confidentiality concerns, prohibiting reviewers from uploading any content from proposals or review materials to non-approved AI tools. While this primarily affects the review side, it signals heightened attention to AI use throughout the grant process. The NSF treats violations as breaches of their confidentiality pledge with potential legal and regulatory consequences. This strict approach reflects concerns about protecting proprietary research information that might be exposed through AI tool usage.
For nonprofits applying for federal grants, these requirements aren't optional considerations—they're compliance mandates. Failure to disclose AI use when required could jeopardize not just individual applications but your organization's ability to receive federal funding. If you're uncertain whether disclosure requirements apply to your federal applications, err on the side of transparency. Documenting AI use even when not explicitly required protects you if questions arise later and demonstrates the good faith approach that federal agencies value.
Private Foundation Approaches
Foundation policies range from explicit guidance to emerging informal expectations
Private foundations have taken more varied approaches to AI disclosure, with most still developing their policies as the technology evolves. Some progressive foundations have issued explicit guidance emphasizing transparency and accountability while encouraging applicants to disclose AI assistance in drafting. These foundations typically frame AI as a legitimate tool—similar to other productivity technologies—while stressing the importance of accuracy, authenticity, and human oversight in the final product.
A growing number of foundations have added simple checkbox questions to their applications asking whether AI tools assisted in proposal preparation. Importantly, most foundations implementing such questions have made clear there's no penalty for answering "yes." The question serves to gather data about AI prevalence in the grant application ecosystem and encourages transparency rather than punishing technology adoption. One international grantmaking organization explicitly stated that asking about AI use is "similar to asking someone if they used a laptop or pen"—acknowledging AI as simply another tool in the writer's toolkit.
Even foundations without explicit policies have program officers forming opinions about AI use. When proposals arrive with a certain polished uniformity, or when language doesn't quite match an organization's historical voice, experienced reviewers notice. This doesn't necessarily trigger concerns—many program officers appreciate well-crafted applications regardless of how they were produced—but it does mean that AI use isn't invisible to sophisticated readers. The gap between "not asking" and "not noticing" matters for organizations deciding how transparent to be.
For nonprofits navigating private foundation expectations, the safest approach is preparing for a world where transparency becomes standard. Develop your AI disclosure practices now so you're ready when funders ask, and consider proactive disclosure even when not required as a way to build trust and demonstrate organizational values. Foundations increasingly value partners who share their commitment to transparency, and early adoption of disclosure practices can differentiate your organization positively.
What Funders Actually Worry About
Understanding underlying concerns helps you address them proactively
Behind disclosure requirements and emerging expectations lie specific concerns that funders want addressed. Understanding these concerns—rather than just following rules—helps you communicate in ways that build genuine confidence rather than merely technical compliance.
Accuracy and fabrication: Funders worry most about AI tools inventing statistics, creating non-existent research citations, or making outcome claims that don't reflect reality. These hallucinations are well-documented AI limitations, and sophisticated funders know that AI-generated content can contain "convincing statistics or research citations that don't exist." When you disclose AI use, addressing how you verify accuracy—every number, outcome claim, and citation checked against actual data—directly confronts this primary concern.
Organizational capacity representation: Funders invest in organizations, not just proposals. A beautifully written application means nothing if the organization lacks capacity to execute the proposed work. Concerns arise when AI potentially masks organizational weaknesses—polishing prose to a level that doesn't reflect actual staff capabilities, overstating capacity that doesn't exist, or presenting a sophistication that the organization can't maintain. Addressing this concern means demonstrating that AI enhances genuine capacity rather than creating illusions of competence.
Voice and authenticity: Grant applications should reflect your organization's unique perspective, community connections, and mission-driven approach. Generic AI-generated prose lacks the distinctive voice that makes proposals compelling and demonstrates deep understanding of the work. Funders seek partners who genuinely understand their communities and bring authentic perspectives—qualities that generic AI outputs often lack. Showing that AI serves your voice rather than replacing it addresses this authenticity concern.
Ethical alignment: Many funders view transparency as a fundamental value they expect from grantees. Organizations that hide or minimize AI use when directly questioned raise red flags about broader ethical alignment. If you're not honest about how you prepared a proposal, funders might reasonably wonder what else you might not be forthcoming about. Proactive transparency signals the ethical orientation funders want in long-term partners.
The funder landscape will continue evolving as AI use becomes more prevalent and sophisticated. Organizations that develop thoughtful disclosure practices now—based on understanding funder concerns rather than minimum compliance—position themselves well regardless of how specific requirements develop. The principles underlying disclosure decisions remain stable even as policies change: accuracy matters, authenticity matters, transparency builds trust, and demonstrating human oversight addresses legitimate concerns about AI limitations.
When to Disclose AI Use
The decision about when to disclose AI use depends on multiple factors: explicit funder requirements, the extent of AI involvement, your organization's values, and strategic relationship considerations. Rather than a simple yes/no decision, think about disclosure as a spectrum of situations requiring different approaches. Some contexts clearly require disclosure, others make it advisable, and still others leave it appropriately optional.
Always Disclose
- When funders explicitly require disclosure (federal agencies, specific foundations)
- When funders directly ask about AI use in applications or conversations
- When AI generated substantial portions of narrative content
- When AI created data visualizations, graphics, or supporting materials
- When your organizational policy requires disclosure
Consider Disclosing
- When building new relationships with funders who value transparency
- When AI assisted with brainstorming, outlining, or research synthesis
- When organizational values emphasize radical transparency
- When you want to normalize AI use within your sector
- When the funder's sector is likely to adopt disclosure requirements soon
The Tool vs. Content Distinction
Differentiating between AI as productivity tool and AI as content creator
A useful framework for disclosure decisions distinguishes between AI as a productivity tool (similar to spell-checkers, grammar assistants, or research databases) and AI as a content creator (generating substantial text, ideas, or materials). Most people wouldn't consider disclosing that they used Microsoft Word's grammar check or Grammarly to polish their writing. The question becomes: at what point does AI assistance cross from tool use into content generation that warrants disclosure?
Tool-level use typically doesn't require disclosure: using AI to check grammar and spelling, suggest alternative word choices, organize existing thoughts into outline form, or conduct initial research that you then verify and synthesize yourself. In these cases, AI functions like other writing aids that have long been standard practice. The human remains clearly the author; AI assists with execution.
Content-level use typically warrants disclosure: Having AI draft narrative paragraphs that appear substantially unchanged in the final proposal, using AI to generate program descriptions or organizational histories, creating budget narratives or methodology sections primarily through AI prompting, or producing graphics, charts, or other visual materials. In these cases, AI has moved from assistant to co-creator, and transparency about that role serves funder relationships.
The gray area between these poles requires judgment. When you prompt AI to draft a needs statement but then substantially rewrite it with your organization's voice and specific data, where does that fall? When AI suggests program outcomes that you verify and incorporate, is that tool use or content creation? Rather than seeking bright-line rules, focus on the underlying question: would a reasonable funder want to know about this level of AI involvement? If the answer is "probably yes" or "I'm not sure," disclosure protects both the relationship and your integrity.
The Honesty Principle
When in doubt, transparency protects relationships and reputation
Beyond specific disclosure decisions, a fundamental principle should guide your approach: if asked directly about AI use, always respond honestly. The reputational and relational damage from being caught in deception far exceeds any benefit from concealment. Funders who discover undisclosed AI use after the fact—whether through their own detection methods, subsequent questions, or other means—will question everything else about your organization's honesty and integrity.
This honesty principle extends beyond direct questions to situations where concealment would feel like deception. If you've used AI extensively to draft a proposal and a program officer praises your writing during a site visit, letting that compliment stand without acknowledgment creates an implicit misrepresentation. You don't need to interrupt every conversation with disclosures, but you should be prepared to acknowledge AI involvement when it would be deceptive not to.
Organizations developing internal AI champions and robust AI policies typically find that a culture of transparency about AI use feels more comfortable than constant judgment calls about what to disclose. When your default is openness, you spend less mental energy calculating disclosure decisions and more energy ensuring AI enhances rather than replaces genuine organizational capacity.
How to Disclose AI Use Effectively
How you communicate AI use matters as much as whether you disclose it. Effective disclosure builds confidence by addressing funder concerns proactively, demonstrating thoughtful AI governance, and showing that human judgment remains central to your work. Poor disclosure—apologetic, vague, or defensive—can raise more concerns than it resolves. The goal is presenting AI use as a considered organizational choice that enhances rather than compromises your work.
Framing AI as Professional Tool Use
Language that normalizes AI assistance while demonstrating responsibility
The tone of your disclosure shapes how funders interpret your AI use. Frame AI as a professional productivity tool that responsible organizations incorporate thoughtfully—similar to how you might discuss using project management software, data analysis tools, or research databases. Avoid apologetic language that suggests you're confessing something problematic. Equally, avoid boastful language that makes AI use seem like technological showing off. Aim for matter-of-fact professionalism.
Effective disclosure language:
"Our proposal development process includes AI-assisted drafting tools for initial content generation. All AI-generated content undergoes rigorous human review for accuracy, organizational voice, and alignment with our programs. Our development team verifies every statistic and claim against source documentation before inclusion."
Less effective approaches:
"We had to use AI because we're understaffed..." (apologetic, suggests desperation)
"Our cutting-edge AI implementation demonstrates our technological leadership..." (boastful, misses the point)
"We may have used some AI tools..." (vague, sounds evasive)
Addressing Quality Control and Verification
Demonstrating the human oversight that ensures accuracy and authenticity
The most important element of effective disclosure isn't acknowledging AI use—it's explaining how you ensure AI assistance produces accurate, authentic results. Funders concerned about AI want reassurance that you haven't substituted technology for judgment. Describing your verification processes addresses these concerns directly and demonstrates organizational maturity around AI governance.
Accuracy verification
Explain how you verify factual claims, statistics, and citations that AI might generate or include. This might include: "All outcome data referenced in proposals is verified against our actual program records. We cross-reference any citations against original sources. Budget figures are confirmed by our finance team independent of narrative development."
Voice preservation
Describe how you ensure proposals reflect your organization's authentic voice rather than generic AI outputs. Consider: "Our executive director reviews all proposals for alignment with organizational voice and values. We provide AI tools with our previous successful applications as reference examples so drafts stay consistent with our established communication style."
Human decision-making
Clarify that strategic decisions remain human-driven: "AI assists with initial drafting and research synthesis, but all decisions about program design, budget allocation, and organizational positioning are made by our leadership team. AI provides suggestions; humans make choices."
Disclosure Formats for Different Contexts
Adapting disclosure approach to application requirements and funder preferences
When applications include AI questions
Answer checkbox questions honestly, and use any open text fields to briefly describe your verification processes. Don't over-explain in limited space, but do provide enough context that your answer demonstrates thoughtful governance rather than casual acknowledgment.
When disclosure is required but no specific format exists
Include a brief methodology statement in your proposal, perhaps in a section describing organizational capacity or in supplementary materials. A few sentences suffice: describe what tools you use, what verification processes you follow, and who maintains oversight.
When choosing proactive disclosure
If you're voluntarily disclosing without specific requirements, decide whether to include it in the written proposal or save it for relationship conversations. Written disclosure creates documentation but might feel unnecessary for funders who haven't asked. Conversational disclosure during calls or site visits allows more natural discussion but lacks documentation. Your choice might depend on the funder relationship stage and their apparent interest in AI topics.
When asked directly in conversation
Be prepared with a concise, confident response that you can deliver naturally: "Yes, we use AI tools to assist with initial drafting, similar to how we use research databases and editing software. Our team reviews everything for accuracy and ensures it reflects our actual programs and capabilities." Practice this response so it comes across as matter-of-fact rather than rehearsed or defensive.
Documentation for Internal Records
Maintaining records that support disclosure and demonstrate governance
Beyond what you tell funders, maintain internal documentation about AI use in each proposal. This protects you if questions arise later, supports consistent disclosure practices, and demonstrates organizational governance maturity. Good documentation doesn't need to be elaborate; a simple record maintained for each application, covering the items below, is enough.
- Which AI tools were used in proposal development
- Which sections involved AI assistance (initial drafting, editing, research)
- Who reviewed and verified AI-generated content
- What verification steps were completed (fact-checking, voice review, accuracy confirmation)
- Whether and how AI use was disclosed to the funder
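If it helps to keep these records consistent across applications, here is a minimal sketch of what a per-application record might look like, expressed as a simple Python data structure. The field names and example values are purely illustrative assumptions, not a required format; a shared spreadsheet with the same columns works just as well.

```python
# Minimal sketch of a per-application AI-use record.
# Field names are illustrative assumptions; adapt them to your own documentation standards.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AIUseRecord:
    proposal: str                       # funder and program the record covers
    tools_used: list[str]               # AI tools involved in proposal development
    sections_assisted: list[str]        # where AI assisted (initial drafting, editing, research)
    reviewed_by: list[str]              # staff who reviewed and verified AI-assisted content
    verification_steps: list[str]       # fact-checking, voice review, accuracy confirmation
    disclosed_to_funder: bool = False   # whether AI use was disclosed to the funder
    disclosure_method: str = ""         # e.g. "application checkbox", "cover letter", "verbal"
    date_completed: date = field(default_factory=date.today)


# Hypothetical example entry
record = AIUseRecord(
    proposal="Example Foundation - Youth Literacy Program",
    tools_used=["general-purpose writing assistant"],
    sections_assisted=["needs statement first draft", "research synthesis"],
    reviewed_by=["Development Director", "Program Manager"],
    verification_steps=[
        "statistics checked against program records",
        "voice and positioning review by executive director",
    ],
    disclosed_to_funder=True,
    disclosure_method="application checkbox with brief note",
)
```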
Effective disclosure transforms a potential concern into an opportunity to demonstrate organizational sophistication. When you explain not just that you use AI but how you govern AI use responsibly, you show funders that your organization thinks carefully about technology adoption, maintains appropriate oversight, and prioritizes accuracy and authenticity. These qualities matter for grant relationships regardless of AI involvement—disclosure simply provides an opportunity to demonstrate them explicitly.
Building an Organizational AI Disclosure Policy
Individual disclosure decisions become easier when they flow from clear organizational policy. Rather than each grant writer making judgment calls about when and how to disclose, a well-crafted policy provides consistent guidance that reflects organizational values and protects funder relationships. Policy development also forces the conversations about AI governance that many nonprofits have deferred, creating clarity that benefits far more than just disclosure decisions.
Essential Policy Components
Elements every AI disclosure policy should address
Scope definition
Clarify what counts as "AI use" for disclosure purposes. Does grammar checking count? Research synthesis? Initial draft generation? Content editing? Establish clear categories so staff understand which activities require disclosure consideration. Many organizations distinguish between "tool-level" AI assistance that doesn't require disclosure and "content-level" AI generation that does.
Disclosure triggers
Specify when disclosure is required, recommended, or optional. This might include: "Always disclose when funders ask. Always disclose to federal funders. Disclose to private foundations when AI generated substantial narrative content. Tool-level assistance doesn't require disclosure unless asked." Clear triggers reduce ambiguity and support consistent practice.
Verification requirements
Establish what verification must occur before AI-assisted content can be included in applications. This might specify: review by subject matter experts for accuracy, review by leadership for voice and positioning, verification of all statistics and citations against source documentation, confirmation that program descriptions match actual activities.
Documentation standards
Require records of AI use and verification for each application. Specify what must be documented, where documentation is stored, and who is responsible for maintaining it. This supports both internal governance and response to any funder inquiries.
Standard disclosure language
Provide approved language for different disclosure contexts: brief responses for checkbox questions, fuller statements for narrative disclosure, conversational responses for verbal inquiries. Standard language ensures consistency and prevents ad hoc formulations that might miss key elements or create inconsistency across applications.
Who Should Develop the Policy
AI disclosure policy development benefits from diverse perspectives. Consider including:
- Development staff who understand grant writing realities and funder relationships
- Executive leadership who set organizational values and risk tolerance
- Program staff who verify accuracy of program descriptions
- Board input, particularly if board members have relevant expertise or strong views
Balancing Transparency and Practicality
Policies should reflect your organization's actual capacity for compliance. Overly ambitious requirements that staff can't realistically follow create inconsistency and cynicism. Consider:
- Start with essential requirements and expand as organizational capacity grows
- Distinguish between minimum requirements and best practices
- Test policy requirements against recent grant applications to ensure feasibility
- Build in review periods to refine policy based on implementation experience
Connecting to Broader AI Governance
Disclosure policy as part of comprehensive AI approach
AI disclosure policy works best as part of broader AI acceptable use policies and governance frameworks. Organizations with comprehensive AI policies find disclosure decisions easier because they've already established principles about appropriate use, oversight requirements, and organizational values around technology. Disclosure policy then becomes an application of existing principles rather than a standalone consideration.
If your organization lacks broader AI governance, developing disclosure policy can catalyze those larger conversations. The questions raised—what counts as AI use, what verification is appropriate, how to balance efficiency and authenticity—apply beyond grant writing to communications, program operations, and organizational administration. Use disclosure policy development as an opportunity to establish foundations for comprehensive AI governance.
Document your AI governance approach in ways you can share with funders if asked. Having a written policy demonstrates organizational thoughtfulness and provides ready evidence that you take AI use seriously. Some organizations proactively share their AI policies with major funders as part of relationship building, positioning themselves as responsible partners who've thought carefully about technology adoption.
Turning Disclosure into Trust-Building Opportunity
The organizations handling AI disclosure most successfully don't view it as a compliance burden or potential obstacle—they see it as an opportunity to demonstrate organizational values that funders care about. Transparency, thoughtful governance, commitment to accuracy, preservation of authentic voice—these qualities matter to funders regardless of AI involvement. Disclosure conversations create natural opportunities to highlight these qualities in ways that strengthen rather than complicate funder relationships.
Demonstrating Organizational Values
How you handle AI disclosure reveals something about your organization's character. Funders notice when nonprofits approach transparency proactively rather than reluctantly, when they've thought carefully about governance rather than adopting technology casually, when they prioritize accuracy over convenience. These qualities extend beyond AI use to how you'll manage grant funds, report on outcomes, and maintain the partnership relationship.
Consider disclosure as a way to demonstrate the same values you bring to program work: commitment to the communities you serve (ensuring AI doesn't create false impressions), respect for partner relationships (being honest with funders), and dedication to impact (using tools that genuinely advance your mission rather than creating illusions). When framed this way, disclosure becomes consistent with your organizational identity rather than a separate compliance consideration.
Organizations that have built strong AI champions internally often find that these individuals become effective ambassadors for AI transparency externally. Their genuine enthusiasm for how AI enhances their work, combined with honest acknowledgment of limitations and oversight requirements, creates compelling narratives that build funder confidence. Authentic advocacy beats defensive disclosure every time.
Engaging Funders in AI Conversations
Proactive disclosure can open valuable conversations with funders about AI more broadly. Many foundation staff are themselves navigating AI adoption and appreciate grantees who've thought carefully about implementation. These conversations position you as a thoughtful partner rather than a passive recipient of funding, potentially strengthening relationships in ways that benefit future applications.
When funders ask about your AI use, don't just answer the immediate question—engage with genuine curiosity about their perspective. What concerns do they have? How are they thinking about AI in their own work? What would they like to see from grantees? This dialogue provides valuable intelligence about evolving funder expectations while demonstrating the relationship orientation that funders value in long-term partners.
Some funders are actively interested in supporting responsible AI adoption in the nonprofit sector. Your thoughtful approach to disclosure might open doors to capacity-building conversations, technology-focused funding opportunities, or invitations to participate in funder learning initiatives. Being known as an organization that handles AI responsibly creates positioning that extends beyond individual grant applications.
Sector Leadership Through Transparency
As AI becomes standard in nonprofit operations, organizations that model responsible disclosure help establish healthy norms for the entire sector. Your transparency today contributes to an ecosystem where AI use is neither hidden nor stigmatized, where verification and oversight are expected, and where funders can trust that AI assistance enhances rather than undermines organizational capacity.
Consider sharing your AI policies and disclosure practices with peer organizations. Participating in sector conversations about AI governance raises your organization's profile while contributing to collective learning. Funders increasingly seek grantees who contribute to their fields beyond direct program work—thought leadership on responsible AI adoption demonstrates exactly this kind of broader contribution.
The nonprofits that will thrive in an AI-enhanced landscape are those that use technology to amplify their human capacity rather than replace it, that maintain authenticity even while leveraging efficiency tools, and that build trust through transparency rather than hoping no one notices their AI use. These organizations won't just survive funders' evolving expectations around AI—they'll help shape those expectations in ways that benefit the entire sector.
Moving Forward with Confidence and Integrity
The question of AI disclosure in grant applications ultimately comes down to a simple principle: be the kind of organization your funders would want to support if they knew everything about how you work. That means using AI thoughtfully and responsibly, maintaining rigorous verification and oversight, preserving your authentic voice and genuine capacity, and being transparent about your methods when transparency serves the relationship.
This approach doesn't require perfection in AI governance—it requires honest effort and continuous improvement. Start by understanding what your key funders expect, whether through explicit requirements or implied preferences. Develop organizational policy that reflects your values and capacity. Train your team on consistent disclosure practices. Document your AI use and verification processes. And approach disclosure conversations as opportunities to demonstrate organizational character rather than obstacles to overcome.
The funders who matter most—those seeking genuine partnerships with capable organizations doing important work—won't penalize thoughtful AI adoption accompanied by appropriate transparency. They may have questions, and you should be prepared to answer them confidently. They may have concerns, and your verification practices should address them. But they fundamentally want to fund effective organizations, and AI-enhanced efficiency combined with maintained authenticity serves that goal.
As the funder landscape continues evolving—with more explicit requirements, more sophisticated detection, and more nuanced expectations—organizations that established transparent practices early will find adaptation easier than those who deferred these conversations. The investment in developing good AI governance and disclosure practices pays dividends not just in funder relationships but in organizational integrity and staff confidence. When everyone knows the rules and follows them consistently, the mental burden of constant judgment calls disappears.
Finally, remember that disclosure is just one aspect of the broader challenge of building stakeholder confidence in AI-powered operations. The principles that guide funder disclosure—transparency, verification, authenticity, governance—apply equally to donor communications, board reporting, and community relationships. Developing strong practices in one area creates foundations for addressing AI transparency across all your stakeholder relationships. The organizations that get this right position themselves not just for successful grant applications but for sustainable success in an AI-enhanced nonprofit landscape.
Ready to Develop Your AI Transparency Strategy?
Let's discuss how to build AI disclosure policies that strengthen funder relationships, demonstrate organizational values, and position your nonprofit for success in an AI-enhanced landscape.
