How to Communicate AI Risks to Your Board (Not Just the Benefits)
Most nonprofit leaders know they should talk to their boards about AI opportunities. But the harder conversation is about the risks. Learn how to present a balanced, honest assessment that builds trust and accountability instead of fear.

There's a remarkable disconnect in nonprofit AI governance right now. According to recent research, 82% of nonprofits are already using AI tools, yet only about 10% have written governance policies. Even more concerning, more than half of board directors indicate that AI risks are not a standing item on their board agendas. This gap between adoption and oversight creates significant organizational vulnerability.
The problem isn't that nonprofit leaders are unaware of AI's potential. Most boards have heard the exciting pitch about efficiency gains, cost savings, and enhanced donor engagement. What they often haven't heard is the other side of the story: the risks, ethical concerns, and potential for harm that come with AI adoption. This imbalance creates boards that are enthusiastic but uninformed, supportive but unprepared to provide meaningful oversight.
Communicating AI risks to your board isn't about dampening enthusiasm or blocking innovation. It's about building the foundation for responsible, sustainable AI adoption. When boards understand both the opportunities and the risks, they can provide better guidance, ask better questions, and help the organization navigate challenges before they become crises. The goal is informed governance, not paralysis.
This article provides a practical framework for presenting AI risks to your nonprofit board in ways that foster understanding, accountability, and constructive oversight. You'll learn which risk categories matter most, how to frame concerns without creating panic, and what specific actions your board should take to fulfill their governance responsibilities in the AI era.
Why Boards Need to Hear About Risks (Not Just Opportunities)
Board members have a fiduciary duty to understand the risks the organization faces, and AI introduces new categories of risk that many boards haven't encountered before. Unlike traditional technology decisions, AI systems can make autonomous decisions, perpetuate bias, expose sensitive data, and create reputational harm in ways that are difficult to predict or reverse. Without understanding these risks, boards can't fulfill their oversight responsibilities.
The legal and regulatory landscape around AI is evolving rapidly. By 2026, organizations face increasing expectations for AI accountability from multiple directions: funders who want transparency about AI use, donors who express privacy concerns, regulatory frameworks that demand documentation, and communities who expect ethical technology practices. Boards that don't understand AI risks can't guide the organization through this complex environment.
Perhaps most importantly, boards set the organization's risk tolerance and ethical boundaries. When board members only hear about AI's benefits, they may inadvertently approve initiatives that exceed the organization's capacity for responsible implementation. A balanced presentation allows boards to make informed decisions about which AI opportunities align with organizational values and which pose unacceptable risks.
The Governance Gap You Can't Ignore
The statistics reveal a troubling pattern: while AI adoption accelerates, governance lags far behind. This gap creates organizational vulnerability and exposes nonprofits to risks they may not even recognize.
- 82% of nonprofits use AI tools, but only 10% have written policies governing their use
- 54% of board directors say AI risks are not a standing agenda item at board meetings
- 69% of nonprofit staff using AI have received no formal training on responsible use or risk mitigation
- 31% of donors say they would give less to organizations using AI without proper transparency and safeguards
Six AI Risk Categories Your Board Should Understand
Not all AI risks are created equal, and boards don't need to become technical experts to provide effective oversight. Instead, focus on six core risk categories that have the most significant potential impact on nonprofit organizations. Each category requires different mitigation strategies and different levels of board attention.
1. Data Privacy and Security Risks
Protecting sensitive information about donors, beneficiaries, and communities
AI systems require data to function, and nonprofits often work with highly sensitive information: donor financial details, beneficiary health records, client case notes, and vulnerable population data. When this information is fed into AI systems, whether commercial tools or custom applications, it creates new pathways for data exposure, unauthorized access, and privacy violations.
The risks extend beyond traditional cybersecurity concerns. Data provided to AI systems may be used to train models, shared with third parties, or stored in ways that violate donor expectations or legal requirements. Many nonprofits unknowingly violate data privacy principles by using free AI tools that harvest organizational data for commercial purposes.
Key questions for your board:
- Do we know which AI tools staff are using and what data is being shared with them?
- Have we obtained informed consent for using personal information in AI systems?
- Are we complying with data privacy regulations (GDPR, HIPAA, FERPA) in our AI implementations?
- What happens to our data if an AI vendor experiences a breach or sells to another company?
2. Algorithmic Bias and Discrimination Risks
Ensuring AI systems don't perpetuate or amplify existing inequities
AI systems learn from historical data, which means they can absorb and amplify existing biases present in that data. For nonprofits, this creates particularly troubling scenarios. An AI system used to screen grant applications might favor organizations that have historically received funding, perpetuating inequitable access to resources. A tool assessing program eligibility might systematically disadvantage certain demographic groups based on patterns in past decision-making.
The challenge is that bias in AI systems is often invisible to users. Unlike human bias, which can sometimes be identified and corrected through training or oversight, algorithmic bias is embedded in the model itself. Staff may trust AI recommendations without recognizing that the system is making systematically unfair decisions. This is especially concerning for nonprofits that serve marginalized communities and have explicit equity commitments.
Key questions for your board:
- How are we testing AI systems for bias before deploying them in decision-making processes?
- Are we tracking outcomes by demographic group to identify potential discriminatory impacts?
- What processes exist for community members to challenge AI-driven decisions that affect them?
- Does our AI use align with our organizational equity commitments and values?
3. Transparency and Accountability Gaps
Maintaining clear responsibility when AI makes decisions
One of the most challenging aspects of AI governance is establishing clear accountability when systems make mistakes or cause harm. Traditional organizational structures assume human decision-makers who can be held responsible for outcomes. AI systems complicate this picture by introducing automated decision-making that may be difficult to explain, audit, or reverse.
By 2026, regulatory and funder expectations increasingly demand transparent chains of responsibility. When an AI system makes an error, organizations must be able to explain who was responsible for the data, who validated the model, who approved its deployment, and who is accountable for monitoring its performance. Many nonprofits lack these accountability structures, creating governance gaps that can lead to serious consequences.
There's also the challenge of explaining AI decisions to stakeholders. When a donor asks why they received certain communications, when a grant applicant wants to understand why they were rejected, or when a program participant questions an eligibility determination, organizations must be able to provide meaningful explanations. "The AI decided" is not an acceptable answer for accountability purposes.
Key questions for your board:
- Can we explain how our AI systems make decisions in language stakeholders can understand?
- Who is accountable when an AI system makes a mistake or causes harm?
- Do we have documentation trails showing who approved AI deployments and on what basis?
- Are we transparent with stakeholders about when and how we use AI in decision-making?
4. Mission Drift and Value Misalignment
Ensuring AI supports rather than undermines organizational purpose
AI systems optimize for the metrics they're given, which can inadvertently shift organizational focus away from mission priorities. A fundraising AI optimized for donation volume might prioritize wealthy donors over grassroots community engagement. A program efficiency tool might recommend serving clients who are easiest to help rather than those with greatest need. These subtle shifts can gradually erode mission focus without anyone consciously deciding to change direction.
There's also a risk of creating dependency on AI systems in ways that undermine the human relationships central to nonprofit work. Tools designed to enhance human capacity can inadvertently replace human judgment, reduce personal connection, or prioritize efficiency over empathy. This is particularly concerning for nonprofits whose value proposition is built on trust, understanding, and human dignity.
Key questions for your board:
- Are our AI implementations reinforcing or undermining our core mission and values?
- Have we clearly defined which decisions should always involve human judgment regardless of AI capabilities?
- How are we monitoring for unintended consequences that might shift us away from our mission?
- Are we tracking qualitative outcomes, not just efficiency metrics, in AI-enhanced programs?
5. Stakeholder Trust and Reputational Risks
Maintaining confidence from donors, partners, and communities
Public perception of AI in nonprofits is complex and often skeptical. Research shows that 31% of donors say they would give less to organizations using AI without adequate transparency and safeguards. This isn't irrational fear; it reflects legitimate concerns about privacy, authenticity, and the appropriate role of technology in mission-driven work.
Reputational harm from AI incidents can be swift and severe. A data breach exposing donor information, a biased algorithm creating discriminatory outcomes, or AI-generated content that feels inauthentic can undermine years of trust-building. Unlike some business sectors where stakeholders may accept AI risks as trade-offs for convenience, nonprofit stakeholders often hold organizations to higher ethical standards and may be less forgiving of AI-related failures.
There's also the challenge of "AI-washing," where organizations overstate their AI capabilities or sophistication to appear innovative. When reality doesn't match the claims, credibility suffers. Conversely, being too opaque about AI use can create suspicion. Finding the right balance of transparency requires careful communication strategy.
Key questions for your board:
- How are we communicating our AI use to donors, funders, and community members?
- Do we have a crisis communication plan for potential AI-related incidents or failures?
- Are we prepared to respond to stakeholder concerns about AI authenticity and appropriateness?
- Have we assessed how AI use might affect trust with the specific communities we serve?
6. Operational and Financial Risks
Managing dependencies, costs, and implementation failures
AI implementations carry significant operational risks beyond the technology itself. Many nonprofits underestimate the total cost of AI adoption, focusing on subscription fees while overlooking training costs, data preparation work, integration challenges, and ongoing maintenance. When AI projects exceed budgets or fail to deliver promised benefits, they can strain already limited resources.
There's also a risk of creating dependency on AI vendors or systems that may not be sustainable long-term. When critical organizational functions become dependent on specific AI tools, vendor pricing changes, service discontinuation, or performance degradation can create serious operational disruptions. Nonprofits have less negotiating power than enterprise customers and may be particularly vulnerable to vendor decisions.
Implementation failures are common, with research showing that two-thirds of organizations struggle to scale AI initiatives beyond pilot phase. Failed AI projects waste resources, demoralize staff, and can create organizational resistance to future innovation. Understanding these risks helps boards set realistic expectations and ensure adequate support for successful implementation.
Key questions for your board:
- Do we have a realistic understanding of total costs, including training, integration, and maintenance?
- What's our backup plan if critical AI systems fail or vendors change terms?
- Are we allocating sufficient resources for successful implementation, not just procurement?
- How are we measuring whether AI investments are delivering expected value?
How to Frame Risk Communication to Your Board
The way you present AI risks matters as much as what risks you present. Frame the conversation poorly and you'll create unproductive fear or defensive dismissal. Frame it well and you'll build the foundation for mature, informed governance. Your goal is to foster understanding and accountability, not to block innovation or assign blame.
Lead with Mission, Not Fear
Start risk conversations by connecting to organizational mission and values, not by listing worst-case scenarios. Instead of "AI could expose our donor data to hackers," try "Our supporters trust us with sensitive information. As we adopt AI tools to serve them better, we need to ensure we're protecting their privacy with the same care we always have."
This approach frames risk management as mission protection rather than obstacle creation. It reminds board members that governance exists to enable mission success, not to prevent action. When risks are presented in the context of values the board already cares about (community trust, equity, stewardship), they're more likely to engage constructively rather than defensively.
Also acknowledge the opportunity costs of inaction. Not adopting AI carries its own risks: failing to serve growing demand, losing ground to peer organizations, and missing chances to amplify impact. The conversation shouldn't be "AI versus no AI" but rather "how do we adopt AI in ways that protect what matters while advancing our mission?"
Use Concrete Examples, Not Abstract Concepts
Board members without technical backgrounds often struggle with abstract discussions of algorithmic bias or data governance. Make risks tangible by using specific examples relevant to your organization's work.
Instead of "AI systems can perpetuate bias," say "If we use AI to identify which families should receive emergency assistance, and the AI is trained on our historical data, it might systematically overlook newly arrived immigrant families who weren't in our system before, even though they have genuine need."
Concrete examples help board members understand both the mechanism of risk and its potential impact on real people. They transform technical governance discussions into accessible conversations about organizational values and stakeholder impact. When possible, draw examples from your own sector or similar organizations to increase relevance.
Present Risks Alongside Mitigation Strategies
Never present a risk without also presenting potential mitigation approaches. This prevents the conversation from becoming stuck in paralysis or anxiety. For each risk category, explain what the organization is already doing to address it, what additional steps you recommend, and what trade-offs or resource requirements those steps involve.
For example: "We face privacy risks when staff use free AI tools like ChatGPT for drafting donor communications, because those tools may retain organizational data. To mitigate this, we've established a policy requiring use of enterprise AI tools with data protection agreements for any work involving sensitive information. This adds cost but ensures we maintain the privacy standards our donors expect."
This framing demonstrates that you're not just identifying problems but actively working to solve them. It gives the board concrete options to discuss and approve rather than abstract concerns to worry about. It also clarifies what board support (budget approval, policy endorsement) is needed for effective risk management.
Establish Regular Risk Reporting, Not One-Time Presentations
AI risk communication should be an ongoing governance practice, not a single presentation. Propose making AI governance a standing board agenda item, even if only for 15 minutes quarterly. Regular check-ins normalize risk discussion, allow the board to track how risk mitigation strategies are working, and ensure governance keeps pace with rapidly evolving AI capabilities.
Consider creating a simple AI risk dashboard that the board reviews regularly. This might include metrics such as the number of AI tools in use, the percentage of staff trained on responsible AI use, data privacy incidents or near-misses, stakeholder concerns received, and the status of policy implementation. Concrete metrics make abstract risks more tangible and help boards fulfill their monitoring responsibilities.
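If your team wants to make this concrete, the sketch below shows one way a staff lead might assemble those figures into a plain-text summary for the board packet. It's a minimal illustration, not a standard: the metric names, fields, and example numbers are assumptions you'd replace with whatever your organization actually tracks.

```python
from dataclasses import dataclass, field

@dataclass
class AIRiskDashboard:
    """Quarterly AI governance snapshot for the board packet.

    Field names are illustrative placeholders; adapt them to the
    metrics your organization actually monitors.
    """
    quarter: str
    ai_tools_in_use: int          # tools inventoried across all teams
    staff_trained_pct: float      # % of AI-using staff with responsible-use training
    privacy_incidents: int        # incidents or near-misses involving AI tools
    stakeholder_concerns: int     # donor or community questions/complaints about AI
    policies_implemented: list[str] = field(default_factory=list)

    def board_summary(self) -> str:
        """Format the snapshot as a short block of text for the agenda."""
        lines = [
            f"AI governance snapshot: {self.quarter}",
            f"  AI tools in use: {self.ai_tools_in_use}",
            f"  Staff trained on responsible use: {self.staff_trained_pct:.0f}%",
            f"  Privacy incidents / near-misses: {self.privacy_incidents}",
            f"  Stakeholder concerns received: {self.stakeholder_concerns}",
            f"  Policies in place: {', '.join(self.policies_implemented) or 'none yet'}",
        ]
        return "\n".join(lines)

# Example: figures a staff lead might compile before a quarterly meeting.
print(AIRiskDashboard(
    quarter="Q1 FY2026",
    ai_tools_in_use=7,
    staff_trained_pct=60,
    privacy_incidents=0,
    stakeholder_concerns=2,
    policies_implemented=["Acceptable Use Policy"],
).board_summary())
```

Even something this simple gives the board a consistent quarter-over-quarter view of how governance is keeping pace with adoption.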
Regular reporting also allows you to educate the board incrementally rather than overwhelming them with information all at once. Each meeting can dive deeper into one risk category, gradually building board sophistication and confidence in AI oversight. This approach respects that board members are volunteers with limited time while ensuring they develop the expertise their governance role requires.
Specific Actions to Request from Your Board
Risk communication is most effective when it culminates in clear, actionable board decisions. After presenting risks, ask your board to take specific governance actions that establish oversight structures and accountability mechanisms. These concrete steps transform risk awareness into risk management.
Approve an AI Acceptable Use Policy
Request board approval for a written AI policy that establishes clear parameters for staff AI use. This policy should define which AI applications are permitted, which require approval, and which are prohibited. It should address data privacy, outline accountability structures, and establish processes for evaluating new AI tools before deployment.
The policy doesn't need to be complex or restrictive. Even a simple two-page document clarifies expectations, provides staff with guidance, and demonstrates to stakeholders that the organization takes AI governance seriously. As our article on creating data governance policies for AI explains, the goal is to enable responsible innovation, not to block it.
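If it helps staff apply the policy day to day, the permitted, requires-approval, and prohibited categories can also live in a short shared reference list. The sketch below is purely hypothetical; the categories and example use cases are placeholders, and a plain spreadsheet or shared document would serve the same purpose.

```python
# Hypothetical register mirroring a short acceptable use policy.
# The example use cases below are placeholders, not recommendations;
# substitute the tools and tasks your staff actually use.
AI_ACCEPTABLE_USE = {
    "permitted": [
        "grammar and copy-editing help on content with no personal data",
        "internal meeting transcription, with participants' consent",
    ],
    "requires_approval": [
        "drafting donor-facing communications with generative AI",
        "any use that touches beneficiary, client, or case data",
    ],
    "prohibited": [
        "pasting documents containing personal data into free consumer tools",
        "automated eligibility or screening decisions without human review",
    ],
}

# Print the register as a quick reference staff can consult.
for category, examples in AI_ACCEPTABLE_USE.items():
    print(category.replace("_", " ").title())
    for example in examples:
        print(f"  - {example}")
```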
Designate Board-Level AI Oversight Responsibility
Ask the board to designate where AI oversight sits in the governance structure. This might be an existing committee (often Finance or Audit) or a new AI Ethics Committee for larger organizations. The key is establishing clear responsibility so AI governance doesn't fall through the cracks.
Whoever has this responsibility should receive regular updates on AI initiatives, review risk assessments for major implementations, and ensure the organization's AI use aligns with mission and values. This creates a formal channel for ongoing governance rather than treating AI as a special project that bypasses normal oversight.
Approve Resources for Responsible Implementation
Many nonprofits approve AI tool purchases without approving the resources needed for responsible implementation: staff training, data preparation, integration work, and ongoing monitoring. Ask your board to approve budgets that reflect total cost of ownership, not just subscription fees.
This might include resources for training programs to build AI literacy, technical support for data governance, or external expertise for bias audits. Making these investments visible to the board clarifies that responsible AI adoption requires more than tool procurement; it requires organizational capacity building.
Establish Criteria for High-Risk AI Systems
Not all AI applications carry equal risk. Work with your board to define which AI uses should be considered "high-risk" and require additional oversight, such as board approval before deployment. High-risk typically includes AI that makes decisions affecting people's access to services, uses sensitive personal data, or could create significant reputational harm if it fails.
Clear criteria help staff understand which initiatives need board involvement and prevent either excessive bureaucracy (approving every minor AI use) or insufficient oversight (deploying high-risk systems without board awareness). This risk-based approach focuses governance attention where it matters most.
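To show how lightweight such criteria can be, here is a minimal sketch that turns the screening questions described above into an explicit triage rule. The function, question names, and tier labels are hypothetical, not a standard framework; the point is simply that the criteria can be written down unambiguously so staff know when board involvement is required.

```python
def classify_ai_risk(affects_service_access: bool,
                     uses_sensitive_data: bool,
                     reputational_exposure: bool) -> str:
    """Rough triage of a proposed AI use into oversight tiers.

    The three yes/no questions mirror the criteria discussed above;
    a real policy will add nuance, but even a checklist this simple
    tells staff when board involvement is required.
    """
    flags = sum([affects_service_access, uses_sensitive_data, reputational_exposure])
    if affects_service_access or flags >= 2:
        return "high-risk: board (or designated committee) approval before deployment"
    if flags == 1:
        return "medium-risk: executive sign-off and a documented risk assessment"
    return "low-risk: follow the acceptable use policy; no special approval needed"

# Example: a tool that screens clients for program eligibility touches
# both service access and sensitive personal data.
print(classify_ai_risk(affects_service_access=True,
                       uses_sensitive_data=True,
                       reputational_exposure=False))
```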
Commit to Stakeholder Transparency
Ask your board to approve a transparency framework that guides how you'll communicate about AI use to donors, funders, program participants, and other stakeholders. This might include adding AI use disclosures to your website, including AI information in annual reports, or creating processes for stakeholders to ask questions about AI systems that affect them.
Transparency commitments demonstrate accountability and can actually build stakeholder confidence rather than undermine it. When organizations are open about both AI benefits and limitations, stakeholders are more likely to trust that the technology is being used responsibly. You can explore more about this in our article on transparency about AI use with funders.
Require Regular Risk Assessments
Request that the board establish a schedule for regular AI risk assessments, perhaps annually or when significant new AI systems are deployed. These assessments should evaluate whether existing risk mitigation strategies are working, identify new risks that have emerged, and recommend policy or practice updates.
Regular assessments ensure that governance keeps pace with technology evolution. AI capabilities change rapidly, and risk profiles shift as adoption expands. What was low-risk last year might be high-risk this year. Scheduled assessments prevent governance from becoming static while the technology landscape continues to evolve.
Addressing Common Board Objections and Concerns
When you present AI risks to your board, you're likely to encounter resistance or skepticism. Some board members may feel that risk discussion is unnecessarily cautious, while others may become overly alarmed. Understanding common objections and how to address them constructively helps you navigate these conversations effectively.
"We're too small to worry about AI governance"
Small nonprofits sometimes assume that formal AI governance is only necessary for large organizations with dedicated IT departments. In reality, smaller organizations often face greater risk because they lack technical expertise and have fewer resources to recover from AI-related incidents.
Respond by emphasizing that AI governance doesn't need to be complicated or bureaucratic. A simple two-page policy, quarterly check-ins, and clear accountability can provide meaningful risk management without creating administrative burden. Small organizations need governance frameworks appropriate to their scale, not permission to skip governance entirely. Our article on AI strategy for small nonprofits offers practical approaches.
"This will slow down innovation and make us less competitive"
Some board members worry that risk management creates bureaucratic friction that prevents the organization from keeping pace with more innovative peers. This concern reflects a false choice between governance and agility.
Address this by explaining that good governance actually accelerates responsible innovation by creating clear parameters within which staff can act confidently. When staff understand what's permitted and have processes for getting approval when needed, they move faster than when operating in governance ambiguity. Moreover, the reputational and operational damage from AI incidents creates far more disruption than thoughtful risk management ever will.
"We don't have the expertise to evaluate these risks"
Board members may feel overwhelmed by technical complexity and question whether they're qualified to provide AI oversight. This is actually a reasonable concern that reflects appropriate humility about technical expertise.
Respond by emphasizing that board oversight focuses on values, mission alignment, and risk tolerance, not technical implementation details. Board members don't need to understand machine learning algorithms to ask important questions like "How do we know this system isn't discriminating against vulnerable populations?" or "What's our plan if this vendor raises prices by 300%?" You might also consider whether the board needs additional expertise through recruitment, advisory support, or training. Building AI literacy across your organization, including at the board level, is explored in our article on developing AI champions.
"Aren't we being overly cautious? Everyone uses these tools"
Some board members may view risk concerns as excessive hand-wringing, especially if they use AI tools personally without incident. The "everyone's doing it" argument can make governance seem unnecessarily restrictive.
Address this by distinguishing between personal AI use and organizational AI use. When individuals use AI tools, they accept risks for themselves. When nonprofits use AI, they accept risks on behalf of donors, beneficiaries, and communities who trust them with sensitive information and expect ethical stewardship. Nonprofits are held to higher accountability standards than individuals, and board governance reflects that responsibility. Also note that just because risks haven't materialized yet doesn't mean they don't exist; proactive governance prevents problems rather than reacting to them.
"Can't we just have staff handle this without board involvement?"
Some boards prefer to delegate AI decisions entirely to staff, viewing it as an operational rather than governance matter. While delegation is appropriate for implementation details, wholesale abdication of oversight is not.
Explain that board fiduciary duty includes understanding major organizational risks, and AI has become a major risk category. The board doesn't need to approve every AI tool purchase, but they do need to establish policies, understand risk exposure, and ensure adequate resources for responsible implementation. This is consistent with how boards approach other risk areas: they don't manage the details, but they do provide oversight. Point to the statistics: 54% of boards aren't discussing AI risks at all, which represents a governance gap the organization can't afford.
Moving Forward: Building a Culture of Responsible AI Governance
Communicating AI risks to your board is not a one-time event but the beginning of an ongoing governance practice. The most effective nonprofit AI implementations are those where boards and staff work together to navigate both opportunities and risks with transparency, accountability, and shared commitment to mission.
As you build this governance culture, remember that the goal is not to eliminate all risk; that's neither possible nor desirable. The goal is to ensure that risks are understood, intentionally accepted or mitigated, and aligned with organizational values and capacity. Some risks are worth taking in service of mission impact. Others should be avoided because the potential harm outweighs the potential benefit. Boards can only make these distinctions when they have complete information about both benefits and risks.
The nonprofit sector has a unique opportunity to model responsible AI adoption that prioritizes community benefit, equity, and ethical stewardship over pure efficiency. By bringing your board into honest, ongoing conversations about AI risks, you help ensure that your organization's AI journey reflects the values that make nonprofit work meaningful in the first place.
Start with one board meeting. Present one or two risk categories that are most relevant to your current AI use. Propose one concrete governance action. Build from there. AI governance maturity develops incrementally, through repeated cycles of communication, decision-making, implementation, and learning. Your board members don't need to become AI experts overnight; they need to become informed stewards who ask good questions and hold the organization accountable to its commitments.
The statistics show that most nonprofit boards aren't having these conversations yet. By initiating them at your organization, you're not just managing risk; you're helping to establish new norms for responsible AI governance across the sector. That leadership matters far beyond your individual organization.
Need Help Building Your AI Governance Framework?
One Hundred Nights works with nonprofit leaders and boards to develop practical, mission-aligned AI governance strategies. We can help you communicate risks effectively, establish appropriate oversight structures, and build board confidence in responsible AI adoption.
