AI Risk Registers for Nonprofit Boards: Tracking and Mitigating Technology Risks
As nonprofits deploy more AI tools across more functions, boards need a structured way to identify, track, and govern the risks those tools create. An AI risk register is the foundation of responsible board oversight.

Nonprofit boards have always been responsible for risk oversight, reviewing financial controls, monitoring compliance obligations, and ensuring the organization's operations align with its mission. What has changed in the past two years is that AI has become a significant and rapidly evolving source of institutional risk, including for organizations that never had technology risk on their governance agenda.
The risks are real and varied. A chatbot deployed for client intake could produce harmful advice. An AI-powered fundraising tool could process donor data in ways that violate privacy regulations. A machine learning model used for program evaluation could embed historical biases and disadvantage certain beneficiary populations. An AI vendor could be acquired, discontinue its nonprofit pricing tier, or experience a data breach. Any of these scenarios represents an organizational risk, and boards that aren't systematically tracking them are failing in their fiduciary and governance responsibilities.
An AI risk register is the mechanism through which boards fulfill this responsibility. It is a living document that identifies every significant AI application in use across the organization, assesses the risks associated with each, assigns clear ownership and mitigation responsibilities, and establishes a regular review cadence so the board can monitor how risks evolve as the technology and its applications change. This article explains how to build one, what to put in it, and how boards should engage with it over time.
The framing here is practical rather than theoretical. Many governance frameworks for AI exist at the enterprise and government level, but most are written for large organizations with dedicated risk management functions and legal teams. What follows is designed for nonprofits of all sizes, including those where a single board member may be leading the AI governance conversation with minimal staff support.
Why AI Risk Is Now a Board-Level Responsibility
Boards have fiduciary duties that include oversight of organizational risk. For decades, the primary risks in nonprofit governance were financial: budget management, fraud prevention, audit compliance, and investment stewardship. Technology risk occupied a secondary position, occasionally surfacing as a cybersecurity concern or a software procurement decision but rarely rising to board-level strategic attention.
AI changes this calculus in three important ways. First, AI is being deployed across mission-critical functions, including client services, fundraising, HR, and program evaluation, not just back-office administrative tasks. When AI touches mission delivery, AI risk is mission risk. Second, AI decisions can have consequential effects on beneficiaries, staff, donors, and communities that traditional technology decisions did not. A misconfigured database affects operations; a biased AI model in a service allocation system affects people's access to care. Third, the regulatory and legal landscape around AI is changing rapidly, with new compliance obligations taking effect in 2026 in multiple states, creating liability exposure that boards are expected to monitor.
The Forvis Mazars framework for board AI oversight, examined in our earlier article on board AI governance for nonprofits, identifies AI risk oversight as one of three core board responsibilities alongside AI strategy and AI policy. A risk register operationalizes that oversight responsibility by giving the board a structured, consistent way to monitor the AI risk landscape rather than addressing risks reactively as they arise.
The urgency is also organizational. The 2024 Nonprofit Standards Benchmarking Survey found that while the vast majority of nonprofits have adopted AI, fewer than 10% have formal policies governing its use. Boards that don't know what AI their organization is using, who owns each tool, what data it accesses, and what safeguards are in place are operating blind. Building a risk register starts with an inventory that many organizations need simply to understand their own AI footprint.
What Belongs in an AI Risk Register
A well-designed AI risk register tracks risks at two levels: the organizational level (risks that apply to your AI program broadly) and the tool level (risks associated with specific AI applications in use). Both levels are necessary. Tool-level risks are easier to identify and own, but organizational-level risks like data governance gaps, lack of staff training, or absent AI policies create vulnerabilities across all your AI applications simultaneously.
The core structure of each risk entry should include the risk description and its potential impact, the likelihood that the risk will materialize in the near term, the specific AI tool or context where the risk exists, the staff owner responsible for monitoring and mitigating the risk, the current mitigation measures in place, any gaps in mitigation that need to be addressed, and the date of the last review. This structure is deliberately simple. A risk register that requires too much administrative overhead won't be maintained consistently.
Core Risk Register Fields
What to capture for each AI risk entry
Required Fields
- Risk ID and title (for easy reference in board discussions)
- Risk description: what could go wrong and what the impact would be
- Risk category (see categories below)
- Likelihood rating (Low / Medium / High)
- Impact severity rating (Low / Medium / High / Critical)
- Staff owner responsible for mitigation
Additional Fields
- Relevant AI tool(s) or function affected
- Current mitigation measures in place
- Mitigation gaps or open action items
- Target resolution date for open action items
- Date of last review and next scheduled review
- Risk trend (Increasing / Stable / Decreasing)
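To make the structure concrete, here is a minimal sketch of one register entry modeled in code; the same structure maps one-to-one onto spreadsheet columns. The field names mirror the list above, while the types, the example ID format, and the rating strings are illustrative assumptions rather than a prescribed schema.

```python
# A minimal sketch of one AI risk register entry. Field names mirror the
# field list above; types, example values, and rating strings are
# illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    risk_id: str            # e.g., "PRIV-01", for easy reference in board discussions
    title: str
    description: str        # what could go wrong and what the impact would be
    category: str           # one of the seven categories described below
    likelihood: str         # "Low" / "Medium" / "High"
    impact: str             # "Low" / "Medium" / "High" / "Critical"
    owner: str              # staff member responsible for mitigation
    tools_affected: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)   # current measures in place
    open_actions: list[str] = field(default_factory=list)  # mitigation gaps to address
    target_resolution: date | None = None
    last_reviewed: date | None = None
    next_review: date | None = None
    trend: str = "Stable"   # "Increasing" / "Stable" / "Decreasing"
```

For most nonprofits a shared spreadsheet with one column per field is the right tool; the value comes from capturing the same fields consistently for every entry, not from the technology used to store them.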
For organizations that want a more formal framework, the NIST AI Risk Management Framework (AI RMF 1.0) organizes AI risk management into four functions: Govern (establish accountability structures and policies), Map (identify and categorize AI systems and their impacts), Measure (assess risks using consistent metrics), and Manage (prioritize, treat, and monitor risks over time). The NIST framework is the most widely referenced public standard for AI governance and can serve as a backbone for nonprofits designing more comprehensive AI risk programs.
The likelihood and impact ratings produce a risk priority score that helps the board focus attention on the most significant concerns. A risk rated High Likelihood plus Critical Impact demands immediate attention and board-level discussion. A risk rated Low Likelihood plus Low Impact can be monitored passively with periodic check-ins. The combination of these two dimensions creates a simple risk matrix that makes prioritization straightforward.
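The scoring itself is simple enough to express in a few lines. The sketch below multiplies the two ordinal ratings and maps the result to the three attention levels just described; the numeric weights and band thresholds are illustrative assumptions that should be calibrated to your own board's risk appetite.

```python
# A minimal sketch of likelihood-times-impact prioritization. The numeric
# weights and band thresholds are illustrative assumptions; calibrate them
# to your organization's risk appetite.

LIKELIHOOD = {"Low": 1, "Medium": 2, "High": 3}
IMPACT = {"Low": 1, "Medium": 2, "High": 3, "Critical": 4}

def priority_score(likelihood: str, impact: str) -> int:
    # Multiplying the two ordinal ratings yields 1 (Low/Low)
    # up to 12 (High/Critical).
    return LIKELIHOOD[likelihood] * IMPACT[impact]

def priority_band(score: int) -> str:
    if score >= 9:
        return "Immediate board attention"
    if score >= 4:
        return "Active mitigation and regular review"
    return "Passive monitoring with periodic check-ins"

# The two cases from the paragraph above:
print(priority_band(priority_score("High", "Critical")))  # Immediate board attention
print(priority_band(priority_score("Low", "Low")))        # Passive monitoring with periodic check-ins
```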
One important design principle: the risk register should be owned by staff, not by the board. The board's role is to review the register, ask questions, set expectations for how risks are managed, and hold leadership accountable for timely mitigation. If the board is responsible for building and updating the register, it will never get done. Assign clear staff ownership, typically the Executive Director, Chief Operating Officer, or Chief Technology Officer, and make risk register updates a standing item in board reporting.
The Core AI Risk Categories for Nonprofit Boards
Nonprofit AI risks cluster into seven primary categories. Organizations at an early stage of AI adoption may find that some categories don't apply yet; the register should still acknowledge them as future risks to monitor as AI use expands. Each category includes several specific risks that boards should examine at a minimum.
1. Data Privacy and Security Risks
These are often the most acute risks for nonprofits, given the sensitivity of beneficiary data in health, social services, legal aid, and immigration contexts.
- AI tools accessing or storing beneficiary PII without adequate security controls
- Staff entering sensitive client data into general-purpose AI tools (ChatGPT, Claude) without understanding how that data is used
- Data breach at an AI vendor exposing donor or beneficiary records
- HIPAA, FERPA, or state privacy law violations from AI use in healthcare or education programs
- Donor payment data or giving history transmitted to AI platforms without consent
2. Algorithmic Bias and Equity Risks
When AI is used in decisions that affect beneficiaries, bias in the model can translate directly into inequitable service delivery.
- AI models trained on historical data that reflect past inequities, producing biased recommendations
- Risk scoring or prioritization tools that disadvantage specific demographic groups
- AI-generated content that reflects cultural stereotypes or uses insensitive framing
- Absence of regular bias audits for AI systems used in program delivery
3. Regulatory and Legal Compliance Risks
The AI regulatory environment is changing rapidly, with multiple state laws taking effect in 2026 and ongoing federal activity creating new compliance obligations.
- Noncompliance with Colorado's AI Act, the Texas Responsible AI Governance Act (TRAIGA), California transparency laws, or other applicable state AI regulations
- AI-generated content that creates copyright, defamation, or intellectual property exposure
- AI use in employment decisions (hiring, performance management) that may conflict with employment discrimination law
- Failure to disclose AI use to clients or beneficiaries when required by law or contract
- Grant agreements that prohibit AI-generated content in deliverables without disclosure
4. Vendor Dependency and Business Continuity Risks
Nonprofits can become operationally dependent on AI vendors in ways that create significant vulnerability.
- Vendor discontinuing nonprofit pricing, being acquired, or shutting down services
- Critical workflows built around a single AI tool with no documented fallback
- Data portability limitations that make it difficult to migrate away from a vendor
- AI vendor contracts with unfavorable data ownership or liability clauses
- Service outages in AI-dependent workflows with no manual fallback procedures
5. Mission Alignment and Reputational Risks
AI use that conflicts with organizational values or donor expectations can damage reputation and trust even when no legal violation occurs.
- AI-generated fundraising content that donors perceive as inauthentic or manipulative
- AI decisions that appear to conflict with the organization's equity and inclusion commitments
- Public association with AI vendors whose practices conflict with your mission (e.g., labor practices, environmental impact)
- AI use in advocacy or communications that creates appearance of inauthentic grassroots activity
Real-world example: The National Eating Disorders Association deployed an AI chatbot called Tessa to support people with eating disorders. The chatbot was found to be giving some users calorie-restriction advice that could exacerbate disordered eating, and the organization took it down after suffering significant reputational harm. This incident illustrates how AI deployed with good intentions in direct service contexts can produce harmful outcomes when mission alignment isn't continuously monitored.
6. Accuracy, Hallucination, and Decision Quality Risks
AI-generated content and recommendations are not reliably accurate, creating risks when used without adequate human review.
- AI-generated grant application content containing factual errors or hallucinated citations
- AI-generated financial or legal summaries containing errors that staff rely on for material decisions
- Over-reliance on AI outputs in staff decision-making without sufficient human judgment
- AI chatbots providing incorrect information to clients about services, eligibility, or procedures
7. Governance and Policy Risks
Many nonprofits are adopting AI faster than their governance infrastructure can keep up, creating systemic vulnerabilities.
- No AI policy or acceptable use guidelines, leaving staff uncertain about appropriate AI use
- No inventory of AI tools in use across the organization
- No designated staff responsible for AI governance oversight
- Insufficient staff training on AI risks and responsible use practices
- Board without sufficient AI literacy to meaningfully oversee AI risk
How to Build Your AI Risk Register: A Step-by-Step Process
Building a risk register for the first time involves a structured process that starts with discovery and moves through assessment, prioritization, and ongoing maintenance. For most nonprofits, the initial build can be completed by a small working group over two to four weeks, with periodic updates handled as a routine governance function.
The AI Risk Register Build Process
Conduct an AI inventory (Week 1)
Survey all departments to identify every AI tool currently in use, including free tools, tools embedded in existing platforms, and tools staff are using independently. The goal is a complete list of what AI your organization is actually running, not just what IT knows about.
Categorize each tool by risk profile (Week 1-2)
Assign each tool to a risk tier based on what data it accesses, whether it touches clients or beneficiaries, and how consequential errors would be. A general-purpose writing tool that staff use for internal drafts carries different risk than an AI client intake system.
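The tiering logic in this step can be reduced to a handful of yes/no questions. The sketch below assumes three such questions and simple cut-offs; both the questions and the thresholds are illustrative assumptions, and your own criteria should reflect your programs and data sensitivity.

```python
# A minimal sketch of tool risk tiering, assuming three yes/no questions
# drive the tier. The questions and cut-offs are illustrative assumptions.

def risk_tier(accesses_sensitive_data: bool,
              touches_beneficiaries: bool,
              errors_are_consequential: bool) -> str:
    flags = sum([accesses_sensitive_data,
                 touches_beneficiaries,
                 errors_are_consequential])
    if flags >= 2:
        return "High"
    if flags == 1:
        return "Medium"
    return "Low"

# The contrast from the paragraph above:
print(risk_tier(False, False, False))  # internal drafting tool -> "Low"
print(risk_tier(True, True, True))     # AI client intake system -> "High"
```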
Identify risks across all seven categories (Week 2)
Working through the seven risk categories above, identify specific risks relevant to your organization's tools, programs, and context. Involve department heads who are closest to the AI applications in use.
Rate each risk for likelihood and impact (Week 2-3)
Assign a likelihood score (Low, Medium, High) and an impact score (Low, Medium, High, Critical) to each risk. Be honest about what your current safeguards do and don't address. A risk isn't Low Likelihood just because it hasn't happened yet.
Assign ownership and document current mitigations (Week 3)
For each risk, designate a staff owner responsible for ongoing monitoring and mitigation. Document what safeguards, policies, or controls are currently in place and what gaps remain.
Present to board and establish review cadence (Week 4)
Share the initial register with the board as a governance document. Agree on a regular review schedule and establish that updated risk summaries will be part of quarterly or semi-annual board reporting.
The first version of your register doesn't need to be perfect. A complete list of identified risks with rough ratings is far more useful for board oversight than an empty document that you're still refining. You can add nuance in subsequent reviews as your organization develops more experience with its AI applications and a clearer understanding of which risks are most relevant.
For small nonprofits with limited staff capacity, a simplified register with 10-20 entries covering the highest-priority risks is a perfectly appropriate starting point. The register should scale with your organization's AI footprint, not the other way around.
How Boards Should Use the Risk Register
Building a risk register is the beginning of the governance process, not the end. The register's value comes from consistent use, not from its existence. Boards that file the initial register and return to it once a year are not providing meaningful oversight. Boards that build risk review into regular governance cycles are.
The standard approach is to have the Executive Director or designated AI staff lead provide a risk register update at each board meeting. This doesn't require reading through every entry. The summary report should highlight any risks that have increased in severity since the last review, new risks identified since the last review, action items that are overdue or unresolved, and any significant changes to the organization's AI footprint, such as new tools adopted or existing tools discontinued.
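That summary can be produced mechanically from a well-maintained register. The sketch below assumes entries are kept as records keyed by the fields from the earlier field list, plus a hypothetical date_added field (not in that list) so genuinely new risks can be flagged.

```python
# A minimal sketch of the board summary report described above. Entries are
# plain dicts keyed by the register fields; the date_added key is a
# hypothetical addition so new risks can be distinguished from updated ones.
from datetime import date

def board_summary(register: list[dict], last_meeting: date) -> dict:
    today = date.today()
    return {
        # Risks whose severity has trended upward since the last review
        "increasing": [r["risk_id"] for r in register
                       if r.get("trend") == "Increasing"],
        # Risks identified since the last board meeting
        "new_risks": [r["risk_id"] for r in register
                      if r.get("date_added") and r["date_added"] > last_meeting],
        # Open action items past their target resolution date
        "overdue_actions": [r["risk_id"] for r in register
                            if r.get("open_actions") and r.get("target_resolution")
                            and r["target_resolution"] < today],
    }

# Example with one hypothetical entry:
register = [{
    "risk_id": "PRIV-01",
    "trend": "Increasing",
    "date_added": date(2025, 2, 10),
    "open_actions": ["Complete vendor security review"],
    "target_resolution": date(2025, 3, 1),
}]
print(board_summary(register, last_meeting=date(2025, 2, 1)))
```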
Full risk register reviews, where the board works through all entries in detail, are typically conducted annually. This annual review is the opportunity to update risk ratings based on new information, evaluate whether mitigation measures are working, remove risks that have been resolved, and add new risks that have emerged as the AI landscape evolves.
The board's role in risk register discussions is to ask questions, not to manage risks directly. Useful board questions include: Who owns this risk and what are they doing about it? What would it take to bring this risk from High to Medium? What would happen to operations if this risk materialized? Has the organization consulted legal counsel on this compliance risk? Are there resources needed to address this gap that aren't in the current budget? These questions push leadership toward accountability without the board crossing into operational territory.
Boards can also use the risk register to inform strategic decisions. If the register consistently shows high vendor dependency risks across multiple tools, that's a signal that the board should ask leadership about diversification strategy and contingency planning. If governance and policy risks dominate the register, the board should ensure that resource allocation for AI governance improvement is reflected in the budget. The register makes risk patterns visible in a way that drives strategic, rather than just reactive, oversight.
Suggested Board AI Risk Review Cadence
- Every board meeting: brief risk status update covering any new High/Critical risks, overdue action items, and significant AI footprint changes since the last meeting
- Quarterly or semi-annually: review of all High- and Critical-rated risks with mitigation progress, plus updates on open action items and compliance status
- Annually: full risk register review to reassess all risk ratings, evaluate mitigation effectiveness, update the AI inventory, add emerging risks, and remove resolved risks
- As triggered: ad hoc review when a new AI law takes effect, a significant vendor change occurs, an AI-related incident happens, or a major new AI tool is adopted
Connecting the Risk Register to Broader AI Governance
An AI risk register is one component of a broader AI governance framework, not a standalone document. For it to function effectively, it needs to connect to other governance infrastructure: an AI policy that defines acceptable use, clear procurement processes that include risk assessment for new tools, training programs that build staff awareness of the risks the register identifies, and incident response protocols for when AI risks materialize.
The risk register is also most effective when it connects to the organization's overall strategic AI direction. Boards that have engaged in building their own AI literacy are better equipped to ask meaningful questions about risk register entries. Boards that have reviewed their organization's AI governance policies are better positioned to evaluate whether the risk mitigations described in the register are adequate.
As nonprofits move further along the AI maturity curve, the risk register should evolve as well. Early-stage organizations often focus primarily on governance and privacy risks because those are the most immediate. More mature AI adopters need to add risks associated with agentic AI, multi-tool workflows, and AI-generated content at scale. The register is a living document that should reflect your current AI reality, not a snapshot of where you were two years ago.
Organizations considering building out a full board AI oversight program should also look at AI governance dashboards that make real-time organizational health data available to board members, and consider how risk register information can feed into those dashboards for a more integrated view of AI risk and performance.
Getting Started: What Your Board Can Do This Quarter
Building a comprehensive AI risk register is a multi-week process, but there are actions your board can take immediately to begin exercising AI risk oversight even before the register is complete.
This Quarter's AI Governance Priorities
- Request an AI inventory from leadership. Ask the Executive Director to provide a list of all AI tools currently in use across the organization at the next board meeting. This single request often surfaces tools and gaps that neither the board nor staff realized existed.
- Ask about the AI policy. Does your organization have an AI acceptable use policy? When was it last reviewed? Does it address the state AI laws taking effect in 2026?
- Designate a board AI risk liaison. Assign one board member, ideally one with some technology familiarity, to take a leading role in AI risk oversight. This person can review the risk register between meetings and brief the full board on the most significant items.
- Commission the initial risk register. Task the Executive Director with producing a first version of the AI risk register using the framework above within 30-60 days, and schedule board time to review it at the following meeting.
- Add AI risk to standing board reporting. Establish that AI risk status will be included in the Executive Director's report at every board meeting going forward, even if it's a brief one-paragraph update.
Conclusion
AI risk oversight is no longer optional for nonprofit boards. The same fiduciary duty that requires boards to oversee financial risk, legal compliance, and reputational risk now extends to the AI tools their organizations are deploying across fundraising, program delivery, HR, and communications. Boards that are not systematically tracking AI risks are failing a governance responsibility that regulators, funders, and beneficiaries will increasingly expect them to fulfill.
An AI risk register provides the structure to fulfill that responsibility without requiring boards to become technical experts. By identifying risks, rating them, assigning ownership, and reviewing them regularly, boards create accountability, surface problems before they become crises, and demonstrate to stakeholders that the organization takes responsible AI use seriously.
The register doesn't need to be comprehensive on day one. Start with what you know, rate it honestly, assign clear ownership, and build the governance habit of regular review. A simple, maintained risk register is worth far more than a sophisticated one that sits untouched in a shared drive.
Build Your AI Governance Framework
Our team works with nonprofit boards and leadership teams to design AI governance frameworks, build risk registers, and create the policies and training programs that make responsible AI adoption possible.
