
    How to Update Your Data Governance Policy for the AI Era

    The rapid adoption of AI tools across the nonprofit sector has created an urgent governance gap. While 82% of nonprofits now use AI in some capacity, fewer than 10% have formal policies governing its use—and even fewer have updated their foundational data governance policies to address AI-specific risks. This guide provides a practical framework for reviewing and updating your data governance policy to protect your organization, maintain stakeholder trust, and ensure compliance in an era where AI touches nearly every aspect of nonprofit operations.

Published: January 25, 2026 · 15 min read · AI Governance & Policy
[Image: Nonprofit team reviewing data governance documentation]

    Data governance has always been fundamental to responsible nonprofit operations. The policies, processes, and standards that ensure data is managed effectively, securely, and consistently enable organizations to fulfill their missions while protecting the people they serve. But the introduction of AI tools into daily operations has fundamentally changed the data landscape—creating new categories of risk, new compliance obligations, and new ethical considerations that most existing policies simply don't address.

    Consider how AI tools interact with your organization's data: staff might paste donor information into ChatGPT to draft personalized appeals, upload beneficiary records to AI analysis tools for outcome prediction, or use AI-powered CRM features that automatically process sensitive information. Each interaction represents potential data exposure, privacy implications, and compliance concerns that traditional data governance policies weren't designed to handle. Without explicit guidance, well-meaning staff make daily decisions about AI and data that could expose your organization to significant risk.

The regulatory landscape adds urgency to this update. January 2026 brought major changes, including California's new Automated Decision-Making Technology regulations and new privacy laws taking effect in Indiana, Kentucky, and Rhode Island, with the EU AI Act becoming fully applicable in August 2026. Organizations that haven't updated their governance frameworks face increasing compliance exposure, with penalties that can reach €35 million or 7% of global revenue for serious violations under the EU AI Act.

    Beyond compliance, updating your data governance policy is about maintaining stakeholder trust. Donors, beneficiaries, volunteers, and funders increasingly want to understand how their information is used—particularly when AI is involved. Research shows that 31% of donors give less when they learn AI is used in fundraising communications, largely due to concerns about data handling and personalization. Clear governance policies, communicated transparently, can transform this skepticism into confidence. For guidance on communicating AI use to donors, see our article on transparency in AI-powered fundraising.

    This article walks through the key areas where your data governance policy needs AI-specific updates, provides frameworks for addressing each area, and offers practical guidance for the update process itself. Whether you're working with legal counsel on a comprehensive revision or making targeted updates with limited resources, these principles will help ensure your governance framework is fit for purpose in the AI era.

    Understanding the AI Data Governance Gap

    Traditional nonprofit data governance policies typically address data collection, storage, access controls, retention, and disposal. They establish who can access what data, how long data is kept, and what security measures protect it. These foundational elements remain essential—but they don't address the unique ways AI tools create, process, and potentially expose organizational data. Understanding these gaps is the first step toward addressing them.

    Data Flows Beyond Organizational Control

    When data leaves your systems for AI processing

    Most AI tools are cloud-based services operated by third parties. When staff use these tools, organizational data flows outside your direct control—to servers you don't manage, under terms of service you may not have reviewed, potentially in jurisdictions with different privacy laws. Your current policy may carefully govern internal data access while being silent on this external data flow that now happens dozens of times daily.

    Consider: when a program manager pastes client intake notes into an AI tool for summarization, where does that data go? How long is it retained? Is it used to train the AI model? Could it surface in responses to other users? These questions may have no answers in your current governance framework, yet they arise constantly in day-to-day operations.

    • Most AI tools process data on external servers outside organizational control
    • Terms of service may permit data use for model training
    • Data residency and cross-border transfer issues may apply

    New Categories of Sensitive Data

    AI interactions create new data protection concerns

    AI introduces new types of data that require governance: prompts and conversations that may contain sensitive information, AI-generated outputs that could be inaccurate or biased, automated decisions that affect individuals, and the inferences AI systems draw from data. Your current policy likely has clear categories for donor data, beneficiary data, and financial records—but what about AI interaction logs that contain fragments of all these categories combined?

    The 2026 regulatory updates expand sensitive data definitions significantly. Neural data is now protected in some jurisdictions. Inferences about individuals—conclusions an AI draws from analyzing someone's data—are increasingly treated as personal data requiring consent and protection. Your governance framework needs categories and controls for these new data types.

    • AI prompts and conversations as a new data category
    • Inferences and predictions require protection
    • Automated decision outputs need governance

    Automated Decision-Making Accountability

    When AI makes or influences decisions about people

    When AI tools are used to score donors for major gift potential, prioritize beneficiaries for services, or screen job applicants, they're making or influencing decisions that significantly affect people. California's new ADMT regulations, effective January 2026, require organizations to allow consumers to opt out of automated decision-making for significant decisions—and to conduct risk assessments for such systems.

    Your data governance policy needs to address who can authorize AI-assisted decisions, what documentation is required, how affected individuals can contest decisions, and what human oversight is maintained. For organizations using AI in program delivery, these considerations intersect directly with your mission and values.

    • New regulations require opt-out rights for significant automated decisions
    • Risk assessments are mandatory for high-impact AI applications
    • Human oversight requirements must be documented and enforced

    Essential Policy Updates for AI Data Governance

    Updating your data governance policy for AI doesn't require starting from scratch. Most organizations can build on their existing framework by adding AI-specific provisions to each major section. The following areas represent the most critical updates, organized by the typical structure of data governance policies.

    Data Classification Updates

    Expanding categories to address AI-specific data types

    Your data classification scheme needs to expand to include AI-related data types. This means adding categories for AI interaction data (prompts, conversations, and outputs), AI-generated content (distinguishing it from human-created content), inference data (predictions and scores generated by AI analysis), and model training data (if you use any AI tools that train on your data).

    Each new category needs a classification level (public, internal, confidential, restricted) and handling requirements. AI prompts containing beneficiary information should be treated with the same sensitivity as the underlying beneficiary records. AI-generated content should be labeled to ensure staff know it may require verification. These classifications drive the controls that apply to each data type.

    Recommended Classification Additions

    • AI Interaction Data: Classified based on the sensitivity of content within prompts and outputs
    • AI-Generated Content: Labeled clearly and subject to accuracy verification before external use
    • Inference Data: Treated as personal data when identifying individuals; subject to consent requirements
    • Automated Decision Records: Classified as confidential; retention aligned with appeals periods
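
    For teams that want these additions to be more than prose, the rules can also be captured in a machine-readable form that training materials or internal tooling can reference. The sketch below is one illustrative way to encode them in Python; the category names, levels, and handling notes are assumptions drawn from the list above, not a standard.

```python
# A minimal sketch of AI-era data classification rules. Category names,
# levels, and handling notes are illustrative assumptions, not a standard.

CLASSIFICATION_LEVELS = ("public", "internal", "confidential", "restricted")

AI_DATA_CATEGORIES = {
    "ai_interaction_data": {
        # Inherits the sensitivity of whatever appears in prompts and outputs.
        "level": "inherits_content",
        "handling": "Classify each log at the level of its most sensitive content.",
    },
    "ai_generated_content": {
        "level": "internal",
        "handling": "Label as AI-generated; verify accuracy before external use.",
    },
    "inference_data": {
        "level": "confidential",
        "handling": "Treat as personal data when it identifies individuals; check consent.",
    },
    "automated_decision_records": {
        "level": "confidential",
        "handling": "Retain through the appeals period; document human oversight.",
    },
}
```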

    Data Access and Authorization

    Controlling who can use AI tools with organizational data

    Traditional access controls focus on who can view or modify data within organizational systems. AI introduces a new dimension: who can share data with AI tools, and under what circumstances? Your policy update should establish which AI tools are approved for use with organizational data, which data classifications can be used with each approved tool, what authorization is required for different use cases, and how to document AI tool usage for audit purposes.

    Many organizations establish tiers of AI tool authorization. Consumer AI tools (free ChatGPT, Claude) may be approved only for public or internal data with no personal information. Enterprise AI tools with appropriate data processing agreements may be approved for confidential data. Restricted data—HIPAA-protected health information, for example—may require specialized AI tools with specific security certifications.

    Access Control Framework

    • Tool Approval Process: Formal review before new AI tools can be used with organizational data
    • Data-Tool Matrix: Clear mapping of which data classifications can be used with which approved tools
    • Role-Based Authorization: Different staff roles have different AI tool and data access permissions
    • Usage Logging: Requirements for documenting when and how AI tools process organizational data
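
    A data-tool matrix like the one described above can also be expressed as a simple lookup that staff tooling or an approval form could consult. This is a minimal sketch with hypothetical tool names and an assumed four-level classification scheme; your approved tools and levels will differ.

```python
# Sketch of a data-tool authorization check. Tool names and the approval
# ceilings are hypothetical examples, not recommendations for specific vendors.

APPROVED_TOOLS = {
    # tool id: highest data classification the tool may process
    "consumer_chatbot": "internal",          # e.g., a free consumer AI tool
    "enterprise_assistant": "confidential",  # covered by a data processing agreement
    "hipaa_certified_tool": "restricted",    # specialized certification in place
}

# Order matters: later levels are more sensitive.
LEVELS = ["public", "internal", "confidential", "restricted"]

def is_authorized(tool: str, data_level: str) -> bool:
    """Return True if the tool is approved for data at this classification."""
    ceiling = APPROVED_TOOLS.get(tool)
    if ceiling is None:
        return False  # unapproved tools are denied by default
    return LEVELS.index(data_level) <= LEVELS.index(ceiling)

# Example: pasting donor records (confidential) into a consumer chatbot is denied.
assert not is_authorized("consumer_chatbot", "confidential")
assert is_authorized("enterprise_assistant", "confidential")
```

    The deny-by-default behavior for unlisted tools mirrors the tool approval process above: nothing is usable with organizational data until it has been formally reviewed.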

    Privacy and Consent Updates

    Addressing AI in privacy notices and consent frameworks

    Your privacy notices and consent frameworks likely need updating to address AI data processing. Under current regulations, individuals have a right to know if their data is used by AI systems, particularly for automated decision-making. California's ADMT regulations require disclosure when AI is used for decisions that significantly affect consumers, along with opt-out mechanisms.

    Review your donor privacy policy, beneficiary consent forms, employee data notices, and volunteer agreements. Each should address whether AI tools may process the individual's data, for what purposes, what automated decisions may result, and how individuals can opt out or request human review. This transparency builds trust while ensuring compliance.

    Privacy Notice Requirements

    • AI Processing Disclosure: Clear statement of whether and how AI processes personal data
    • Purpose Specification: Specific purposes for which AI is used with personal data
    • Automated Decision Disclosure: Notice when AI makes or influences significant decisions
    • Rights Information: How individuals can opt out, contest decisions, or request human review

    Third-Party Data Sharing

    Governance for AI vendors and service providers

    Using AI tools typically means sharing data with third parties—the AI vendors. Your data governance policy should establish requirements for AI vendor assessment, including data processing agreements, security certifications, data residency commitments, and model training policies. The EU AI Act and various state privacy laws impose specific requirements on organizations that use AI systems, making vendor governance a compliance obligation.

    Key questions for AI vendors include: Where is data processed and stored? Is data used to train AI models? How long is data retained? What security measures protect data in transit and at rest? Can data be deleted on request? Your policy should require satisfactory answers to these questions—documented in appropriate agreements—before AI tools are approved for use with organizational data.

    Vendor Assessment Requirements

    • Data Processing Agreement: Required for any AI vendor processing personal data
    • Training Data Opt-Out: Confirmation that organizational data won't train vendor models
    • Security Certification: SOC 2, ISO 27001, or equivalent for sensitive data processing
    • Data Residency: Clear understanding of where data is processed and stored
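
    Organizations that formalize vendor review sometimes record the outcome in a structured checklist so approval decisions are auditable. The sketch below uses assumed field names that mirror the requirements above; it illustrates the gating logic, not a legal standard.

```python
# Sketch of a vendor assessment record used to gate tool approval. Field
# names mirror the checklist above; they are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class VendorAssessment:
    vendor: str
    has_dpa: bool                # data processing agreement signed
    training_opt_out: bool       # confirmed our data won't train vendor models
    security_certification: str  # e.g., "SOC 2", "ISO 27001", or "" if none
    data_residency: str          # documented processing and storage location

    def approved_for_personal_data(self) -> bool:
        """All four checklist items must be satisfied before approval."""
        return (
            self.has_dpa
            and self.training_opt_out
            and bool(self.security_certification)
            and bool(self.data_residency)
        )

# Example: a vendor without a training opt-out is not approved.
candidate = VendorAssessment("ExampleAI", True, False, "SOC 2", "US")
assert not candidate.approved_for_personal_data()
```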

    Data Retention and Disposal

    Managing the lifecycle of AI-related data

    AI interactions generate new data that requires retention decisions. How long should you keep AI conversation logs? What about AI-generated reports or analysis? Automated decision records? Your policy needs clear retention schedules for these new data types, balancing operational needs against privacy principles of data minimization.

    Consider regulatory requirements when setting retention periods. Automated decision records may need retention aligned with appeals windows or legal challenge periods. AI outputs used for official purposes should follow the same retention schedules as equivalent human-created content. Routine AI interactions without lasting business purpose should have short retention periods or be deleted immediately.

    Retention Schedule Additions

    • Routine AI Interactions: Minimal retention; delete after session or within 30 days
    • Automated Decision Records: Retain for duration of appeals period plus legal hold requirements
    • AI Analysis Outputs: Same retention as equivalent human-created analysis
    • Model Training Data: Document what was used; retain records of consent and authorization
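
    If your systems automate disposal, the schedule above can be encoded so records are flagged when their retention period lapses. This sketch uses assumed periods for illustration; actual periods should come from your policy and legal counsel.

```python
# Sketch of retention rules for AI-related records. Periods are assumed
# examples; set real ones from your policy, counsel, and appeals windows.

from datetime import date, timedelta

RETENTION_RULES = {
    "routine_ai_interaction": timedelta(days=30),   # delete within 30 days
    "ai_analysis_output": timedelta(days=365 * 7),  # assumed: match equivalent human analysis
    "automated_decision_record": None,              # event-based: appeals period + legal holds
}

def disposal_date(record_type: str, created: date) -> date | None:
    """Return the date a record becomes eligible for disposal, or None if
    disposal depends on an event (e.g., the end of an appeals period)."""
    period = RETENTION_RULES.get(record_type)
    return created + period if period else None

print(disposal_date("routine_ai_interaction", date(2026, 1, 25)))  # 2026-02-24
```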

    Regulatory Compliance Considerations

    The regulatory environment for AI and data privacy is evolving rapidly. While comprehensive compliance guidance requires legal counsel familiar with your specific situation, understanding the major frameworks helps shape your policy updates. The following regulations are most likely to affect nonprofit data governance in 2026.

    U.S. State Privacy Laws

    Expanding consumer protection requirements

    Twenty U.S. states now have comprehensive privacy laws, with Indiana, Kentucky, and Rhode Island joining the enforcement phase in January 2026. California continues to lead with the most stringent requirements, including the new Automated Decision-Making Technology regulations that require disclosure, opt-out rights, and risk assessments for AI-powered decisions affecting consumers.

Key requirements across states include:

    • Enhanced consent requirements for sensitive data (often including precise geolocation and certain demographic information)
    • Universal opt-out mechanisms that must be honored when consumers use browser signals or privacy tools
    • Expanded individual rights, including access, correction, deletion, and portability
    • Vendor contract requirements ensuring downstream data protection

For nonprofits operating across state lines or serving constituents in multiple states, the practical approach is to design governance for the strictest applicable requirements. This creates a single compliance baseline covering all jurisdictions, rather than a separate policy to manage for each state.

    EU AI Act and GDPR

    Requirements for organizations with EU connections

    The EU AI Act becomes fully applicable in August 2026, establishing risk-based obligations for AI systems. While primarily targeting AI developers and deployers in the EU, international nonprofits and those serving EU residents need to understand the framework. High-risk AI applications—including those affecting access to education, employment, or essential services—face specific requirements for documentation, human oversight, and quality management.

    GDPR continues to apply to any organization processing EU resident data, with AI-specific implications under the Digital Omnibus updates. These include new legal bases for processing sensitive data for AI testing and development, higher thresholds for breach notifications, and unified approaches to data protection impact assessments. The penalties remain significant: €20 million or 4% of global revenue for GDPR violations, and up to €35 million or 7% of global revenue for serious EU AI Act violations.

    For nonprofits without EU operations or data subjects, these regulations provide useful governance frameworks even where not legally binding. The risk-based approach and documentation requirements reflect emerging best practices that protect organizations regardless of jurisdiction.

    Sector-Specific Regulations

    HIPAA, FERPA, and other specialized requirements

    Nonprofits in healthcare, education, and other regulated sectors face additional requirements for AI data governance. HIPAA's requirements for protected health information apply when AI tools process PHI—requiring business associate agreements with AI vendors and specific security controls. FERPA imposes similar requirements for student education records, limiting how AI can be used for student data analysis.

    These sector-specific regulations generally weren't written with AI in mind, creating interpretation challenges. The safest approach is treating AI processing of regulated data as requiring the same protections as any other processing—with additional caution given the complexity of AI data flows. Document your compliance approach clearly and seek guidance from legal counsel familiar with both the sector regulations and AI applications.

    • HIPAA: Business associate agreements required; security rule applies to AI processing of PHI
    • FERPA: Student consent or authorized exception required for AI processing of education records
    • State regulations: Many states have additional requirements for specific data types

    Implementing Your Policy Update

    Policy documents only matter if they're implemented effectively. The process of updating your data governance policy for AI is as important as the policy content itself. A well-managed update process builds organizational awareness, surfaces practical concerns, and creates buy-in for compliance. The following approach balances thoroughness with pragmatism for resource-constrained nonprofits.

    Stakeholder Engagement Process

    Building understanding and buy-in across the organization

    Phase 1: Current State Assessment

    Before drafting policy updates, understand how AI is actually being used in your organization. Survey staff about their AI tool usage, review existing data governance policies for gaps, identify sensitive data that's most likely to interact with AI, and document current practices—both official and informal. This assessment reveals the practical challenges your policy needs to address.

    • Survey staff on current AI tool usage
    • Review existing policy documents for AI-related gaps
    • Map data flows that may involve AI processing

    Phase 2: Draft Development

    Develop draft policy updates with input from key stakeholders: IT leadership (or whoever manages technology), program directors (who understand operational data needs), development staff (who handle donor data), finance (for any financial data considerations), and legal counsel if available. Each perspective helps ensure policies are both protective and practical.

    • Convene cross-functional drafting team
    • Review model policies from peer organizations
    • Test draft provisions against real scenarios

    Phase 3: Review and Refinement

    Circulate draft policies for review by affected staff before finalization. This step catches impractical requirements, surfaces concerns, and builds awareness of coming changes. Legal review is valuable if affordable—but don't let perfect be the enemy of good. A practical policy implemented is better than a perfect policy stuck in review.

    • Circulate for comment period
    • Address practical concerns from front-line staff
    • Obtain legal review if available

    Phase 4: Approval and Rollout

    Obtain appropriate approval (board, executive director, or governance committee depending on your organization's structure) and plan the rollout. Training is essential—policies only work when people understand them. Consider phased implementation if major changes are involved, with clear timelines and support for compliance.

    • Secure appropriate governance approval
    • Develop training materials and rollout plan
    • Communicate changes clearly to all affected staff

    Creating Living Governance

    AI capabilities and regulations are evolving rapidly. Your updated policy should include provisions for regular review—at minimum annually, but ideally more frequently during this period of rapid change. Establish a process for evaluating new AI tools before adoption, assign responsibility for monitoring regulatory developments, and create mechanisms for staff to raise governance questions or report concerns.

    Consider establishing a small AI governance group—it doesn't need to be a formal committee—that meets quarterly to review tool usage, assess emerging risks, and recommend policy updates. This ongoing attention ensures governance keeps pace with practice rather than falling behind.

    Resources for Policy Development

    You don't have to develop your AI data governance policy from scratch. Several organizations have created resources specifically for nonprofits that can accelerate your update process. These templates and frameworks provide starting points that you can adapt to your organization's specific needs and context.

    NetHope Data Governance Toolkit

    NetHope's comprehensive toolkit guides nonprofits through implementing data governance with template policies, role definitions, roadmaps, and KPIs. While not AI-specific, it provides an excellent foundation that can be extended with AI provisions.

    • Template policies and procedures
    • Role and responsibility definitions
    • Implementation roadmaps

    NTEN AI Resource Hub

    NTEN's AI for Nonprofits resource hub includes governance frameworks, policy templates, and assessment tools specifically designed for the sector. The community forums provide peer support for organizations working through governance challenges.

    • AI governance framework
    • Policy templates and examples
    • Community peer support

    Vera Solutions: Responsible AI Principles

    Vera Solutions' nine principles of responsible AI for nonprofits provide an ethical framework that can inform governance policy development. The principles address transparency, accountability, fairness, and human oversight—core governance concerns.

    • Ethical principles framework
    • Implementation guidance
    • Mission alignment focus

    NIST AI Risk Management Framework

    While developed for broader audiences, NIST's AI Risk Management Framework provides a structured approach to identifying and managing AI risks. The framework can help nonprofits develop systematic governance practices aligned with emerging standards.

    • Comprehensive risk taxonomy
    • Assessment methodology
    • Alignment with regulatory expectations

    Building Governance That Enables and Protects

    Updating your data governance policy for the AI era is essential work that protects your organization, your stakeholders, and your mission. The gap between AI adoption (82% of nonprofits) and AI governance (less than 10% with formal policies) represents significant organizational risk—risk that grows as regulations tighten and stakeholder expectations rise. Closing this gap positions your organization for sustainable, responsible AI use.

    Effective AI data governance isn't about restricting innovation or creating bureaucratic obstacles. It's about establishing clear frameworks that enable staff to use AI confidently, knowing they're operating within appropriate boundaries. Well-designed policies answer the questions staff face daily: Can I use this tool with this data? What approval do I need? How should I document what I'm doing? Without clear answers, either staff avoid AI tools entirely (missing efficiency gains) or they use tools inappropriately (creating risks).

    The regulatory environment will continue evolving, with full EU AI Act applicability in August 2026 and additional U.S. state laws coming into effect throughout the year. Organizations that build robust governance frameworks now will be better positioned to adapt to new requirements than those scrambling to catch up later. The investment in governance pays dividends in reduced compliance risk, increased stakeholder trust, and more confident AI adoption.

    Start with the most critical gaps—data classification, access controls, and privacy notices—and build from there. Use available resources rather than creating everything from scratch. Engage stakeholders throughout the process to build understanding and buy-in. And plan for ongoing governance rather than treating policy updates as one-time events. The organizations that treat AI governance as an ongoing practice rather than a project will be best positioned for whatever changes come next.

    Your data governance policy is a living document that reflects your organization's values and commitments. Updating it for the AI era is an opportunity to reaffirm your commitment to protecting the people you serve while embracing tools that can amplify your impact. That combination—protection and possibility—is what responsible AI governance makes possible.

    Need Help with Your AI Governance Framework?

    We help nonprofits develop comprehensive data governance policies that address AI-specific requirements while remaining practical to implement. From policy review to full governance framework development, we can help you build the foundation for responsible AI use.