The Case for Nonprofit Data Standards: How Shared Schemas Could Transform Sector-Wide AI
The nonprofit sector spends vast amounts of resources reformatting the same data over and over because every platform speaks a different language. Shared data standards would change this, enabling AI tools that actually talk to each other and sector-wide insights that no single organization can generate alone.

Imagine a foundation that funds 200 organizations working on food security. Each of those organizations tracks program participation, meal counts, and nutrition outcomes. But each uses different definitions of a "participant," different ways of measuring meal quality, and different software systems that cannot talk to each other. The foundation wants to understand what works across its portfolio, and it wants to use AI to analyze patterns. But the AI cannot do this analysis because the data is incompatible at a fundamental level. Before any actual analysis can happen, someone has to spend months reformatting everything into a consistent structure. This is the nonprofit data standards problem, and it plays out thousands of times every day across the sector.
The absence of shared data standards is not just an inconvenience. It is a structural drag on the entire sector's capacity to learn, improve, and demonstrate impact. Organizations spend enormous shares of their evaluation budgets on data cleanup rather than analysis. AI tools that could surface powerful insights across programs remain limited to the specific, proprietary data formats of whichever CRM or program management system an organization happens to use. Funders cannot easily compare outcomes across grantees. Researchers cannot aggregate findings across organizations. The sector collectively knows far less than it could because its data infrastructure is fragmented by design, even when that design was accidental.
Other industries have faced this problem and solved it. Healthcare built HL7 and FHIR, standards that now enable nearly 500 million health records to be exchanged across systems that were previously incompatible. Finance built open banking standards that transformed a sector where every institution guarded its data jealously into one where interoperability drives innovation. The nonprofit sector has the building blocks of similar standards already available, including the Microsoft Nonprofit Common Data Model, the Open Referral Human Services Data Specification, and IRIS+ for impact measurement. What has been missing is the sector-wide commitment to adopt and enforce them.
This article makes the case for why nonprofit data standards matter, examines what already exists that organizations can adopt today, draws lessons from sectors that have succeeded in building standards infrastructure, and offers practical steps that individual organizations can take to move toward a more interoperable future, whether or not the sector as a whole moves with them.
The Scale of the Fragmentation Problem
The average nonprofit uses five or more separate systems beyond its primary CRM, and these systems almost never share data natively. A typical organization might have one platform for donor management, another for volunteer coordination, a third for program tracking, a grant management tool, and a communication platform, plus spreadsheets filling the gaps where none of the systems talk to each other. Each system uses its own data model, its own naming conventions, and its own export formats. Moving data between systems usually means manual export, reformatting in spreadsheets, and manual import, a process that introduces errors and consumes staff time that could be spent on mission delivery.
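To make the reformatting burden concrete, here is a minimal sketch of the crosswalk logic that staff end up rebuilding, by hand or in spreadsheets, for every pair of unintegrated systems. The file names, column names, and date formats below are hypothetical; the pattern is not.

```python
import csv
from datetime import datetime

# Hypothetical column names from two platforms' CSV exports. Every pair of
# unintegrated systems needs its own crosswalk like this one.
CROSSWALKS = {
    "donor_platform.csv": {
        "Constituent ID": "person_id",
        "Full Name": "name",
        "Last Gift Date": "last_activity",
    },
    "program_platform.csv": {
        "participant_number": "person_id",
        "participant_name": "name",
        "last_session": "last_activity",
    },
}

DATE_FORMATS = ["%m/%d/%Y", "%Y-%m-%d"]  # each platform exports dates differently

def parse_date(value: str) -> str:
    """Normalize inconsistent date exports to ISO 8601."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(value, fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date format: {value!r}")

def normalize(path: str) -> list[dict]:
    """Map one platform's export onto the shared schema."""
    crosswalk = CROSSWALKS[path]
    rows = []
    with open(path, newline="") as f:
        for raw in csv.DictReader(f):
            row = {standard: raw[source] for source, standard in crosswalk.items()}
            row["last_activity"] = parse_date(row["last_activity"])
            rows.append(row)
    return rows
```

Shared standards do not eliminate this logic; they mean it gets written once, by the standard's community, instead of thousands of times inside individual organizations.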
Research from Heller Consulting published in 2026 found that nonprofits using multiple unintegrated data systems spend a disproportionate share of their evaluation capacity on data cleanup rather than actual analysis. One foundation that moved to standardized data collection with shared identifiers and consistent field definitions reduced its portfolio reporting time from six weeks to two days, a more than 95% efficiency improvement driven entirely by eliminating the reformatting work that had previously consumed most of the reporting cycle. This kind of gain is not exceptional; it is the predictable result of solving a structural problem that most organizations have simply learned to live with.
The AI implications are significant and underappreciated. AI tools are only as useful as the data they can access. An AI grant writing assistant that can only read your organization's own historical applications cannot compare your approach to what has worked across hundreds of similar nonprofits. A donor retention model trained only on your CRM cannot incorporate the broader patterns across thousands of comparable organizations that would make its predictions genuinely predictive. A program evaluation tool that cannot aggregate findings across organizations serving similar populations cannot answer the sector-wide question of what interventions actually work. Every AI limitation that stems from data fragmentation is a limitation that shared standards would eliminate.
Data Cleanup Cost
Organizations report spending 40-60% of evaluation time cleaning and reformatting data rather than analyzing it
Reporting Efficiency Gain
One foundation reduced portfolio reporting from 6 weeks to 2 days after implementing shared data standards across grantees
System Fragmentation
The average nonprofit uses 5+ separate systems that do not share data natively, creating compounding integration costs
Nonprofit Data Standards That Already Exist
The nonprofit sector is not starting from zero. Several data standards initiatives have already done significant work that organizations can adopt today. The gap is not in the existence of standards but in their adoption, which remains fragmented because adoption requires organizational investment and coordination that individual organizations have limited incentive to prioritize unilaterally.
The Microsoft Nonprofit Common Data Model (CDM) is perhaps the most comprehensive existing standard. It is an open-source schema on GitHub with over 90 entity definitions covering donor commitments, designations, financial transactions, awards, disbursements, delivery frameworks, program results, outcome indicators, and benefit recipients. Critically, the CDM is not dependent on Microsoft products; it can serve as a data modeling foundation for any platform. Oracle/NetSuite and Unit4 have already adopted the CDM, which means organizations using these platforms have a pathway to CDM-aligned data without custom development work. NTEN has framed adoption of the CDM as an equity issue, arguing that shared schemas democratize access to sophisticated data infrastructure that only large organizations can currently afford to build independently.
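To make "modeling your data against the CDM" concrete, here is a hedged sketch of what a CDM-aligned donor commitment record could look like in application code. The entity name comes from the CDM's published scope, but the field names are simplified illustrations, not the authoritative schema; consult nonprofitcdm.org for the real definitions.

```python
from dataclasses import dataclass
from datetime import date
from decimal import Decimal

@dataclass
class DonorCommitment:
    """Illustrative stand-in for a CDM-style donor commitment entity.

    Field names are simplified for this sketch; see nonprofitcdm.org
    for the authoritative entity definitions.
    """
    commitment_id: str
    donor_id: str                       # links to a constituent/contact entity
    total_amount: Decimal
    currency: str                       # ISO 4217 code, e.g. "USD"
    commitment_date: date
    designation_id: str | None = None   # optional link to a designation entity

def from_crm_export(raw: dict) -> DonorCommitment:
    """Crosswalk a proprietary CRM export row onto the shared shape."""
    return DonorCommitment(
        commitment_id=raw["Pledge ID"],        # hypothetical vendor field names
        donor_id=raw["Constituent ID"],
        total_amount=Decimal(raw["Pledge Amount"]),
        currency=raw.get("Currency", "USD"),
        commitment_date=date.fromisoformat(raw["Pledge Date"]),
        designation_id=raw.get("Fund Code") or None,
    )
```

The point of the crosswalk function is that alignment can begin at the application layer, without replacing the underlying CRM.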
The Open Referral Human Services Data Specification (HSDS) is purpose-built for organizations that provide health, human, and social services. It defines a common vocabulary enabling community resource directories to communicate with each other, so that information about shelters, food programs, legal services, mental health resources, and other human services can be shared across platforms without reformatting. Version 3.0.1 is currently active and has been formally recommended by the Alliance of Information and Referral Systems and the UK government. If your organization provides direct human services and participates in community referral networks, adopting HSDS creates immediate interoperability with 211 directories and other referral platforms in your region.
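For a sense of what HSDS adoption looks like at the data level, here is an illustrative sketch of an HSDS-style service record. The overall organization-service-location shape follows the specification, but treat the exact fields shown as assumptions and verify them against the normative HSDS 3.0 schema at docs.openreferral.org.

```python
# Illustrative HSDS-style service record. The organization/service/location
# structure follows the spec; verify exact required fields against the
# HSDS 3.0 schema at docs.openreferral.org before relying on this shape.
service_record = {
    "id": "ac148810-d857-441c-9679-408f346de14b",
    "organization_id": "6f7c9a1e-4b2d-4f1a-9c3e-5d8e7a6b1c2d",
    "name": "Community Food Pantry",
    "description": "Weekly groceries for households in the service area.",
    "status": "active",
    "url": "https://example.org/food-pantry",
    "email": "pantry@example.org",
}

REQUIRED = {"id", "name", "status"}  # assumed minimal set for this sketch

def validate(record: dict) -> list[str]:
    """Return any missing required fields (empty list if none)."""
    return sorted(REQUIRED - record.keys())
```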
IRIS+, developed by the Global Impact Investing Network, provides a common language for describing goals, strategies, and social and environmental performance. It includes interoperability with third-party data platforms and aligns with the UN Sustainable Development Goal framework. For organizations that need to report impact to funders, adopting IRIS+ metrics creates compatibility with impact investors and other organizations measuring similar programs. The Common Impact Data Standard, developed by the Common Approach to Impact Measurement initiative, goes further by enabling "data capsules" to be shared platform-to-platform with minimal human effort, aligned with both IRIS+ and the International Aid Transparency Initiative (IATI) standard.
Microsoft Nonprofit Common Data Model
Open-source schema for organizational data
90+ entity definitions covering donors, programs, outcomes, and financials. Platform-independent and community-governed. Adopted by Oracle/NetSuite and Unit4.
- Best for: Organizations wanting a comprehensive internal data architecture standard
- Available at: nonprofitcdm.org and Microsoft GitHub
Open Referral HSDS
Standard for human services data exchange
Defines common vocabulary for community resources. Enables real-time sharing with 211 directories, government systems, and referral platforms. Version 3.0.1 active.
- Best for: Direct service organizations participating in community referral networks
- Available at: docs.openreferral.org
IRIS+ Impact Measurement
Standard for social and environmental performance
Common language for impact goals and performance metrics. Aligned with the UN SDGs. Interoperability with third-party data platforms and impact investors.
- Best for: Organizations reporting to impact investors or wanting comparable outcome metrics
- Managed by: Global Impact Investing Network (GIIN)
Common Impact Data Standard
For automated cross-platform data sharing
Enables "data capsules" to be shared platform-to-platform with minimal human effort. Aligned with IRIS+, IATI, and UN SDGs. Developed by Common Approach initiative.
- Best for: Organizations wanting automated funder reporting and peer benchmarking
- Available at: commonapproach.org
What Healthcare and Finance Got Right
The nonprofit sector does not need to invent standards infrastructure from scratch. Two other sectors have navigated this transition in ways that offer directly applicable lessons, one through voluntary coordination combined with regulatory mandate (healthcare), and one primarily through regulatory mandate (finance). Both paths have worked, with different timelines and trade-offs.
Healthcare's journey to interoperability began decades ago with HL7 (Health Level Seven), which established standards for exchanging clinical and administrative data between healthcare systems. The more recent and transformative development was FHIR (Fast Healthcare Interoperability Resources), which brought modern API architecture to healthcare data exchange. The US government mandated FHIR-based data sharing through the 21st Century Cures Act, creating regulatory pressure that made adoption economically unavoidable for large healthcare systems. By early 2026, TEFCA (Trusted Exchange Framework and Common Agreement), the national health data interoperability network built on FHIR, reached nearly 500 million health records exchanged across systems that were previously incompatible. HL7 launched an AI Office in 2025 specifically to apply FHIR's interoperability architecture to AI use cases, recognizing that standards-based data is the foundation that makes healthcare AI genuinely useful rather than limited to siloed applications.
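FHIR's contribution is architectural as much as semantic: every conformant server exposes every resource type at the same predictable REST paths with a standard JSON media type. A minimal sketch, using HAPI's public test sandbox (synthetic data only) as the example endpoint:

```python
import requests

# Any conformant FHIR server exposes resources at {base}/{ResourceType}/{id}.
# hapi.fhir.org/baseR4 is a public test sandbox holding synthetic data.
BASE = "https://hapi.fhir.org/baseR4"

def fetch_resource(resource_type: str, resource_id: str) -> dict:
    """Read a single FHIR resource as JSON."""
    resp = requests.get(
        f"{BASE}/{resource_type}/{resource_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# The same call shape works for Patient, Observation, Organization, and
# every other resource type. That uniformity is what standards-based
# interoperability means in concrete terms.
```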
The lessons for nonprofits from healthcare's standards journey are specific and actionable. Open, collaborative standards communities create well-designed specifications that find adoption because they serve genuine needs rather than proprietary interests. Involving practitioners, not just technologists, in standards development is essential for adoption because practitioners are the ones who know what data they actually generate and what they actually need from data. Pilot testing before full rollout prevents the specification drift that occurs when standards are written without grounding in real organizational data. And data quality is the foundation on which any AI value depends: healthcare AI experts consistently describe good data quality as a prerequisite for good AI performance, not a nice-to-have.
Finance's path through open banking offers a different but equally instructive lesson. European open banking, mandated by PSD2, standardized financial data APIs in a way that created an ecosystem where third-party services could build on shared data infrastructure. The API call volume generated by this ecosystem is projected to grow from 137 billion in 2025 to over 700 billion globally by 2029, demonstrating the innovation that standards unlock. The key insight from finance is that regulatory mandate can accelerate adoption of voluntary standards that have stalled. The nonprofit sector does not currently have a regulatory equivalent to PSD2, but foundations and institutional funders have significant leverage to create adoption incentives by requiring grantees to use standard data schemas for reporting.
Lessons from Healthcare (HL7/FHIR)
- Open standards level the playing field for resource-constrained organizations
- Practitioner involvement in standards design drives real-world adoption
- Modern API architecture is AI-ready in ways that legacy formats are not
- Data quality tagging and transparent AI output labeling are non-negotiable
- Government mandate (21st Century Cures Act) accelerated voluntary coordination that had stalled
Lessons from Finance (Open Banking)
- Standardized APIs unlock innovation ecosystems that proprietary systems cannot
- Token-based authorization with granular permission scopes protects data while enabling sharing
- Nominal compliance without true shared governance still leaves interoperability gaps
- Funder requirements can substitute for regulatory mandate in driving adoption
- Standards lower barriers for smaller players who cannot afford to build proprietary infrastructure
The AI Opportunity That Standards Unlock
The most exciting implication of nonprofit data standards is not better reporting or reduced reformatting work. It is the AI models that become possible when organizations are contributing compatible data to shared pools. The nonprofit sector collectively generates enormous amounts of data about what interventions work, which donor segments respond to which messages, how program outcomes vary by context, and what organizational characteristics predict sustainability. Almost none of this data is usable for sector-wide learning because it is trapped in incompatible formats across thousands of separate systems.
A sector-wide dataset built on shared standards would enable AI models that no individual organization could develop alone. A grant writing AI trained on thousands of successful grant applications across hundreds of nonprofit types and dozens of foundation relationships would be dramatically more useful than one trained only on your organization's history. A donor retention model trained on millions of giving relationships across the sector would identify patterns in lapsing risk that are invisible at organizational scale. A program outcomes predictor trained on standardized outcome data from thousands of similar programs would genuinely help organizations understand which of their interventions are working and why.
Candid's December 2025 partnership with Anthropic provides a concrete current example of what this looks like in practice. Candid, the merged entity of GuideStar and Foundation Center, created a Model Context Protocol (MCP) connector that pipes verified nonprofit and foundation data directly into Claude. This gives AI users real-time access to Candid's database of over 2,500 data points on nonprofits and funders, enabling questions that previously required hours of manual research to be answered in seconds. Candid has explicitly positioned itself as a data platform for AI, recognizing that its value in an AI-powered sector depends on the quality and accessibility of its structured data. This is an early but significant example of what happens when high-quality, standardized nonprofit data becomes accessible to AI systems.
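Mechanically, an MCP connector exposes named tools that an AI client invokes over JSON-RPC. The sketch below shows the protocol's generic tool-call shape; the tool name and arguments are hypothetical placeholders, not Candid's actual connector interface.

```python
import json

# Generic shape of an MCP (Model Context Protocol) tool call: JSON-RPC 2.0
# with a "tools/call" method. The tool name and arguments below are
# hypothetical placeholders, not Candid's actual connector interface.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_organizations",   # hypothetical tool name
        "arguments": {
            "query": "food security",
            "state": "CA",
        },
    },
}

print(json.dumps(request, indent=2))
```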
The corollary risk of fragmentation is what NTEN calls the "AI tax": when nonprofits use AI tools built by vendors who aggregate their clients' data to train commercial models, those organizations are contributing their proprietary data to products they then have to pay for. Meanwhile, the sector insights that data contains accrue to the vendor rather than being shared across the sector. Standards that enable data portability give organizations the ability to choose how their data is used and by whom, rather than ceding that choice to the vendor by default.
AI Models That Shared Standards Would Enable
These applications are impossible with fragmented data but become viable with sector-wide standards
- Cross-sector grant writing intelligence: AI trained on thousands of successful applications across nonprofits, foundations, and program types, identifying patterns invisible at organizational scale
- Sector-calibrated donor retention models: Predictions grounded in millions of giving relationships rather than hundreds, dramatically improving accuracy for individual organizations
- Program effectiveness benchmarking: Comparing your outcomes against standardized results from similar programs across the sector to identify what drives performance differences
- Funder alignment matching: AI that can identify funders whose stated priorities align with your specific program model, based on comprehensive grant history across the sector
- Collaborative impact analysis: Understanding how your programs interact with and complement the work of peer organizations in your community or field
The Specific Data Domains Where Standards Matter Most
While data fragmentation affects every aspect of nonprofit operations, three domains present the highest opportunity for standards-driven improvement: donor data, program outcome data, and funder and grant data. Each has distinct characteristics and distinct standards implications.
Donor data is perhaps the most immediately tractable problem. The core issue is the absence of a universal donor identity standard: the same person appears as separate, unlinked records across Salesforce NPSP, Blackbaud Raiser's Edge, DonorPerfect, Little Green Light, and dozens of other platforms. There is no shared identifier that would allow organizations to recognize that their mid-level donor who gives annually is the same person who made a major gift to a hospital foundation two years ago. This fragmentation limits the sector's understanding of philanthropic behavior and prevents portfolio-level analysis of giving patterns. Data privacy laws, enacted in at least 13 US states as of 2026 with varying requirements, add compliance complexity to any donor data standardization effort but do not eliminate the opportunity; they just require careful design of consent and governance frameworks.
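Without a shared identifier, linking donors across systems falls back on probabilistic matching. A minimal sketch of the normalization-and-match heuristics involved (production entity resolution adds address standardization, phonetic matching, and confidence scoring; the field names here are illustrative):

```python
import unicodedata

def normalize_name(name: str) -> str:
    """Lowercase, strip accents and punctuation: 'José M. Pérez' -> 'jose m perez'."""
    decomposed = unicodedata.normalize("NFKD", name)
    ascii_only = decomposed.encode("ascii", "ignore").decode()
    cleaned = "".join(c for c in ascii_only.lower() if c.isalnum() or c.isspace())
    return " ".join(cleaned.split())

def likely_same_donor(a: dict, b: dict) -> bool:
    """Heuristic match between two CRM records that lack a shared identifier."""
    email_a, email_b = a.get("email", "").lower(), b.get("email", "").lower()
    if email_a and email_a == email_b:
        return True
    return (
        normalize_name(a["name"]) == normalize_name(b["name"])
        and a.get("postal_code") is not None
        and a.get("postal_code") == b.get("postal_code")
    )
```

A shared donor identity standard would replace heuristics like these with a deterministic key, which is precisely why it is the most tractable of the three domains.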
Program outcome data is perhaps the most fragmented domain of all. There is no agreed-upon definition of a "program outcome" that applies across nonprofit types, program models, or even similar organizations working in the same field. Organizations measuring employment outcomes after job training programs use different definitions of "employment," different follow-up timeframes, and different measurement instruments. Even funders who require outcome reporting often accept whatever framework the grantee prefers, resulting in portfolio data that cannot be aggregated. Multiple frameworks compete for adoption, including IRIS+, the Common Impact Data Standard, Outcomes Star, and Social Return on Investment, without a clear sector standard emerging. The AI opportunity in this domain is enormous: sector-wide outcome data would finally allow the field to answer questions about what works that have been debated for decades without resolution.
Funder and grant data has a more developed standards ecosystem than the other two domains. Candid maintains a comprehensive database of foundation information and grant history with APIs that provide access to over 2,500 data points. The International Aid Transparency Initiative (IATI) provides a standard for publishing development aid data that many international funders use. The combination of Candid's data infrastructure and AI tools like its Anthropic MCP connector represents the sector's most advanced current example of what standardized funder data enables for nonprofits: research that previously took weeks now takes minutes.
Donor Data
No universal donor identity standard. Same person appears as separate records across all major CRM platforms. Privacy law compliance adds complexity but does not eliminate the opportunity.
Program Outcomes
Most fragmented domain. No agreed definitions of outcomes across program types. Multiple competing frameworks (IRIS+, CIDS, Outcomes Star) without a clear sector standard.
Funder & Grant Data
Most developed standards ecosystem. Candid's API + Anthropic MCP connector represents the sector's leading current example of what standardized funder data enables.
Practical Steps Organizations Can Take Now
Individual organizations do not need to wait for sector-wide standards adoption to begin positioning themselves for interoperability. Several practical steps provide immediate organizational benefit while also contributing to the broader standards ecosystem. The key insight is that organizations that build internal data infrastructure aligned with existing standards will be the first to benefit when those standards gain broader adoption, and will face far less disruption when that adoption happens.
The most important immediate action is a systematic data landscape audit. Many organizations do not have a clear picture of where their data actually lives, what fields each system captures, how those fields are defined, and whether data can be exported in accessible formats. Creating this map is the prerequisite for every other step. It typically reveals unexpected redundancies (the same information being captured in three different systems with different field names), critical gaps (important program data being tracked only in spreadsheets outside any formal system), and data portability risks (systems where data is held in proprietary formats with no export capability).
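The audit output does not need to be sophisticated. A structured inventory with one row per system is enough to surface the redundancies and portability risks described above; the sketch below writes one such inventory to CSV, with hypothetical systems and assessments standing in for your own.

```python
import csv

# One entry per system holding organizational data. System names, fields,
# and risk assessments are hypothetical examples of what an audit surfaces.
inventory = [
    {"system": "Donor CRM", "domain": "donors",
     "key_fields": "name; email; gift history",
     "export": "CSV + documented API", "portability_risk": "low"},
    {"system": "Volunteer portal", "domain": "volunteers",
     "key_fields": "name; email; hours",       # note the overlap with the CRM
     "export": "CSV only, manual", "portability_risk": "medium"},
    {"system": "Program tracker (spreadsheet)", "domain": "outcomes",
     "key_fields": "participant id; outcome measure (undefined)",
     "export": "n/a", "portability_risk": "high"},
]

with open("data_landscape_audit.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=inventory[0].keys())
    writer.writeheader()
    writer.writerows(inventory)
```

Even this toy inventory shows the typical findings: the same contact data captured twice, outcome definitions living only in a spreadsheet, and one system with no export path at all.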
For organizations evaluating or renewing CRM and program management contracts, the negotiation moment is the best opportunity to establish data portability protections. Vendors should be able to provide documented, usable API access to your organizational data. If a vendor cannot demonstrate clear API access with reasonable rate limits and field-level documentation, that represents a data sovereignty risk. Explicit data portability clauses, which specify that your organization owns its data and can export it in full and in standard formats at any time and at contract termination, should be non-negotiable requirements in any new vendor agreement.
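It is also worth exercising a vendor's export path directly rather than taking the documentation on faith. Below is a hedged smoke test, assuming a conventional paginated REST endpoint; the URL, parameters, and response shape are hypothetical and should be adapted to the vendor's documented API.

```python
import requests

# Hypothetical paginated REST export; the URL, parameters, and response
# shape vary by vendor. The point is to verify you can walk every record.
BASE = "https://api.example-vendor.com/v1/constituents"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder credential

def export_all() -> list[dict]:
    records, page = [], 1
    while True:
        resp = requests.get(BASE, headers=HEADERS,
                            params={"page": page, "per_page": 200}, timeout=30)
        resp.raise_for_status()
        batch = resp.json().get("results", [])
        if not batch:
            break
        records.extend(batch)
        page += 1
    return records

records = export_all()
print(f"Exported {len(records)} records")  # compare against the vendor UI's count
```

If a loop like this cannot reach every record, or the count disagrees with what the vendor's interface reports, that is the data sovereignty risk made visible before you sign.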
The NTEN Tech Accelerate assessment tool provides a free structured evaluation of your organization's technology practices, including data governance. Using it as an annual baseline measurement creates a basis for tracking improvement over time and for benchmarking against peer organizations. For organizations that want to take a more active role in sector standards development, NTEN's annual Nonprofit Technology Conference (NTC) and the West Coast Nonprofit Data Conference are key gathering points for the community of practice working on these issues.
A Practical Data Standards Roadmap
Steps that provide immediate organizational benefit and position you for interoperability
- Conduct a data landscape audit: Map every system holding organizational data, document field definitions, and identify export capabilities. This is the prerequisite for everything else.
- Model your internal data against the Nonprofit CDM: Even without changing platforms, aligning your field naming conventions and data structure to the CDM creates future portability
- Adopt IRIS+ for outcome measurement: Using IRIS+ metrics creates compatibility with impact investors, peer organizations, and sector-wide benchmarking tools
- Require API access in vendor contracts: Any vendor without documented, usable API access to your own data represents a data sovereignty risk that should be resolved before renewal
- Negotiate explicit data portability clauses: Your data should be yours. Confirm in writing that you can export everything in standard formats at any time, including at contract termination
- Pilot shared measurement with peer organizations: If you work in the same program area as peer nonprofits, coordinate on a shared outcome framework before your next AI adoption project
- Connect to Candid's API: Access Candid's 2,500+ nonprofit data points for funder research and benchmarking, and use the Anthropic MCP connector if you use Claude
- Run the NTEN Tech Accelerate assessment annually: Free structured evaluation that creates a baseline and enables progress tracking over time
The Adoption Challenge and How to Address It
The nonprofit sector's standards challenge is not primarily a technical problem. The standards exist. The business case is clear. The barrier is coordination: each organization has limited incentive to invest in standards adoption unilaterally, because most of the value from standards comes from network effects that only materialize when many organizations participate. This is the classic collective action problem, and the nonprofit sector has historically struggled to solve it.
The most powerful levers for changing this dynamic lie with funders and intermediaries rather than individual organizations. When funders require grantees to report outcomes using a specified standard framework, they create an adoption incentive that individual organizations would not otherwise have. When major sector intermediaries like Salesforce.org, Blackbaud, or Microsoft adopt and promote specific standards in their platforms, they reduce the implementation cost for the organizations that use those platforms. When peer learning networks make standards adoption part of their shared infrastructure, they create communities of practice that sustain adoption over time.
Individual organizations that adopt standards early gain competitive advantages in the funding landscape as these dynamics develop. Foundations that are building toward data-driven grantmaking and AI-assisted due diligence will increasingly favor organizations whose data can be aggregated and analyzed without custom formatting work. Organizations that can demonstrate interoperability with sector data infrastructure will have stronger grant applications, clearer impact narratives, and faster reporting cycles than those still operating entirely in proprietary data silos. The case for early adoption is not altruism; it is strategic positioning for a sector that is moving, however slowly, toward data as infrastructure.
For more on the technology infrastructure decisions that underpin AI effectiveness, see our articles on why nonprofit AI tools don't talk to each other, data governance policies for AI, and the clean data imperative for nonprofit AI.
From Fragmentation to Infrastructure
The case for nonprofit data standards is ultimately a case for treating data as sector infrastructure rather than organizational proprietary information. When healthcare built FHIR and finance built open banking standards, they did not eliminate competition or organizational autonomy. They built shared infrastructure on top of which individual organizations could innovate more effectively and more efficiently than they could when everyone was rebuilding the same foundations from scratch.
The nonprofit sector has an unusual opportunity right now, because the AI transition is creating new urgency around data quality and interoperability that was not present in previous technology generations. The organizations and funders that recognize this moment and invest in standards adoption, even when the immediate return is not obvious, will be the ones best positioned as AI tools become more central to how the sector operates. The standards exist. The AI tools that would benefit from them exist. What remains is the sector-wide commitment to connect the two.
The path forward does not require waiting for sector-wide consensus. Individual organizations that build on the Nonprofit CDM, adopt IRIS+ metrics, require API access from vendors, and participate in peer learning networks around data standards are advancing the cause while also advancing their own organizational effectiveness. In a field where resources are always constrained, that combination of individual benefit and collective impact is as good an argument for action as any.
Build Data Infrastructure That Scales with AI
One Hundred Nights helps nonprofits assess their data landscape, align with sector standards, and build the infrastructure that makes AI genuinely useful rather than limited by fragmentation. Let's build something that lasts.
