UN AI Resolution: What "Safe, Secure, and Trustworthy AI" Means for Nonprofits
The United Nations General Assembly's unanimous 2024 AI resolution represents the world's first global consensus on how artificial intelligence should be governed. For mission-driven nonprofits, understanding what this landmark agreement actually means in practice is essential for aligning your AI strategy with emerging global standards.

On March 21, 2024, the United Nations General Assembly made history by adopting its first-ever standalone resolution on artificial intelligence. Titled "Seizing the Opportunities of Safe, Secure, and Trustworthy Artificial Intelligence Systems for Sustainable Development," the resolution was co-sponsored by 123 member states and adopted by consensus among all 193 UN members, a remarkable degree of international agreement on a technology that had generated fierce geopolitical competition.
For most nonprofits, a UN General Assembly resolution might seem like distant geopolitical news with little practical relevance to daily program delivery and fundraising. That perception would be a mistake. This resolution represents the first formal global consensus on what responsible AI development and deployment should look like, and it specifically calls on civil society organizations to "develop and support regulatory and governance approaches and frameworks related to safe, secure and trustworthy use of AI." Nonprofits are not merely observers in this global governance conversation. They are active participants with a recognized role in shaping how AI is used in their communities.
The resolution also carries practical implications that extend beyond governance advocacy. Its principles align closely with what thoughtful AI implementation looks like for mission-driven organizations: human-centered approaches, attention to equity and access, protection of vulnerable populations, and transparency about how technology decisions are made. For nonprofits already committed to ethical AI use, the resolution provides valuable external validation and a useful framework for explaining your AI principles to funders, board members, and the communities you serve.
This article unpacks what the resolution actually says, what its three core principles (safety, security, and trustworthiness) mean in the context of nonprofit AI use, and how your organization can practically align its AI strategy with these emerging global standards. Understanding this framework is becoming increasingly important as funders begin asking more sophisticated questions about AI governance, and as the regulatory landscape continues to evolve in ways that reflect the resolution's core principles.
What the Resolution Actually Says
Before exploring practical implications, it is worth understanding what the resolution actually contains, and what it does not. UN General Assembly resolutions are non-binding declarations of principle. They do not create enforceable legal obligations the way treaties do. However, they carry significant weight as expressions of global consensus and frequently shape subsequent regulatory frameworks, funding priorities, and international norms.
The 2024 AI resolution establishes several foundational positions. It recognizes that AI systems have "immense potential to accelerate and enable progress towards achieving" the UN's 17 Sustainable Development Goals, while simultaneously acknowledging that "the improper or malicious design, development, deployment, and use" of AI systems "may pose risks to the enjoyment of human rights." This dual acknowledgment of opportunity alongside risk reflects the balanced approach that characterizes thoughtful AI governance.
The resolution explicitly calls on member states to promote AI that is human-centric, reliable, explainable, ethical, inclusive, privacy-preserving, and responsible. These are not vague aspirations. Each term has specific meaning in the context of AI system design and deployment, and together they form a comprehensive framework for evaluating whether a particular AI application is consistent with the resolution's principles.
Importantly, the resolution specifically addresses the role of civil society. It urges "all States, the private sector, civil society, research organizations and the media, to develop and support regulatory and governance approaches and frameworks related to safe, secure and trustworthy use of AI." This language positions nonprofits as active participants in governance, not just recipients of policy decisions made by governments and corporations.
Understanding the Three Pillars: Safety, Security, and Trustworthiness
The resolution's three core principles are often grouped together, but they represent distinct concepts with different practical implications for nonprofit AI use. Understanding the difference helps organizations implement each principle appropriately.
Safety: Preventing Harm Across the AI Lifecycle
Ensuring AI systems operate as intended and do not cause unintended harm
AI safety refers to the technical and organizational measures that ensure AI systems operate as intended and do not cause unintended harm. In the context of the UN resolution, safety includes both technical safety, ensuring systems function correctly, and broader safety considerations around the social impacts of AI deployment.
For nonprofits, safety concerns manifest most directly in applications involving vulnerable populations. A social services organization using AI to triage client needs must ensure the system does not systematically underserve certain groups. A mental health nonprofit using AI in client communications must guard against the technology inadvertently causing harm in sensitive situations. A youth-serving organization using AI for any purpose must apply heightened care given the additional protections owed to minors.
Practical safety measures include:
- Human oversight of AI decisions, especially those affecting individual client welfare
- Regular testing of AI outputs for bias and systematic errors across demographic groups (a minimal sketch of such a check follows this list)
- Clear protocols for when AI recommendations should be overridden by human judgment
- Incident reporting systems that capture and respond to AI-related harms
- Periodic safety audits of AI systems, especially those with significant impact on client outcomes
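To make the second item above concrete, here is a minimal sketch, in Python, of how a staff member might compare an AI triage tool's recommendation rates across demographic groups. The record structure, the field names, and the four-fifths threshold are illustrative assumptions, not requirements drawn from the resolution; real testing would use your own data and a disparity standard appropriate to your context.

```python
from collections import defaultdict

# Illustrative records: each pairs a client's demographic group with
# whether the AI triage tool recommended them for priority service.
records = [
    {"group": "A", "recommended": True},
    {"group": "A", "recommended": True},
    {"group": "A", "recommended": False},
    {"group": "B", "recommended": True},
    {"group": "B", "recommended": False},
    {"group": "B", "recommended": False},
]

def recommendation_rates(records):
    """Share of clients in each group that the tool recommended."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        positives[r["group"]] += int(r["recommended"])
    return {g: positives[g] / totals[g] for g in totals}

def flag_disparities(rates, threshold=0.8):
    """Flag groups whose rate falls below `threshold` times the highest
    group's rate (a rough 'four-fifths' heuristic borrowed from US
    employment practice, used here only as a starting point)."""
    best = max(rates.values())
    return [g for g, rate in rates.items() if rate < threshold * best]

rates = recommendation_rates(records)
print(rates)                    # roughly {'A': 0.67, 'B': 0.33}
print(flag_disparities(rates))  # groups warranting human review: ['B']
```

Even a simple comparison like this, run periodically, gives human reviewers a concrete signal about where the tool may be underserving a group.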
Security: Protecting Data and Systems from Misuse
Ensuring AI systems and the data they use are protected from unauthorized access and misuse
Security in the context of AI governance addresses the protection of AI systems and the sensitive data they process from unauthorized access, misuse, and malicious interference. The UN resolution's security emphasis reflects growing concerns about AI systems being exploited by bad actors, used to generate misinformation, or vulnerable to manipulation that could undermine their intended purpose.
For nonprofits, AI security concerns are closely tied to data security more broadly. Organizations handling sensitive information about clients, donors, or beneficiaries have existing obligations to protect that data. AI systems that process such information extend those obligations to new contexts and create new vectors of potential breach. An AI tool that ingests client case notes, donor financial information, or health records requires the same rigorous security practices as the systems that originally hold that information.
The resolution's security principle also extends to protecting AI systems from generating harmful content or being weaponized against the people they are supposed to serve. Nonprofits should understand what safeguards their AI vendors have built against misuse, including protections against prompt injection attacks, unauthorized data extraction, and the generation of harmful or misleading content.
Practical security measures include:
- Review AI vendor data handling practices and storage locations
- Implement access controls limiting who can interact with sensitive AI systems
- Include AI tools in your organization's cybersecurity risk assessment
- Maintain clear policies about what sensitive data may and may not be shared with AI tools (a rough screening sketch follows this list)
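As a concrete illustration of the last item above, here is a minimal sketch of a pre-submission screen that checks text for a few obviously sensitive patterns before it goes to an external AI tool. The patterns and the blocking behavior are illustrative assumptions; pattern matching of this kind catches only the most obvious leaks, and a real control would be grounded in your own data classification policy.

```python
import re

# Illustrative patterns for categories of data your policy forbids
# sending to external AI tools. A real policy would cover many more
# categories (case notes, health details, immigration status, etc.).
FORBIDDEN_PATTERNS = {
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone number": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def check_before_sending(text):
    """Return the names of forbidden data types found in `text`.
    An empty list means the text passed this (very rough) screen."""
    return [name for name, pattern in FORBIDDEN_PATTERNS.items()
            if pattern.search(text)]

draft = "Client Jane Doe (SSN 123-45-6789) asked about housing support."
violations = check_before_sending(draft)
if violations:
    print("Do not send. Remove:", ", ".join(violations))
else:
    print("No forbidden patterns found; proceed with human judgment.")
```

A screen like this supplements, rather than replaces, staff training: the policy lives in people's habits, and the code simply catches the most mechanical mistakes.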
Trustworthiness: Transparency, Explainability, and Accountability
Ensuring AI systems earn and maintain the trust of those they affect
Trustworthiness is arguably the most complex of the three principles because it encompasses multiple dimensions: transparency about how AI systems work, explainability of their decisions, accountability for their outcomes, and the ongoing maintenance of stakeholder confidence. An AI system can be technically safe and secure while still being untrustworthy if it operates as a black box whose decisions cannot be questioned or understood.
For nonprofits, trustworthiness is particularly important because of the mission-based relationships at the heart of their work. Donors trust that their contributions are being used effectively. Clients trust that services are being delivered fairly. Staff trust that organizational systems support rather than undermine their work. AI systems that affect these stakeholders must be able to withstand scrutiny and earn ongoing trust through demonstrable accountability.
The resolution's emphasis on explainability is especially relevant for nonprofits making consequential decisions about service delivery, resource allocation, or donor engagement. When an AI system influences a significant decision, the people affected have a legitimate interest in understanding why that decision was made and how they might seek recourse if they believe it was wrong.
Practical trustworthiness measures include:
- Maintain an internal register of AI tools in use and their purposes (a minimal register sketch follows this list)
- Be transparent with stakeholders about how AI influences organizational decisions
- Establish appeal or override processes for AI-influenced decisions
- Assign clear organizational responsibility for AI governance and accountability
- Report honestly to funders and boards about AI use, including challenges and limitations
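The first item above can start very simply. Below is a minimal sketch of what an internal AI tool register might look like as structured data; every field name and the sample entry are illustrative assumptions to adapt to your own governance policy.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIToolRecord:
    """One entry in an internal register of AI tools in use."""
    name: str
    purpose: str
    owner: str                    # person accountable for this tool
    data_shared: list             # categories of data the tool sees
    human_oversight: str          # how outputs are reviewed
    last_reviewed: date
    known_limitations: list = field(default_factory=list)

register = [
    AIToolRecord(
        name="DraftAssist (hypothetical)",
        purpose="First drafts of donor thank-you letters",
        owner="Development Director",
        data_shared=["donor first names", "gift amounts"],
        human_oversight="Every draft is edited and approved by staff",
        last_reviewed=date(2024, 9, 1),
        known_limitations=["Occasional factual errors in program details"],
    ),
]

# A register like this turns funder questions ("what AI do you use,
# and who is accountable for it?") into a lookup rather than a scramble.
for tool in register:
    print(f"{tool.name}: {tool.purpose} (owner: {tool.owner})")
```

Whether the register lives in code, a spreadsheet, or a shared document matters far less than keeping it current and assigning someone to own it.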
The Equity and Access Dimension
One of the most significant aspects of the UN AI Resolution for nonprofits is its explicit concern with AI equity and access. The resolution recognizes that AI's benefits are not being distributed evenly, and that without intentional governance intervention, AI development risks deepening existing inequalities rather than alleviating them. The General Assembly expressed particular concern about the digital divide between developed and developing nations, and urged member states and stakeholders to "close the digital gap within and between nations and use this technology to advance shared priorities around sustainable development."
For nonprofits working at the intersection of technology and social justice, this equity focus resonates deeply. Many organizations in the sector serve communities that are already disadvantaged by structural inequalities, and the prospect of AI amplifying those disadvantages through biased systems or unequal access is a legitimate and serious concern. The resolution's emphasis on equity provides an important framework for evaluating not just whether your organization uses AI responsibly, but whether the AI tools you use and advocate for serve equity goals or undermine them.
The practical implications extend in multiple directions. Nonprofits should scrutinize whether AI tools they adopt perpetuate biases that disadvantage the communities they serve. They should consider whether AI-driven efficiency gains might inadvertently reduce access to services for people who need more human interaction, not less. And they should think about whether their own AI advocacy and practice contributes to narrowing or widening the technology access gap in their communities.
Organizations working with rural communities, people experiencing poverty, or communities with limited digital access face particular tensions. The most sophisticated AI tools often require reliable internet connectivity, device access, and digital literacy that many community members may lack. The resolution's equity framework encourages nonprofits to think carefully about whether they are implementing AI in ways that serve their full constituency or only the portions of it that happen to be most digitally connected. For deeper exploration of this challenge, see our article on serving communities on both sides of the digital divide.
AI Through a Human Rights Lens
The UN resolution is fundamentally a human rights document applied to artificial intelligence. It grounds its principles in the existing body of international human rights law, arguing that AI development and deployment must be fully consistent with established rights frameworks. This is not a new set of rights invented for AI, but an application of well-established principles to a new technological context.
Non-Discrimination
The resolution calls for protecting individuals "from all forms of discrimination, bias, misuse, or other harm from AI systems." For nonprofits, this means actively evaluating whether AI tools treat all people equitably, regardless of race, gender, disability, age, or other protected characteristics.
Nonprofits serving diverse communities should understand how AI tools were trained and what populations the training data represents. Systems trained predominantly on data from one demographic group may perform poorly or unfairly when applied to others.
Privacy Protection
The resolution explicitly calls for AI development in a "privacy-preserving manner." Nonprofits handling sensitive information about clients, particularly in health, mental health, immigration, or domestic violence contexts, must ensure AI tools do not compromise the confidentiality that vulnerable people depend on.
Privacy considerations for AI extend beyond legal compliance to ethical obligation. Even when sharing certain information with AI tools is technically legal, it may not be consistent with the expectations of trust that clients hold.
Human Autonomy
Human rights frameworks emphasize individual autonomy, the right to make meaningful decisions about one's own life. AI systems that make significant decisions about people's access to services, opportunities, or support without adequate human oversight or appeal mechanisms raise serious autonomy concerns.
Nonprofits should ensure that AI-influenced decisions in client services maintain meaningful pathways for human review, especially when AI recommendations affect access to critical services.
Right to Remedy
Human rights law recognizes that when violations occur, affected parties must have access to effective remedies. Applied to AI, this means individuals who are harmed by AI-influenced decisions must have a meaningful way to seek correction or accountability.
For nonprofits, this principle reinforces the importance of transparency about when and how AI is used, as well as clear grievance procedures that clients can access if they believe an AI-influenced decision was unjust.
Connecting AI to the Sustainable Development Goals
The resolution's full title references "Sustainable Development," and this connection is not incidental. The General Assembly explicitly recognized AI's potential to accelerate progress toward the 17 Sustainable Development Goals that the global community has committed to achieving. These goals, which address poverty, education, health, climate change, inequality, and more, represent precisely the mission domains where nonprofits work.
The connection between AI and the SDGs creates a powerful frame for nonprofit AI advocacy. When a food security organization uses AI to optimize food bank logistics, it is potentially contributing to SDG 2 (Zero Hunger). When an education nonprofit uses AI to personalize learning for struggling students, it connects to SDG 4 (Quality Education). When a health nonprofit uses AI to improve access to care in underserved communities, it advances SDG 3 (Good Health and Well-Being).
However, the resolution's framework also cautions that AI can undermine SDG progress if not deployed responsibly. AI systems that perpetuate discrimination can worsen inequality (SDG 10). AI tools that concentrate benefits among already-privileged populations can increase rather than decrease disparities. The resolution's emphasis on equity and access reflects an understanding that good intentions are not sufficient: the design and deployment of AI systems determine whether they advance or obstruct sustainable development.
For nonprofits seeking to make the case for AI investment to funders and boards, the SDG framework provides useful language for connecting technology decisions to mission outcomes. Rather than presenting AI as merely an efficiency tool, organizations can frame thoughtful AI adoption as a strategy for advancing the specific Sustainable Development Goals most relevant to their mission. This framing resonates with an increasing number of institutional funders who use SDG alignment as an evaluation criterion.
Practical Steps for Aligning with the Resolution's Principles
Understanding the resolution's principles is valuable; translating them into organizational practice is essential. The following steps help nonprofits move from awareness to meaningful alignment with the global AI governance framework the resolution represents.
1. Develop or Update Your AI Governance Policy
The resolution calls specifically on civil society to develop governance frameworks for AI. For nonprofits, this means creating organizational policies that address how AI tools are selected, deployed, monitored, and reviewed. An effective AI policy should address data privacy, acceptable use, human oversight requirements, and procedures for addressing AI-related harms.
Organizations that have not yet developed an AI policy now have additional impetus to do so. The resolution's global consensus on AI governance principles makes it easier to explain to skeptical board members or staff why formal AI governance is necessary, not just for your organization's protection, but as part of a broader commitment to responsible AI development. For guidance on creating a policy, see our article on learning from leading nonprofit AI policies.
2. Conduct an AI Equity and Bias Assessment
The resolution's emphasis on non-discrimination and equitable access provides a mandate for nonprofits to assess whether their current and planned AI tools serve all members of their community equitably. This assessment should examine both the tools themselves, asking whether their training data and design represent the populations you serve, and your deployment practices, asking whether implementation creates access barriers for some groups.
This kind of equity audit need not be technically complex. It can begin with questions your staff can answer: Which communities benefit most from this AI tool? Which might be disadvantaged by it? What assumptions are built into the tool's design, and whose experience do those assumptions reflect? Honest answers to these questions reveal where additional attention is needed.
3. Engage in AI Governance Advocacy
The resolution explicitly calls on civil society to participate in shaping AI governance frameworks. Nonprofits have both the opportunity and the responsibility to bring community perspectives into AI policy conversations. This might involve participating in public comment periods on AI regulations, joining coalitions advocating for equitable AI development, or contributing to sector-specific guidance on responsible AI use.
Nonprofit leaders are particularly well-positioned to bring human-centered perspectives to AI governance conversations that are often dominated by technology companies and government agencies. The communities nonprofits serve frequently bear the greatest risks from poorly governed AI systems, making nonprofit advocacy on these issues a natural extension of mission-driven work.
4. Build Internal AI Literacy and Ethics Capacity
Meaningful AI governance requires that the people making decisions about AI in your organization have sufficient understanding of both AI capabilities and AI ethics. This is not about technical training for all staff, but about ensuring that leaders, program managers, and anyone involved in AI tool selection or oversight can engage substantively with governance questions.
Investing in AI ethics education is increasingly a prerequisite for thoughtful AI governance. Organizations might consider establishing an AI ethics committee, designating an AI governance lead, or partnering with universities or other organizations with AI ethics expertise. The resolution's framework provides a useful curriculum foundation for these educational efforts.
5. Practice Transparent Stakeholder Communication
The resolution's emphasis on trustworthiness translates practically into a commitment to transparency with the people most affected by your AI decisions. This means being honest with clients about when AI influences services they receive. It means communicating openly with donors about how AI is used in their engagement. And it means reporting candidly to funders and boards about both the benefits and challenges of AI implementation.
Organizations that practice proactive transparency about their AI use are better positioned to earn and maintain stakeholder trust as AI becomes more embedded in organizational operations. This is not just good ethics. It is good organizational strategy in an environment where AI accountability is becoming an increasingly important criterion for stakeholder confidence. For guidance on donor communication specifically, see our article on transparent AI decision-making for nonprofits.
Situating the Resolution in the Broader Regulatory Landscape
The UN AI Resolution did not emerge in isolation. It was adopted just over a week after the European Parliament approved the EU AI Act, the world's first comprehensive legal framework for artificial intelligence. The timing was not coincidental. Both developments reflect a global convergence around the need for formal AI governance, and the principles they establish show significant alignment, even though one is legally binding and the other is not.
The EU AI Act takes a risk-based approach to regulation, applying the most stringent requirements to "high-risk" AI applications including those used in healthcare, employment, education, and certain social services. Many nonprofit use cases fall into or near these high-risk categories, which means EU Act compliance is a live concern for US nonprofits that operate internationally or work with European funders or partners. For detailed analysis of what the EU AI Act means specifically for US-based nonprofits, see our article on EU AI Act implications for US nonprofits.
In the United States, the regulatory landscape remains more fragmented, with sector-specific regulations, state-level initiatives, and federal guidance documents rather than comprehensive AI legislation. However, the UN resolution's principles are increasingly influencing US regulatory thinking, and many organizations believe comprehensive federal AI legislation is a matter of when rather than if. Nonprofits that develop strong internal AI governance now will be better positioned to adapt to formal regulatory requirements as they emerge.
For nonprofits with global operations, the convergence of international AI governance frameworks (the UN resolution, the EU AI Act, and various national AI strategies) creates a complex compliance landscape. Understanding the UN resolution's principles provides a useful common denominator that applies across jurisdictions and can anchor an international AI governance approach even as specific legal requirements vary by country.
What Sophisticated Funders Are Asking About AI Governance
The UN resolution represents an externally validated framework that funders are beginning to apply to their nonprofit grantees. As AI becomes more prevalent in nonprofit operations, leading foundations and institutional donors are asking increasingly pointed questions about how organizations govern their AI use. Understanding the resolution's principles helps nonprofits answer those questions credibly.
Questions About Governance Policies
- Does your organization have a formal AI policy?
- Who is accountable for AI governance in your organization?
- How does your board oversee AI use?
- How do you evaluate AI vendors for responsible practices?
Questions About Risk and Equity
- How do you assess whether AI tools serve all populations you work with equitably?
- What happens when an AI system makes a mistake affecting a client?
- How do you protect the privacy of the people you serve?
- How do you maintain human oversight of AI-influenced decisions?
Questions About Impact and Transparency
- How does AI contribute to your mission outcomes?
- How do you communicate AI use to the people you serve?
- How do you handle situations where AI has generated unexpected or harmful outcomes?
- Are you participating in sector-wide AI governance conversations?
Questions About Global Compliance
- How does your AI governance align with international standards?
- Are you tracking developments in AI regulation that might affect your operations?
- How do you ensure compliance with EU AI Act requirements for international work?
- How are you preparing for potential future US federal AI regulations?
The Resolution as a Governance North Star
The UN AI Resolution is best understood not as a compliance checklist but as a governance north star, a set of principles that point toward what responsible AI use should look like for any organization, including nonprofits. Its three pillars of safety, security, and trustworthiness, combined with its emphasis on human rights, equity, and sustainable development, provide a comprehensive framework for evaluating AI decisions across every dimension of organizational life.
For nonprofits already committed to ethical AI use, the resolution validates and extends principles you may already be applying. It provides external authority that can help you explain your AI governance commitments to stakeholders who might not otherwise understand their importance. And it situates your organization within a global community of practice that is working collectively to ensure AI serves humanity rather than exploiting it.
For organizations just beginning to think seriously about AI governance, the resolution offers an accessible starting point. Rather than navigating a fragmented landscape of sector-specific regulations and vendor claims, you can begin by asking a simple question: do our AI practices reflect the principles of safety, security, and trustworthiness that the global community has unanimously endorsed? Honest engagement with that question points toward the governance improvements most worth pursuing.
The organizations that will navigate the AI governance landscape most successfully will be those that treat ethics and responsibility not as compliance burdens but as natural expressions of their mission values. For nonprofits, that alignment between AI governance and mission is not difficult to achieve. It is, in fact, a natural extension of the commitment to human dignity and community wellbeing that defines the social sector at its best. For further reading on AI governance for nonprofits, explore our articles on adaptive AI governance frameworks and closing the nonprofit AI governance gap.
Ready to Build a Stronger AI Governance Framework?
Our team helps nonprofits develop practical AI governance policies, conduct equity assessments, and build the organizational capacity to use AI responsibly and effectively.
