The DOL's Five Foundational AI Literacy Areas: A Nonprofit Training Curriculum Walkthrough
In February 2026, the U.S. Department of Labor released a national AI literacy framework designed to guide workforce training across every sector and job type. For nonprofits wrestling with how to build real AI capability across their teams, this framework offers something rare: a federal blueprint with practical structure, publicly available free training resources, and access to funding through existing WIOA programs.

On February 13, 2026, the U.S. Department of Labor's Employment and Training Administration issued Training and Employment Notice No. 07-25 (TEN 07-25), distributing an AI literacy framework to every state workforce board, every American Job Center, and every community college and tribal college in the country. The framework defines AI literacy as "a foundational set of competencies that enable individuals to use and evaluate AI technologies responsibly," and it treats that definition as applying universally, not just to technical staff or technology-focused organizations, but to every worker in every industry.
For nonprofits, this document matters for several reasons that go beyond its policy significance. It provides a structured vocabulary for talking about AI training across different staff roles and skill levels. It is explicitly designed for contextual embedding, meaning organizations are expected to adapt it to their specific workflows rather than treat it as a generic corporate curriculum. It opens access to WIOA funding for qualifying training programs. And it names seven delivery principles that directly address the reasons most organizational AI training fails to produce lasting change.
This article walks through the framework in full: the five foundational content areas, the seven delivery principles, what the framework says about different staff roles, and how nonprofit leaders can use it to design a training curriculum that actually builds durable AI capability across their organizations. It also addresses the funding question that stops many nonprofits before they start: where does the money for AI training come from, and how does a resource-constrained organization access it?
This connects directly to broader capacity-building questions that nonprofits face around identifying and developing AI champions, closing the nonprofit AI training gap, and building the organizational AI learning culture that sustains long-term capability development.
The Five Foundational Content Areas
The framework organizes AI literacy into five distinct content areas. The DOL is explicit that these are not sequential stages; organizations should address all five simultaneously as part of any complete training effort, with the depth of each area varying by staff role. Every person in the organization, from the executive director to frontline case workers to administrative staff, is expected to reach foundational competency across all five.
Content Area 1: Understanding AI Principles
What AI is, how it works, and what it cannot do
The first content area builds foundational conceptual understanding: what AI is, how it is designed by humans, how it can be overseen, and how to accurately understand both its capabilities and its limitations. The framework explicitly frames AI as a "pattern engine, not a decision-maker," with probabilistic outputs and hallucination risks. This framing matters for nonprofits because it directly counters two common failure modes: either treating AI as infallible (leading to uncritical adoption of AI outputs) or treating it as unpredictably dangerous (leading to blanket avoidance that leaves real productivity opportunities on the table).
For nonprofits, building genuine understanding of AI principles requires that training goes beyond definitions. Staff need to understand why AI generates hallucinations, what kinds of tasks AI is structurally better and worse at, and how the organizational context in which they work (the populations they serve, the data they handle, the decisions they make) affects what AI tools are appropriate for which purposes. This is not a one-hour introduction; it is the conceptual foundation that makes everything else in the curriculum make sense.
Nonprofit application: Connect this to real examples from the organization's work. A fundraiser understanding why AI donor scoring models can be biased, or a case manager understanding why an AI summary of case notes may miss crucial context, builds the critical engagement this area requires.
Content Area 2: Exploring AI Uses
Matching AI tools to specific roles and workflows
The second content area moves from conceptual understanding to practical relevance: learning to identify which AI tools are applicable to a worker's specific role and industry. This area covers AI applications in productivity support, information assistance, creative work, task automation, and decision support. The goal is for staff to see AI as complementary to human expertise and to build the judgment needed to distinguish tasks where AI augmentation is genuinely valuable from tasks where it is inappropriate, inefficient, or risky.
For nonprofits, this area requires real context-specificity. Generic corporate examples of AI uses (AI for sales prospecting, AI for marketing automation) land differently than examples drawn from grant writing, volunteer coordination, case documentation, program evaluation, or donor communications. The framework's design intent, confirmed in Principle 2 on contextual embedding, is that organizations should populate this content area with examples directly from their own work, not from generic training materials.
Nonprofit application: Before training begins, collect examples of how AI is already being used across the organization, both authorized and unauthorized. These become the most relevant teaching material for this content area.
Content Area 3: Directing AI Effectively
Prompting, iteration, and getting useful results
The third content area is the most practical: learning to create inputs that produce useful results. This includes crafting clear instructions, providing necessary context, supplying relevant data, and refining outputs through iteration. The DOL framework explicitly names prompting as a baseline job competency in the modern workforce, a signal that this skill is no longer optional for any professional role, including roles at nonprofits that have traditionally not required technical skills.
Effective prompting in a nonprofit context is not the same as effective prompting in a generic corporate context. A development officer needs to know how to prompt effectively for grant research, donor communication drafts, and case for support narratives. A program manager needs to know how to prompt effectively for data analysis, logic model development, and report generation. Training that addresses prompting in the abstract, without connecting it to the specific tasks staff actually perform, produces limited durable skill.
The iterative dimension of this content area is especially important. Many nonprofit staff who have tried AI tools and found them disappointing encountered that disappointment because they expected high-quality output from a first prompt, and when that output was mediocre, they concluded the tool was not useful. Training needs to normalize the iterative process: first prompts are starting points, not final products, and skill in refining outputs is at least as valuable as skill in crafting initial prompts.
Nonprofit application: Build a shared prompt library around specific nonprofit workflows. A collection of tested prompts for grant sections, donor acknowledgment letters, board reports, and program narratives is a practical organizational asset that also teaches prompting by example.
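A shared prompt library can start as something very simple. The sketch below, in Python, shows one minimal way to store tested templates keyed by workflow and fill them in consistently; the workflow names, templates, and field names are illustrative assumptions, not part of the DOL framework or any specific tool's API.

```python
# Minimal sketch of a shared prompt library: tested templates keyed by
# workflow, with named placeholders filled in at use time. All names and
# template text here are illustrative examples.

PROMPT_LIBRARY = {
    "grant_needs_statement": (
        "You are drafting the needs statement for a grant proposal. "
        "Organization: {org}. Program: {program}. Using only the facts "
        "provided below, draft a 200-word needs statement in a formal, "
        "evidence-based tone.\n\nFacts:\n{facts}"
    ),
    "donor_acknowledgment": (
        "Draft a warm, 150-word acknowledgment letter from {org} to a donor "
        "named {donor} who gave {amount} to support {program}. "
        "Do not invent any program outcomes."
    ),
}

def build_prompt(key: str, **fields: str) -> str:
    """Fill a library template; fail loudly if a required field is missing."""
    template = PROMPT_LIBRARY[key]
    try:
        return template.format(**fields)
    except KeyError as missing:
        raise ValueError(f"Prompt '{key}' needs field {missing}") from None
```

Keeping templates in one shared place, with required fields that fail loudly when omitted, is itself a teaching device: staff see what context a good prompt supplies before they write their own.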
Content Area 4: Evaluating AI Outputs
Accuracy checking, completeness, and human judgment
The fourth content area addresses the quality control function that sits between AI output and organizational action: learning to assess AI-generated content for accuracy, completeness, and appropriateness before acting on it. This includes accuracy verification (checking factual claims against authoritative sources), completeness checking (identifying what the AI missed or underrepresented), error detection (recognizing flawed reasoning, logical gaps, or inappropriate framing), and alignment verification (confirming that the output actually serves the organization's stated intent).
The framework's emphasis on this area reflects a reality that many organizations underestimate: the quality of AI output is only as valuable as the quality of human review. An organization where staff routinely submit AI-generated grant reports without verifying the accuracy of program statistics, or where donor communications go out with AI-generated personalization that the staff member never reviewed for tone and accuracy, is not using AI responsibly. It is creating liability while appearing productive.
For nonprofits, training in this area needs to address the specific failure modes most likely to cause harm in the sector: hallucinated statistics in funder reports, incorrect beneficiary data in case documentation, tone mismatches in donor communications, and outdated policy references in advocacy materials. Staff who understand what to look for, and who have practiced finding it in training exercises, provide a meaningfully different quality of review than staff who know they are "supposed to" check AI outputs but have never been taught what to check for.
Nonprofit application: Use real examples of AI errors relevant to the organization's work in training exercises. An exercise that asks staff to find errors in an AI-drafted grant section is more valuable than a generic accuracy exercise.
Content Area 5: Using AI Responsibly
Ethics, compliance, data protection, and organizational policy
The fifth content area integrates the ethical, legal, and policy dimensions of AI use: cybersecurity practices, data protection, understanding AI limitations in high-stakes contexts, complying with legal requirements, preventing unethical applications, protecting confidential information, and adhering to organizational AI use policies. This is the area most directly connected to the governance concerns that nonprofit boards and executive directors should be addressing, and it is the area most likely to be skipped or treated as an afterthought in training programs focused primarily on tool use.
For nonprofits, responsible AI use has specific dimensions that differ from the corporate context. What information about program participants is appropriate to share with an AI tool? How should staff handle AI-generated content in grant reports? What does the organization's acceptable use policy say about using AI for donor communications? What are the organization's obligations under applicable privacy law when AI tools access beneficiary data? Staff who cannot answer these questions are not equipped to use AI responsibly, regardless of how proficient they are in prompting.
This content area also requires that governance structures exist before training is delivered. Training staff on responsible AI use is impossible if the organization does not yet have an acceptable use policy, data handling guidelines, or clear escalation paths for AI-related concerns. The framework implicitly acknowledges this by positioning responsible use as an integrated content area, not an add-on, suggesting that governance and training must develop in parallel rather than sequentially.
Nonprofit application: Before delivering this content area, confirm that the organization has an AI acceptable use policy that staff can be trained on. Training without a policy leaves staff without a clear reference point for the decisions they will face.
The Seven Delivery Principles
The framework does not just define what to teach; it defines how to teach it. The seven delivery principles are where the DOL framework diverges most sharply from conventional corporate AI training approaches, and where its guidance is most directly relevant to the specific constraints and opportunities of the nonprofit context.
1. Enable Experiential Learning
AI literacy is most effectively developed through direct, hands-on use, not abstract or theoretical instruction. Programs must include interactive prompt exercises, live feedback loops, and practical scenario-based tasks. The implication for nonprofits is that classroom-style AI presentations, while useful for awareness, are insufficient for skill development. Staff need protected time and space to actually use AI tools on real or realistic tasks.
2. Embed Learning in Context
Training is more effective when aligned with a worker's specific job, industry, or existing training program. The framework encourages integrating AI literacy into role-specific professional development rather than treating it as a standalone subject. For nonprofits, this means development staff training should focus on grant writing and donor communications, program staff training should focus on case documentation and outcome reporting, and so on, rather than a single generic curriculum delivered to all staff.
3. Build Complementary Human Skills
AI literacy efforts must demonstrate how AI augments human capabilities such as critical thinking, creativity, communication, and domain expertise, rather than replacing them. This principle guards against both overconfidence in AI outputs and the anxiety that AI training will eventually make staff redundant. For nonprofits, this means framing AI as a tool that gives mission-driven professionals more capacity to do the work that requires human judgment, relationship, and values.
4. Address Prerequisites to AI Literacy
Before AI-specific training begins, programs must confirm that participants have digital literacy foundations, device access, and broadband connectivity. This principle directly acknowledges the digital equity concerns that are particularly acute in the nonprofit sector. Organizations with field-based staff, rural operations, or staff who use shared devices face real prerequisites that must be addressed before AI training is meaningful.
5. Create Pathways for Continued Learning
Foundational AI literacy should be a starting point, not a destination. Organizations must establish visible routes for staff to continue building skills, whether through certifications, advanced tool access, or participation in AI learning communities. The AI skills landscape changes rapidly; organizations that treat a one-time training as sufficient will find their staff falling behind the pace of tool development within months.
6. Prepare Enabling Roles
Managers, trainers, mentors, and career counselors must themselves be equipped with AI knowledge before they can support others in adopting AI tools. This principle is one of the most frequently violated in nonprofit AI training programs: organizations invest in frontline staff training while skipping manager training, creating a situation where staff are encouraged to use AI tools by supervisors who cannot evaluate the quality of AI-augmented work or guide responsible use decisions.
7. Design for Agility
Training programs must be modular and built for continuous updates. AI capabilities evolve rapidly; training curricula designed as static documents will become outdated quickly. For nonprofits, this means designing training as a living program, with a named owner responsible for quarterly or semi-annual updates, rather than a project with a completion date.
Role-Based Training Requirements
One of the most practically useful aspects of the DOL framework is its explicit differentiation between workforce segments with different training depth requirements. The framework does not treat AI literacy as a single skill level that all staff should achieve; instead, it acknowledges that different roles require different depths and emphases within the five content areas. This role-based structure gives nonprofit leaders a way to design training investments that are proportionate to the AI exposure and decision-making authority of different staff segments.
Frontline Employees (All Staff)
Foundational competency across all five content areas. The framework is explicit that this is universal, not limited to technical staff. Every person in the organization should reach baseline AI literacy, meaning they understand what AI is and is not, can identify AI tools relevant to their role, can prompt effectively for basic tasks, can evaluate outputs before acting on them, and understand the organization's responsible use policies.
For nonprofits, reaching this baseline for all staff is the primary goal of an initial training investment. The AI for Nonprofits Sprint, backed by OpenAI and the Robin Hood Foundation and targeting 100,000 nonprofit staff by the end of 2026, uses 50% of all staff reaching baseline AI literacy as a benchmark for organizational readiness. This is achievable using low-cost, off-the-shelf tools with structured facilitation.
Power Users and AI Champions
Advanced prompting and evaluation skills, plus the capacity to lead small pilots and model AI-augmented workflows for colleagues. These are staff who have demonstrated high interest and medium-to-high proficiency and who can serve as the internal layer of AI support between frontline adoption and external technical assistance.
For nonprofits, identifying and investing in this cohort early produces an outsized return. Internal AI champions who understand the organization's specific work, data, and constraints are far more effective at supporting colleague adoption than external trainers who lack that context. The investment in this cohort is also a retention tool: staff who develop specialized AI skills are growing professionally, which matters in the competitive nonprofit talent market.
Managers and Supervisors
Dedicated training focused on guiding team AI adoption, evaluating AI-augmented work quality, and understanding governance considerations. Managers who lack AI literacy are one of the most common structural barriers to organizational AI adoption: they cannot coach adoption, they cannot evaluate the quality of AI-assisted work products, and they cannot make informed decisions about when AI use is and is not appropriate for their teams.
The framework's Principle 6 (Prepare Enabling Roles) is explicit that manager upskilling is a prerequisite for effective frontline training, not a follow-on. For nonprofits, this means manager training should be scheduled before or simultaneous with frontline training, not after. A manager who is learning about AI tools at the same time their staff are adopting them is not equipped to guide that adoption.
Governance Leaders and Executives
Strategic literacy covering how AI improves or risks organizational workflows, how to establish AI policies, how to make resource allocation decisions about AI investment, and how to oversee AI governance. This is not about operational proficiency; it is about the knowledge needed to fulfill leadership responsibilities in an environment where AI is a material organizational factor.
For nonprofit executives and board members, this training should directly address the governance questions raised by agentic AI systems, the liability implications of AI decisions affecting beneficiaries, and the requirements emerging from state and federal regulation. The DOL framework does not specify the content of executive AI literacy in detail, but the accountability context establishes that it must include governance and risk management dimensions that go beyond operational tool use.
Building a Nonprofit AI Training Curriculum: A Seven-Step Approach
Drawing on the DOL framework's five content areas and seven delivery principles, the following seven steps offer a practical process for nonprofit leaders designing an AI training program. Each step is calibrated to the resource constraints, staff diversity, and mission-specific considerations that characterize the sector.
Step 1: Conduct an AI Skills Assessment
Before designing training, map current staff proficiency across the five content areas. This is a professional development planning tool, not a performance evaluation, and should be framed as such. Assess three dimensions per staff member: technical proficiency (can they use specific AI tools?), strategic literacy (do they understand when and why AI is appropriate?), and responsible use awareness (do they understand data protection and policy compliance?). The assessment reveals where universal baseline training ends and where role-specific depth training begins.
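One lightweight way to operationalize the three-dimension assessment is a simple tally that flags, per person, which dimensions fall below a baseline cutoff. The Python sketch below assumes a 1-to-5 self-rating scale and a cutoff of 3; the scale, cutoff, and field names are illustrative choices, not DOL guidance.

```python
# Sketch of tallying a skills self-assessment across the three dimensions
# described above. The 1-5 scale and the baseline cutoff of 3 are
# illustrative assumptions.

from dataclasses import dataclass

BASELINE = 3  # illustrative cutoff for "foundational competency"

@dataclass
class Assessment:
    name: str
    technical: int    # can they use specific AI tools?
    strategic: int    # do they understand when and why AI is appropriate?
    responsible: int  # data protection and policy compliance awareness

def training_gaps(assessments):
    """Return, per person, the dimensions still below the baseline cutoff."""
    gaps = {}
    for a in assessments:
        below = [dim for dim, score in
                 [("technical", a.technical),
                  ("strategic", a.strategic),
                  ("responsible", a.responsible)]
                 if score < BASELINE]
        if below:
            gaps[a.name] = below
    return gaps
```

The aggregate output is what training design actually needs: if most gaps cluster under "responsible," governance training comes first; if they cluster under "technical," hands-on tool sessions do.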
Step 2: Address Digital Prerequisites
Before any AI-specific training, confirm that all staff have reliable device access, internet connectivity, and baseline digital literacy. Field-based staff, staff in rural or low-connectivity areas, and staff who use shared devices face real prerequisites that must be resolved before AI training is meaningful. Organizations with significant prerequisite gaps may need to phase their training rollout or seek digital equity funding alongside AI training resources.
Step 3: Deliver Universal Baseline Training
Offer all staff a structured introduction covering all five content areas using hands-on exercises built around nonprofit-specific workflows. A useful target is that 50% or more of all staff complete this training within the first year, using low-cost or free tools. The AI for Nonprofits Sprint and OpenAI Academy both offer free structured curricula that can be used as starting points. Contextualize these curricula with the organization's own examples, policies, and use cases.
Step 4: Run Parallel Manager and Champion Cohorts
Simultaneously with or before frontline training, deliver manager training focused on evaluating AI-augmented work, guiding responsible adoption, and understanding governance requirements. Identify high-interest, high-proficiency staff to develop as internal AI champions through more advanced training. Both cohorts require investment before frontline adoption can succeed at scale.
Step 5: Layer in Role-Specific Advanced Training
After baseline training is complete, add context-specific modules for different functional roles. Grant writers: advanced prompting for research synthesis, structure generation, and narrative development. Program staff: evaluation skills for AI-assisted case documentation and outcome reporting. Finance staff: responsible use training specifically around data handling for financial processes. Communications staff: output evaluation skills for accuracy and tone in public-facing content.
Step 6: Establish Governance Alongside Training
The Responsible Use content area requires corresponding organizational policies. Training without governance creates confusion about what is and is not permitted, and leaves staff without a reference point for the AI-related decisions they will encounter. An acceptable use policy, data handling guidelines, and a clear escalation path for AI-related concerns must accompany the curriculum rollout, not follow it.
Step 7: Build Pathways and Design for Agility
Create visible routes for staff to continue building skills: access to certifications such as IBM SkillsBuild, Microsoft AI Fundamentals, or sector-specific training programs. Designate a named owner responsible for curriculum updates on a quarterly or semi-annual basis. Treat the training program as a living operational function, not a completed project. The AI skills landscape changes fast enough that training designed in early 2026 will be partially outdated by late 2026.
How to Fund AI Training: WIOA and Free Resources
One of the most significant practical implications of the DOL framework is the funding access it opens. The ETA's guidance explicitly authorizes WIOA Title I Adult, Youth, and Dislocated Worker funds to be used for AI literacy training. WIOA Adult funding for 2026 totals approximately $875.6 million nationally; Youth funding totals approximately $948.1 million. Nonprofits whose staff or program participants qualify under WIOA eligibility criteria can access these funds through their regional American Job Center.
Beyond WIOA funding, the DOL framework was released alongside a broader federal AI workforce development initiative that includes significant free training resources. Nonprofit leaders should be aware of the following options before investing in paid training programs.
Free Training Resources
- OpenAI Academy
"AI for Nonprofits 101" video training. Free, self-paced, designed specifically for the sector.
- AI for Nonprofits Sprint (FCNY / aisprint.org)
Cohort-based program targeting 100,000 nonprofit staff. Includes complimentary ChatGPT Plus licenses. Over 38,000 nonprofit staff have already completed AI 101 through this program.
- Microsoft Learn AI Skills for Nonprofits
Free training for nonprofit staff, including Microsoft Copilot fundamentals and AI literacy modules.
- IBM SkillsBuild
Free AI training with certifications. Covers AI fundamentals, responsible AI, and applied AI skills.
Funded Access Options
- American Job Centers (WIOA-funded)
Every AJC received TEN 07-25 guidance. Contact your regional AJC to access subsidized AI training for staff who qualify under WIOA eligibility criteria.
- State Workforce Development Boards
State boards have discretionary authority to fund AI literacy through WIOA. Direct outreach to your state board may surface grant programs or cost-sharing opportunities not listed publicly.
- AI Readiness Grantmakers
The AI readiness grantmaking landscape is expanding. Several community foundations and national funders have explicitly funded nonprofit AI capacity building in 2025-2026.
- DOL AI Workforce Contact
Contact [email protected] for information on DOL-sponsored training webinars and additional funding opportunities aligned with the framework.
The Most Important Insight for Nonprofit Leaders
The most important insight from the DOL AI Literacy Framework for nonprofit leaders is what the framework does not say. It does not say that AI literacy is a technical skill for technical staff. It does not say that organizations should wait until AI tools stabilize before investing in training. It does not say that a single training event is sufficient. And it does not treat AI literacy as separate from the human skills (critical thinking, communication, creativity, and domain expertise) that mission-driven work has always required.
What the framework says instead is that every person in every organization, regardless of role, technical background, or industry, needs foundational competency in how to engage with AI tools responsibly and effectively. It says this is a continuous practice, not a milestone. And it says the organizations that build this capability across their full staff, rather than concentrating it in a few technically inclined individuals, will be the ones positioned to realize AI's actual value rather than its hype.
For nonprofits operating with tight margins, overburdened staff, and chronic underinvestment in technology and professional development, this is simultaneously a challenge and an opportunity. The challenge is real: building AI literacy across a diverse, often geographically dispersed team with limited budget and limited time is not easy. The opportunity is equally real: the funding access the DOL framework opens, the free training resources it coordinates with, and the structured approach it provides mean that nonprofit leaders who engage with the framework seriously have more support for AI workforce development than has ever previously existed.
Organizations that want to go deeper on connected topics will find the discussions on building AI champions internally, creating a sustained AI learning culture, and embedding AI skills in hiring and job descriptions directly applicable to the implementation work the DOL framework enables.
Conclusion
The DOL AI Literacy Framework is not a compliance checklist. It is voluntary guidance with no enforcement mechanism, so there is nothing to comply with. It is a design guide for building durable AI literacy across an organization, and its value lies entirely in whether leaders choose to use it as one.
The five content areas give nonprofit training programs a structured vocabulary that is grounded in federal workforce development practice. The seven delivery principles address the implementation questions that most organizational AI training ignores. The role-based differentiation helps allocate training investment proportionately. And the funding mechanisms the framework opens, particularly WIOA access through American Job Centers, provide a pathway that many nonprofits did not know existed.
The gap between the 92% of nonprofits that use AI in some capacity and the 7% that report major strategic impact is not primarily a technology gap. It is a capability gap. The organizations that close that gap will be the ones that treat AI literacy as an organizational investment, not an individual initiative, and build that literacy systematically across their full teams using frameworks, like the DOL's, that are designed to make it work.
Build AI Capability Across Your Entire Team
One Hundred Nights helps nonprofits design and deliver AI training programs that are grounded in current frameworks, adapted to your specific workflows, and built to produce lasting capability rather than one-time awareness.
