Mapping the DOL AI Literacy Framework to Nonprofit Job Roles
The U.S. Department of Labor's AI Literacy Framework defines five foundational content areas every worker should master. This guide translates those areas into concrete, role-specific competencies for the people who actually run nonprofits, from executive directors to program staff to finance leads.

In February 2026, the U.S. Department of Labor's Employment and Training Administration released an AI Literacy Framework intended to guide workforce and education programs across the country. The framework is short, deliberately general, and built around a simple idea: AI literacy is a foundational set of competencies that lets people use and evaluate AI responsibly. It names five content areas every worker should be able to navigate and seven principles for how AI training should be delivered. For a deeper look at the framework itself, see our companion piece on the five foundational AI literacy areas.
The framework's strength is also its limitation. By design, it does not prescribe what AI literacy looks like for any specific job. That translation is left to industries, employers, and educators. For nonprofits, that translation has not happened yet in any consistent way. A development director and a finance director both need AI literacy, but they need very different things. A program coordinator working with clients in crisis has a different bar to clear than a communications associate scheduling social media. Without that role-level mapping, nonprofits tend to send everyone to the same generic AI training and wonder why behavior on the ground does not change.
This article does the translation work. We walk through each of the five framework areas at a high level, then map them onto the eight nonprofit roles where AI literacy makes the biggest immediate difference: executive director, development director, program manager or director, finance and operations lead, communications and marketing lead, IT or systems lead, HR and people operations lead, and frontline program staff. For each role, we describe what each of the five areas looks like in practice, including the questions a person in that role should be able to answer and the behaviors they should be able to demonstrate.
A short framing note before we begin. This mapping is meant to be opinionated and usable, not exhaustive. It is the version of role-specific AI literacy we would recommend for a small or mid-sized nonprofit putting together a real training plan, not a comprehensive curriculum design. If your organization is larger or has a sophisticated learning and development function, treat this as a starting point and refine it based on your specific roles, programs, and risk profile.
The Five Foundational Areas, Briefly
Before we map roles to competencies, a quick reminder of the five content areas the DOL framework lays out. We will reference each by its short name throughout the role descriptions.
1. Understanding AI Principles
Knowing at a working level what AI is, how machine learning and large language models actually behave, what they can and cannot do, and the basic concepts of training, inference, hallucination, and bias.
2. Exploring AI Uses
Recognizing where AI is appropriate and useful in real work, identifying tasks AI can support, and being able to spot opportunities to apply AI in one's own job.
3. Directing AI Effectively
Working with AI tools productively, including prompting, providing context, iterating on outputs, and using AI in workflows. This is the practical "how to get good results" muscle.
4. Evaluating AI Outputs
Critically assessing what AI produces. Knowing when outputs are wrong, biased, or unsafe. Understanding the limits of AI and when human judgment must override.
5. Using AI Responsibly
Applying ethical, legal, and policy considerations. Knowing what data can and cannot be used, how to handle privacy and consent, when to disclose AI involvement, and how to operate within organizational and regulatory guardrails.
In the role mappings that follow, we describe what each area looks like for that role specifically. The framework's structure is the same across roles, but the specific competencies vary substantially depending on the work the person does and the data they touch.
Executive Director
The executive director's AI literacy is less about hands-on tool use and more about judgment, oversight, and the ability to ask the right questions. The bar is "informed enough to govern AI use across the organization," not "personally able to fine-tune a model."
What AI literacy looks like for an executive director
- Principles: Can describe in plain language the difference between traditional software, machine learning, and generative AI. Knows that LLMs can hallucinate confidently and that this is a structural feature, not a bug to be patched out.
- Uses: Can identify which functions of the organization are good candidates for AI investment and which are not. Can spot when a vendor is selling "AI" that is really just rebranded automation.
- Directing: Personally uses AI for at least one strategic task, such as drafting board updates, summarizing reports, or stress-testing strategy. Cannot govern what they have never used.
- Evaluating: Can ask the questions that reveal whether AI outputs are safe to act on, such as "What was the source?", "What was the prompt?", and "Who reviewed this?" Recognizes when the team has stopped checking AI work.
- Responsibly: Owns the AI use policy, the disclosure standard, and the relationship with the board on AI risk. Understands the organization's data classification well enough to know what cannot go into a public AI tool.
A useful sanity check: an executive director with adequate AI literacy can sit through a board AI risk discussion without an external advisor whispering in their ear. They may still want the advisor, but they do not need one to follow the conversation.
Development Director
Development is one of the most AI-intensive functions in modern nonprofits. The development director's literacy needs to cover both the offensive side (using AI to research, write, and personalize) and the defensive side (ensuring donor data is treated correctly and that AI-generated communications stay true to the organization's voice).
What AI literacy looks like for a development director
- Principles: Understands what donor-data AI features in CRM platforms actually do, including how propensity scores and predictive models are built and what their limitations are.
- Uses: Can articulate where AI adds value in fundraising (research, drafting, personalization, segmentation, send-time optimization) and where it does not (relationship-building, major gift cultivation, ethical solicitation decisions).
- Directing: Has a documented prompt library for high-frequency tasks like donor research summaries, grant proposal drafts, thank-you note variations, and appeal-letter testing. Can run a donor segmentation analysis with AI assistance.
- Evaluating: Can spot when AI-drafted communications drift away from the organization's voice or include subtle factual errors about donors or programs. Maintains a quality bar that does not slip as volume increases.
- Responsibly: Knows exactly which donor data fields can go into which AI tools, has clear rules around AI use in major gift work, and can defend the team's practices to a donor who asks "Did a person write this?"
Program Manager or Director
Program leadership is where AI literacy intersects most directly with the people the nonprofit exists to serve. The stakes are higher and the right answer is often "not here," especially in any work touching mental health, immigration status, legal matters, or other sensitive areas.
What AI literacy looks like for a program manager
- Principles: Understands that AI tools trained on general internet text may give wrong or unsafe answers to questions about specific populations, eligibility criteria, or legal rights. Knows the difference between a tool that retrieves verified content and one that generates answers from scratch.
- Uses: Can map program workflows and identify which steps benefit from AI (intake summaries, document translation drafts, meeting notes) and which steps should never be delegated to AI (clinical judgment, eligibility determinations, crisis response).
- Directing: Has set up at least one well-scoped AI workflow for the team, such as standardized intake-note generation or translation drafts, with explicit human-review steps before anything reaches a participant.
- Evaluating: Trained the team to flag AI outputs that feel "off," especially in language about specific populations. Has a clear escalation path when something goes wrong.
- Responsibly: Knows the consent and disclosure obligations for AI use with participants, especially in any state with new AI mental-health or interpreter regulations. Has a written list of "AI off-limits" tasks that everyone on the team understands.
Finance and Operations Lead
Finance and operations are functions where AI quietly delivers significant value through transactional and analytical work, and where literacy gaps produce errors that take months to surface. The bar here is precision.
What AI literacy looks like for a finance lead
- Principles: Understands that LLMs are not calculators. Knows that any numerical analysis pulled from a chat needs to be re-computed or verified against the source.
- Uses: Can identify the right AI applications in finance: drafting variance analyses, summarizing audit memos, accelerating month-end narrative, flagging unusual transactions for review. Does not use AI for the underlying math.
- Directing: Has integrated AI into specific finance workflows with explicit "source of truth" rules. Knows how to use AI for draft and explanation while keeping authoritative numbers in the accounting system.
- Evaluating: Maintains rigor about AI hallucinations in financial language, such as a confident but wrong description of a restricted-fund rule. Treats AI summaries as drafts until they are reconciled.
- Responsibly: Has clear rules about what financial data can go into which AI tools. Understands implications for audit, donor restrictions, and grant-funder data agreements.
Communications and Marketing Lead
Communications is where AI is most visible to outside audiences, and therefore where the organization's AI practices most directly shape its reputation. The communications lead needs deep practical fluency and equally deep judgment about disclosure and voice.
What AI literacy looks like for a communications lead
- Principles: Knows how generative image, text, and video tools work at a level deep enough to anticipate failure modes, including bias in stock-style imagery and subtle factual drift in long-form copy.
- Uses: Can identify the right AI use across the content lifecycle, from brainstorming to drafting to repurposing to localization, while protecting the spaces where authenticity matters most.
- Directing: Maintains a brand-aware prompt library, including voice guidelines that the team uses consistently. Can run a content repurposing workflow that turns long-form into channel-appropriate pieces without losing nuance.
- Evaluating: Has a quality checklist for AI-generated content, including voice match, factual accuracy, sensitivity to communities portrayed, and disclosure compliance.
- Responsibly: Owns the disclosure standard for AI-assisted content. Knows the EU AI Act Article 50 disclosure rules if the organization operates internationally, and knows the U.S. state-level requirements where applicable.
IT and Systems Lead
For nonprofits with an IT lead, AI literacy moves into a different layer. This is the role responsible for integration, security, vendor management, and the technical infrastructure that makes responsible AI use possible.
What AI literacy looks like for an IT lead
- Principles: Understands the API economics, data flows, and model architectures the organization is using. Can explain to leadership the difference between using a hosted LLM and running one locally.
- Uses: Can evaluate bolted-on AI features in existing tools against AI-native alternatives, and can identify integration patterns that will scale without locking the organization into a single vendor.
- Directing: Maintains the technical infrastructure that supports AI use, including managed accounts, single sign-on, data loss prevention rules, and approved tooling.
- Evaluating: Can run or commission red-team exercises against the organization's AI tools and chatbots. Understands prompt injection, data exfiltration, and other AI-specific attack patterns.
- Responsibly: Owns vendor due diligence on AI features, including data handling, retention, training-data use, and breach notification. Knows which AI tools are approved for which data classifications.
HR and People Operations Lead
HR occupies a dual position with AI: the function both uses AI itself (in recruiting, performance review drafting, policy writing) and governs how the rest of the organization uses it (through policy, training, and accountability).
What AI literacy looks like for an HR lead
- Principles: Understands how bias enters AI systems and how that interacts with employment law. Knows the difference between AI used for drafting versus AI used to make or recommend decisions about people.
- Uses: Can identify where AI legitimately accelerates HR work (job description drafts, policy writing, onboarding content) and where it must not be used as a decision-maker (hiring, performance evaluation, termination decisions).
- Directing: Has built and maintains the organization's AI-assisted job description process, AI training rollout, and AI policy revision cycle.
- Evaluating: Reviews AI-generated HR content for tone and inclusivity. Audits any AI-assisted hiring or screening tools against fairness criteria and applicable law.
- Responsibly: Owns the AI use policy from a workforce perspective, including expectations of staff, training requirements, and consequences for misuse. Coordinates with IT on access controls.
Frontline Program Staff
Frontline staff, including case managers, coordinators, organizers, and direct-service workers, are the largest group of nonprofit employees and often the most overlooked in AI training. Their AI literacy is critical because they are closest to the people the organization serves and to the data those people share in confidence.
What AI literacy looks like for frontline staff
- Principles: Understands at a working level what AI tools can and cannot do, that they can be confidently wrong, and that many tools retain (or train on) whatever people put into them by default.
- Uses: Knows which AI tasks are approved for their role (note drafting, meeting summaries, translation drafts) and which are not (eligibility decisions, sensitive client conversations, anything in real time with a participant).
- Directing: Can use the documented team workflows competently, including the standard prompts, review steps, and quality checks. Does not invent personal workflows for sensitive work.
- Evaluating: Reviews AI outputs before they go anywhere, especially translations and intake summaries. Knows when to escalate something that does not look right.
- Responsibly: Knows the data rules in plain language: what goes into AI tools, what does not, and what to do if a participant asks "Did a person write this?" or "Was AI involved?"
The temptation with frontline staff is to over-restrict or to over-trust. Both fail. The right posture is clear scope, clear training, and clear escalation paths. For more on how to build this kind of training in a way that actually changes behavior, see our piece on building internal AI champions.
Turning the Mapping into a Real Training Plan
A role-by-role mapping is only useful if it becomes a training plan that actually runs. Here is a pragmatic sequence for taking the framework from PDF to practice in a small or mid-sized nonprofit.
Start with a competency baseline, not a curriculum
Before designing training, find out where people actually are. A short self-assessment using the role-specific competencies above, scored on a simple scale, tells you whether your development director is strong at "directing" but weak at "evaluating," or whether your finance lead has never used AI at all. This baseline shapes everything that follows.
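If you want to tally such a baseline in a spreadsheet or a short script, the logic is simple. The sketch below is purely illustrative: the five area names come from the framework, but the 1-to-4 rating scale, the target threshold, and the example scores are assumptions for this example, not part of the DOL framework.

```python
# Illustrative only: scoring a role-specific AI literacy self-assessment.
# The 1-4 scale and the target threshold are assumptions, not DOL standards.

AREAS = ["principles", "uses", "directing", "evaluating", "responsibly"]

def baseline_gaps(responses, target=3):
    """Return the areas where a self-rating falls below the target level."""
    return {area: score for area, score in responses.items() if score < target}

# A development director who is strong at directing but weak at evaluating:
dev_director = {"principles": 3, "uses": 4, "directing": 4,
                "evaluating": 2, "responsibly": 3}

print(baseline_gaps(dev_director))  # -> {'evaluating': 2}
```

Aggregating these per-person gap maps by role group then tells you which workshops to run first, which is exactly the point of taking a baseline before writing a curriculum.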
Train in role groups, not all-hands sessions
Generic AI training for the whole staff does not work, because the competencies are different for different roles. Group people by function and run training that uses their actual work as the example material. The communications lead and the finance lead need different workshops, with different example tasks and different evaluation criteria.
Anchor every session in real workflows
The DOL framework's delivery principles emphasize experiential learning embedded in the context of real work. Translation: people learn AI by doing their actual work with it, with feedback, not by sitting through theory sessions. Every training touchpoint should produce a workflow artifact the person can keep and use.
Make the AI use policy part of the training, not a separate document
The "Using AI Responsibly" area collapses if it is taught as policy compliance in a sterile context. Weave the data rules, disclosure standards, and escalation paths into the role-specific training so staff learn them as part of how the work actually happens.
Plan for the long arc, not the one-time rollout
AI capability is not a stable target. Models change, tools change, and the work changes. The DOL framework explicitly calls out the need for continued learning pathways, and any serious training plan needs a quarterly refresh, a way to capture new workflows as they emerge, and a person responsible for keeping the role mappings current.
Conclusion
The DOL AI Literacy Framework is a useful piece of public infrastructure. It gives nonprofits a common vocabulary, a defensible structure for AI training programs, and a way to align workforce development efforts with federal direction. What it does not give them is a role-by-role implementation, and that gap is where most well-intentioned training plans stall.
The mapping in this article is one attempt at filling that gap. Eight roles, five competency areas each, with the specific behaviors and questions that signal real literacy rather than generic awareness. The point is not that every nonprofit should adopt this exact mapping. The point is that every nonprofit should do the mapping work, whatever roles and competencies make sense for their specific organization, because the alternative is training that does not produce behavior change and policy that does not produce compliance.
If your organization has not yet defined what AI literacy means for each of your roles, that is the most useful thing you can do this quarter. It costs almost nothing, it produces clarity for staff, and it gives you the foundation you will need when funders, board members, or regulators start asking how your organization ensures responsible AI use. The framework is the scaffolding. The role-specific mapping is the building.
Build Role-Specific AI Literacy for Your Team
We help nonprofits translate frameworks like the DOL's into practical, role-by-role training that actually changes how people work. Let's design a literacy plan that fits your team.
