
    Seven Delivery Principles for Nonprofit AI Training: How to Apply the Federal Framework

    When the U.S. Department of Labor released its AI Literacy Framework in February 2026, the five content areas got most of the attention. The seven delivery principles, which describe how AI training should actually be built and run, are arguably more important for nonprofits trying to move staff from awareness to capability. This guide translates each principle into concrete design decisions for nonprofit training programs of any size.

    Published: May 10, 2026 · 15 min read · Workforce Development

    Most nonprofit AI training is structured the way IT compliance training has always been structured: a webinar, a slide deck, maybe a recorded module, and a sign-off that someone watched it. By the time staff sit down at their desks two days later, they have forgotten almost everything except the name of the tool. The training was delivered, but no learning took root. The Department of Labor's AI Literacy Framework, published February 13, 2026, was written in part to break this pattern across the public workforce system. Its seven delivery principles describe what effective AI training actually has to look like, and they apply just as cleanly to a 12-person nonprofit as they do to a state workforce board.

    The seven principles are: enable experiential learning, build complementary human skills, create pathways for continued learning, design for agility, embed learning in context, address prerequisites to AI literacy, and prepare enabling roles. Each one is a design constraint. Each one rules out certain shortcuts that look efficient but do not produce capability. Taken together, they describe AI training as a continuous, embedded, practice-based activity rather than a one-time event. That framing matters because it pushes nonprofits to invest in habits and infrastructure, not slide decks.

    This article walks through each principle in turn, explaining what it requires, why the federal framework prioritized it, and how a nonprofit can apply it without the staff or budget of a federal grantee. The goal is not to make your training resemble a federal program. The goal is to use the principles as a checklist when you are designing or revising the way your team learns to work with AI. For background on the five content areas the framework also defines, see our companion piece on the DOL's five foundational AI literacy areas. For role-by-role competencies, see our walkthrough of mapping the DOL AI Literacy Framework to nonprofit job roles.

    A note before we begin. The framework is voluntary guidance, not a regulation. Nonprofits are free to adapt it. What follows is one practical interpretation of how the principles translate into nonprofit reality, not a literal restatement of the framework text. The DOL document itself is short and worth reading in full. Use this article alongside it, not in place of it.

    Why the Delivery Principles Matter More Than the Content

    When nonprofit leaders ask us what to teach staff about AI, they are usually asking the wrong question. The content of AI training is increasingly available for free from credible sources. Microsoft, Google, OpenAI, Anthropic, and dozens of education nonprofits publish curricula that cover everything from prompt construction to bias detection. What is scarce, and what determines whether training produces results, is not content but delivery.

    A 90-minute video on prompt engineering is widely available. A 90-minute video that is paired with a hands-on session in which staff rewrite their actual emails using the techniques, followed by peer review, followed by a monthly community of practice, is much harder to find because it requires institutional design. The same content delivered well or poorly produces wildly different outcomes. The delivery principles are the federal framework's attempt to specify what "delivered well" actually means.

    For nonprofits, this reframing is liberating. You do not need to develop proprietary content. You do not need a learning and development department. You need to make a series of design choices about how learning happens in your organization. Those choices are inexpensive in dollars but expensive in attention. The delivery principles tell you which choices matter most.

    The rest of this article goes through each principle and translates it into specific design moves a nonprofit can make. Several of the moves will be familiar from good adult-learning practice generally. AI training is not exotic. It is adult learning applied to a moving target.

    Principle 1: Enable Experiential Learning

    The framework places experiential learning first for a reason. AI literacy is built through direct, hands-on use. Workers develop instincts about how AI behaves only by giving it inputs, seeing what comes back, and iterating. No amount of reading about prompts substitutes for the experience of writing one, watching the model misinterpret it, and refining the wording. Experience is how the mental model forms.

    What this rules out

    Lecture-format training without practice. Video courses that staff watch passively. Training that uses generic example prompts unrelated to the participant's actual work. Demonstrations in which one person uses AI and the audience watches.

    Design moves for experiential learning

    • Bring real work to every session. Ask staff to bring an actual email they need to write, a real grant section they are stuck on, or a recent donor research task. Use AI on that work in the session.
    • Reserve at least 60 percent of every session for hands-on time. If a training block is 90 minutes, no more than 35 minutes should be presentation.
    • Give staff a sandbox account before the session. Logistical friction kills experiential learning. If half the participants are still trying to log in, the session is wasted.
    • Make the first task small and almost certain to succeed. Confidence is built by experiencing a win, not by being warned about hallucination in the first ten minutes.
    • Pair staff who are early on the learning curve with staff who are slightly ahead. Peer guidance during practice is more effective than instructor guidance for adult learners.

    The cheapest experiential format is a 90-minute "AI office hours" block in which staff bring real work, log into a sandbox tool together, and a facilitator floats between them. No slide deck, no curriculum. Repeated weekly for a month, this produces more capability than any single workshop.

    Principle 2: Build Complementary Human Skills

    AI training that focuses only on the AI side of the relationship misses the point. The framework is explicit that the goal is human-AI collaboration, which means training has to develop the human skills that make collaboration work: judgment, communication, creative framing, critical evaluation, and ethical reasoning. These are the skills that distinguish staff who use AI well from staff who let AI use them.

    The skills the framework points to

    Judgment

    Knowing when to trust AI output, when to verify, and when to override. This is the single most important complementary skill and the hardest to teach directly. It develops through repeated cycles of using AI and discovering what it does and does not know.

    Communication

    The ability to give clear instructions, provide useful context, and translate organizational knowledge into the kind of input AI can act on. Staff with strong writing skills tend to become strong AI users.

    Critical evaluation

    Reading AI output carefully, noticing what is missing, identifying claims that need verification, and rejecting outputs that miss the mark. This is essential and surprisingly underdeveloped in many adults.

    Ethical reasoning

    Recognizing when an AI use crosses a line, when consent or disclosure is required, and when a task should not be delegated to AI at all. This is mission-specific for nonprofits and cannot be outsourced to general training.

    Design moves for complementary skills

    • After every hands-on exercise, build in a structured review step where staff identify one thing the AI got right and one thing they had to fix. This trains evaluation.
    • Include at least one "AI got this wrong, what would the consequences have been" exercise per training cycle, drawn from real anonymized examples.
    • Ground ethical discussions in the organization's actual mission and constituents rather than abstract principles. The staff working with survivors of violence think about disclosure differently than the staff running a museum gift shop.
    • Pair AI training with a refresher on writing fundamentals. Staff who learn to write clearer instructions for humans become better at writing prompts.

    Principle 3: Create Pathways for Continued Learning

    The framework treats AI literacy as a continuous trajectory rather than a credential. The technology changes month to month. The tools that staff used in early 2026 are different from the tools they will use in late 2026. A training program that ends with a certificate ceremony assumes a static skill, which is exactly wrong for this domain.

    What pathways look like in practice

    A pathway is a sequence of touchpoints that move a staff member from beginner to capable to expert over months and years. For nonprofits, it does not need to be elaborate. The components that matter most are repetition, gradual increase in difficulty, and recognition of progress.

    A simple nonprofit pathway

    Designed for a small or mid-sized organization with no learning and development staff

    • Month one: Onboarding. Two 90-minute hands-on sessions covering the organization's AI tools, the AI use policy, and a small set of common workflows.
    • Months two and three: Practice with support. Weekly "office hours" sessions, a shared Slack or Teams channel for questions, and a buddy system pairing new and experienced users.
    • Months four through six: Workflow ownership. Each staff member documents at least one AI-enabled workflow they have built and shares it with the team. This builds the shared prompt library.
    • Ongoing: Monthly community of practice. A 30-minute all-staff meeting where one or two staff members share what they tried, what worked, and what surprised them.
    • Annual: Refresher and reset. An organization-wide session that reviews policy changes, updates the tool stack, and revisits goals.

    The pathway above can be run by a single person with a calendar and a few hours a month. It outperforms most multi-thousand-dollar training contracts because it is continuous rather than episodic.

    Principle 4: Design for Agility

    AI is a moving target. The framework explicitly warns against treating AI literacy as a fixed curriculum, because by the time a fixed curriculum is approved, it is already out of date. Training has to be built with mechanisms for adaptation so content stays current with the technology landscape.

    What agility requires

    Agility in training is mostly a question of who owns the content and how often it gets refreshed. Heavy slide decks created once and used for years cannot be agile. Light, modular materials that a single person can update in an afternoon can be.

    • Build training around short, modular blocks. A library of 30-minute focused modules is easier to update than a 4-hour course.
    • Name a content owner with explicit time allocated. Without a named person, refreshes never happen. The minimum viable allocation is two to four hours per month.
    • Build feedback into every session. Ask participants what felt outdated, what was unclear, and what they wanted more of. Use that input to refresh.
    • Plan a quarterly review of the tool list. If the organization has switched from one AI tool to another, the training has to reflect that within weeks, not next year.
    • Treat the AI use policy as a living document. The policy that worked in February will need revision by August. Build in a review cadence at least every six months.

    For more on how policies and workflows evolve over time, see our piece on the workflow problem that holds most nonprofits back.

    Principle 5: Embed Learning in Context

    The framework emphasizes that AI training should be embedded into day-to-day tasks rather than treated as a separate activity. The phrase "in context" is doing a lot of work here. Context means using AI on real work, in real workflows, against real constraints, with real stakes. Training that happens in isolation from actual work produces knowledge that does not transfer.

    What embedded learning looks like

    The deepest form of embedded learning is to redesign workflows so that AI is naturally part of how the task gets done, and then to support staff through the transition. Instead of training development associates on prompt engineering as a separate topic, you redesign the grant research workflow to use AI, and the training happens in service of doing the actual work better.

    Design moves for embedded learning

    • Pick three workflows per function and AI-enable them deliberately. Train staff on those workflows specifically, with the actual templates and prompts the workflow uses.
    • Build training assignments around real tasks the participant owes someone else. If a communications associate is going to draft a newsletter on Tuesday, the Monday training uses next week's actual newsletter draft.
    • Position AI features inside existing tools. If your CRM has AI features, train staff in the CRM itself rather than in an external sandbox. Context cues memory.
    • Make supervisors part of training. When a supervisor sets the expectation that AI is part of how the team does its work, embedded learning happens automatically.
    • Connect AI training to performance goals. If an employee's annual goals include workflow improvements, AI training becomes obviously relevant rather than an extra burden.

    Principle 6: Address Prerequisites to AI Literacy

    The framework recognizes that AI literacy assumes prerequisites that not every worker has. Comfort with computers and basic software, reading and writing fluency at a working level, willingness to experiment and accept ambiguity, and basic digital literacy all sit underneath AI literacy. Training programs that skip past prerequisites lose participants invisibly. The participant who cannot figure out how to copy text between two browser tabs is going to fall behind regardless of how good the prompt engineering lesson is.

    How to address prerequisites without singling people out

    For nonprofits, this principle is especially important because workforce diversity often means staff range widely in digital fluency. The wrong move is to require a digital literacy test before AI training, which embarrasses experienced staff and tells less fluent staff they are not ready. The right move is to build prerequisites into the training itself in a way that benefits everyone.

    • Pre-session setup support. Offer optional 30-minute "tech check" sessions before the main training where anyone who wants help getting logged in or finding the right tab can come.
    • Slow the first session deliberately. Spend more time than feels necessary on the basic mechanics of accessing the tool. This protects everyone and frames the work as a shared exploration.
    • Use the buddy system intentionally. Pair staff in a way that exchanges complementary strengths. The longtime case manager who is rusty with new tools pairs well with the newer staff member who is fluent with software but less familiar with the work.
    • Provide written materials in plain language. Avoid jargon. Define every acronym. The first time someone sees "LLM," it should be spelled out and explained, not assumed.
    • Offer alternative formats. Some staff learn better from short videos, others from written guides, others from in-person walkthroughs. Multiple formats catch different learners.
    • Normalize asking questions. The facilitator should ask the first "dumb" question themselves, so that no participant feels they are the only one confused.

    For deeper guidance on building literacy across staff with different starting points, see our piece on building AI literacy across multilingual nonprofit staff.

    Principle 7: Prepare Enabling Roles

    The seventh principle is the one most often overlooked. The framework recognizes that AI training does not just need participants; it needs people who can deliver it. Trainers, coaches, supervisors, and peer mentors are all enabling roles, and they need development too. A training program that depends on a single staff member who learned AI on their own and is now expected to teach everyone else is fragile and unsustainable.

    Who counts as an enabling role in a nonprofit

    AI champions

    Staff who are early adopters and have informal influence with peers. With light support, they can run office hours, document workflows, and answer questions. See our piece on building AI champions.

    Supervisors and managers

    The single biggest predictor of whether training translates into changed behavior is whether the participant's supervisor expects AI use as part of the job. Supervisor preparation is therefore an enabling investment.

    Trainers and facilitators

    Whoever runs the training sessions, whether internal staff or external partners. They need both AI fluency and adult learning facilitation skills. The combination is rarer than either alone.

    Policy and governance leads

    The staff who own the AI use policy, vendor relationships, and risk decisions. Their fluency determines how well the rest of the organization is supported with infrastructure.

    Practical moves for preparing enabling roles

    • Train supervisors first, before their teams. Their comfort or discomfort with AI will propagate.
    • Give AI champions explicit time on their calendar for coaching peers, not just an honorary title.
    • Pay for facilitator skills development when needed. A staff member who knows AI but has never run a workshop will struggle, and that is fixable with a one-day course.
    • Recognize enabling work in performance reviews and compensation conversations. Otherwise it becomes unpaid extra work and the people best positioned to do it burn out.
    • Build a small community across enabling roles, internally if you are large enough and externally if you are not. A nonprofit AI lead with no peers gets lonely and stuck.

    Putting the Seven Principles Together

    Read together, the seven delivery principles describe a particular kind of training operation. It is hands-on rather than lecture-based. It uses real work rather than examples. It is continuous rather than episodic. It is modular and updatable rather than fixed. It is embedded in workflows rather than separated from them. It meets staff where they are on prerequisites. And it invests in the people who deliver it, not just the people who receive it.

    A nonprofit that operationalizes all seven principles ends up with something that does not look like traditional training at all. It looks like a community of practice with an embedded curriculum, run by people who are themselves continually learning, anchored in the actual work of the organization. That is what good adult learning has always looked like. The framework's contribution is to make explicit that AI training has to take that shape, because shortcut approaches that work for routine compliance topics do not work here.

    An organization just starting out does not need to implement all seven at once. The most effective first move is usually to combine two: experiential learning and embedded context. Run a weekly hands-on session for one team using their real work, and within a few months you will have changed how the team operates. From there, the other principles become easier to layer in because the basic habit of practice-based AI learning is already established.

    Common Failure Modes to Avoid

    Even nonprofits that take the framework seriously can stumble in predictable ways. Knowing the failure modes in advance makes them easier to avoid.

    • The big-bang launch. A single elaborate kickoff event followed by silence. Energy peaks, then the work goes back to normal. The principles all point toward continuous activity, not events.
    • The orphan curriculum. Training materials built by a consultant, then handed to internal staff who have no time and no incentive to update them. Within six months the materials reference tools the organization no longer uses.
    • Overemphasis on tooling. Spending the training budget on tool subscriptions and almost nothing on instruction, expecting staff to figure things out from licenses alone.
    • Mandatory training without supervisor follow-through. Staff complete the sessions and immediately return to a manager who has not changed any workflows or expectations. Nothing changes.
    • Skipping the policy. Training staff to use AI without first writing an AI use policy. Staff develop habits that may violate policies that are written later, and the team then has to unlearn.
    • Treating everyone the same. Running an identical curriculum for executive directors, frontline program staff, and the bookkeeper. Each role needs the framework's content areas at different depths and emphases.

    Budget and Scale Considerations

    A common reaction to the framework is that it sounds expensive. It is not, especially compared to the alternative of paying for AI tooling that staff cannot use effectively. The cost of implementing the seven principles is mostly in attention and named ownership, not in dollars.

    A small nonprofit can run a credible AI training program for under five hundred dollars a year by combining free curriculum from public sources, an internal AI champion who runs weekly office hours, a supervisor expectation that AI is part of how the work gets done, and a simple shared library of prompts and templates. The investment is mostly time, and most of that time pays back within weeks through faster work.

    A larger nonprofit with a dedicated training function should still resist the urge to build a complex curriculum upfront. The framework's emphasis on agility and context argues for starting small, observing what works, and expanding from there. Spending fifty thousand dollars on a learning management system before knowing what the curriculum needs to cover is almost always a mistake.

    For practical guidance on resources you can use without budget, see our roundup of free AI training resources for nonprofits.

    Conclusion

    The seven delivery principles are not a curriculum; they are a design philosophy. They describe the kind of training operation that actually moves staff from being aware of AI to being capable with AI. None of the principles are unique to AI. Effective adult learning has always required practice, context, continuous reinforcement, prepared facilitators, and attention to prerequisites. The framework's value is in stating these requirements explicitly for a domain where many organizations are tempted to rely on the shortcut of a one-time workshop.

    For nonprofits, the principles are also a financial argument. They explain why elaborate one-time training events tend to produce disappointing results, and why simpler embedded approaches tend to outperform them. An hour-a-week office hours session run for a year almost always changes more behavior than a single full-day workshop, and it costs almost nothing. The framework gives leadership the language to make that case internally and to redirect training budgets toward the practices that actually work.

    The most important move is to start. Pick one team. Pick one workflow. Run one weekly hands-on session with that team using their real work. Pay attention to what helps and what does not. Adjust. The seven principles are scaffolding for that process, not a prerequisite for beginning it. Organizations that take the framework as permission to plan for another six months before doing anything will fall further behind. Organizations that use it as a checklist for the work they are already starting will find their teams capable, confident, and effective by the time the rest of the sector is still building slide decks.

    Ready to Build AI Training That Sticks?

    We help nonprofits design AI training programs that follow the principles in this article: hands-on, embedded, continuous, and aligned with the way your team actually works. Let's design yours.