
    When Tech Giants Slash Staff for AI: What Nonprofit Leaders Must Consider

    Jack Dorsey's Block cut nearly half its workforce in February 2026, citing AI as the reason. Within hours, commentators declared this the moment AI workforce restructuring went mainstream. Before nonprofit leaders draw lessons from Silicon Valley's latest pivot, there is a great deal worth thinking through carefully.

    Published: February 27, 2026 · 12 min read · Leadership & Strategy

    In late February 2026, Jack Dorsey announced that Block, the financial technology company he leads, would cut nearly 4,000 employees, reducing its workforce by almost half. The reason, according to Dorsey, was straightforward: AI models had become so capable so quickly that a significantly smaller team using the right tools could do more and do it better. He went further, predicting that most companies would reach the same conclusion within a year. Investors responded enthusiastically, pushing Block's stock up more than 24% in after-hours trading.

    The story ricocheted across the business world almost immediately. Op-eds appeared within hours. LinkedIn filled with hot takes. And nonprofit leaders, who have watched the AI conversation intensify throughout 2025 and 2026, began asking a predictable but important question: should we be doing the same thing?

    The honest answer is more complicated than yes or no. The Block story is genuinely significant and contains real signals worth paying attention to. AI is getting more capable faster than most people expected, and organizations that ignore that reality do so at their peril. But the way a fintech company structured around software products and profit margins uses AI is categorically different from the way a nonprofit providing direct human services, advocacy, or community support should use it. The framework that makes sense for one may be deeply damaging for the other.

    This article examines what the Block layoff story actually tells us, where the tech industry model diverges from nonprofit reality, and how mission-driven organizations can think clearly about AI and workforce capacity without importing frameworks that were never designed for them.

    What the Block Story Actually Tells Us

    Before critiquing the model, it is worth understanding what Block actually did and why it matters. Dorsey's statement was unusually candid for a CEO announcement. He did not dress the layoffs in the usual corporate euphemisms about "streamlining" or "refocusing on core competencies." He said directly that AI models had become an order of magnitude more capable and that a smaller team with better tools could now accomplish what a larger team had required before. That level of transparency, whatever one thinks of the decision itself, is clarifying.

    The signal embedded in this story is real: AI is no longer just a productivity enhancer sitting alongside existing workflows. For some categories of knowledge work, it is now genuinely substituting for human capacity at scale. The categories where this is happening most rapidly tend to involve codified, repeatable processes: software development support, customer communication triage, data analysis, content generation, and similar tasks. These are areas where AI can now handle a substantial portion of the volume without a corresponding increase in headcount.

    For nonprofits, this signal matters in a specific way. Many organizations carry administrative burden that AI could meaningfully reduce. Grant reporting, donor acknowledgment workflows, meeting documentation, compliance tracking, internal knowledge management, and communications production all involve significant human time on tasks that are increasingly automatable. The Block story is a reasonable prompt to ask whether your organization is using AI aggressively enough in these areas.

    Where the Block story stops being useful as a model is when you move from "AI can handle more administrative work" to "therefore, we should reduce headcount." Those two conclusions do not follow automatically from each other, and for nonprofits, the second one deserves much more scrutiny than the first.

    The Real Signal

    AI is now genuinely substituting for human capacity on codified, repeatable knowledge work, not just augmenting it.

    The Faulty Leap

    "AI can do more work" does not automatically mean "we should employ fewer people." For nonprofits, those are separate questions.

    The Right Question

    Where can AI reduce administrative burden so our people can do more of the work only humans can do?

    Why the Tech Industry Model Doesn't Transfer to Nonprofits

    The most important structural difference between Block and a typical nonprofit is what the work actually is. Block's employees largely build and maintain software products, manage financial transactions, handle customer support at scale, and run the internal operations that support a tech company. A significant portion of that work is already mediated by systems and screens. The relationship between labor and output is largely indirect.

    For a nonprofit providing direct services, the work is fundamentally different. A case manager who builds relationships with clients, a counselor who sits with someone in crisis, a youth program coordinator who earns the trust of teenagers over months of consistent presence, a community health worker who navigates a neighborhood on foot: these roles are not a delivery mechanism for software. They are the service. The human relationship is the intervention. AI can support these roles in valuable ways, but it cannot replace the relational core of direct service work without changing what the service actually is.

    Even in nonprofit roles that look more like office work, the comparison breaks down. A development director who has spent years cultivating relationships with foundation program officers, a communications manager who understands the organization's history and community ties, an executive director whose credibility with the board is built on institutional knowledge and personal relationships: these roles carry value that lives in human judgment, trust, and context in ways that are genuinely difficult to replicate with AI systems, even very good ones.

    The Relational Core of Nonprofit Work

    Where AI fundamentally cannot substitute for human presence

    • Crisis counseling and mental health support, where trust and presence are the intervention itself
    • Case management with complex populations, where the relationship builds safety, engagement, and behavior change over time
    • Community organizing and advocacy, where trust is earned through sustained presence and shared risk
    • Youth development, where consistent adult relationships are the mechanism of positive outcomes
    • Major gift fundraising, where donor relationships are built through personal understanding, not volume

    The Accountability Difference

    Nonprofits answer to communities and missions, not shareholders

    When Block announced its layoffs, the stock rose 24%. That is the investor market expressing approval. Nonprofits have no equivalent mechanism that rewards workforce reduction. What they have instead is a community of constituents, a board representing public trust, funders measuring mission impact, and staff whose commitment often reflects genuine belief in the organization's purpose.

    This doesn't mean nonprofits should be inefficient or resistant to change. It does mean that the decision to reduce staff is evaluated through a completely different lens, one where the primary question is not "does this improve our margin?" but "does this strengthen or weaken our ability to serve the people who depend on us?"

    The Capacity Growth Reframe: AI Enables More Mission, Not Less Staff

    The most productive frame for nonprofits thinking about AI and workforce is not subtraction but expansion. If AI can handle more of the administrative and transactional work that currently consumes staff time, what becomes possible with that freed capacity? This question leads to very different strategic conclusions than "how many positions can we eliminate?"

    Consider a case manager who currently spends roughly 40% of her time on documentation, reporting, and administrative tasks. If AI tools cut that to 15%, a quarter of her total working time is freed, and her client-facing time rises from 60% to 85% of her week: an increase of more than 40% in relational capacity. For a for-profit company, that might be an argument for reducing headcount. For a nonprofit, it could be an argument for serving substantially more clients, for deepening the support provided to existing clients, or for allowing the case manager to finally have time for the reflective practice and professional development that makes her better at her work.
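    The arithmetic behind this example is worth making explicit, because the relative gain in client-facing capacity is larger than the share of time freed. The sketch below is purely illustrative; the 40% and 15% figures are the hypothetical assumptions from the scenario above, not data from any real organization.

```python
# Illustrative arithmetic for the hypothetical case-manager scenario.
# The 0.40 / 0.15 inputs are the article's assumed figures.

def capacity_gain(admin_before: float, admin_after: float) -> dict:
    """Return freed time and the relative gain in client-facing capacity.

    admin_before / admin_after: fraction of total work time spent on
    administrative tasks before and after AI adoption (0.0 to 1.0).
    """
    freed = admin_before - admin_after     # share of total time freed
    client_before = 1.0 - admin_before     # client-facing share before
    client_after = 1.0 - admin_after       # client-facing share after
    relative_gain = freed / client_before  # growth in relational capacity
    return {
        "freed_share_of_total_time": freed,
        "client_facing_before": client_before,
        "client_facing_after": client_after,
        "relative_capacity_gain": relative_gain,
    }

result = capacity_gain(0.40, 0.15)
# 25 points of total time freed; client-facing time rises from 60% to 85%,
# a roughly 42% increase in relational capacity.
```

    The key design point is the denominator: freed time is measured against the existing client-facing share, which is why a 25-point reduction in administrative load translates into a much larger proportional expansion of the work only humans can do.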

    This reframe matters because most nonprofits are not overstaffed relative to their mission; they are understaffed relative to community need. The problem is not that they have too many people doing work that AI could do. The problem is that they have too few people doing work that only humans can do, and the people they do have are too burdened with tasks that prevent them from doing that work well. AI offers a genuine opportunity to address that problem, but only if leadership frames the opportunity correctly.

    This is not a naive or anti-technology position. It is a recognition that the scarcity nonprofits face is usually scarcity of human relational capacity, not scarcity of software productivity. The right response to AI's expanding capabilities is to use them to multiply that human capacity, not to reduce it. Organizations already thinking along these lines, exploring how to use AI to strengthen direct service work, can find a useful frame in our discussion of the AI-augmented nonprofit model for 2030.

    The Subtraction Frame (Tech Industry)

    • AI does more work, therefore we need fewer people
    • Efficiency gains go to the bottom line
    • Success measured by stock price and margin improvement
    • Investor approval validates the decision

    The Expansion Frame (Mission-Driven Orgs)

    • AI reduces administrative burden, freeing human capacity for deeper mission work
    • Efficiency gains go to more clients, better services, or staff wellbeing
    • Success measured by mission impact and community outcomes
    • Community trust and program quality validate the decision

    The Ethical Weight of Nonprofit Workforce Decisions

    There is a dimension to nonprofit workforce decisions that rarely appears in tech industry analysis: the ethical relationship between the organization, its staff, and the communities it serves. Nonprofit staff are frequently mission-driven in a way that transcends their compensation. Many accept lower pay than they could earn elsewhere precisely because they believe in the work. They are not just employees; they are often deeply embedded in the communities the organization serves, sometimes having come from those communities themselves.

    When a nonprofit reduces staff citing AI, it is not just making a business decision. It is making a statement about what it values, what it believes the work is, and how it understands its accountability to the people it employs and the people it serves. If the staff being displaced served communities that already face unemployment, housing instability, or economic precarity, the ethical stakes are compounded. An organization whose mission includes economic justice or community wellbeing that displaces workers in its own community to optimize costs has a particular obligation to think carefully about what it is actually doing.

    This does not mean nonprofit leaders should never make difficult workforce decisions, including reductions in force when financial or operational reality requires it. It means that AI productivity gains are a thin justification for those decisions on their own. If the real driver is budget pressure, a loss of funding, or a shift in program priorities, leaders should name that honestly rather than attributing it to technological inevitability. The Block-style framing, where AI advancement makes staff reduction not just sensible but necessary, is a form of technological determinism that removes human agency and moral responsibility from decisions that actually involve a great deal of both.

    Questions Nonprofit Boards Should Ask Before Any AI-Driven Workforce Decision

    • Is the real driver of this decision AI capability, financial pressure, or program changes? Are we naming that honestly?
    • Have we genuinely exhausted the "expansion" frame, asking how freed capacity could serve more people rather than reduce headcount?
    • What is the relationship between the staff we would reduce and the communities we serve? What does this decision communicate to those communities?
    • If AI is now enabling the same mission delivery with fewer people, how are we ensuring the people affected are supported in transition?
    • What happens to program quality, client relationships, and institutional knowledge if we reduce the staff who carry them?
    • How will this decision affect our ability to recruit mission-driven talent in the future?

    Where Nonprofits Should Be Using AI Aggressively Right Now

    None of the above is an argument for caution, timidity, or slow adoption. Quite the opposite. The Block story is a reasonable signal that the pace of AI capability growth has accelerated, and organizations that are not actively and ambitiously integrating AI into their operations are falling behind in meaningful ways. The question is not whether to use AI but where and how.

    For nonprofits, the areas where aggressive AI adoption creates the most value are generally the ones where significant human time is spent on work that does not directly require human judgment, relationship, or creativity. Grant writing research and first drafts, meeting documentation and action tracking, donor acknowledgment and communication workflows, compliance monitoring and reporting, internal knowledge management, and communications production are all areas where AI tools can now handle substantial portions of the workload with high quality.

    The organizations that will look back at this period and feel they made the right choices are the ones building AI capacity now, not as a cost-cutting exercise but as a mission-amplification strategy. They are thinking about which tasks should be AI-handled, which should be AI-assisted, and which should remain fully human. They are training staff to use these tools well. They are building governance structures to ensure AI is used responsibly. And they are reinvesting the time AI frees into the work that only their people can do. You can read more about developing this kind of structured thinking in our overview of building an AI strategy for your nonprofit.

    High-Value AI Automation for Nonprofits

    Where AI should be doing the heavy lifting

    • Grant research, prospect identification, and first-draft proposal writing
    • Meeting transcription, note-taking, and action item extraction
    • Donor acknowledgment letters and stewardship communication templates
    • Program documentation, report generation, and compliance tracking
    • Internal knowledge management and staff onboarding materials
    • Social media content, email drafting, and communications repurposing
    • Data analysis, outcome tracking, and impact report generation

    Work That Should Stay Human-Led

    Where AI supports but does not replace human judgment

    • Client intake, assessment, and ongoing case management relationships
    • Crisis intervention, counseling, and therapeutic support
    • Major donor cultivation and relationship management
    • Community organizing, advocacy, and coalition building
    • Strategic planning, mission alignment, and organizational leadership
    • Board governance and fiduciary decision-making
    • Community partnership development and trust-building

    Managing Staff Anxiety When Headlines Like Block's Appear

    One immediate and practical consequence of stories like Block's is that nonprofit staff notice them. The headlines are not confined to business publications. They filter through social media, family conversations, and the general background anxiety that many workers already carry about AI and job security. When a story with "AI causes 4,000 layoffs" in the headline appears, nonprofit staff will read it and wonder what it means for them.

    Nonprofit leaders who stay silent when these moments arise are not protecting their staff from worry. They are simply leaving staff to form their own conclusions in an information vacuum, which tends to produce more anxiety, not less. The more effective approach is to address these stories directly, acknowledge that they are real, and explain clearly how your organization is thinking about AI and what it means for staff.

    That conversation does not require overpromising. Leaders cannot truthfully guarantee that no role will ever change. What they can do is explain the values guiding their approach, share how AI is being used to reduce administrative burden rather than replace people, and make clear that their goal is to strengthen the team's capacity, not to shrink it. Transparency and consistency on this point build the trust that makes AI adoption go more smoothly. Our article on talking to staff about AI and job security offers specific frameworks and language for those conversations.

    It is also worth involving staff in AI adoption rather than doing it to them. When people understand how AI tools can make their work less burdensome and more impactful, they tend to become advocates for thoughtful adoption rather than resisters. When they see AI introduced without explanation or input, they reasonably read it as a prelude to something threatening. The process matters as much as the outcome, and for nonprofits, where staff commitment often runs deep, getting the process right is both ethically important and strategically smart. More on creating that kind of inclusive adoption process is available in our coverage of overcoming AI resistance in nonprofits.

    A Framework for Nonprofit Leaders Navigating This Moment

    The Block story, and the wave of similar stories that will follow it, puts nonprofit leaders in a position that requires clarity about who they are and what they are trying to do. The tech industry will continue to set a pace and a frame that is designed for tech industry conditions. Nonprofit leaders who import that frame wholesale will make decisions that are misaligned with their organizations' actual nature, needs, and accountabilities.

    That does not mean ignoring the signals. It means reading them carefully and translating what is genuinely useful while discarding what does not apply. AI is getting more capable faster than most people expected. That matters for nonprofits. How it matters, and what follows from it, is something leaders need to work out for themselves, based on their mission, their community, their staff, and their values, not by taking cues from fintech companies answering to Wall Street.

    A Practical Leadership Framework for AI and Workforce

    Guiding principles for mission-driven organizations navigating AI workforce questions

    1. Lead with mission clarity

    Every AI and workforce decision should be evaluated through the lens of "does this strengthen or weaken our capacity to serve our mission?" That question should come before any productivity or cost analysis.

    2. Default to the expansion frame

    When AI creates capacity, ask first how to invest it in more mission before asking how to reduce costs. This does not preclude difficult decisions, but it ensures they are made thoughtfully rather than reflexively.

    3. Invest aggressively in administrative AI

    Do not let caution about workforce implications prevent aggressive adoption of AI in administrative and operational areas. The goal is to free human capacity, not protect task portfolios.

    4. Protect the relational core

    Identify clearly what work in your organization depends on human relationship, trust, and presence. Protect investment in that work even as you automate everything that surrounds it.

    5. Communicate proactively and honestly

    When AI stories dominate the news cycle, address them directly with your staff. Name your values, explain your approach, and be honest about what you can and cannot promise.

    6. Build staff into the process

    Staff who understand the rationale for AI adoption and have input into how it happens are far more likely to become capable users and advocates than those who have it imposed on them.

    Conclusion

    The Block story is significant. Jack Dorsey's decision to cut nearly half his company's workforce citing AI capabilities is not a stunt or an outlier. It is an early and very visible data point in a pattern that will become more common as AI continues to advance. Nonprofit leaders are right to pay attention.

    But paying attention does not mean copying the playbook. What happened at Block reflects the values, accountability structures, and nature of work specific to a for-profit fintech company. Nonprofits operate under a fundamentally different set of conditions, where mission accountability, community trust, relational service delivery, and ethical responsibility to staff all create constraints and opportunities that the Block model simply does not account for.

    The right lesson for nonprofits from this moment is not "AI means we can reduce headcount." It is "AI is advancing rapidly, and we need to be deliberately and ambitiously using it to free our people from administrative burden so they can do more of the work that only humans can do." That is a meaningfully different frame, and it leads to meaningfully different decisions.

    Organizations that build AI capacity thoughtfully right now, starting with the administrative and operational areas where the gains are clearest, will be better positioned to serve their communities, support their staff, and sustain their missions through whatever comes next. The window to build those foundations intentionally and on your own terms is open. The question is whether you will use it.

    Build Your AI Strategy on Mission, Not Headlines

    We help nonprofit leaders develop AI strategies grounded in their values, their communities, and the real nature of mission-driven work. Not Silicon Valley frameworks applied wholesale, but approaches built for how nonprofits actually operate.