
    Employment Law and AI: What Nonprofit HR Teams Need to Know About Automated Hiring

    Nonprofit organizations using AI to screen resumes, score candidates, or assist with hiring decisions are now navigating a rapidly expanding body of employment law. From Illinois to California to New York City, new regulations are creating compliance obligations that most HR teams have not yet fully grasped, and the stakes for getting this wrong include discrimination claims, civil penalties, and reputational damage.

    Published: April 11, 2026 · 13 min read · Legal & Compliance
    Disclaimer: This article is for informational purposes only and does not constitute legal advice. Consult a qualified attorney for guidance specific to your organization's situation.

    Somewhere in your HR software stack, AI is probably already playing a role in how your organization finds, screens, and evaluates candidates. It might be the applicant tracking system that filters resumes before a human ever sees them. It might be the scheduling tool that determines which candidates get interview slots. It might be a video interview platform that scores candidates on vocal patterns and word choice. Or it might be something as simple as ChatGPT helping a manager draft interview questions.

    For years, these tools operated in a relative regulatory vacuum. Employers could use them without disclosing their existence to candidates, without auditing them for bias, and without any specific legal framework governing how they affected employment decisions. That era is ending. In 2026, employers in multiple U.S. jurisdictions face specific legal obligations around AI in hiring, and more states are expected to follow throughout the year.

    Nonprofits have particular reasons to pay attention to this shift. The sector has historically positioned itself as a values-driven employer committed to equity and inclusion. An organization that discovers its AI hiring tools have been systematically disadvantaging candidates of color, candidates with disabilities, or candidates from particular socioeconomic backgrounds faces not just legal exposure but a fundamental contradiction with its stated values. The reputational damage from such a finding can be severe, particularly for organizations whose missions directly relate to equity and social justice.

    This article walks through the current legal landscape governing AI in hiring, explains the specific compliance obligations that apply to many nonprofits, describes the risks of algorithmic bias and how it occurs, and offers a practical framework for responsible AI use in nonprofit hiring. Like any area of evolving law, the specific requirements in your jurisdiction should be confirmed with qualified legal counsel.

    The Rapidly Expanding Legal Landscape for AI in Hiring

    As of early 2026, the regulatory environment for AI in employment is a patchwork of state and local laws, with federal action expected but not yet enacted. Understanding which laws apply to your organization requires knowing where your employees and applicants are located, not just where your organization is headquartered.

    Illinois HB 3773 (Effective January 1, 2026)

    Illinois has enacted one of the most comprehensive state-level AI hiring laws in the country. Under HB 3773, employers cannot use AI in ways that result in bias against protected classes under the Illinois Human Rights Act, whether that bias is intentional or not. The law applies to the full employment lifecycle, covering hiring, promotions, and other employment decisions.

    • Employers must notify candidates and employees when AI is used in employment decisions
    • Covers any AI that makes or influences significant employment decisions, including resume screeners, interview scoring, and scheduling tools
    • Disparate impact (unintentional discrimination affecting protected groups) is explicitly addressed, not just intentional discrimination
    • Applies to all employers hiring in Illinois regardless of organizational size or nonprofit status

    California Fair Employment and Housing Act Updates (Effective October 2025)

    California amended its Fair Employment and Housing Act to specifically regulate Automated Decision Systems used in employment. California's approach prohibits employers from using automated systems to engage in discriminatory hiring or employment practices, and applies to systems that assist in making consequential employment decisions.

    • Covers any automated system that meaningfully influences hiring, firing, or promotion decisions
    • Discrimination through automated systems is explicitly actionable under state civil rights law
    • Employers cannot escape liability by arguing they simply used a tool provided by a third party
    • Applies to employers with operations or applicants in California regardless of organizational headquarters

    Colorado SB 24-205 (Effective June 30, 2026)

    Colorado's law is among the most expansive state AI regulations, covering "high-risk" AI systems that make or substantially influence consequential decisions in employment contexts. The law imposes specific obligations on both AI developers and the employers who deploy their tools.

    • Employers using high-risk AI must conduct impact assessments and document results
    • Candidates adversely affected by AI decisions must be notified and offered an opportunity to appeal
    • Vendors must provide documentation of how their AI systems work and what safeguards are in place
    • Takes effect June 30, 2026, giving Colorado-based nonprofits a short window to achieve compliance

    New York City Local Law 144

    New York City was among the first jurisdictions to regulate automated employment decision tools, and its law has been in effect long enough to generate real compliance experience. NYC's approach is particularly notable for its emphasis on independent auditing and public disclosure requirements.

    • Requires annual independent bias audits for any automated employment decision tool used in hiring or promotion
    • Audit summaries must be posted publicly on the employer's website
    • Employers must notify candidates at least 10 business days before using an automated decision tool
    • Notices must include instructions for how candidates can request an alternative selection process or a reasonable accommodation

    Several additional states have legislation pending or recently enacted that will further expand these requirements. The overall trajectory is clear: employers will increasingly need to be able to document what AI tools they use in hiring, how those tools work, whether they've been audited for bias, and what they've told candidates about AI's role in the process. Nonprofits that hire in multiple states face the additional complexity of managing compliance across different jurisdictional frameworks.

    Federal action is anticipated but uncertain. Both the EEOC and the Department of Labor have issued guidance on algorithmic bias, and federal legislation has been proposed in multiple sessions. Many experts expect some form of federal standard by late 2026 or 2027, likely harmonizing some elements of the state patchwork. In the meantime, the most important step is understanding which existing state and local laws apply to your organization.

    How Algorithmic Bias Occurs in Hiring Tools

    Understanding why AI hiring tools can discriminate, even when no one intends them to, is essential for nonprofit HR leaders who want to use these tools responsibly. Algorithmic bias is not a technical glitch or an unusual edge case. It is a predictable consequence of how AI systems are built and what data they learn from.

    Training Data Bias

    AI hiring tools learn from historical data about who was hired, promoted, or rated highly. If that historical data reflects past discrimination or bias, the AI learns to replicate those patterns.

    Amazon's widely cited case is instructive: the company's AI resume screener was trained on ten years of resumes from a workforce that was predominantly male in technical roles. The system learned to penalize resumes that included the word "women's" (as in "women's chess club captain") and downgraded graduates of two all-women's colleges. The system was eventually discontinued because it could not be reliably corrected.

    Proxy Variable Discrimination

    AI systems often identify patterns between variables that appear neutral but correlate strongly with protected characteristics. These "proxy variables" can cause discrimination without any explicit reference to race, gender, disability, or other protected traits.

    Examples include: geographic location correlating with race; gaps in employment history correlating with disability or caregiving (often gendered); certain educational institutions correlating with socioeconomic background; and name or email address patterns correlating with national origin. An AI that learns to prefer candidates without these characteristics can systematically disadvantage protected groups.
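    To make the proxy mechanism concrete, here is a minimal sketch using entirely synthetic data (the groups, zip codes, and correlation strength are illustrative assumptions, not real figures). The screening rule never sees group membership, yet it produces sharply different selection rates because the zip code it relies on correlates with group:

```python
# Minimal sketch of proxy-variable discrimination, on synthetic data.
# Groups, zip codes, and the 80% correlation are illustrative assumptions.
import random

random.seed(42)

applicants = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Assumption: group A is concentrated in one zip code, group B in another.
    if group == "A":
        zip_code = "high" if random.random() < 0.8 else "low"
    else:
        zip_code = "low" if random.random() < 0.8 else "high"
    applicants.append({"group": group, "zip": zip_code})

def passes_screen(applicant):
    # A "neutral" rule that never looks at group membership.
    return applicant["zip"] == "high"

for group in ("A", "B"):
    members = [a for a in applicants if a["group"] == group]
    rate = sum(passes_screen(a) for a in members) / len(members)
    print(f"Group {group} selection rate: {rate:.1%}")
# Prints roughly 80% vs. 20%, with no explicit use of the protected trait.
```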

    Video Interview AI Problems

    Tools that score candidates on facial expressions, vocal characteristics, eye contact, or speaking patterns have generated substantial concern among researchers and regulators.

    These systems may disadvantage candidates with certain disabilities, candidates whose communication styles reflect cultural backgrounds different from those in the training data, or candidates with accents or dialects associated with particular racial or ethnic groups. Several major vendors have faced regulatory scrutiny and litigation over these tools. Nonprofits using video screening should scrutinize them carefully.

    Amplification Effects

    Even when individual bias in an AI system is small, automated processes apply it to every single candidate at scale. A slight disadvantage for candidates from certain backgrounds, applied to thousands of applications, produces large aggregate discrimination effects.

    This amplification problem means that biases too subtle to detect in any individual decision can produce statistically significant disparate impact across a hiring cohort. This is precisely why bias auditing at scale, looking at aggregate outcomes across protected groups, is so important and why regulators are increasingly requiring it.
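    A small synthetic simulation illustrates the scale effect. The score distribution, the 3-point penalty, and the cutoff below are illustrative assumptions, not real hiring data:

```python
# Sketch of the amplification effect: a small, consistent scoring penalty
# produces a large aggregate disparity once applied to every applicant.
import random

random.seed(0)
N = 20_000
CUTOFF = 60  # the screener advances candidates scoring above this

def screen_score(penalty=0.0):
    # Scores roughly normal around 55; one group carries a subtle penalty.
    return random.gauss(55, 10) - penalty

rate_a = sum(screen_score() > CUTOFF for _ in range(N)) / N
rate_b = sum(screen_score(penalty=3.0) > CUTOFF for _ in range(N)) / N

print(f"Group A pass rate: {rate_a:.1%}")             # ~31%
print(f"Group B pass rate: {rate_b:.1%}")             # ~21%
print(f"Impact ratio (B/A): {rate_b / rate_a:.2f}")   # ~0.69, below 0.80
```

    A 3-point penalty on a 100-point scale is invisible in any single decision, yet across the cohort it pushes the impact ratio well below the four-fifths threshold discussed later in this article.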

    For nonprofits committed to equitable hiring, the uncomfortable reality is that deploying AI hiring tools without bias auditing may actually undermine equity goals. An organization that believes it is using AI to remove human bias from hiring may in fact be introducing algorithmic bias at a scale and consistency that human biases rarely achieve. The intention to be fair does not guarantee a fair outcome when the tools themselves encode historical inequities.

    You Cannot Outsource Legal Responsibility to Your Vendor

    One of the most important principles in the emerging legal framework for AI hiring is that employers cannot escape liability by pointing to a vendor. If your applicant tracking system discriminates against candidates, the legal exposure falls on your organization as the employer, not on the software company. This principle is firmly established in both state law (particularly in California and Illinois) and in federal EEOC guidance on algorithmic bias.

    This accountability structure has significant practical implications. It means that vendor due diligence before signing a contract is not just a procurement best practice; it is a legal compliance necessity. Organizations need to ask vendors specific questions about how their tools work, what bias testing has been conducted, and what documentation of disparate impact testing results is available. Vendors that cannot or will not answer these questions represent a legal risk for the organizations that deploy their products.

    The nonprofit AI vendor evaluation process should include employment-specific questions for any HR tools. Key questions include: Has the tool been audited for bias according to the EEOC's four-fifths rule or other recognized methodologies? Will the vendor provide audit results? What is the vendor's contractual commitment if their tool is found to have discriminatory disparate impact? Does the vendor maintain transparency about what the tool uses to score candidates?

    Reviewing your existing contracts is equally important. Many vendors added AI functionality to existing HR tools through updates, sometimes without specific notification to customers. If your organization has been using an applicant tracking system, talent management platform, or HR analytics tool for several years, it's worth auditing what AI features may have been added and whether you have any contractual protections around bias auditing and liability for those features.

    Disclosure and Transparency Requirements

    Multiple state and local laws now require employers to notify job candidates when AI is being used to evaluate them. This disclosure obligation extends the principle that candidates should know how consequential decisions are being made about them. For nonprofits operating across multiple jurisdictions, developing a consistent disclosure practice that meets the most stringent applicable requirements is the most practical approach.

    Elements of an Effective AI Hiring Disclosure

    What candidates should be told when AI is used in your hiring process

    • What AI is being used: Identify the specific tool or function (e.g., "We use an automated resume screening system to evaluate candidate qualifications")
    • What factors the AI evaluates: Describe the characteristics or criteria the AI uses in its assessment, to the extent known
    • How the AI's output is used: Explain whether AI scores are definitive or advisory, and when humans review AI assessments
    • How to request an alternative: Where required by law (and as a matter of best practice), inform candidates how to request human review instead of AI assessment
    • Timing of disclosure: Under NYC law, disclosure must occur at least 10 business days before the AI tool is applied; early disclosure in the application process is better practice generally
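    As a sketch of how these elements might come together, the following hypothetical helper assembles a disclosure notice from tool metadata. The field names, wording, and contact address are assumptions to adapt with counsel, not compliance-approved language:

```python
# Hypothetical sketch: assembling a candidate disclosure notice from tool
# metadata. All wording here is illustrative; review with employment counsel.
from dataclasses import dataclass

@dataclass
class AIHiringTool:
    name: str         # e.g. "automated resume screening system"
    factors: str      # what the tool evaluates, to the extent known
    role: str         # "advisory" or "determinative"
    alternative: str  # how to request an alternative process

def disclosure_notice(tool: AIHiringTool) -> str:
    return (
        f"We use an {tool.name} as part of our hiring process. "
        f"It evaluates: {tool.factors}. "
        f"Its output is {tool.role}; a human reviews all final decisions. "
        f"To request an alternative selection process, {tool.alternative}."
    )

print(disclosure_notice(AIHiringTool(
    name="automated resume screening system",
    factors="stated qualifications, skills, and work history",
    role="advisory",
    alternative="email hr@example.org before your scheduled interview",
)))
```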

    Beyond legal compliance, there is a values argument for disclosure that resonates particularly strongly for mission-driven organizations. Candidates who discover that AI was used in ways they weren't told about, especially if they were not selected, may feel their dignity was not respected in the process. This is especially true for candidates from communities with legitimate concerns about algorithmic bias. Transparent disclosure is an expression of respect for candidates as full human beings deserving of honest information about how decisions affecting them are made.

    Transparency also connects to your organization's credibility as a values-aligned employer. If your nonprofit works on issues of equity, justice, or technology access, using AI hiring tools without disclosure and bias auditing creates a visible tension with your stated commitments. Donors, funders, board members, and prospective employees may reasonably question whether your organization practices what it preaches.

    Maintaining Meaningful Human Review

    One consistent principle across all major employment AI regulations is the importance of meaningful human oversight. Purely automated hiring decisions, where no human reviews individual cases, represent the highest legal and ethical risk. Organizations should ensure that AI tools assist human judgment rather than replace it in consequential employment decisions.

    Where Human Review Is Non-Negotiable

    • Final hiring decisions should always involve human judgment, never be made solely by AI
    • Rejection decisions, particularly for candidates who passed initial thresholds, warrant human review
    • Candidates must be told how to request an alternative selection process (required by NYC), and honoring such requests with genuine human review is best practice everywhere
    • Accommodation requests and disability-related modifications should always be handled by a human

    Making Human Review Meaningful

    • Train hiring managers to critically evaluate AI recommendations rather than defer to them automatically
    • Create structured processes for reviewing AI-rejected candidates before they are definitively excluded
    • Document the human decisions made when overriding or confirming AI assessments
    • Ensure reviewers understand enough about how the AI works to evaluate its outputs critically

    A common failure mode is what researchers call automation bias: the tendency for human reviewers to over-trust AI recommendations without genuine independent evaluation. If your hiring managers are consistently confirming AI scores without meaningful review, the "human in the loop" is providing only nominal oversight. Genuine human review requires that reviewers have the time, information, and organizational permission to disagree with the AI when the evidence warrants it.
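    One lightweight way to detect nominal oversight is to track how often reviewers actually override AI recommendations, using your applicant tracking system's audit log. The sketch below is a hypothetical illustration; the record format and the interpretation threshold are assumptions:

```python
# Sketch of one way to monitor for automation bias: measure how often human
# reviewers override AI recommendations. Records here are hypothetical.
decisions = [
    # (ai_recommendation, human_decision)
    ("advance", "advance"), ("reject", "reject"), ("reject", "advance"),
    ("advance", "advance"), ("reject", "reject"), ("advance", "reject"),
    # ...in practice, pulled from your ATS audit log
]

overrides = sum(ai != human for ai, human in decisions)
override_rate = overrides / len(decisions)
print(f"Human override rate: {override_rate:.1%}")

# An override rate near zero over many decisions suggests reviewers may be
# rubber-stamping AI outputs rather than exercising independent judgment.
```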

    Training is essential here. Staff who conduct human review of AI-assisted hiring decisions need to understand the basic mechanics of how the AI tool works, what kinds of bias it may have, and how to spot cases where its recommendations deserve scrutiny. This is part of the broader work of building internal AI literacy that enables your team to work with AI tools critically rather than passively.

    Bias Auditing: What It Is and Who Needs It

    Bias auditing is the process of systematically examining whether an AI hiring tool produces different outcomes for candidates from different protected groups. Under NYC Local Law 144, independent annual bias audits are legally required. Other jurisdictions are moving in this direction. But even absent legal requirements, bias auditing is a fundamental element of responsible AI use in hiring for any organization that cares about equitable outcomes.

    The most commonly used framework for bias auditing draws on the EEOC's "four-fifths rule," which holds that a selection rate for any protected group that is less than four-fifths (80%) of the rate for the group with the highest selection rate indicates adverse impact warranting further examination. Applied to AI hiring tools, this means comparing pass rates, interview invitation rates, or other selection metrics across racial, gender, and other protected groups to identify statistically significant disparities.
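    As a worked illustration of the four-fifths calculation, the sketch below uses made-up selection counts; a real audit would use your actual applicant flow data and appropriate statistical tests:

```python
# A minimal four-fifths rule check, following the framework described above.
# Selection counts are illustrative, not real audit data.
selections = {
    # group: (candidates selected, total candidates)
    "Group A": (48, 100),
    "Group B": (30, 100),
    "Group C": (44, 100),
}

rates = {g: sel / total for g, (sel, total) in selections.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "adverse impact?" if ratio < 0.8 else "ok"
    print(f"{group}: rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
# Group B's impact ratio (0.30 / 0.48 = 0.63) falls below 0.80,
# warranting further examination.
```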

    For most nonprofits, conducting formal bias audits requires either engaging a vendor that conducts such audits or working with an external consultant who specializes in employment equity analysis. The cost and complexity of auditing varies significantly depending on the scale of hiring and the sophistication of the tools being used. Nonprofits with modest hiring volumes using simple resume screening tools face a different situation than organizations with hundreds of annual hires using multiple AI-powered assessments.

    One practical starting point is to ask your current AI vendors whether they have conducted bias audits, and to request the results. Reputable vendors should be willing to share this information. If a vendor cannot provide documentation of bias testing, or if audit results reveal significant disparate impact, that is an important signal about whether to continue using the tool. The red flags in AI vendor relationships include reluctance to discuss bias auditing or claims that their tools are inherently neutral without evidence to support the claim.

    A Practical Compliance Framework for Nonprofit HR Teams

    The following framework is organized by priority, recognizing that most nonprofits have limited HR bandwidth. Start with the highest-priority steps and build from there.

    Phase 1: Understand What You're Using (Immediate)

    • Inventory all HR tools and platforms your organization currently uses, including any that have added AI features
    • For each tool, determine which functions involve AI and what employment decisions they influence
    • Identify which jurisdictions your hiring activity touches, including states where applicants or employees are located
    • Map which state/local AI hiring laws apply based on your geographic footprint
    • Consult with employment counsel to confirm specific compliance obligations
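    For teams that want to track this inventory in a structured way, here is one hypothetical sketch. The tool names, AI functions, and jurisdiction-to-law mapping are illustrative placeholders, not a complete legal analysis:

```python
# Hypothetical sketch of a simple AI tool inventory, per Phase 1.
# Tool names, functions, and jurisdiction mappings are placeholders.
inventory = [
    {
        "tool": "Example ATS",
        "ai_functions": ["resume screening", "candidate ranking"],
        "decisions_influenced": ["interview selection"],
        "jurisdictions": ["IL", "NYC"],
    },
    {
        "tool": "Example video interview platform",
        "ai_functions": ["interview scoring"],
        "decisions_influenced": ["advancement to final round"],
        "jurisdictions": ["CA", "CO"],
    },
]

# Assumed mapping of jurisdictions to the laws discussed in this article.
laws = {
    "IL": "HB 3773",
    "CA": "FEHA automated decision system rules",
    "CO": "SB 24-205",
    "NYC": "Local Law 144",
}

for entry in inventory:
    applicable = sorted({laws[j] for j in entry["jurisdictions"]})
    print(f"{entry['tool']}: review against {', '.join(applicable)}")
```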

    Phase 2: Address Immediate Compliance Gaps (30-90 Days)

    • Develop and implement candidate disclosure language for any AI tools you're required to disclose
    • Request bias audit documentation from all current AI HR vendors
    • Review vendor contracts for AI-specific provisions on liability, bias testing, and data rights
    • Establish a process for candidates to request human review of AI-assisted decisions
    • Train hiring managers on the AI tools being used and their limitations

    Phase 3: Build Ongoing Governance (3-12 Months)

    • Develop an internal AI-in-HR policy that sets standards for tool selection, use, and monitoring
    • Establish a process for reviewing hiring outcome data by protected group on a regular basis
    • Develop a protocol for adding new AI tools that includes bias assessment requirements before deployment
    • Include AI hiring tool review in any annual HR compliance audits
    • Stay current on regulatory developments through HR professional organizations and employment counsel

    Beyond Compliance: The Values Dimension for Nonprofits

    Employment law sets a floor for AI hiring practices, not a ceiling. For nonprofits with missions centered on equity, social justice, or community service, the question isn't just whether AI hiring tools are legally permissible, but whether they align with the organization's values and its obligations to the communities it serves.

    Many nonprofit organizations actively recruit from the communities they serve, both because it's the right thing to do and because it produces better outcomes. Staff who share the lived experiences of program participants bring insight, credibility, and relationships that can't be replicated through credentials alone. AI hiring tools trained on conventional professional credentials may systematically undervalue these candidates, precisely because their backgrounds don't fit the profiles of historically successful hires in more traditional organizations.

    This tension between conventional hiring metrics and community-centered hiring values is not unique to AI, but AI can amplify it dramatically. A human reviewer can exercise judgment about the value of community ties, lived experience, or unconventional career paths. An AI system optimizing for proxies of historical success may systematically screen out exactly the candidates your mission requires.

    The most thoughtful nonprofits are approaching AI in hiring not just as a compliance challenge but as a strategic question: where does AI genuinely help us find better candidates more efficiently, and where does it risk encoding biases that undermine our mission and values? The answer requires honest assessment of both the capabilities and the limitations of the specific tools you're considering, informed by the kind of equity-centered hiring practices your organization aspires to.

    This is also a board governance issue. Boards that oversee organizations with equity-centered missions should be asking leadership whether AI hiring tools have been evaluated for bias, whether they align with the organization's values, and how the organization plans to comply with applicable law. The board's role in AI oversight includes employment practices as much as program delivery or financial management.

    Acting Now, Not After a Problem Occurs

    The pattern in emerging technology regulation is consistent: requirements develop after harm becomes visible, but liability can attach to practices that predate formal requirements if they violate existing civil rights protections. EEOC guidance issued in 2022 and 2023 made clear that the ADA and Title VII apply to AI hiring tools just as they apply to human decision-making. The new state laws create additional requirements, but the core legal risk from discriminatory AI has existed throughout the period when these tools were proliferating.

    The good news is that the steps needed for responsible AI use in hiring are achievable for most nonprofits. They require attention, some legal guidance, and honest conversations with vendors, but they don't require abandoning AI tools that genuinely improve hiring efficiency. The goal is AI-assisted hiring that is both effective and equitable, transparent with candidates, and aligned with the values that make your organization worth working for.

    Organizations that get ahead of this issue, rather than waiting for a complaint or regulatory action, will be better positioned to attract the talent they need, maintain their reputation as equitable employers, and demonstrate to funders and communities that their commitment to equity extends to how they build their own teams. In the AI era, how you hire says something important about who you are.

    Navigate AI Compliance with Confidence

    One Hundred Nights helps nonprofit organizations build responsible AI practices that meet legal requirements and align with organizational values. From vendor evaluation to policy development, we guide you through the evolving landscape.