    Security & Compliance

    How to Evaluate AI Vendor Security Claims: A Non-Technical Guide for Nonprofit Leaders

    AI vendors routinely pitch compliance certifications, encryption standards, and data protection frameworks to win your business. This guide demystifies what those claims actually mean, how to verify them, and what questions to ask before you sign anything.

    Published: March 26, 2026 · 18 min read

    A vendor walks into your conference room, or more likely joins a video call, and within the first five minutes they have mentioned SOC 2 compliance, end-to-end encryption, HIPAA readiness, and a "zero-trust architecture." They use these terms with confidence, and the implication is clear: their platform is completely secure, and you have nothing to worry about.

    The reality is considerably more complicated. In 2025, 99 percent of organizations experienced at least one security incident tied to their SaaS or AI ecosystem, despite widespread vendor claims of comprehensive protection. That statistic does not reflect a failure of sophisticated enterprise security teams. It reflects a fundamental gap between what vendors say about their security posture and what actually happens when their systems are tested by real-world conditions.

    For nonprofit leaders, this gap creates a genuine dilemma. Your organization likely holds sensitive data about donors, beneficiaries, clients, volunteers, and partners. You may operate under legal obligations tied to health information, student records, or financial data. You almost certainly have a board and funders who expect you to be responsible stewards of that information. But you probably do not have a full-time security officer, a legal team fluent in data protection law, or unlimited time to wade through dense compliance documentation.

    This guide is designed to close that gap. It explains what common security certifications actually mean, how to verify vendor claims without being a technical expert, what red flags to watch for during the sales process, and what questions to ask before you commit to any AI platform. You do not need a computer science degree to become a smarter buyer. You need a framework, the right vocabulary, and a willingness to push back when answers feel incomplete.

    What Security Certifications Actually Mean

    Before you can evaluate a vendor's claims, you need to understand what the most common certifications actually guarantee, and what they do not. These frameworks were not designed with AI in mind, so they carry meaningful limitations in this context.

    SOC 2: The Gold Standard That Isn't

    What the report says, and what it leaves out

    SOC 2, short for System and Organization Controls 2, is an audit framework developed by the American Institute of Certified Public Accountants (AICPA). It evaluates a vendor against five Trust Services Criteria: security, availability, processing integrity, confidentiality, and privacy. A vendor can be audited on all five or on just the security criterion, which is the only mandatory one.

    There are two types of SOC 2 reports. A Type I report is essentially a snapshot in time. It says "as of this date, the vendor had these controls in place." A Type II report covers an observation period, typically six to twelve months, and confirms that those controls were actually operating as described throughout that period. When a vendor mentions SOC 2, always ask which type they have. A Type I report is significantly weaker evidence of ongoing security than a Type II.

    Critically, SOC 2 does not establish a universal security standard. Vendors define their own control objectives, and auditors verify whether the vendor met its own stated objectives. Two vendors can both be SOC 2 compliant while having dramatically different security practices. A vendor with minimal stated controls that consistently meets them will pass the same audit as a vendor with rigorous controls.

    • Always request a Type II report, not just an attestation letter
    • Check the report's issue date. Reports older than 12 months without a bridge letter are considered stale
    • Verify the auditing firm is an independent CPA firm, not affiliated with the vendor
    • Ask to review the report under NDA. Vendors who refuse should be treated with skepticism
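    If your organization tracks several vendors, the 12-month staleness rule above is simple enough to automate in a spreadsheet or a few lines of code. Here is a minimal sketch in Python; the function name, labels, and 365-day threshold are our own illustrative choices, not part of any audit standard:

```python
from datetime import date

def soc2_report_status(audit_period_end: date, today: date,
                       has_bridge_letter: bool = False) -> str:
    """Classify a SOC 2 Type II report using the 12-month rule of thumb:
    reports older than a year need a bridge letter or a fresh audit."""
    age_days = (today - audit_period_end).days
    if age_days <= 365:
        return "current"
    if has_bridge_letter:
        return "bridged"   # acceptable, but ask when the next audit lands
    return "stale"         # request a fresh report before relying on it
```

    For example, a report whose audit period ended in mid-2024, with no bridge letter, comes back "stale" by early 2026, which is exactly the conversation to have with the vendor.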

    ISO 27001: International but Not Comprehensive

    A more rigorous framework with its own limitations

    ISO 27001 is an international standard for information security management systems. Unlike SOC 2, ISO 27001 certifications are issued by accredited third-party certification bodies, and certificates can typically be verified through the certification body's public directory or an accreditation database. This makes it somewhat easier to verify than SOC 2.

    ISO 27001 covers a broad set of controls around information security governance, risk management, access control, incident management, and business continuity. It requires organizations to establish, implement, maintain, and continually improve a formal Information Security Management System. The ongoing nature of the requirement is a genuine strength. Companies cannot simply pass an initial audit and forget about security.

    That said, ISO 27001 was not designed with AI-specific risks in mind. It does not address prompt injection attacks, AI model poisoning, or the specific risks of large language models processing sensitive queries. Some vendors now combine ISO 27001 with the newer ISO 42001 standard for AI management systems. If a vendor mentions ISO 27001 specifically in the context of their AI platform, it is worth asking whether they have also pursued ISO 42001.

    • You can verify ISO 27001 certification through accreditation body databases such as UKAS, ANAB, or the certification body's own directory
    • Check the certificate's scope. It may only cover specific products or office locations, not the full platform you are evaluating
    • Ask whether the certificate covers the specific AI product or service you plan to use

    HIPAA Compliance: A Relationship, Not a Certificate

    Why "HIPAA compliant" is not the same as "HIPAA safe"

    If your nonprofit handles protected health information, whether because you run a health clinic, provide social services with health components, or work as a business associate to a healthcare provider, HIPAA compliance from your AI vendors is not optional. But the phrase "HIPAA compliant" is one of the most misused in the industry.

    Unlike SOC 2 or ISO 27001, HIPAA has no certifying body and no formal third-party verification process. When a vendor says they are "HIPAA compliant," they are self-declaring. The critical legal mechanism is the Business Associate Agreement, or BAA. Any vendor that handles protected health information on your behalf is legally required under HIPAA to sign a BAA with you. If a vendor declines to sign a BAA, or makes signing one difficult, that is a significant warning sign.

    The January 2025 proposed update to the HIPAA Security Rule, the most significant revision in 20 years, removes the distinction between required and addressable safeguards. For nonprofits deploying AI in healthcare contexts, this means stricter expectations around encryption, access controls, and incident response, even from vendors you hire to process data on your behalf.

    A BAA is necessary but not sufficient. The agreement must clearly outline permitted uses and disclosures of protected health information, require that any subcontractors the vendor uses meet the same obligations, and define each party's responsibilities for preventing unauthorized access. Ask your vendor to walk you through what their BAA actually covers before signing.

    • Require a signed BAA before sharing any protected health information
    • Verify that the BAA covers all subprocessors, not just the primary vendor
    • Ask how the vendor de-identifies health data before using it for any AI model training or fine-tuning

    Encryption Claims: What "Encrypted" Really Means

    The difference between encrypted storage and true data protection

    Nearly every vendor will tell you their platform uses encryption. This claim is almost always technically true and almost always incomplete. There are meaningfully different kinds of encryption, and knowing which applies to your situation matters.

    Encryption at rest means data is encrypted when stored on the vendor's servers. Encryption in transit means data is encrypted as it moves between your browser or application and the vendor's servers. Both are baseline expectations in 2026, not differentiating security features. A vendor who leads with "we encrypt your data" without further specifics is telling you the minimum, not the whole story.

    The more important question is who holds the encryption keys. If the vendor manages the keys, they can technically access your data. If your organization manages the keys, the vendor cannot read your data even if their systems are compromised. Customer-managed keys or bring-your-own-key arrangements offer stronger protection but are typically only available in enterprise tiers and require some technical setup.

    • Ask whether the vendor uses AES-256 encryption at rest and TLS 1.2 or higher in transit
    • Ask who manages the encryption keys and whether customer-managed key options are available
    • Understand that data may be decrypted during AI inference, meaning while the model is processing your query
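    If someone on your team, or a technical volunteer, wants to sanity-check the "TLS 1.2 or higher" claim from your own side, Python's standard ssl module can build a client configuration that simply refuses older protocols. This is a small sketch of the idea, not a substitute for the vendor's own documentation:

```python
import ssl

def strict_client_context() -> ssl.SSLContext:
    """Build a TLS client context that refuses anything below TLS 1.2,
    mirroring the in-transit baseline discussed above."""
    ctx = ssl.create_default_context()            # certificate checks on by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0 and 1.1
    return ctx
```

    Wrapping a connection to a vendor endpoint with this context will fail the handshake if the vendor only offers outdated protocols, which is itself a useful data point in your evaluation.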

    Red Flags in Vendor Security Pitches

    The security claims that should raise your concern are not always obvious. Some of the most misleading language sounds professional and confident. Learning to recognize these patterns will make you a considerably more effective evaluator.

    Language and Certification Red Flags

    • Vague compliance language without documentation. Phrases like "bank-level security," "military-grade encryption," or "enterprise-grade protection" sound impressive but carry no specific technical meaning. Ask for the documentation behind any claim.
    • Offering only a compliance badge or attestation letter instead of the actual report. SOC 2 reports are typically shared under NDA with serious prospects. If a vendor offers only a one-page letter or a badge from their website rather than the full audit report, treat that as a meaningful gap in transparency.
    • Stale reports with no bridge letter. A SOC 2 report with an audit period ending more than twelve months ago, with no bridge letter confirming controls have remained in place, indicates the vendor may not be actively maintaining their compliance posture.
    • Claiming "AI compliance" without specifics. AI systems face unique security risks, including prompt injection, model inversion attacks, and training data leakage, that standard compliance frameworks do not address. Vendors who claim compliance without acknowledging these AI-specific risks may not be assessing them at all.
    • Resistance to answering specific questions. Genuine security maturity is accompanied by a willingness to answer detailed questions. A vendor who deflects, becomes vague, or escalates to legal review every time you ask a practical question is signaling something worth investigating further.
    • No mention of subprocessors. Most AI platforms use third-party services for infrastructure, model hosting, analytics, and support. If a vendor cannot identify their subprocessors or declines to disclose them, your data may be flowing to organizations you have never evaluated or approved.
    • Training data opacity. If a vendor cannot clearly answer whether your organization's data will be used to train their AI models, and under what conditions, that gap represents both a privacy risk and a potential competitive risk if your strategies or beneficiary information inform a shared model.

    The "AI Washing" Problem

    A related phenomenon that has accelerated since 2024 is what regulators and analysts call "AI washing": vendors overstating or misrepresenting the role of AI in their products, a practice that increasingly extends to security claims. A vendor might claim their platform uses "AI-powered threat detection" when the underlying mechanism is a basic rules engine with an AI-sounding marketing layer on top.

    The Federal Trade Commission has taken enforcement action against companies making inflated AI claims. For nonprofits evaluating vendors, the practical implication is to treat any security feature described primarily in terms of AI with healthy skepticism unless the vendor can explain specifically how the AI component works and what it is actually detecting or preventing.

    Beyond marketing language, there is a documented gap between organizational perceptions of AI security and reality. Organizations consistently overestimate how well their AI governance controls are functioning, often by a factor of five to ten compared to independent assessments. When evaluating a vendor, assume that their internal confidence in their own security posture may be higher than what an independent assessment would find.

    Key Questions to Ask Every AI Vendor

    You do not need to be a security expert to ask good security questions. The following questions are designed to be accessible, direct, and revealing. A vendor with strong security practices will welcome them. A vendor with weak practices will often struggle to answer them clearly.

    About Certifications and Documentation

    • "Can you share your most recent SOC 2 Type II report under NDA? How old is the audit period?"
    • "What Trust Service Criteria does your SOC 2 cover? Is it just Security, or does it also include Confidentiality and Privacy?"
    • "Is your ISO 27001 certificate publicly verifiable? Does its scope include the specific product we would be using?"
    • "Have you pursued ISO 42001 for AI management systems?"
    • "Can you provide a summary of your most recent penetration test? How often do you conduct them, and who performs them?"

    About Your Data

    • "Will our data, including prompts, queries, and any uploaded documents, be used to train your AI models? Under what circumstances?"
    • "Where is our data stored and processed? In which countries or regions?"
    • "What is your data retention policy? How long do you keep our data after our contract ends?"
    • "When our contract ends, what happens to our data? What is your deletion process and timeline?"
    • "Can you provide a complete list of subprocessors who may handle our data? How are they vetted?"

    About Incidents and Breach Response

    • "Have you experienced any security incidents or data breaches in the past two years? What happened, and what did you change as a result?"
    • "How quickly would you notify us in the event of a breach affecting our data? What does your notification process look like?"
    • "Do you have a documented incident response plan? Can we review it or a summary of it?"
    • "What is your SLA for responding to and communicating about security incidents?"
    • "Does your contract include specific language about breach notification timelines and your liability in the event of a data breach?"

    About Access and Controls

    • "Who at your company can access our data? Under what circumstances?"
    • "Do you support multi-factor authentication for all user accounts on our side?"
    • "What role-based access controls are available to limit what different staff members can see or do within the platform?"
    • "Do you maintain audit logs of who accessed what data and when? Are those logs available to us?"

    Data Privacy and Sovereignty: Why Location Matters

    Data sovereignty refers to the principle that data is subject to the laws of the country or region where it is stored or processed. For nonprofits operating across borders, or serving communities whose data is protected by specific legal frameworks, this is more than an abstract concern.

    A vendor might store your data in Europe while being a US-based company subject to US government data requests under laws like the CLOUD Act. Your organization might process data about EU citizens that is subject to GDPR requirements regardless of where your offices are. If your AI platform processes personal data from EU citizens and transfers it to a US-based model provider without appropriate safeguards, your organization bears the regulatory consequences, not the vendor.

    The subprocessor question is especially important for AI platforms. Many AI vendors use third-party model providers, cloud infrastructure, and specialized analytics services. Your data may travel through several organizations' systems before a response reaches your screen. Each of those organizations is governed by its own security practices, legal obligations, and data residency policies.

    A contractual "do not train on our data" clause is one protective measure, but it is important to understand that some jurisdictions do not recognize these clauses as fully enforceable. More reliable protection comes from technical controls, such as data isolation at the infrastructure level, combined with contractual obligations that specify remedies if the clause is violated.

    Data Sovereignty Questions for Vendors

    • "In which countries are your servers located? Where is data actually processed during AI inference?"
    • "What legal jurisdiction governs your data handling obligations, and does it align with our regulatory requirements?"
    • "Can you offer data residency options that keep our data within a specific region?"
    • "Does your contract explicitly prohibit using our data for AI model training, and what technical measures enforce that prohibition?"
    • "What is your process for responding to government or law enforcement requests for our data?"

    What to Look for in Security Documentation and Contracts

    You do not need to read every line of a vendor's security documentation to identify the most important elements. Focus on these areas when reviewing any documentation you receive.

    Five Contract Clauses You Cannot Skip

    Based on current best practice guidance for AI vendor agreements

    According to current guidance from legal experts working in this space, five contractual provisions represent the minimum viable set of protections for any AI vendor relationship that involves your organization's data.

    • Data Use Limitation. Specifies precisely how the vendor may use your data. Does it restrict use to service delivery only? Does it prohibit using your data to train shared or general-purpose AI models? Any ambiguity here should be resolved in writing before you sign.
    • Data Isolation. Clarifies whether your data is logically or physically separated from other customers' data. Shared infrastructure is common and not inherently problematic, but the contract should specify what isolation measures exist.
    • Subprocessor Transparency. Requires the vendor to disclose all subprocessors who may access your data and to notify you when they add new ones. Without this, your data could flow to any number of third parties without your knowledge.
    • Data Portability and Deletion. Guarantees that at contract end, you can export your data in a usable format and that the vendor will delete all copies, including from backup systems, within a defined timeframe. Confirm this explicitly, as many vendor defaults treat backups separately.
    • Breach Notification. Sets a specific, legally binding timeframe within which the vendor must notify you of a data breach. Industry standard is 72 hours for GDPR-regulated data. For any platform handling sensitive beneficiary information, 24 to 72 hours is a reasonable baseline to negotiate.
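    The breach notification clause is the easiest of the five to make concrete. As an illustration of what a 72-hour window actually commits the vendor to (purely a sketch for intuition; your counsel drafts the real contract language):

```python
from datetime import datetime, timedelta

def breach_notice_due(detected_at: datetime, window_hours: int = 72) -> datetime:
    """Latest moment a vendor may notify you under a contractual
    notification window; 72 hours mirrors the common GDPR baseline."""
    return detected_at + timedelta(hours=window_hours)
```

    In other words, a breach detected on a Monday at 9:00 a.m. must be reported to you no later than Thursday at 9:00 a.m. under a 72-hour clause; anything vaguer than this in the contract ("without undue delay," "promptly") is worth tightening.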

    What to Look for in a Security Policy Document

    Most vendors publish some version of a security policy, security whitepaper, or trust center page. When reviewing these documents, look beyond the marketing layer to find evidence of operational practices.

    • Specific mention of vulnerability management processes and timelines for patching critical vulnerabilities
    • Evidence of a formal incident response plan, not just a promise to notify you if something goes wrong
    • Description of employee security training and background check requirements, especially for staff who may access customer data
    • Information about business continuity and disaster recovery plans
    • Details about access control procedures, including how vendor employees' access to customer data is restricted and monitored

    Evaluating Incident Response and Breach Notification Policies

    How a vendor responds when something goes wrong is often more revealing of their security culture than anything they say when everything is going well. Nonprofit donor data breaches cost the sector more than $49.5 million in settlements in 2023 alone, with state-level enforcement actions continuing through 2025 and 2026. The financial consequences of a breach extend well beyond the breach itself: reputational damage, donor attrition, and regulatory investigations can persist for years.

    When evaluating a vendor's incident response posture, start by asking whether they have a documented incident response plan at all. This should not be a controversial question, and a vendor who becomes defensive or vague in response to it is telling you something important.

    Ask specifically about their communication process during an incident. Who will contact you? How quickly? Through what channel? Many nonprofits have learned the hard way that a vendor's generic support email or a buried dashboard notification is not an adequate way to learn that their beneficiary data may have been compromised.

    The best vendors will offer a dedicated security contact or account-level escalation path rather than routing security incidents through the same queue as billing questions. If the vendor you are evaluating cannot clearly explain their incident communication hierarchy, consider that a gap worth negotiating before you sign.

    Incident Response Evaluation Checklist

    • Vendor has a documented incident response plan they can share in summary form
    • Contract specifies breach notification timeline (72 hours or less is reasonable for most nonprofits)
    • A named security contact or escalation path exists specifically for your account
    • Vendor can describe what a breach notification to you would actually contain (scope, nature of data affected, recommended actions)
    • Vendor has disclosed past incidents openly and can describe what they changed as a result
    • Contract includes vendor liability provisions for breaches caused by their negligence
    • Vendor's breach notification procedures align with the regulatory requirements governing your data (HIPAA, GDPR, state breach notification laws)
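    Checklists like this one are most useful when the answers are recorded rather than just discussed in a meeting. A hypothetical tally, where every item names and thresholds are our own illustrative choices:

```python
# Yes/no items mirroring the checklist above; the keys are illustrative.
IR_CHECKLIST = [
    "documented_ir_plan",
    "contractual_notification_timeline",
    "named_security_contact",
    "notice_contents_described",
    "past_incidents_disclosed",
    "liability_provisions",
    "regulatory_alignment",
]

def evaluate_vendor(answers: dict) -> str:
    """Summarize checklist answers: every missing item is a negotiation point."""
    missing = [item for item in IR_CHECKLIST if not answers.get(item, False)]
    if not missing:
        return "ready to sign"
    if len(missing) <= 2:
        return "negotiate: " + ", ".join(missing)
    return "high risk: " + ", ".join(missing)
```

    The point is not the code but the discipline: a vendor missing one or two items is a negotiation, while a vendor missing most of the list is a risk decision your board should see in writing.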

    Common Gaps Between Marketing Claims and Actual Security Practices

    Understanding where vendor marketing most commonly diverges from operational reality will help you ask sharper questions and interpret answers more accurately.

    The Compliance-Security Conflation

    The most pervasive gap is between compliance and security. These are genuinely different things. Compliance means meeting a defined set of requirements. Security means effectively protecting data and systems against evolving threats. A company can be fully compliant on paper while having significant security vulnerabilities in practice.

    This gap is not hypothetical. Many of the major data breaches of the past several years have occurred at organizations that held valid SOC 2 or ISO 27001 certifications. The certifications described what controls were in place at audit time. They did not predict how those controls would perform against novel attack methods.

    When a vendor leads with compliance certifications, follow up by asking about their security culture: How often do they conduct unplanned security reviews? Do they have a bug bounty program? How quickly did they patch the most recent major software vulnerability that affected their infrastructure? These questions probe operational security rather than documented compliance.

    Certification Scope Misrepresentation

    Both SOC 2 and ISO 27001 apply to defined scopes. A vendor might hold a valid ISO 27001 certificate that covers their corporate IT infrastructure but not the specific cloud platform you would be using. They might have a SOC 2 report that covers one product line but not the AI feature you are evaluating.

    Always ask specifically whether the certification covers the product or service you will be using, not just the organization generally. Request documentation of the certification scope, which should be included in the actual report or certificate.

    The Subprocessor Blind Spot

    When you sign a contract with an AI vendor, you are implicitly also entering into a security relationship with every subprocessor they use: the cloud provider hosting the infrastructure, the AI model provider, the analytics platform, the customer support system. Most vendor security pitches focus entirely on the primary vendor's controls while saying little or nothing about subprocessor oversight.

    Request a complete and current list of subprocessors before signing. Understand which of those subprocessors may have access to your data, in what form, and under what contractual obligations. Ask whether the vendor conducts security reviews of their subprocessors, and on what schedule.

    This is especially important for AI platforms, where the model itself may be provided by a third party such as a major foundation model company. The vendor's SOC 2 compliance does not extend to their AI model provider's handling of your data unless that provider is explicitly covered in the vendor's compliance scope and agreements.

    AI-Specific Security Gaps

    Traditional security frameworks were designed for conventional software systems, and they do not adequately address the unique risks of AI platforms. These risks include prompt injection attacks, where malicious input manipulates the AI to take unintended actions; training data leakage, where the model can be coaxed into revealing information from its training data; model inversion, where an attacker can reconstruct sensitive training data from a model's outputs; and adversarial inputs, where carefully crafted data can cause the AI to make incorrect decisions.

    Vendors with genuinely mature AI security practices will be able to speak to these risks specifically. They will describe the adversarial testing they conduct, the input validation measures they have implemented, and the output filtering that prevents data leakage. Vendors who respond to AI-specific security questions by redirecting to their general compliance documentation may not have adequately addressed these risks.

    A Practical Evaluation Framework for Nonprofit Leaders

    Rather than approaching vendor security as a checklist to complete once and file away, treat it as an ongoing relationship with defined expectations. Here is how to structure that evaluation process practically.

    Before You Begin the Sales Process

    The most effective security evaluation starts before you talk to any vendor. Spend time clarifying what data the AI tool will access, what regulatory frameworks govern that data, and what your organization's specific risk tolerance is. Involve your board and leadership in that conversation, since security decisions made during procurement can have significant governance implications.

    If your organization handles health information, student records, financial data, or personal information about vulnerable populations, document the specific compliance requirements that any vendor must meet before the sales conversation begins. This prevents vendors from shaping those requirements through their sales narrative.

    During Vendor Evaluation

    Send security questions in writing before your first substantive meeting and ask for written responses. Verbal assurances during a sales call are meaningless. Written responses become part of your evaluation record and create accountability.

    Request the actual SOC 2 report, not a summary. Request the actual certificate with scope documentation if ISO 27001 is claimed. Ask for a penetration test summary. If the vendor's security team is not available to participate in a thirty-minute call to answer questions, that is itself a data point about their security posture and organizational priorities.

    For higher-stakes tools, such as platforms that will handle beneficiary health information or donor financial data, consider engaging a consultant or volunteer with security expertise to review documentation. This does not require a full security audit; even a single experienced reviewer can identify significant gaps in a vendor's documentation.

    During Contract Negotiation

    Security and data handling provisions are often buried in vendor master service agreements or data processing addenda written by the vendor's legal team with the vendor's interests in mind. Review these documents specifically for the five critical clauses described earlier: data use limitation, data isolation, subprocessor transparency, data portability and deletion, and breach notification timelines.

    Most vendors will negotiate reasonable security provisions if you ask. What they will not do, absent your initiative, is volunteer protections they are not contractually required to offer. The act of asking for specific security provisions signals that your organization takes these matters seriously and establishes a baseline for the relationship going forward.

    After You Sign

    Security evaluation should not end when the contract is signed. Schedule an annual review of the vendor's security posture, request updated SOC 2 reports or ISO 27001 certificates each year, and ask to be notified proactively when the vendor changes subprocessors or makes significant changes to their data handling practices.

    Subscribe to the vendor's security notifications or status page if they offer one. When high-profile vulnerabilities affect major software components that your vendor likely uses, such as when a major cloud library is found to have a critical flaw, proactively ask your vendor how they have responded. Their speed and transparency in those moments reveal more about their security culture than any certification.

    Related reading: For guidance on building your organization's broader AI governance structure, see our article on AI risk registers for nonprofit boards, and for help developing organization-wide AI policies, see our guide to creating an AI policy for your nonprofit.

    The Security Conversation Is Never Finished

    Evaluating AI vendor security is not a one-time task you complete before signing a contract. It is an ongoing responsibility that reflects your organization's commitment to the people whose data you hold. Every piece of information about a donor, a beneficiary, a volunteer, or a client represents a real person's trust in your organization's stewardship. The vendors you choose are extensions of that stewardship, for better or worse.

    The practical reality is that security is imperfect across the industry. No vendor can guarantee zero incidents. What you can evaluate, and what matters most, is whether a vendor takes security seriously enough to be transparent about their practices, to invest in continuous improvement, and to respond quickly and honestly when things go wrong.

    The questions in this guide are not designed to disqualify vendors for imperfection. They are designed to distinguish vendors who have genuinely invested in security from those who have invested primarily in marketing language about security. That distinction is worth making carefully, because the cost of getting it wrong falls not on the vendor but on the communities your organization exists to serve.

    For further reading on protecting your organization's data while adopting AI tools, explore our articles on data privacy risk assessments for nonprofits and implementing zero trust security on a nonprofit budget.

    Ready to Evaluate Your AI Vendors More Confidently?

    One Hundred Nights works with nonprofit leaders to build AI adoption strategies that account for security, compliance, and the specific data responsibilities your mission creates. Let us help you ask the right questions.