
    AI for Domestic Violence Shelters: Safety Planning, Resource Allocation, and Confidential Services

    Domestic violence organizations operate under unique constraints that make most AI guidance irrelevant or even dangerous. This article examines where AI genuinely helps, where it introduces unacceptable risk, and how DV shelters can implement technology that protects rather than endangers the people they serve.

    Published: April 14, 2026 · 14 min read · Sector-Specific Applications

    Most AI adoption guidance for nonprofits begins with the assumption that greater efficiency and automation are universally positive outcomes. For domestic violence organizations, that assumption breaks down immediately. A shelter that exposes its location through a cloud-connected AI tool, an advocate who trusts an algorithm's lethality assessment over professional judgment, or a survivor whose chat history is preserved in a vendor's data logs without their knowledge: these are not theoretical risks. They are the kinds of failures that kill people.

    That said, the DV sector is genuinely under-resourced in ways that AI can address responsibly. Organizations face devastating funding cuts, with federal grants to domestic violence programs reduced or frozen in 2026, leaving many shelters operating with skeleton staffing while demand increases. The administrative burden on advocates, which has been documented at levels high enough to drive burnout and turnover, is exactly the kind of problem AI tools can help solve without putting survivors at risk. The key is understanding which applications are safe and which are not.

    This article draws on guidance from the National Network to End Domestic Violence (NNEDV), research on AI applications in crisis services, and the experiences of organizations that have already begun integrating technology thoughtfully into their programs. The goal is to help DV organizations evaluate AI opportunities with a framework that centers survivor safety as a non-negotiable constraint, not an afterthought.

    The DV sector is in the early-to-middle stages of AI adoption, which means organizations making thoughtful decisions now will shape norms for the entire field. How your organization approaches AI (what data you protect, what tools you refuse, and what policies you put in writing) will influence not just the survivors you serve but potentially the standards other shelters follow.

    Why DV Organizations Face Unique AI Challenges

    Before evaluating any specific AI tool, DV organizations need to understand the ways their context differs from other nonprofits. These differences are not edge cases to be acknowledged and moved past; they fundamentally shape which AI applications are viable and which must be avoided entirely.

    Privacy in DV work is not just a compliance matter; it is a direct safety variable. Many survivors face abusers who will go to significant lengths to locate them, including filing civil suits, submitting public records requests, or hacking organizational systems. Shelter addresses are often held as strict secrets. Client records, including names, case notes, and communication logs, could endanger survivors if exposed. This means that any AI tool that stores data in third-party systems, logs conversations, or creates new data records requires a level of security scrutiny that most general-purpose AI tools cannot meet.

    The VAWA (Violence Against Women Act) confidentiality provisions legally protect victim information held by DV organizations receiving federal funding from civil and criminal process. However, these protections may not clearly extend to data stored in external AI vendor systems, creating a legal gray area that organizations must address proactively through contracts and data governance policies before any AI tool is deployed.

    Unique Risk Factors

    • Shelter locations are often confidential for physical safety reasons
    • Client records are protected under VAWA confidentiality provisions
    • Abusers may actively attempt to locate survivors through data systems
    • Survivors may avoid seeking help entirely if they cannot trust privacy
    • HMIS participation requires a separate comparable database for DV providers

    Organizational Context

    • Many DV organizations operate with skeleton staffing and limited IT capacity
    • Advocate burnout is severe; administrative burden reduction matters enormously
    • Federal funding cuts in 2026 have left many shelters in financial crisis
    • Complex grant reporting requirements with confidentiality constraints
    • Organizations serve survivors from diverse backgrounds requiring culturally responsive approaches

    AI for Administrative Work and Advocate Support

    The safest and often most valuable AI applications for DV organizations are those that reduce the administrative burden on advocates without touching client data at all. Advocacy staff in DV settings frequently report spending enormous portions of their time on documentation, reporting, and compliance work rather than direct service, and this administrative pressure is a documented contributor to burnout and turnover in the field.

    AI tools can help advocates draft case documentation more quickly by converting voice notes to structured text, suggest language for difficult correspondence, and help with grant reporting in ways that do not require exposing client information. When used strictly for organizational content that does not contain survivor data, general-purpose AI tools can dramatically reduce the time advocates spend on paperwork, freeing capacity for the work that requires human presence and judgment.
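    As one illustration of how voice-note conversion can respect that boundary, the sketch below assumes a locally run, open-source speech-to-text model (the `openai-whisper` Python package, which requires ffmpeg installed on the machine), so that audio and transcript never leave the advocate's device; the file name and function are hypothetical.

    ```python
    # Minimal sketch: local voice-note transcription, assuming the open-source
    # `openai-whisper` package (pip install openai-whisper) and ffmpeg.
    # Everything runs on-device; no audio or text is sent to a cloud service.
    import whisper

    def transcribe_voice_note(audio_path: str) -> str:
        model = whisper.load_model("base")     # small model; runs on CPU
        result = model.transcribe(audio_path)  # returns a dict with a "text" key
        return result["text"].strip()

    if __name__ == "__main__":
        # Hypothetical file name, for illustration only
        print(transcribe_voice_note("advocate_voice_note.m4a"))
    ```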

    Grant writing is a particularly high-value AI application for DV organizations. Many shelters operate with development staffing that is stretched thin, and the grant landscape for DV work is highly competitive. AI tools can help identify funding opportunities through platforms like Instrumentl, assist with application drafting, and help organizations structure the outcome data they can share within VAWA confidentiality constraints. The key restriction is that grant applications and supporting documentation should be generated from aggregate, de-identified data rather than specific client records or case files.
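    To make that restriction concrete, here is a minimal sketch of producing aggregate figures for a grant narrative with pandas. The CSV layout, column names, and the small-cell suppression threshold are illustrative assumptions, not a prescribed schema.

    ```python
    # Sketch: aggregate, de-identified service counts for grant reporting.
    # Column names and the suppression threshold below are hypothetical.
    import pandas as pd

    SAFE_COLUMNS = ["month", "service_type"]  # no names, notes, or locations

    def grant_summary(csv_path: str, min_cell: int = 10) -> pd.DataFrame:
        # Load only non-identifying columns; identifiers are never read in.
        df = pd.read_csv(csv_path, usecols=SAFE_COLUMNS)
        summary = (
            df.groupby(["month", "service_type"])
              .size()
              .reset_index(name="service_count")
        )
        # Suppress small cells that could indirectly identify an individual.
        return summary[summary["service_count"] >= min_cell]
    ```

    Only the resulting summary table, never the underlying export, would be shared with a drafting tool.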

    Low-Risk, High-Value AI Applications

    These uses do not involve client data and can generally be implemented with standard AI tools

    • Grant writing and prospect research: AI tools to identify funding opportunities, draft applications, and develop compelling narratives using aggregate program data
    • Staff training content development: Generating training materials, scenario libraries, and onboarding resources for new advocates
    • Policy and procedure documentation: Drafting, reviewing, and updating organizational policies with AI assistance
    • External communications: Social media content, donor communications, and community awareness materials that do not reference specific clients
    • Research summaries: Synthesizing sector research, best practices, and emerging DV trends to inform program development
    • Budget and financial analysis: Scenario planning, expense analysis, and financial modeling that does not include identifiable client information

    AI-Powered Hotlines and Crisis Support Tools

    One of the most significant recent developments in the DV sector is the deployment of AI-assisted chat tools to extend hotline capacity. The National Domestic Violence Hotline deployed "Ruth," an AI chat assistant that provides information and support when live advocates are unavailable, helping bridge wait times for survivors reaching out digitally. DomesticShelters.org launched Hope Chat AI in early 2025, trained on over 1,200 articles, videos, and webinars covering the full range of DV-related topics.

    These tools address a genuine and serious capacity problem. Webchat accounts for a growing share of hotline contacts, and wait times for live advocates can be lengthy during high-volume periods. An AI tool that can provide immediate, informed information about safety planning, shelter availability, and local resources while a survivor waits for a human advocate can meaningfully reduce the chance that someone gives up before getting help.

    The Sophia chatbot developed by Spring ACT and Microsoft represents perhaps the most privacy-conscious approach in the field. Sophia operates through WhatsApp, Telegram, and Viber, requires no app download, leaves minimal digital trace, stores files in encrypted Swiss servers, and operates in over 85 languages. Sophia won the UN Global AI for Good Impact Award in 2025 and has served more than 30,000 users in 143 countries. The decision to operate through existing messaging apps rather than a dedicated platform reflects a sophisticated understanding of survivor behavior: survivors will use channels they already know and trust rather than adopting new tools that could be discovered by an abuser.

    However, AI crisis tools carry real limitations that organizations must understand before deploying or recommending them. Research from the Social Science Research Council's Just Tech project has documented concerns that AI systems in crisis contexts may fail to recognize suicidal ideation, may provide generic safety information that does not fit a survivor's specific circumstances, and cannot exercise the professional judgment that a trained advocate brings to a complex, individual situation. These tools should be positioned as supplements to human advocacy capacity, not replacements for it.

    AI Crisis Tools in Practice

    • Ruth (National DV Hotline): Bridges wait times during high-volume periods with immediate information and support
    • Hope Chat AI (DomesticShelters.org): Trained on 1,200+ resources, created in collaboration with DV experts
    • Sophia (Spring ACT): Operates via messaging apps, no digital trace, 85+ languages, evidence vault capability
    • Aimee Says: Helps survivors recognize abuse patterns; chats are kept private, inaccessible even to the development team

    Critical Limitations to Communicate

    • AI cannot replace trained advocates for complex, individualized safety planning
    • AI systems may not recognize suicidal ideation or escalating danger signals
    • Generic safety advice may not fit a survivor's specific, highly individual situation
    • Chat logs may be preserved in vendor systems regardless of stated privacy policies

    Resource Allocation and Shelter Operations

    One of the most concrete operational problems AI can help solve for DV shelters is the coordination of bed availability and resource allocation. The Grove, a real-time national shelter availability platform, secured 12 months of free access for every DV, sexual assault, and human trafficking shelter in the country through a 2025 NNEDV partnership. This kind of tool, which allows advocates to see bed availability in real time without requiring survivors to disclose their location publicly, addresses a genuine coordination problem that has historically meant survivors were turned away from one shelter without knowing another had space available nearby.

    For shelters that use Salesforce-based CRMs, as is common in the human services sector, AI tools like Salesforce Einstein can help with tracking service delivery metrics, generating outcome reports for grant compliance, and analyzing patterns in service demand over time. The critical boundary for these applications is ensuring that the data feeding AI analysis has been de-identified and aggregated at the organizational level, not passed to AI tools in a way that preserves individual survivor records.

    Demand forecasting is another area where AI analysis can support better resource planning. By analyzing patterns in hotline call volume, shelter bed requests, and community referrals over time, organizations can anticipate seasonal fluctuations or event-driven spikes in demand and plan staffing accordingly. This kind of analysis works best when done at an aggregate level with historical data that has been stripped of individual identifiers, and can meaningfully help organizations make the case to funders for increased capacity during high-demand periods.
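    A minimal sketch of that kind of aggregate analysis follows, assuming a CSV of monthly, de-identified contact totals; the column names and the simple seasonal-average baseline are illustrative, and a production deployment might use a dedicated forecasting library instead.

    ```python
    # Sketch: seasonal baseline for hotline demand from de-identified monthly
    # totals. Column names ("month", "contact_count") are hypothetical.
    import pandas as pd

    def seasonal_baseline(csv_path: str) -> pd.Series:
        df = pd.read_csv(csv_path, parse_dates=["month"])
        df["month_of_year"] = df["month"].dt.month
        # Average volume per calendar month across all years on record;
        # peaks suggest when extra advocate coverage should be scheduled.
        return df.groupby("month_of_year")["contact_count"].mean()
    ```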

    Operational AI Applications with Appropriate Data Controls

    These applications require careful data governance but can be implemented safely

    • Real-time shelter availability coordination: Platforms like The Grove enable bed coordination without exposing survivor location or identity
    • Aggregate demand analysis: Identifying seasonal patterns and demand drivers using historical, de-identified data to improve capacity planning
    • Outcome reporting automation: Generating grant compliance reports from aggregate service data within VAWA confidentiality constraints
    • Referral network mapping: Analyzing community referral patterns to identify gaps in the local support ecosystem
    • Staff scheduling optimization: Using AI tools to optimize advocate schedules based on demand patterns without linking schedules to specific client data

    Technology-Facilitated Abuse: AI as a Threat, Not Just a Tool

    DV organizations in 2026 cannot limit their AI awareness to the tools they might deploy internally. AI is also reshaping the tactics abusers use to surveil, control, and harm survivors, and advocates need to understand this landscape to provide informed safety planning guidance.

    Stalkerware, software covertly installed on a device to track location, monitor communications, and record activity, remains the most pervasive form of technology-facilitated abuse. While regulatory pressure has reduced the most overt commercial stalkerware offerings, cheap Bluetooth trackers have expanded abusers' location monitoring capabilities significantly. The critical challenge with stalkerware detection is that removing it can alert the abuser and escalate danger. The SpyGuard tool and similar network monitoring approaches can detect stalkerware without triggering that alert, but detection must always be paired with a safety plan that accounts for the abuser's likely response.

    AI-generated deepfakes represent an escalating threat, particularly for survivors who have left an abuser or are engaged in custody or legal proceedings. Non-consensual intimate imagery, including AI-generated synthetic images, is now used for coercion, reputation destruction, and retaliation. UN Women has documented that the vast majority of deepfake content online targets women. The Take It Down Act, which became federal law in 2025, established criminal penalties for publishing non-consensual deepfakes, but enforcement is uneven and removal from platforms remains difficult. Advocates need to be prepared to help survivors document this form of abuse and understand their legal options.

    Organizations should also be aware that abusers increasingly use AI tools to monitor public communications about DV organizations, track news mentions that might reveal shelter locations, and, in some cases, attempt to extract information through deceptive interactions with AI-powered chat tools. Ensuring that AI tools deployed by your organization cannot be manipulated to reveal sensitive operational information is as important as protecting your client data.

    AI-Enabled Abuse Tactics

    • Stalkerware: Covert device monitoring tools, now supplemented by Bluetooth trackers
    • Deepfake intimate imagery: AI-generated non-consensual images used for coercion and retaliation
    • AI-assisted location tracking: Using publicly available data and AI analysis to locate survivors
    • Social media monitoring: AI tools scanning public platforms for mentions that might reveal survivor location

    Resources for Tech Safety Advocacy

    • NNEDV Safety Net Project: Sector guidance on AI and technology safety for DV organizations
    • CETA at Cornell: Free cybersecurity services for tech abuse survivors, distributed through DV shelters
    • Coalition Against Stalkerware: Resources for safe stalkerware detection and removal planning
    • Take It Down Act: Federal law creating criminal penalties for publishing non-consensual deepfakes

    Why AI Lethality Assessment Is Not Ready for DV Practice

    Perhaps the most important "do not" guidance for DV organizations in the AI era concerns algorithmic lethality and risk assessment. There is significant interest in using AI to predict dangerousness in intimate partner violence situations, and some jurisdictions have deployed algorithmic scoring tools for law enforcement purposes. DV organizations should understand why this application remains deeply problematic and resist pressure to adopt AI-based lethality assessment, regardless of how it is marketed.

    The evidence against current AI lethality tools is stark. Spain's Viogén algorithm, deployed by police to classify DV cases by risk level, classified 55 of 98 domestic violence-related homicide victims as "negligible or low risk" before they were killed. This is not a failure of one particular tool; it reflects structural limitations in how AI risk assessment works in contexts where the most dangerous situations are also the most unpredictable, the data is shaped by historical biases, and the consequences of misclassification are irreversible.

    Research published in the Berkeley Journal of Criminal Law further documents the problem of automation bias in high-stakes decision contexts: even when humans have explicit authority to override algorithmic recommendations, they defer to the algorithm approximately 95% of the time. In a lethality assessment context, this means a well-trained advocate with relevant information may be unconsciously overriding their professional judgment because a scoring tool indicates low risk. This failure mode can be fatal.

    The evidence-based alternatives, structured lethality assessment instruments developed collaboratively with DV researchers and practitioners such as the Maryland Lethality Assessment Program (LAP), have documented strong outcomes (approximately a 40% reduction in intimate partner homicides in jurisdictions using the Maryland LAP) precisely because they are designed to inform and support advocate judgment, not replace it. Organizations should continue using these validated instruments and advocate for their continued use over AI-based alternatives.

    High-Risk AI Applications to Avoid

    These applications introduce unacceptable risk and should not be implemented

    • AI lethality or dangerousness scoring: Algorithmic risk assessment for intimate partner violence is demonstrably unreliable and creates automation bias that can override advocate judgment
    • General-purpose AI for case notes containing client data: Systems like ChatGPT may retain conversation data in ways that violate VAWA confidentiality, regardless of stated privacy policies
    • Any cloud AI tool without explicit data retention policies: If a vendor cannot specify exactly how data is stored, retained, and deleted, it should not be used for any purpose touching client information
    • AI tools without coercive control training: General-purpose AI systems may produce dangerous recommendations in DV child welfare contexts by misidentifying the non-offending parent as the safety concern

    A Data Privacy Framework for DV AI Adoption

    Rather than evaluating AI tools one by one, DV organizations benefit from establishing a consistent data privacy framework that can be applied to any new tool or system. The NNEDV Safety Net Project has published guidance for victim service providers on AI adoption, and their framework emphasizes "mission-aligned decisions" and careful evaluation of data practices as a prerequisite for any AI tool deployment. Building this framework into your organizational decision-making process is more durable than relying on case-by-case judgment.

    The core principle for DV organizations is a clear data classification system that defines what information can and cannot be shared with external AI tools. Information about specific survivors, including names, case notes, correspondence, location data, and service histories, belongs in the highest protection tier. Aggregate program data (number of bed nights provided, hotline call volume, types of services requested) without individual identifiers can be used more broadly. Organizational administrative data (finances, grant documents, staff communications) falls in between and should be evaluated tool by tool based on sensitivity.
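    One way to make that classification operational is to encode it as policy-as-code that staff tooling can check before any data leaves the organization. The sketch below mirrors the three tiers described above; the helper function and its parameter are hypothetical.

    ```python
    # Sketch: the three-tier data classification as policy-as-code.
    # Tier definitions follow the framework above; the helper is illustrative.
    from enum import Enum

    class DataTier(Enum):
        SURVIVOR = "survivor"              # names, case notes, locations, histories
        ADMINISTRATIVE = "administrative"  # finances, grants, staff communications
        AGGREGATE = "aggregate"            # de-identified counts and totals

    def may_share_with_external_ai(tier: DataTier,
                                   tool_cleared_for_admin: bool = False) -> bool:
        if tier is DataTier.SURVIVOR:
            return False                   # highest protection tier: never shared
        if tier is DataTier.ADMINISTRATIVE:
            return tool_cleared_for_admin  # evaluated tool by tool
        return True                        # aggregate data may be used more broadly
    ```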

    Before deploying any AI tool, organizations should require vendors to provide clear, written answers to specific questions about data storage, retention, access controls, and how data is used for model training. If a vendor cannot answer these questions clearly or hedges on data practices, that is a disqualifying response for a DV organization. Many general-purpose AI tools, regardless of their stated privacy policies, may preserve conversation data in ways that could be accessible to third parties through legal process, a fact that NNEDV highlighted in 2025 when a court order in the New York Times v. OpenAI lawsuit required preservation of user output logs from OpenAI systems.

    Questions to Ask Any AI Vendor Before Adoption

    • Where is data stored and for how long? Require specific answers about data retention periods and deletion processes, not vague assurances
    • Is our data used to train AI models? Verify whether your organizational data contributes to model training that could expose information to other users
    • What happens to data in response to legal process? Understand whether vendor data could be subject to subpoena or court order, and how the vendor would respond
    • Who has access to our data within your organization? Understand internal access controls and employee access policies
    • What is your breach notification process? Require specific timelines and notification procedures in writing as part of any contract
    • Do you provide data export and deletion? Ensure you can extract all organizational data and request complete deletion if you end the relationship
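    The questions above lend themselves to a simple pass/fail intake screen. The sketch below treats any missing answer as disqualifying, consistent with the guidance that an unclear response rules a tool out; judging whether a written answer is evasive still requires a human reviewer.

    ```python
    # Sketch: vendor due diligence as a pass/fail screen. The question list
    # mirrors the article; the data structure and function are illustrative.
    VENDOR_QUESTIONS = [
        "Where is data stored and for how long?",
        "Is our data used to train AI models?",
        "What happens to data in response to legal process?",
        "Who has access to our data within your organization?",
        "What is your breach notification process?",
        "Do you provide data export and deletion?",
    ]

    def vendor_passes_screen(written_answers: dict[str, str]) -> bool:
        # Any unanswered question disqualifies the vendor outright. Whether a
        # given answer is clear or evasive still needs human review of the text.
        return all(written_answers.get(q, "").strip() for q in VENDOR_QUESTIONS)
    ```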

    Survivor-Centered AI Design and Implementation

    Research involving survivors themselves offers important guidance on what technology features matter most to the people DV organizations serve. Studies published in 2025 document that survivors place the highest value on anonymity, flexible privacy controls, the ability to exit quickly if needed, minimal data retention, and platforms they already use and trust. Tools that require account creation, persistent login, or extensive onboarding are less likely to be used by survivors in acute danger.

    The quick-exit button, a feature on DV websites that immediately navigates the browser away from the current page to something benign if the survivor needs to hide their activity, is the kind of design detail that reflects genuine understanding of survivor context. Any digital tool deployed for survivor-facing use should be evaluated with similar attention to the realities survivors navigate: devices may be monitored, time may be limited, and the cost of discovery is potentially severe.

    Algorithmic bias is a serious concern for DV AI tools, especially in communities where systemic biases shape how abuse has been historically reported and documented. Most training data for DV-related AI systems reflects a male perpetrator/female victim pattern that fails LGBTQ+ survivors, male victims of intimate partner violence, and situations involving cultural contexts that differ from mainstream representations in training data. Organizations serving these communities should specifically evaluate whether AI tools they consider have been tested for accuracy and relevance across the populations they serve.

    Finally, DV organizations should recognize their potential role in shaping sector norms around AI adoption. The NNEDV has historically played a significant leadership role in helping the DV sector navigate technology challenges, and their guidance resources are invaluable. Connecting with NNEDV's Safety Net Project, participating in peer learning networks with other DV organizations, and contributing to sector-wide conversations about AI standards is how individual organizations help ensure that AI adoption in the DV sector as a whole centers survivor safety.

    Design Principles for Survivor-Facing Tools

    • No registration required for initial access to information and support
    • Prominent quick-exit functionality to allow immediate navigation away
    • Minimal data retention with clear explanation of what is stored
    • Available through trusted channels (messaging apps) rather than requiring new downloads
    • Multilingual support that reflects the communities served

    Sector Resources and Connections

    • NNEDV Safety Net Project publishes updated guidance on AI and technology safety
    • NNEDV Virtual Tech Summits provide peer learning with approximately 500 participants from the sector
    • The Grove / ReloShare provides free shelter bed coordination for all DV shelters through NNEDV partnership
    • Safe and Together Institute offers guidance on coercive control and AI in child welfare contexts

    Building a Safety-Centered AI Approach

    The domestic violence sector's relationship with AI will be defined by the decisions organizations make in the next few years. Those decisions will determine whether AI becomes a genuine tool for expanding access to services and reducing advocate burnout, or whether it introduces new vulnerabilities that put survivors at greater risk. The path forward requires holding both of these possibilities in mind simultaneously.

    The good news is that the highest-risk AI applications (algorithmic lethality assessment, general-purpose AI tools for case documentation, and any system with unclear data retention practices) are not the applications that address DV organizations' most pressing needs. The applications that do address genuine needs (administrative burden reduction, grant writing assistance, shelter coordination, and thoughtfully designed crisis chat tools) can be implemented in ways that protect survivors when organizations apply rigorous data governance standards.

    The organizations best positioned to use AI well are those that establish clear policies before adopting any new tool, maintain strong relationships with sector resources like NNEDV's Safety Net Project, and treat every AI adoption decision as a question of survivor safety first and organizational efficiency second. That sequencing matters. An organization that gets it right helps build the trust that the DV sector as a whole will need as AI becomes more pervasive in how survivors seek help.

    For related reading on responsible AI implementation in direct service contexts, see our guides on AI for hospice and palliative care organizations, ethical AI service allocation, and AI for reentry and criminal justice organizations.

    Get Expert Guidance on AI for Your DV Organization

    One Hundred Nights works with human services organizations to design AI implementations that center client safety and organizational mission. We can help you build a framework that captures the benefits of AI while protecting the survivors you serve.