Love, Breakups, and Harm in the Digital Age

Published on: May 11, 2026

A Joint NABITA and ATIXA Tip of the Week by Mikiba Morehead, Ed.D.

Technology has significantly expanded the reach, permanence, and amplification of student harm. As students navigate relationships in digital spaces, emerging concerns such as online threats, image-based sexual abuse (IBSA), Artificial Intelligence (AI)-generated exploitation, chatbot-enabled intimacy, and modern romance scams now routinely intersect with Title IX and behavioral intervention processes. These realities often blur the lines between private conduct and institutional responsibility, particularly when digital behavior disrupts educational access.

NABITA and ATIXA recently held a joint public virtual event on the evolving dynamics of digital relationships and harm. This Tip of the Week highlights key guidance and policy considerations from the discussion, providing civil rights professionals and behavioral intervention/CARE teams with clear direction as digital behaviors evolve.

The Scale and Speed of Digital Harm Are Alarming and New

Institutions must respond to harm based on impact and risk, not discomfort with the medium or the technology involved. Whether an image is authentic, altered, or entirely fabricated using artificial intelligence, the institutional analysis remains largely the same. What matters is the experience of harm, fear, coercion, or reputational damage.

Investigations and response processes should remain focused on conduct, context, and effect, rather than being sidetracked by debates over the novelty of the technology or the authenticity of the image. Focusing too heavily on technical proof at the expense of timely intervention can compound harm, particularly when digital content spreads quickly and beyond an institution’s control. To put a fine point on it, if an image of a student’s face is circulated, attached to a digitally altered or generated nude body that is not theirs, the policy violation is the same as if the image depicted the student’s actual nude body. Most of those who view the image won’t know whether it is authentic or AI-generated, and that difference may affect the sanction, but not whether the conduct is an offense. It is in both variations.

For deeper guidance on this issue, see ATIXA’s blog post, When the Image Isn’t Real: Addressing AI-Generated Explicit Photos, and listen to our More Likely Than Not podcast episode, Brink of Horror: Navigating Tech-Facilitated Sexual Abuse.

Non-Consensual Image Sharing Exposes a Policy Gap

Sharing, or threatening to share, intimate images without the consent of the individual depicted continues to expose gaps in institutional policy, particularly in K-12 settings. When policies are nonexistent, outdated, or overly narrow, institutions risk inconsistent responses that undermine the educational community’s safety and trust.

Effective policies account for evolving digital behaviors while remaining grounded in established principles of consent, safety, and access to education. Clear definitions, explicit prohibitions, and aligned procedures are essential.

Model policy guidance is available in ATIXA’s blog post, Addressing Non-Consensual Intimate Image Sharing in K-12 Settings: A Model Policy That Fills the Gap.

Title IX Analysis Starts with Jurisdiction, Impact, and Access

Technology does not replace Title IX frameworks, but it does add complexity, especially in potentially broadening Title IX’s scope to encompass “off-campus” conduct. Institutions should continue to ask foundational Title IX analysis questions related to jurisdiction, conduct, impact, and supportive measures. While digital environments may change how and where harm occurs and how broadly it spreads, they do not alter an institution’s obligation to respond promptly and equitably.

Institutions should resist the urge to create parallel or siloed processes for digital harm. Instead, they can integrate digital conduct into existing Title IX, conduct code, and behavioral intervention frameworks to ensure consistency, fairness, and coordination.

AI Tools Do Not Replace Professional Judgment

AI is becoming more common in institutional workflows, but it cannot replace human decision-making, contextual analysis, or trauma-informed responses. While AI may assist with administrative efficiency, responsibility for conclusions, findings, and outcomes must remain with trained professionals. As with many things in life, the devil is in the details in many Title IX complaints, and AI is not yet adept at dancing with that devil.

For a broader discussion on the role and limits of AI in this work, listen to ATIXA’s podcast episode, Can AI Write Title IX Reports?

Respond Early and Thoughtfully

Across all scenarios, early intervention, strong policies, and clear communication matter. Most importantly, institutions must protect the safety and educational access of impacted individuals. Digital harm is a current operational reality that demands clarity, preparation, and confidence in existing frameworks, especially as legal protections often lag behind the development of technology.

Institutions should review policies, protocols, and training related to digital conduct, AI-generated content, and image-based abuse now. Proactive planning and alignment can reduce confusion, risk, and harm before incidents escalate.

NABITA and ATIXA offer tailored training on preventing and addressing tech-facilitated digital harm through their parent consulting firm, TNG Consulting. TNG provides well-informed investigations of such complaints as necessary. Prepare for the evolving digital landscape by contacting inquiry@tngconsulting.com.