When AI first began appearing in harassment cases, it raised several questions for lawyers. Is AI-related harassment even “workplace” conduct? Do anti-harassment laws reach conduct generated by software rather than undertaken directly by human beings? Do we need new statutes before courts can touch this phenomenon?
The answer is that generative AI is simply changing the form of conduct that anti-harassment laws—such as Title VII on the federal level and the Illinois Human Rights Act at the state level—were already equipped to analyze. This blog explores some of the issues AI can present in workplace harassment claims, as well as practical tips for practitioners.
Defining AI Harassment
What does AI-generated harassment look like? In reality, it can take many forms, but let’s take the example of an employee who learns that coworkers have circulated a sexually explicit image of her. The image looks real, but in fact someone prompted AI to generate the image. Management’s initial response might focus on questions such as who accessed which system and whether a vendor bears responsibility. Only later does the matter get routed, if at all, through the harassment framework. This is a misstep.
“Deepfakes” are highly realistic images, videos, or audio recordings created using generative AI that replace one person’s likeness with another’s. Because these tools can create anatomically explicit content without the victim’s consent, they present a uniquely invasive form of harassment.
Digital Work Environments
The EEOC’s 2024 Enforcement Guidance on Harassment—recently rescinded, but still instructive—is explicit about how to think about technology-mediated conduct. The agency emphasizes that the “work environment” is not limited to physical spaces and includes conduct occurring outside the workplace that nonetheless affects the terms and conditions of employment. Importantly, the guidance specifically states that given “the proliferation of technology, it is increasingly likely that the non-consensual distribution of real or computer-generated intimate images, such as through social media, messaging applications, or other electronic means, can contribute to a hostile work environment, if it impacts the workplace.”
This guidance reflects how courts have long handled harassment via email, text messages, instant messaging channels (such as Slack and Teams), and video conferencing (including artificially generated backgrounds), as well as social media spillover from the personal to the professional. The proliferation of generative AI doesn’t disrupt how courts have approached harassment in new technologies, but rather supplies an additional fact pattern.
Is AI Harassment More or Less Severe?
Under Title VII, the core inquiry remains whether unwelcome conduct based on sex was sufficiently severe or pervasive to alter the conditions of employment and create an abusive working environment. Controlling jurisprudence has never hinged liability on the medium of the conduct. Instead, these cases turn on context, severity, frequency, power dynamics, and the employer’s response to complaints of harassment.[1]
Thus, the “AI did it” defense likely will not fly with courts. A sexually explicit deepfake such as the example provided above does not become less severe because it was generated rather than photographed, or because the creator claims experimentation rather than animus.
From a liability perspective, AI fits comfortably within existing categories. If a supervisor generates or circulates the content, the Faragher/Ellerth framework, which provides for strict employer liability for a supervisor’s harassment, is implicated. If coworkers are at fault, the case turns on whether the company was on notice of the violation and the reasonableness of the employer’s corrective action. If a third-party tool or vendor is involved, the analysis looks much like customer or contractor harassment cases in which the employer’s response and its control over the vendor still matter.
Establishing Liability in Illinois
Illinois law further strengthens a plaintiff’s hand. The Illinois Human Rights Act defines sexual harassment as “unwelcome conduct of a sexual nature that has the purpose or effect of substantially interfering with work performance or creating an intimidating, hostile, or offensive working environment.” This tracks federal law but is often applied more broadly, and Illinois courts and agencies have historically been receptive to evolving fact patterns.
The statutory and regulatory backdrop in Illinois is also significant. The state requires most employers to provide annual sexual harassment prevention training, and Chicago’s ordinance goes further by imposing annual training obligations (including supervisor-specific content and bystander training) and record-keeping requirements. In litigation, those requirements can become relevant to what an employer anticipated, what it trained for, and whether its response to a novel incident was reasonable. Treating AI-generated harassment as an unforeseeable anomaly will likely be frowned upon in a jurisdiction that emphasizes prevention and preparedness.
Deepfakes also tend to sharpen the “severity” analysis in a way that’s worth taking seriously. Courts have long recognized that pornographic or sexualized imagery in the workplace can carry significant weight in hostile work environment claims, particularly when it is targeted and humiliating. The Seventh Circuit’s emphasis on context, including who the speaker is, how directly the conduct is aimed at the plaintiff, and how it affects workplace interactions, translates cleanly to the medium of AI-generated imagery.
Gates v. Board of Education of the City of Chicago, while addressing race-based harassment, illustrates the court’s broader approach: “[L]anguage in our earlier opinions indicating that an environment must reach the point of ‘hellishness’ before becoming actionable is impossible to reconcile with [the Supreme Court’s decision in] Harris v. Forklift Systems, Inc.”[2] A realistic sexual deepfake depicting an employee can collapse what might otherwise be debated as “pervasiveness” into a single, severe event, especially once dissemination, commentary, and management response are factored in.
Practical Takeaways for Practitioners
For lawyers representing employers and employees alike, the practical takeaway is not that AI creates a new category of harassment law, but rather that it raises the stakes on familiar ones. At the end of the day, AI-generated sexual harassment arises only when someone prompts the content into existence and it finds its way into the workplace. These are familiar patterns from more traditional harassment claims, facilitated by new technology, and many similar conversations occurred among practitioners at the advent of social media.
Illinois, in particular, is well positioned to see early, consequential rulings in this space. This is not because its statutes are particularly futuristic, but because the state’s mandatory, ongoing training requirements push employers to anticipate and address harassment as workplace technology evolves.
When investigating and litigating claims of AI-generated sexual harassment, here are some practical takeaways:
- Don’t rely on the defense that AI is “new technology.” Courts will analyze AI-generated harassment under the familiar sexual harassment frameworks.
- Deepfakes heighten liability. Realistic sexual images of real employees will likely be treated as inherently degrading and harassing, even if there is only a single instance.
- Employer notice will be important. Be vigilant in reviewing IT tickets and security reviews as they may establish knowledge or impute notice during the discovery process in litigation involving generative AI.
- Training is key. Particularly in states such as Illinois which have mandatory and ongoing sexual harassment training requirements, employers should be integrating new technology such as AI into their training curricula.
Ultimately, while generative AI is a novel tool, the legal exposure it creates is rooted in age-old principles of workplace integrity. Employers who wait for “AI-specific” laws to catch up before addressing misconduct risk significant liability under existing frameworks. By treating AI conduct with the same gravity as all other conduct, and by staying informed with updated training that reflects the realities of modern technology, practitioners can ensure that the age of AI does not become an era of unchecked harassment.
[1] See, e.g., Strickland v. City of Detroit, 995 F.3d 495, 506-07 (6th Cir. 2021); Tammy S. v. Dep’t of Def., EEOC Appeal No. 0120084008, 2014 WL 2647178, at *12 (June 6, 2014); Knowlton v. Dep’t of Transp., EEOC Appeal No. 0120121642, 2012 WL 2356829, at *1-3 (June 15, 2012).
[2] 916 F.3d 631, 637 (7th Cir. 2019).