The use of artificial intelligence is rapidly growing in employment practices, especially in recruiting and candidate sourcing. While AI can improve efficiency and enable more precise targeting of candidates, it can also land employers in legal trouble. Understanding how AI use in the hiring process is regulated is crucial for finding the right candidates while remaining compliant and liability-free.
In the hiring context, traditional AI (which makes decisions based on pre-programmed rules, formulas, or data sets) is most commonly used to source, screen, and prioritize candidates by following algorithms (step-by-step instructions designed to support decision-making). This differs from Generative AI, which creates new content, like job descriptions or candidate summaries, and presents unique risks regarding data hallucinations and the accidental disclosure of confidential applicant information.
What Risks Do Employers Take When Using AI in the Hiring Process?
Using AI and algorithms to source and prioritize candidates carries risks for employers, including potential bias and discrimination and challenges to legal compliance.
- Biases Based On Historical Data: AI systems can unintentionally perpetuate existing biases in historical hiring data, leading to discriminatory outcomes based on race, gender, age, or other protected characteristics. If the majority of a company’s administrative assistant roles are filled by women, an AI tool may learn to favor female candidates’ resumes. In another example, graduation years listed on resumes could create discrimination risks if an AI algorithm infers a candidate’s age from those dates and makes hiring decisions based on that inference.
- Legal Compliance Challenges: Employers who use AI in the hiring process must ensure such use complies with both federal and state anti-discrimination laws. Certain employment laws may require employers to justify hiring decisions. For example, in a discrimination claim under Title VII (and state equivalent laws), an employer must show that hiring criteria are job-related and consistent with business necessity.
Employers that cannot explain why a certain algorithm screened out certain protected groups may be unable to defend themselves against a discrimination claim. Therefore, employers must understand how the AI tools they use in hiring operate and ensure that those tools function in a non-discriminatory manner.
Are There Laws that Govern Employers’ Use of AI in the Hiring Process?
While there are no federal statutes that govern AI usage in employment practices, various states have enacted laws governing the use of AI in the workplace, and a growing number of states are expected to follow. Currently, Colorado, California, Illinois, Maryland, and New York all have some sort of legislation that regulates employers’ use of AI when screening or sourcing candidates.
What Laws Does Illinois Have in Place?
As of January 1, 2026, the Illinois Human Rights Act (IHRA) prohibits employers from using AI in recruitment, hiring, promotion, or any other term or condition of employment if it results in discrimination against protected classes. These protected characteristics include but aren’t limited to race, religion, age, sexual orientation, physical or mental disability, and citizenship status.
Specifically, the IHRA prohibits employers from using zip codes as a proxy for a protected class such as race. For example, an AI algorithm that automatically rejects candidates living in certain zip codes within 20 miles of the workplace but allows applications from other areas within the same radius could inadvertently result in discrimination based on race.
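One practical way to catch a proxy effect like this is to audit screening outcomes by zip code before looking at any protected characteristic. A minimal sketch, using entirely hypothetical applicant records and zip codes:

```python
# Hypothetical audit: does a screening rule reject applicants from some
# zip codes at a much higher rate than others? Large gaps between nearby
# areas can signal that location is acting as a proxy for a protected class.
from collections import Counter

# Illustrative records only -- zip codes and outcomes are made up
applicants = [
    {"zip": "60601", "screened_out": False},
    {"zip": "60601", "screened_out": False},
    {"zip": "60621", "screened_out": True},
    {"zip": "60621", "screened_out": True},
    {"zip": "60621", "screened_out": False},
]

def rejection_rate_by_zip(records):
    """Return the share of applicants screened out in each zip code."""
    totals, rejected = Counter(), Counter()
    for r in records:
        totals[r["zip"]] += 1
        if r["screened_out"]:
            rejected[r["zip"]] += 1
    return {z: rejected[z] / totals[z] for z in totals}

print(rejection_rate_by_zip(applicants))
# A sharp disparity between zip codes in the same commuting radius
# is a prompt for human review, not proof of discrimination on its own.
```

A real audit would use the employer's actual applicant data and pair this with demographic analysis; this sketch only shows the shape of the check.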
Further, employers must notify applicants and employees of the employer’s use of AI. Employers who fail to comply with these new amendments to the IHRA commit civil rights violations and face penalties such as fines of up to $16,000 for first offenses, $42,500 for second offenses within a five-year period, and $70,000 for those with two or more offenses within a seven-year period.
How Can Employers Protect Themselves?
There are several ways employers can be smart when it comes to using AI in the hiring process. First, ensure you understand how your tools actually work. When vetting a new provider or reviewing a current one, require vendors to provide the following documentation:
- Training Data Transparency: Detailed records of non-proprietary data that was used to train the AI model and its original source.
- Bias Safeguards: A description of the specific filters and guardrails applied to reduce discriminatory outcomes.
- Audit Results: Evidence of any bias or "disparate impact" testing conducted on the tool.
- Data Usage Statement: A written explanation of exactly how the vendor uses and stores your company’s specific data.
- Compliance Certification: (Especially for Illinois) Confirmation that the tool allows for the required disclosures and does not use prohibited proxies like zip codes.
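On the "disparate impact" point above, a common quantitative screen is the EEOC's four-fifths rule: a selection rate for any group that falls below 80% of the highest group's rate may indicate adverse impact. A minimal sketch of that check, with hypothetical group labels and counts:

```python
# Four-fifths rule screen for adverse impact (EEOC Uniform Guidelines).
# Group names and numbers below are illustrative, not real data.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total_applicants)} -> {group: rate}"""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def four_fifths_check(outcomes):
    """Flag groups whose selection rate is below 80% of the top group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top < 0.8 for g, rate in rates.items()}

# Hypothetical screening results: (candidates advanced, total applicants)
outcomes = {"group_a": (50, 100), "group_b": (30, 100)}
print(four_fifths_check(outcomes))
# group_b's rate (0.30) is 60% of group_a's (0.50), below the 0.8
# threshold, so it is flagged: {'group_a': False, 'group_b': True}
```

A flag under this rule is a starting point for investigation, not a legal conclusion; employers should involve counsel when a tool's results trip this kind of threshold.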
Maintaining Privacy & Oversight
Beyond vendor management, employers should also strive to maintain applicant privacy by ensuring AI tools do not improperly capture candidate data. To stay protected:
- Periodic Audits: Employers should review their AI tools regularly to ensure that no impermissible information is being saved, that permissible information is stored securely, and that data is available only to those with a “need to know.”
- Train Your Team: Instruct your employees on how to use these tools and how to maintain privacy and reduce the risk of confidentiality breaches.
- Minimize Collection: Only collect data that is necessary to the hiring process.
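The "minimize collection" principle can be enforced in code by allow-listing the fields a hiring workflow actually needs before anything is stored. A minimal sketch, with hypothetical field names:

```python
# Hypothetical allow-list of applicant fields needed for hiring decisions;
# everything else is dropped before the record is stored.
ALLOWED_FIELDS = {"name", "email", "work_history", "skills"}

def minimize(record):
    """Keep only allow-listed fields from an applicant record."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "name": "A. Applicant",
    "email": "a@example.com",
    "skills": ["Python"],
    "date_of_birth": "1990-01-01",  # not needed -> never stored
}
print(minimize(raw))  # date_of_birth is dropped before storage
```

Filtering at intake, rather than deleting later, means sensitive fields like dates of birth never enter the employer's systems in the first place.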
Ultimately, employers must remember that AI does not replace human judgment. Employers are responsible and accountable for the outcomes of any AI tools they use. These tools should support human judgment, not fully replace it.