When the ATS Becomes the Defendant

Inside the Workday lawsuit that could redefine accountability in AI hiring

A Discrimination Case Focused on Software, Not People

Employment discrimination lawsuits typically name employers. The Mobley v. Workday, Inc. case is different: it names the software vendor, asking whether an AI-driven hiring platform can be held accountable for unlawful age discrimination produced by automated screening.

The plaintiff, a job seeker in his forties, alleges that he was rejected more than 100 times by companies using Workday’s applicant tracking system. According to the complaint, the pattern of rejections was not random: Workday’s automated screening tools allegedly excluded older applicants at disproportionate rates, giving rise to claims under the Age Discrimination in Employment Act (ADEA).

In 2024, a federal judge ruled that the case could proceed, and the court subsequently granted conditional certification of a collective action under the ADEA. Those rulings allow the lawsuit to move beyond a single claimant and potentially encompass a broader group of similarly affected applicants.

The decision signals increased judicial scrutiny of how automated hiring systems operate at scale.

How Algorithmic Disparate Impact Enters the Legal Picture

The lawsuit does not accuse Workday of intentionally coding age bias. Instead, it relies on a disparate impact theory, a long-standing principle in employment law.

Disparate impact occurs when a facially neutral practice disproportionately harms a protected group, even in the absence of discriminatory intent. Courts have applied this standard to testing requirements, educational thresholds, and subjective hiring practices for decades. Its application to algorithmic hiring systems is relatively new.

Machine-learning models identify statistical patterns in historical data. When that data reflects past hiring preferences, such as favoring recent graduates or uninterrupted career paths, the resulting system may indirectly disadvantage older workers. Research has shown that age can be inferred through proxies like graduation dates, employment gaps, and career length, even when age itself is excluded from model inputs.
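
To make the proxy problem concrete, here is a minimal Python sketch using synthetic data. The feature names, pool size, and noise levels are illustrative assumptions, not anything from the case record; the point is only that resume features which look age-neutral can reconstruct age almost exactly:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical applicant pool: 2,000 candidates aged 22-65.
ages = rng.integers(22, 66, size=2000)

# Proxy features a resume parser might extract. Age itself is never an
# input, but each proxy is mechanically derived from it, with noise for
# gap years, graduate school, and career changes.
grad_year = 2024 - (ages - 22) + rng.integers(-2, 3, size=2000)
career_len = np.clip((ages - 22) + rng.integers(-3, 4, size=2000), 0, None)

# Despite the noise, the "neutral" proxies recover age almost perfectly,
# so a model trained on them can still rank candidates by age.
print(np.corrcoef(ages, grad_year)[0, 1])   # strongly negative (near -1.0)
print(np.corrcoef(ages, career_len)[0, 1])  # strongly positive (near +1.0)
```

Dropping the age field from a model’s inputs, in other words, does not remove age from the model’s behavior.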

Under the ADEA, once disparate impact is established, the burden shifts to the defendant to show that the challenged practice is based on reasonable factors other than age, the ADEA’s analogue to Title VII’s business-necessity defense.

Regulatory Pressure Is Already Building

Federal regulators have anticipated this moment. The Equal Employment Opportunity Commission has repeatedly warned that employers remain responsible for the outcomes of algorithmic tools they deploy, even when those tools are provided by third-party vendors.

As automated screening becomes more common, courts are being asked to clarify whether software functions merely as an administrative aid or as an active decision-making mechanism within hiring workflows.

That distinction may prove critical for determining liability.

What Employers and Talent Leaders Should Take Away

Vendor tools do not shield employers from responsibility.
Deploying third-party AI does not transfer legal accountability to the vendor. Organizations remain responsible for monitoring outcomes and ensuring compliance.

Algorithm audits are becoming a baseline requirement.
Employers must evaluate selection rates, demographic impact, and pass-through ratios across automated hiring stages; a minimal audit sketch follows these takeaways. Passive reliance on vendor assurances is unlikely to satisfy regulators.

Transparency is increasingly expected.
Organizations may need to explain how hiring algorithms are trained, what data they rely on, and how bias is identified and addressed.

Scale intensifies risk.
Workday has publicly reported processing more than one billion job applications. At that volume, even minor statistical skews can result in widespread exclusion.
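
One way to operationalize the audit point above is the EEOC’s four-fifths rule of thumb, under which a group’s selection rate below 80 percent of the most-favored group’s rate is commonly treated as a red flag for adverse impact. The Python sketch below applies that check to hypothetical counts; the group labels and numbers are invented for illustration, not drawn from the litigation:

```python
# Hypothetical counts from one automated screening stage.
applicants = {"under_40": 10_000, "40_and_over": 10_000}
advanced   = {"under_40": 1_200,  "40_and_over": 800}

# Selection (pass-through) rate for each group.
rates = {g: advanced[g] / applicants[g] for g in applicants}
benchmark = max(rates.values())

# Four-fifths check: flag any group whose rate falls below 80% of the
# most-favored group's rate.
for group, rate in rates.items():
    impact_ratio = rate / benchmark
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate {rate:.1%}, impact ratio {impact_ratio:.2f} -> {flag}")

# Scale check: a 4-point gap in pass-through rates, applied to 500 million
# applications from the disfavored group, means roughly 20 million
# additional rejections.
gap = rates["under_40"] - rates["40_and_over"]
print(f"{gap:.1%} gap x 500M applications = {gap * 5e8:,.0f} extra rejections")
```

A real audit would need stage-by-stage data and statistical significance testing, but even this crude ratio makes the scale point vivid: small relative gaps become enormous absolute numbers.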

A Turning Point for Automated Hiring Systems

Applicant tracking systems were once treated as operational infrastructure. Increasingly, they function as gatekeepers, determining which candidates advance and which are filtered out before human review.

The Mobley v. Workday case underscores a broader shift. Automated hiring tools are no longer viewed as neutral conduits. They are becoming subjects of legal and ethical evaluation in their own right.

Organizations now face difficult but necessary questions: Do they understand how their systems screen candidates? Can they explain and justify those outcomes? And are they prepared to defend algorithmic decisions in court?

What Comes Next

Automation remains a central feature of modern hiring. However, automation without oversight is no longer sustainable.

If courts ultimately determine that AI-driven screening tools can produce unlawful disparate impact, the implications will extend beyond a single platform. The ruling would influence how hiring technologies are built, purchased, audited, and governed across industries.

The future of hiring will still be automated. It will also need to be accountable.
