The ATS Black Hole: Why Great Candidates Never Get Seen
Most job applications disappear. The candidate submits, waits, and hears nothing. The company says they couldn't find anyone. Both are telling the truth; they are just not describing the same reality.
TALENT ACQUISITION · HIRING STRATEGY
Paulo Barrelas
4/19/2026 · 6 min read


The Applicant Tracking System (ATS) sits between the candidate and the company. Nearly 99% of Fortune 500 companies use one, and adoption is spreading fast to mid-size and smaller organizations. What started as a way to organize candidate data has evolved into the first, and often only, filter between a person and a recruiter. For many candidates, it is where their application ends without a human ever reading a word.
The filter is only as good as what defines it
An ATS does not decide who is qualified. It decides who matches a set of criteria configured by whoever set up the system, often a recruiter working from an outdated job description written by someone who no longer holds the role. The filter reflects the assumptions baked into it: the right keywords, the right degree, the right number of years in the right job title.
Research shows:
A 2021 Harvard Business School and Accenture report, "Hidden Workers: Untapped Talent," found that 88% of employers surveyed acknowledged their ATS was screening out qualified high-skilled candidates because they did not match the exact criteria. Among middle-skilled roles, that figure rose to 94%. Nine in ten executives knew their system was rejecting people who could do the job. They knew it, and the system kept running.
The report estimated that as many as 27 million workers in the US alone fall into the category of "hidden workers": people who are actively seeking work, whose applications consistently fail not because of what they lack, but because of how the filter is calibrated. Veterans with non-standard job titles. Caregivers with employment gaps. Candidates with transferable skills and non-linear career paths. The ATS is not finding the best candidates. It is finding the ones who know how to write for an algorithm.


The keyword problem is structural
The most common form of ATS filtering is keyword matching. A recruiter searches for a term from the job description. Resumes that contain that exact term surface. Resumes that describe the same capability using different language do not. A candidate who lists "talent acquisition" when the filter searches for "recruiting" may never appear, despite being equally or more experienced.
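To make the failure mode concrete, here is a minimal sketch of that exact-match logic. The candidate names, resume snippets, and search term are all invented for illustration; real systems layer on boolean operators and ranking, but the core behavior being described is the same.

```python
# A minimal sketch of exact-match keyword filtering. The candidate names,
# resume snippets, and search term are invented for illustration.

def matches_keyword(resume_text: str, search_term: str) -> bool:
    """Surface a resume only if the exact term appears in it."""
    return search_term.lower() in resume_text.lower()

resumes = {
    "candidate_a": "Led talent acquisition for a 200-person engineering org.",
    "candidate_b": "Five years of recruiting experience across technical roles.",
}

# The recruiter searches for "recruiting". Candidate A describes the same
# capability in different words, so they never surface.
shortlist = [name for name, text in resumes.items()
             if matches_keyword(text, "recruiting")]
print(shortlist)  # ['candidate_b']
```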
This is not a technical glitch. It is the system functioning as designed. The problem is that job descriptions are imprecise, criteria are set by people without deep knowledge of the role, and the keywords being searched often reflect what past hires looked like rather than what the job actually requires. The ATS faithfully executes a flawed brief, at scale, thousands of times per search.
Research shows:
Jobscan's State of the Job Search report found that 76.4% of recruiters search and rank candidates by skills drawn directly from the job description. The filter is not built around what the role needs; it is built around what the job description says, which is a different thing entirely.


When AI screens candidates, bias gets automated
The next wave of ATS technology goes beyond passive keyword matching into active, AI-powered screening. Large language models embedded in recruiting tools now score resumes, rank applicants, and in some cases generate shortlists without human review. The promise is speed and objectivity. The evidence suggests something else.
Research shows:
A 2024 study from the University of Washington, published through the Brookings Institution, tested three large language models across resume screening tasks. The results were stark: resumes with white-associated names were preferred in 85.1% of cases, while resumes with Black-associated names were preferred in just 8.6%. The two groups were selected at equal rates in only 6.3% of tests. Gender bias was also present across all three models, with men's resumes favored in over half of the unequal outcomes.
These are not edge cases. They are the output of systems trained on historical hiring data, which encodes decades of structural bias. When an AI learns from past decisions, it learns past preferences. Zip codes that correlate with racial segregation. University names that correlate with socioeconomic background. Employment gaps that correlate with caregiving, which correlates with gender. None of these features are labeled "bias" in the training data. They appear as patterns. The model learns the patterns.
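A toy simulation makes the mechanism visible. In the sketch below, every number, feature name, and label is synthetic and invented for illustration: the model is never shown the protected attribute, yet it reproduces the historical bias through a correlated proxy.

```python
# A synthetic demonstration of proxy leakage. All data here is invented;
# the point is the mechanism, not the numbers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n = 10_000

group = rng.integers(0, 2, n)                   # protected attribute, never shown to the model
zip_code = (group + (rng.random(n) < 0.2)) % 2  # segregated geography: strongly correlated with group
skill = rng.normal(0, 1, n)                     # genuine qualification signal

# Historical hiring decisions: driven by skill, but biased against group 1.
hired = (skill - 1.0 * group + rng.normal(0, 0.5, n) > 0).astype(int)

# Train only on "neutral" features: skill and zip code. No group label anywhere.
X = np.column_stack([skill, zip_code])
model = LogisticRegression().fit(X, hired)

# The model reproduces the historical bias through the proxy.
for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"group {g}: predicted hire rate = {rate:.2f}")
```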
Amazon made this concrete in 2018, when it discontinued an internal AI recruiting tool after discovering it was systematically downgrading resumes containing terms associated with women, including the names of all-women's colleges. The system had learned from a decade of past hires, which skewed heavily male. It was optimizing for what had worked before, which meant optimizing against the people who had been excluded before. The tool was scrapped, but the dynamic it illustrated has not gone away. It has been packaged and sold as a feature.


Efficiency is not neutrality
ATS vendors sell their products partly on the promise of reducing human bias. Remove the human from initial screening, the argument goes, and you remove the prejudice. The framing is appealing, but it is also wrong, or at least incomplete.
AI systems do not start from scratch. They are trained on human decisions, human-written job descriptions, and hiring outcomes produced by human beings operating inside organizations with their own histories, cultures, and blind spots. The bias is not removed. It is abstracted, automated, and deployed at a scale that makes it harder to see and harder to challenge.
Research shows:
A 2023 study published in Humanities and Social Sciences Communications found that algorithmic bias in AI-enabled recruitment stems from two primary sources: limited and unrepresentative training datasets, and the biases embedded by the people who design the systems. Both are structural problems that cannot be resolved by faster processing or a more sophisticated model. The problem is in what the model is optimizing for, which is shaped by decisions made before a single resume is ever screened.
The companies deploying these tools are often unaware of the bias in their systems. Many vendors claim to offer "debiased" models; independent audits consistently find otherwise. In the UK, an audit of AI recruitment software uncovered multiple fairness and privacy vulnerabilities, prompting the Information Commissioner's Office to issue nearly 300 recommendations. Proprietary systems remain largely opaque to outside scrutiny, meaning the companies using them have limited ability to understand what decisions those systems are making or who they are excluding.


The problem is not the tool. It is the unexamined process
An ATS, used well, is a reasonable way to manage volume. Large organizations cannot manually review thousands of applications for every open role. The problem is not that companies use software to organize candidate pipelines. The problem is that the criteria inside those systems are rarely interrogated, and the assumption that the system is objective has replaced the obligation to check.
When 88% of executives know their ATS is rejecting qualified candidates and continue using it unchanged, the issue is not ignorance. It is a set of priorities that values hiring speed over hiring quality, and treats the exclusion of qualified people as an acceptable side effect of efficiency. The hidden workers the Harvard study describes are not hidden because they are hard to find. They are hidden because the filter was never designed to find them.
Building a hiring process that finds the best people requires reviewing filter criteria with the same rigor applied to any other business decision: what are we actually looking for, what is this criterion a proxy for, and who does it exclude? The ATS will execute whatever it is told. The question is whether the people configuring it have thought carefully enough about what they are asking it to do.
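One concrete way to ask "who does it exclude?" is to compare the filter's pass rates across groups. The sketch below applies the four-fifths rule used in US adverse-impact analysis; the applicant records and group labels are invented for illustration, and the rule is a screening heuristic that flags a filter for review, not a legal determination.

```python
# A minimal sketch of a filter audit using the four-fifths rule.
# The applicant records and group labels are invented for illustration.

def selection_rates(records):
    """records: iterable of (group, passed_filter) pairs."""
    totals, passed = {}, {}
    for group, ok in records:
        totals[group] = totals.get(group, 0) + 1
        passed[group] = passed.get(group, 0) + int(ok)
    return {g: passed[g] / totals[g] for g in totals}

applicants = ([("A", True)] * 60 + [("A", False)] * 40
              + [("B", True)] * 30 + [("B", False)] * 70)

rates = selection_rates(applicants)
impact_ratio = min(rates.values()) / max(rates.values())
print(rates)                                # {'A': 0.6, 'B': 0.3}
print(f"impact ratio: {impact_ratio:.2f}")  # 0.50 -- below 0.8, flag for review
```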


Key Sources & Further Reading
Fuller, J. B., Raman, M., Sage-Gavin, E., & Hines, K. (2021). Hidden workers: Untapped talent. Harvard Business School & Accenture. https://www.hbs.edu/managing-the-future-of-work/Documents/research/hiddenworkers09032021.pdf
Wilson, K., & Caliskan, A. (2024). Gender, race, and intersectional bias in AI resume screening via language model retrieval. Brookings Institution / University of Washington. https://www.brookings.edu/articles/gender-race-and-intersectional-bias-in-ai-resume-screening-via-language-model-retrieval/
Chen, Z. (2023). Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities and Social Sciences Communications, 10(1). https://doi.org/10.1057/s41599-023-02079-x
Jobscan. (2025). State of the job search 2025. Jobscan. https://www.jobscan.co/blog/fortune-500-use-applicant-tracking-systems/
Drage, E., & Mackereth, K. (2022). Does AI debias recruitment? Race, gender, and AI's "eradication of difference." Philosophy & Technology, 35(4). https://doi.org/10.1007/s13347-022-00543-1
