Getting laid off in 2025 comes with a new kind of exhaustion - not just from losing a job, but from knowing exactly what comes next: instant rejection emails and recruiter ghosting. An estimated 75% of resumes never even make it past ATS (Applicant Tracking System) gatekeepers.
Automation algorithms have found a promised land in recruiting. AI now writes job ads, screens applicants, schedules interviews, and even predicts who’s likely to accept an offer. Yet the moment a candidate is merely suspected of using AI, their application becomes untouchable. This double standard is growing fast.
The result is a strange labor-market paradox: job seekers rely on AI tools to survive an increasingly AI-driven hiring process, yet the very systems meant to help recruiters often reject them for doing so. Here’s a closer look at how the cycle works and why it’s breaking down.
The Unwanted Necessity
On the employer side, AI has become a practical necessity. With firms like Goldman Sachs receiving over 315,000 internship applications and Google surpassing three million, no recruiting team can manually process the volume. So AI handles the initial screening, filters applicants, runs assessments, and engages candidates through chatbots. Recruiters often frame these tools as light automation - just a way to handle routine tasks. In reality, they define which candidates ever make it to a human reviewer.
As the co-founder of a platform that helps job seekers make AI their go-to career assistant, I see how applicant tracking systems have become a barrier: only about 2% of candidates reach the interview stage, their first real chance to speak with a human.
The Quicksand of Hiring
According to our survey, many candidates remain skeptical of AI tools for the job search, preferring to keep everything under their own control. But the state of the market is convincing more of them that they need AI to adapt to today’s recruiting practices.
So they turn to tools like AI resume tailoring, keyword optimization, LinkedIn profile rewrites, and interview simulations. Such tools don’t fabricate experience; they help candidates “speak the language” of the machines that will read their applications. And it works: optimized resumes dramatically increase callback rates, while AI-guided interview practice improves structure and confidence.
Yet the moment recruiters suspect that a candidate used AI, the tone shifts. On Reddit, stories circulate about immediate rejections triggered by nothing more than “pausing for 2-3 minutes during a technical assessment” or producing writing that seems “too polished.” In one case, a hiring team admitted they planned to ghost a candidate because their video interview included brief gaps they interpreted as AI-assisted prompting, even though there was no evidence of cheating. Recruiters reject applicants for “AI vibes” while relying on opaque AI-scored assessments themselves. What counts as assistance for one side is misconduct for the other.
Creativity or Cheating
Sometimes, to get a competitive edge, applicants turn to more extreme tactics that can be seen as unethical shortcuts. One example is a trick popularized on TikTok: embedding white-ink keywords in the resume to manipulate parsers. Some applicants admit it was the only way they could get noticed. Recruiters haven’t reached a consensus on the practice, but it clearly signals that excessive automation can backfire on them.
Other practices, especially the use of deepfake tools during video screenings and browser plugins that help candidates answer recruiters’ questions in real time, clearly cross ethical lines by undermining trust and distorting the hiring process.
But not all AI use is dishonest. Tools that polish grammar, tailor a resume to a job description, clarify achievements, or help candidates practice interview answers simply improve communication, acting as the digital equivalent of a career coach or a skilled editor.
The real distinction is simple: ethical AI enhances your genuine experience; cheating tries to fabricate it. Until companies define these boundaries more transparently, candidates will continue navigating this grey zone on their own.
Navigating the Unknown
Whether people like it or not, AI is now a permanent element of hiring, and there’s no sense denying it. What is worth discussing, though, is the acceptability of certain practices used by both candidates and recruiters.
Recruiting bias has always been a thorny issue, but algorithmic screening takes it to a new level. Models trained on mountains of biased historical data quietly learn to sideline the same groups that have long been marginalized: women, older workers, candidates with disabilities, and anyone for whom English isn’t a first language.
What looks like a neutral metric on the surface - an employment gap, for example - can become a proxy for gender, age, disability, or linguistic background. This is where things get slippery: AI becomes a convenient mask for unethical behavior that’s nearly impossible to prove or challenge, leaving job seekers confused, defeated, and shut out of opportunities they’re fully qualified for.
Conclusion
It takes time for any major technology to find its place, and AI in hiring is no different. Right now, the tools are maturing faster than the norms around them, with both candidates and recruiters guessing about what’s acceptable. And that uncertainty fuels mutual mistrust.
Of course, the rules, both official and unspoken, will eventually settle. The challenge is to use this transition period wisely: leave room for experimentation while remembering that AI should assist human-to-human interaction, not break it. We can either let unclear rules harden into systems that amplify bias, or take the time to build transparent, fair, and human-centered hiring practices before the new status quo sets in.
