When AI Gives a Waiter a 92% Match for Finance Director
I'd like to preface this article by saying it's nothing against anyone who works in hospitality (former craft-cocktail bartender here!). The message applies to the broader issue of over-reliance on AI to determine candidate fit. Let's dive in.
This has been my concern for months, but now we have the data that should make every hiring leader stop and reconsider their entire approach. A recent study by ERE found that AI recruiting tools gave a waiter a 92% compatibility score for a Finance Director position. Not a typo, friends. Ninety-two percent.
Let that sink in for a moment.
The Uncomfortable Truth About Our AI Obsession
Here's what happened: Researchers took a waiter's resume and stuffed it with financial jargon in completely nonsensical ways. Phrases like "served beverages at corporate luncheons for PepsiCo's financial leadership, catching phrases like gross margin expansion and portfolio rationalization."
One AI model (Grok-fast) confidently concluded that overhearing finance conversations was equivalent to years of experience. Another model (ChatGPT-fast) saw right through it, scoring the same resume a 9 out of 100.
The gap between 92 and 9 isn't just a technical glitch—it's a competence crisis that reveals how dangerously we've automated away human judgment.
We've Become Keyword Merchants, Not Talent Advisors
The brutal reality is that most recruiting has devolved into pattern matching. We've outsourced critical thinking to algorithms that can't distinguish between keyword optimization and actual competence. The same AI creating fake candidates is making our screening processes objectively worse.
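To make the failure mode concrete, here is a toy sketch of the kind of naive keyword-overlap scoring the study exposed. This is an illustration of the pattern-matching trap, not any vendor's actual algorithm; the keyword list and resume snippets are invented for the example.

```python
# Toy illustration (not a real screening tool): a naive keyword-overlap
# scorer, the kind of pattern matching that jargon stuffing exploits.

FINANCE_KEYWORDS = {
    "gross margin", "portfolio rationalization", "forecasting",
    "budget", "p&l", "variance analysis", "capital allocation",
}

def keyword_score(resume_text: str) -> int:
    """Score = percentage of target keywords found anywhere in the text."""
    text = resume_text.lower()
    hits = sum(1 for kw in FINANCE_KEYWORDS if kw in text)
    return round(100 * hits / len(FINANCE_KEYWORDS))

stuffed = ("Served beverages at corporate luncheons, catching phrases "
           "like gross margin expansion, portfolio rationalization, "
           "forecasting, budget reviews, P&L chatter, variance analysis, "
           "and capital allocation debates.")
honest = "Five years waiting tables; promoted to shift lead; trained new staff."

print(keyword_score(stuffed))  # 100 -- the jargon-stuffed resume aces it
print(keyword_score(honest))   # 0 -- the honest resume scores zero
```

Notice what the scorer never asks: whether the candidate *did* any of those things. Overhearing "gross margin expansion" at a luncheon counts exactly as much as delivering it.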
This isn't about being anti-technology. It's about recognizing that when most companies lean on AI daily in their recruiting processes, we create a massive blind spot in our ability to assess genuine capability, and an easy path to intellectual laziness.
The uncomfortable question: How many of your recent "high-scoring" candidates were sophisticated keyword manipulations rather than qualified professionals?
The Real Cost of Algorithmic Shortcuts
When I talk to clients about quality of hire metrics, they're often shocked by the hidden costs of speed-obsessed recruiting:
Legal exposure: AI tools that can be easily fooled create compliance nightmares under emerging regulations
Security risks: Incompetent screening processes let bad actors infiltrate organizations
Talent inversion: You're actively filtering out your best people while advancing the worst
But here's the part that keeps me up at night: We're incentivizing these failures through our partnership models.
Rethinking Recruiter Relationships in the AI Era
If you're still paying recruiting partners based on speed-to-fill metrics, you may be contributing to the problem. Contingency models that reward resume volume over candidate validation create the perfect environment for AI-optimized madness.
The math is simple:
Average cost per hire: roughly $4,300 to $4,700, depending on the source
Average cost of a bad hire: $240,000
Time to recover from a failed placement: 6-12 months
Yet we're optimizing the variable measured in thousands while leaving the risk measured in hundreds of thousands unprotected.
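The asymmetry above can be made explicit with back-of-envelope expected-cost math. The hiring figures come from the numbers quoted in this article; the bad-hire probabilities are illustrative assumptions, not sourced data.

```python
# Back-of-envelope expected-cost sketch using the article's figures.
# The bad-hire probabilities below are illustrative assumptions only.

cost_per_hire = 4_500        # midpoint of the $4,300-$4,700 range
cost_of_bad_hire = 240_000   # quoted average cost of a bad hire

def expected_cost(p_bad_hire: float) -> float:
    """Expected total cost of one hire, given a bad-hire probability."""
    return cost_per_hire + p_bad_hire * cost_of_bad_hire

fast_and_loose = expected_cost(0.25)   # assumed rate with speed-optimized screening
slow_and_sure = expected_cost(0.10)    # assumed rate with thorough vetting
print(fast_and_loose - slow_and_sure)  # 36000.0 expected savings per hire
```

Even under these rough assumptions, trimming a few points off the bad-hire rate dwarfs anything you could save by shaving the per-hire cost itself.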
Better partnership models prioritize:
Quality of hire over speed of placement
12-month success metrics instead of 30-day fills
Thorough vetting processes over resume sourcing volume
The Human Intelligence Advantage
Here's where experienced talent advisors create irreplaceable value: We can implement assessment techniques that can't be gamed by AI.
Instead of asking standard behavioral questions, try sequential probing:
"Walk me through your biggest technical challenge in your last role."
"What specific information did you wish you had before tackling that?"
"How did you decide which approach to take when you had multiple options?"
"If someone had disagreed with your approach, how would you have defended it?"
"What would you do differently if faced with the same situation today?"
Each question builds on the previous response, making scripted answers impossible. You're not testing what they know—you're understanding how they think.
Bring Back the Fundamentals
While everyone's chasing the latest AI recruiting tool, I'm doubling down on practices that work:
Reference checks are for fraud prevention. If a candidate can't provide three people who will vouch for their actual work, that's your answer.
Live problem-solving beats skills assessments (I know this isn't for everyone; for engineering hiring, assume the candidate has already passed a coding challenge). Watch how candidates think through challenges in real time rather than relying on tests ChatGPT can complete.
Trust your instincts. If something feels off about a candidate's responses, dig deeper. Your human intuition is detecting inconsistencies that algorithms miss.
The Bottom Line: Slow Down to Speed Up
There's an old saying: speed kills. The industry's obsession with time-to-fill has created this mess. Companies optimizing for speed over verification will continue getting burned by sophisticated fakes, whether they're AI-generated candidates or keyword-stuffed resumes that fool screening algorithms. Not to mention the damage to your employer brand and candidate experience (convo for another day).
The talent advisors thriving in 2025 understand that their value isn't in resume sourcing—it's in human verification and judgment. We're not keyword matchers; we're authenticity detectors. I like that. Authenticity Detectors. Perhaps a new job title?
The question isn't "How quickly can you fill this role?"
It's "How certain are you that this person can do what they claim they can do?"
That certainty requires human intelligence, intentional questioning, and thorough validation. It takes longer than algorithmic shortcuts, costs more than automated screening, and demands more skill than keyword matching.
But it's the only approach that works when anyone can game the system in 70 minutes with the right prompts.
Your choice: Continue optimizing for speed and deal with the expensive consequences, or invest in processes that identify genuine talent.
The math isn't complicated. The implementation requires discipline.
But getting it right is what separates talent advisors from resume vendors.
Key Takeaways:
• AI recruiting tools are dangerously incompetent: they can't distinguish between keyword stuffing and real experience
• Speed-obsessed recruiting creates expensive problems: bad hires cost hundreds of thousands vs. thousands in hiring costs
• Human judgment beats algorithmic shortcuts: sequential questioning and reference checks can't be gamed by AI
• Partnership models need restructuring: reward quality over speed, outcomes over volume
• Trust your instincts: if something feels off, dig deeper—your intuition catches what algorithms miss