A "senior backend engineer" we ran into last year passed the recruiter screen at three different top-tier agencies. He had a clean LinkedIn, the right keywords, eight years listed at three recognizable companies, and a convincing phone manner. He made it to panel rounds at two Series C startups before anyone asked him to narrate an actual system he had owned. When they did, the story fell apart inside ten minutes. He had spent most of his career on internal tooling that never shipped externally, had never been on-call for anything a customer used, and had been kept away from production deploys by his own team. Three recruiter screens missed it. The cost of those misses, for the companies that interviewed him, was roughly a day of senior engineering time each, plus whatever they lost by not interviewing someone real in that slot.
That is not an unusual story. It is the normal outcome of how recruiter screens are currently run.
What most recruiter screens sound like
Here is a composite of a real recruiter screen, reconstructed from call notes candidates have forwarded to us over the years. Names changed, structure faithful:
"Hi, thanks for taking the call. I see you have about seven years of experience, is that right? Great. What's your favorite programming language? Python, got it. Are you open to remote or hybrid? And what are you looking for comp-wise? Okay, I'll take that back to the team. One last thing, can you tell me a little about yourself and what you're passionate about?"
That call lasted fourteen minutes. At the end of it, the recruiter knew the candidate's preferred language, desired salary, and location preference. They had learned nothing about whether the candidate could do the job. The candidate went forward because none of the disqualifiers triggered: they had the years, the keywords, and a number that fit the budget. Every real question was passed to the hiring panel to figure out in hour-long interviews with staff engineers.
That is an expensive way to fail. Engineering time is the most constrained resource at most companies. Burning four hours of a staff engineer's week to discover something that could have been caught in thirty minutes by a recruiter who knew what to listen for is a process failure, not a screening strategy.
What a technical recruiter screen is actually for
The recruiter screen is not a technical interview. We are not trying to replicate the panel. The panel exists to assess depth: system design ability, code quality, how the candidate collaborates with engineers under mild stress. We cannot and should not do that in thirty minutes over the phone.
What the recruiter screen is for is something different and arguably more valuable: filtering for signal quality. Can this person describe real work they have done, in real detail, in a way that holds together under follow-up questions? That single capability predicts the outcome of almost every later stage. Engineers who have actually built and operated systems talk about them in a particular way. Engineers who have mostly watched other engineers build things talk about them in a different way. The gap is obvious if you know what to listen for, and nearly invisible if you do not.
Our job, one stage before the panel, is to hear the gap. The output of a good recruiter screen is not a grade. It is a paragraph the hiring manager can read that says: here is what this engineer has actually done, in their own words, with the level of specificity they were capable of, and here is what I noticed. The hiring manager reads it and knows in two minutes whether to spend the panel time.
The five question categories that actually matter
We use five categories. Each one produces a different dimension of signal, and each one is designed to be hard to fake without having done the work.
1. Systems built and operated
Start here. The goal is to get the engineer describing, in the first person and in concrete detail, a system they owned end to end. Not contributed to. Owned.
Tell me about the most complex system you have been on-call for. Walk me through an incident you personally handled, start to finish.
- Strong signal: Names the system, the traffic pattern, the dependencies. Describes an incident with specificity: what alerted, what they checked first, what turned out to be wrong, what they did in the moment versus what they fixed afterward. Mentions things that were hard or embarrassing. Has a clear view of what they would change about the system now.
- Weak signal: Describes the system at a marketing level ("it was a high-throughput payments service"), stays in the passive voice ("the issue was resolved"), and cannot name specific tooling, specific queries, specific dashboards. Uses "we" for everything.
- Red flag: Has never been on-call, or says they were "technically on the rotation" but cannot recall a single incident. Claims every outage was caused by another team.
When that system was at its worst, what did the latency and error rate look like, and what was the SLO you were measured against?
- Strong signal: Answers in numbers without apologizing. Knows p50, p99, error budget, the SLO that mattered to the business. Can explain why that SLO was the right one.
- Weak signal: Vague numbers, no SLO, says "the team tracked that but I wasn't really in those meetings."
2. Tradeoff reasoning
Good backend engineers live in tradeoffs. Consistency versus availability. Latency versus cost. Build versus buy. Shipping the ugly thing now versus the clean thing in six weeks. An engineer who has operated at any level of seniority should be able to narrate a specific choice and the reasoning behind it.
Tell me about a technical decision where you had to choose between two approaches and it was genuinely close. Why did you pick the one you picked?
- Strong signal: Presents the two options, names what each was optimizing for, acknowledges what they gave up by choosing one. Has a clear model of the constraints (team size, timeline, operational load) that drove the answer. Will usually concede, unprompted, that they might pick differently now.
- Weak signal: Frames it as "the right choice vs the wrong choice" instead of a real tradeoff. No mention of what was sacrificed. Reasons given are post-hoc slogans ("we wanted to move fast") rather than specifics.
- Red flag: Cannot think of a close call. Every decision in their career was obvious in retrospect, and they always picked correctly.
3. Scope and ownership
This is the question that catches resume inflation faster than any other. A team shipped a thing. What, specifically, did you do? Not what the team did. Not what the company announced on its blog. What did your hands touch?
You listed that you "built the payments platform." Walk me through which parts of that you personally wrote, versus what your team built, versus what existed before you got there.
- Strong signal: Answers without flinching. Gives a clear split: "I owned the webhook ingestion layer and the retry logic. The idempotency key design came from another engineer. The UI and reconciliation tooling existed before I joined." Honest about inherited work.
- Weak signal: Restates the team's accomplishment in the first person. Cannot separate their contribution from the team's. Gets defensive at the framing.
- Red flag: Claims sole ownership of something obviously built by a team. Or, inversely, cannot name a single line of code or a single design decision that was definitively theirs.
4. Debugging mindset
Shipping a feature is one skill. Debugging a production system you did not expect to be debugging is a different one. The best engineers we place are distinguished less by what they know than by how they behave when they do not know.
Walk me through the hardest production bug you have ever debugged. Before you knew the answer, what was your approach? What did you check, in what order, and why?
- Strong signal: Describes a process, not a punchline. Talks about what hypotheses they formed early, which ones they ruled out, what surprised them. Names the tools they reached for (logs, traces, specific profilers, a carefully written SQL query against the audit table). Admits the wrong turns.
- Weak signal: Jumps to the answer. "Turned out to be a race condition." No narrative of investigation, no mention of what they checked first or why.
- Red flag: The "debugging" story is actually someone else discovering the bug and the candidate implementing the fix. Or, the bug was found by the customer reporting it and nothing interesting happened in between.
5. Growth and self-awareness
The last category is subtle and most recruiters skip it. We ask for it explicitly because it produces information no other question does: whether the engineer has a model of their own development, or is still narrating their career as a series of wins.
Think about a technical decision you made two or three years ago. What would you do differently now, and what changed in your thinking?
- Strong signal: Answers specifically. Names the decision, what they believed at the time, what evidence or experience changed their view. Usually the answer reveals a piece of engineering maturity (learning that operational simplicity beats elegance, or that the interface boundary matters more than the implementation).
- Weak signal: Gives a generic answer about "communication" or "working more with stakeholders." Nothing technical, nothing specific.
- Red flag: Cannot name anything they would do differently. Their career has been a straight line of correct calls.
Questions to skip entirely
A few questions have become standard in recruiter screens and produce no useful signal for engineering hires. We do not ask them, and neither should you:
- "What's your favorite programming language?" Tells you nothing about ability. The best backend engineers we have placed work in whatever language the problem requires and have mild, unremarkable preferences.
- "Describe yourself in three words." Tests whether the candidate has rehearsed a LinkedIn bio. That is not a skill you are hiring for.
- "Tell me about a time you had a conflict with a coworker." Almost nobody answers this honestly in a first call with a stranger. You will get a canned story. Save conflict-style questions for the behavioral round with the hiring manager, where stakes make honesty more likely.
- "What's your greatest weakness?" You will be told the candidate works too hard. Move on.
- "Where do you see yourself in five years?" Engineers good enough to have a real answer will give you a rehearsed one anyway. Engineers who do not care will give you a shrug. Either way, useless.
- Anything that can be answered with a single word or a salary number. If the question does not invite the candidate to narrate, it is a checkbox, not a question. Put it in the intake form.
What the call should feel like
A good recruiter screen lasts 30 to 45 minutes. The engineer talks for at least 70% of that time. The recruiter's job, mostly, is to listen, pick the most interesting thread in each answer, and probe one level deeper. "You mentioned you moved that service off Postgres. What forced the move?" That follow-up is where the signal actually lives. The surface answer is easy to fake. The answer to the follow-up almost never is.
A good screen also feels like a conversation to the engineer. Candidates tell us, often, that our screens do not feel like recruiter calls. That is deliberate. An engineer who is treated like an engineer will answer like one. An engineer who is treated like a resume will answer like a resume. The format of the call shapes the data you get from it.
The call should not feel like an interrogation. There is no trap, no gotcha, no trick question. We are not trying to catch the candidate out. We are trying to get them talking about work they are proud of, at the level of detail that proud work usually comes with. The engineers who have done real things enjoy these calls. The ones who have not tend to get shorter and vaguer as the call goes on. That itself is the signal.
How this connects to what we do
We built Engineers in AI around this screen because we were tired of seeing both sides lose. Companies pay 25% of first-year salary to agencies that send shortlists a staff engineer could have assembled in a weekend with LinkedIn Recruiter. Candidates get their time wasted in cycles that start with a keyword match and end with a rejection nobody explains. The broken piece, almost always, is the first call.
We charge 20% because the work we do at the screening stage means every candidate we forward is worth the hiring manager's time. Not "probably worth", not "worth checking". Actually worth it. When our shortlists arrive, the hiring manager reads a short narrative from us about what the engineer has built, in their own words, and decides in a few minutes whether to bring them on-site. That is what a technical recruiter should produce. It is harder than the current industry default, and it is the only version of this job we think is defensible.
If your recruiter screens currently sound like the composite we opened with, it is worth fixing, whether you fix it with us or on your own. The cost of letting a bad candidate through is a few hours of engineering time. The cost of screening out a good one is a quarter of lost productivity. The screen is where most of the leverage lives.