Stop Wasting Everyone's Time in Technical Interviews


Most technical interviews test the wrong things. After years of hiring engineers at a fintech startup, here's what I've learned actually predicts job performance.

Last month I sat across from a candidate who’d built distributed systems handling millions of financial data points. Impressive resume. Real production experience. Then we asked him to reverse a binary tree on a whiteboard, and he froze. We almost passed on him. That would have been a terrible mistake.

He’s one of our best hires.

That experience broke something in how I thought about interviewing. At the fintech startup we were growing the engineering team fast, and I kept seeing the same pattern: candidates who crushed whiteboard problems but couldn’t debug a real issue, and candidates who struggled with algorithmic puzzles but wrote clean, thoughtful code every day.

Something was deeply wrong with how our industry interviews people.

The stuff that doesn’t work

Whiteboard algorithms. I get why they’re popular. They feel rigorous. But think about what you’re actually testing. You’re testing whether someone recently crammed LeetCode. You’ve stripped away their IDE, their docs, their ability to Google something. You’ve removed every tool they’d use on the job and then judged them on the result. That’s not an interview. That’s hazing.

Brain teasers. “How many golf balls fit in a school bus?” If you’re still asking these in 2018, please stop. Google dropped them years ago after their own data showed zero correlation with job performance. Zero.

Trivia questions. “What’s the time complexity of HashMap.get() in Java?” Cool, they memorized it. Or they didn’t. Either way you’ve learned nothing about whether they can build software.

Massive take-home projects. I’ve seen companies send 8-hour assignments and call it “a small exercise.” You’re not testing skill. You’re filtering for people with no life outside work. Single parents, people with side commitments, anyone who values their free time – they just opt out. And you never even know what you missed.

What actually predicts performance

I’ve been iterating on our process at the fintech startup for a while now. Here’s what’s worked.

Give them a laptop and a real problem

This is the single biggest improvement we made. Instead of a whiteboard, we hand candidates a laptop with an IDE, internet access, and a small codebase. The task is something close to what they’d actually do on the job. Add a feature. Fix a bug. Review a pull request.

You learn so much more this way. How do they read unfamiliar code? Do they check edge cases? How do they use their tools? Do they ask good questions when stuck?

One of our best interview tasks is a broken API endpoint with a failing test. The candidate gets logs, the codebase, and 45 minutes. We’re not looking for a perfect fix. We’re watching how they think.
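To make the format concrete, here's a minimal sketch of what that kind of task can look like. The endpoint, field names, and bug are all invented for illustration; the point is that the candidate gets working-looking code, a failing case, and real tools to investigate with.

```python
# Hypothetical interview task: a small handler with a subtle bug.
# The candidate gets this code plus a failing test and has to find
# and fix the problem. Names are invented for illustration.

def summarize_transactions(transactions):
    """Return total and average amount for a list of transactions.

    Bug: crashes with ZeroDivisionError when the list is empty --
    exactly the edge case the failing test exercises.
    """
    total = sum(t["amount"] for t in transactions)
    return {"total": total, "average": total / len(transactions)}
```

A strong candidate reads the traceback, reproduces the failure with an empty list, and decides (ideally out loud) what the right behavior for the empty case should be. That decision matters more to us than the one-line fix itself.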

Pair with them

Pair programming sessions are gold. You pick a small problem, sit next to the candidate, and work on it together. Not as an examiner – as a collaborator.

This tells you things no other format can. How do they handle suggestions? Do they explain their thinking? Can they take feedback without getting defensive? These are the things that actually matter when you’re shipping code with a team.

Talk about what they’ve already built

For senior roles especially, I’ve started spending more time on deep dives into past work. Pick a system they built. Ask them to walk you through the architecture. Then start pulling threads. Why this database? What would you change? What broke in production?

Good engineers light up during these conversations. They remember the tradeoffs. They’ll tell you about the decision they regret. The ones who padded their resume get vague fast.

Use rubrics, not vibes

Early on at the fintech startup, our debriefs were a mess. “I liked her.” “He seemed smart.” “I don’t know, something felt off.” That’s not a hiring decision. That’s a gut check, and guts are biased.

Now every interviewer fills out a rubric before the debrief. Same questions, same scale. You write down specific evidence, not impressions. It’s not glamorous, but it’s the difference between a process and a coin flip.
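A rubric like this doesn't need tooling, but even a tiny script shows the shape of the idea: every criterion gets a score on a fixed scale, and every score must be backed by written evidence. The criteria and the 1-4 scale below are just examples, not our actual rubric.

```python
# Minimal sketch of a structured interview rubric, assuming a 1-4
# scale and example criteria -- adapt both to your own process.

RUBRIC_CRITERIA = ["code_reading", "debugging", "communication", "collaboration"]

def score_interview(scores, evidence):
    """Validate a completed rubric and return the mean score.

    scores:   {criterion: int in 1..4}
    evidence: {criterion: str} -- every score needs a concrete
              written observation, not an impression.
    """
    for c in RUBRIC_CRITERIA:
        if c not in scores:
            raise ValueError(f"missing score for {c}")
        if not 1 <= scores[c] <= 4:
            raise ValueError(f"score for {c} out of range")
        if not evidence.get(c, "").strip():
            raise ValueError(f"no written evidence for {c}")
    return sum(scores.values()) / len(scores)
```

The validation is the whole point: "I liked her" fails the check because there's no criterion it attaches to and no evidence behind it.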

Designing the loop

Keep it tight. Nobody needs a six-round interview for a backend role.

Here’s roughly what we do:

  • Phone screen (30-45 min): Can this person communicate? Do they have the baseline?
  • Work sample (60-90 min): Laptop, real problem, real tools.
  • Deep dive (45 min): Walk me through something you built.
  • Team fit (45 min): Values, collaboration style, how they handle disagreement.

For senior roles, add a system design conversation. Let them drive. Ask about tradeoffs, not trivia.

Train your interviewers

This one’s overlooked constantly. Most engineers are terrible interviewers. Not because they’re bad people, but because nobody taught them. They default to whatever they experienced as a candidate, which was probably also bad.

At the fintech startup we started having new interviewers shadow experienced ones. We review rubrics together. We debrief on the debrief. It takes time, but the signal quality goes way up.

Candidate experience is your reputation

I can’t count how many times a candidate told me about a horrible interview experience at some other company. Ghosted after three rounds. Left waiting in a lobby for 40 minutes. Given a problem that had nothing to do with the role.

Every candidate who walks out of your office talks about it. To their friends, to their colleagues, sometimes on Glassdoor. At a startup where we’re competing with bigger names for talent, that reputation matters enormously.

Be clear about the process upfront. Respect their time. If you’re not making an offer, tell them why. It costs you nothing and it’s the right thing to do.

Watch out for these traps

Hiring for similarity. If your whole team went to the same three schools and thinks the same way, you don’t have a team. You have an echo chamber. Diverse panels, evidence-based decisions, focus on complementary strengths.

Credential worship. A degree from a top university tells you someone got into a top university. It doesn’t tell you they can debug a production outage at 2am or write code their teammates can actually read.

Letting one bad interview tank a candidate. Single data points are noisy. If four interviews went great and one was mediocre, look at the full picture. Maybe the interviewer had an off day. Maybe the question was bad. Aggregate the signal.
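One way to make "aggregate the signal" mechanical is to decide on a per-round score, take the median across rounds, and flag outlier rounds for discussion instead of letting them veto. The bar and outlier threshold below are invented for illustration, not a recommendation.

```python
# Sketch of aggregating per-round interview scores so one outlier
# round doesn't sink a candidate. Thresholds are illustrative only.
from statistics import median

def hiring_signal(round_scores, bar=3.0, outlier_gap=1.5):
    """round_scores: list of rubric averages (1-4), one per round.

    Uses the median so a single bad (or great) round moves the
    decision less than a mean would; rounds far from the median get
    flagged for discussion in the debrief rather than counted blindly.
    """
    med = median(round_scores)
    flagged = [s for s in round_scores if abs(s - med) >= outlier_gap]
    return {"median": med, "pass": med >= bar, "rounds_to_discuss": len(flagged)}
```

With four strong rounds and one mediocre one, the candidate still clears the bar, and the debrief spends its time on the one round that disagreed.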

The actual point

Technical interviewing isn’t some unsolvable problem. It’s just that most companies copy what everyone else does without asking whether it works. Test real work. Use consistent evaluation. Treat candidates like humans. The bar isn’t that high, and yet most of our industry still can’t clear it.