What I Learned About Code Reviews the Hard Way


Most code reviews are theater. Here's how we fixed ours at the fintech startup and what actually made a difference.

Most code reviews are theater. Someone opens a PR, someone else skims it, leaves an “LGTM,” and everyone moves on feeling productive. I’ve been guilty of it. You probably have too.

At the fintech startup, we went through a painful period where our reviews were either rubber stamps or nitpick wars. Neither helped. We’d ship bugs that a real review would have caught, then overcorrect into hour-long debates about variable naming. It took us a while to find the middle ground, and I want to share what actually worked.

Reviews exist to manage risk

That’s it. Not to prove you’re clever. Not to enforce your preferred bracket style. Risk.

When I review code now, I’m asking myself a few things: Will this break something in production? Will someone three months from now understand what this does? Are the tests actually testing the right behavior?

The highest-value areas to focus on:

  • Data correctness. Anything touching migrations or money gets extra scrutiny. Full stop.
  • Security and access control. One missed auth check can undo months of good work.
  • Error handling. The happy path is easy. The question is what happens when things go wrong.
  • Performance. That innocent-looking query inside a loop? It will bite you.

Everything else – formatting, import order, naming conventions – automate it. Seriously. Every minute a human spends arguing about tabs vs spaces is a minute not spent catching a real bug.

How I actually read a PR

I have a simple routine. Read the description first to understand what the change is supposed to do. Then jump to the tests – they tell you what the author thinks the behavior should be. Then trace the main code path. Finally, look at error handling and edge cases.

This order matters. If you start reading code line by line without context, you’ll waste time on details that don’t matter.

Giving feedback that doesn’t suck

We had a problem at the fintech startup where review comments were either too vague or too aggressive. “This is wrong” helps nobody. “This function does three things, consider splitting parsing from validation so each has one job” – that’s actionable.

A few things that changed our review culture for the better:

Label your comments. We started tagging things as [blocker], [suggestion], or [nit]. Sounds small. Made a huge difference. Suddenly people knew which comments were “fix this or we don’t merge” versus “take it or leave it.” Removed so much unnecessary back-and-forth.

Ask questions instead of making demands. “Is there a reason we check this condition twice?” lands differently than “Remove the redundant check.” Maybe there’s a reason. Maybe the author knows something you don’t.

Explain the why. Don’t just say “use a map here.” Say “this lookup is O(n) inside the loop, a map makes it O(1), and this list will grow.” Now the author learns something instead of just obeying.
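That "explain the why" comment translates to code like this. A hypothetical sketch in Python (the function and field names are made up):

```python
# Before: O(n) scan of `orders` for every id in `order_ids`.
def find_slow(orders, order_ids):
    return [o for oid in order_ids for o in orders if o["id"] == oid]

# After: build the map once, then each lookup is O(1).
def find_fast(orders, order_ids):
    by_id = {o["id"]: o for o in orders}
    return [by_id[oid] for oid in order_ids]

orders = [{"id": i, "total": i * 10} for i in range(1000)]
assert find_slow(orders, [3, 7]) == find_fast(orders, [3, 7])
```

With the comment explaining the complexity, the author can verify the claim themselves instead of taking it on faith.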

Making your PR easy to review

This is the part most people skip, and it drives me nuts. If your PR is 800 lines with no description and no tests, you’re not going to get a good review. You’re going to get a tired reviewer clicking approve to clear their queue.

Keep changes small and focused. Self-review your own diff before asking someone else to look at it. You’d be amazed how many issues you catch yourself. Write a description that explains what problem you’re solving, what tradeoffs you made, and how you tested it.

At the fintech startup we started using a dead-simple PR template:

## What does this solve?
## Key decisions
## How I tested it

Three questions. Takes two minutes to fill out. Cuts review time significantly because the reviewer isn’t guessing at intent.

The team stuff

Review turnaround matters. We set a norm: small PRs get reviewed same day. Bigger ones within 48 hours. When reviews sit for days, people context-switch, branches diverge, and merging becomes painful. Fast reviews keep everyone moving.

Let CI handle the boring stuff. Linting, formatting, type checks, security scanning – all automated. The reviewer’s job is to think about things machines can’t: design, intent, risk.
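As a sketch of what that automation can look like, here is a hypothetical GitHub Actions job; the tool names (ruff, mypy) are examples, so swap in whatever your stack uses:

```yaml
# Hypothetical CI job: runs lint, formatting, and type checks on every PR
# so human reviewers never have to comment on them.
name: checks
on: [pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install ruff mypy
      - run: ruff check .   # linting and formatting
      - run: mypy .         # type checks
```

Once these gates are required to merge, "fix the import order" comments disappear from reviews entirely.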

Disagreements happen. When two people can’t agree, we focus on the goal of the change, not personal taste. If we’re still stuck, we pull in a third person, make a call, and write it down so we don’t relitigate it next sprint.

My quick mental checklist

Before I approve anything, I run through this:

  • Do I understand what this change is trying to do?
  • Does the code actually do that?
  • Are edge cases handled?
  • Do the tests cover the risky parts?
  • Will this be readable in six months?

If I can answer yes to all five, I approve. If not, I comment. Simple as that.

Code reviews are a conversation about risk and clarity. When we stopped treating them as gatekeeping rituals and started treating them as collaborative problem-solving, everything got better. Fewer bugs, faster merges, less friction. That’s the whole trick.