Stop Doing Security Reviews by Hand


Your manual security gate is a bottleneck pretending to be a process. Here's how I moved security checks into the pipeline at the fintech startup so we could ship fast without shipping stupid.

Quick take

Automate the boring security checks. Put humans on the hard problems. Ship faster with fewer holes.

NATO cyber defense environments are rigid, process-heavy, approval-driven. Everything gated. Everything slow. And honestly? For classified systems, that made sense.

Fintech isn’t classified systems.

At the fintech startup we ship multiple times a day. We handle financial data, user accounts, payment flows. The attack surface is real. But if I made every deploy wait for a manual security review, we’d ship once a week. Maybe. And the devs would hate me. Rightfully so.

So I stopped pretending manual review scales and started building security into the pipeline itself.

The Problem with Manual Gates

Here’s what a manual security review actually looks like at a startup doing continuous delivery: a Slack message saying “hey can you look at this PR?” followed by me context-switching from whatever I’m actually working on, skimming the diff, and approving it because I’ve got six other things on fire.

That’s not security. That’s theater.

Manual review is slow. It’s inconsistent – I catch different things depending on whether I’ve had coffee. It doesn’t leave an audit trail beyond a thumbs-up emoji. And it absolutely can’t cover every commit when you’re pushing code dozens of times a day.

The goal isn’t to remove security engineers from the picture. It’s to stop wasting their time on things a script can catch.

What I Actually Built

Security in a pipeline isn’t one tool. It’s layers. You want cheap, fast checks early and heavier analysis later. The key is that none of it requires a human to manually trigger.

Pre-commit: catch secrets before they leave the laptop. This is the single highest-value automation I’ve ever set up. One leaked API key in a public repo and you’re having a very bad week. At the fintech startup, dealing with financial APIs, a leaked key could mean real money walking out the door.

#!/bin/sh
# .git/hooks/pre-commit
./scripts/scan-secrets.sh || exit 1

Dead simple. Runs in under a second. Saves you from the kind of incident that makes the news.
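The post never shows `scan-secrets.sh` itself, so here's a minimal sketch of what one might look like – the pattern list and the `scan_diff` name are mine, and a real setup would lean on a dedicated scanner like gitleaks or trufflehog rather than hand-rolled regexes:

```shell
#!/bin/sh
# Hypothetical scripts/scan-secrets.sh -- the patterns are illustrative,
# not exhaustive: AWS access key IDs, private key headers, api_key assignments.
PATTERNS='AKIA[0-9A-Z]{16}|BEGIN (RSA|EC|OPENSSH) PRIVATE KEY|api[_-]?key[^=]*='

# scan_diff: read a diff on stdin, fail if a likely credential appears
scan_diff() {
  if grep -E -q "$PATTERNS"; then
    echo 'Possible secret in staged changes -- commit blocked.' >&2
    return 1
  fi
  return 0
}

# In the hook itself you would pipe the staged changes through it:
#   git diff --cached | scan_diff || exit 1
```

The win is the exit code: any nonzero status from the scan aborts the commit before the secret ever reaches the remote.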

PR checks: block the merge if it’s dirty. Static analysis, dependency audit, container scan. All automated. All required to pass. Not allow_failure: true – that’s a suggestion, not a gate.

security-checks:
  script:
    - ./scripts/sast.sh
    - ./scripts/dependency-audit.sh
    - ./scripts/container-scan.sh
  allow_failure: false

In NATO cyber defense there’s this concept of “deny by default.” Same principle here. The merge doesn’t happen unless the checks are green. No exceptions, no “I’ll fix it later” PRs.

Post-merge: the heavy stuff. DAST against staging. Security-focused integration tests. Things like “can an unauthenticated user hit this admin endpoint?” and “are the CORS headers actually set right?” Static analysis can’t catch these. You need a running application.
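A sketch of that kind of test – the endpoint, the `STAGING_URL` variable, and the helper name are all hypothetical: hit the admin endpoint with no credentials and fail the job unless the app rejects it outright.

```shell
#!/bin/sh
# Hypothetical staging check: an unauthenticated request to an admin
# endpoint must come back 401 or 403, nothing else.
check_unauth_status() {
  # $1 = HTTP status code returned for the unauthenticated request
  case "$1" in
    401|403) return 0 ;;  # properly rejected
    *)       return 1 ;;  # 200, 302, 500... all findings
  esac
}

# Real usage against a running staging environment:
#   status=$(curl -s -o /dev/null -w '%{http_code}' "$STAGING_URL/admin")
#   check_unauth_status "$status" || { echo 'admin endpoint is open' >&2; exit 1; }
```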

In production, runtime monitoring and log analysis close the loop. Security becomes a continuous signal, not a checkbox someone ticked three sprints ago.
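One cheap version of that continuous signal (the log format, the window, and the threshold are all assumptions, not anything the post prescribes): count authentication failures over the recent log window and alert past a baseline.

```shell
#!/bin/sh
# Sketch: turn auth logs into a number you can alert on. The 'auth failure'
# line format is an assumed convention.
count_failed_logins() {
  # stdin = one log line per event; prints the number of failure lines
  grep -c 'auth failure'
}

# Real usage, e.g. from a scheduled job over the last window of logs
# (the alert command and the threshold of 20 are placeholders):
#   n=$(tail -n 1000 /var/log/app/auth.log | count_failed_logins)
#   [ "$n" -gt 20 ] && alert "possible credential stuffing: $n failures"
```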

Making It Stick

Here’s where most teams screw up: they turn on everything at once, get 400 findings on the first run, and the entire dev team starts ignoring the output within a week.

Don’t do that.

Start with one check. Make it reliable. Make the failure messages actually useful – not “vulnerability found” but “this dependency has a known RCE, upgrade to version X, here’s the CVE link.” The difference between a useful pipeline and an annoying one is whether the developer knows what to do when it fails.
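A sketch of the difference – the helper name and field layout are my own: instead of surfacing the scanner's raw output, print the package, the fix, and a link.

```shell
#!/bin/sh
# Sketch: format a finding so the developer knows what to do next.
explain_finding() {
  # $1 = package, $2 = first fixed version, $3 = CVE id
  printf 'Vulnerable dependency: %s\n' "$1"
  printf '  Fix: upgrade to %s or later\n' "$2"
  printf '  Details: https://nvd.nist.gov/vuln/detail/%s\n' "$3"
}
```

Wrap your scanner's JSON output in something like this and the pipeline failure becomes a to-do item instead of a mystery.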

At the fintech startup I rolled checks out one at a time over a few months. Secret scanning first. Then dependency auditing. Then SAST. Each one tuned, false positives suppressed, team trained on the output before adding the next. Boring? Yes. Effective? Absolutely.

Build a suppression process too. Some findings are false positives. Some are accepted risks. That’s fine. But track the suppressions and review them. A suppression list that only grows is a red flag.
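One way to keep that list honest – the file format and helper name are my own convention: give every suppression a review date, and fail the pipeline when one lapses.

```shell
#!/bin/sh
# Sketch: a suppression list where every entry carries a review date.
# Assumed line format: "<finding-id> <YYYY-MM-DD> <reason...>"
check_suppressions() {
  # $1 = suppression file, $2 = today's date as YYYY-MM-DD
  today=$(printf '%s' "$2" | tr -d -)
  status=0
  while read -r id review_date reason; do
    [ -z "$id" ] && continue
    # strip dashes so dates compare as plain integers (20240601 etc.)
    if [ "$(printf '%s' "$review_date" | tr -d -)" -lt "$today" ]; then
      echo "Suppression $id lapsed on $review_date -- re-review: $reason" >&2
      status=1
    fi
  done < "$1"
  return $status
}
```

An expired suppression fails the build the same way a fresh finding would, which forces the review the list would otherwise never get.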

Humans on the Hard Problems

Scanners catch known patterns. SQL injection signatures, outdated libraries, hardcoded credentials. Important stuff. But they can’t think.

Threat modeling is a human job. Reviewing the authentication architecture of a new feature – human job. Penetration testing that actually simulates an attacker’s creativity – human job. Incident response when something real happens at 2 AM – very much a human job.

The whole point of automating the repetitive checks is to free up time for this work. In NATO cyber defense, dedicated teams handle threat analysis without getting bogged down running vulnerability scanners by hand. Same principle at a startup, except the “dedicated team” is me and maybe one other person. Which is exactly why automation matters even more.

The Payoff

Before automation, security at the fintech startup was me trying to review everything and inevitably missing things. After, every single commit gets checked. Every dependency gets audited. Every container gets scanned. And I spend my time on architecture reviews and threat modeling instead of eyeballing diffs.

The pipeline doesn’t get tired. It doesn’t get distracted. It doesn’t approve a PR because it’s Friday afternoon and everyone wants to go home.

Automate what machines do better. Save human judgment for what actually needs it. That’s the whole strategy.