DevSecOps in Practice: What I Actually Implement


The concrete pipeline configs, policy-as-code patterns, and runtime controls I set up to bake security into delivery.

Quick take

Security as a final gate doesn’t work at enterprise scale. I’ve been building DevSecOps pipelines for telecom companies and drawing on patterns from earlier NATO-adjacent work. This post covers what I actually implement: pre-commit hooks for secrets, CI pipeline security stages with real configs, container scanning, runtime policy enforcement with OPA, and the triage system that makes all of it sustainable. The key insight is that security controls must be fast enough that developers don’t route around them.


I have a rule when I start working with a new team: before I recommend anything, I ask the team to show me how code gets from a developer’s laptop to production. Every step. Every tool. Every credential.

The answer is almost always incomplete. Not because people are hiding things, but because nobody has mapped the full path recently. That incomplete picture is where security gaps live.

DevSecOps isn’t a product you buy. It’s the practice of embedding security checks into the workflow developers already use. If it slows them down, they’ll circumvent it. If it’s invisible and fast, they’ll barely notice it’s there. That’s the target.

Layer 1: the developer’s machine

The cheapest place to catch a security issue is before it ever leaves the developer’s laptop. Pre-commit hooks are the first line.

Here is what I install on every project:

# .pre-commit-config.yaml
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.2.0
    hooks:
      - id: gitleaks
  - repo: https://github.com/hadolint/hadolint
    rev: v2.8.0
    hooks:
      - id: hadolint
        args: ['--ignore', 'DL3008']
  - repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: v1.62.3
    hooks:
      - id: terraform_validate
      - id: terraform_tfsec

Gitleaks catches secrets before they hit Git history. Hadolint catches Dockerfile anti-patterns. Terraform validation and tfsec catch infrastructure misconfigurations.

The critical design principle: these hooks must run in under 10 seconds. If a pre-commit hook takes 30 seconds, developers disable it. I’ve seen this happen at every organization I’ve worked with. Speed isn’t optional.

For Go projects specifically, I also add go vet, staticcheck, and gosec as pre-commit checks. Gosec catches common security issues like SQL injection patterns and hardcoded credentials.
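These can be wired in as local pre-commit hooks. A sketch, assuming go, staticcheck, and gosec are already on the developer's PATH:

```yaml
# appended to .pre-commit-config.yaml: run the Go tools as local hooks
  - repo: local
    hooks:
      - id: go-vet
        name: go vet
        entry: go vet ./...
        language: system
        pass_filenames: false
        files: '\.go$'
      - id: staticcheck
        name: staticcheck
        entry: staticcheck ./...
        language: system
        pass_filenames: false
        files: '\.go$'
      - id: gosec
        name: gosec
        entry: gosec ./...
        language: system
        pass_filenames: false
        files: '\.go$'
```

The files filter means none of these run on commits that touch no Go code, which keeps the under-10-seconds budget intact.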

Layer 2: CI pipeline security stages

Pre-commit hooks are a safety net, not a gate – a developer can always skip them with git commit --no-verify. The real enforcement happens in CI. Here is the pipeline structure I implement, using GitHub Actions as the example:

# .github/workflows/security.yml
name: Security Checks
on: [push, pull_request]

jobs:
  secrets-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

  sast:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run gosec
        uses: securego/gosec@master
        with:
          args: '-exclude-generated ./...'
      - name: Run Semgrep
        uses: returntocorp/semgrep-action@v1
        with:
          config: >-
            p/owasp-top-ten
            p/golang

  dependency-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run Trivy for dependencies
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'fs'
          severity: 'CRITICAL,HIGH'
          exit-code: '1'

  container-scan:
    runs-on: ubuntu-latest
    needs: [sast]
    steps:
      - uses: actions/checkout@v3
      - name: Build image
        run: docker build -t app:${{ github.sha }} .
      - name: Scan image
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: 'app:${{ github.sha }}'
          severity: 'CRITICAL,HIGH'
          exit-code: '1'

Four jobs: secrets scanning, static analysis, dependency scanning, and container scanning. The first three run in parallel; the container scan waits for SAST to pass – no point scanning an image if the code has critical issues.

The key decision is the gating policy. I use a two-tier approach:

  • Critical and High severity: block the pipeline. The PR can’t merge.
  • Medium and Low: create a tracking issue automatically, assign it to the team, set a remediation deadline. Don’t block the pipeline.

This is the balance that makes DevSecOps sustainable. If you block on everything, teams stop caring about the signal because it’s always red. If you block on nothing, the findings pile up and nobody fixes them.
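The non-blocking tier can be automated in the same workflow. A sketch of the idea – the job name and labels are illustrative, and assigning an owner and remediation deadline would extend the script:

```yaml
# sketch: non-blocking tier – report medium/low findings as a tracking issue
  advisory-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Scan at medium/low severity (never fails the build)
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'fs'
          severity: 'MEDIUM,LOW'
          exit-code: '0'
          output: 'findings.txt'
      - name: File a tracking issue if anything was found
        if: hashFiles('findings.txt') != ''
        uses: actions/github-script@v6
        with:
          script: |
            const fs = require('fs');
            const body = fs.readFileSync('findings.txt', 'utf8');
            if (body.trim()) {
              await github.rest.issues.create({
                owner: context.repo.owner,
                repo: context.repo.repo,
                title: `Security findings (medium/low) – ${context.sha.slice(0, 7)}`,
                body,
                labels: ['security', 'triage'],
              });
            }
```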

At one telecom company, we went from zero automated security checks to full pipeline coverage in three weeks. The first week was brutal – the initial scan found over 200 findings across their repositories. We triaged them into buckets, fixed the critical ones immediately, and created a backlog for the rest. Within a month, new PRs were consistently clean.

Layer 3: infrastructure and runtime controls

Static analysis catches what it can see. It can’t see runtime behavior, configuration drift, or privilege escalation. That’s where policy-as-code and runtime controls come in.

I use Open Policy Agent (OPA) with Gatekeeper for Kubernetes-based deployments. Here is a constraint that prevents containers from running as root:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPAllowedUsers
metadata:
  name: require-non-root
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    excludedNamespaces: ["kube-system"]
  parameters:
    runAsUser:
      rule: MustRunAsNonRoot

And one that requires resource limits on every container:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sContainerLimits
metadata:
  name: require-limits
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    cpu: "2"
    memory: "2Gi"

These constraints are enforced at admission time. If a deployment tries to run a container as root, the API server rejects it before anything schedules. No “we’ll fix it later.” Outside the few explicitly excluded system namespaces, the cluster enforces the policy with no exceptions.
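Constraint kinds like K8sPSPAllowedUsers aren’t built into Gatekeeper; each comes from a ConstraintTemplate carrying the Rego that evaluates it (the gatekeeper-library project ships both of the ones above). A simplified sketch of what the non-root template checks – the real template also honors the runAsUser parameters and init containers:

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8spspallowedusers
spec:
  crd:
    spec:
      names:
        kind: K8sPSPAllowedUsers
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8spspallowedusers

        # flag any container that does not declare runAsNonRoot
        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          not container.securityContext.runAsNonRoot
          msg := sprintf("container %v must set runAsNonRoot", [container.name])
        }
```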

For network policies, I default to deny-all and explicitly allow what is needed:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress

Then each service gets a specific policy that allows only its known dependencies. This is tedious to set up and worth every minute. When a service gets compromised, lateral movement is limited by the network policy. I learned this pattern during work that touched NATO security frameworks – the principle’s the same regardless of scale. Default deny. Explicit allow. No implicit trust.
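A per-service policy then opens only the known paths. A sketch for a hypothetical api pod that accepts traffic from an ingress gateway and talks only to its database (all label names illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow            # hypothetical service
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: ingress-gateway
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: postgres
      ports:
        - protocol: TCP
          port: 5432
```

One practical note: under default-deny you also need an egress rule allowing DNS to kube-dns, or service discovery breaks for every pod.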

Layer 4: triage and ownership

This is where most DevSecOps implementations die. The tools work. The scans run. The findings pile up. Nobody owns them.

My triage system is simple:

  • Critical: fix within 24 hours. Page the on-call if necessary. These are actively exploitable issues in production code.
  • High: fix within one sprint. Track in the regular backlog. Team lead owns the follow-up.
  • Medium: fix within 30 days. Batch these into security-focused work blocks.
  • Low: fix when convenient, or accept the risk with documentation.

Every finding gets an owner. Not a team – a person. “The security team will handle it” is how findings go to die.

I also set up a weekly triage meeting. Fifteen minutes. Review new findings, check progress on open items, and escalate anything that’s stuck. At one company, this meeting reduced the median age of open vulnerabilities from 47 days to 11 days in two months. Not because we added more tools. Because someone was consistently asking “is this fixed yet?”

Secrets management

Secrets in source control are the most common security finding I see. Hardcoded API keys, database passwords in config files, AWS credentials in test fixtures. It happens everywhere.

The fix is structural, not behavioral. Don’t tell developers “be careful with secrets.” Give them a system that makes it hard to leak secrets in the first place.

  • All secrets live in a vault (HashiCorp Vault, AWS Secrets Manager, whatever fits your stack).
  • Applications pull secrets at runtime, never at build time.
  • CI credentials are scoped to individual jobs and rotate automatically.
  • Pre-commit hooks catch anything that slips through.
  • Git history scanning runs on the full repository, not just the diff. Because secrets removed in a later commit are still in the history.
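The detection itself is pattern matching over content. A toy version of a single gitleaks-style rule – the fixed format of AWS access key IDs – just to make the mechanism concrete:

```go
package main

import (
	"fmt"
	"regexp"
)

// awsAccessKeyID matches the fixed format of AWS access key IDs:
// a known four-letter prefix followed by 16 uppercase letters/digits.
var awsAccessKeyID = regexp.MustCompile(`\b(AKIA|ASIA)[0-9A-Z]{16}\b`)

// findSecrets returns every candidate secret found in content.
func findSecrets(content string) []string {
	return awsAccessKeyID.FindAllString(content, -1)
}

func main() {
	// A real scanner walks every blob in git history, not just the
	// working tree – secrets deleted in a later commit still persist.
	sample := `db_pass = "hunter2"
aws_key = "AKIAIOSFODNN7EXAMPLE"`
	for _, s := range findSecrets(sample) {
		fmt.Println("possible secret:", s)
	}
}
```

Real scanners layer dozens of such rules plus entropy heuristics on top of this; the point is that the check is cheap enough to run on every commit and every historical blob.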

What I’ve learned from doing this repeatedly

The technical setup is the easy part. The hard part is culture.

Developers aren’t the enemy of security. They’re the delivery mechanism for security. If you treat them as adversaries – locking everything down, blocking every build, requiring approvals for trivial changes – they’ll find workarounds. Shadow builds. Personal accounts. Manual deployments that skip the pipeline entirely.

The organizations that succeed with DevSecOps are the ones that make secure the default and make the default fast. Secure container base images that are pre-approved. Templates that include security configs out of the box. Scanning that runs in parallel with tests instead of after them.
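“Secure by default” can be as small as the template Dockerfile teams start from. A sketch using a multi-stage build onto a distroless base – paths and the cmd/server entry point are illustrative:

```dockerfile
# build stage: full toolchain, never shipped
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# runtime stage: minimal image that runs as a non-root user out of
# the box, so the Gatekeeper non-root constraint passes by default
FROM gcr.io/distroless/static:nonroot
COPY --from=build /app /app
USER nonroot
ENTRYPOINT ["/app"]
```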

Security should feel like good tooling. When it does, adoption is automatic.