I’ve sat on both sides of the technical due diligence table. As a founder at Dropbyke and a fintech startup, I’ve been the one sweating while investors poked through our codebase. As a consultant, I’ve been the one poking. The view is different from each chair, and most advice about due diligence gets it wrong because the author has only sat in one.
Quick Take
Technical due diligence isn’t a code audit. It’s a risk conversation. Investors want to know if the technology can do what the pitch deck promises, whether the team can sustain it, and what breaks first under growth. If you’re being evaluated, lead with honesty. If you’re evaluating, ignore code style and focus on architecture, people, and the gap between plan and reality.
What Investors Actually Care About
Here is what doesn’t matter: whether you use tabs or spaces, whether your code coverage is 80% or 90%, whether your variable names are poetic. I’ve never seen a deal die over formatting.
Here is what does matter: Can the product do what the sales team is promising customers today? Can the team extend it to do what the business plan says it will do in eighteen months? What are the risks that could slow that down or stop it entirely?
That’s really it. Everything else is a detail that feeds into one of those three questions.
Stage changes the lens considerably. When I went through diligence at Dropbyke, we were early. The codebase was scrappy. Some of our deployment was still manual. Nobody cared. The investors wanted to know if we understood our technical risks, if we could ship fast, and if the founding team had the judgment to make sound architecture decisions under pressure. At growth stage, the bar moves. Multiple engineers need to work in the codebase without stepping on each other. Deployments need to be repeatable. Security can’t be an afterthought. But even then, perfection isn’t the standard. The standard is: does this team know what they are doing, and are they making reasonable tradeoffs for their stage?
Preparing When You’re Being Evaluated
I learned this the hard way at the fintech startup. Our first due diligence session was a scramble. An investor’s technical advisor asked for an architecture diagram and we didn’t have one. Not because the architecture was bad, but because nobody had written it down. We spent half the session whiteboarding basics instead of talking about the interesting parts of our ML pipeline. That was a waste.
Prepare a one-pager: architecture overview showing major components and data flow, stack list with brief rationale for key choices, infrastructure and deployment notes, team structure, and how a feature moves from idea to production. You can write this in an afternoon and it will cover 80% of what reviewers ask for.
Surface your weaknesses before they surface them. Every startup has technical debt. Every startup has areas where security is thinner than it should be. Every startup has a bus factor problem somewhere. If you say “we know our payment integration is fragile and here is our plan to fix it in Q1,” that builds trust. If the reviewer discovers it and you look surprised, that destroys trust. I’ve seen founders try to hide known problems. It never works. Good technical reviewers will find them, and then you have two problems: the weakness itself and the fact that you tried to hide it.
Clean up the obvious things. Remove dead code. Make sure there are no credentials committed to the repo. Fix the broken tests that have been ignored for months. Reviewers will sample, not read every line, but first impressions carry weight. A repo with secrets in the commit history signals carelessness.
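A basic version of that secrets sweep is easy to script yourself. Here is a minimal sketch in Python; the patterns and the `scan_tree` name are illustrative, not exhaustive, and purpose-built tools such as gitleaks go further by scanning the full commit history rather than just the working tree:

```python
import re
from pathlib import Path

# Illustrative patterns only -- real scanners ship hundreds of rules.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
    re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
    re.compile(r"(?i)(api[_-]?key|secret|password)\s*[:=]\s*['\"][^'\"]{8,}"),
]

def scan_tree(root: str) -> list[tuple[str, int, str]]:
    """Return (path, line number, matched pattern) for suspect lines."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or ".git" in path.parts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for pattern in SECRET_PATTERNS:
                if pattern.search(line):
                    hits.append((str(path), lineno, pattern.pattern))
    return hits
```

The point is not the script itself but the habit: run a sweep like this, or a real scanner, before a reviewer does, because a hit in commit history requires rewriting history or rotating the credential, not just deleting the line.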
Get your IP house in order. Employee IP assignments should be signed. Third-party licenses should be compatible with your business model. If anyone copied code from a previous employer, deal with that now. This is one of the few areas where a due diligence finding can actually kill a deal.
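One concrete slice of that homework can be automated: sweeping your installed dependencies for copyleft-style license terms that deserve a closer look. This sketch uses Python’s standard `importlib.metadata`; the `FLAG_TERMS` list and the function name are illustrative, and actual compatibility with your business model is a question for counsel, not a script:

```python
from importlib import metadata

# Illustrative terms worth a closer look; "GPL" also matches LGPL/AGPL.
FLAG_TERMS = ("GPL", "SSPL")

def flag_licenses() -> list[tuple[str, str]]:
    """Return (package, license string) for installed distributions
    whose license metadata or classifiers mention a flagged term."""
    flagged = []
    for dist in metadata.distributions():
        name = dist.metadata.get("Name", "unknown")
        license_str = dist.metadata.get("License") or ""
        classifiers = "\n".join(
            value for key, value in dist.metadata.items() if key == "Classifier"
        )
        haystack = f"{license_str}\n{classifiers}"
        if any(term in haystack for term in FLAG_TERMS):
            flagged.append((name, license_str or "see classifiers"))
    return flagged
```

A hit here is not automatically a problem; it is a prompt to check how the dependency is linked and distributed before a reviewer asks.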
Evaluating Someone Else’s Technology
When I evaluate a company for an investor or acquirer, I follow a specific order that has served me well.
Architecture first. I ask the CTO to walk me through the system on a whiteboard. Not slides. A whiteboard. I want to see if they can explain their own system clearly, where the boundaries are, how data flows, and where the external dependencies live. This conversation tells me more in thirty minutes than reading code for a week. If the CTO can’t explain their architecture clearly, that’s a serious signal regardless of what the code looks like.
Team second. I talk to engineers individually. Not the CTO’s hand-picked representatives, but a cross-section. I ask about the last production incident. How they handle disagreements about technical direction. What they would change if they had a week with no feature pressure. The answers reveal whether the team has real ownership or whether everything flows through one person’s head. Bus factor is a real risk. If one departure would cripple the product, that needs to be in the report.
Code third, and selectively. I look at authentication and payment code because security mistakes there are existential. I look at the core business logic that creates differentiation. I read a few recent commits to understand current engineering standards, and I look at some older code to see how legacy is handled. I’m not grading style. I’m looking for evidence of judgment.
Operations and infrastructure. How often do they deploy? What happens when something breaks at 2 AM? Is there monitoring? Is there alerting? Are there runbooks, or is the incident response plan “call the CTO”? Manual deployments are a yellow flag. No monitoring is a red flag. At Dropbyke, we had invested in Terraform and automated deployments before our due diligence. That came up in the review and signaled operational maturity that investors valued beyond the code itself.
Scalability in context. I don’t care if the system can handle ten million users if the business plan targets ten thousand. I care about whether the team has thought about what changes at 10x their current scale. Honest answers like “our database will need read replicas at 5x and we’ll probably need to shard the activity feed at 20x” are far more reassuring than “we can scale infinitely because we’re on AWS.”
The Signals That Actually Change Outcomes
Red flags, in my experience, cluster around people and process more than technology.
No version control. I’ve seen this exactly once, and it ended the evaluation immediately. Foundational process failure.
The CTO can’t explain the architecture. If the technical leader doesn’t have a clear mental model of the system, nobody does.
Defensive answers. When I ask about a weakness and the response is deflection or minimization, I assume there are more problems I haven’t found yet. Transparency is the single strongest signal of engineering maturity.
Key person dependency with no mitigation plan. One person holding all the context for a critical system isn’t unusual at a startup. But if the team hasn’t acknowledged it as a risk or started documenting, that tells me something about how they think about risk generally.
Yellow flags need context. Technical debt is normal. Outdated dependencies are common. A monolith is fine at early stage. Limited monitoring is fixable. A junior team can work if leadership is strong. These are concerns, not deal breakers. The question is always: does the team know about it, and do they have a credible plan?
Writing a Report That’s Actually Useful
The worst due diligence reports I’ve read are laundry lists of findings with no prioritization. Twelve pages of “this function lacks error handling” tells the investor nothing about whether to write the check.
A useful report answers three questions: First, can the technology deliver on the current business promises? Second, what are the top three risks that could slow or stop progress? Third, what investment in time and money is needed to mitigate those risks? Everything else is appendix material.
Be balanced. If the team built something impressive given their constraints, say so. If the architecture is sound but the deployment process is fragile, say that too. Decision makers need to weigh risk against momentum, and a report that only lists problems is as useless as one that only lists strengths.
The Real Point
Technical due diligence is a conversation about risk, not a code review. The best outcomes I’ve seen, on both sides, come from honesty. As a founder, lead with what you know is weak and what you plan to do about it. As an evaluator, focus on architecture, team, and the gap between the plan and reality. The code is the least interesting part.