The enterprise due diligence playbook I’d inherited from years in financial services was meticulous, thorough, and almost entirely wrong for the job. It had been built to evaluate mature vendors selling into regulated institutions: banks, payment processors, networks where a bad technology decision could cascade into systemic risk. Applied to a twelve-person crypto startup that had been writing code for eight months, it produced absurd results, flagging the absence of things that shouldn’t exist yet whilst missing the signals that mattered.
The collision between enterprise rigor and startup reality forced me to build something new, and that hybrid framework turned out to be far more predictive than either approach alone.
What Enterprise Gets Right
Give credit where it’s due: enterprise evaluation frameworks exist because the consequences of getting it wrong are severe. When Mastercard evaluates a technology partner, a missed risk doesn’t just cost money; it can compromise transaction integrity for millions of cardholders, trigger regulatory action, or damage a brand that took decades to build. The frameworks that emerge from that pressure have real strengths.
Systematic evaluation. Enterprise due diligence forces you to examine a vendor across multiple dimensions simultaneously: technical architecture, security posture, financial viability, operational maturity, regulatory compliance. The value lies in the completeness. Startup evaluators, particularly in the crypto space, tend to fixate on the technology and the team, which matters enormously but leaves entire categories of risk unexamined. A systematic framework prevents the evaluator’s enthusiasm for a clever protocol design from overshadowing a fundamental business model problem.
Risk frameworks. Financial services institutions classify risk with a granularity that most tech evaluators never bother with. Concentration risk, counterparty risk, operational risk, liquidity risk: each gets a dedicated assessment, mitigation plan, and monitoring regime. This discipline is transferable to startup evaluation, though the specific risks change shape. A startup’s concentration risk looks different: dependence on one blockchain protocol, one regulatory interpretation, or one technical leader whose departure would cripple the roadmap.
Financial modeling. Enterprise due diligence demands financial projections with clear assumptions, sensitivity analysis, and stress testing. The models are often wrong (all models are), but building them forces both evaluator and vendor to articulate what they believe about the future and what would have to be true for the business to work. Startup evaluators tend to dismiss financial projections as fiction. They are fiction, but useful fiction: the assumptions embedded in a startup’s financial model tell you more about how the founders think than the numbers themselves do.
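The sensitivity-testing discipline described above can be sketched in a few lines. The revenue model and every number below are hypothetical, purely to show the mechanic: make the assumptions explicit inputs, then sweep each one to see which the outcome actually hinges on.

```python
# Hypothetical early-stage revenue model: the assumptions are explicit
# parameters, so a sensitivity sweep exposes which ones matter most.

def annual_revenue(users: int, conversion: float, fee_per_txn: float,
                   txns_per_user: int) -> float:
    """Revenue = users * conversion * transactions * fee (illustrative only)."""
    return users * conversion * txns_per_user * fee_per_txn

# Baseline assumptions (invented numbers for illustration).
base = dict(users=50_000, conversion=0.04, fee_per_txn=1.25, txns_per_user=30)

# Stress each assumption +/-50% while holding the others at baseline.
for name in base:
    for factor in (0.5, 1.5):
        scenario = {**base, name: base[name] * factor}
        print(f"{name} x{factor}: ${annual_revenue(**scenario):,.0f}")
```

The point is not the numbers, which will be wrong, but that the sweep forces founder and evaluator to agree on which assumption the business actually lives or dies on.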
Where the Frameworks Break
The same rigor that makes enterprise due diligence reliable for evaluating mature vendors makes it actively misleading when applied to early-stage startups. The failure modes are predictable, and I watched them play out repeatedly through the StartPath accelerator.
Over-indexing on process maturity. Enterprise frameworks penalize the absence of formal processes: documented change management, SOC 2 compliance, incident response runbooks, disaster recovery plans. For a five-hundred-person vendor serving production traffic, these are table stakes. For a fifteen-person startup that pivoted its core product three months ago, demanding them is like failing a toddler for not having a retirement plan. The processes don’t exist because they shouldn’t exist yet; building them prematurely would be a misallocation of the startup’s scarcest resource: engineering time.
Penalizing speed. Enterprise culture is deeply skeptical of velocity. Moving fast reads as reckless when you’re responsible for critical financial infrastructure, and that skepticism is earned. But startups that move slowly die. A startup shipping weekly is doing the only thing that lets it learn fast enough to find product-market fit before the money runs out. Evaluators who carry the enterprise reading of velocity into startup assessment filter out the companies with exactly the execution pace that predicts success.
Requiring documentation that doesn’t exist yet. Enterprise due diligence runs on documentation: architecture diagrams, API specifications, security policies, audit trails. A startup that has spent less than a year building and iterating often has sparse documentation, which registers as a red flag in the enterprise framework. But documentation follows product stabilization; demanding it earlier is asking the startup to document something that will change next month. What matters at this stage is whether the team has the instinct and capability to produce documentation when the time comes.
The absence of a capability at the moment of evaluation tells you nothing about whether that capability will exist six months from now. Enterprise frameworks treat absence as evidence of inability, and that assumption is fatal for startup evaluation.
Confusing “not yet built” with “can’t build.” This failure mode underlies all the others. Enterprise due diligence evaluates what exists today; startup evaluation needs to predict what will exist tomorrow, based on the team’s capability, the market opportunity, and the rate of execution. A startup without a disaster recovery plan is a different animal from a vendor that should have one and doesn’t. The first is a timing issue; the second is a competence issue. Enterprise frameworks collapse this distinction, producing a systematic bias against early-stage companies.
The Hybrid Framework
The framework that predicts startup success borrows the systematic discipline of enterprise evaluation whilst replacing the maturity-biased criteria with signals that are meaningful at the early stage. Through the StartPath accelerator and broader crypto/blockchain advisory work, I converged on five dimensions that, evaluated together, produced better predictions than either pure enterprise or pure startup frameworks applied in isolation.
Dimension 1: Technical Trajectory Over Technical State.
Instead of evaluating the current architecture against enterprise standards, evaluate the rate and direction of technical progress. How much has the architecture improved over the last three months? Are the technical decisions getting better over time, or is the team compounding early mistakes? A startup with a messy codebase that’s getting cleaner every sprint is a better bet than one with a pristine architecture that hasn’t evolved in six months. Look at commit history, architectural decision records (if they exist), and the delta between early design choices and recent ones.
Dimension 2: Founder-Problem Intimacy.
This is the startup evaluator’s equivalent of the enterprise framework’s “domain expertise” checkbox, measured differently. Enterprise frameworks look for credentials and experience in the relevant domain. For startups, what matters more is how deeply the founders understand the specific pain they’re solving: the granular, ugly, specific problem rather than the domain in general. Founders who’ve personally experienced the pain they’re addressing approach edge cases, priorities, and product decisions from a fundamentally different angle than founders who identified an opportunity through market research. This isn’t absolute; outsiders with fresh perspectives occasionally outperform domain insiders, but the pattern holds strongly enough to anchor an evaluation dimension around it. You can hear it in how they describe the problem: founders with genuine intimacy talk about specific failure modes and workarounds, whilst founders without it talk about market size and TAM.
Dimension 3: Adaptability Under Constraint.
Enterprise due diligence evaluates resilience through documented business continuity plans. For startups, resilience is better measured by observing how the team responds to real constraints in real time. How do they react when a key assumption is invalidated? When a critical integration partner changes their API? When a regulatory shift threatens their business model? The StartPath accelerator created natural pressure-test moments, and the teams that adapted fastest, pivoting their approach without losing their strategic direction, consistently outperformed the teams that either froze or flailed.
Dimension 4: Market Timing Awareness.
Enterprise frameworks evaluate market position: share, competitive landscape, differentiation. For startups, especially in crypto and blockchain, the more useful dimension is timing awareness. Does the founding team understand where their market is in its maturity cycle? Are they building for where the market is today, or where it will be when their product matures? Startups that are building infrastructure for a market that’s eighteen months away from needing it often look like they’re building something with no demand, and enterprise evaluators mark them down for it. But in crypto specifically (and in emerging technology markets more broadly), the infrastructure builders who time it right capture enormous value precisely because they were “too early” by conventional evaluation standards.
Dimension 5: Revenue Model Coherence.
This is where enterprise rigor earns its keep. Even at the early stage, a startup should be able to articulate a revenue model whose assumptions are internally consistent, even if the specific numbers are speculative. “We’ll charge transaction fees” amounts to a hope, not a model. A coherent model specifies who pays, why they pay, what alternatives they have, and what has to be true about the market for the unit economics to work. The discipline of financial modeling, borrowed from enterprise due diligence, is valuable here, albeit applied to a much wider uncertainty range with the explicit acknowledgment that the model will be wrong. What matters is whether the thinking behind it is rigorous.
A revenue model at the early stage earns its value through coherence rather than accuracy: internal consistency, grounded assumptions, and reflective thinking about who pays and why.
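One way to make “internally consistent” concrete is to wire the assumptions together so that a contradiction becomes visible. The sketch below is a hypothetical illustration, not part of the original framework; the field names, numbers, and checks are invented, and passing the checks means only that the model doesn’t contradict itself, not that it’s right.

```python
# Hypothetical coherence check for an early-stage revenue model: every
# assumption is named, and the checks verify only internal consistency.
from dataclasses import dataclass

@dataclass
class RevenueModel:
    who_pays: str             # the answer to "who pays, and why?"
    avg_fee: float            # revenue per transaction
    cost_per_txn: float       # variable cost to serve a transaction
    txns_per_customer: int    # lifetime transactions per paying customer
    acquisition_cost: float   # cost to acquire one paying customer

    def lifetime_value(self) -> float:
        return (self.avg_fee - self.cost_per_txn) * self.txns_per_customer

    def coherence_issues(self) -> list[str]:
        issues = []
        if self.avg_fee <= self.cost_per_txn:
            issues.append("negative gross margin: fee doesn't cover cost to serve")
        if self.lifetime_value() <= self.acquisition_cost:
            issues.append("LTV below CAC: each customer loses money over a lifetime")
        return issues

model = RevenueModel(who_pays="merchants", avg_fee=1.25, cost_per_txn=0.25,
                     txns_per_customer=200, acquisition_cost=120.0)
print(model.coherence_issues())  # → [] (consistent, which is not the same as correct)
```

“We’ll charge transaction fees” can’t even be written down in this form; a coherent model can, and the act of writing it down is where the incoherence usually surfaces.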
Scoring and Weighting
The five dimensions don’t carry equal weight, and the weighting shifts based on the startup’s stage and the market they’re addressing. For pre-product startups, founder-problem intimacy and adaptability dominate; technical trajectory matters less when there isn’t enough code to evaluate trajectory against. For startups with a live product, technical trajectory and revenue model coherence carry more weight, because now you have artifacts to assess rather than just narratives.
Dozens of evaluations with this framework surfaced a consistent pattern: no single dimension is sufficient, and the interaction effects between dimensions matter as much as the individual scores. A team with extraordinary founder-problem intimacy but poor adaptability will build something deeply relevant to the current market and then fail to evolve when the market shifts, which in crypto happens constantly. A team with strong technical trajectory but weak revenue model coherence will build impressive technology that never generates sustainable economics.
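The stage-dependent weighting and the “no single dimension is sufficient” rule can be expressed as a small scoring function. The weights, the 0–10 scale, and the gating threshold below are hypothetical illustrations of the shape of the framework, not values from the actual evaluations:

```python
# Hypothetical sketch of stage-weighted scoring. Dimensions are scored 0-10;
# weights shift by stage, and a floor on any single dimension captures the
# observation that one very weak dimension can't be averaged away.

DIMENSIONS = ["technical_trajectory", "founder_problem_intimacy",
              "adaptability", "timing_awareness", "revenue_coherence"]

# Illustrative weights only: pre-product leans on intimacy and adaptability;
# a live product shifts weight toward trajectory and revenue coherence.
WEIGHTS = {
    "pre_product":  {"technical_trajectory": 0.10, "founder_problem_intimacy": 0.30,
                     "adaptability": 0.30, "timing_awareness": 0.20,
                     "revenue_coherence": 0.10},
    "live_product": {"technical_trajectory": 0.25, "founder_problem_intimacy": 0.15,
                     "adaptability": 0.15, "timing_awareness": 0.15,
                     "revenue_coherence": 0.30},
}

def evaluate(scores: dict[str, float], stage: str, floor: float = 3.0) -> float:
    """Weighted score, gated: any dimension below the floor caps the result."""
    weights = WEIGHTS[stage]
    weighted = sum(scores[d] * weights[d] for d in DIMENSIONS)
    # Interaction effect: a single weak dimension limits the whole evaluation.
    if min(scores.values()) < floor:
        weighted = min(weighted, floor)
    return round(weighted, 2)

team = {"technical_trajectory": 8, "founder_problem_intimacy": 9,
        "adaptability": 2, "timing_awareness": 7, "revenue_coherence": 6}
print(evaluate(team, "pre_product"))  # → 3.0: low adaptability gates the score
```

The gating step is the part worth arguing over: a plain weighted average would let extraordinary intimacy paper over an adaptability score of 2, which is exactly the failure mode described above.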
What the Collision Reveals
The broader lesson from forcing enterprise and startup evaluation frameworks into the same room is that both are partially right, and both are incomplete on their own. Enterprise rigor without startup sensibility produces false negatives: promising startups rejected for the absence of things that shouldn’t exist yet. Startup intuition without enterprise discipline produces false positives: charismatic founders with clever technology and no viable business underneath.
The hybrid that emerges from the collision, systematic in its discipline, adjusted in its criteria, honest about what can and can’t be evaluated at the early stage, predicts the only thing that matters: which startups will still be building a year from now.