Most architecture reviews follow a predictable script: a team presents a design they’ve already committed to, a room full of senior engineers asks questions that are polite rather than probing, and the review concludes with something like “looks good, let’s keep going.” The decision was made weeks ago, and the review exists to legitimize it rather than challenge it.

The architecture reviews that actually change decisions share three traits: they happen earlier in the design cycle, they carry explicit approval or rejection authority, and someone in the room has both the standing and the mandate to say no.

Review as Theater vs. Review as Decision Point

Architecture reviews fail for three structural reasons, and I’ve seen them recur in every organization I’ve worked in, from enterprise financial services to startup infrastructure teams.

The first is timing. The review happens after the team has invested weeks of design and early implementation work. By the time senior engineers see the proposal, the sunk cost is real: the team has emotional investment, the sprint plan assumes the current direction, and reversing course feels wasteful even when it’s the right call. A review that happens after commitment is a review that can only approve, and this is the failure mode I’ve encountered most often. The instinct driving it is well-intentioned: teams want to present polished work, so they wait until the design is “ready.” But polished means invested, and invested means resistant to change.

The second failure is the absence of decision rights. The review has no explicit authority to approve, reject, or conditionally approve. It’s advisory at best, which means the proposing team can listen politely, thank everyone for the feedback, and proceed unchanged.

The third failure is composition. The wrong people are in the room, or the right people are missing. A review lacking someone who understands the downstream implications is incomplete, and one lacking someone with organizational authority to block a bad decision is toothless. Many reviews suffer from both problems at once, ending up as a room full of peers who can weigh in but can’t actually block anything.

An architecture review with no authority to reject produces the same outcome as a status update, without the honesty of calling it one.

The Format That Works

I rebuilt the process from the structural failures up, and the format I landed on has five elements, each addressing a specific dysfunction.

Trigger criteria. Reviews don’t happen on a schedule; they trigger based on the nature of the change. Any change that introduces a new data store, creates a new service boundary, modifies a public API surface, or affects more than two teams requires a review. The criteria are written down, and everyone knows them. This eliminates both the “we didn’t think it needed a review” excuse and the calendar bloat of reviewing everything.
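Written criteria work best when they are mechanical enough that any engineer can apply them without debate. A minimal sketch of the check in Python (the field names are illustrative, not an actual tool from this process):

```python
from dataclasses import dataclass

@dataclass
class Change:
    """Attributes of a proposed change, mirroring the written trigger criteria."""
    new_data_store: bool = False
    new_service_boundary: bool = False
    modifies_public_api: bool = False
    teams_affected: int = 1

def requires_review(change: Change) -> bool:
    """A change triggers a review if it meets any one of the written criteria."""
    return (
        change.new_data_store
        or change.new_service_boundary
        or change.modifies_public_api
        or change.teams_affected > 2
    )

# A refactor contained inside one team's service needs no review...
print(requires_review(Change(teams_affected=1)))   # False
# ...but a change touching three teams does.
print(requires_review(Change(teams_affected=3)))   # True
```

The point is not the code itself but the property it encodes: every criterion is a yes/no question, so "we didn't think it needed a review" is never a judgment call.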

Pre-read materials, distributed 48 hours in advance. The proposing team writes a one-page design brief covering the problem statement, the proposed solution, the alternatives they considered and rejected, and the risks they’ve identified. Two days is enough time to read thoughtfully and show up with real questions. Reviews where reviewers encounter the design for the first time in the room devolve into clarification questions rather than substantive challenge; the pre-read eliminates that failure mode entirely.

Time-boxed to 60 minutes. The proposing team gets 10 minutes to present (everyone has already read the pre-read, so this is context-setting and highlighting the key trade-offs rather than a walkthrough from scratch). The remaining 50 minutes are structured discussion. A hard stop forces prioritization: reviewers focus on the decisions that matter most rather than bikeshedding implementation details.

Explicit decision at the end. Every review concludes with one of three outcomes: approved, approved with conditions, or rejected. “Approved with conditions” is the most common and most useful outcome; it preserves the team’s momentum while addressing legitimate concerns. The conditions are specific and trackable: “approved, contingent on adding a circuit breaker before the downstream dependency and documenting the rollback procedure.” Vague feedback like “think more about scalability” gives the appearance of governance without actually constraining anything.

A named decision-maker. One person in the room has final authority. In my organization, this is typically a staff or principal engineer with domain expertise in the affected area. The decision-maker can be overruled by the CTO (me), but the expectation is that they own the call for the review. This kills the dynamic that sinks most review processes: spread accountability across eight people in a 60-minute meeting and it belongs to none of them. At the scale of four to eight teams, a single named authority works well; beyond that, you need a domain-based rotation where different staff engineers own different architectural areas, or the decision-maker becomes a bottleneck that slows the very process it’s meant to govern.

Who Should Be in the Room

Composition matters as much as format, and I’ve converged on four roles that each serve a different function.

The proposing team (typically two people: the tech lead and one engineer deep in the implementation) presents the design and answers questions. They know the system best and carry the context that reviewers lack.

An affected downstream team representative brings the perspective of the people who will live with the consequences. If you’re introducing a new event schema, the team that consumes those events needs a voice. The most valuable pushback tends to originate here, because downstream teams see coupling risks and operational burdens that the proposing team has blind spots around.

A staff or principal engineer with domain expertise provides the technical depth and cross-cutting perspective. This is the role I’ve written about in The Staff Engineer Bottleneck: staff engineers are force multipliers precisely because they see across team boundaries, and the architecture review is one of the highest-leverage venues for that vision.

The named decision-maker (sometimes the same staff/principal engineer, sometimes a different one) carries the authority to approve or block. This person’s job is to synthesize the discussion into a clear outcome and own that outcome going forward.

The people who live with the consequences of an architectural decision are rarely the people who made it, and the review room is the one place to close that gap.

Follow-Through: Where Most Processes Die

A well-run review that produces no follow-through is worse than no review at all, because the organization assumes the guardrails are working when they aren’t. Follow-through has three parts.

Decisions recorded as Architecture Decision Records (ADRs). The outcome of the review, including the rationale, the alternatives considered, and the conditions attached, goes into an ADR that lives alongside the codebase. Six months from now, when someone asks “why did we build it this way?”, the ADR provides the answer without requiring archaeology through Slack messages and meeting recordings.
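A minimal ADR capturing a review outcome needs only a handful of sections. A generic skeleton (not a prescribed template, and the details below are hypothetical):

```markdown
# ADR-042: Event-driven order notifications

Status: Approved with conditions (review 2024-03-14)
Decision-maker: <named staff engineer>

## Context
Why the change was needed and what constraints applied.

## Decision
The approach chosen, in one or two paragraphs.

## Alternatives considered
- Polling the orders API: rejected, load on downstream team.
- Shared database view: rejected, couples schemas across teams.

## Conditions
- [ ] Circuit breaker before downstream dependency (owner, deadline)
- [ ] Rollback procedure documented (owner, deadline)
```

Because the file lives alongside the codebase, the rationale survives team turnover the way Slack threads and meeting recordings do not.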

Conditions tracked to completion. If the review outcome was “approved with conditions,” those conditions need an owner, a deadline, and a tracking mechanism. I use a simple spreadsheet, though any project management tool works. The important thing is that conditions don’t silently expire. A quarterly audit of open conditions from past reviews keeps the process honest.
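The quarterly audit reduces to one question per condition: is it unfinished and past its deadline? A sketch of that check in Python (a stand-in for the spreadsheet, with hypothetical data):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Condition:
    """One condition attached to an 'approved with conditions' outcome."""
    description: str
    owner: str
    deadline: date
    done: bool = False

def overdue(conditions: list[Condition], today: date) -> list[Condition]:
    """The quarterly audit: surface conditions that have silently expired."""
    return [c for c in conditions if not c.done and c.deadline < today]

conditions = [
    Condition("Add circuit breaker before downstream dependency",
              owner="payments-team", deadline=date(2024, 3, 1)),
    Condition("Document rollback procedure",
              owner="payments-team", deadline=date(2024, 6, 1), done=True),
]
for c in overdue(conditions, today=date(2024, 4, 1)):
    print(f"OVERDUE: {c.description} (owner: {c.owner})")
```

Whatever tool holds the data, the audit only works if every condition carries all three fields; a condition without an owner or a deadline is just vague feedback in a nicer costume.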

A 30-day check-in. Thirty days after the review (roughly two sprint cycles, enough time for the implementation to have hit real friction), the proposing team and the decision-maker have a brief conversation: did the architecture hold up against reality? Were the assumptions sound? Did new risks emerge? This check-in is short (15 minutes) and focused purely on learning. It catches the cases where the design looked sound on paper but encountered friction in implementation, and it feeds those lessons back into the next review cycle.

When to Skip the Review

Good governance includes knowing when governance isn’t needed. Small, reversible changes within an existing architecture don’t warrant a formal review, and subjecting them to one breeds resentment and slows teams down. The trigger criteria serve as a filter: if a change doesn’t meet them, the team proceeds without a review. The exemption is a deliberate design choice, and selectivity is how the review process earns trust.

Where This Doesn’t Apply

This format assumes co-located or overlapping-timezone teams, a mid-stage organization with enough engineers to have genuine team boundaries, and teams that can actually get into a room together, physically or virtually. If your teams are fully async across twelve time zones, a written RFC process with structured comment periods will serve you better than fighting calendar Tetris for a 60-minute slot. If you’re a startup with fewer than 20 engineers, formal trigger criteria and named decision-makers add overhead that outweighs the benefit; at that scale, a tech lead pulling two people into a room for 30 minutes covers the same ground. The framework scales well between roughly 30 and 200 engineers; outside that range, adjust to fit.

The Cultural Shift

The format, the pre-reads, and the decision rights are all solvable problems. Building a culture where “rejected” is a legitimate and respected outcome is harder by an order of magnitude. In most organizations, rejecting an architecture proposal feels adversarial, like a personal critique of the team that proposed it. Reframing rejection as the process working correctly is the shift: the team surfaced a plan, the review identified fundamental issues, and the organization saved itself from a costly mistake. Both the proposing team and the reviewer who said no did exactly what they were supposed to do, and the outcome is a better system with no shame in the detour.

That cultural shift takes time, consistent messaging, and leaders who model it by accepting pushback on their own proposals gracefully. The most effective thing I’ve done is submit my own architectural proposals to the same review process and accept feedback publicly, including a rejection when it was warranted. Without that modeling, an architecture review where everyone is afraid to say no produces the same rubber-stamp outcomes the process was designed to prevent.