The playbook is not a switch you flip. It's a set of practices you adopt incrementally, each one building on the previous. Start with what solves your most immediate pain, prove it works, then expand.

The right starting point depends on your team's size, your current pain points, and how much process you already have. A three-person startup adopts differently than a fifty-person engineering org. Both can get value from the first week.

By Team Size

Solo or small startup (2-5 people)

Start with the ritual: scope, code, review. Read the Preamble and Design Rules. That's it for week one. The ritual gives you a consistent development workflow. The philosophy gives you shared language for design conversations.

Add self-review before peer review in week two. Even a two-person team benefits: self-review catches obvious issues before they reach a reviewer, so peer review time goes to the judgment calls. The overhead is minimal: maybe two to three hours total to set up and internalize.

Small team (5-15 people)

Start with the ritual, then add structured planning within the first month. At this size, scope drift becomes real. Architecture decision records prevent the "I thought we agreed on X" conversations that waste everyone's time.

Architecture Decision Records have disproportionate impact on small teams. When there are only a few people, decisions get made in hallway conversations and disappear. ADRs capture the reasoning so new hires and future-you understand why things are the way they are.
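The shape of an ADR matters less than capturing it at all. As a hypothetical sketch (the directory layout and section headings are assumptions, not something the playbook prescribes), a few lines of Python can scaffold the next numbered ADR file with the usual sections:

```python
from pathlib import Path

# Minimal ADR skeleton: title, status, and the three questions
# that capture the reasoning for future readers.
TEMPLATE = """# {number}. {title}

Status: proposed

## Context

What forces are at play? Why does this decision need making now?

## Decision

What we chose, stated as a full sentence.

## Consequences

What becomes easier or harder as a result.
"""

def new_adr(title: str, adr_dir: str = "docs/adr") -> Path:
    """Create the next numbered ADR file, e.g. docs/adr/0002-use-postgres.md."""
    directory = Path(adr_dir)
    directory.mkdir(parents=True, exist_ok=True)
    # Count existing numbered ADRs to pick the next sequence number.
    existing = sorted(directory.glob("[0-9][0-9][0-9][0-9]-*.md"))
    number = len(existing) + 1
    slug = "-".join(title.lower().split())
    path = directory / f"{number:04d}-{slug}.md"
    path.write_text(TEMPLATE.format(number=number, title=title))
    return path
```

A one-file script like this is usually enough; the value is in the "Context" section being honest, not in the tooling.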

Add multi-perspective reviews after the first month. You don't need five people reviewing every change. But you do need to occasionally put on the security hat, the product hat, the testing hat. Monthly cadence reviews prevent slow drift.

Medium team (15-50 people)

Everything above, plus formalized onboarding and knowledge transfer. At this size, you're adding people faster than institutional knowledge spreads organically. Structured onboarding means new engineers read the Preamble and Design Rules in their first week, pair with experienced engineers on the ritual in their second week, and contribute independently by week three.

Pattern libraries and architecture references start paying off here. When three teams are all making similar database decisions independently, having a shared patterns resource saves duplicated work and prevents inconsistent choices.

Large organization (50+ people)

At this scale, the playbook's value shifts from individual productivity to organizational consistency. Role-specific adoption guides. Quarterly adoption metrics. Cross-team architecture reviews. The philosophy stays the same; the implementation becomes more structured.

The biggest risk at scale is that the playbook becomes theater. Teams go through the motions of reviews and rituals without the thinking, grounded in the Preamble, that makes them valuable. Metrics help here: not "did we do a review?" but "did the review find anything?" Not "did we write an ADR?" but "did someone reference it when making a related decision?"

Four Phases

Regardless of team size, adoption follows a natural progression. Each phase builds on the previous one, and each has clear signals that tell you when you're ready to move on.

Phase 1: Foundation

Read the Preamble and Design Rules. Set up the daily ritual. Establish code review norms. This takes a week or two. You know you're ready to move on when the ritual feels automatic and code reviews consistently surface useful feedback.

Phase 2: Development workflow

Add structured planning, testing practices, and documentation standards. This takes two to four weeks. You know you're ready to move on when scope is captured before coding starts, tests are written alongside code, and commits explain the why.

Phase 3: Architecture and patterns

Integrate pattern libraries into design discussions. Add architecture decision records. Introduce the full review pipeline with specialized perspectives. This takes one to two months. You know you're ready when design discussions reference shared patterns and trade-offs are documented before implementation.

Phase 4: Release and operations

Formalize release processes, incident response, and operational practices. Add monitoring cadences. This is ongoing. You know it's working when releases feel routine, incidents are handled systematically, and post-incident reviews produce lasting improvements.

Common Pitfalls

These aren't failures. They're predictable patterns that every adopting team encounters.

Adoption fatigue. Teams try to adopt everything at once. Three weeks in, nobody remembers which practices to use for what. Fix: start with the ritual only. Add one practice per month. Slower adoption that sticks beats fast adoption that collapses.

Review ceremonies die. Reviews happen reliably for the first two months, then start getting skipped. Fix: put them on the calendar. Rotate who runs it. Document what you find. If it's not calendared, it's optional, and optional things get dropped under pressure.

The most common version: a team establishes code review and test reviews, runs them consistently for six weeks, then hits a deadline. Reviews get skipped "just this once." Three months later, nobody remembers they were supposed to be happening.

Design rules as dogma. Teams treat the rules as laws instead of trade-offs. Someone rejects a PR because "it violates Simplicity" without engaging with why the complexity was added. Fix: revisit Chapter 2. The rules conflict with each other by design. The value is in the conversation about the trade-off, not in rigid enforcement.

No shared context. Team members use different subsets of the playbook. One person runs reviews, another doesn't. There's no shared understanding of the ritual. Fix: the Preamble and Design Rules are the shared foundation. If everyone has read those two chapters, the rest works itself out through conversation.

Customization

The playbook provides a framework. Your team provides the context. Keep the philosophy intact. Customize everything else.

A fintech team will lean harder on security reviews. A consumer product team will emphasize product alignment and accessibility. An infrastructure team will focus on resilience and operational reviews. The five review lenses are the same; the weight given to each one shifts based on what matters most in your domain.

The principles are tool-agnostic. The ritual (scope, code, review) works with any development environment. The review perspectives work with any review process. How you trigger them is an implementation detail. The playbook includes tooling for AI-assisted development, but the thinking applies whether you use those tools or not.

Knowing It's Working

You'll feel it before you can measure it. Code reviews surface real issues instead of style nitpicks. New engineers ramp up faster because the context is documented. Incidents feel manageable because there's a process. Scope discussions happen before coding, not during.

If you want to measure it: track code review feedback quality (are reviews finding real issues?), test coverage on new code (not legacy, just new), onboarding time to first independent contribution, and deployment success rate. These are trailing indicators. They move slowly. But they move in the right direction when the practices are real.
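If you do track these, a spreadsheet or a tiny script is plenty. Here is a hedged sketch (the record fields are invented for illustration) that computes the one number this chapter keeps returning to: the fraction of reviews that found something real, as opposed to style nitpicks.

```python
from dataclasses import dataclass

@dataclass
class Review:
    pr: str
    substantive_findings: int  # real issues found, excluding style nitpicks

def review_find_rate(reviews: list[Review]) -> float:
    """Fraction of reviews that surfaced at least one real issue."""
    if not reviews:
        return 0.0
    found = sum(1 for r in reviews if r.substantive_findings > 0)
    return found / len(reviews)

# Example: two of four reviews found real issues, so the rate is 0.5.
sample = [Review("a", 2), Review("b", 0), Review("c", 1), Review("d", 0)]
```

If the rate trends toward zero, either the code has gotten very good or the reviews have become theater; the trend is what tells you which practices are real.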

The simplest test: would you recommend this way of working to a team you care about? If something feels like overhead without payoff, cut it or change it. The playbook is a starting point, not a destination.

What's Next

You've read the book. Here's how to start.

Pick one thing. Read the Preamble with your team. Try the ritual for a week. Run one review through multiple perspectives. Whichever feels most relevant to your current pain, start there. Don't try to adopt everything. One practice, used consistently, teaches you more than five practices used sporadically.

The full Command Reference has every command, organized by category. The source repository has the install instructions if you want to use the tooling. But the tooling is optional. The thinking is not.