
🧑‍💻 The Tech Lead Playbook: From Best IC to Multiplier - Part 2 🚀

A deep, opinionated, practical guide for the engineer who has just been handed (or is about to be handed) a team. The tactics, mental models, decision frameworks, and anti-patterns that take you from "great individual contributor" to "the person who makes the team 3x more effective." Grounded in 2026 reality — small teams, AI-leveraged engineers, async distributed work, and a hiring market that demands you ship.

If you read only one section first, read §2 Mindset, §5 Technical Direction, and §9 The Operating Cadence. Everything else is the implementation of those three.

Companion to 🚀 The SaaS Template Playbook 📖 (how to build), 🤖 The AI SaaS Playbook (Practical Edition) 📘 (how to add AI), 🦸 The Solo-Founder Playbook: Zero Hero 🚀 (operating alone), and 🏗️ Building High-Quality AI Agents 🤖 — A Comprehensive, Actionable Field Guide 📚 (agentic systems). This one is for the lead of a team of 3–10 engineers at a startup, a scale-up, or a fast pod inside a big company.


📋 Table of Contents

  1. ⚡ Read This First
  2. 🧠 The Tech Lead Mindset
  3. 🎭 Tech Lead vs Senior Eng vs Staff vs EM
  4. 🚪 The First 90 Days
  5. 🧭 Setting Technical Direction
  6. 🏛️ Architecture & Technical Decisions
  7. 📦 Project Execution: Planning → Delivery
  8. 👥 People: 1:1s, Coaching, Conflict
  9. ⏱️ The Operating Cadence
  10. 🔍 Code Review & Design Review
  11. 🔥 Incidents, On-Call & Quality
  12. 🤝 Stakeholders: PM, Design, EM, Exec
  13. 🤖 Leading in the AI Era (2026)
  14. 🧑‍🔬 Hiring & Calibration
  15. 📈 Performance, Promotion & Letting Go
  16. 🌱 Growing the Team Without Breaking It
  17. 💬 Communication: Writing, Speaking, Status
  18. ⚠️ The Tech Lead Anti-Pattern Catalog
  19. 🗺️ The Phased Roadmap (Day 1 → Year 2)
  20. 📋 Cheat Sheet & Resources

Sections 1–9: read Part 1 here: https://viblo.asia/p/the-tech-lead-playbook-from-best-ic-to-multiplier-part-1-wd43EZOKLX9

10. 🔍 Code Review & Design Review

Review is the most public way you set technical culture. Everyone watches how you review.

10.1 The PR review philosophy

Three goals, in this priority:

  1. Correctness: does this work? does it not break X?
  2. Maintainability: will the next person understand this? does it match codebase conventions?
  3. Growth: is this a teaching moment? for the author or for future readers?

Style/taste is a distant fourth. Adopt automated formatters and linters; never spend a code review on whitespace.

10.2 The TL's review behaviors

  • Speed. Same-day for blocking reviews; <24h for non-blocking. A team's velocity is bounded by review latency.
  • Bias toward approve. If the change is correct and the design is reasonable, approve with comments rather than block. Leave nits as "nit:" prefix; explicitly mark blocking concerns.
  • Comment on the why, not the what. "Could we use X here?" β†’ "Could we use X here? It avoids the N+1 we hit in the orders module last quarter." The reasoning is the gift.
  • Praise good code. "Nice β€” this is much cleaner than the old pattern." Code review is also a feedback channel.
  • Pull bigger discussions out of the PR. When a comment thread is heading toward "should we redesign this," stop, schedule a sync, write an ADR if needed.
  • Don't gate. As TL you might be one of N reviewers. Don't make every PR wait for you. Identify 2–3 senior-enough reviewers and rotate.

10.3 The "two-rounds" rule

If a PR needs >2 rounds of review, something is wrong. Causes:

  • The author didn't have enough context before coding (fix: better task hand-off, design first).
  • The reviewer is over-reaching (fix: separate PR-style nits from blocking issues).
  • The change is too big (fix: smaller PRs).
  • The author and reviewer disagree philosophically (fix: pull the conversation out of the PR).

Track this informally — if multiple PRs need 4+ rounds, call out the pattern at retro.
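"Informally" can still be a ten-line script. A hypothetical sketch of the retro check, assuming you jot down round counts per PR (the PR numbers here are made up):

```python
def flag_review_churn(pr_rounds, threshold=4):
    """Return PRs whose review took >= threshold rounds -- worth a retro mention."""
    return [pr for pr, rounds in pr_rounds.items() if rounds >= threshold]

# One sprint's worth of PRs and how many review rounds each took.
pr_rounds = {"#481": 1, "#483": 2, "#486": 5, "#490": 4}
print(flag_review_churn(pr_rounds))  # -> ['#486', '#490']
```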

10.4 PR size discipline

Short PRs get reviewed faster, merged faster, ship faster, and have fewer bugs. Targets:

  • Ideal: <200 LOC of meaningful diff.
  • Acceptable: <500 LOC.
  • Refactor: can be large if truly mechanical (renames, code-mod) and explicitly tagged.
  • Anything else over 500 LOC needs justification in the PR description.

Most large PRs are 3 PRs that got merged into one because the author didn't know how to split. Teach the team to plan PR boundaries before coding.
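The targets above can be encoded as a simple check, for instance in a pre-merge bot. A sketch (the `mechanical` flag is an assumption standing in for however your team tags mechanical refactors):

```python
def classify_pr(loc, mechanical=False):
    """Apply the size targets above to a PR's meaningful diff size."""
    if mechanical:
        return "ok: mechanical refactor (must be explicitly tagged)"
    if loc < 200:
        return "ideal"
    if loc < 500:
        return "acceptable"
    return "needs justification in the PR description"

for loc in (120, 350, 900):
    print(loc, "->", classify_pr(loc))
```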

10.5 Design reviews

Already covered in §6. To add:

  • Design reviews are async-first (inline comments on the doc) before any meeting.
  • The meeting is 45 min, focused on remaining open questions, not narration.
  • Author drives. The TL is a participant, not the chair, unless the author is junior.
  • End every design review with a written decision summary in the doc itself: "Decided: X. Open: Y. Next steps: Z."

10.6 The "what would I have written?" trap

A senior reviewer's worst instinct: the author wrote working, correct, conventional code, and the reviewer says "I would have done it differently." Discard this voice. Unless your alternative is materially better (correctness, perf, maintainability, conventions), let the author's choice stand. The team's code is the team's code. It does not have to look like your code.


11. 🔥 Incidents, On-Call & Quality

The team's quality bar is set in incidents and post-mortems, not in design docs.

11.1 The on-call covenant

Every team that owns production has an on-call rotation. The TL's job is to make it bearable.

Rules:

  • One primary, one secondary, weekly rotation.
  • No one is on-call alone in their first 8 weeks. They shadow.
  • Anyone awakened twice in a week gets the next week off rotation.
  • All pages are reviewed every Monday: real or noisy? noisy ones go to a tracked queue and get killed.
  • The page volume is a team metric you report every month. Down is good.

A team where on-call is a coin flip between "quiet week" and "trauma" will burn out. The TL who fixes the worst alert month after month will earn lifelong loyalty.
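The Monday review and the monthly metric can share one tiny script. A sketch, assuming pages have been hand-labeled real or noisy during the review (the data shape is an assumption):

```python
from collections import Counter

def page_report(pages):
    """pages: list of (week, 'real' | 'noisy'). Returns per-week volume and noise share."""
    volume = Counter(week for week, _ in pages)
    noisy = sum(1 for _, kind in pages if kind == "noisy")
    return dict(volume), noisy / len(pages)

# One month of pages, labeled at the Monday review.
pages = [(1, "noisy"), (1, "real"), (2, "noisy"), (2, "noisy"), (3, "real")]
vol, noise = page_report(pages)
print(vol, f"noise share: {noise:.0%}")  # -> {1: 2, 2: 2, 3: 1} noise share: 60%
```

A noise share above ~50% means the rotation is mostly fighting alerts, not incidents; those are the ones that go to the kill queue.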

11.2 The incident response rhythm

When things break:

  1. Declare an incident — name a commander (not always you), open a channel, start a timeline.
  2. Stop the bleed first, fix the cause second. Roll back; failover; rate-limit. Resist the urge to debug the root cause while production is on fire.
  3. Communicate. Status updates every 15–30 min, even "no progress yet, still investigating." Silence is worse than bad news.
  4. Mitigate fully before declaring resolved.
  5. Pause before the post-mortem. People need an hour to come down.

The TL is not always the incident commander. Train others to lead — it's a great growth opportunity for senior engineers and reduces single-person dependency.

11.3 Post-mortems: blameless and useful

A post-mortem that reads "X engineer should have noticed Y" is useless. Future engineers will not "notice better" — humans don't work that way.

Format:

## Incident: <one-liner>
Date, severity, duration, customer impact (specific numbers)

## Timeline
- HH:MM — what happened
- HH:MM — what someone did
(Be specific. Use real timestamps. Show the rabbit holes.)

## What went well
## What went poorly
## Where we got lucky (this is the best section)
## Root cause (with the 5-whys done genuinely)
## Action items
- [ ] <action> — owner, due date, type (preventative / detective / resilience)

The "where we got lucky" section is the most under-used. "We got lucky that the engineer who deployed at 3pm was online; if it had happened at 6am there would have been no one." It unearths the latent risks that the dramatic root cause hides.

Action items: 3–5 max, all assigned, all dated. Track them. A post-mortem with no completed action items is theater.

11.4 Quality is a TL responsibility

Bug rate, regressions, support tickets, customer complaints — all roll up to the TL. You don't write all the tests, but you set the bar that says "we don't ship without one for the happy path + 2 edge cases" (or whatever your bar is).

Defaults to enforce:

  • Tests in PRs for new logic. Always.
  • A bug found in production = a regression test in the next PR. Cultural rule.
  • Flaky tests are bugs. Quarantine within 24 hours; fix or delete within a week.
  • Code coverage is a signal, not a target. Don't chase 100%; do investigate sudden drops.

11.5 The "every team has 1 systemic risk" exercise

Once a quarter, list the top 3 things that could take your team down for >24 hours. Examples: "Our database has no read replica. If it dies, we're down for hours." "Our deploy pipeline depends on a script whose original author left." "Our auth is a single library version behind a known CVE."

Pick 1, fix it that quarter. Most teams have an embarrassingly long list of these and most will never blow up — but the day one does, your team will look like heroes for having shipped the fix six weeks earlier.


12. 🤝 Stakeholders: PM, Design, EM, Exec

The political layer. Most new TLs ignore it and learn it the hard way.

12.1 Working with the PM

The PM is your closest collaborator. A great TL/PM pair is the single biggest predictor of team success. Tactics:

  • Weekly 30-min PM/TL sync (separate from sprint planning). Topics: roadmap drift, customer signal, tech-debt-vs-features trade-off, escalations.
  • Co-write the roadmap. Not "PM writes, TL approves." Both names on the doc.
  • Speak in their currency. When pushing for tech debt, frame in terms of feature velocity, customer impact, churn risk. Not "this code is ugly."
  • Disagree privately, align publicly. If you and PM disagree, fight it out in a 1:1, not in a sprint review in front of engineers. The team's trust is fragile; visible TL/PM conflict shakes it.
  • Bad PM behaviors to push back on: mid-sprint scope additions without trade-off, customer commitments without team consultation, deadlines decided without engineering input, vague requirements ("make it better").

If your PM is weak (vague, scope-shifting, slow-deciding), document the pattern, share with your manager, propose specifics. Don't suffer silently for a quarter.

12.2 Working with Design

If you have a designer, treat them as a peer of the PM, not an "input."

  • Loop them into design reviews, not just visual reviews.
  • Share constraints early ("we cannot animate at 60fps on mobile because of X"). Designers respect constraints; they hate surprises.
  • Ship design polish as deliberately as features. A "design polish week" once a quarter compounds product quality.

12.3 Working with your EM

Already covered in §3.2 if TL+EM split. To add:

  • Bring your EM bad news first, in private, with options. Never let your EM hear about a problem from someone else.
  • Tell them what you need. Air cover, hiring, comp, headcount, escalation. EMs aren't mind readers.
  • Tell them what's working. Not all your communication is "I have a problem." Make sure they see what's going right.
  • Expect: candor, defense of you with their leadership, growth coaching, comp/headcount advocacy. If you're not getting these, talk to your EM directly about the gap.

12.4 Working with execs

You'll be in front of your CEO/CTO/VP at some point — quarterly review, incident, hiring panel. Defaults:

  • Lead with the outcome, not the journey. "We shipped X, customers report Y, here's the data." Not "We started by exploring approach A, then..."
  • Time-box. Aim for 50% under your slot. Execs talk to many teams; brevity is respect.
  • Have one "ask" ready. "What I need from you: faster decisions on Z."
  • When asked a hard question, answer it. Don't dodge. Don't over-promise. "I don't know yet, here's how I'll find out by Friday."
  • Read the room. Big-picture exec wants narrative; technical exec wants the diff.

Anti-patterns: bringing problems without options, over-explaining technical detail, defending your team aggressively when constructive feedback would help, surprising the exec with bad news in a public forum.

12.5 Cross-team work

When a project spans your team and another:

  • One DRI (directly responsible individual) per cross-team initiative. Not co-DRIs. Not committees. One.
  • A shared design doc owned by the DRI, reviewed by both teams.
  • A shared metric that both teams can see weekly.
  • Resolve conflicts through the metric, not through politics. "The migration is slipping; here's the data; here's what we'll change."

If you're the DRI, you serve both teams equally. If you're not, you support without taking over.

12.6 Saying no

The single most important political skill of a tech lead. Most TLs say yes too much in year 1 and end year 1 with a team that resents them.

How to say no:

  • "That's a great idea, but to take it on we'd need to drop X. Want to do that swap?"
  • "I want to commit to this seriously, which means I can't do it this quarter. Can we pencil it in for next quarter?"
  • "Engineering capacity for that is roughly 3 weeks. Given our roadmap, here's what would have to slip. Which would you like to drop?"
  • "I don't think we should do this because <specific reason>. Here's an alternative that hits 80% of the value."

Saying yes to everything is dishonest. The team can tell. The PM can tell. The exec who wanted the thing eventually finds out you didn't actually have capacity. Trust dies in fake yeses.


13. 🤖 Leading in the AI Era (2026)

Every TL playbook written before 2024 is partially obsolete. AI-augmented engineering changes the math.

13.1 What changed

  • Code is cheaper to write. A senior + Claude/Codex can produce 2–4x the code per hour vs unaided. The bottleneck moved from typing speed to specification quality, review throughput, and integration testing.
  • Junior productivity gap shrunk and widened. Juniors with AI assistance look more productive than juniors without. But juniors who learn nothing because AI did the work are a long-term liability. Coaching matters more, not less.
  • Architecture matters more. The constant cost (writing code) dropped; the variable cost (a bad architectural choice) is unchanged. Teams that lean into AI without good design ship faster and end up with worse codebases.
  • Tribal knowledge β†’ AI-readable knowledge. Codebases with great structure, naming, types, and docs let AI dramatically out-perform. Codebases without get worse AI assistance.
  • Reviewing AI-generated code is its own skill. Subtle hallucinations, plausible-but-wrong code, over-engineered solutions, missed conventions. The team's review bar must rise, not fall.

13.2 The AI-augmented team operating model

The shape of a great 2026 team:

  • 5 engineers, each AI-augmented.
  • 70%+ of code is AI-assisted in some form (autocomplete, agentic editing, tool-using agents for migrations and tests).
  • Specs and reviews dominate the human time budget.
  • The TL is the person responsible for: which AI tools the team uses, what's allowed in code (security, licensing), and the spec/review quality bar.

Specifically the TL must own:

  1. Tool selection. Which IDE assistant, which agentic tool, which model, which guardrails. Update quarterly.
  2. Codebase AI-readiness. CLAUDE.md (or equivalent) at root and per-package. Conventions documented. Tests as executable specifications.
  3. Review bar. AI-generated code does not get a free pass. Author is fully responsible for what they merged. "The model wrote it" is not a defense.
  4. Security & data hygiene. No secrets in AI prompts. Model providers' data handling reviewed. Customer data never sent to consumer-tier endpoints.
  5. Skill calibration. Engineers should be able to do their job without AI for 1 day. If the team would grind to a halt without GPT-5, you've over-rotated.

13.3 What junior engineers need (more than ever)

In 2026 it's easier than ever for a junior to ship code that works and harder than ever for them to learn fundamentals. The TL must defend the learning.

Tactics:

  • Some tasks are deliberately AI-light. "This is a learning task β€” please write it without AI assistance and we'll review together."
  • Pair sessions where the senior shows their AI workflow β€” including when they reject AI output.
  • Code review where the question is "explain what this code does and why", not just "does it work."
  • A quarterly "from scratch" exercise: implement X without AI, then with AI, compare.

This is not about being purist; it's about ensuring the junior in 2026 still has the mental models to be a senior in 2029.

13.4 What senior engineers need

Different problem. Seniors with AI risk:

  • Becoming over-trusting of AI suggestions in their domain.
  • Skipping the design step because "the model can figure it out."
  • Producing more code without producing more value.
  • Plateauing on harder skills (system design, distributed systems, cross-team work) because line-coding feels productive.

TL response: push seniors toward harder problems faster. Owning a multi-team system. Mentoring 2 juniors. Publishing an internal tech talk. AI gave them time back; spend that time on growth, not output.

13.5 Hiring in the AI era

The bar moved. What you hire for:

  • Spec/design skill. Can they decompose a fuzzy problem into a crisp spec a model could execute against? This is now a top-3 hiring signal.
  • Review skill. Can they read AI-generated code and find the subtle bugs? This is the moat.
  • Domain & customer instinct. AI can write the code; it can't tell you the export format finance actually needs. People who talk to users win.
  • Judgment & taste. "This works but I wouldn't ship it because…" is the senior signal.
  • Curiosity about AI tools themselves. Anyone treating AI as a threat or a fad in 2026 is a 1–2 year career risk.

What you de-emphasize:

  • Boilerplate-grade live coding ("implement linked list reversal"). AI does that; it's now a hiring trap that selects for the wrong skills.
  • Trivia about specific frameworks. AI knows the API.

13.6 The TL's own AI workflow

You can't lead what you don't use. By end of 2026, a competent TL is comfortable:

  • Drafting design docs with AI assistance (you write the spine, AI fills sections, you edit).
  • Generating ADR options for a decision (give it the context, ask for 3 options + trade-offs, then decide).
  • Reviewing PRs with AI-summarization for unfamiliar code.
  • Using AI agents for refactor proposals, migration plans, test generation.
  • Reading AI-generated code skeptically β€” you are the last line of defense.

If you're not personally fluent, the team will out-skill you in 6 months and you'll lose technical credibility. Block 1 hour/week on tooling.

13.7 Don't be the AI maximalist or minimalist

Two failure modes:

  • Maximalist. "Everything should be AI-driven." Team ships shallow code, no one has fundamentals, customer issues take longer to debug because no one understands the system.
  • Minimalist. "I don't trust this stuff, we'll write everything by hand." Team falls behind, talent leaves, you're 2 years behind by 2028.

The right answer is fluent pragmatism: use AI where it accelerates without degrading quality, refuse where it degrades, defend learning, and update your stance every quarter as the tooling improves.


14. 🧑‍🔬 Hiring & Calibration

You don't fully control hiring as a TL but you significantly shape it.

14.1 What makes a good engineer for your team

Generic "good engineers" don't exist; engineers are good for a specific role. Write the spec:

  • The role's daily work (60% of time): what tasks, what stack, what cadence.
  • The 20% of growth: what stretches them.
  • The 20% of unique team needs: domain knowledge, on-call shape, written-async culture.
  • The 5–8 must-haves and the 3–5 nice-to-haves.

Force the must-have list to be small. Long must-have lists are how teams reject great candidates.

14.2 The interview loop

For a typical SWE hire (mid–senior), 4–5 stages:

  1. Recruiter screen (HR — culture, motivation, salary band).
  2. Technical phone screen (~60 min — code + system thinking, calibrated to the role).
  3. System design or architecture discussion (60 min, senior+ only).
  4. Hands-on / take-home (real-ish problem, 90 min live or 4 hours async with strict cap).
  5. Team / hiring manager / leadership (~45 min — values, motivation, hard questions).

In 2026, AI changes this:

  • Live coding should allow AI assistance and observe how the candidate uses it. The signal is judgment, not typing.
  • Take-homes should test design + integration, not implementation.
  • Add a "review this PR" stage. Show a 200-line PR (some good, some bad) and watch their thinking.

14.3 The TL in the loop

You should:

  • Own the technical phone screen or system design round (you set the bar).
  • Attend every hiring debrief.
  • Veto with reason β€” you should be able to articulate the no in writing in 3 sentences.
  • Not block hires for personal taste. Calibrate against the role spec, not against you.

14.4 Common TL hiring mistakes

  • Hiring people just like you. Diverse teams ship better products. "They felt like a cultural fit" is often "they reminded me of me" with a euphemism.
  • Hiring for who they are today, not who they'll be in 2 years. Slope > intercept. The candidate growing fast at junior is often a better year-2 senior than the candidate who was already senior but coasting.
  • Ignoring red flags because you're desperate. Hiring under pressure is the #1 source of regretted hires. No hire is better than a wrong hire.
  • Over-engineering the loop. 7 rounds of interview lose top candidates to faster-moving competitors. 3–5 well-designed rounds beat 7 weak ones.
  • Not closing. Once you decide yes, call them within 24 hours. Top candidates are in 2–3 loops. Speed wins.

14.5 Onboarding (where most teams fail)

Hiring is a 60% bet; onboarding is the other 40% of whether they succeed. Most teams treat onboarding as "set up the laptop and find a buddy." That's a setup for 6 months of mediocrity.

A real onboarding plan:

  • Day 1: environment, accounts, intro, no expectation of code.
  • Week 1: read the team direction doc, last 3 design docs, last 3 post-mortems. Ship 1 trivial PR (typo, doc fix). Pair with 2 different people.
  • Weeks 2–4: owned but small task. Daily standups. Weekly 1:1 with TL.
  • Month 2: owned medium task. Lead 1 design review of their own work.
  • Month 3: owned project end-to-end. By end of month 3, they're a functional team member. If not, escalate.

Have a written 30-60-90 plan per hire. Review at each milestone. Most hires that fail at month 6 had a bad month 1 that no one caught.

14.6 The "buddy" pattern

Pair every new hire with a non-TL buddy for the first month. Buddy answers stupid questions, walks them through the codebase, joins their first 3 standups. Reduces TL load by 40% and creates a peer relationship for the new hire.


15. 📈 Performance, Promotion & Letting Go

The most consequential conversations of the year.

15.1 The performance signal

Performance is rarely a sudden event; it's a slow signal across months. Track informally per engineer:

  • Quality of their commits (PRs needing rework, bug rate, test coverage).
  • Their delivery vs. estimates over a quarter.
  • Quality of their design contributions.
  • Quality of their reviews on others' work.
  • Their engagement signals (1:1 energy, retro contributions, public visibility of their work).
  • Their growth slope (are they better than last quarter? clearly?).

This isn't surveillance — it's the TL's job. Most TLs run on vibes; the rigorous TL has a private 1-page-per-engineer doc updated monthly.

15.2 The promo case

If you're not in the EM seat, you write the technical case for promotion (the EM owns the people case). Format:

  • Scope. What they own β€” clearly bigger than 6/12 months ago.
  • Impact. What shipped because of them, with concrete metrics.
  • Influence. Who learned from them, what designs they led, who they reviewed.
  • Examples (3–5 specific, dated, concrete).
  • Gaps. What they still need to demonstrate at the next level.
  • Recommendation.

Bias yourself toward evidence over narrative. "Sara is great" loses; "Sara led the export-service redesign, mentored Jamal through his first design doc, and reduced our P1 bug rate by 40% over Q3" wins. Save evidence over the year so you don't have to scramble during the promo cycle.

15.3 The non-promo case (harder)

When someone expects promo and isn't ready:

  • Communicate it 3+ months before the cycle, not in the cycle. Surprises are unforgivable.
  • Be specific: "To be promoted to senior, you need to demonstrate X, Y, Z. You've done X. You haven't yet done Y. Z is the gap. Here's what we'll do in the next 6 months."
  • Tie to evidence, not opinion.
  • Re-evaluate on schedule. Don't move goalposts.

If they won't level up no matter what — at some point it becomes a different conversation about role fit.

15.4 Performance issues β€” the gradient

Not every performance issue is a fire. Track:

| Severity | Signal | Response |
| --- | --- | --- |
| Soft | One off-week, one weak PR, one missed sprint | Note, watch, address in 1:1 if it recurs |
| Pattern | 3+ weeks of below-bar output, quality slipping | Direct conversation, written expectations, check-in in 4 weeks |
| Hard | Multi-month underperformance, unwilling to engage | Formal performance plan with EM/HR involvement |

Most TLs miss the "Pattern" stage — they avoid the awkward conversation, then 8 months later the engineer is on a PIP and surprised. The TL who names the pattern early and lets the engineer course-correct often turns 60% of these around.

15.5 Letting someone go

The conversation you'll have at most 1–3 times per year (any more often and you're hiring badly).

  • It's never the same day they hear it. Performance conversations should escalate gradually so the final conversation is not a surprise.
  • It's not yours alone. The EM/HR drives the formal process; you support and provide evidence.
  • Communicate to the team thoughtfully. A short, dignified note ("X is no longer with us, we wish them well, here's how their work is being handled"). Don't gossip. Don't pretend it didn't happen.
  • Check the team within 48 hours. Layoffs and firings spike anxiety; people need reassurance.
  • Reflect honestly. What did you miss? What signals were there 6 months earlier? Most fires reveal a hiring or coaching gap. Update your patterns.

15.6 The reverse case: when a great engineer leaves

When a senior IC quits, treat it as a Sev-1 incident on team continuity.

  • Have the conversation. Why? (Sometimes there's still time.)
  • Document everything they own, every decision they're carrying. Pair before they leave.
  • Plan the void: who steps up, what gets dropped, what gets hired against.
  • Tell the team without spinning. "X is leaving for Y reasons. Here's what we're doing."
  • Reflect: what made them leave? Is the cause structural (comp, growth, scope) or local (a project they hated)? Adjust if structural.

A high-performer leaving is often the canary on a structural issue. Don't waste the signal.


16. 🌱 Growing the Team Without Breaking It

Growth is harder than it looks. A team of 4 that adds 3 engineers in a month is a team of 7 with 4 engineers' worth of context.

16.1 The "rule of 5"

Teams under 5 are tight, fast, low-process. Teams of 5–8 are the productivity sweet spot. Teams of 9+ start to need sub-structure (sub-teams, leads-of-leads). Most early-stage tech leads keep ramping past 9 because the company keeps hiring; the team's velocity degrades.

If you're past 9, push for splitting the team. Two teams of 5 typically out-deliver one team of 10.

16.2 The onboarding tax

Every new hire costs 4–6 weeks of a senior engineer's time across the first 8 weeks. If you onboard 3 hires in a quarter, you've spent ~3 senior-months on onboarding, pretty close to the time it would have taken to ship one mid-sized project. Plan for it; don't be surprised.

16.3 Adding seniority vs adding hands

When the team feels overloaded, the instinct is to hire more juniors. Often wrong. Ask:

  • Are we slow because we don't have enough hands? β†’ mid/junior helps.
  • Are we slow because we keep making bad decisions? β†’ senior or staff helps.
  • Are we slow because we keep firefighting in production? β†’ senior + on-call investment.
  • Are we slow because we don't know what to build? β†’ not a hiring problem (PM/strategy).

Misdiagnosing produces a team with 8 people and the same throughput as 5.

16.4 The TL's transition out

At some point the team is too big to TL alone (typically 8+). Two paths:

  1. Step up to staff or EM. You hand TL duties to a senior; you take on broader scope.
  2. Split the team and hand off one half. You stay TL of one team; new TL takes the other.

Either way, plan the handover. Identify and groom your successor 6 months in advance. Hand off projects, then hand off rituals (standups, design reviews), then hand off final say. A handover done in 2 weeks is a betrayal; in 3 months it's a graduation.

16.5 Don't let the team age into a monoculture

Healthy teams have diversity in:

  • Seniority (no team should be all senior or all junior; both extremes break).
  • Background (industry, language ecosystem, prior org type).
  • Tenure (mix of long-tenure context-keepers and recent fresh-eyes).
  • Demographic.

Audit yearly. If your team is drifting into homogeneity, the next 3 hires are the lever. Resist the temptation to hire "people like the team" — short-term comfort, long-term staleness.


17. 💬 Communication: Writing, Speaking, Status

Writing is the highest-leverage skill of a tech lead. Speaking is the second.

17.1 The weekly written update

Every Friday (or whatever cadence works), the TL writes a 200–500 word update to the team and stakeholders. Format:

# Team X — Week of YYYY-MM-DD

## Shipped this week
- [item] — [owner], [link]

## In flight
- [item] — [owner], [status], [risk if any]

## Decisions made
- [decision] — [link to ADR/doc]

## What's next week
- [top 3]

## Asks / blockers
- [specific ask, named owner of the request]

Why it matters: forces you to think about the week deliberately; gives stakeholders 0-effort context; builds your team's "story"; trains you to write briefly. Most TLs skip this for a year and wonder why their leadership has no idea what the team does.

17.2 The art of the brief

Compress aggressively. Internal communication has four lengths:

  • One line: Slack message, status update, ask.
  • One paragraph: decision, escalation, summary of complex thread.
  • One page: ADR, design summary, weekly update.
  • Multi-page: RFC, postmortem. Use sparingly.

If a thread is heading toward 50 messages, stop and write a one-page summary. You'll save the team 4 hours of catching up.

17.3 The art of the ask

Most TL asks are too vague. "Can someone help with X?" gets ignored.

Ask format:

@person — by [date], could you [specific thing]?
Why: [1-line reason or impact]
Context: [link]

Three properties: a named person (not @channel), a specific date, a specific thing. "@Maria — by Thursday EOD, could you look at the auth design doc and sign off / flag concerns? Need this to start the migration on Monday. [link]"

17.4 Public speaking & demos

You'll present sometimes — quarterly review, demo day, all-hands, customer call. Defaults:

  • Open with the punchline. Not background, not "first I'd like to thank…" Lead with the conclusion. "We shipped X and customers reduced their workflow time by 40%."
  • Less is more. A 5-minute demo with 1 thing landed > 15-minute demo with 5 things half-landed.
  • Tell a story. Problem β†’ approach β†’ result. Engineers default to architecture diagrams; humans connect to story.
  • Prepare for the question you fear most. Usually you know exactly what it is. Have a clear, short answer.
  • Practice once. Out loud. Just once. The difference is huge.

17.5 Slack hygiene

A team's Slack culture is set by the TL.

  • Threads, not channel spam. Reply in thread; only "broadcast back to channel" if relevant.
  • Async-default. Reasonable response time is 4 hours during work, not 4 minutes.
  • Status emojis or DND norms. Make it OK to be unreachable for 2 hours of deep work.
  • No business decisions in DMs. If it matters, it goes in a channel or a doc.
  • One channel per topic, archive aggressively. A team with 25 stale channels makes everything harder to find.

17.6 Writing for AI

In 2026, write so AI can read your team's stuff well. CLAUDE.md (or equivalent), READMEs, ADRs, design docs β€” all benefit from being structured, named clearly, and explicit about non-obvious context. The team that writes well for AI also writes well for new humans.
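As a sketch of what "structured, named clearly, explicit about non-obvious context" looks like in practice, here is a minimal CLAUDE.md skeleton. The section names are illustrative, not a standard — adapt them to your repo:

```markdown
# CLAUDE.md — context for AI coding assistants (and new humans)

## What this repo is
One sentence: what the service does and who depends on it.

## Non-obvious context
- Invariants the code assumes but does not enforce
- The "why" behind surprising decisions (link the ADR)

## Conventions
- How to run tests, lint, and local dev
- Naming, layout, and review norms

## Don't
- Things that look safe to change but aren't
```

The same skeleton doubles as a new-hire orientation doc — which is exactly the point above: writing well for AI and writing well for new humans are the same skill.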


18. ⚠️ The Tech Lead Anti-Pattern Catalog

The 12 most common TL failure modes and their antidotes.

18.1 The Hero TL

Symptom: TL takes the hardest tickets, ships the heroic Friday-night fixes, has the deepest knowledge of every system. Why it fails: Team plateaus. TL becomes a single point of failure. Burnout in 12 months. Antidote: rotate ownership of every "hard" thing. Pair before solving. Document instead of hoarding.

18.2 The Ghost TL

Symptom: TL retreated to deep IC work; team rarely sees them; no direction; no 1:1s; no design reviews. Why it fails: Team drifts. Stakeholders lose confidence. Engineers feel unsupported. Antidote: force the calendar. Block 1:1s, design reviews, weekly written update. Make them non-negotiable.

18.3 The Bottleneck TL

Symptom: every PR waits on TL approval. Every decision goes through TL. Vacation = team paralysis. Why it fails: team velocity bounded by TL throughput. Antidote: delegate review. Identify 2–3 "lieutenants" who can approve. Use ADRs so decisions are documented, not personality-bound.

18.4 The Yes-Person TL

Symptom: TL says yes to every PM request, every customer ask, every exec idea. Team drowns. Quality drops. Why it fails: trust erodes. Engineers leave. Eventually you fail at delivery despite working harder. Antidote: Β§12.6. Practice saying "yes, if we drop X." Build "no" into your weekly habit.

18.5 The Architecture Astronaut

Symptom: TL writes 30-page design docs about future-proof systems for problems no one has yet. Why it fails: team ships nothing. Customer waits. Engineers lose respect for the role. Antidote: ship-then-design. Build the simplest thing that works. Refactor when patterns emerge.

18.6 The Cargo-Culter

Symptom: TL imports a process from their last company without examining whether it fits. "At Big Co we did Scrum daily so we will here." Why it fails: processes designed for 200-person orgs strangle 5-person teams. Team rebels. Antidote: start from problems, derive process. Steal pieces, not whole methodologies.

18.7 The Conflict Avoider

Symptom: TL doesn't address performance issues, conflict, or hard decisions. Hopes they resolve themselves. Why it fails: problems compound. Team loses respect for TL. Hardest call still has to be made, just later, with worse outcomes. Antidote: Β§8.5. Schedule the hard conversation this week. Use SBI. Practice the script.

18.8 The Drama Magnet

Symptom: every conflict on the team becomes a TL conflict. TL gets drawn into every disagreement. Why it fails: the team's emotional weather lives in the TL. Burnout and bias. Antidote: triage. Most conflicts the team can resolve. Step in for structural issues; coach through interpersonal ones.

18.9 The Stack Maximalist

Symptom: every quarter brings a new framework, language, datastore, deploy tool. Team in constant migration mode. Why it fails: velocity actually drops. Onboarding becomes impossible. Tech debt compounds. Antidote: boring tech rule. Pick stable, well-documented tools. Migrate only when current tool is failing, not when newer tool is interesting.

18.10 The Vibe-Driven TL

Symptom: TL operates entirely on instinct. Few written docs. Decisions in DMs. Direction in their head. Why it fails: team can't operate without TL present. New hires take forever to ramp. Decisions get re-litigated. Antidote: write it down. ADRs, weekly updates, direction doc, definition of done. Pay the writing tax.

18.11 The Performance-Blind TL

Symptom: TL believes "everyone is doing fine" right up until someone's surprise resignation, manager escalation, or PIP. Why it fails: preventable issues become unfixable. Antidote: Β§15. Maintain a per-engineer health doc. Talk early. Lead with evidence.

18.12 The Burnout Martyr

Symptom: TL works 60+ hours/week as a badge. Expects team to follow. Doesn't take vacation. Why it fails: TL crashes in 12–18 months. Team copies the pattern and crashes alongside. Antidote: model rest. Visibly take vacation. Visibly leave at 6pm. Visibly say "I don't know, I'll think about it tomorrow." Healthy habits are contagious; so are unhealthy ones.


19. πŸ—ΊοΈ The Phased Roadmap (Day 1 β†’ Year 2)

What "doing well" looks like at each stage.

19.1 Week 1–4: Listen & Learn

Goal: build context and credibility, change as little as possible. Output: 1:1s with everyone, state-of-the-team note, light shadowing of all rituals. Anti-pattern: announcing changes in week 2.

19.2 Month 2–3: Diagnose & Quick Wins

Goal: 2–3 visible improvements, draft technical direction, establish cadence. Output: weekly update started, 1:1s rolling, definition-of-done in place, direction doc v1. Anti-pattern: big bang reorganization.

19.3 Month 4–6: Operate & Make 1 Hard Call

Goal: team is shipping predictably; you've made one visible hard call (kill a project, change on-call, confront a performance issue). Output: quarterly plan, ADR repo started, healthy review latency, no surprises in 1:1s with EM. Anti-pattern: still being the bottleneck on every decision.

19.4 Month 7–12: Compound

Goal: the team's habits run without you. You spend more time on direction and less on coordination. Output: at least 1 engineer leveled up under your coaching, at least 1 architectural improvement landed, on-call quality improved, public weekly updates respected. Anti-pattern: plateauing β€” same outcomes as month 3.

19.5 Year 2: Scale or Pass the Baton

Goal: team has grown (in scope, in headcount, in capability). You're either ready for staff/EM scope, or grooming a successor while you take on something new. Output: at least 2 engineers operating at the level above where they joined; team direction respected by adjacent teams; you're on the company's "radar" as a leader, not just a TL. Anti-pattern: the team is fine but you're stuck at the same scope.


20. πŸ“‹ Cheat Sheet & Resources

20.1 The 1-page TL cheat sheet

Pin to your monitor:

WEEKLY
β–‘ 1:1 with each report (theirs, not yours)
β–‘ Architecture/design review (60 min)
β–‘ Written team update
β–‘ 2–3 hr deep-work blocks protected
β–‘ Manager 1:1 prepped

MONTHLY
β–‘ Direction doc revisit
β–‘ Tech debt registry triage
β–‘ Skip-level / peer-TL coffee
β–‘ Per-engineer health note updated
β–‘ At least 1 hard conversation handled

QUARTERLY
β–‘ Quarterly plan drafted, agreed, communicated
β–‘ Direction doc rewritten
β–‘ Top 3 systemic risks identified, 1 fixed
β–‘ Promo/perf calibration with EM
β–‘ Personal retro (what worked, what didn't)

DEFAULTS
- Two-way doors decided fast
- One-way doors decided in writing
- ADR for irreversible technical calls
- Design doc for >2-week or cross-team work
- DoD signed before commit
- Async-first, written-first
- "No" with options, not without

20.2 Stock phrases (that work)

  • "What does success look like for you in 6 months?"
  • "To take that on, we'd need to drop X. Want to make that swap?"
  • "Considered alt: X. Decided against because Y."
  • "I want to be wrong in writing so you can correct me."
  • "Disagree-and-commit: I'll back the team's call publicly even if I'd have decided differently."
  • "What's the smallest version of this we can ship Friday?"
  • "What did you learn this sprint that you didn't know last sprint?"
  • "Where did we get lucky?"
  • "I don't know yet. I'll find out by Friday."
  • "That's a good idea. Let's not do it this quarter."

20.3 Reading list

The short list of books worth your time:

  • The Manager's Path β€” Camille Fournier. The canonical book on the engineering management ladder, including the TL chapter. Read first.
  • An Elegant Puzzle β€” Will Larson. Best operational manual for engineering leadership at scale.
  • Staff Engineer β€” Will Larson. Adjacent role; useful frame for what's next after TL.
  • High Output Management β€” Andy Grove. The original. Output as the unit. Still the best.
  • Team Topologies β€” Skelton & Pais. The org-design book that explains why your team is sized the way it is.
  • Accelerate β€” Forsgren, Humble, Kim. The data on what makes engineering teams perform. Reference often.
  • Crucial Conversations β€” Patterson et al. The script for hard conversations. Practical.
  • Thinking in Systems β€” Donella Meadows. The mental models you'll re-read for the rest of your career.

20.4 Operating templates (steal these)

  • ADR: Β§6.1
  • Design doc: Β§6.2
  • Weekly update: Β§17.1
  • Definition of done: Β§7.3
  • Escalation: Β§7.4
  • Postmortem: Β§11.3
  • 30-60-90 onboarding: Β§14.5
  • Direction doc: Β§5.2

Copy each into a docs/templates/ folder in your repo. New artifacts use them. The team learns the format; the format becomes the culture.
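To bootstrap that folder quickly, a small shell sketch (the template filenames here are hypothetical — use whatever names your team prefers, then paste in the templates from the sections listed above):

```shell
#!/bin/sh
# Scaffold a docs/templates/ folder with stub files for each playbook template.
mkdir -p docs/templates

for t in adr design-doc weekly-update definition-of-done \
         escalation postmortem onboarding-30-60-90 direction-doc; do
  f="docs/templates/$t.md"
  # Only create the stub if it doesn't already exist, so reruns are safe.
  [ -e "$f" ] || printf '# %s template\n\n<!-- fill in per the playbook -->\n' "$t" > "$f"
done

ls docs/templates
```

Each file starts as a one-line stub; the value comes from filling them with your team's actual format and linking them from onboarding docs.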

20.5 The single test of whether you're doing this well

At the end of every month, ask yourself two questions:

  1. "Is the team shipping more meaningful work than they were 3 months ago?" Not "more lines of code" β€” more meaningful. More customer impact, fewer regressions, faster decisions, clearer direction.
  2. "Have at least 2 engineers on the team grown visibly under my watch?" Specific examples. New skills. Bigger scope. Better designs.

If both yes β†’ keep doing what you're doing. If shipping yes, growth no β†’ you're an operator, not a leader. Invest in the people side. If growth yes, shipping no β†’ you're a coach, not a TL. Invest in technical execution. If both no β†’ something's wrong. Stop and diagnose. Talk to your manager, your peers, your team.

The role compounds. Every month doing it well makes the next month easier. Every month doing it poorly makes the next month harder. There is no neutral.


This playbook is a living document. The 2026 reality (AI-augmented engineering, distributed teams, async-default, the rising bar on technical writing) will keep shifting. Update yours. Argue with mine. Ship better than us both.


If you found this helpful, let me know by leaving a πŸ‘ or a comment! And if you think this post could help someone, feel free to share it. Thank you very much! πŸ˜ƒ


All Rights Reserved
