A debt backlog with business-impact scores, not a list of engineering grievances.
Triaged list of technical debt scored by business impact, with a parallel-track execution plan that doesn't stop feature work. Monthly retainer.
Who this is for
CTO or VPE at a 10–50 person scaleup where velocity is dropping and every new feature fights last year's shortcuts, but feature work cannot stop.
The pain today
- Every new feature ships slower than the last because of compounding debt
- Team morale declining as 'quick fixes' pile up
- Board asking about engineering velocity with no clear answer
- Tried debt sprints — business pushback because features stop
- Can't tell which debt is actually costing money and which is theoretical
The outcome you get
- Tech debt audit categorized by type (code, infra, data, process, people)
- Business-impact scoring — what's costing this quarter vs theoretical
- Parallel-track plan — feature work continues while debt gets paid down
- Quarterly execution plan with specific owners and milestones
- Progress metrics that prove the investment pays back
Debt taxonomy
Technical debt has five categories, each with a different cost pattern:
- Code debt: coupling, duplication, outdated patterns, low test coverage
- Infrastructure debt: outdated deploy pipeline, missing monitoring, inefficient cloud architecture
- Data debt: schema decay, stale data, missing indexes, inconsistent models
- Process debt: missing code review norms, ad hoc deployment, no incident postmortems
- People debt: knowledge silos, tenure gaps, missing hiring ladder
Most teams focus on code debt because it's the most visible. Process and people debt often cost more but are harder to see. I audit all five and let the business-impact scoring reveal where the compounding is actually happening.
Business-impact scoring
Not all debt is equal. Scoring framework: for each debt item, estimate monthly cost (engineer hours lost, incidents caused, features delayed, customer-reported issues). High-cost debt gets priority regardless of how easy it is to fix. Low-cost debt gets deferred even if it's highlighted in every retro. The scoring is approximate — within 30% is fine — and forces honest prioritization. Founders and boards understand monthly business cost; they don't understand 'unfixed TODO from 2024.' Framing debt in business terms changes the conversation from 'engineering wants cleanup time' to 'this specific fix saves X hours per month going forward.'
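The scoring framework above can be sketched as a small script. The item names, category tags, and hour conversions below are illustrative assumptions, not a real client dataset — the point is the shape of the calculation, not the numbers:

```python
from dataclasses import dataclass

@dataclass
class DebtItem:
    name: str
    category: str            # code | infra | data | process | people
    eng_hours_lost: float    # engineer hours lost per month
    incidents: int           # incidents caused per month
    features_delayed: int    # features delayed this quarter

    def monthly_cost_hours(self, hours_per_incident=16, hours_per_delay=40):
        # Rough conversion of incidents and delays into engineer hours.
        # Estimates within ~30% are fine; the goal is honest ranking.
        return (self.eng_hours_lost
                + self.incidents * hours_per_incident
                + self.features_delayed * hours_per_delay)

# Hypothetical backlog: high-cost items get priority regardless of fix difficulty.
backlog = [
    DebtItem("flaky deploy pipeline", "infra", 30, 2, 1),
    DebtItem("god-object order model", "code", 50, 0, 2),
    DebtItem("unfixed TODO from 2024", "code", 2, 0, 0),
]

for item in sorted(backlog, key=lambda i: i.monthly_cost_hours(), reverse=True):
    print(f"{item.name:28s} {item.category:8s} ~{item.monthly_cost_hours():.0f} h/mo")
```

Note that the ranking, not the absolute number, drives the conversation: the item every retro complains about ("unfixed TODO from 2024") lands at the bottom once it's expressed as monthly hours.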
Parallel-track execution
Debt sprints (stop features, fix debt) usually fail because business pushes back. Parallel-track works: 70–80% of engineering capacity on features, 20–30% dedicated to debt paydown, tracked separately in sprint reports. Debt work prioritized per business-impact score. Quarterly business review shows: features shipped + debt paid down + resulting velocity change. This makes debt investment legible. Teams without parallel-track alternate between feature sprints and 'we have to fix this now' emergency sprints — worse velocity overall, worse morale. I set up the parallel-track cadence in the first month and hand ownership to engineering leadership after 2–3 months.
Case studies: Cuez, Imohub
Cuez: API responses at 3 seconds; the team knew it was slow but not how slow or what it was costing. Parallel-track approach — feature work kept shipping while targeted optimization took the core API to 300ms (10x faster) and cut infra costs by ~40%. Imohub: legacy Imóveis SC stack with 120k+ records and no clear picture of which code was load-bearing. Rebuild-focused approach with a parallel dual-run migration — the new Imohub stack stood up alongside the legacy one, traffic shifted gradually, and the legacy system was retired once the new one was stable. Query responses under 0.5s, infra cost down 70%. Same debt-triage thinking applied at different severity levels; the approach scales from 'fix hot spots' to 'rebuild with migration.'
Retainer pricing
Technical debt triage fits the Fractional CTO service at Advisory ($4,500/mo) for strategic guidance or Fractional ($8,500/mo) for deeper operational involvement with your engineering team. Typical engagement: 3–6 months through initial triage, prioritization, and first 2–3 quarters of parallel-track execution. After that, engineering leadership owns the ongoing cadence and I step out. 14-day money-back, cancel anytime after. Work Made for Hire on written deliverables (audit, prioritization matrix, execution plan).
What to do with the 'pet debt' nobody will touch
Every codebase has the monster — the one area nobody wants to touch because it's complex, load-bearing, and poorly understood. Sometimes the monster is genuinely P0. Sometimes it's a monster only in the team's imagination and actually P3 in business terms. Triage forces an honest assessment. If the monster is P0, the execution plan tackles it deliberately, with extra review and testing. If it's P3, it stays untouched — despite engineering opinion — because the business impact doesn't justify the risk of major changes. Making these calls visible in a business-scored framework takes the emotion out of the decision and gives you defensible priorities.
Recent proof
A comparable engagement, delivered and documented.
Rescued a slow API that was blocking user growth
Refactored the backend architecture, making the system far more responsive and scalable for the growing user base.
Frequently asked questions
The questions prospects ask before they book.
- How do I know which debt is actually costing money?
- Scoring combines engineer survey (where do you lose hours monthly) with incident history (which systems caused outages) and velocity data (which features took longer than expected). Triangulating these surfaces real cost centers vs theoretical complaints. Scoring is approximate — within 30% is enough for priority decisions.
- Can you do this without stopping feature work?
- Yes — parallel-track execution is the core of the method. 70–80% of capacity stays on features; 20–30% goes to debt paydown on the highest-priority items. Quarterly reviews show both feature shipping and debt progress, so business stakeholders see the trade-off honestly.
- What if the team disagrees on priorities?
- Team input feeds the scoring, but I make priority calls based on business impact. Team disagreement is usually productive — different tenure perspectives surface different cost centers. The final prioritization is explicit and defensible; team can disagree with the call while accepting the framework.
- How do we measure success?
- Velocity metrics (story points or features shipped per sprint) over time. Incident frequency (P0/P1 incidents per month) over time. Debt items resolved per quarter vs new debt accrued. Engineer retention and satisfaction (qualitative). Most successful engagements show measurable velocity improvement within 2 quarters.
- What if the team isn't mature enough to execute?
- Parallel-track requires some process maturity (sprints, retros, basic metrics). If the team is pre-process, the first month is setting up minimum viable process before triage starts. Teams without process history often get as much value from process introduction as from debt work itself.
Ready to start?
Tell me what you need in 60 seconds. Tailored proposal in your inbox within 6 hours.