Hook
I've watched founders spend $200,000 on technology that didn't fit their business. I've also seen a three-person startup pick the right stack and ship a product in three weeks for under $15,000. The difference between those two outcomes was rarely about the technology itself. It was about how the decision got made.
Over 16 years and more than 250 projects, I've built a framework for evaluating technical decisions. Not a checklist for developers. A thinking tool for the people writing the checks and running the company. The ones who need to ask the right questions without necessarily understanding every answer at the implementation level.
This is the same framework I use as a Fractional CTO when clients bring me in to untangle their technology strategy. I'm sharing it here because too many founders still rely on the opinion of whoever talks the loudest in the room.
TL;DR Summary
- Most bad technical decisions come from optimizing for the wrong variable (usually cost or speed alone).
- The framework has five filters: business alignment, team capability, total cost of ownership, reversibility, and time-to-value.
- Run every significant technology choice through all five before committing.
- Real examples from GigEasy, Cuez, bolttech, and Imohub show how this works in practice.
- You don't need to be technical to use this. You need to ask better questions.
Need a hand with your website or web app?
Free 30-min strategy call. I'll review your situation and give you a clear next step.
Table of contents
- Why technical decisions fail
- The five-filter framework
- Filter 1: Business alignment
- Filter 2: Team capability
- Filter 3: Total cost of ownership
- Filter 4: Reversibility
- Filter 5: Time-to-value
- Putting it together: Real project examples
- Common mistakes I still see
- How to use this framework without a technical background
- FAQ
Why technical decisions fail
After sitting in hundreds of technology evaluation meetings, I can tell you: most bad decisions don't come from picking the "wrong" technology. They come from optimizing for the wrong thing.
Here are the patterns I see over and over:
The shiny object. A developer reads a blog post about a new framework. It looks exciting. The team adopts it before asking whether it solves a problem they actually have. Six months later, they can't hire anyone who knows it, and the original developer has left.
The cost trap. A founder picks the cheapest option for every layer of the stack. The initial build comes in under budget. Then the maintenance costs start. Then the scaling costs. Then the rewrite costs. The "cheap" choice ends up being the most expensive one they made.
The resume-driven decision. An engineer wants to learn Kubernetes, so suddenly the two-person SaaS app "needs" container orchestration (a system that automatically manages and coordinates software running across multiple servers). I've seen infrastructure bills triple because someone wanted a line item on their LinkedIn profile.
The copy-paste. "Netflix uses this, so we should too." Netflix has 12,000 engineers. You have four. Their problems are not your problems.
None of these failures are about bad technology. They're about the absence of a decision-making process.
The five-filter framework
Every technical decision I evaluate goes through five filters. The order matters. If a choice fails an early filter, there's no point evaluating the rest.
Think of it like hiring. You wouldn't negotiate salary with a candidate who can't do the job. Same idea here: ask the most important question first.
| Filter | Core question |
|---|---|
| 1. Business alignment | Does this support what the business actually needs to accomplish? |
| 2. Team capability | Can our current team (or a realistic hire) build and maintain this? |
| 3. Total cost of ownership | What does this really cost over 2-3 years, not just the initial build? |
| 4. Reversibility | If we're wrong, how hard is it to change course? |
| 5. Time-to-value | How quickly does this start generating returns? |
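The ordered, short-circuit nature of the framework is simple enough to sketch in code. This is just an illustration of the evaluation flow, not a tool from my practice; the filter names come from the table above, and the pass/fail answers are whatever your team decides:

```python
# Illustrative sketch of the five-filter evaluation. The filter order and
# short-circuit behavior come from the framework; the code structure is mine.

FILTERS = [
    ("Business alignment", "Does this support what the business needs to accomplish?"),
    ("Team capability", "Can our current team (or a realistic hire) build and maintain this?"),
    ("Total cost of ownership", "Is the 2-3 year cost acceptable, not just the initial build?"),
    ("Reversibility", "If we're wrong, is changing course feasible?"),
    ("Time-to-value", "Does this start generating returns soon enough?"),
]

def evaluate(answers):
    """Run a choice through the filters in order, stopping at the first failure.

    Returns (passed, failed_filter_name). A filter with no answer counts as a fail.
    """
    for name, _question in FILTERS:
        if not answers.get(name, False):
            return False, name  # failed here; no point evaluating the rest
    return True, None

# Example: a choice that fails on team capability never reaches cost analysis.
ok, failed_at = evaluate({
    "Business alignment": True,
    "Team capability": False,
})
```

The point of the short-circuit is discipline: if the choice doesn't align with the business, you never get to argue about benchmarks.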
Let me walk through each one.
Filter 1: Business alignment
This sounds obvious, but it's the filter most teams skip. They jump straight to comparing features, benchmarks, and GitHub stars (a popularity metric for open-source software).
The question isn't "which technology is best?" It's "which technology is best for what we're trying to do in the next 12-18 months?"
When I joined the Cuez project (a live TV production platform based in Belgium), the existing codebase had accumulated years of technical decisions that made sense individually but didn't align with where the product was heading. The API (a way for different software systems to communicate) response times had ballooned to 3 seconds. That's an eternity for live television production, where delays measured in milliseconds affect the broadcast.
The fix wasn't adopting some new fancy framework. It was removing unused libraries, replacing custom-built code with well-tested framework features that already existed, and optimizing database queries. We got the response time down to 300ms. The technology choices were boring. The business outcome was dramatic.
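To make "optimizing database queries" concrete: one of the most common causes of multi-second API responses is the N+1 query pattern, where code runs one query per record instead of one query total. The sketch below is a generic illustration using Python's built-in sqlite3, with a hypothetical schema; I'm not claiming this was the specific Cuez issue.

```python
import sqlite3

# Hypothetical schema for illustration only (not the actual Cuez codebase).
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE shows (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE segments (id INTEGER PRIMARY KEY, show_id INTEGER, name TEXT);
    INSERT INTO shows VALUES (1, 'Morning News'), (2, 'Evening News');
    INSERT INTO segments VALUES (1, 1, 'Weather'), (2, 1, 'Sports'), (3, 2, 'Politics');
""")

# Slow: one query for the shows, then one more query per show (N+1 round trips).
def segments_slow():
    out = {}
    for show_id, title in db.execute("SELECT id, title FROM shows"):
        rows = db.execute("SELECT name FROM segments WHERE show_id = ?", (show_id,))
        out[title] = [name for (name,) in rows]
    return out

# Fast: a single JOIN fetches everything in one round trip.
def segments_fast():
    out = {}
    rows = db.execute("""
        SELECT s.title, seg.name FROM shows s
        JOIN segments seg ON seg.show_id = s.id
    """)
    for title, name in rows:
        out.setdefault(title, []).append(name)
    return out
```

At three rows the difference is invisible. At thousands of rows per request, with network latency on every round trip, it's the difference between 300ms and 3 seconds.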
Questions to ask your technical team:
- What specific business goal does this technology choice support?
- What happens to our product roadmap if we pick option A vs option B?
- Are we solving a problem we have today, or a problem we might have in two years?
- If our business model changes direction next quarter, does this choice still work?
Filter 2: Team capability
The best technology in the world is worthless if nobody on your team can use it well. And "well" is doing heavy lifting in that sentence. Anyone can write a basic app in any language. Writing production-grade software that handles real users, real data, and real edge cases requires genuine depth.
At bolttech, a $1B+ insurance technology company, the engineering decisions had to account for distributed teams across multiple countries. Picking a niche framework that only three people in Singapore understood would have been a disaster. The technology choices needed to match the talent pool available across all the markets where the company operated.
I think about team capability in three layers:
Current skills. What does the team already know well? Switching an entire team to a new language or framework has a real cost. I've watched a 6-week project turn into a 6-month project because the team was learning Go while trying to ship a product.
Hiring pipeline. Can you find and afford developers who know this technology? Some frameworks have tiny communities. When your lead developer leaves (and eventually, they will), can you replace them? If you're choosing between two web frameworks, the size and health of each community should factor into your decision.
Maintenance burden. Who maintains this in year two and year three? The developer who chose it might not be around. Does the technology have good documentation? Active community support? Regular security updates?
Questions to ask:
- How many developers on our team have production experience with this?
- What's the hiring market like for this skill set in our budget range?
- If our lead developer leaves tomorrow, how long until a replacement is productive?
- What does the learning curve look like, and can we afford the slowdown?
Filter 3: Total cost of ownership
This is where I see founders get burned the most. They compare the price of building something and ignore every cost that comes after.
Total cost of ownership (TCO) for a technology decision includes:
- Build cost. Developer time, design, testing, deployment.
- Infrastructure cost. Hosting, databases, third-party services, CDNs (content delivery networks that serve your website from servers closer to your users).
- Maintenance cost. Bug fixes, security patches, dependency updates, monitoring.
- Scaling cost. What happens to your monthly bill when traffic doubles? Or when your data grows 10x?
- Opportunity cost. What could your team be building instead of maintaining this choice?
At Imohub, where I served as CTO for a real estate technology platform, we had to make infrastructure decisions that would scale with property listing data. The initial build cost was one variable. But real estate platforms accumulate massive amounts of data: images, documents, search indices, geolocation data. A hosting choice that looked cheap at 10,000 listings could become ruinous at 500,000.
We ran the numbers forward. Not just "what does this cost today?" but "what does this cost at 5x and 10x our current volume?" That analysis changed several of our initial assumptions.
A simple TCO exercise you can do today:
Take any technology choice your team is evaluating. Ask them to estimate costs at three scales:
|  | Current scale | 3x scale | 10x scale |
|---|---|---|---|
| Monthly infrastructure | $ | $ | $ |
| Developer hours for maintenance | hrs/month | hrs/month | hrs/month |
| Third-party service fees | $ | $ | $ |
If the 10x column makes you uncomfortable, that's worth a conversation. You might not reach 10x, but you want to know the trajectory before you're locked in.
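If you want to run the exercise yourself, a few lines of code (or a spreadsheet) will do. Every number below is a placeholder; swap in your team's own estimates. The point is the 3x and 10x columns, not the specific figures:

```python
# Hedged sketch of the TCO exercise with made-up placeholder numbers.
DEV_RATE = 100  # assumed hourly rate for maintenance work, $/hr

estimates = {
    # cost line        : (current $/mo, 3x $/mo, 10x $/mo)
    "infrastructure":    (500,   1_800,  7_500),   # rarely scales linearly
    "maintenance hours": (10,    25,     60),      # hrs/month, not dollars
    "third-party fees":  (200,   600,    2_500),
}

def monthly_tco(col):
    """Total monthly cost at a given scale column (0=current, 1=3x, 2=10x)."""
    infra = estimates["infrastructure"][col]
    maint = estimates["maintenance hours"][col] * DEV_RATE
    fees = estimates["third-party fees"][col]
    return infra + maint + fees

for label, col in [("current", 0), ("3x", 1), ("10x", 2)]:
    print(f"{label:>7}: ${monthly_tco(col):,}/month")
```

Notice that the placeholder rows don't scale linearly. That's deliberate: real infrastructure costs rarely do, and the whole value of the exercise is forcing your team to estimate the curve, not just the starting point.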
Filter 4: Reversibility
This one took me years to appreciate. Early in my career, I treated every technical decision like it was permanent. It made decision-making slow and stressful. Now I categorize decisions differently.
One-way doors are decisions that are expensive or impossible to reverse. Choosing your primary programming language. Picking a database architecture. Signing a three-year enterprise contract. These deserve weeks of evaluation.
Two-way doors are decisions you can change without major pain. Picking a CSS framework. Choosing a project management tool. Selecting a logging service. These deserve hours, not weeks.
The problem is that most teams treat two-way doors like one-way doors. They spend three weeks evaluating which analytics tool to use when switching analytics tools takes an afternoon. Meanwhile, they spend three days picking a database that will take 18 months to migrate away from.
When I was helping build the GigEasy MVP (a fintech platform backed by Barclays and Bain Capital), speed mattered. We had three weeks to ship. So I was deliberate about which decisions were reversible and which weren't. The database schema and API contracts got careful thought. The frontend component library? We picked one that was good enough and moved on. We could always swap it later, and that flexibility let us hit the deadline.
How to assess reversibility:
- If this doesn't work out, what does it take to switch? (Time, money, team effort)
- Are we locked into a contract or vendor?
- Does this choice affect our data structure in ways that are hard to undo?
- Can we run a small pilot before committing fully?
If switching costs are low, make the decision quickly and move on. Save your deliberation budget for the irreversible choices.
Filter 5: Time-to-value
How long until this technology choice starts producing results? Not "how long until it's built" but "how long until the business benefits?"
This filter catches a specific failure mode: over-engineering. Teams build the perfect system that takes 9 months to launch, when a simpler system could have shipped in 6 weeks and started generating revenue (or data, or user feedback) immediately.
The GigEasy project is the clearest example from my experience. The investors wanted to validate a market hypothesis. They didn't need a system that could handle a million transactions. They needed something functional that real users could test. We shipped the MVP in three weeks. That speed wasn't because we cut corners. It was because every technical decision was filtered through "does this help us learn something from real users faster?"
If we'd built for scale from day one, we would have spent months on infrastructure that might never be needed. The startup would have burned cash on theoretical problems instead of getting real feedback from the market.
The same principle applies to custom web application development: build for the stage you're in, not the stage you hope to reach.
Questions to ask:
- When will real users interact with this?
- What's the minimum version that produces useful business data?
- Are we building for today's actual needs or next year's hypothetical ones?
- Can we ship in phases instead of one big launch?
Putting it together: Real project examples
Let me show you how the five filters played out in real projects.
GigEasy: Three-week MVP
Context: Fintech startup backed by Barclays and Bain Capital. Needed a working product to validate their market hypothesis. Budget was tight. Timeline was tighter.
| Filter | Assessment |
|---|---|
| Business alignment | Validate market fast. Every decision measured against "does this help us launch in 3 weeks?" |
| Team capability | Small team, so we picked technologies we already knew deeply. No learning curves. |
| Total cost of ownership | Low initial cost mattered. But we also picked technologies that wouldn't require a rewrite at the next stage. |
| Reversibility | Deliberately chose reversible options where possible. Frontend choices were two-way doors. Data model was a one-way door and got extra attention. |
| Time-to-value | Three weeks. That was the constraint that shaped everything else. |
Result: Shipped on time, under $15,000. The technology choices were pragmatic, not flashy. Laravel for the backend (because I could move fast in it), React for the frontend (because the hiring pool is enormous), and simple managed cloud infrastructure (because we didn't need to babysit servers).
Cuez: Performance rescue
Context: Live TV production platform in Belgium. API responses taking 3 seconds. Users and broadcast workflows suffering.
| Filter | Assessment |
|---|---|
| Business alignment | Speed was the product requirement. Live TV can't wait 3 seconds for data. |
| Team capability | Existing team knew the codebase. We didn't need new technology; we needed to make better use of what we already had. |
| Total cost of ownership | Fixing the existing system was far cheaper than rebuilding. The audit and optimization cost a fraction of a rewrite. |
| Reversibility | Low risk. We were improving existing code, not replacing it. Each optimization could be rolled back if it caused issues. |
| Time-to-value | Immediate. Each optimization delivered measurable improvement the day it shipped. |
Result: API response time went from 3 seconds to 300ms. No new frameworks. No rewrites. Just methodical engineering: removing unused libraries, replacing custom code with framework built-ins, and tuning database queries.
Imohub: Scaling a real estate platform
Context: CTO role at a real estate technology company. Platform needed to handle growing property data while keeping search fast and costs manageable.
| Filter | Assessment |
|---|---|
| Business alignment | Search speed and data capacity directly affected user experience and conversion. |
| Team capability | Small team, so technology choices needed to be well-documented and widely supported. |
| Total cost of ownership | The 10x exercise mattered here. Property data grows continuously. We modeled costs at 5x and 10x listing volume. |
| Reversibility | Database and search infrastructure were one-way doors. We piloted extensively before committing. |
| Time-to-value | Phased rollout. Core search improvements shipped first, with advanced features following. |
Result: Infrastructure that scaled with the business rather than ahead of it. Costs stayed predictable because we'd done the math upfront.
Common mistakes I still see
After 250+ projects, certain patterns keep repeating. If you recognize any of these, it's a sign that your decision-making process needs structure.
Letting the loudest voice win
Technical decisions shouldn't be popularity contests. I've been in rooms where a senior engineer passionately advocates for a technology, and nobody pushes back because they don't want to challenge the expert. Run it through the five filters instead. The framework doesn't care about seniority or enthusiasm.
Confusing complexity with quality
More sophisticated doesn't mean better. Some of the most effective systems I've built used boring, well-understood technology. The GigEasy MVP didn't use microservices, Kubernetes, or a dozen cloud services. It used a monolithic application, a single database, and managed hosting. It worked. It shipped. It validated the business.
Skipping the "what if we're wrong?" conversation
Every technology evaluation should include the question: "If this turns out to be the wrong choice, what happens?" Not to be pessimistic. To be realistic. I've been wrong plenty of times in 16 years. The projects that survived my mistakes were the ones where we'd thought about reversibility upfront.
Evaluating technology in isolation
A database isn't good or bad. It's good or bad for your specific use case, team, budget, and timeline. PostgreSQL is excellent for many applications. It's a poor choice if your team only knows MongoDB and you need to ship in two weeks. Context matters more than benchmarks.
Ignoring the humans
Technology decisions are people decisions. Can your team use it? Can you hire for it? Will it frustrate your developers so much they quit? I've seen companies lose their best engineers because leadership mandated a technology stack that nobody enjoyed working with. Developer experience isn't a luxury. It's a retention strategy.
How to use this framework without a technical background
You don't need to understand the technology to use this framework. You need to ask the right questions and recognize when you're not getting real answers.
When your CTO or lead developer proposes a technology choice, ask these five questions:
- "How does this connect to our business goals for the next 12 months?"
- "Who on the team has built something real with this before?"
- "What does this cost us at 3x and 10x our current scale, including maintenance?"
- "If this doesn't work out, what does it take to switch?"
- "When do we start seeing results from this?"
If the answers are vague, push back. "It's the industry standard" is not a business case. "Everyone's using it" is not a team capability assessment. "It'll scale" is not a cost analysis.
Good technical leaders can answer these questions in plain language. If someone can't explain why a technology choice makes sense for your business without resorting to jargon, that's a red flag.
If you don't have a technical leader and need help evaluating these decisions, that's exactly what a Fractional CTO engagement is for. I spend the first 90 days of every engagement building exactly this kind of decision-making structure with the founding team.
FAQ
Do I need to be technical to evaluate technology decisions?
No. You need to understand your business goals, budget constraints, and timeline. The five-filter framework translates technical choices into business questions. Your role is to make sure the technical team is answering the right questions, not to evaluate the technology yourself.
How long should a major technical decision take?
It depends on reversibility. One-way doors (database architecture, primary language, core infrastructure) deserve 1-3 weeks of evaluation. Two-way doors (frontend libraries, development tools, analytics providers) should take a day or two at most. If your team is spending three weeks picking a CSS framework, they're misallocating their decision-making energy.
What's the biggest technical decision mistake you've seen?
Building for scale before validating the product. I've watched teams spend 6 months building infrastructure to handle millions of users for a product that never got past 500. Validate the business first with technology that's good enough, then invest in scaling what's proven to work.
Should cost always be the primary factor?
Rarely. Cost is one of five filters, and it's filter number three for a reason. A cheap choice that doesn't align with business goals or can't be maintained by your team is the most expensive decision you'll make. Total cost of ownership over 2-3 years matters more than initial build cost.
When should I bring in outside help for technical decisions?
When you don't have a senior technical leader on the team. When internal opinions are deadlocked. When you're making a large irreversible commitment (new platform, major rewrite, significant infrastructure change). When the cost of getting it wrong is high enough that an independent perspective is worth the investment.
How does this framework apply to choosing between building custom software and buying off-the-shelf?
Run it through the five filters. Business alignment: does an off-the-shelf tool actually do what you need, or will you bend your process to fit the software? Team capability: do you have people who can customize or build on it? Total cost: what do licenses, customization, and integration cost over three years? Reversibility: how locked in are you to this vendor? Time-to-value: can you be up and running faster with a ready-made solution? For a more detailed comparison, see custom web app vs. SaaS decisions.
Next steps
If you're facing a technical decision and want a structured evaluation, I'm happy to talk through it. I've done this for startups, mid-market companies, and enterprise teams. The framework scales because the questions stay the same.
Get in touch to discuss your technology decisions, or learn more about my background and the projects that shaped this framework.
