Three seconds. That was how long users waited every time they clicked something in the Cuez application. Three seconds to load a show rundown. Three seconds to pull up a guest list. Three seconds to check the live event schedule.
For a SaaS product used during live television broadcasts — where every second of dead air matters — that kind of delay was a serious business problem. Users were frustrated. The client was worried about churn. And the development team had already tried the obvious fixes without much success.
I was brought in as a senior engineer to figure out what was actually going on. What followed was a systematic investigation that took the API from 3 seconds to 300 milliseconds — a 90% reduction — without rebuilding the application from scratch.
This article walks through exactly how I did it, step by step. If your product's backend feels sluggish and you're not sure where to start, this is the playbook.
TL;DR Summary
- The problem: A B2B SaaS product (Cuez by Tinkerlist) had API response times averaging 3 seconds, frustrating users and threatening retention.
- The approach: Full codebase audit to find hidden performance drains — unused libraries, outdated dependencies, custom code that duplicated framework features, and unoptimized database queries.
- The result: API response times dropped to 300ms (90% faster), infrastructure costs fell by roughly 40%, and the platform gained capacity to handle 10x more concurrent users.
- Key insight: The biggest performance gains came from removing things, not adding them. Stripping out unused code and replacing hand-built solutions with framework built-ins delivered more impact than any single optimization technique.
Need a hand with your website or web app?
Free 30-min strategy call. I'll review your situation and give you a clear next step.
Table of Contents
- The Problem: A 3-Second API in a Real-Time Product
- Step 1: The Full Codebase Audit
- Step 2: Removing Unused and Outdated Libraries
- Step 3: Replacing Custom Code with Framework Built-Ins
- Step 4: Optimizing Database Queries
- Step 5: Reducing the Dependency Footprint
- Step 6: The Framework Upgrade
- The Results
- What This Means for Your Business
- FAQ
- Conclusion & Next Steps
The Problem: A 3-Second API in a Real-Time Product
Cuez is a SaaS product built by Tinkerlist, a company based in Belgium. The platform helps television producers manage live shows — think rundowns (the sequence of segments in a broadcast), guest coordination, timing sheets, and real-time updates during live events.
The product was built on Laravel (a popular PHP framework) with a Vue.js frontend, running on AWS. Architecturally sound choices, but over time the codebase accumulated what engineers call technical debt — shortcuts and workarounds that slow things down over the long run.
By the time I joined: average API response time was 3 seconds (some endpoints worse), the frontend felt sluggish waiting on data, and previous optimization attempts — a cache here, a query tweak there — had delivered marginal improvements at best. The team had tried point fixes. What they hadn't done was look at the full picture.
Step 1: The Full Codebase Audit
Before changing a single line of code, I spent time understanding the entire system. This is the part most teams skip — and it's the most important part.
What a Codebase Audit Actually Looks Like
A codebase audit is like a health checkup for your software. Instead of checking blood pressure and cholesterol, you're checking:
- How many external libraries does the application depend on? Are they still maintained?
- Is the application using its own framework effectively, or did developers build custom versions of things the framework already provides?
- How does data flow from the database to the user's screen?
- What gets loaded on every single request, even when it's not needed?
I mapped out the dependency tree — a complete list of every external package the application relied on. I profiled API requests to see where time was being spent. And I read through the code to understand why certain architectural decisions had been made.
What I Found
The picture that emerged was not a single catastrophic problem. It was death by a thousand cuts. Dozens of small inefficiencies, each adding 50 or 100 milliseconds, compounding into that 3-second response time.
Here's a simplified breakdown of where those 3 seconds were going:
| Time Sink | Approximate Impact |
|---|---|
| Unused libraries loaded on every request | ~400ms |
| Custom code doing what the framework already handles | ~500ms |
| Unoptimized database queries | ~800ms |
| Outdated dependencies with known performance issues | ~300ms |
| Excessive dependency chain (libraries loading other libraries) | ~600ms |
| Other overhead (serialization, middleware, etc.) | ~400ms |
No single item was "the problem." They all were. And that's exactly why point fixes hadn't worked — optimizing one query saves 100ms, but when the whole stack is adding drag, you need a systemic approach.
Step 2: Removing Unused and Outdated Libraries
The first thing I did was the simplest: I removed code that wasn't doing anything.
The Library Graveyard
Over the life of any software project, developers add libraries to solve specific problems. A charting library for a feature that was later redesigned. A date formatting package that was replaced by a better one but never removed. A debugging tool that was only needed during development but got bundled into production.
In the Cuez codebase, I found over a dozen packages that were either no longer used anywhere, had been replaced by something else but never fully removed, or were only used in one small place and could be replaced with a few lines of native code.
Why Unused Libraries Hurt Performance
If a library isn't being used, how can it slow things down? In PHP (the language Laravel is built on), there's a mechanism called autoloading that registers every installed package so it's available if needed. More packages means more registration work on every request. And libraries have their own dependencies — removing one library might also remove three others it was pulling in.
Think of it like a kitchen with counters cluttered by appliances you never use. Clearing the counter doesn't make your stove hotter, but it makes everything you do in that kitchen faster.
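One related, low-cost step worth knowing about (this article doesn't cover whether Cuez used these exact flags, so treat it as a general recommendation): Composer can precompute the full class map at install time instead of resolving classes on the fly. In composer.json:

```json
{
    "config": {
        "optimize-autoloader": true,
        "classmap-authoritative": true
    }
}
```

`classmap-authoritative` tells Composer to skip filesystem lookups for any class missing from the map, so only enable it in builds where the class map is guaranteed complete. Production deploys typically run `composer install --no-dev --optimize-autoloader` for the same reason.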
The Impact
Removing unused and outdated libraries cut roughly 400 milliseconds from every API request. Not because any single library was that slow, but because the cumulative overhead of loading, registering, and managing dozens of unnecessary packages adds up.
Step 3: Replacing Custom Code with Framework Built-Ins
This was the most impactful single step, and it's a pattern I see in almost every project I audit.
The Problem with Reinventing the Wheel
Laravel is a mature framework with built-in solutions for most common backend tasks: caching, queue management, data serialization (converting database records into the format your browser expects), authentication, and much more. When developers are under deadline pressure, they sometimes build custom solutions for problems the framework already solves. Not bad engineering — usually just tight timelines and imperfect knowledge.
In the Cuez codebase, I found several areas where custom code was doing jobs that Laravel handles natively:
- Custom data transformation logic where Laravel's built-in API Resources would have been faster and more maintainable.
- Hand-rolled caching that didn't use Laravel's cache layer, which integrates directly with Redis (a high-speed in-memory data store — think turbo-charged temporary storage).
- Custom middleware (code that runs on every request) duplicating functionality already in the framework.
- Manual query construction where Laravel's query builder would have been more efficient.
Framework maintainers spend years optimizing these tools. A custom solution built under a deadline almost never matches that level of optimization — and framework built-ins are tested against edge cases across thousands of applications.
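As an illustration of the pattern (a sketch, not the actual Cuez code: the `ShowResource`, `SegmentResource`, and field names are assumptions), replacing a hand-rolled transformer with a Laravel API Resource looks roughly like this:

```php
<?php
// Sketch only: class and field names are hypothetical, not Cuez code.
use Illuminate\Http\Resources\Json\JsonResource;

class ShowResource extends JsonResource
{
    // Laravel calls toArray() automatically when the resource is returned
    // from a route, replacing a hand-rolled serialization loop.
    public function toArray($request): array
    {
        return [
            'id'       => $this->id,
            'title'    => $this->title,
            // whenLoaded() only includes the relation if it was eager-loaded,
            // which also guards against accidental extra queries.
            'segments' => SegmentResource::collection($this->whenLoaded('segments')),
        ];
    }
}

// In a controller:
//   return ShowResource::collection(Show::with('segments')->get());
//
// Hand-rolled caching can likewise be swapped for the framework's cache
// layer, which plugs into Redis through configuration alone:
//   Cache::remember('rundown:today', 60, fn () => Show::whereDate('air_date', today())->get());
```

The point isn't that resources are magic; it's that the framework path is the one the maintainers have already profiled and hardened.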
The Refactoring Process
I replaced custom implementations one endpoint at a time, starting with the highest-traffic API routes. For each: document current behavior, rewrite using framework tools, test against the original to verify identical output, and measure the improvement. Methodical, endpoint-by-endpoint work over several weeks — but the results were consistent.
The Impact
Replacing custom code with framework built-ins shaved approximately 500 milliseconds off API response times. It also made the codebase significantly easier to maintain, which matters for the long-term health of the product.
Step 4: Optimizing Database Queries
The database is where most API performance problems live. Cuez was no exception.
N+1 Queries: The Silent Killer
The single biggest database issue was something called an N+1 query problem. Here's what that means in plain language:
Imagine you have a list of 50 TV shows. For each show, you need to load its segments, guests, and timing data. A naive approach would be:
- Run 1 query to get all 50 shows.
- Run 1 query per show to get its segments (50 queries).
- Run 1 query per show to get its guests (50 queries).
- Run 1 query per show to get its timing data (50 queries).
That's 151 database queries for what should be 4 queries (one per data type, using a technique called eager loading that fetches related data in bulk).
Each individual query might only take 10-20 milliseconds. But 151 of them? That's 1.5 to 3 seconds — just in database time.
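To make the arithmetic concrete, here is a self-contained simulation of the pattern. The fake `CountingDb` and the table names are illustrative, not the Cuez schema; it counts queries rather than running real SQL:

```php
<?php
// Simulated database that only counts how many queries it receives.
class CountingDb
{
    public int $queries = 0;
    public function query(string $sql): array { $this->queries++; return []; }
}

$db = new CountingDb();
$showIds = range(1, 50);

// Naive approach: one query for the shows, then one per show per relation.
$db->query('SELECT * FROM shows');
foreach ($showIds as $id) {
    $db->query("SELECT * FROM segments WHERE show_id = $id");
    $db->query("SELECT * FROM guests WHERE show_id = $id");
    $db->query("SELECT * FROM timings WHERE show_id = $id");
}
$naive = $db->queries; // 1 + 50 * 3 = 151

// Eager-loading approach: one bulk query per relation using WHERE IN.
$db->queries = 0;
$in = implode(',', $showIds);
$db->query('SELECT * FROM shows');
$db->query("SELECT * FROM segments WHERE show_id IN ($in)");
$db->query("SELECT * FROM guests WHERE show_id IN ($in)");
$db->query("SELECT * FROM timings WHERE show_id IN ($in)");
$eager = $db->queries; // 4

printf("naive: %d queries, eager: %d queries\n", $naive, $eager);
```

In Laravel itself the fix is eager loading, e.g. `Show::with(['segments', 'guests', 'timings'])->get()` (relation names illustrative), which issues exactly the bulk `WHERE IN` queries shown above.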
Missing Indexes
A database index is like the index in a book. Without it, the database has to read every single row in a table to find what it's looking for (called a full table scan). With an index, it jumps straight to the relevant data.
Several frequently-queried columns in the Cuez database were missing indexes. Every time the API needed to look up shows by date, or filter segments by type, the database was scanning the entire table. Adding indexes turned queries that took 300-400 milliseconds into queries that took 10-20 milliseconds.
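In Laravel, adding an index is a small migration. This sketch uses assumed table and column names (`shows.air_date`, `segments.type`), chosen to mirror the lookups described above:

```php
<?php
// Sketch only: table and column names are assumptions, not the Cuez schema.
use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up(): void
    {
        // Index the columns the API filters on most often, so lookups
        // by date or type no longer require a full table scan.
        Schema::table('shows', function (Blueprint $table) {
            $table->index('air_date');
        });
        Schema::table('segments', function (Blueprint $table) {
            $table->index('type');
        });
    }

    public function down(): void
    {
        Schema::table('shows', fn (Blueprint $table) => $table->dropIndex(['air_date']));
        Schema::table('segments', fn (Blueprint $table) => $table->dropIndex(['type']));
    }
};
```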
Over-Fetching Data
Some endpoints loaded entire records when they only needed a few fields. I rewrote key queries to select only what was needed and added pagination where endpoints were returning entire datasets instead of manageable pages.
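The shape of that fix, sketched with hypothetical model and column names:

```php
<?php
// Sketch only: model, columns, and page size are illustrative.

// Before: every column of every row, in one unbounded response.
$shows = Show::all();

// After: only the fields the endpoint actually returns, 25 rows per page.
$shows = Show::query()
    ->select(['id', 'title', 'air_date'])
    ->paginate(25);
```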
The Impact
Database optimizations collectively removed approximately 800 milliseconds from API response times. The N+1 fix alone was worth hundreds of milliseconds. And the efficiency gains meant the database server was doing less work overall, which freed up capacity for growth.
Step 5: Reducing the Dependency Footprint
After removing unused libraries (Step 2), there was still more to trim.
Direct vs. Transitive Dependencies
When you install a library, it often brings along its own dependencies — which bring their own dependencies. One popular PHP package might pull in 15 other packages you never directly chose. I mapped out the full dependency tree and found many of these transitive dependencies (the ones your dependencies depend on — like friends-of-friends) were duplicating functionality, locked to outdated versions, or far heavier than necessary.
The fix was to replace parent libraries with lighter alternatives, configure packages to share underlying dependencies instead of duplicating them, and inline small functions instead of loading large libraries for 20 lines of code.
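As an example of the last tactic, a one-call-site dependency on a string-utility package can often be replaced with a few lines of native PHP. The helper below is made up for illustration, not taken from Cuez:

```php
<?php
// Hypothetical example: a tiny word-truncation helper inlined instead of
// installing a whole utility library for one call site.
function str_limit_words(string $text, int $words, string $end = '…'): string
{
    // Split on runs of whitespace, keep the first $words words.
    $parts = preg_split('/\s+/', trim($text));
    if (count($parts) <= $words) {
        return $text;
    }
    return implode(' ', array_slice($parts, 0, $words)) . $end;
}

echo str_limit_words('Live rundown for the evening broadcast', 3), "\n";
// Prints: Live rundown for…
```

Twenty lines of owned code beats a transitive dependency tree you have to audit, update, and autoload forever.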
The Impact
Reducing the dependency footprint removed roughly 600 milliseconds from the total overhead and reduced the application's deployment size.
Step 6: The Framework Upgrade
The final step was upgrading Laravel itself — from an older version to Laravel 10.
Why Framework Versions Matter
Framework upgrades aren't just about new features. Each major version includes performance improvements in the core engine, optimized database layers, better caching mechanisms, and updated PHP version support — which means access to language-level speed gains. Laravel 10 specifically improved model serialization and route resolution, both of which directly affect API response times.
The Upgrade Process
I audited breaking changes, updated all dependencies for compatibility, ran the full test suite, benchmarked performance before and after, and deployed through a staged rollout (test environment first, then production). The upgrade also forced removal of deprecated patterns, further cleaning up the codebase.
The Impact
The framework upgrade, combined with the PHP version bump it enabled, contributed to the remaining performance gains and provided a modern, well-supported foundation for future development.
The Results
After completing all six steps, here's where Cuez landed:
| Metric | Before | After | Improvement |
|---|---|---|---|
| Average API response time | 3,000ms | 300ms | 90% faster |
| Database queries per request (typical) | 100-150+ | 5-15 | ~90% fewer |
| External dependencies | Dozens of unused/redundant packages | Clean, minimal dependency tree | ~40% reduction |
| Infrastructure cost | Baseline | ~40% lower | Server resources freed up |
| Concurrent user capacity | Limited | 10x headroom | Significant scalability gain |
The 90% improvement wasn't the result of one clever trick. It was the compound effect of systematically eliminating inefficiencies across the entire stack.
Timeline
The full optimization project took approximately 8-10 weeks, working alongside the existing development team. The improvements were rolled out incrementally — users started feeling the difference within the first few weeks as individual endpoints were optimized.
What This Means for Your Business
If you're a founder or business leader reading this, here's what this case study illustrates:
Speed Is a Retention Problem
Users don't file bug reports when your application is slow. They just use it less, switch to a competitor, or cancel their subscription. Slow software erodes trust in ways that only show up in your churn numbers.
The Fix Is Usually Not a Rebuild
The instinct when software is slow is often: "We need to rebuild it." That's almost always wrong and the most expensive option. In Cuez's case, the architecture was sound. The problem was accumulated cruft. Cleaning house was faster, cheaper, and less risky than starting over. For a broader look at when to optimize vs. rebuild, see my guide on website speed optimization and why every second costs you money.
Performance Work Pays for Itself
The ~40% infrastructure cost reduction alone justified the investment. But the real value was in user retention and scalability. A product that responds in 300ms feels modern. One that takes 3 seconds feels broken.
You Probably Have Similar Problems
If your web application has been in development for more than a year with multiple developers contributing — there are almost certainly unused dependencies, duplicated framework functionality, and unoptimized queries hiding in your codebase. These aren't edge cases. They're the norm.
FAQ
How do I know if my API has a performance problem?
If your users mention slowness, that's the most reliable signal. On the technical side, API response times above 500 milliseconds for standard data-fetching endpoints usually indicate room for improvement. Tools like New Relic, Datadog, or even Laravel Telescope can show you exactly how long each request takes and where the time is spent.
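If you want a number before installing an APM, a few lines of middleware will do. This is a generic Laravel sketch (the header name is my choice, not a standard):

```php
<?php
// Generic sketch: report how long each request took, as a response header.
use Closure;
use Illuminate\Http\Request;

class MeasureResponseTime
{
    public function handle(Request $request, Closure $next)
    {
        $start = microtime(true);
        $response = $next($request);
        $ms = round((microtime(true) - $start) * 1000, 1);

        // Surface the timing to the client; log it too if you want history.
        $response->headers->set('X-Response-Time-Ms', (string) $ms);
        return $response;
    }
}
// Register it in app/Http/Kernel.php's global $middleware array.
```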
Can't I just add more servers to fix slow API responses?
Throwing hardware at a software problem is expensive and temporary. If your code runs 151 database queries when it should run 4, doubling your server capacity doesn't fix the root cause — it just delays the cliff. Optimize the code first, then scale the infrastructure.
How much does API performance optimization cost?
For a typical mid-size SaaS application (10-50 API endpoints, one database, standard framework), expect $15K-$40K for a thorough audit and optimization. The return — in retained users, reduced infrastructure costs, and improved conversion — typically pays that back within 3-6 months.
Does this approach work for frameworks other than Laravel?
The principles are universal. Every framework (Django, Rails, Express, Spring) has built-in tools that teams under-utilize. Every codebase accumulates unused dependencies. Every database can be queried more efficiently. The methodology — audit, remove, replace, optimize — applies everywhere.
How is this different from a full application rewrite?
A rewrite means starting from scratch — months of development before you're back to feature parity. The approach I used at Cuez preserved the existing application, its features, and its data. The product kept running and improving throughout the process, with zero downtime for users.
What role does a Fractional CTO play in performance optimization?
A Fractional CTO brings the experience to diagnose these problems quickly and the authority to prioritize fixing them. Performance work often gets deprioritized in favor of new features — until users start churning. A senior technical leader who understands both the engineering and the business case ensures it actually gets done.
How long does it take to see results?
Individual endpoint optimizations show results immediately. A full codebase optimization project like Cuez typically takes 6-12 weeks, with incremental improvements visible throughout.
Conclusion & Next Steps
The Cuez project is a pattern I see repeated across the SaaS products I work on. Applications start fast, accumulate technical debt over time, and gradually slow down as unused libraries pile up, custom code proliferates, and database queries go unexamined. The good news is that these problems are fixable — systematically, methodically, and without rebuilding from scratch.
If your application is slower than it should be:
- Measure first. Get actual numbers on API response times, not just user impressions.
- Look at the full picture. Audit the entire stack — dependencies, custom code, database patterns, framework utilization.
- Prioritize by impact. Start with the highest-traffic endpoints.
- Consider a professional audit. An outside perspective often finds things internal teams have become blind to.
I've optimized backend systems for 250+ projects over 16 years. If your API response times are holding your product back, let's talk about what a focused optimization engagement would look like.
Author Bio
I'm Adriano Junior, a senior software engineer and consultant with 16 years of experience building and optimizing web applications. I led the Cuez API optimization that cut response times by 90% (3 seconds to 300ms), and I work with SaaS companies and startups to solve performance, architecture, and scaling challenges. Get in touch.
