
The reference stack we use for every SMB engagement, and why most SMBs need less infrastructure than they think

Most SMBs don't have an infrastructure problem. They have a complexity problem. This is the approach we bring to SMB engagements: start with the smallest stack that answers the real constraints, add complexity only when something specific hurts, and stop copying companies that don't share your scale. Kubernetes is almost always the wrong starting point.

Most SMBs don't have an infrastructure problem.

They have a complexity problem.

The pattern shows up constantly. A small team builds something that works, then decides to "do it properly" and ends up introducing Kubernetes, microservices, queues, and three layers of abstraction they don't actually need.

Six months later, nothing ships faster. Everything is harder to debug. And the system is now dependent on infrastructure the team barely has time to maintain.

This is not a rare case. This is the default.

This post covers how we approach SMB engagements, the stack we actually use, and why the right answer is usually much simpler than people expect.


We don't start with a stack

We start with a few questions we ask every single time.

Not a checklist. Actual questions that shape the entire design.

How many users do you have right now?

What breaks today when traffic spikes?

Who is going to be on call when something fails?

How fast do you need to ship changes?

Do you have any real compliance requirements yet?

That conversation alone usually cuts the problem in half.

Because most of the time, the answers sound like this:

A few thousand users.

One or two developers.

No dedicated infrastructure team.

No real compliance pressure yet.

And that immediately tells you something important.

You don't need a complex system.


The stack we actually use (most of the time)

For the majority of SMBs, the reference stack looks something like this:

  • Application: Next.js, Django, or Rails. Pick one and keep it simple. Usually a single service.
  • Compute: Containers on ECS or a simple platform service. No Kubernetes.
  • Database: Postgres on RDS or something similar. Not self-managed.
  • Storage: S3 or Cloudflare R2.
  • CDN: Cloudflare or CloudFront.
  • CI/CD: GitHub Actions.

That's it.

No service mesh. No event streaming platform. No ten microservices talking to each other over a network that now needs observability just to understand basic flows.

And this setup handles far more than people think.
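As a concrete sketch of the deploy half, the whole pipeline can fit in one GitHub Actions workflow. Every name here (the role secret, the `app` image, the `main` cluster, the service name, the task definition file) is a placeholder for illustration, not something a specific client runs:

```yaml
# .github/workflows/deploy.yml — hypothetical names throughout
name: deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # OIDC auth to AWS, no long-lived keys in the repo
      contents: read
    steps:
      - uses: actions/checkout@v4

      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_DEPLOY_ROLE }}  # placeholder secret
          aws-region: us-east-1

      - id: ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Build and push the image
        run: |
          IMAGE="${{ steps.ecr.outputs.registry }}/app:${GITHUB_SHA}"
          docker build -t "$IMAGE" .
          docker push "$IMAGE"

      - name: Deploy to ECS
        uses: aws-actions/amazon-ecs-deploy-task-definition@v2
        with:
          task-definition: taskdef.json  # checked-in task definition
          service: app                   # placeholder service name
          cluster: main                  # placeholder cluster name
          wait-for-service-stability: true
```

One repo, one service, one workflow file. When a deploy misbehaves, there is exactly one place to look.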


Where this breaks (and what we add)

Eventually, something starts to hurt. Usually one of these:

  • Database under load
  • Slow requests
  • Background jobs blocking user flows
  • Deployments taking too long

Now we add things, one at a time.

  • Caching layer
  • A queue like SQS
  • Read replicas
  • Better logging and metrics
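When "database under load" is the thing that hurts, the first addition is often not Redis or a read replica but a small in-process cache in front of the hot read. A minimal sketch in Python (`get_user` here is a stand-in for a real database call, not code from any engagement):

```python
import time
from functools import wraps

def ttl_cache(seconds):
    """Cache a function's results for `seconds`, then refetch."""
    def decorator(fn):
        store = {}  # args -> (expires_at, value)

        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and hit[0] > now:
                return hit[1]          # fresh enough: skip the expensive call
            value = fn(*args)
            store[args] = (now + seconds, value)
            return value
        return wrapper
    return decorator

db_calls = 0

@ttl_cache(seconds=60)
def get_user(user_id):
    # Stand-in for the real database read.
    global db_calls
    db_calls += 1
    return {"id": user_id}

get_user(1)
get_user(1)
print(db_calls)  # the "database" is hit once, not twice
```

Twenty lines, no new infrastructure to run, and it can be deleted the day it stops being needed. That is the shape most of these additions should take first.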

The important part is not what we add.

It's when we add it.

We wait until there is a real reason.


When Kubernetes actually makes sense

Very rarely at the beginning.

Kubernetes is powerful, but it comes with a cost. Not just in setup, but in everything that follows:

  • Upgrades
  • Networking
  • Debugging
  • On-call load

In a small team, that becomes the product.

We only recommend it when two things are true:

  • You actually have scale that justifies it.
  • You have a team that can operate it without slowing everything else down.

Most SMBs have neither.


Why teams overbuild

This is the part people don't say out loud.

Copying large companies

Teams look at companies like Netflix or Uber and assume that is the right model.

It isn't.

Those architectures solve problems at massive scale. Applying them early just creates work.

Planning for growth that does not exist yet

The common argument is "what if we grow fast?"

The better question is: what will break first?

Because something always breaks first. And it is almost never the thing people designed for.

Ignoring the cost of running the system

Every piece of infrastructure has a long tail.

  • It needs to be maintained.
  • It needs to be upgraded.
  • It needs to be debugged.

In a small team, that cost is not theoretical. It shows up immediately.

Choosing tools for the wrong reasons

Sometimes the stack is driven by trends or career goals.

It looks good on paper.

But it does not make the system better.


A real example

We were running Kubernetes in a small team.

At first, it felt like the right choice. Flexible, scalable, future-proof.

Over time, it became clear what we actually signed up for:

  • Constant upgrades.
  • API deprecations.
  • Networking issues that took time to understand.
  • Operational overhead that kept growing.

We were spending more time maintaining the platform than improving the product.

So we asked a simple question: do we actually need this?

The answer was no.

We moved to ECS. Same workloads. Less infrastructure to manage. Simpler deployments. Fewer moving parts.

Nothing broke. Nothing regressed.

We just stopped paying for complexity we did not need.


What this looks like in practice

A typical SMB comes in with an idea like this:

"We should split into microservices."

"We should use Kubernetes."

"We should prepare for scale."

We push back.

Start with one service.

Use a managed database.

Keep deployment simple.

Then we wait.

When something breaks, we fix that specific problem. Not the imaginary future version of it.
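"One service, managed database, simple deployment" can even be mirrored locally in a handful of lines. This compose file is an illustrative sketch (service and credential names are made up); in production the database would be RDS, not a container:

```yaml
# docker-compose.yml — local dev mirror of the "one service + Postgres" setup
services:
  app:
    build: .
    ports: ["8000:8000"]
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
    depends_on: [db]

  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
```

If the whole system fits in a file this size, a new developer can run it on day one and the on-call surface stays small.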


The rule we follow

Keep the system as simple as possible, for as long as possible.

Not forever. Just long enough that every layer you add is justified by something real.


Most SMBs don't fail because they lack advanced infrastructure. They fail because they introduce it too early and spend their time managing systems instead of building products.

If you take one thing from this, it's this:

Build for what you have today. Earn the complexity later.

Thinking about your stack?

If you're an SMB sitting on an architecture that feels heavier than your team can carry, we can help you cut it down. Free 30-minute call: we'll take an honest look at what you're running and tell you what to keep, what to remove, and what to leave alone.

Book a 30-min call