Not every production service needs a platform story big enough to impress conference slides.

Sometimes the most senior infrastructure decision is choosing less.

This gateway is a good example. It is a stateless Node service with optional Redis, clear environment-driven configuration, and a Docker-based deployment flow. The deploy config in config/deploy.yml targets Kamal rather than reaching immediately for Kubernetes, and I think that is exactly the kind of pragmatic decision more teams should write about.
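I won't reproduce the repo's actual config here, but to make the shape of the decision concrete, a Kamal deploy.yml for a service like this typically looks something like the following. Every value below (service name, image, host, domain, env var names) is a placeholder, not taken from the repo:

```yaml
# Hypothetical sketch of a Kamal deploy config for a stateless gateway.
# All names and addresses are illustrative placeholders.
service: gateway
image: example/gateway

servers:
  web:
    - 203.0.113.10          # one explicit target host

proxy:
  ssl: true                  # automatic TLS via the built-in proxy
  host: gateway.example.com

registry:
  username: example
  password:
    - KAMAL_REGISTRY_PASSWORD  # injected from secrets, never committed

env:
  clear:
    PORT: 8080
  secret:
    - UPSTREAM_API_KEY
```

The whole platform fits on one screen: image, host, TLS, secrets, runtime env. That is most of what a service of this shape actually needs.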

Start with the workload, not the trend

The service does not manage durable business state. It does not run a complicated job system. It does not require service discovery across dozens of internal components. It listens on one port, calls a few upstreams, and scales horizontally in a mostly boring way.

That is not an insult. It is a gift.

Workloads like this are perfect candidates for simple deployment models because the application already has the right shape:

  • stateless request handling
  • config via environment variables
  • easy containerization
  • no tight coupling to node-local disk
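To make that shape concrete, here is a minimal sketch of the pattern: configuration read from the environment in one place, a health endpoint, and no state that outlives a request. The names here are illustrative assumptions, not identifiers from the repo:

```typescript
import * as http from "node:http";

// Illustrative config shape; variable names are assumptions, not the repo's.
interface GatewayConfig {
  port: number;
  upstreamUrl: string;
  redisUrl?: string; // optional: absent means "run without Redis"
}

// Read configuration from the environment in one place, with explicit defaults.
export function readConfig(env: Record<string, string | undefined>): GatewayConfig {
  return {
    port: Number(env.PORT ?? "8080"),
    upstreamUrl: env.UPSTREAM_URL ?? "http://localhost:9000",
    redisUrl: env.REDIS_URL, // undefined when unset
  };
}

// A stateless handler: nothing survives between requests except the config.
export function createServer(config: GatewayConfig): http.Server {
  return http.createServer((req, res) => {
    if (req.url === "/healthz") {
      res.writeHead(200, { "content-type": "application/json" });
      res.end(JSON.stringify({ ok: true }));
      return;
    }
    // Real routing logic (proxying to config.upstreamUrl) would go here.
    res.writeHead(404);
    res.end();
  });
}
```

An app built this way runs identically under Kamal, Kubernetes, or a bare `docker run`, which is the point.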

The Dockerfile reflects that simplicity. The image is straightforward, startup is explicit, and the deployment config focuses on the things that matter: image registry, target host, SSL proxying, secrets, and runtime env.
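Again, this is the typical shape rather than the repo's actual file, but "straightforward image, explicit startup" usually means something like:

```dockerfile
# Hypothetical sketch of a minimal Node gateway image; not the repo's actual Dockerfile.
FROM node:20-slim

WORKDIR /app

# Install production dependencies first so this layer caches well.
COPY package*.json ./
RUN npm ci --omit=dev

COPY . .

ENV NODE_ENV=production
EXPOSE 8080

# Explicit startup: one process, all config from the environment.
CMD ["node", "server.js"]
```

No entrypoint gymnastics, no baked-in secrets, no surprises at boot.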

Why a smaller platform can be the better platform

Kubernetes solves real problems. It also introduces a lot of surface area: manifests, controllers, ingress decisions, secret management conventions, observability plumbing, rollout policy, and operational overhead that teams often underestimate.

For a service at this stage, Kamal offers a simpler path:

  • container-based deploys
  • explicit host targeting
  • manageable secret injection
  • enough structure for repeatability without a full control plane

That is often the right tradeoff when the app itself is still evolving. You get a production deployment workflow without committing to infrastructure complexity you may not need yet.

The design still leaves room to grow

What I like about this repo is that the service is not painted into a corner. The app is already containerized. Redis is optional and can be externalized. Config is environment driven. Health and metrics endpoints exist. Those are all good portability decisions regardless of the scheduler.
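"Redis is optional" is worth dwelling on, because it is a design choice, not an accident. The pattern is to depend on a small cache interface rather than on Redis directly. This is my own sketch of that pattern, with assumed names, not code from the repo:

```typescript
// Illustrative sketch of "Redis is optional": call sites depend on a small
// cache interface, not on a Redis client. Names are assumptions, not the repo's.
interface Cache {
  get(key: string): Promise<string | undefined>;
  set(key: string, value: string, ttlMs: number): Promise<void>;
}

// Default backend: a per-process in-memory cache with TTL expiry,
// fine for a single stateless node.
class MemoryCache implements Cache {
  private store = new Map<string, { value: string; expiresAt: number }>();

  async get(key: string): Promise<string | undefined> {
    const entry = this.store.get(key);
    if (!entry || entry.expiresAt < Date.now()) return undefined;
    return entry.value;
  }

  async set(key: string, value: string, ttlMs: number): Promise<void> {
    this.store.set(key, { value, expiresAt: Date.now() + ttlMs });
  }
}

// Pick the backend from the environment. A Redis-backed Cache could be
// returned here when a URL is set, without touching any call sites.
export function createCache(redisUrl?: string): Cache {
  if (redisUrl) {
    // e.g. return new RedisCache(redisUrl) — omitted to keep this self-contained
  }
  return new MemoryCache();
}
```

Because the choice is made in one factory function, externalizing Redis later is a config change, not a refactor, on any scheduler.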

In other words, choosing Kamal here does not mean choosing against future scale. It means deferring platform complexity until there is evidence you need it.

That distinction matters. A lot of teams frame these decisions as ambition versus simplicity. The better framing is timing versus cost.

What you give up

Of course there are tradeoffs.

A lighter deploy model may give you less built-in support for:

  • sophisticated autoscaling policies
  • standardized multi-node orchestration patterns
  • first-class service mesh integrations
  • broad internal platform consistency if the rest of the org is already on Kubernetes

Those are real considerations. But they are only benefits if your team will actually use them and support them well.

What I would watch for

The triggers that would make me revisit the platform choice are pretty clear:

  • traffic growth that makes rollout and capacity management significantly harder
  • more internal dependencies and sidecars around the gateway
  • stronger requirements for multi-region active-active operations
  • an organizational shift toward standardized platform tooling

Until then, the simpler path is often the more responsible one.

The lesson

Infrastructure decisions should reflect the shape of the application and the maturity of the team operating it.

For a stateless routing gateway, choosing Kamal over Kubernetes is not “less serious.” It is a bet that operational focus matters more than platform fashion. In my experience, that is often the senior move: build the application so it can grow, but keep the deployment story as small as reality allows.