Redis starts innocently in most web apps.

You add it for one thing:

  • a task queue broker
  • a cache
  • maybe rate limiting

Then enough practical needs pile up, and suddenly Redis is not just an infrastructure dependency. It is the place where your product stores operational intent.

That happened in Trek Point.

Redis ended up backing:

  • Celery broker and result backend
  • rate limiting
  • feature flags
  • runtime product settings

At that point, it is fair to say Redis became a small control plane for the application.

Why This Happened

Because it was useful.

That is the honest answer.

There are a lot of low-friction product and operational decisions that do not justify a new table, admin surface, or deployment just to change one value. Redis made those decisions easy to externalize.

Examples:

  • toggling feature availability across workers
  • changing a free-tier GPX export limit without a code deploy
  • keeping rate limiting shared across processes
  • running async jobs without introducing another moving part

For a small team shipping quickly, that is a great trade.
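To make the "rate limiting shared across processes" point concrete, here is a minimal sketch of a fixed-window limiter backed by Redis. The key scheme and the `FakeRedis` stand-in are illustrative assumptions, not Trek Point's actual code; in the app, `r` would be a real `redis.Redis` client, where `INCR` is atomic across every web process and worker.

```python
import time

class FakeRedis:
    """In-memory stand-in for redis.Redis so this sketch runs without
    a server. In production, INCR is atomic across processes; TTLs
    are ignored here for brevity."""
    def __init__(self):
        self.store = {}
    def incr(self, key):
        self.store[key] = self.store.get(key, 0) + 1
        return self.store[key]
    def expire(self, key, ttl):
        pass  # a real client would set the key's time-to-live

def allow_request(r, client_id, limit=60, window=60, now=None):
    """Fixed-window rate limit. Because the counter lives in Redis,
    every process that serves this client shares the same count."""
    now = time.time() if now is None else now
    key = f"ratelimit:{client_id}:{int(now) // window}"
    count = r.incr(key)        # atomic increment, shared state
    if count == 1:
        r.expire(key, window)  # first hit starts the window's TTL
    return count <= limit

r = FakeRedis()
results = [allow_request(r, "client-1", limit=3, now=0) for _ in range(4)]
# the fourth call exceeds the limit and is rejected
```

The same shape works unchanged whether the caller is a web request handler or a Celery worker, which is exactly why the counter has to live outside any single process.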

The Best Part of This Pattern

Redis let us centralize controls that benefit from being:

  • shared across processes
  • fast to read
  • easy to mutate operationally
  • resilient to deploy boundaries

I especially like runtime settings in this category. Being able to change something like a free-tier threshold without redeploying is not glamorous, but it is exactly the kind of leverage product teams need when they are learning.

That is the difference between configuration as code and configuration as a live product control.
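A live product control like that can be sketched in a few lines. The setting name, key prefix, and default value below are hypothetical (the post does not show Trek Point's real names); the shape is the point: code defaults stay the source of truth, and Redis holds only the operational override.

```python
import json

# Hypothetical setting name and default, for illustration only.
DEFAULTS = {"free_tier_gpx_export_limit": 5}

def get_setting(r, name):
    """Prefer the live override in Redis; fall back to the code
    default when there is no override or Redis cannot be read."""
    try:
        raw = r.get(f"settings:{name}")
        if raw is not None:
            return json.loads(raw)
    except Exception:
        pass  # treat control-plane trouble like "no override"
    return DEFAULTS[name]

def set_setting(r, name, value):
    """The operational lever: change the live value, no deploy."""
    r.set(f"settings:{name}", json.dumps(value))

class FakeRedis:
    """In-memory stand-in for redis.Redis so the sketch runs alone."""
    def __init__(self):
        self.store = {}
    def get(self, key):
        return self.store.get(key)
    def set(self, key, value):
        self.store[key] = value

r = FakeRedis()
assert get_setting(r, "free_tier_gpx_export_limit") == 5   # code default
set_setting(r, "free_tier_gpx_export_limit", 10)           # live change
assert get_setting(r, "free_tier_gpx_export_limit") == 10  # takes effect
```

Deleting the key rolls the value back to the code default, which keeps the deploy-time configuration meaningful rather than letting Redis silently become the only record of intent.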

The Risk Is Blast Radius

The downside is also obvious once you say it out loud:

if one Redis dependency backs jobs, rate limits, feature flags, and runtime settings, then Redis trouble can degrade multiple unrelated parts of the product at once.

That is the real trade.

It is not a problem when Redis is healthy. It is a design characteristic when Redis is not.

In Trek Point, that means one dependency influences:

  • whether Celery work flows
  • whether API clients hit limits properly
  • whether admin-controlled feature availability is honored
  • whether runtime quota settings fall back to defaults

That is more than “just caching.”

Failure Defaults Matter a Lot

One thing I appreciated in this codebase is that some Redis-backed behaviors degrade intentionally.

For example:

  • feature flags default to enabled when Redis cannot be read
  • runtime settings fall back to code defaults

Those defaults are not arbitrary. They tell you what kind of failure the team considered safer:

  • keep the product broadly available
  • avoid hard failures on transient control-plane issues

That is a reasonable bias for user-facing product behavior, even if it means you temporarily lose some operational precision.
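The fail-open flag behavior described above can be sketched as follows. The flag name and key prefix are assumptions, and the string comparison assumes a client configured with `decode_responses=True`; the essential property is that a feature is only off when Redis explicitly says so.

```python
def feature_enabled(r, flag):
    """Fail open: unreadable Redis keeps the feature available.
    Only an explicit "off" value disables it."""
    try:
        value = r.get(f"feature:{flag}")
    except Exception:
        return True  # the safer failure: keep the product up
    return value != "off"  # absent key => enabled by default

class FakeRedis:
    """In-memory stand-in for redis.Redis."""
    def __init__(self):
        self.store = {}
    def get(self, key):
        return self.store.get(key)
    def set(self, key, value):
        self.store[key] = value

class BrokenRedis:
    """Simulates an unreachable Redis."""
    def get(self, key):
        raise ConnectionError("redis unavailable")

r = FakeRedis()
assert feature_enabled(r, "gpx_export")           # no key: default on
r.set("feature:gpx_export", "off")
assert not feature_enabled(r, "gpx_export")       # explicit off wins
assert feature_enabled(BrokenRedis(), "gpx_export")  # fails open
```

The `except Exception: return True` branch is the whole design decision in one line: a transient control-plane outage degrades operational precision, not user-facing availability.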

Why I Still Like This Design

I would not dismiss this as a hack. For a product at Trek Point’s stage, this is exactly the kind of internal platform decision that keeps velocity high.

Redis is doing real work here:

  • distributing state cheaply
  • giving product and ops levers without schema churn
  • supporting both infrastructure and business behavior

That is not architectural laziness. That is a practical platform layer.

Where I’d Draw the Line Later

I would keep this pattern for a while, but I would watch for signals that it is time to split responsibilities:

  • different availability expectations for jobs versus feature controls
  • operational confusion about which Redis failures affect which subsystems
  • more auditability required around settings changes
  • a need for richer administrative history or validation

That is when a tiny control plane starts wanting stronger product boundaries of its own.

The Bigger Lesson

Infrastructure choices often become product choices gradually.

Redis is a good example because it is so easy to reach for. Over time it becomes the place where your system stores shared decisions, not just shared data.

That is what happened for Trek Point. We did not set out to build a control plane. We set out to solve a handful of practical problems quickly and ended up with a lightweight operational substrate that the rest of the app now relies on.

That is worth recognizing explicitly, because once a dependency is carrying control-plane semantics, you should operate it with much more respect than a generic cache.