Most API gateway articles are really about request forwarding. This one is about orchestration.

The system behind this repo is a Node/Express gateway that sits in front of four region-scoped Valhalla clusters. Clients do not choose na, sa, eu, or oc. They send one request to one API surface, and the gateway decides where that request belongs. In practice, that means the gateway is not just a network hop. It is the product boundary.

The heart of the design lives in src/services/gatewayService.js. The unifiedProxy() function does four important jobs in a single flow:

  1. Extract coordinates from different payload shapes.
  2. Generate a stable cache key for the request.
  3. Resolve which region or regions the request belongs to.
  4. Decide whether to forward to one Valhalla cluster or orchestrate a cross-region fallback.

That design is interesting because it keeps the public API simple while allowing the backend topology to evolve independently. The clients do not need to know how the routing data is partitioned. That abstraction matters a lot once your infrastructure stops being monolithic.

Why a thin proxy was not enough

A thin reverse proxy works when upstreams are interchangeable. That was not the case here. Each Valhalla cluster only knows about a subset of the world, and the correct destination depends on the request payload itself. The system has to inspect coordinates before it can decide where to send traffic.

That is why the gateway owns coordinate extraction with helpers in src/utils/coordinates.js. It supports the locations, shape, sources, and targets payload fields, which means the API can front multiple Valhalla endpoints without hard-coding coordinate parsing in each route handler. This looks small, but it is one of the most important DX decisions in the codebase. The complexity lives once, close to the gateway boundary, instead of leaking into every handler and every client.
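A minimal sketch of such an extractor, assuming Valhalla's documented payload shapes (the real helpers in src/utils/coordinates.js may normalize more fields than this):

```javascript
// Hypothetical coordinate extractor in the spirit of src/utils/coordinates.js.
// Every request type funnels through the same normalization, so region
// logic never has to care whether it is a route or a matrix request.
const COORD_FIELDS = ["locations", "shape", "sources", "targets"];

function extractCoordinates(body) {
  const coords = [];
  for (const field of COORD_FIELDS) {
    const entries = body[field];
    if (!Array.isArray(entries)) continue;
    for (const entry of entries) {
      // Keep only well-formed {lat, lon} pairs; tag where each came from.
      if (entry && typeof entry.lat === "number" && typeof entry.lon === "number") {
        coords.push({ lat: entry.lat, lon: entry.lon, source: field });
      }
    }
  }
  return coords;
}
```

Because every route handler calls the same extractor, a matrix request (sources/targets) and a route request (locations) hit identical region-resolution logic downstream.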

Why the gateway became the product

Once the service started making region decisions, caching results, and hiding backend fragmentation, it stopped being just infra. It became the thing developers integrate with.

You can see that in src/routes/valhallaRoutes.js and src/routes/mapboxRoutes.js. The gateway exposes both native Valhalla-style POST endpoints and a Mapbox-compatible Directions endpoint. That is a product move, not an implementation detail. It tells you the service is trying to satisfy different kinds of consumers while preserving a single operational core.

This is also why I like the lack of a database in this service. The gateway is intentionally stateless. It owns orchestration, not durable business state. That makes it easier to deploy, easier to scale horizontally, and easier to reason about when debugging request flow.

Tradeoffs in centralizing orchestration

There is a cost to this design. unifiedProxy() becomes a very powerful function very quickly. It is convenient because every request path shares the same behavior, but it can also turn into a bottleneck for change. When the gateway is responsible for cache semantics, region inference, fallback policy, and error shape, every new feature wants to land in the same place.

That tradeoff is worth it early because it prevents logic from fragmenting across routes. But if the product keeps growing, I would probably split the orchestration into smaller policy modules: cache policy, region selection policy, and fallback policy. Not because the current design is wrong, but because the current design is succeeding. Consolidated logic is great until it becomes the only place you are afraid to touch.
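One possible shape for that split, with entirely hypothetical module names and policy rules, just to show the seams: each policy owns one decision, and the orchestrator composes them instead of inlining everything:

```javascript
// Hypothetical policy modules; names and rules are illustrative only.

const cachePolicy = {
  keyFor: (body) => JSON.stringify(body), // swap in a canonical hash later
  // Example rule: cross-region results are more volatile, cache them less.
  ttlFor: (regions) => (regions.length === 1 ? 300 : 60), // seconds
};

const regionPolicy = {
  // Given coordinates, return candidate regions in preference order.
  // Deliberately crude: anything west of -30 degrees tries na first.
  resolve: (coords) =>
    coords.some((c) => c.lon < -30) ? ["na", "eu"] : ["eu"],
};

const fallbackPolicy = {
  // Try each candidate region in order; the upstream call is injected,
  // so this module is trivially unit-testable with a fake.
  async execute(regions, callUpstream) {
    let lastErr;
    for (const region of regions) {
      try {
        return await callUpstream(region);
      } catch (err) {
        lastErr = err;
      }
    }
    throw lastErr;
  },
};
```

The point is not these particular rules; it is that each policy can be tested and changed in isolation while the orchestration path stays a thin composition.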

What I like about this architecture

The most senior thing in this codebase is its restraint. The service does not introduce a message bus, workflow engine, or internal platform just to route requests between clusters. It uses a small number of clear layers:

  • Express for the HTTP boundary
  • middleware for auth and request context
  • services for orchestration and upstream behavior
  • config-driven region endpoints

That keeps the mental model tight. A request comes in, the gateway identifies what world it belongs to, applies caching and fallback rules, and returns one response. The underlying topology can stay messy. The client experience stays clean.

That is exactly what good gateway design should do.

What I would do next

If this service grew into a larger platform, I would focus on three things next:

  • add latency histograms and cache hit metrics so the orchestration decisions are observable, not just correct
  • make the cache key canonical so equivalent JSON payloads cannot fragment the cache
  • extract the routing policy into testable modules before the single orchestration path gets too crowded

The big lesson is simple: when your backend is geographically fragmented, the best developer experience is often a smart gateway, not a smarter client. If you can hide infrastructure complexity behind one stable API contract, you create room to scale the platform without forcing every integrator to relearn your topology.