When people hear “route-planning SaaS,” they often assume the backend must already be split into map services, billing services, auth services, and some kind of developer platform. That sounds neat on a diagram. It is also an easy way to spend months building organizational overhead before the product earns it.

We kept Trek Point as a single Flask application with clear feature boundaries for much longer than architecture fashion would recommend. That was not because we missed the microservices memo. It was because the real complexity in this product lives in the joins:

  • route planning feeds exports
  • exports feed activities
  • activities feed sharing and previews
  • billing gates planner capabilities
  • org subscriptions change individual entitlements
  • OAuth scopes expose the same domain model to external clients

Splitting those boundaries too early would have moved the complexity from Python imports to network calls without reducing the actual coordination cost.

The Shape of the Monolith

The app is built around a conventional Flask app factory. That part is not novel. What mattered was being disciplined about module boundaries inside the monolith.

At startup we wire together:

  • feature blueprints like maps, billing, orgs, oauth_provider, public_api, and page
  • shared infrastructure like SQLAlchemy, Babel, Flask-Login, CSRF, and rate limiting
  • operational concerns like Sentry and OpenTelemetry
  • async execution via Celery running inside the same application context
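That wiring can be sketched as a conventional app factory. This is a minimal illustration, not Trek Point's actual code: the blueprint names are borrowed from the list above, and the real factory also configures SQLAlchemy, Babel, Flask-Login, CSRF, rate limiting, Sentry, OpenTelemetry, and Celery.

```python
# Sketch of an app factory in this style. Blueprint names and layout are
# illustrative; in the real app each blueprint lives in its own feature
# package, and shared infrastructure is initialized here too.
from flask import Flask, Blueprint

maps = Blueprint("maps", __name__, url_prefix="/maps")
billing = Blueprint("billing", __name__, url_prefix="/billing")
orgs = Blueprint("orgs", __name__, url_prefix="/orgs")
public_api = Blueprint("public_api", __name__, url_prefix="/api/v1")


def create_app(config=None):
    app = Flask(__name__)
    app.config.update(config or {})

    # One place to compose cross-cutting behavior: every feature
    # blueprint is registered against the same application object.
    for bp in (maps, billing, orgs, public_api):
        app.register_blueprint(bp)

    return app
```

Because everything hangs off one `Flask` instance, adding a new surface means registering one more blueprint, not standing up a new service.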

That gave us one deployment unit, one request context model, and one place to compose cross-cutting behavior. It also meant adding something like OAuth or subdomain-aware CORS did not require building a second internal platform just to talk to the first one.

Why This Worked for a Product Like Ours

Trek Point is not a CRUD app with isolated admin features. It is a product where small UX choices often cross three or four domains.

Take something as simple as FIT export from the route planner. On paper that sounds like “maps.” In practice it touches:

  • planner request validation
  • export size constraints
  • billing entitlements
  • user identity
  • file generation
  • client-side UX expectations
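In one process, those concerns compose as ordinary function calls. The sketch below is hypothetical — `User`, `fit_export_limit_km`, and `generate_fit` are illustrative names, not Trek Point's real API — but it shows the shape: validation, entitlement, size limits, and generation in one call path.

```python
# Hedged sketch of how one "simple" FIT export crosses domains in-process.
# All names here are illustrative stand-ins.
from dataclasses import dataclass


@dataclass
class User:
    id: int
    plan: str  # "free" or "pro"


def fit_export_limit_km(plan):
    # Billing owns this entitlement; the planner just asks.
    return 50.0 if plan == "free" else 10_000.0


def generate_fit(points):
    # Stand-in for real FIT file encoding.
    return b"FIT" + len(points).to_bytes(4, "big")


def export_route(user, points, length_km):
    # Planner request validation.
    if len(points) < 2:
        raise ValueError("route needs at least two points")
    # Billing entitlement and export size constraint, checked in the
    # same process as the planner logic itself.
    if length_km > fit_export_limit_km(user.plan):
        raise PermissionError("route exceeds plan's export limit")
    return generate_fit(points)
```

Split these into services and each `if` becomes a network call with its own failure modes, without making the decision tree any simpler.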

If those concerns live in separate services too early, the code gets “distributed” but the decision-making does not get simpler.

The monolith let us do a few valuable things quickly:

1. Reuse domain logic everywhere

The same saved-route and activity concepts power the web app, the public API, export flows, and background jobs. Keeping them close reduced drift.
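A toy example of what that reuse looks like, with hypothetical names: one domain function that every surface calls, so the web app and the public API cannot disagree about what a route summary is.

```python
# Illustrative sketch: one domain function reused by multiple surfaces.
# SavedRoute, route_summary, and the wrappers are hypothetical names.
from dataclasses import dataclass


@dataclass
class SavedRoute:
    id: int
    name: str
    distance_km: float


def route_summary(route):
    # Single source of truth for how a route is summarized.
    return {"id": route.id, "name": route.name, "distance_km": route.distance_km}


# The web view, the public API, and a Celery task would all call the
# same function, so the surfaces cannot drift apart.
def web_view(route):
    return {"page": "route", **route_summary(route)}


def api_response(route):
    return {"data": route_summary(route)}
```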

2. Ship product surfaces incrementally

The developer platform did not require a new auth service. The org billing model did not require a billing rewrite. We could add capabilities by extending existing boundaries.

3. Keep debugging local

When a user hit a paywall, uploaded a broken GPX, or saw a billing inconsistency, we could trace the full path in one codebase. That matters more than people admit.

Where the Monolith Started to Show Stress

I do not want to romanticize it. Monoliths are cheap until they are not.

The pressure points in Trek Point are exactly the ones you would expect in a shipped product:

  • cross-blueprint coupling, especially where maps needs billing and org context
  • growing operational blast radius when one deploy affects marketing pages, OAuth, and async activity processing
  • versioning pressure on the public API because it reuses internal domain logic
  • more discipline required around imports, testing, and migration safety

This is the real tradeoff: a monolith delays distributed systems complexity, but it does not eliminate the need for architecture. You still need boundaries. They just happen to be module boundaries instead of service boundaries.

The Rule We Followed

The best architectural rule we used was simple:

Keep product domains separate, but let them compose inside one process until the operational pain becomes dominant.

That is different from “put everything anywhere.” It means:

  • billing owns billing invariants
  • maps owns route and activity flows
  • orgs owns membership and seat semantics
  • OAuth owns client and token behavior
  • shared libraries own infrastructure concerns like storage, locale routing, and telemetry
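Concretely, "owning an invariant" means other modules ask rather than re-derive. A hedged sketch with hypothetical names: orgs owns seat semantics, and billing composes orgs' answer instead of counting membership rows itself.

```python
# Sketch of "each domain owns its invariants" (hypothetical names).

# orgs/service.py
def active_seats(members):
    # Seat semantics live here and nowhere else: only active,
    # non-guest members consume a seat.
    return sum(1 for m in members if m["active"] and not m.get("guest"))


# billing/service.py
def seat_charge(members, price_per_seat_cents):
    # Billing calls across the module boundary; it never
    # re-implements the seat rules.
    return active_seats(members) * price_per_seat_cents
```

The boundary is a function call today; if orgs ever becomes a service, the call site is already narrow enough to put a network behind it.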

That structure gave us most of the productivity benefits of a monolith without turning the codebase into one giant views.py.

What I Would Do Differently Now

If I were reshaping Trek Point today, I still would not start by pulling out services. I would first harden the internal seams:

  • formalize domain services around shared operations that are now spread across helpers
  • define stricter API contracts between public API handlers and internal models
  • make migrations and operational workflows more explicit
  • isolate the highest-churn async/media pipelines before splitting calmer business domains
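The second point — stricter contracts between public API handlers and internal models — can be as simple as a frozen DTO at the seam. A sketch under assumed names (`RouteModel`, `RouteDTO` are illustrative):

```python
# Hedged sketch of a stricter seam: the API handler returns a frozen
# DTO, never the internal model. Names are hypothetical.
from dataclasses import dataclass, asdict


class RouteModel:
    # Stand-in for an internal ORM model.
    def __init__(self):
        self.id = 7
        self.name = "Ridge Loop"
        self.internal_flags = {"reindex": True}  # must never leak to clients


@dataclass(frozen=True)
class RouteDTO:
    id: int
    name: str

    @classmethod
    def from_model(cls, m):
        # The contract is explicit: only these fields cross the seam.
        return cls(id=m.id, name=m.name)


def api_get_route(m):
    return asdict(RouteDTO.from_model(m))
```

With the contract spelled out, internal model changes stop being silent public API changes, which is exactly the versioning pressure described above.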

That last point matters. Teams often extract billing first because it sounds important. In a product like this, the noisier boundary may actually be media processing and preview generation.

The Bigger Lesson

A monolith is not a lack of architecture. It is a bet that the hardest part of the product is still learning the domain, not coordinating distributed runtime boundaries.

For Trek Point, that bet paid off. We got to spend more time solving route-planning, export, billing, and integration problems, and less time building service choreography around a product that was still finding its shape.

I would rather have a monolith with honest boundaries than a fleet of small services hiding the same complexity behind HTTP.