From Monolith to Microservices: A Practical Migration Story
Everyone talks about microservices as if they're a checkbox. The reality is messier. Here's how we migrated a monolithic Java application to microservices without halting feature development.
The monolith-to-microservices migration is one of those things that sounds straightforward in architecture diagrams and becomes a multi-year odyssey in practice. I went through this at a previous company, and the experience taught me more about software engineering than any greenfield project ever has.
Why We Migrated (and Why You Might Not Need To)
Let me be clear: microservices are not inherently better than monoliths. Our monolith served us well for years. We migrated because we hit specific scaling bottlenecks: deploy times exceeded 45 minutes, a memory leak in the reporting module brought down the entire system, and teams were blocked on each other's release schedules. If you don't have these problems, a well-structured monolith is probably the right choice.
The decision to migrate should be driven by organizational pain, not architectural fashion. Conway's Law is real: your system architecture will eventually mirror your team structure. We had five teams working on one deployable artifact, and the coordination cost was killing our velocity.
The Strangler Fig Pattern: Our Best Friend
We didn't do a "big bang" rewrite. That's how projects die. Instead, we used the Strangler Fig pattern: new features were built as services from day one, and existing functionality was extracted incrementally. The monolith and services coexisted behind an API gateway that routed traffic based on the migration state.
# API gateway routing during migration
routes:
  /api/v1/users/*: user-service       # migrated
  /api/v1/reports/*: report-service   # migrated
  /api/v1/evaluations: monolith       # not yet migrated
  /api/v1/billing/*: monolith         # not yet migrated
Each extraction followed the same playbook: identify the bounded context, extract the data store, build the service, run both old and new paths in parallel with comparison testing, then cut over. This process took 2-4 weeks per service, and we migrated 12 services over 8 months.
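The "run both paths in parallel with comparison testing" step can be sketched as a small wrapper that serves the monolith's response while comparing it against the new service's. This is an illustrative sketch, not our actual code; the method and supplier names are hypothetical:

```java
import java.util.Objects;
import java.util.function.Supplier;

public class ParallelRun {
    // Serve the old (monolith) result; call the new service only to compare.
    static <T> T compareAndServe(Supplier<T> monolith, Supplier<T> service) {
        T oldResult = monolith.get();
        try {
            T newResult = service.get();
            if (!Objects.equals(oldResult, newResult)) {
                // Log for offline analysis; never fail the request over a diff.
                System.err.println("MISMATCH old=" + oldResult + " new=" + newResult);
            }
        } catch (RuntimeException e) {
            // A failure in the new path must not affect users before cutover.
            System.err.println("new path failed: " + e.getMessage());
        }
        return oldResult; // the monolith stays the source of truth until cutover
    }

    public static void main(String[] args) {
        // Hypothetical usage: both paths fetch the same user record.
        String served = compareAndServe(() -> "user-42", () -> "user-42");
        System.out.println(served);
    }
}
```

The key design choice is asymmetry: the new path's result and even its exceptions are observed but never surfaced, so comparison traffic carries no user-facing risk.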
Data Is the Hard Part
Everyone focuses on splitting the code. The real challenge is splitting the data. Our monolith had a single PostgreSQL database with 200+ tables and foreign keys everywhere. You can't just point two services at the same database — you lose the independence that makes microservices worthwhile.
We used a staged approach:
- Phase 1: Logical separation — each service gets its own schema within the same database
- Phase 2: Read path separation — services read from their own schema, with CDC (Change Data Capture) syncing data they need from other services
- Phase 3: Full separation — each service gets its own database instance, communicating only through APIs and events
Phase 2 was the most dangerous. CDC pipelines have lag, and lag means inconsistency. We spent significant effort building reconciliation jobs that detected and repaired drift between the source of truth and replicated data.
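A reconciliation job of the kind described can be sketched as a compare-and-repair pass. This is a simplified illustration, with in-memory maps standing in for per-row checksums queried from the source schema and the CDC replica:

```java
import java.util.HashMap;
import java.util.Map;

public class Reconciler {
    // source/replica map row id -> row checksum; returns the repaired replica.
    static Map<String, String> reconcile(Map<String, String> source,
                                         Map<String, String> replica) {
        Map<String, String> repaired = new HashMap<>(replica);
        for (Map.Entry<String, String> row : source.entrySet()) {
            if (!row.getValue().equals(repaired.get(row.getKey()))) {
                // Drift detected: re-copy the authoritative row.
                repaired.put(row.getKey(), row.getValue());
            }
        }
        // Drop rows deleted upstream but still lingering in the replica.
        repaired.keySet().retainAll(source.keySet());
        return repaired;
    }
}
```

In practice the comparison runs over checksummed batches rather than whole tables, and any row repaired more than once is a signal that the CDC pipeline itself needs attention.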
What I'd Do Differently
If I did this again, I'd invest in end-to-end contract testing from day one. Our biggest source of production incidents during migration wasn't service failures; it was subtle contract mismatches. A service expected a date string; the monolith sent a timestamp. The request succeeded, but the data was wrong. Contract testing with tools like Pact would have caught these before deployment.
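The date-versus-timestamp mismatch is easy to reproduce. A minimal consumer-side check along these lines (a hedged sketch, not Pact itself) would have flagged the bad payload before it reached production:

```java
import java.time.LocalDate;
import java.time.format.DateTimeParseException;

public class ContractCheck {
    // The consumer's contract: the field must be an ISO-8601 date string.
    static boolean matchesDateContract(String value) {
        try {
            LocalDate.parse(value); // e.g. "2024-05-01"
            return true;
        } catch (DateTimeParseException e) {
            return false; // an epoch timestamp like "1714521600000" fails here
        }
    }
}
```

Both payloads are valid strings and both requests succeed, which is exactly why this class of bug slips past integration tests that only assert on HTTP status codes.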
I'd also resist the urge to extract too many services too quickly. We ended up with a few services that were too small to justify their operational overhead. A service that gets deployed once a quarter and has one endpoint probably should have stayed in a neighboring service. The right number of services is the one where each service has a team that owns it and deploys it independently. No more, no less.