From Monolith to Microservices: An epic migration journey

Amit Gupta
7 min read · May 13, 2021


The challenge…

We live in an era where many enterprise-level systems still have a monolithic architecture. However, as such a system grows, the speed of development fails to keep pace with the scale of operations. Technology teams aspire to accelerate the pace of change while growing the number of teams, so that they can deliver value to the core business faster and at a reasonably lower cost. Hence there is a strong urge to slay the monolith into microservices in order to address the insufficient pace of system development. It is an epic journey that goes through multiple iterations before it gives satisfactory results. Embarking upon it most definitely constitutes a huge challenge and often requires significant investment.

The approach…

A big-bang migration, completely replacing a complex system with microservices in one go, carries a huge risk. It is prudent to evaluate how much improvement every step brings to the overall architecture, so an iterative approach is a must: evolve the architecture in small steps and assess after each one whether it brings you closer to the goal. An enterprise-level monolith is often composed of tightly integrated layers, so the cost of decoupling a piece of functionality as a potential microservice candidate can be high and requires considerable thought. Deciding which functionalities to decouple, and planning their incremental migration, are among the key challenges technology teams face. The most common approach is to gradually replace big functionalities with smaller microservices while minimizing the changes needed in the rest of the system, moving toward a more agile world where development and operations teams work in tandem to deliver small, loosely coupled bundles of software quickly and safely.

The Migration Strategy

Step 1: Stop digging the hole

The famous “law of holes” states: “if you find yourself in a hole, stop digging.” The first step requires the technology teams to stop making the existing monolith worse. Any new additions or services should be developed outside the monolithic system; this lets the new functionality scale when needed and ensures a faster pace of delivery.

Step 2: Identify the monolith functionality to be chipped

One of the most important and challenging steps is selecting the business functionality to be decomposed. One can start by listing parts of the code that are modified frequently, or functionality that needs to scale on demand. From that list, pick the candidate that is most loosely coupled from the rest. At first it is preferable to choose functionality that does not require direct user interaction.

It is also fairly important to define the boundary of the microservice. After extraction, the component should be deployable independently, alongside the monolith, and should have its own CI/CD pipeline and change control mechanism.

It’s certainly hard to figure out where to “cut” and how to evolve the software architecture correctly. Choosing which part of the monolith to chip away requires both art and science, and the brittle interdependencies across the various layers need to be identified early on.

Many architects prefer that any component that needs to scale on demand be loosely coupled with the overall system; such components are the usual choices to make separately deployable. Primary targets could be a notification engine, a reporting engine, an audit engine, and so on.

Step 3: Strangling the monolith and setting up a build pipeline

Tech teams often make the mistake of porting the existing implementation as-is into a separate service, partly out of bias toward already-written code (referred to as the “IKEA effect” in psychology) and partly for lack of capability to program in different languages. Strangling, or decomposing, the monolith should be treated as an opportunity for a technology refresh: implement the new service with the most appropriate tech stack. “Go” (or “golang”) has become a popular choice for writing microservices because it handles heavy loads well, offers better speed, and has first-class support for concurrency.
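Purely for illustration, a freshly extracted notification service in Go might start out as a small, independently deployable HTTP server; the endpoint, port, and payload here are hypothetical stand-ins, not a prescription:

```go
// A minimal sketch of a newly extracted notification service in Go.
// Endpoint, port, and payload shape are hypothetical.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type Notification struct {
	Recipient string `json:"recipient"`
	Message   string `json:"message"`
}

func notifyHandler(w http.ResponseWriter, r *http.Request) {
	var n Notification
	if err := json.NewDecoder(r.Body).Decode(&n); err != nil {
		http.Error(w, "invalid payload", http.StatusBadRequest)
		return
	}
	// In a real service, this is where the notification would be
	// enqueued or dispatched.
	log.Printf("notify %s: %s", n.Recipient, n.Message)
	w.WriteHeader(http.StatusAccepted)
}

func main() {
	http.HandleFunc("/notifications", notifyHandler)
	log.Fatal(http.ListenAndServe(":8081", nil))
}
```

Notably, Go’s net/http serves each request on its own goroutine, which is a large part of why it copes well with concurrent load.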

The Strangler fig pattern is considered one of the most popular and effective approaches for decomposing a monolith incrementally. At its core, it says that functionality should be extracted from the monolith as services that interact with it via RPC, REST, or messaging-based events. Over time, as the corresponding functionality (and associated code) inside the monolith is retired, the new microservices gradually “strangle” the existing codebase.

Figure 1: The Strangler fig pattern
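As a minimal sketch of the pattern (the host names and routed path are invented for the example), the strangler facade can be a thin reverse proxy that forwards the extracted functionality to the new service while everything else still reaches the monolith:

```go
// Sketch of a strangler facade: path-based routing between the
// monolith and a newly extracted service. URLs are hypothetical.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

func mustProxy(target string) *httputil.ReverseProxy {
	u, err := url.Parse(target)
	if err != nil {
		log.Fatal(err)
	}
	return httputil.NewSingleHostReverseProxy(u)
}

func main() {
	monolith := mustProxy("http://monolith.internal:8080")
	notifications := mustProxy("http://notifications.internal:8081")

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Route extracted functionality to the new service;
		// the monolith keeps handling everything else.
		if strings.HasPrefix(r.URL.Path, "/notifications") {
			notifications.ServeHTTP(w, r)
			return
		}
		monolith.ServeHTTP(w, r)
	})
	log.Fatal(http.ListenAndServe(":80", nil))
}
```

As more functionality is extracted, new routes are added here, and the monolith’s share of traffic shrinks until it can be retired.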

Another important aspect is figuring out how “micro” a microservice should be. Development and operations teams should answer this together: how many services can they independently monitor, deploy, and release? The optimum size of a microservice follows from that answer. Going “extreme micro” and failing to manage the result later can lead to big-time failure, so teams must be operationally ready to handle the new set of services. The infrastructure and operations teams must be aligned with the development process to set up a continuous delivery pipeline, automated infrastructure provisioning, enhanced monitoring and logging, and so on, and to package the new microservices in containers (Docker-based, for example) for deployment and scaling.

A gateway can also be set up with a reverse proxy engine like NGINX for transparent routing to back-end components. The proxy can additionally translate protocols when the new microservice supports a different one (for example gRPC) while the monolith supports REST/SOAP; it can be configured to transform requests and responses accordingly. An alternative approach is to implement the transformation logic inside the new microservice so as not to burden the proxy server.
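NGINX is the usual tool for this role; purely to make the transformation idea concrete, here is the same kind of request and response rewriting sketched as a Go proxy, with the backend URL, path mapping, and header names invented for the example:

```go
// Sketch: a gateway proxy that rewrites requests for a backend
// expecting a different path layout, and strips an internal header
// from responses. All names here are hypothetical.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

func main() {
	backend, err := url.Parse("http://reports.internal:9090")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(backend)

	// Wrap the default director to adapt requests to the new
	// service's API layout and tag them at the edge.
	base := proxy.Director
	proxy.Director = func(r *http.Request) {
		base(r)
		r.URL.Path = strings.Replace(r.URL.Path, "/reports", "/v2/reports", 1)
		r.Header.Set("X-Forwarded-By", "edge-gateway")
	}
	// Hide a backend-specific header from clients on the way back.
	proxy.ModifyResponse = func(resp *http.Response) error {
		resp.Header.Del("X-Internal-Version")
		return nil
	}

	log.Fatal(http.ListenAndServe(":80", proxy))
}
```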

Another huge challenge when creating a decoupled service is watching for the possibility of cascading failures in the system. Architects sometimes end up designing multiple REST-based microservices that call one another in sequence; under load, threads can pile up waiting on one service in the chain, turning it into a performance bottleneck, which is counterproductive. Architects must define a general rule for how many microservices may be called in sequence over a synchronous communication protocol. This usually pushes communication across several microservices toward an asynchronous, event-based style, which can directly affect business functionality, so it needs the utmost consideration when designing multiple microservices out of a complex monolith.
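One common safeguard, sketched below with an invented downstream URL and timeout value, is to put a hard deadline on every synchronous downstream call so that a slow service in the chain fails fast instead of accumulating blocked threads:

```go
// Sketch: bounding a synchronous downstream call with a context
// deadline so a slow dependency cannot hold requests indefinitely.
// The downstream URL and 500ms budget are hypothetical.
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"
	"time"
)

func fetchQuote(ctx context.Context) error {
	// Fail fast: give the downstream call at most 500ms.
	ctx, cancel := context.WithTimeout(ctx, 500*time.Millisecond)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet,
		"http://pricing.internal:8082/quote", nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return fmt.Errorf("downstream call failed or timed out: %w", err)
	}
	defer resp.Body.Close()
	return nil
}

func main() {
	if err := fetchQuote(context.Background()); err != nil {
		log.Println(err)
	}
}
```

Circuit breakers build on the same fail-fast idea when a dependency stays unhealthy for a while.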

Thus it is imperative to set up the build and deployment pipeline along with the very first decoupled microservice, so that it can be reused for future ones, making the transition smoother and faster for all teams.

Step 4: Decomposing the database

Every monolith system usually has data spread across multiple tables in a single database schema, with queries performing joins to fetch information from tables linked through foreign keys. Keeping a shared database leads to problems such as a lack of clarity about which parts of the schema can be changed safely and data inconsistency across services, on top of the obvious scalability issues. The best approach is to split the database apart, but that is far from a simple endeavor: one has to work through data synchronization, transaction integrity, join operations, and so on. It is still worth the effort, because it leads toward microservices that totally encapsulate their own data storage and retrieval mechanisms.
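To make the join problem concrete: a query that once joined orders and customers inside one schema becomes a service-to-service lookup after the split. A rough Go sketch, with the types and URL invented for illustration:

```go
// Sketch: replacing an in-database join with API composition once
// customer data moves behind its own service. Names are invented.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

type Customer struct {
	ID   string `json:"id"`
	Name string `json:"name"`
}

// Previously: SELECT ... FROM orders JOIN customers ON ...
// Now the order service fetches customer details over HTTP.
func fetchCustomer(id string) (*Customer, error) {
	resp, err := http.Get("http://customers.internal:8083/customers/" + id)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("customer service returned %d", resp.StatusCode)
	}
	var c Customer
	if err := json.NewDecoder(resp.Body).Decode(&c); err != nil {
		return nil, err
	}
	return &c, nil
}

func main() {
	c, err := fetchCustomer("42")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("order belongs to", c.Name)
}
```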

Giving each microservice control over its own data in a dedicated database is the preferred end state for a microservices architecture. Sometimes, though, the split is too challenging because more than one business functionality shares information that cannot be separated easily. In such a scenario an interim solution can be developed: the shared data is taken out together and wrapped in a service, a technique referred to as the “database wrapping service” pattern. Each co-dependent microservice calls this service to change or view the data, synchronously or asynchronously. This helps overcome the scalability issues and paves the way for splitting the data further until each microservice controls its own data completely, without interdependencies. Such microservices can then share or provide their data through inter-service communication mechanisms, typically via an event-driven approach.
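A minimal sketch of such a wrapping service in Go is shown below; the in-memory map merely stands in for the shared database, and the entity names are hypothetical:

```go
// Sketch of a database wrapping service: co-dependent services read
// and update shared data only through this API instead of reaching
// into the schema directly. The map stands in for the shared tables.
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"strings"
	"sync"
)

var (
	mu       sync.RWMutex
	balances = map[string]int{} // stand-in for the shared database
)

func handler(w http.ResponseWriter, r *http.Request) {
	account := strings.TrimPrefix(r.URL.Path, "/balances/")
	switch r.Method {
	case http.MethodGet:
		mu.RLock()
		defer mu.RUnlock()
		json.NewEncoder(w).Encode(map[string]int{account: balances[account]})
	case http.MethodPut:
		var body struct {
			Amount int `json:"amount"`
		}
		if err := json.NewDecoder(r.Body).Decode(&body); err != nil {
			http.Error(w, "bad request", http.StatusBadRequest)
			return
		}
		mu.Lock()
		defer mu.Unlock()
		balances[account] = body.Amount
		w.WriteHeader(http.StatusNoContent)
	default:
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
	}
}

func main() {
	http.HandleFunc("/balances/", handler)
	log.Fatal(http.ListenAndServe(":8084", nil))
}
```

Because every consumer now goes through one API, the underlying schema can later be split without touching the callers.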

Summary

An unnecessary switch may prove fatal. It is recommended to refactor into microservices only if the system has become too complex to operate, manage, and scale. Going extreme-micro enables an extremely fast pace of development but also creates operational issues in managing the overall system. An interim approach for huge, tightly coupled modules can therefore be to break them into separately deployable smaller modules first (sometimes called mini-services) and then split those further until they comply with the microservices architecture. This approach has often proved highly beneficial: a big module is split off from the monolith and made separately deployable, allowing it to scale on demand.

Correctly implementing a microservices architecture in a new application is hard, but migrating an existing monolith to microservices is even harder. It requires utmost perseverance in following an incremental approach and carefully evaluating each step to keep moving in the right direction, along with support and collaboration among the architects, operations team, development team, and product management. For those who tread the right path, it is a journey worthwhile.
