Microservices have become one of the biggest IT phenomena of the last few years, especially 2017. Many people were grappling with the question of why the software industry is ripping apart applications built over years and years of hard work. What changed that the industry is ready to say goodbye to the monolith and migrate to a microservices architecture?

For the last 20 years I have been watching the evolution of the IT industry with utter amazement. Never in the history of mankind has the world around us changed so fast. We have the biggest transport service provider, Uber, with no cars; the biggest accommodation service provider, Airbnb, with no properties of its own; and players like Amazon and Netflix have changed the world of retail and entertainment forever. And very soon humans will be competing with robots for employment.

IT has transformed from a support industry into THE industry.

And now you see the craze of corporations breaking apart their software applications and moving them to microservices. Why?

Let's step back a little to see what's going on here. What follows is not in exact chronological order, but it will make sense when put in context.

THE GREAT FINANCIAL CRISIS (2007-2008)

This story begins with the financial crisis of 2007-2008.
It was one of the biggest since the Great Depression of the 1930s.
The crisis started with the collapse of Northern Rock in the UK in 2007 and became a full-blown international crisis in September 2008 with the collapse of Lehman Brothers.
I was in Newcastle, UK, at the time. We saw half the stores close down in the Metrocentre, the biggest mall in the area.
While governments were busy saving the financial system, companies were looking for ways to cut costs and make their organisations lean.
One of the departments companies looked at for cost cutting was the IT (Information Technology) organisation. Executives had always complained that 85% of their IT budget was spent on "keeping the lights on".

By 2007, IT had adopted the object-oriented programming model well. The industry was doing well with Continuous Integration, as advocated by Martin Fowler. Distributed computing, supported by web services, was making it much easier for applications to communicate with each other.
However, the hardware side of the house was still trapped in older models. Companies maintained their own hardware out of reluctance to change and because of privacy, compliance, and security concerns. While executives had always complained about the high cost of maintaining the hardware, there was little they could do.

The financial crisis forced the industry to take a hard look at this. Companies spent a lot of money on hardware that was operational only during business hours and sat idle the rest of the time. Excess capacity was built in to handle peak performance and rush-hour traffic. You can understand the frustration of the executives.
The industry once again gathered courage and decided to pool its resources. Think of a server which serves different regions of the globe at different hours of the day; that way the hardware is used to its maximum.

However, installing and uninstalling an application is not an easy task. An application is tightly coupled to the underlying operating system (OS), which in turn holds the hardware.
To address this, let's see whether the operating system can be made independent of the underlying hardware, and look at the possibilities that brings. The new structure looks like the diagram below.

[Diagram: hypervisor architecture]

1. The hardware is controlled by a hypervisor (an OS of OSs). The hypervisor can host multiple instances of operating systems (now virtual operating systems). The application remains tied to its operating system, and the same hardware can host multiple operating systems, and in effect multiple applications.
2. The biggest opportunity is that the whole application is now an independent, isolated component that can be deployed as needed. If traffic shoots up, more instances of the application can be fired up; if traffic goes down, the number of instances can be reduced. This elasticity opens an immense opportunity for organisations to save money: pay only for what is used.

3. Also think of scenarios where different applications have different peak hours. This is an opportunity to maximize hardware utilization. An attendance-and-login application can have more instances during the early hours of the day, when employees come into the office; after peak hours, a minimum number of instances is good enough, and other resource-hungry applications can take that space.

4. One piece of hardware can host multiple applications across multiple time zones. While Japan is at the peak of its business activity, Europe is sleeping. The hardware can give maximum space to the Japanese applications and keep the European applications to a minimum, and vice versa.

5. An application can also be deployed to an altogether new piece of hardware without any fuss.
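The elasticity in points 2-4 boils down to a simple decision: derive the number of running instances from the current load. Here is a minimal sketch of that idea; all the numbers and the function name are hypothetical, not any real cloud provider's policy.

```python
# A hypothetical elasticity rule: scale out when traffic rises,
# scale in when it falls, within a fixed floor and ceiling.

def desired_instances(requests_per_min: int,
                      capacity_per_instance: int = 100,
                      min_instances: int = 1,
                      max_instances: int = 20) -> int:
    """Return how many application instances the current traffic needs."""
    needed = -(-requests_per_min // capacity_per_instance)  # ceiling division
    return max(min_instances, min(needed, max_instances))

# Peak hours: traffic shoots up, more instances are fired up.
print(desired_instances(1250))  # -> 13
# Off hours: pay only for what is used.
print(desired_instances(40))    # -> 1
```

A real platform layers cooldown timers and health checks on top of this, but the pay-for-what-you-use economics is exactly this rule applied continuously.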

These were enormous opportunities and, most importantly, businesses were looking for exactly this kind of solution.
Interestingly, the software to do this, the hypervisor, was already available. It had always been available; in fact, IBM had it as far back as 1967.
The age of virtualization had begun. VMware, Amazon, Google Cloud, Microsoft Azure and other players quickly jumped on the opportunity.
More and more companies started moving to virtual platforms. We now call it "The Cloud".

But wait, there was more to it. If hardware can be virtualized, then the platform can be virtualized, and then surely software can be virtualized too. This was the birth of Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS).

Have a look

[Diagram: IaaS vs PaaS vs SaaS]

Now it is possible to create just the application; the rest of the stack can be commissioned and decommissioned as required. Do you see the birth of companies like the Amazon store, Uber, Airbnb, WhatsApp, Netflix, and even the website where I am writing this blog? 🙂

But this was just one part of the puzzle. Though the industry made major strides in virtualization, it was not going to solve the agility issues the business was looking for. IT was struggling with its own limitations, and one of the top challenges was the delivery model.

THE SOFTWARE DELIVERY CHALLENGES

The software delivery model challenge was not new; the industry had been wrestling with it for quite some time.
Companies had expanded operations to different parts of the world in search of talent. Distributed development was the order of the day. The software components themselves were massively distributed, especially after being powered by web services. The world of software had become even messier and more complicated.
The IT industry had known for a long time that Waterfall was not working, and it was experimenting with different models like Spiral, Agile, and XP.
Martin Fowler and Grady Booch came up with the idea of Continuous Integration (CI), which proposed keeping the code integrated all the time and ready to build on the fly. The idea was quickly adopted by the industry, and SVN quickly became an industry standard. The IT industry became more agile, but it was still not fulfilling the business need for agility and speed.
Fowler then proposed the Continuous Delivery model on top of Continuous Integration: keep the application ready to be delivered at all times. It was a natural progression, but he himself was not sure how it would be adopted or how it would evolve. He visited India in 2009 and aired his concerns; I was in one of those sessions in Pune, listening. It took some time to understand those dimensions.
CI/CD required us to:
1. Automate application configuration, so that all the cluster servers have the required application and inter-application properties propagated and configured at all times.
2. Automate testing: all types of testing, including unit, integration, regression, pre-prod and UAT.
3. Keep the application ready to go live at the drop of a hat.
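The three requirements above can be pictured as a pipeline of stages where any failure blocks the release. Here is a minimal sketch of that idea; the stage names and the `run_pipeline` helper are illustrative, not the API of any real CI tool.

```python
# A toy CI/CD pipeline: run each stage in order and stop the moment one
# fails, so the application is only "ready to go live" when everything passes.
from typing import Callable, List, Tuple

def run_pipeline(stages: List[Tuple[str, Callable[[], bool]]]) -> bool:
    for name, stage in stages:
        ok = stage()
        print(f"{name}: {'passed' if ok else 'FAILED'}")
        if not ok:
            return False   # a failing stage blocks the release
    return True            # a green pipeline means: ready to deploy

stages = [
    ("configure", lambda: True),         # 1. propagate configuration everywhere
    ("unit tests", lambda: True),        # 2. automated testing at every level
    ("integration tests", lambda: True),
    ("deploy", lambda: True),            # 3. go live at the drop of a hat
]
print(run_pipeline(stages))  # -> True
```

Tools like Jenkins implement exactly this stage-by-stage, fail-fast shape, just with real build, test and deploy steps behind each stage.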

And even though it was a lot to ask, that was the way to go.
So efforts were on to put CI/CD into practice. Software like Jenkins, Git, Chef, Puppet and Selenium was evolving.
But, as Fowler suspected, the software systems were too complicated. The solution came from a different part of the industry.
The Architecture.

THE ARCHITECTURE

The engineers at Netflix realized that the biggest obstacle to application agility was the structure of the application itself. The existing single-process, multi-function architecture made applications too bulky, inflexible, and difficult to maintain. The centralized architecture gave them high inertia to evolve, and individual functional units had little or no room to maneuver.

On the delivery side, a change to one part of the application led to re-delivery of all the functionality. There was no way to deliver only the modified function.
And in my experience, on many occasions a defect fix in one part of the application caused defects in other parts. This unexpected but possible behavior kept the whole team on its toes, even when they were not visibly impacted.

This meant multiple layers of testing (including integration, regression, staging, and UAT) of the whole application.

A continuous delivery pipeline (CI/CD) had to do too much work to meet the regression, code coverage, and quality requirements, making it worthless in many cases.

Scaling of the application was homogeneous: you could not scale the most-used functional units independently, and time was not a consideration in scaling decisions.

Maintainability suffered the most. Applications were too complicated, and teams rarely understood the interconnections between components. As the days passed and more and more functionality was added, maintainability became more and more difficult. This led to risk aversion, and inflexibility kept piling up.

No different were the issues of monitoring and performance.

No matter what you did, with this architecture in place, nothing was going to bring the dynamism and agility the business community was looking for.

These monolithic applications, or as many people have started calling them today, the "Monolithic Hell", were too outdated to meet the requirements of present-day business.

The solution the Netflix engineers arrived at was to get rid of the monolith and build a number of smaller applications, each delivering one functional unit. The architectural idea was to follow a RESTful web-service component approach (primarily).
The "one functional unit" idea brought a lot of possibilities.

These small applications have very clear functional boundaries, and they can be independently upgraded and deployed.
They can have separate development and testing life cycles, which makes CI/CD more manageable. Maintainability improves, and so do monitoring and performance.

By virtue of being smaller, they can be independently scaled, and time becomes a dimension in scalability decisions. So the microservice for "Upgrade Plan" can carry more scaling weight than "Update Customer Information", and a login service can have high availability during the early hours, when staff come into the office.

And the best part is that these applications have cloud compatibility built in. A microservice carries all the configuration required to build, configure and run it, to the extent that microservices often carry the configuration for the web server, and many times the whole web server itself, along with the platform.
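A minimal sketch of "one functional unit" carrying its own web server, using only Python's standard library. The service name, endpoint and response shape are hypothetical; the point is that nothing external needs to be installed, so the unit can be deployed, scaled and upgraded on its own.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class LoginServiceHandler(BaseHTTPRequestHandler):
    """One functional unit: a tiny login service with a health endpoint."""
    def do_GET(self):
        if self.path == "/health":  # used by the platform to monitor the service
            body = json.dumps({"service": "login", "status": "up"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # silence per-request logging for the demo
        pass

# The service carries its own web server: port 0 asks the OS for any free port.
server = ThreadingHTTPServer(("127.0.0.1", 0), LoginServiceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/health") as resp:
    print(resp.status, resp.read().decode())
server.shutdown()
```

In a real deployment each such unit is packaged with its configuration and run as many times as its own traffic demands, independently of every other unit.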

But keep in mind that the application landscape is going to change drastically. Instead of looking like a bus, it is going to look like a bunch of cars, each running independently and managed independently, but expected to work in coordination in many cases.

[Diagram: monolith-to-microservices transformation]

While deployment was a no-brainer in the monolith model, it can be hell in the microservices world, where your one application is split into, say, 300 smaller applications.

While deploying one application with little downtime was easy in the monolith world, deploying 300 applications independently, and at times in parallel, can become a very, very complicated task.

Continuous Integration and Continuous Deployment (CI/CD) is not an option anymore. It's a mandate. And this is how Fowler's proposal became part of the software delivery lifecycle. 🙂

If you think you can do microservices without automating your testing, that is not going to work. Have your testing strategy in place.

Be ready to handle chaos. Chaos Monkey is your friend. 🙂

Coming back to the point: this is the architecture that is capable of delivering the agility the business was looking for. It is cloud friendly, and it makes continuous delivery real.

So welcome to the era of microservices.

Note: If you found this article useful, please share it. I appreciate the feedback and encouragement. Also let me know if you want to hear about other topics of interest.

