As mentioned in my previous post, A path to microservices, adopting a microservices architecture is not simple. It requires many prerequisites to be managed successfully. With multiple services you quickly realize how many resources they use. Even the smallest service has a run-time footprint and consumes CPU cycles, even when sitting idle. Multiply this by the number of services and you quickly get the picture. This post explores how this can be improved, and whether it is possible to go beyond microservices to a serverless architecture.
What’s beyond microservices?
In a microservices architecture, a lot of the code written for a single microservice can be reused, or at least abstracted away to make it easier for developers to spin up their own microservice functionality in a consistent way. That pretty much means abstracting away all of the “plumbing” code.
If you take away the hosting of your controllers, the authentication, logging, and tracing middleware, as well as the dependency injection and pipeline construction, and you are writing your code correctly, only the actual business function itself should remain. All of the other code to authenticate, validate, log, monitor and resolve is duplicated in every microservices application. If you move all of this common functionality into a hosting microservice, one capable of accepting your request-handling function and executing it when a request comes in, then once you sort out automated deployment and scaling for that single service, you have a pseudo-ecosystem for running business functions.
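To make the idea concrete, here is a minimal sketch of such a hosting service in Python. Everything here (the `FunctionHost` class and its `register`/`invoke` methods) is an illustrative assumption, not a real framework API. The point is that logging, lookup and error handling live in the host exactly once, while each registered handler contains only business logic.

```python
import logging
from typing import Callable, Dict

logging.basicConfig(level=logging.INFO)


class FunctionHost:
    """A toy 'hosting microservice': it owns the plumbing and runs
    registered business functions on demand."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[dict], dict]] = {}
        self._log = logging.getLogger("function-host")

    def register(self, name: str, handler: Callable[[dict], dict]) -> None:
        # A developer pushes only the business function into the ecosystem.
        self._handlers[name] = handler

    def invoke(self, name: str, request: dict) -> dict:
        # Shared plumbing (routing, logging, error handling) lives here
        # once, instead of being duplicated in every microservice.
        if name not in self._handlers:
            return {"status": 404, "error": f"no handler '{name}'"}
        self._log.info("invoking %s", name)
        try:
            return {"status": 200, "body": self._handlers[name](request)}
        except Exception as exc:  # uniform error handling for all functions
            self._log.exception("handler %s failed", name)
            return {"status": 500, "error": str(exc)}


host = FunctionHost()
host.register("greet", lambda req: {"message": f"Hello, {req['name']}!"})
response = host.invoke("greet", {"name": "Ada"})
```

In a real ecosystem `invoke` would sit behind an HTTP endpoint or a queue consumer, and authentication and tracing would wrap it the same way the error handling does here.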
You can then take out that ecosystem and make it a backbone for your microservices. Developers can then create business functions and push them into the ecosystem.
You do not need to provision machines, create hosting configs, or handle all the other “plumbing” required for a typical standalone service. This is where the term “serverless”, or “Function as a Service” (FaaS), appears. Initially, it appears as though you don’t need servers anymore; the only requirement becomes the ecosystem that is capable of executing your business functions. However, this does not mean that there is no actual server behind the scenes. It means that you don’t need to worry about managing the server. This is done automatically.
If I were to generalize a microservices architecture, it would appear something like this:
In a “serverless” world, it would look something like the following:
As you can see from the second diagram, not much has changed. The handler functions end up either in separate services or in the same host, and all of the other infrastructure services are still there (or at least they should be). The main difference is that by generalizing how the handler functions end up in a host service, developers are “freed” from setting up the build and deployment process.
This is neither new nor a silver bullet. Services already exist that host your functions for you (such as AWS Lambda, Iron.io, Google Cloud Functions, etc.). However, this comes at a cost.
Can you create a “function” host?
It would be really nice if you could write simple functions (with no dependencies) and roll them out at a rapid pace. In the real world, however, you get various kinds of dependencies. You need to store data, perform transactions, and have some functions called by a scheduler, among other complicated cases. This means that if you try to cram everything into your handler function, it will bloat and become really ugly, with limited monitoring, error handling, etc.
An alternative is to create convenience “methods” to put/get data in storage, use a distributed state/cache, start asynchronous processing (for example via messaging queues), add a scheduler, and provide other general functionality. Usually you would already have these in your microservice or monolith world; it’s just a task of wrapping them up nicely and exposing that functionality to your handler functions (via injection, context or any other means). The more “general” functions your ecosystem has, the more problems it can solve in a consistent way. General functions also keep your handler functions simpler and allow for better monitoring, as it is already built into the shared functionality.
However, having everything prepared and ready out of the box puts your developers on a single track. If you need to take a different route, it will become complicated.
What does serverless give you?
There are a few things that you can spot right away:
- Deployment simplicity – If the transfer of a function to a host is done properly (it could be as simple as binding to a handler-function repository), it may take away deployment pains entirely. It could even take away your separate Dev, QA, and Prod environments, as handler functions are just content in the ecosystem; moving a handler to production would be the same as publishing a page in a CMS. The same simplicity allows you to quickly spin up newly required functionality.
- Scalability – Serverless becomes extremely efficient when scaling. As every function host is the same, and any host can execute any function, the load is distributed evenly across all function hosts. If you see performance degradation, you can simply spin up a new function host. In these “cloud” times this translates directly into what you pay for your infrastructure: when you utilize your resources efficiently, you reduce your infrastructure costs.
- Efficiency – Resource consumption is measured by the time your functions take to execute, rather than by the resources a dedicated microservice reserves. This means significantly less “idle time” in your infrastructure, as various functions can run on the same host.
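As a back-of-envelope illustration of the efficiency point, compare paying for an always-on instance with paying only for execution time. All prices and volumes below are invented for the example; they are not real vendor rates.

```python
# Hypothetical numbers to illustrate the billing-model difference:
# a dedicated microservice is billed for every hour it runs, while a
# function is billed only for the seconds its invocations execute.

HOURS_PER_MONTH = 730

# Always-on microservice: one small instance at a made-up $0.02/hour.
service_cost = HOURS_PER_MONTH * 0.02

# Same workload as functions: 100,000 invocations of 200 ms each,
# billed at a made-up $0.00002 per second of execution.
function_seconds = 100_000 * 0.2
function_cost = function_seconds * 0.00002

print(f"always-on service: ${service_cost:.2f}/month")
print(f"functions:         ${function_cost:.2f}/month")
```

The gap narrows (or reverses) as traffic grows, which is why the model favors spiky or low-volume workloads.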
What does serverless take away?
- Complexity – Serverless brings much greater complexity into play. You need a proper ecosystem in place; spinning up your own is a large effort, and you will have to maintain it for as long as it lives. It also adds coding complexity, as all the functions are distributed.
- Vendor lock-in – If you choose a vendor, such as AWS or Iron.io, you are locking yourself in and will have to play by the vendor’s rules. If the service closes down, you may need to migrate everything.
- Flexibility – Your developers will have to follow the rules. If they need an unconventional solution (unsupported by the ecosystem), they will have to either work around it or abandon serverless and create a standard microservice.
- Tooling – Developers will also lose a portion of their tooling. It is difficult to debug deployed functions. If you are building your own ecosystem, think about tooling up front (ecosystem emulators, tracing for live functions, metrics, and anything that lets developers track down issues quickly).
Where does serverless fit?
The answer, however abstract, is that it depends; this model does not fit everywhere. The questions to answer are similar to those for any “build or buy?” decision about a product or service:
- How much will it cost (immediately and annually)?
- How much effort will be required to maintain ALL functions?
- How flexible is it?
- Do you get vendor lock-in?
- Can your organization handle the risk if the service is shut down for good?
A good example of where serverless is a good fit is the Internet of Things (IoT). IoT requires a large amount of functionality made up of relatively simple functions (such as device updates), and device counts can range from just a few to millions. For a manufacturer, the software on a device is just part of the product, yet they need the capability to create and deploy functionality quickly and, at the same time, the ability to scale easily. A traditional approach may be too slow and cumbersome for those manufacturers, so an architecture that helps streamline delivery would be a huge competitive advantage.