Serverless architecture is here, it’s been proven, and companies from start-ups to Fortune 500 are jumping on this rapidly evolving bandwagon. If you aren’t familiar with the concept, read this first and then come back so I can convince you why you should want this.
1 – Avoid Under-Utilization
This one should be obvious, but it is the primary reason serverless is my preferred design approach, so I’ll say it anyway. In a world in which you pay per use, if you don’t use, you don’t pay. No commitment and zero upfront costs. Imagine you just launched an amazing app, and no one showed up. That would be terrible, yes, but imagine if you were also paying to keep dozens of servers idle while waiting for everyone to show up. You are clearly worse off in the second scenario.
Still confused? Serverless means you are paying per request, per unit of data stored, and per second of compute time. The more you use, the more you pay. Cloud service providers are more than happy to charge us via a metered business model. Fortunately, this exposes another benefit of serverless: cost transparency. With the right instrumentation and analytics, you could attach costs to a business unit, team, product line, or even a single end-user. This kind of transparency would be incredibly challenging, if not impossible, in traditional server-based architectures.
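To make the cost-transparency idea concrete, here is a minimal sketch of attributing metered costs to teams. The usage records, team names, and per-unit prices are invented for illustration; a real setup would pull this data from your provider's billing export and resource tags.

```python
from collections import defaultdict

# Illustrative per-unit prices (hypothetical, not real billing rates)
PRICE_PER_REQUEST = 0.0000002      # $ per request
PRICE_PER_GB_SECOND = 0.00001667   # $ per GB-second of compute

# Hypothetical metered usage, as it might come from a billing export
usage_records = [
    {"team": "checkout", "requests": 1_200_000, "gb_seconds": 30_000},
    {"team": "search",   "requests": 4_500_000, "gb_seconds": 90_000},
    {"team": "checkout", "requests":   800_000, "gb_seconds": 20_000},
]

# Roll metered usage up into a dollar figure per team
costs = defaultdict(float)
for rec in usage_records:
    costs[rec["team"]] += (rec["requests"] * PRICE_PER_REQUEST
                           + rec["gb_seconds"] * PRICE_PER_GB_SECOND)

for team, cost in sorted(costs.items()):
    print(f"{team}: ${cost:.2f}")
```

Because every request and GB-second is already metered, the attribution is just arithmetic; the hard part in a server-based architecture (deciding what fraction of a shared box each team "used") never comes up.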
2 – Focus on Delivering Business Value
Serverless means no hardware to set up, no servers to provision, and no OS or database to install. Write a function, deploy the function, and execute the function. Better yet, you don’t have to reinvent the wheel or roll out DIY functions. There are services on the market for various tasks: authentication, SMS, marketing and transactional emails, user analytics, customer payments, AI, and so many more. This kind of rapid development means you can quickly prototype an idea and get it into the hands of your users sooner. This reduces the time it takes to realize the business impact (good and bad) and allows you to experiment with more ideas at a much faster pace.
3 – Skip Hardware & OS Overhead
This one is right in the name. No servers in production means no servers to manage. AWS (Amazon), Azure (Microsoft), and Google are more than happy to take our money in exchange for this abstraction. Personally, I am more than happy to check some boxes and hand over my money. Security & monitoring? Taken care of. Patch Tuesday? Not my problem. Broken server? I’m not thinking about it.
This is not to say there is nothing to think about, but your internal IT team and developers (if you have any) can focus on more interesting problems.
4 – Seamlessly Scale to Infinity*
Maybe not actual infinity, but you can scale to incredibly large loads and volumes using services like Lambda and DynamoDB (AWS). If your needs are beyond the default AWS Lambda limits (1,000 concurrent function executions), then I’d like to hear from you. Keep in mind that 1,000 concurrent executions could translate to 5,000 executions per second if each function takes approximately 200 ms. That is before implementing a cache layer or using multiple AWS Regions.
For fun, let’s run a function 5 times per second for an entire month (30 days), which works out to 12,960,000 Lambda executions. Assuming 200 ms of compute time at 128 MB of RAM, it would cost roughly $7.99. Breakdown: $2.59 for the requests, and $5.40 for the compute time.
You can compress the timeframe and still only pay $7.99. In other words, could your data center handle an additional 12.9 million requests across a single month? Maybe. What about an extra 12.9 million in a single day? In a single hour?
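If you want to check the math yourself, here is the back-of-the-envelope calculation. The rates assumed are AWS Lambda’s published on-demand prices at the time of writing ($0.20 per million requests, $0.00001667 per GB-second), and the free tier is ignored.

```python
SECONDS_PER_MONTH = 60 * 60 * 24 * 30   # 30-day month

executions = 5 * SECONDS_PER_MONTH      # 5 invocations/second, all month

# Request charges: $0.20 per million requests
request_cost = executions / 1_000_000 * 0.20

# Compute charges: 200 ms per invocation at 128 MB, billed per GB-second
gb_seconds = executions * 0.2 * (128 / 1024)
compute_cost = gb_seconds * 0.00001667

total = request_cost + compute_cost
print(f"{executions:,} executions -> ${total:.2f}/month")
# -> 12,960,000 executions -> $7.99/month
```

Note that the total is independent of how the load is spread out: 12.96 million invocations cost the same whether they arrive over a month, a day, or an hour, which is the point of the "compress the timeframe" comparison above.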
Economies of scale allow those of us who aren’t cloud providers to tap into immense power and flexibility on demand, at significantly lower cost and with little-to-no commitment.
5 – Events over Data
The last decade has been largely focused on data. Bringing it to the web. Pushing it to the cloud. Visualizing, analyzing, and dumping it into cubes. Part of this effort was justified because sometimes it takes a ton of data to answer certain questions. Having said that, it might make sense to think of the business problem as a series of events instead of the aggregation of data.
Imagine a pipe. Now imagine “stuff” flowing through this pipe: product purchases, invoices received, creation of new customers, support tickets, and application errors. Many different applications can be pouring events into this pipe, and many applications can be watching and responding to this stream of events. You may have heard of stream processing or log processing; this is approximately what they are referring to. This idea aligns quite well with philosophies like Agile Development and Domain-Driven Design. Create a bunch of producer (event-creating) applications. Later down the road, as requirements and needs become clear, create various consumers (event readers) to analyze or respond to the events.
Jumping back on topic, serverless architectures allow us to run with this idea incredibly quickly and with minimal commitment. AWS, for example, already has the infrastructure in place to: create streams via Kinesis, create functions via Lambda, and create rules to execute specific functions based on the events in the stream. This drastically reduces the amount of “glue” code we must write to get the ball rolling. Our functions become succinct poems and are focused on solving business problems.
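To give a flavor of how little glue code is involved, here is a sketch of a Lambda consumer for a Kinesis stream. The event shape follows AWS’s documented Kinesis-to-Lambda payload (base64-encoded record data under `Records[].kinesis.data`); the business logic itself, flagging large purchases, is invented for illustration.

```python
import base64
import json

def handler(event, context):
    """Hypothetical consumer: flag purchase events over $1,000."""
    flagged = []
    for record in event["Records"]:
        # Kinesis delivers each record's payload base64-encoded
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        if payload.get("type") == "purchase" and payload.get("amount", 0) > 1000:
            flagged.append(payload["order_id"])
    return {"flagged_orders": flagged}
```

That function is the entire consumer. There is no poller to run, no queue to stand up, and no server to keep alive; you point an event source mapping at the stream and AWS invokes the handler with batches of records. Other consumers can read the same stream independently, which is what makes the "add consumers later" approach cheap.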
For the record, you could implement an event-based architecture in a traditional server approach, but scaling and/or accessibility would likely become a problem. I am also willing to bet it would take longer to get started.
There are many other benefits to serverless architecture, but they all revolve around cost reduction, faster delivery, and rethinking how we solve problems.