To illustrate how this system automatically scales with the parameters you define, within the boundaries you define, here is an example stress test using wrk, a load-testing tool:
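A typical wrk invocation looks like the following. The thread count, connection count, duration, and URL here are hypothetical placeholders; substitute your own deployment URL and tune the numbers to the boundaries you have configured.

```shell
# Stress test: 12 threads, 400 open connections, sustained for 30 seconds.
# The URL is a placeholder for your own deployment.
wrk -t12 -c400 -d30s https://my-deployment.example.com/
```

Under load like this, the system spins up additional instances to absorb the traffic, then scales back down once the test ends.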
This is, in our opinion, the most important defining characteristic of Serverless Deployments. However, it's not the only one, as we will see next.
We selected these demos in particular to underline a very important point: we think Serverless can be a very general computing model, one that requires no new protocols or APIs and can support every programming language and framework without large rewrites.
Here are three of the underlying ideas behind this new architecture.
Serverless enables engineers to focus on code rather than managing servers, VMs, registries, clusters, load balancers, availability zones, and so on.
This, in turn, allows you to define your engineering workflow solely around source control and its associated tools (like pull requests). Our recent GitHub integration, therefore, makes it possible to deploy a Docker container in the cloud solely by creating a Dockerfile.
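As an illustration of how little is needed, a deployment could be driven by a Dockerfile as simple as the following. This is a minimal hypothetical sketch; the base image, port, and start command are placeholders for whatever your application actually uses.

```dockerfile
# Hypothetical minimal Dockerfile for a Node.js service.
FROM node:alpine
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 3000
CMD ["npm", "start"]
```

Once a file like this lands in source control, the pull-request workflow itself becomes the deployment workflow.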
It is not enough to merely ignore the infrastructure, or forget that it is there. The execution model must make manual intervention, inspection, replication, and monitoring- or alert-based uptime assurance completely unnecessary, which takes us to our next two points.
A very common category of software failure occurs when programs get into states that the developers didn't anticipate, usually after many cycles of operation.
In other words, programs can fail unexpectedly by accumulating state over a long lifespan. Perhaps the most common example is a memory leak: the unanticipated growth of irreclaimable memory that ultimately culminates in a faulty application.
Serverless means never having to "try turning it off and back on again"
Serverless models completely remove this category of issues, ensuring that no request goes unserviced during the recycling, upgrading or scaling of an application, even when it encounters runtime errors.
Your deployment instances are constantly being recycled and rotated. Because of the request-driven nature of execution scheduling, combined with limits such as a maximum execution length, you avoid many common operational errors entirely.