With Now 2.0, it's easier, faster, and more scalable than ever to write applications that take advantage of serverless functions, with minimal or no code changes.
One of the most fascinating use-cases for such functions is server-side rendering (SSR). In other words, when visitors or search engines go to your web app, they can receive everything they need in just one network hop.
In most front-end circles, a User Interface is described as a function of the state of an application: UI = F(S) is a relatively common equation among UI engineers. With SSR, UI essentially becomes a cloud function of the app's data.
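That equation reads naturally in code. Below is a minimal sketch of the idea (the state shape and the function F are our own illustration, not part of the demo):

```javascript
// UI = F(S): rendering is a pure function from application state to markup.
// The same state always yields the same HTML, whether it runs in the
// browser or inside a cloud function.
const F = state => `<h1>${state.title}</h1>`;

console.log(F({ title: 'Hello, world!' })); // <h1>Hello, world!</h1>
```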

Advantages of Server-Side Rendering

  • No spinners when the page is first loaded. The data users want comes pre-rendered as HTML.
  • Search engines can index your content faster and more effectively. This talk from Google's I/O Conference 2018 sheds more light on how web applications are indexed and crawled. Without server rendering, client-rendered websites can take up to a week to be indexed by search engines.
  • A page can be fully or partially rendered if scripts fail to load on the client-side.
  • SSR can be applied to mobile apps as a way of rolling updates faster than with store reviews.

The Example: A Reddit Client

Now 2.0 emphasizes support for multiple entrypoints into your deployment, also known as Lambdas (λ). In other words, if the user visits /ssr/special-route, we have the opportunity to only execute the code relevant to that route. This makes deployment faster and first-load times instant.
For this example, we'll combine static-asset hosting with SSR to show the two approaches to serving content side-by-side:
  • /style.css
  • /ssr/:framework/
  • /img/:subreddit.png
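Routes like these could be declared in a now.json along the following lines. This is a hedged sketch: the source paths and Lambda file names are illustrative, not the project's actual layout.

```json
{
  "version": 2,
  "builds": [
    { "src": "style.css", "use": "@now/static" },
    { "src": "ssr/*.js", "use": "@now/node" },
    { "src": "img/*.js", "use": "@now/node" }
  ],
  "routes": [
    { "src": "/ssr/(?<framework>[^/]+)/?", "dest": "/ssr/$framework.js" },
    { "src": "/img/(?<subreddit>[^/]+)\\.png", "dest": "/img/$subreddit.js" }
  ]
}
```

Each build entry produces either static assets or a discrete Lambda, so a request to one route only boots the code relevant to that route.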
The demo is a Reddit client, built with 5 popular front-end solutions: React, Preact, Vue, lit-html, and vhtml.

A Collection of Server-Rendered UIs.

It is important to note that lit-html does not have first-class support for Server-Side Rendering at this time, so we have used a reimplementation of lit-html for Node.
Each version of the client gets its data from a local file in order to remove variance when benchmarking. Adding /?fetch=NAME_OF_SUBREDDIT to the URL will instead cause the client to fetch real data from a subreddit of your choosing, using the Reddit API.
Each project uses node-fetch internally as a way of interacting with the Reddit API.
For each example, we'll measure:
  • Cold bootup cost: we will instrument the cost of require()-ing the frameworks involved and the overall time taken for a Lambda to boot up.
  • Function Package Size: how big the resulting JS bundle is for each Lambda.
  • Throughput: we will measure the number of requests per second that each Lambda can serve for each of the front-end solutions, giving us a picture of overall network throughput.

The Plan

In order to create the Reddit client 5 ways, it is important to maintain as much consistency and similarity across implementations as possible. React, Preact, and vhtml each support similar hyperscript-style APIs, allowing easier code reuse between them. Consider:
// If React:
const h = React.createElement

// If not React, one of:
const h = require('vhtml')
// or: const { h } = require('preact')

// This tree stays the same irrespective of framework.
h('div', {}, 'Hello, world!')
Vue and lit-html take relatively different approaches: Vue works with a Vue instance, while lit-html uses tagged template literals. For these reasons, these variants were written separately, while paying close attention to preserving identical behavior and functionality.
// Vue
renderToString(new Vue({
    data: () => ({ mockData }),
    template: `
        <li v-for="thing in mockData">{{ thing.name }}</li>
    `
}))

// lit-html
html`
    ${mockData.map(thing => `<li>${thing.name}</li>\n`)}
`

Cold Start

A cold start occurs when a Lambda is invoked (read: a URL is accessed) after remaining idle for a while. This process can take some time depending on the requirements of the Lambda. We wrapped the require() calls to each framework with console.time and console.timeEnd as a way of documenting the given framework's boot-up time when a given Lambda is cold. This made the time taken transparent in the logs. The average of these benchmarks was taken into consideration.
Finally, we used hey to send load to web servers running our application in order to measure their throughput in terms of requests per second.


Our experiment has yielded the following results: vhtml appears to be the lightest and fastest tool for server-rendering User Interfaces. vhtml does not ship with a Virtual DOM implementation (unlike Preact, React, Vue, and most others), but instead serializes directly to HTML strings. Because of this functionality, it is significantly smaller than other solutions, while providing a familiar developer experience.
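To illustrate why skipping the Virtual DOM keeps things small, here is a toy, vhtml-style h that serializes straight to an HTML string. This is an illustration of the idea only, not vhtml's actual implementation (which also handles attribute escaping, void elements, and more):

```javascript
// A hyperscript-style h() that returns markup directly, with no virtual-DOM
// tree in between — the approach that makes vhtml so light on the server.
function h(tag, attrs, ...children) {
  const attrString = Object.entries(attrs || {})
    .map(([key, value]) => ` ${key}="${value}"`)
    .join('');
  return `<${tag}${attrString}>${children.join('')}</${tag}>`;
}

// The same hyperscript call tree now yields HTML immediately:
const html = h('ul', { class: 'posts' },
  h('li', null, 'Hello'),
  h('li', null, 'world')
);
console.log(html); // <ul class="posts"><li>Hello</li><li>world</li></ul>
```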
When used in conjunction with Preact (and Preact's Virtual DOM) on the client-side, we expect smaller bundles and faster performance for web applications.

Benchmark Results of Server-Rendered UIs, comparing Cold Start (avg. ms), Lambda Size (kB), and Throughput (req/s) for each framework.


With Now 2.0's serverless execution model, you pay for what you use. Web applications built using lighter frameworks lead to significantly lower costs, while providing your users (and search bots) with all of the benefits of Server Side Rendering.
If you'd like to practice and experience the power of Lambdas and Now 2.0 for yourself, we would welcome pull requests that implement Lambdas with other UI frameworks not included in this project.
We look forward to building a serverless, secure, and stable web together.