Progressive Progressive Web Apps, Part 1


Progressive web apps are pretty cool. But is it possible to build a truly progressive web app using nothing but JavaScript? Read on to get a look at the experiment.


I like Progressive Web Apps. I like the model they offer for building good, solid, reliable websites and apps. I like the principal platform API - service worker - that enables the PWA model to work.

One of the traps that we have fallen into is "App Shell." The App Shell model says that your site should present a complete shell of your application (so that you can experience something even when you are offline) and you then control how and when to pull in content.

The App Shell

The App Shell model is roughly analogous to an "SPA" (Single Page App) - you load the shell, then every subsequent navigation is handled by JS directly in your page. It works well in many cases.

I don't believe that App Shell is the only model, nor the best one, and as always your choice varies from situation to situation. My own blog, for example, uses a simple "Stale-Whilst-Revalidate" pattern, where every page is cached as you navigate around the site and updates are displayed on a later refresh. In this post, I would like to explore a model that I have recently experimented with.
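The stale-whilst-revalidate idea can be sketched as a plain function, with the cache and network passed in so the logic is easy to see in isolation. In a real service worker the cache would be the Cache Storage API and `fetchFn` would be `fetch`; the names here are illustrative, not the blog's actual code.

```javascript
// Stale-whilst-revalidate as a standalone function: serve the cached copy
// immediately if one exists, while refreshing the cache in the background
// so a later visit gets the newer content.
async function staleWhileRevalidate(request, cache, fetchFn) {
  const cached = await cache.match(request);
  // Always kick off the network request; update the cache when it lands.
  const network = fetchFn(request).then((response) => {
    cache.put(request, response.clone());
    return response;
  });
  // Fall back to the network only when nothing is cached yet.
  return cached || network;
}
```

The design choice worth noting is that the network fetch is started unconditionally, so the cache stays fresh even when the stale copy is what gets served.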

To App Shell or Not to App Shell

In the classic App Shell model, it is nearly impossible to support a progressive render. I wanted to achieve a truly "Progressive" model for building a site with service workers, one that held the following properties:

  • It works without JS.
  • It works when there is no support for a Service Worker.
  • It is fast.

I set out to demonstrate this by creating a project that I've always wanted to build: a River of News + TweetDeck hybrid. For a given collection of RSS feeds, render each feed in its own column.

Feed Deck - please ignore the styling

The "Feed Deck" is a good reference experience for experimenting with Service Workers and progressive enhancement. It has a server-rendered component, it needs a "shell" to show something to the user quickly, and it has dynamically generated content that needs to be updated regularly. Finally, because it is a personal project, I don't need too much server infrastructure for saving user configuration and authentication.

I achieved most of this and I have learned a lot during the process. Some things still require JS, but the application, in theory, functions without it; I long for Node.js to have more in common with DOM APIs; I built it entirely on Chrome OS with Glitch, but that final piece is a story for another day.

I set some definitions of what "Works" means early on in the project.

  • "It works without JS" - content loads on the screen and there is a clear path to everything working without JS in the future (or there is a clear justification for why it was not enabled). I can't just say "nah."
  • "It works when there is no support for a Service Worker" - everything should load, function, and be blazingly fast but I am happy if it doesn't work offline everywhere.

But that wasn't the only story. If we had JS and support for a service worker, I had a mandate to ensure:

  • It loaded instantly.
  • It was reliable and had predictable performance characteristics.
  • It worked fully offline.

Mea culpa: if you look at the code and run it in an older browser, there is a strong chance it won't work, because I chose to use ES6. This is not an insurmountable hurdle, however.

If we were to focus on building an experience that functioned without JavaScript enabled, then it holds that we should render as much as possible on the server.

Finally, I had a secondary goal: I wanted to explore how feasible it was to share logic between your service worker and your server.... I tell a lie, this was the thing that excited me the most, and a lot of the benefits of the progressive story fell out of the back of this.

What Came First, the Server or the Service Worker?

It was both at the same time. I had to render from the server, but because the service worker sits between the browser and the network, I had to think about the interplay between the two.

I was in a lucky position in that I didn't have a lot of unique server logic so I could tackle the problem holistically and both at the same time. The principles that I followed were to think about what I wanted to achieve with the first render of the page (the experience that every user would get) and subsequent renders of the page (the experience that engaged users would get) both with and without a service worker.

First render - there would be no service worker available, so I needed to ensure that the first render contained as much of the page content as possible and have it generated on the server.

If the user has a browser that supports the service worker, then I can do a couple of interesting things. The template logic is already created on the server, and since there is nothing special about the templates, they can be the exact same templates that I would use directly on the client. The service worker can fetch the templates at install time and store them for later use.
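The install-time fetch-and-store step can be sketched as follows. This is a minimal sketch, not the project's actual code: the template URLs, function names, and cache shape are assumptions. In a real worker, `precacheTemplates` would be called from an `install` handler via `event.waitUntil(...)` against a `caches.open(...)` cache, with `fetch` as the fetch function.

```javascript
// Illustrative list of the template files the server also renders with.
const TEMPLATE_URLS = [
  '/templates/head.html',
  '/templates/body.html',
  '/templates/item.html',
  '/templates/foot.html',
];

// Fetch every template in parallel and store each response in the cache,
// so the service worker can render with the same templates as the server.
async function precacheTemplates(cache, urls, fetchFn) {
  const responses = await Promise.all(urls.map((url) => fetchFn(url)));
  await Promise.all(responses.map((res, i) => cache.put(urls[i], res)));
  return urls.length;
}
```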

Feed Deck - First load

Second render without service worker - It should act exactly like the first render. We might benefit from normal HTTP caching, but the theory is the same: render the experience quickly.

Second render with service worker - It should act exactly like the first server render, but all inside the service worker. I don't have the traditional shell. If you look at the network, all you see is the fully stitched-together HTML: structure and content.

"The Render" - Streaming Is Our Friend

I was trying to be as progressive as possible, which means that I needed to render as much as possible on the server, quickly. I had a challenge: if I merged in the data from all the RSS feeds, the first render would be blocked by network requests to those feeds, slowing it down.

I chose the following path:

  • Render the head of the page - it's relatively static and getting this to the screen quickly aids perceived performance.
  • Render the structure of the page based on the configuration (the columns) - for a given user this is currently static and making it visible quickly is important for users.
  • Render the column data if we have the content cached and available; we can do this on both the server and in the service worker.
  • Render the footer of the page that contains the logic to dynamically update the contents of the page periodically.

With these constraints in mind, everything needs to be asynchronous and I need to get everything out on the network as quickly as possible.

There is a real dearth of streaming templating libraries on the web. I used streaming-dot by my good friend and colleague Surma, which is a port of the templating framework doT, but with added generators so that it can write to a Node or DOM stream without blocking on the entire content being available.
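To make the generator idea concrete, here is a toy illustration of what a streaming template boils down to. This is not streaming-dot's actual API - it is a hand-rolled sketch: instead of producing one big string, the render step is an async generator that yields chunks as soon as their data resolves, so earlier chunks can hit the wire immediately.

```javascript
// A "template" as an async generator: static chunks are yielded right away,
// async chunks are yielded as soon as their data lands.
async function* renderPage(parts) {
  yield '<header>Feed Deck</header>';  // static: can be flushed immediately
  yield await parts.body;              // async: flushed when the data resolves
  yield '<footer></footer>';           // static: flushed last
}

// Drain the generator in order. In the real app each chunk would be written
// to a Node or DOM stream; here we just collect the output.
async function drain(gen) {
  let html = '';
  for await (const chunk of gen) html += chunk;
  return html;
}
```

The point of the generator is that the consumer controls the pacing: the header can be on the network before the feed data has even been fetched.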

Rendering the column data (i.e., the items in a feed) is the most important piece, and at the moment this requires JavaScript on the client for the first load. The system is set up to be able to render everything on the server for the first load, but I chose not to block on the network.

If the data has already been fetched and it is available in the service worker then we can get this out to the user quickly, even if it can quickly become stale.

The code to render the content, whilst being async, is relatively procedural and follows the model described earlier: we render the header to the stream when the template is ready, then render the body contents to the stream, which, in turn, may be waiting on content that, when available, will also be flushed to the stream. Finally, when everything is ready, we add in the footer and flush that to the response stream.

Below is the code that I use on the server and the service worker.

const root = (dataPath, assetPath) => {

  let columnData = loadData(`${dataPath}columns.json`).then(r => r.json());

  let headTemplate = getCompiledTemplate(`${assetPath}templates/head.html`);
  let bodyTemplate = getCompiledTemplate(`${assetPath}templates/body.html`);
  let itemTemplate = getCompiledTemplate(`${assetPath}templates/item.html`);

  let jsonFeedData = fetchCachedFeedData(columnData, itemTemplate);

  /*
   * Render the head from the cache or network.
   * Render the body.
   *   - The body template brings in the config to work out what to render.
   *   - If we have data cached, bring that in.
   * Render the footer - contains the JS to data-bind the client request.
   */

  const headStream = headTemplate.then(render => render({ columns: columnData }));
  const bodyStream = jsonFeedData.then(columns => bodyTemplate.then(render => render({ columns: columns })));
  const footStream = loadTemplate(`${assetPath}templates/foot.html`);

  let concatStream = new ConcatStream();

  headStream.then(stream => stream.pipeTo(concatStream.writable, { preventClose: true }))
            .then(() => bodyStream)
            .then(stream => stream.pipeTo(concatStream.writable, { preventClose: true }))
            .then(() => footStream)
            .then(stream => stream.pipeTo(concatStream.writable));

  return Promise.resolve(new Response(concatStream.readable, { status: 200 }));
};

With this model in place, it was actually relatively simple to get the above code and process working on the server and in the service worker.
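The `ConcatStream` used above is not spelled out in the article; one way to build it with the standard Streams API is as an identity `TransformStream`, which is a sketch under that assumption rather than the project's actual implementation. Each `pipeTo()` with `preventClose: true` writes one source into the writable side and releases its lock, the final `pipeTo()` closes it, and the readable side feeds the `Response`.

```javascript
// A minimal ConcatStream: an identity TransformStream. Whatever is written
// to `writable` comes out of `readable` unchanged, so successive pipeTo()
// calls (all but the last using preventClose) concatenate their sources.
class ConcatStream {
  constructor() {
    const { readable, writable } = new TransformStream();
    this.readable = readable;
    this.writable = writable;
  }
}
```

Note that the readable side must be consumed while the pipes are running; otherwise backpressure from the transform stream will stall the writers.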

Tune back in tomorrow to go over the rest of the code and see the thrilling conclusion! 


Published at DZone with permission of Paul Kinlan, DZone MVB. See the original article here.

