Tutorial: Single Page Interface Web Sites
Several weeks ago, The Single Page Interface Manifesto was published, promoting a "new" world of web sites based on a single page that simulates pages when needed.
Now it's time for a tutorial on building this kind of SPI web site with the ItsNat framework.
The Single Page Interface (SPI) approach is not new; it has been used since the early days of the DOM. AJAX made SPI mainstream thanks to several web frameworks, both server-centric and client-centric, and today most web innovation is taking place in the SPI-centric style of development.
Beyond SPI web applications: SEO-compatible SPI web sites
Although SPI is very popular in pure web applications, it is almost unknown in web sites. Web sites are based on (many) pages because they have strong requirements such as search engine compatibility (SEO), bookmarking, Back/Forward buttons, and page-based services like visit counters.
The Single Page Interface Manifesto showed how we can provide these features in SPI applications. The trickiest problem is Search Engine Optimization (SEO): this requirement implies that our web site must be page-based for web crawlers and, at the same time, SPI for normal users (those with JavaScript enabled).
ItsNat is a server-centric framework ready to create this kind of SPI web site. The ItsNat approach is very simple: the server keeps a DOM tree of the page, and any change made to that tree with the Java W3C DOM API is automatically replicated in the client by ItsNat. Client events are transported to the server via AJAX and converted to Java W3C DOM Events when an event listener is registered in the server.
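In plain Java W3C DOM terms, a server-side change is just an ordinary mutation of the tree. The sketch below uses only JDK classes (the class name and the minimal page structure are invented for illustration, and no ItsNat-specific API is shown); under ItsNat, a mutation like this would be mirrored in the browser without hand-written JavaScript:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class ServerDomChange {
    public static void main(String[] args) throws Exception {
        // Server-side DOM tree of the page (a minimal stand-in for a real template).
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element body = doc.createElement("body");
        doc.appendChild(doc.createElement("html")).appendChild(body);

        // A normal W3C DOM mutation; with ItsNat this change would be
        // replicated automatically in the client page.
        Element msg = doc.createElement("p");
        msg.setTextContent("Hello from the server DOM");
        body.appendChild(msg);

        System.out.println(body.getElementsByTagName("p").item(0).getTextContent());
    }
}
```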
Templating is based on pure HTML files (view logic is coded with Java W3C DOM code); these templates can be pages or fragments. Fragment templates are pure HTML files of which only the content of <body> is used (optionally <head> as well), loaded on demand by developers and converted to DOM. User code can, at any time, insert new markup loaded from fragment templates into the DOM tree of the single page using the DOM API. ItsNat automatically inserts this new markup in the client using innerHTML when possible. When the innerHTML property is set, the browser processes the markup with its built-in native parser, so performance is always better than fully parsing a web page at load time (including a full page change via innerHTML): parsing and building a complete DOM Document object is always more time-consuming than parsing a fragment.
Avoiding repetition in templates
Almost every web site repeats the same pattern: headers and footers are shared between all pages with small modifications, and in the simplest case there is one zone or content area that changes on a per-page basis. In page-based development this means all templates repeat headers and footers, usually with ugly parameterized include directives.
In ItsNat and SPI, the header and footer are designed once in the single page template of the web site, and each former page becomes a page fragment containing only the markup to be inserted into the content area of the single page. Headers and footers can be changed accordingly using DOM APIs, or with other small page fragments when needed.
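ItsNat's own template-loading API is not shown here; as a rough sketch using only JDK classes (the class name, the helper methods, and the XHTML strings are invented for illustration), the "load a fragment and insert its <body> content into the content area of the single page" step looks like this:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class FragmentInsert {

    static Document parse(String xml) throws Exception {
        DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        return db.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
    }

    // Simple id lookup; a real HTML DOM would offer getElementById directly.
    static Element findById(Document doc, String id) {
        NodeList divs = doc.getElementsByTagName("div");
        for (int i = 0; i < divs.getLength(); i++) {
            Element e = (Element) divs.item(i);
            if (id.equals(e.getAttribute("id"))) return e;
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        // The single-page template: shared header/footer plus an empty content area.
        Document page = parse("<html><body><div id='header'>Header</div>"
                + "<div id='content'/><div id='footer'>Footer</div></body></html>");
        // A fragment template: only the children of its <body> are used.
        Document fragment = parse(
                "<html><body><h1>Products</h1><p>Catalog markup</p></body></html>");

        Element content = findById(page, "content");
        Node fragBody = fragment.getElementsByTagName("body").item(0);
        for (Node child = fragBody.getFirstChild(); child != null;
                child = child.getNextSibling()) {
            // importNode copies each fragment node into the page document.
            content.appendChild(page.importNode(child, true));
        }
        System.out.println(content.getChildNodes().getLength());
    }
}
```

With ItsNat, an insertion like this into the server DOM tree would also be pushed to the client, using innerHTML when possible.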
How can our web site be page based and SPI at the same time?
So far we have seen how ItsNat works in SPI and how to convert web sites to SPI, but nothing about the promise of SEO: web crawlers ignore all JavaScript code, including the JavaScript code sent by ItsNat to automatically synchronize the client page.
The obvious solution is to develop two web sites, one for web crawlers (page based) and one (SPI) for normal users with JavaScript enabled. This is not needed with ItsNat.
ItsNat has a key feature named fast-load mode.
When fast-load mode is disabled, the initial page sent to the client as markup is just the page template, and the DOM operations performed at load time in the server by developer code are sent as JavaScript. This mode is not SEO friendly: the page template, now the structural skeleton of our web site, does not contain the required content markup. That content markup is inserted from page fragments, depending on the initial state (previously a page) to be loaded, by calling DOM APIs, and is therefore sent as JavaScript code that web crawlers ignore.
In fast-load mode (the default), the DOM tree is serialized to markup after developer code is executed. Because developer code has already inserted the content markup of the required initial state into the DOM tree, this markup is sent to the client as plain markup, and web crawlers "see" it.
In summary, the same code that changes the DOM tree at any time is the code that builds the initial page.
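The fast-load idea can be sketched with JDK classes alone (the class name and page structure are invented, and a plain identity Transformer stands in for ItsNat's real serializer): developer code fills the content area first, then the whole tree is serialized, so the initial HTTP response already contains the content markup that crawlers index.

```java
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class FastLoadSketch {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element html = (Element) doc.appendChild(doc.createElement("html"));
        Element body = (Element) html.appendChild(doc.createElement("body"));
        Element content = doc.createElement("div");
        content.setAttribute("id", "content");
        body.appendChild(content);

        // "Developer code" runs at load time and fills the content area...
        Element h1 = doc.createElement("h1");
        h1.setTextContent("Initial state");
        content.appendChild(h1);

        // ...then the whole tree is serialized, so the initial response
        // already contains the content markup instead of JavaScript.
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.METHOD, "xml");
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        StringWriter out = new StringWriter();
        t.transform(new DOMSource(doc), new StreamResult(out));
        System.out.println(out);
    }
}
```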
The tutorial shows these ideas with a concrete example including source code.
The problems of bookmarking (dual links), Back/Forward button detection and simulation (URL change detection with timers), and visit counters (with iframes) are also covered.
Links:
Full Tutorial
Source code and binaries
Running online