Most widely-used pre-HTML5 web standards really did think of the web as 'stuff marked up'. The whole concept, and even the name, of XML shows this. But even the W3C/WHATWG switch from XHTML 2.0 to HTML5 suggests that markup, even awesome markup, doesn't represent where the web is going.
As a result, much of HTML5, and many of the emerging web technologies often grouped loosely with HTML5, focus less on markup and more on stuff.
But with the rise of the mobile web, and the appearance of natural UI devices like Kinect, even the 'stuff' is beginning to play second fiddle at times.
So what replaces 'stuff'? Experience, of course: people don't want things, they want experiences. This means, among other things, experiencing sense-objects other than pure text.
And many emerging web technologies likewise refocus on user experience beyond pure text. Canvas, native audio/video, WebGL, CSS3 -- the list goes on -- not to mention AI like Siri -- are designed specifically to provide something more than text.
But for developers, stuff besides text is notoriously hard to work with -- let alone things besides stuff, such as, well, user experience. For example: Kinect is an awesome technology, but not much software uses its capabilities anywhere near capacity (although this may be changing). And it took five years for game developers to provide a user experience that finally delivers on the Wii's promise. (Even the Wii, which isn't remotely as complex or multipotential as Kinect.)
So now, as experience-focused web standards emerge but have not yet fully matured, the experience-focused web developer -- always programmed to program for graceful degradation -- faces another twist in the emerging-standards dilemma. Given that I can degrade gracefully, how much degradation is permissible while still achieving a very specific user experience?
Take, for instance, any application attempting to provide a real-time user experience. There is a huge, huge gulf between real-time and almost-real-time. How many milliseconds? That depends on the application in many ways -- but, however many milliseconds 'real-time' means, the application needs to serve the experience in no more than precisely that many milliseconds.
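To make that concrete, here is a minimal sketch of treating 'real-time' as an explicit latency budget. The 100 ms figure is purely an assumption for illustration -- the right number is whatever your particular application's experience demands:

```javascript
// Hypothetical latency-budget check. BUDGET_MS is an assumed value,
// not a standard: each application must define its own threshold for
// what still counts as "real-time".
var BUDGET_MS = 100;

function withinBudget(sentAtMs, receivedAtMs) {
  // An update counts as real-time only if it arrives within the budget.
  return receivedAtMs - sentAtMs <= BUDGET_MS;
}
```

An update arriving at 50 ms passes; one arriving at 150 ms fails -- and, per the argument above, a failed check isn't 'slightly less real-time', it's a different (and worse) experience.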
These were the kinds of issues discussed at the Keeping It Real-Time conference earlier this month. And during one session in particular, the tension between web standard (im)maturity and user experience came to a head.
Here's how it happened.
which of course isn't even 'not real-time' -- it just isn't an application at all.
Scott blogged about this, and suggested a technique for graceful degradation:
Long polling is a technique that lets it look like your browser has a "persistent connection" to the server when in fact you've just got a really "persistent client."...Long Polling is not as efficient [as Web Sockets] but it works nicely and is a totally reasonable fallback when Web Sockets support is unavailable. The latest preview release of IE10 includes Web Sockets support. But my mom doesn't know about Web Sockets and shouldn't have to.
The 10,000 people on the planet that care about Web Sockets are not your customers, and while using Web Sockets might get you mentioned on TechCrunch, supporting only Web Sockets is a great way to cut your potential audience in half.
Which all sounds quite reasonable, even traditional.
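The fallback strategy Scott describes can be sketched roughly as follows. This is an illustrative outline, not code from his post or any particular library: the transport names and the `connect` helper are assumptions, and real fallback libraries (Socket.IO, for instance) handle many more edge cases:

```javascript
// Sketch of graceful degradation: prefer Web Sockets when the browser
// supports them, otherwise fall back to long polling -- a "persistent
// client" that reissues its request as soon as each one completes.

function chooseTransport(global) {
  // Feature detection: supporting browsers expose a WebSocket constructor.
  return typeof global.WebSocket === "function" ? "websocket" : "longpoll";
}

function connect(global, url, onMessage) {
  if (chooseTransport(global) === "websocket") {
    // Persistent connection: the server pushes messages as they happen.
    var ws = new global.WebSocket(url.replace(/^http/, "ws"));
    ws.onmessage = function (event) { onMessage(event.data); };
    return { transport: "websocket", close: function () { ws.close(); } };
  }
  // Long polling: re-request immediately after each response, so the
  // server always has an open request to answer with the next message.
  var stopped = false;
  function poll() {
    if (stopped) return;
    var xhr = new global.XMLHttpRequest();
    xhr.open("GET", url);
    xhr.onload = function () { onMessage(xhr.responseText); poll(); };
    xhr.send();
  }
  poll();
  return { transport: "longpoll", close: function () { stopped = true; } };
}
```

The point of the pattern is exactly what Scott argues: the user's browser picks whichever transport it can, and the experience degrades in efficiency rather than disappearing outright.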
But more interesting was the reply from &yet, the folks who created &!.
&yet's Henrik Joreteg retorted:
We shipped an app that requires WebSockets. Here's why:
Users care about their experience.
I think this is something the web has ignored for far too long so I'll say it again:
Users only care about their experience.
We didn't require Web Sockets because we're enamored with the technology, we actually require it precisely because it provides the best user experience.
Henrik suggests some parallels with Apple, which are interesting, but his basic reasoning is:
I've said this over and over again: web developers who are building single-page applications are in direct competition with native applications.
That's a strong claim, and Henrik repeats it several times just to make sure you understand how strongly he means it.
But does providing no experience for some people help? Does supporting only a seriously-real-time standard, the sort of standard Web Sockets makes possible, make your web app competitive with native apps -- and simultaneously not annoy the people who can't use it at all?
I don't know, but Henrik thinks it does, and he thinks Apple thinks so too. Both Scott's (traditional) and Henrik's (radical) posts are worth reading -- especially as you begin to develop for user experience more and more directly, wondering exactly how to use fledgling web standards like Web Sockets and HTML5.
And I don't know who's right in this dispute. What do you think?