"Performance" is multi-dimensional; so are "compatibility" and "scalability"... Well, whatever it is, it surely tells us more than a single bit of information!
People often write in such simplistic extremes, and I'm starting to notice that many who work in computing -- sometimes even experienced decision-makers -- operate as though being able to say the words means that they understand the concepts. Here's an example:
Intellectual honesty recognizes complexity
Christopher Blizzard recently wrote on "intellectual honesty and HTML5". He makes several appropriate and even important points about, for instance, the profoundly mixed messages browser providers send when they advertise their conformance standards in ways that undercut interoperability. For him, "HTML5 is in a dangerous place since everyone wants to own it, but everyone is in a different place in terms of ... even what it means."
As part of his evidence, Blizzard includes a polychromatic table labeled "... support of currently displayed [HTML5] feature lists", denominated in percentages: Firefox 3.6, for example, is measured at 90%. Blizzard's a thoughtful, energetic guy, and I assume he recognizes the limits of his presentation. What has alarmed me, though, is the extent to which others with whom I've chatted about HTML5 believe percentages like those that appear in his article. Among the difficulties:
- HTML5 isn't finished;
- HTML5 isn't a single standard;
- the table isn't explicit about what counts as a feature. Is <audio>, for instance, one feature, or eighteen closely related features?
- does "support" mean "parse in some way, even if not the way other browsers do", or "interpret exactly as Mozilla does", or ... ?
- features -- however they're segmented -- are all given the same weight, without regard to their importance in use; and
- even for a specific browser version, not all platform-specific implementations behave identically on all HTML5 features.
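The weighting problem in particular is easy to demonstrate with a little arithmetic. The sketch below uses entirely invented numbers -- it describes no real browser -- but it shows how the same per-feature scores produce very different headline percentages depending on how the features are weighted:

```python
# Hypothetical feature matrix: these scores are invented for illustration,
# not measurements of any real browser. Each feature gets a support score
# in [0, 1] rather than a yes/no bit, since partial support is the norm.
support = {
    "audio":       0.0,  # not implemented at all in this imaginary browser
    "canvas":      1.0,
    "web_workers": 1.0,
    "geolocation": 1.0,
}

# Equal weighting -- the scheme implied by a single headline percentage.
equal = sum(support.values()) / len(support)

# Usage-based weighting: weight each feature by how much it matters to
# *your* application (these weights are also made up for illustration).
weights = {"audio": 0.7, "canvas": 0.1, "web_workers": 0.1, "geolocation": 0.1}
weighted = sum(weights[f] * score for f, score in support.items())

print(f"equal weighting: {equal:.0%}")    # 75% -- looks nearly done
print(f"usage weighting: {weighted:.0%}") # 30% -- useless for an audio app
```

The "75% compliant" browser scores 30% for an audio-heavy application; neither number is wrong, but neither one alone tells you what you need to know.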
Despite all this, I encounter arguments based on these measurements where the discussants appear to believe that they've captured something real: "there's no point in testing for Safari because it's stagnant [that is, its quoted support percentage doesn't change from 4.0 to 4.1]."
An important part of engineering and project management is, of course, to be able to make good decisions with incomplete information. There's value in measurements like those which populate Blizzard's table. Nearly all of that value erodes away, though, when the reader's comprehension is so impoverished that he believes the measurements capture everything there is to know about, in this case, HTML5 compatibility. That is the pattern that sets off my alarms.
"Scalability", specifically in databases, is subject to the same disease. Scalability certainly is an important concept or idea. To reduce it to a boolean variable, though -- "DB2 isn't scalable; MySQL is scalable" -- is such a bad strategy as to leave me nearly speechless. "[I]t is not even wrong!"
Any meaningful decision about database scalability probably needs to juggle at least:
- scaling schema size;
- scaling table size;
- scaling network connections; and
- distinguishing read from write operations,
let alone capabilities for configuration of partitions, stripes, replicas, server-side procedures, multi-processing, and caching. Simplistic reduction of all that to a single figure -- or, worse, a single bit -- tells us more about the analyst than about the database management system.
Yet I often cross paths with practitioners who speak exactly that way. The only sure remedy I know is to be explicit and objective: what exactly are we trying to scale? How will we know if we're successful? Are we targeting a specific measurement? This kind of care requires a bit more effort than a knee-jerk reaction, but it's far more likely to lead to a good outcome: purchase of an economical platform, or construction of an application back-end which actually keeps up with user demands.
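What "explicit and objective" might look like can be sketched in a few lines. All the numbers below are invented placeholders -- the point is the shape of the question, not the values: each scaling dimension gets its own target and its own measurement, instead of one scalability bit for the whole system.

```python
# Hypothetical per-dimension scaling targets for a database deployment.
targets = {
    "read_qps":            5_000,  # sustained read queries per second
    "write_qps":             800,  # sustained write queries per second
    "rows_per_table": 50_000_000,  # largest table we must handle
    "connections":         2_000,  # concurrent client connections
}

# Hypothetical benchmark results for a candidate system.
measured = {
    "read_qps":            7_200,
    "write_qps":             450,
    "rows_per_table": 80_000_000,
    "connections":         2_500,
}

# Per-dimension verdicts: this imaginary system scales on three axes and
# falls short on one. A single boolean could not express that.
verdict = {dim: measured[dim] >= goal for dim, goal in targets.items()}

for dim, ok in verdict.items():
    print(f"{dim:>15}: {'meets target' if ok else 'FALLS SHORT'}")
```

A system that "is scalable" on reads and "isn't scalable" on writes is a perfectly ordinary finding -- and exactly the kind of finding a one-bit answer erases.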
It's good to make our ideas simple. It's bad to make them too simple.