There are some really interesting things to note about this switch:
- Native XPath is blazing fast. For the majority of CSS selectors it completely trumps native DOM methods (getElementsByTagName, for example), so it sometimes pays to special-case your code for those selectors.
- No one is analyzing the performance of browser XPath queries - or if they are, the results certainly aren't public. I'm working on some new XPath performance tests to give the topic more visibility, and I hope to release them this week.
- XPath, while incredibly useful, is a black box: the developer has no control over how fast the results come back, or whether they are even correct. Contrast this with traditional DOM scripting, where you can fine-tune your queries to perfection. Browsers are always bound to have some bugs in their implementations. For example, Safari 3 isn't capable of handling "-of-type" or ":empty" style CSS selectors, and no browser's XPath engine can access the 'checked' property or namespaced attributes (all noted in Prototype's implementation), which means they have to fall back to a traditional DOM scripting model.
- Internet Explorer is a dead end. Since most users want a CSS selector implementation that works against HTML documents - and IE is unable to provide XPath for them - every CSS selector implementation must ship two side-by-side selector engines to handle these cases (not to mention the aforementioned cases where browsers behave unexpectedly).
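To make the XPath approach concrete, here is a toy sketch of the kind of CSS-to-XPath translation that libraries like Prototype perform internally before handing the expression to the browser's native engine. The function name and the tiny selector subset it handles (tag, #id, tag.class) are my own illustration, not any library's actual API:

```javascript
// A toy CSS-to-XPath translator covering only a tiny subset of CSS.
// Real selector engines cover far more syntax than this.
function cssToXPath(selector) {
  var match;
  if ((match = selector.match(/^#([\w-]+)$/))) {
    // "#id" -> any element with that id attribute
    return ".//*[@id='" + match[1] + "']";
  }
  if ((match = selector.match(/^(\w+)\.([\w-]+)$/))) {
    // "tag.class" - @class is a space-separated list, so pad it with
    // spaces before testing for containment
    return ".//" + match[1] +
      "[contains(concat(' ', @class, ' '), ' " + match[2] + " ')]";
  }
  if ((match = selector.match(/^(\w+)$/))) {
    // bare tag name
    return ".//" + match[1];
  }
  throw new Error("Unsupported selector: " + selector);
}

// In a browser, the resulting expression would then be evaluated with
// the native XPath engine, e.g.:
//   document.evaluate(cssToXPath("div.item"), document, null,
//     XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
```

Because the heavy lifting happens inside the browser's compiled XPath engine rather than in interpreted JavaScript loops, this is where the speed advantage comes from.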
A couple things to take away from all of this:
- These implementations are black boxes that the developer cannot modify (leaving them vulnerable to browser bugs).
- Libraries must provide a second, DOM-only CSS selector engine well into the foreseeable future, in order to account for browser mis-implementations.
I should also probably answer the inevitable question: "Why doesn't jQuery have an XPath CSS selector implementation?" For now, my answer is: I don't want two selector implementations - it makes the code base significantly harder to maintain, increases the number of possible cross-browser bugs, and drastically increases the filesize of the resulting download. That being said, I'm strongly evaluating XPath for some troublesome selectors that could, potentially, provide some big performance wins for the end user. In the meantime, we've focused on optimizing the selectors that most people actually use - selectors that are poorly represented in speed tests like SlickSpeed, something we hope to rectify in the future.