Reputation: 7362
I cannot find, in the official docs or in any of the many articles I've read, how compound selectors are actually matched.
Side note: obviously compound selectors are sometimes needed for specificity; and yes, descendant selectors are expensive.
Most articles simply validate that CSS is read right-to-left like this:
div.some-class li a
The authors of these articles state something like:
First, all anchor elements are matched, then the parser looks for a list-item as an ancestor, then it looks for an ancestor div with the class of "some-class."
These descriptions make it sound as though the CSS engine treats each space-separated compound selector as a single unit, rather than reading right-to-left within a given compound selector.
So a very common argument I see online and at work is that div.some-class is faster than .some-class because "it only has to look at divs that have that class." But that would only make sense if CSS were read left-to-right, OR if, within a compound selector, there's a performance exception where the engine finds the element collection first before checking for a matching class.
However, using the example above, my understanding is this: all a elements are matched; then, if there is an li ancestor, it's still matched; then it looks for ANY element ancestor with a class of "some-class"; THEN it checks whether that element is a div. If so, the styles are applied.
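For illustration, that right-to-left process can be sketched as a toy matcher. This is not any engine's actual code; it assumes a simplified node shape ({ tag, classes, parent }) and descendant combinators only:

```javascript
// Toy right-to-left matcher, illustration only (not real engine code).
// A node is { tag: "a", classes: ["x"], parent: <node or null> }.
// A selector is an array of compounds, leftmost first, all separated by
// descendant combinators, e.g. "div.some-class li a" becomes:
//   [{ tag: "div", cls: "some-class" }, { tag: "li" }, { tag: "a" }]

function matchesCompound(node, compound) {
  if (compound.tag && node.tag !== compound.tag) return false;
  if (compound.cls && !node.classes.includes(compound.cls)) return false;
  return true;
}

function matchesRightToLeft(node, selector) {
  var i = selector.length - 1;
  // The rightmost compound must match the candidate element itself.
  if (!matchesCompound(node, selector[i])) return false;
  i--;
  // Walk up the ancestor chain, consuming compounds right-to-left.
  for (var anc = node.parent; anc && i >= 0; anc = anc.parent) {
    if (matchesCompound(anc, selector[i])) i--;
  }
  return i < 0; // every compound found a matching ancestor
}
```

For example, in a tree div.some-class > ul > li > a, the a element matches while the li does not, because for the li the rightmost compound fails immediately.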
The real question:
1) Is div.some-class still read right-to-left in that compound form; or,
2) as a compound selector, does the CSS parser find all div elements first, then check whether they have that class?
An official source of the answer is what I'm most interested in.
Possible answer: assuming document.querySelectorAll uses the CSS parsing engine and not a JavaScript "version" of it, I found the following.
I tested a page with 200,000 p elements, all with the class "p" on them. Querying .p in a loop 100 times, versus p.p, showed that .p is the fastest in Chrome 53; selecting p.p takes 1.71× as long. I repeated the process 8 times and averaged the numbers: .p = 2,358 ms and p.p = 4,036 ms.
// Times 100 querySelectorAll(".p") calls (class selector alone)
function p() {
  var d = Date.now();
  var a = [];
  function fn() {
    // store the length so the query result is actually used
    a.push(document.querySelectorAll(".p").length);
  }
  for (var i = 0; i < 100; i++) {
    fn();
  }
  console.log(".p = " + (Date.now() - d));
}

// Times 100 querySelectorAll("p.p") calls (type + class compound)
function pp() {
  var d = Date.now();
  var a = [];
  function fn() {
    a.push(document.querySelectorAll("p.p").length);
  }
  for (var i = 0; i < 100; i++) {
    fn();
  }
  console.log("p.p = " + (Date.now() - d));
}
In Chrome 53, it appears that compound selectors are in fact still read right-to-left, making element.class compound selectors much slower than selecting by class alone; the same holds when attributes are used instead of classes.
In IE11, it's mostly the inverse: though not dramatically so, compound selectors with element.class or element[attribute] were actually faster than selecting by class or attribute alone.
Upvotes: 0
Views: 1255
Reputation:
It is not clear that querySelectorAll is a valid way of testing the performance of basic CSS matching, since they are two separate problems: whereas querySelectorAll is trying to find the elements matching some selector, basic CSS matching is trying to find the selectors matching some element. It is entirely possible that they are implemented quite differently internally. For instance, if I were implementing querySelectorAll('div span'), I might choose to first find all the div elements and then find their span descendants, whereas if I were trying to find the rules that match the <span> in <div><span></span></div>, I would look for rules ending in span, then check its ancestors.
Having said that, if you have performance problems with your CSS, rather than concerning yourself with internal details of the CSS engine, such as the order in which the components of p.p are matched, you might be better off going back and reviewing some basic principles of your CSS. In particular, if you are using Sass or similar, make sure you are not falling into the temptation of overly qualified, overly nested selectors, such as would be generated by nesting that unnecessarily mirrors the structure of the HTML, as in
.page-content {
  .article {
    .section {
      .quote {
        ...
      }
    }
  }
}
It could easily be the case that a single rule for .quote is all you need, without the ancestor qualifiers. As a very general rule, CSS architectures built from independent, single-purpose classes that are composed will perform better than ones with long chains of selectors trying to match HTML structure.
In addition, make sure you are following basic best practices, such as not specifying the element type when it is not necessary, as is often the case with div.div instead of just .div.
Of course, you should find and weed out unused rules if you haven't already done that.
Upvotes: 0
Reputation: 723448
Compound selectors are not necessarily evaluated in any specific order. For example, most if not all implementations optimize for ID, class and type selectors to match fast or fail fast (at least Gecko does according to Boris Zbarsky), then evaluate attribute selectors and pseudo-classes as necessary.
It's not feasible to predict exactly how any given browser, let alone all of them, will evaluate a compound selector, let alone each compound selector in a complex selector containing more than one. What we do know is that right-to-left matching starts from the rightmost compound selector and steps leftward until matching fails.
It's important to note that this is merely an implementation detail that's agreed upon by vendors — you could implement selector matching however you like, but so long as you match the right elements with the right selectors, your implementation will be standards-compliant.
But what's most important is that, in the real world, none of this is likely to matter. Write selectors that are readable and meaningful, don't unnecessarily overqualify them, avoid specificity hacks where possible, and you should be good.
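The match-fast/fail-fast idea above can be pictured with a sketch like the following (illustrative only, not any engine's real code): cheap string and set comparisons for the type and class parts run first, and the more expensive attribute/pseudo-class predicates run only if those pass:

```javascript
// Illustrative fail-fast ordering for one compound selector (not real
// engine code). Cheap checks first; expensive predicates last.
// An element is assumed to be { tag, classes: Set, attrs: {...} }.
function matchCompound(el, sel) {
  // 1. Type selector: a single string comparison.
  if (sel.tag && el.tag !== sel.tag) return false;
  // 2. Class selectors: set lookups.
  if (sel.classes && !sel.classes.every(function (c) {
    return el.classes.has(c);
  })) return false;
  // 3. Attribute selectors / pseudo-classes: arbitrary predicates,
  //    only evaluated once everything cheaper has matched.
  if (sel.predicates && !sel.predicates.every(function (p) {
    return p(el);
  })) return false;
  return true;
}
```

With this ordering, a selector like p.some-class tested against a div rejects on the very first string comparison, which is the "fail fast" behavior described above.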
Upvotes: 3