Reputation: 6700
I found some different and conflicting answers on this topic.
I am building an application that works mostly with HTML generated dynamically by jQuery, based on results acquired from an underlying API in the form of JSON data.
I was told by some of my colleagues (in person) that the best way would be to do something like this:
var ul = $("<ul>").addClass("some-ul");
$.each(results, function(index) {
    ul.append($("<li>").html(this).attr("id", index));
});
$("body").append($("<div>").attr("id", "div-id").addClass("some-div").append(ul));
etc. The reason I was given was that this "updates the DOM directly instead of parsing HTML to achieve it".
However, I see lots of code like this (same example):
var toAppend = '<div class="some-div" id="div-id"><ul>';
$.each(results, function(index) {
    toAppend += '<li id="' + index + '">' + this + '</li>';
});
toAppend += '</ul></div>';
I personally consider this less elegant - but is it better? I googled the issue for a couple of minutes and found this article. Basically, it is about drastically increasing performance by using string concatenation - my "second way".
The main issue with this article is that it was released in 2009 and the jQuery version discussed is 1.3. Today the current release is version 1.6.4, which can behave quite differently. This is the problem with most of the articles I have found on the subject, which makes me somewhat suspicious of their credibility.
That's why I decided to post the question here and ask: which method of generating DOM content is actually the proper one, based on performance?
IMPORTANT EDIT:
I have written a little benchmark to test which approach performs better.
jsFiddle - concatenation version
jsFiddle - array join version
Code:
var text = "lorem ipsum";
var strings = $("#strings");
var objects = $("#objects");
var results = $("#results");
// string concatenation
var start = new Date().getTime();
var toAppend = ['<div class="div-class" id="div-id1"><ul class="ul-class" id="ul-id1">'];
for (var i = 1; i <= 20000; i++) {
    toAppend[i] = '<li class="li-class" id="li-id1-' + i + '">' + text + '</li>';
}
toAppend[i++] = '</ul></div>';
results.append(toAppend.join(""));
strings.html(new Date().getTime() - start);
// jquery objects
var start = new Date().getTime();
var ul = $("<ul>").attr("id", "ul-id2").addClass("ul-class");
for (var i = 0; i < 20000; i++) {
    ul.append($("<li>").attr("id", "li-id2-" + i).addClass("li-class"));
}
results.append($("<div>").attr("id", "div-id2").addClass("div-class").append(ul));
objects.html(new Date().getTime() - start);
It seems that operating on strings is faster (about 7 times faster in Firefox 7) than using jQuery objects and methods. But I could be wrong, especially if there are any mistakes or performance-degrading bugs in this benchmark's code. Feel free to make any changes.
Note: I used Array.join instead of actual concatenation because of the article mentioned earlier.
EDIT: Based on a suggestion by @hradac, I switched the benchmark to actual string concatenation, and it did in fact improve the times.
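For reference, the two string-building strategies from the benchmark can be sketched in plain JavaScript outside of jQuery. Both produce byte-identical markup; only the build strategy (and its speed, which varies by browser) differs. The `li-` id prefix here is illustrative, not taken from the benchmark:

```javascript
var text = "lorem ipsum";
var n = 20000;

// Variant 1: collect pieces in an array, then join once at the end.
var parts = ['<ul>'];
for (var i = 1; i <= n; i++) {
  parts.push('<li id="li-' + i + '">' + text + '</li>');
}
parts.push('</ul>');
var joined = parts.join('');

// Variant 2: build the string directly with += concatenation.
var concatenated = '<ul>';
for (var j = 1; j <= n; j++) {
  concatenated += '<li id="li-' + j + '">' + text + '</li>';
}
concatenated += '</ul>';

// Both variants build exactly the same string.
console.log(joined === concatenated); // true
```

Timing each loop (as the benchmark does) is what shows the difference; the result string itself is interchangeable.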
Upvotes: 21
Views: 32179
Reputation: 19528
First of all, this kind of micro-benchmarking almost never tells you what you really want to know. Second, your benchmarks are not equivalent. For example, your first example generates lines that look like this:
<li class="li-class" id="li-id1-493">lorem ipsum</li>
and your second lines like this:
<li id="li-id2-0" class="li-class"></li>
Notice the different attribute order and the missing "lorem ipsum" text. Nor is there any attempt to clear out the results div between tests, to avoid performance effects from the first 20K results already being there.
But beyond these issues lies the question: "Is performance here really disrupting the client-side user experience?" Seriously? Are you rendering such a quantity of text this way that you see noticeable differences between the alternative rendering methods?
I'll harken back to what others have said: use a templating engine. The fastest ones are quite quick indeed, and they even have pre-compilation options that let you re-render the same template and get quick results. But don't believe me - believe a demonstration. Here's my jsFiddle demonstrating the performance of the new JsRender library, which is intended to replace the jQuery Template engine...
Note: it can take several seconds for JsRender to load into the fiddle. That's because I'm pulling it straight from GitHub, which is not something GitHub is particularly good at. I don't recommend doing that in actual practice, but it doesn't affect the timings, and it's necessary until jsFiddle starts offering templating engines as options.
Notice that the second example, which is much closer to a real-world case, generates 20,000 lines from JSON as its starting point in approximately the same time as your fastest test (< 50 ms difference on my machine). Note also that both the code and the template are much clearer and easier to work with than any mess of appends and string concatenation is ever going to be. How many iterations will I need to get my template right, compared to what you're doing?
Use something simple and stop wasting time on this level of micro-optimization when it's probably not even necessary. Instead, use templates like this (or any of several other good templating engines), and make sure you have expires headers turned on, you're using a CDN, you have gzip compression enabled on your server, etc. - all the stuff YSlow tells you to do, because that will completely swamp the effects of what you're looking at here.
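The pre-compilation idea behind engines like JsRender can be sketched in a few lines of plain JavaScript. `compileTemplate` below is a toy stand-in, not the JsRender API - real engines add escaping, loops, and conditionals on top of this idea:

```javascript
// Toy template compiler: turns '<li id="{{id}}">{{text}}</li>' into a
// reusable render function. The template source is parsed ONCE; each
// subsequent render is just a cheap array join.
function compileTemplate(source) {
  // split with a capturing group keeps the placeholder names
  // at the odd indices of the resulting array.
  var pieces = source.split(/\{\{(\w+)\}\}/);
  return function render(data) {
    var out = [];
    for (var i = 0; i < pieces.length; i++) {
      // Odd indices are placeholder names; even indices are literal text.
      out.push(i % 2 ? data[pieces[i]] : pieces[i]);
    }
    return out.join('');
  };
}

var liTemplate = compileTemplate('<li id="{{id}}">{{text}}</li>');
console.log(liTemplate({ id: 'li-1', text: 'lorem ipsum' }));
// → <li id="li-1">lorem ipsum</li>
```

Compiling once and rendering in a loop is what makes template engines competitive with hand-rolled string concatenation.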
Upvotes: 12
Reputation: 95057
Basically, the first method performs many separate DOM insertions to build the result, while the second method performs only one. That is the primary cause of the difference in execution time.
I would suggest a third method using an array, in case the strings get extremely large.
var toAppend = ['<div class="some-div" id="div-id"><ul>'];
$.each(results, function(index) {
    toAppend.push('<li id="' + index + '">' + this + '</li>');
});
toAppend.push('</ul></div>');
$(target).append(toAppend.join(""));
I generally use the array method just to be consistent.
Edit: hradac is right - the concatenation method is faster now.
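For illustration, the push/join pattern can be wrapped in a small plain-JavaScript helper (`buildList` is a hypothetical name, not part of any library):

```javascript
// Build an <ul> of <li> items from an array of strings using the
// push/join pattern; returns the markup as a single string, ready
// to hand to $(target).append(...).
function buildList(items) {
  var parts = ['<ul>'];
  for (var i = 0; i < items.length; i++) {
    parts.push('<li id="' + i + '">' + items[i] + '</li>');
  }
  parts.push('</ul>');
  return parts.join('');
}

console.log(buildList(['a', 'b']));
// → <ul><li id="0">a</li><li id="1">b</li></ul>
```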
Upvotes: 2
Reputation: 13640
Your first approach is better. The idea is not to touch the DOM until you have to. In both examples, you create the UL in memory and only attach it to the DOM at the end with $("body").append().
However, the preferred way to build a tree is the one in your first example. String concatenation is a bit slower (of course, we're talking milliseconds), but if you have to do this many times per page, it could become significant.
I'd clean up your code a bit, but only for readability:
var div = $("<div>").attr("id", "div-id").addClass("some-div");
var ul = $("<ul>").addClass("some-ul");
$.each(results, function(index) {
    var li = $("<li>").html(this).attr("id", index);
    ul.append(li);
});
div.append(ul);
// you haven't touched the DOM yet, everything thus far has been in memory
$("body").append(div); // this is the only time you touch the DOM
Upvotes: 2
Reputation: 25421
Check out jquery templates -
http://api.jquery.com/category/plugins/templates/
Upvotes: 6
Reputation: 171511
Generally, I think readability trumps performance unless you are actually having issues. I would opt for the second approach, as it is easily recognizable as a standard HTML snippet and, I think, less prone to errors.
With the first method, I have to parse a lot of code in my mind just to imagine what the final string will look like (and thus where it might need modification).
Upvotes: 4