Reputation: 912
This snippet was taken from a DataCamp example:
for (log in logs) {
  info <- c(info, log$timestamp)
}
I am new to R, but I have a gut feeling this runs with O(n²) time complexity (meaning that doubling the input size would cause it to run 4 times as long).
Does it?
And if it does, how could building up info iteratively be done in a faster way?
Upvotes: 0
Views: 42
Reputation: 24726
You are visiting every element in logs once over the course of the loop, so this code runs with O(n) time complexity. O(n²) would be if, for every element in the list, you visited every other element in the list, i.e.:
for (log1 in logs) {
  for (log2 in logs) {
    info <- c(info, log2$timestamp)
  }
}
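To make the difference concrete, here is a small runnable sketch (the logs list is made up for illustration) that counts how many times each loop body executes:

```r
# Made-up sample data: a list of 10 log records, each with a timestamp field.
logs <- lapply(1:10, function(i) list(timestamp = i))

single <- 0
for (log in logs) {
  single <- single + 1  # body runs once per element: n times
}

nested <- 0
for (log1 in logs) {
  for (log2 in logs) {
    nested <- nested + 1  # body runs once per pair of elements: n^2 times
  }
}

single  # 10  -> O(n) visits
nested  # 100 -> O(n^2) visits
```

Doubling the length of logs would double the first count but quadruple the second, which matches your "4 times as long" intuition for O(n²).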
That being said, just because something has linear rather than quadratic complexity doesn't mean it's a good idea (though, all else being equal, one is definitely better than the other). There are other factors to consider, such as how efficient each individual step of the iteration is, or how big your collection is.
Time complexity can tell you an overall story, but it doesn't tell you everything. There could be another approach to the same operation that has the same (or even worse) time complexity but much better per-step efficiency, resulting in a faster overall algorithm. For example, the code info <- sapply(logs, `$`, "timestamp") that r2evans provided in their comment presumably has the same time complexity but processes the entire collection more efficiently and eliminates the need for you to explicitly construct a loop. In particular, c(info, ...) copies the existing vector on every append, so growing a vector one element at a time does extra work that a single vectorized call avoids.
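Here is a runnable sketch of both approaches side by side (the sample data is made up; the sapply call is spelled out with an anonymous function rather than the `$` shorthand, but it does the same thing):

```r
# Made-up sample data: a list of five log records.
logs <- lapply(1:5, function(i) list(timestamp = i * 100))

# Loop version: c() builds a brand-new vector on every append.
info_loop <- c()
for (log in logs) {
  info_loop <- c(info_loop, log$timestamp)
}

# Vectorized version: one sapply call, no manual appending.
info_vec <- sapply(logs, function(log) log$timestamp)

identical(info_loop, info_vec)  # TRUE
```

Both produce the same numeric vector of timestamps; the sapply version just leaves the collecting and simplifying to R.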
Upvotes: 2