The Limit Breaker

Reputation: 35

Are the performances of these two programs the same?

Say, for example, you have a linked list 1->2->3->4->5->6->NULL and you want to calculate the total of every second node of that linked list, starting from the head (assuming the size of the linked list is even).

First Approach

int total = 0;
int count = 0;
Node *ptr = head;
while (ptr != NULL)
{
    // Only add nodes whose position (counting from 0) is even
    if (count % 2 == 0)
    {
        total += ptr->data;
    }
    count++;
    ptr = ptr->next;
}

Second Approach

int total = 0;
Node *ptr = head;
while (ptr != NULL)
{
    total += ptr->data;
    // Jump two nodes at a time; relies on the list length being even
    ptr = ptr->next->next;
}

Given these two approaches, do they have the same performance?

Upvotes: 1

Views: 72

Answers (1)

paddy

Reputation: 63481

Having read your question again, my answer is that the second method is probably slightly faster.

Now, the comments immediately highlighted that it's also more dangerous. However, you have explicitly stated the assumption that the number of nodes in the list is even. If that is a guaranteed and enforceable precondition, then it's technically okay to do this.
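For example, here's a minimal sketch of how you might enforce that precondition, assuming you keep the node count somewhere (the list_length variable below is hypothetical and not part of your code):

#include <assert.h>

// Hypothetical guard: enforce the even-length precondition before using
// the skip-by-two traversal. list_length must be maintained elsewhere.
assert(list_length % 2 == 0);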

Even a smart optimizing compiler has no way of knowing about this even-length precondition, so the best it could likely achieve is to recognize that count is only used to control whether total is updated, and unroll the loop as follows:

// Possible automatic compiler optimization of First Approach
while (ptr)
{
    total += ptr->data;
    ptr = ptr->next;

    // Skip over every second node
    if (ptr) ptr = ptr->next;
}

In basic terms, this still has one more pointer test (branch) per loop iteration than your Second Approach. That means more instructions per iteration (specifically a branch), so the code will technically be (slightly) slower.

Of course, the actual impact of this is likely to be very small. Your main bottleneck is pointer indirection and fetches from memory, rather than the pointer test itself. If the memory used by each node is not mostly contiguous, you'll run into caching problems on large lists (a cache miss can easily cost on the order of 100 times more than a cache hit).

The point of all the above is that the benefit of your special optimization, based on the even-length precondition, suffers from diminishing returns.

Given that it is inherently unsafe unless very well-documented in the code and/or protected by a list "evenness" test (if you store the node count somewhere), I would recommend coding defensively: use your First Approach, or my equivalent and (arguably) tidier version of it above.
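If you do want to keep the skip-by-two shape, a minimal defensive sketch would be to guard the double advance yourself (the extra null check is my addition, not part of your original code):

// Defensive variant of the Second Approach (sketch): the extra check on
// ptr->next means an odd-length list no longer dereferences a null pointer.
int total = 0;
Node *ptr = head;
while (ptr != NULL)
{
    total += ptr->data;
    ptr = (ptr->next != NULL) ? ptr->next->next : NULL;
}

Note that this is essentially the same as the unrolled loop above, with the same extra branch per iteration, so it doesn't buy back the (tiny) cost discussed earlier.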

Upvotes: 3
