Reputation: 2227
Currently I am doing a project for my university, where I implement well-known data structures (array, linked list, BST, etc.) and have to measure the time some operations on them take. For example, the first one for me was the array. I measured the time for inserting an element into the middle of an array (shifting the later values, of course, so n = n + 1). As expected, it gave me O(n), where n is the number of elements. Now I have to check inserting an element at the beginning of the array and appending an element to the end. I have two simple algorithms (I cannot use the STL or boost::):
Adding to the beginning:
void array::beginning_append(int value)
{
    int *temp = new int[n + 1];
    memcpy(temp + 1, table, sizeof(int) * n);
    n = n + 1;
    temp[0] = value;
    delete[] table;
    table = new int[n];
    memcpy(table, temp, sizeof(int) * n);
    delete[] temp;
}
Appending to the end:
void array::append(int value)
{
    int *temp = new int[n + 1];
    memcpy(temp, table, sizeof(int) * n);
    temp[n] = value;
    n = n + 1;
    delete[] table;
    table = new int[n];
    memcpy(table, temp, sizeof(int) * n);
    delete[] temp;
}
Those are methods of my array class, where table and n are members of that class. The two don't differ much, so I thought their times would be the same, and they are (I checked with QueryPerformanceCounter for large numbers of elements, like 500000, and both gave me O(n) complexity). But my lecturer said that adding to the beginning has O(n) computational complexity, while adding or removing an element at the end has complexity O(1), i.e. constant regardless of the number of elements. So I would like to ask: are my algorithms bad, in that they do unnecessary work and that is why their running time depends on the number of elements, or was my lecturer just wrong?
Upvotes: 3
Views: 153
Reputation: 73366
Both complexities are O(n), because you are copying the entire array. See more about the complexity of memcpy() here.
Tip: You could easily skip the second copy by freeing the memory table is pointing to and then making it point to temp. That way you copy only once, not twice. However, the overall time complexity will not change.
Example:
void array::prepend(int value)
{
    int *temp = new int[n + 1];
    memcpy(temp + 1, table, sizeof(int) * n);
    n = n + 1;
    temp[0] = value;
    delete[] table;
    table = temp;
}
Pro tip: How do you detect/avoid Memory leaks in your (Unmanaged) code?
Hunch: A clever implementation of realloc() should execute faster than a brute-force memcpy() into a new array. The reason is that if realloc() is able to grow/shrink your array without having to copy the whole thing to a new location (a copy can occur when there is simply not enough contiguous space after the block), then it will be faster. Note that realloc() is not always O(1), though.
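To illustrate the hunch, here is a minimal sketch (my own, not from the question) of an append built on realloc(). One important caveat: realloc() only works on memory obtained from malloc()/calloc()/realloc(), never on memory from new[], so this sketch uses a plain struct with malloc-managed storage rather than the poster's array class.

```cpp
#include <cstdlib>

// Hypothetical container for illustration; names are my own invention.
struct int_array {
    int    *table = nullptr;
    size_t  n     = 0;
};

// Append one value, letting realloc() grow the block in place when it can.
// Returns false (leaving the array untouched) if allocation fails.
bool append(int_array &a, int value)
{
    int *grown = static_cast<int *>(
        std::realloc(a.table, (a.n + 1) * sizeof(int)));
    if (!grown)
        return false;          // original block is still valid on failure
    a.table = grown;
    a.table[a.n++] = value;
    return true;
}
```

When the allocator can extend the block in place, no copy happens at all; when it cannot, realloc() itself performs the O(n) copy, so the worst case is the same as the memcpy() version.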
Upvotes: 3
Reputation: 12047
Adding to the end can be achieved in O(1) when the buffer is big enough to hold the new element. Usually, when a reallocation is done, the memory size is increased by more than what is needed at the moment, so that reallocations don't have to happen too often.
In that case you get an average (amortized) complexity of O(1) per append, and a worst-case complexity of O(n) for the individual append that triggers a reallocation.
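This is the trick the lecturer likely had in mind. A minimal sketch (my own illustration, not the poster's class): track a separate capacity alongside the element count, so most appends just write one slot, and only a full buffer triggers an O(n) copy, with the capacity doubled each time so that copies stay rare.

```cpp
#include <cstddef>
#include <cstring>

// Sketch of an amortized-O(1) append via capacity doubling.
class dyn_array {
    int    *table    = nullptr;
    size_t  n        = 0;   // elements currently in use
    size_t  capacity = 0;   // allocated slots (>= n)
public:
    void append(int value)
    {
        if (n == capacity) {                    // buffer full: grow (rare)
            size_t new_cap = capacity ? capacity * 2 : 1;
            int *temp = new int[new_cap];
            if (n)
                std::memcpy(temp, table, sizeof(int) * n);  // O(n) copy
            delete[] table;
            table = temp;
            capacity = new_cap;
        }
        table[n++] = value;                     // O(1) common case
    }
    size_t size() const { return n; }
    int operator[](size_t i) const { return table[i]; }
    ~dyn_array() { delete[] table; }
};
```

Appending m elements performs at most about 2m element copies in total, which is why the cost per append is O(1) on average even though a single append can cost O(n). This is essentially how std::vector::push_back works.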
Upvotes: 1