Reputation: 7953
I understand that memmove in C (the cstring library) handles overlapping ranges nicely, "at the cost of slower runtime" (see this post). I was wondering where this additional runtime cost comes from. It seems to me that any overlap problem could be fixed by copying backwards instead of forwards, am I wrong?
As a toy example, here are two versions of a "right-shift" function, that shifts the contents of an array by one element on the right:
// Using memmove
#include <cstring>   // std::memmove
#include <new>       // placement new

template <typename T>
void shift_right( T *data, unsigned n )
{
    if (n)
    {
        data[n-1].~T();                           // destroy the element falling off the end
        memmove( data+1, data, (n-1)*sizeof(T) ); // shift the raw bytes one slot to the right
        new (data) T();                           // default-construct a fresh first element
    }
}
// Using copy_backward
#include <algorithm> // std::copy_backward

template <typename Iterator>
void shift_right( Iterator first, Iterator last )
{
    Iterator it = last;
    std::copy_backward( first, --it, last ); // copy [first, last-1) into [first+1, last), back to front
}
Are they equivalent? Performance-wise, which one is better to use?
Note: judging by @DieterLücking's comment, and despite the precautions taken, the above version using memmove is unsafe in this situation, i.e. when T is not trivially copyable.
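For what it's worth, here is a minimal sketch of one way to keep the memmove path while restricting it to types for which the byte-wise move is actually valid; the static_assert guard and the shift_right_trivial name are additions for illustration, not part of the original code:

// Hypothetical guarded variant: only accepts trivially copyable types,
// so the byte-wise memmove cannot bypass meaningful copy/move semantics.
#include <cstring>      // std::memmove
#include <new>          // placement new
#include <type_traits>  // std::is_trivially_copyable

template <typename T>
void shift_right_trivial( T *data, unsigned n )
{
    static_assert( std::is_trivially_copyable<T>::value,
                   "byte-wise shift is only valid for trivially copyable types" );
    if (n)
    {
        data[n-1].~T();                           // a no-op here (trivially copyable implies
                                                  // trivially destructible), kept for symmetry
        memmove( data+1, data, (n-1)*sizeof(T) ); // shift raw bytes one slot to the right
        new (data) T();                           // value-initialize the vacated first slot
    }
}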
Upvotes: 3
Views: 4082
Reputation:
The appropriate ways to copy or move are std::copy, std::copy_n, std::copy_backward and std::move. A proper C++ library implementation will fall back on memcpy or memmove where applicable. Hence there is no need to risk undefined behavior when the copied or moved sequence holds non-trivial data.
Note: Here std::move refers to the algorithm template 'OutputIterator move(InputIterator first, InputIterator last, OutputIterator result);' from <algorithm>, not the rvalue cast. (For @Void)
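To illustrate, a minimal sketch of the question's right shift written with this algorithm family; std::move_backward is not named above, but it is the overlap-safe counterpart of that std::move for a rightward shift, and the shift_right_std name is just an illustrative choice (a typical library will still lower it to memmove for trivially copyable element types):

#include <algorithm> // std::move_backward
#include <iterator>  // std::prev
#include <string>
#include <vector>

// Shift every element one slot to the right; element types with non-trivial
// copy/move semantics are handled correctly by the algorithm.
template <typename BidirIt>
void shift_right_std( BidirIt first, BidirIt last )
{
    if (first != last)
        std::move_backward( first, std::prev(last), last );
}

int main()
{
    std::vector<std::string> v = { "a", "b", "c", "d" };
    shift_right_std( v.begin(), v.end() );
    // v[1], v[2], v[3] now hold "a", "b", "c"; v[0] is left in a valid but
    // unspecified moved-from state, much like the question's copy_backward version
    // leaves a stale first element.
}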
Upvotes: 2
Reputation: 75755
From the post you actually linked (emphasis mine):
memcpy just loops, while memmove performs a test to determine which direction to loop in to avoid corrupting the data. These implementations are rather simple. Most high-performance implementations are more complicated (involving copying word-size blocks at a time rather than bytes).
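For illustration only, here is a deliberately naive, byte-at-a-time sketch of the difference the quote describes; real implementations copy word- or vector-sized blocks, and the toy_memcpy/toy_memmove names are made up for this sketch:

#include <cstddef> // std::size_t

// Naive memcpy: a single forward loop, no overlap handling.
void *toy_memcpy( void *dst, const void *src, std::size_t n )
{
    unsigned char *d = static_cast<unsigned char *>(dst);
    const unsigned char *s = static_cast<const unsigned char *>(src);
    for (std::size_t i = 0; i < n; ++i)
        d[i] = s[i];
    return dst;
}

// Naive memmove: one extra comparison picks the loop direction so that an
// overlapping source and destination never clobber bytes not yet copied.
void *toy_memmove( void *dst, const void *src, std::size_t n )
{
    unsigned char *d = static_cast<unsigned char *>(dst);
    const unsigned char *s = static_cast<const unsigned char *>(src);
    if (d < s)
        for (std::size_t i = 0; i < n; ++i)   // copy front-to-back
            d[i] = s[i];
    else
        for (std::size_t i = n; i > 0; --i)   // copy back-to-front
            d[i-1] = s[i-1];
    return dst;
}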
Upvotes: 2
Reputation: 106227
Assuming a good implementation, the only "extra cost" of memmove is the initial check (an add and a compare-and-branch) to decide whether to copy front-to-back or back-to-front. This cost is so completely negligible (the add and compare will be hidden by ILP and the branch is perfectly predictable under normal circumstances) that on some platforms, memcpy is just an alias of memmove.
In anticipation of your next question ("if memcpy isn't significantly faster than memmove, why does it exist?"), there are a few good reasons to keep memcpy around. The best one, to my mind, is that some CPUs essentially implement memcpy as a single instruction (rep/movs on x86, for example). These HW implementations often have a preferred (fast) direction of operation (or they may only support copying in one direction). A compiler may freely replace memcpy with the fastest instruction sequence without worrying about these details; it cannot do the same for memmove.
Upvotes: 8
Reputation: 22133
Memmove figures out for you whether to copy backward or forward; it is also highly optimized for this task (e.g. copying in SSE-sized blocks as much as feasible).
It is unlikely that you can do any better by calling a generic STL algorithm (the best it could do is call memcpy or memmove behind the scenes), but of course you could answer this question simply by running your code and timing it.
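For example, a minimal timing sketch along those lines; the array size, repetition count and the std::chrono setup are arbitrary choices, and a real benchmark would also have to prevent the compiler from optimizing the loops away:

#include <algorithm> // std::copy_backward
#include <chrono>
#include <cstdio>
#include <cstring>   // std::memmove
#include <vector>

// Illustrative micro-benchmark: time the memmove-based shift against
// std::copy_backward on a large array of trivially copyable ints.
int main()
{
    const std::size_t n = 1 << 20;
    std::vector<int> a(n, 1), b(n, 1);

    using clock = std::chrono::steady_clock;

    auto t0 = clock::now();
    for (int rep = 0; rep < 100; ++rep)
        std::memmove( a.data()+1, a.data(), (n-1)*sizeof(int) );
    auto t1 = clock::now();

    for (int rep = 0; rep < 100; ++rep)
        std::copy_backward( b.begin(), b.end()-1, b.end() );
    auto t2 = clock::now();

    std::printf( "memmove:       %lld us\n",
        (long long)std::chrono::duration_cast<std::chrono::microseconds>(t1-t0).count() );
    std::printf( "copy_backward: %lld us\n",
        (long long)std::chrono::duration_cast<std::chrono::microseconds>(t2-t1).count() );

    // Use the shifted data so the loops are not trivially removable.
    return (a[n-1] + b[n-1]) & 1;
}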
Upvotes: 2