Reputation: 868
How does the standard C function 'memcpy' work? It has to copy a (large) chunk of RAM to another area of RAM. Since (as far as I know) you cannot move straight from RAM to RAM in assembly (with the mov instruction), I am guessing it uses a CPU register as intermediate storage when copying?
But how does it copy? By blocks (and how would it copy by blocks?), by individual bytes (char), or by the largest data type available (copying in long doubles, which are 12 bytes on my system)?
EDIT: OK, apparently you can move data from RAM to RAM directly. I am not an assembly expert, and all I have learned about assembly is from this document (X86 Assembly Guide), which says in the section about the mov instruction that you cannot move from RAM to RAM. Apparently this isn't true.
Upvotes: 32
Views: 36902
Reputation: 145909
A trivial implementation of memcpy is:
while (n--) *s2++ = *s1++;
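Spelled out as a complete function, that trivial version might look like the sketch below (my own illustration, not glibc's code; the name my_memcpy is a placeholder):

#include <stddef.h>

/* Naive byte-by-byte copy: same idea as the one-liner above.
 * Not the real memcpy: no overlap handling, no optimization. */
void *my_memcpy(void *dest, const void *src, size_t n)
{
    unsigned char *d = dest;
    const unsigned char *s = src;
    while (n--)
        *d++ = *s++;
    return dest;
}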
But glibc usually uses some clever implementations in assembly code, and memcpy calls are usually inlined.
On x86, the code checks whether the size parameter is a compile-time constant multiple of 2 or 4 (using gcc built-in functions) and, if so, uses a loop of movl instructions (each copying 4 bytes); otherwise it calls the general case. The general case uses fast block-copy assembly built around the rep and movsl instructions.
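As a rough illustration of that general case (my own sketch, not glibc's source), a block copy built on rep movsl can be written with GCC extended inline assembly; the name copy_words is a placeholder and n is assumed to be a multiple of 4:

#include <stddef.h>

/* Sketch only: copy n bytes (n a multiple of 4) with "rep movsl",
 * which moves ECX 4-byte words from (ESI) to (EDI). */
static void copy_words(void *dst, const void *src, size_t n)
{
    size_t words = n / 4;               /* number of 4-byte words to move */
    __asm__ __volatile__ (
        "rep movsl"                     /* repeat: *dst++ = *src++, 4 bytes at a time */
        : "+D" (dst), "+S" (src), "+c" (words)
        :
        : "memory");
}

A real implementation would also handle the leftover 1-3 bytes and misaligned pointers, typically with a short byte loop before or after the block move.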
Upvotes: 8
Reputation: 727017
The implementation of memcpy is highly specific to the system in which it is implemented. Implementations are often hardware-assisted.
Memory-to-memory mov instructions are not that uncommon - they have been around since at least PDP-11 times, when you could write something like this:
MOV FROM, R2
MOV TO, R3
MOV R2, R4
ADD LEN, R4
CP: MOV (R2)+, (R3)+ ; "(Rx)+" means "*Rx++" in C
CMP R2, R4
BNE CP
The commented line is roughly equivalent to C's
*to++ = *from++;
Contemporary CPUs have instructions that implement memcpy directly: you load special registers with the source and destination addresses, invoke a memory copy command, and let the CPU do the rest.
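On x86-64, for example, rep movsb works roughly that way: the source address goes in RSI, the destination in RDI, the byte count in RCX, and a single instruction performs the whole copy. A minimal sketch using GCC inline assembly (my own illustration, not any particular library's code; cpu_copy is a placeholder name):

#include <stddef.h>

/* Sketch: let the CPU copy n bytes with one "rep movsb".
 * RSI = source, RDI = destination, RCX = byte count. */
static void cpu_copy(void *dst, const void *src, size_t n)
{
    __asm__ __volatile__ (
        "rep movsb"
        : "+D" (dst), "+S" (src), "+c" (n)
        :
        : "memory");
}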
Upvotes: 15
Reputation: 13955
Depends. In general, you couldn't physically copy anything larger than the largest usable register in a single cycle, but that's not really how machines work these days. In practice, you care less about what the CPU is doing and more about the characteristics of DRAM. The memory hierarchy of the machine plays a crucial role in performing this copy as fast as possible (e.g., are you loading whole cache lines? What's the size of a DRAM row relative to the copy?). An implementation might instead choose to use some kind of vector instructions to implement memcpy (see the sketch after this paragraph). Without reference to a specific implementation, it's effectively a byte-for-byte copy with a one-place buffer.
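A vectorized variant might move 16 bytes per iteration with SSE2 loads and stores. The sketch below is my own illustration (placeholder name vec_memcpy), assuming an x86 target with SSE2; it finishes the tail byte by byte:

#include <stddef.h>
#include <emmintrin.h>   /* SSE2 intrinsics */

/* Sketch: copy 16 bytes at a time with unaligned SSE2 loads/stores,
 * then copy the remaining 0-15 bytes one at a time. */
static void *vec_memcpy(void *dest, const void *src, size_t n)
{
    unsigned char *d = dest;
    const unsigned char *s = src;

    while (n >= 16) {
        __m128i chunk = _mm_loadu_si128((const __m128i *)s);  /* unaligned 16-byte load */
        _mm_storeu_si128((__m128i *)d, chunk);                 /* unaligned 16-byte store */
        s += 16;
        d += 16;
        n -= 16;
    }
    while (n--)
        *d++ = *s++;
    return dest;
}

A production version would additionally think about alignment, non-temporal stores for very large copies, and the cache-line questions mentioned above.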
Here's a fun article that describes one person's adventure into optimizing memcpy. The main take-home point is that it is always going to be targeted to a specific architecture and environment, based on the instructions you can execute inexpensively.
Upvotes: 27