Michael

Reputation: 7809

Confusion about CUDA unified virtual memory

I have some confusion about unified virtual memory.

The documentation at this link (http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#unified-virtual-address-space) says it can be used when:

When the application is run as a 64-bit process, a single address space is used for the host and all the devices of compute capability 2.0 and higher.

But this link (http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#um-requirements) says it needs:

a GPU with SM architecture 3.0 or higher (Kepler class or newer)

Furthermore, the first link says that I can use cudaHostAlloc, while the second one uses cudaMallocManaged.

Are these two different things hiding behind the 'Unified' term, or is the documentation just a bit incoherent?
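For reference, here is a minimal sketch of the two allocation calls I am comparing (buffer size and names are just placeholders):

```cuda
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    float *pinned = nullptr, *managed = nullptr;

    // First link: pinned, mapped host allocation usable under UVA
    // (compute capability 2.0+, 64-bit process)
    cudaHostAlloc(&pinned, 100 * sizeof(float), cudaHostAllocMapped);

    // Second link: managed allocation
    // (compute capability 3.0+, CUDA 6.0+)
    cudaMallocManaged(&managed, 100 * sizeof(float));

    printf("pinned=%p managed=%p\n", (void*)pinned, (void*)managed);

    cudaFreeHost(pinned);
    cudaFree(managed);
    return 0;
}
```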

Upvotes: 3

Views: 3321

Answers (1)

George

Reputation: 5681

You are referring to the Unified Virtual Address Space, which is not the same as Unified Memory. Unified Memory was introduced in CUDA 6.0, requires SM architecture 3.0 or higher, and eliminates the need for explicit data transfers between host and device.
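As a minimal sketch (assuming a Kepler-class or newer GPU and CUDA 6.0+), with cudaMallocManaged the same pointer is valid on both the host and the device, so no explicit cudaMemcpy is needed around the kernel launch:

```cuda
#include <cuda_runtime.h>
#include <cstdio>

__global__ void addOne(int *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

int main() {
    const int n = 256;
    int *data = nullptr;
    cudaMallocManaged(&data, n * sizeof(int));   // managed: visible to host and device

    for (int i = 0; i < n; ++i) data[i] = i;     // host writes directly to the buffer

    addOne<<<(n + 127) / 128, 128>>>(data, n);   // kernel uses the same pointer
    cudaDeviceSynchronize();                     // wait before the host touches it again

    printf("data[0] = %d\n", data[0]);           // host reads the result, no cudaMemcpy
    cudaFree(data);
    return 0;
}
```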

[images: unified memory, unified memory 2]

You can also check here and here.

Upvotes: 3
