Reputation: 81
I'm trying to allocate shared GPU memory (not to be confused with CUDA's on-chip shared memory) with CUDA. This memory is shared between an Intel GPU and an NVIDIA GPU. To allocate it I'm using cudaMallocManaged, and the maximum allocation size is 2 GB (which is also the case for cudaMalloc), i.e. the size of the dedicated memory.
Is there a way to allocate GPU shared memory, or RAM from the host, that can then be used in a kernel?
Upvotes: 1
Views: 2607
Reputation: 152173
I assume the objective here is to be able to access more than 2GB of memory from your CUDA code, running on your MX150 GPU. The "shared" memory you highlighted is part of the Windows graphics system and is not directly accessible from CUDA.
The only option you have is to switch to Linux. You can then "oversubscribe" your GPU memory with cudaMallocManaged (i.e. allocate more than 2GB).
Oversubscription of GPU memory is not supported under the Windows WDDM driver model, and the MX150 supports only the WDDM driver model on Windows.
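On Linux, oversubscription with cudaMallocManaged can be sketched roughly like this (the 4 GB size and the `touch` kernel are illustrative, not from the question; pages of the managed allocation migrate to the GPU on demand as the kernel touches them):

```
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative kernel: write to every byte so pages migrate on demand.
__global__ void touch(char *buf, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) buf[i] = 1;
}

int main() {
    // 4 GB: deliberately larger than the MX150's 2 GB of dedicated memory.
    size_t bytes = 4ull << 30;
    char *buf = nullptr;
    cudaError_t err = cudaMallocManaged(&buf, bytes);
    if (err != cudaSuccess) {
        // Under Windows/WDDM this oversubscribed allocation fails;
        // under Linux it should succeed.
        printf("cudaMallocManaged failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    touch<<<(unsigned)((bytes + 255) / 256), 256>>>(buf, bytes);
    cudaDeviceSynchronize();
    cudaFree(buf);
    return 0;
}
```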
Upvotes: 4