Matvey

Reputation: 163

GPU memory questions

I have three questions about GPU memory:

  1. Why does my application take a different amount of GPU memory on different machines (with different graphics cards)?

  2. What happens when there is not enough memory on the GPU for my application? Can system RAM be used instead? Who is responsible for this memory management?

  3. I saw some strange behavior of GPU memory: my application starts at 2.5/4 GB of GPU memory used. While running a certain function, GPU memory usage reaches the maximum (4 GB) and then immediately drops to an illogical value (less than was allocated before this function). How can this be explained?

Upvotes: 0

Views: 186

Answers (1)

talonmies

Reputation: 72349

  1. Why does my application take a different amount of GPU memory on different machines (with different graphics cards)?

Because the GPUs are different. Code sizes, minimum runtime resource requirements, page sizes, etc. can all differ between GPUs, driver versions, and toolkit versions.
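
For illustration (not part of the original answer), here is a minimal CUDA sketch that queries each device with cudaMemGetInfo right after context creation; running it on different machines is an easy way to see how much the baseline footprint varies between GPUs, drivers, and toolkits.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int dev = 0; dev < count; ++dev) {
        cudaSetDevice(dev);
        cudaFree(0);                       // force context creation on this device

        size_t freeB = 0, totalB = 0;
        cudaMemGetInfo(&freeB, &totalB);   // free/total device memory as seen right now

        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("%s: %zu MB free of %zu MB total after context creation\n",
               prop.name, freeB >> 20, totalB >> 20);
    }
    return 0;
}
```

Even on an otherwise idle card, the "used" portion reported right after cudaFree(0) differs between machines, because the context itself consumes a device- and driver-dependent amount of memory.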

  2. What happens when there is not enough memory on the GPU for my application?

That would depend entirely on your application and how it handles runtime errors. But the CUDA runtime will simply return errors.
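
As a sketch of that error-returning behavior (the allocation size and the handling policy here are just placeholders): plain cudaMalloc simply reports an error when the device is out of memory, and it is up to the calling code to decide what to do next.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    size_t bytes = 8ull << 30;             // deliberately larger than a 4 GB card
    void* p = nullptr;

    cudaError_t err = cudaMalloc(&p, bytes);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed: %s\n", cudaGetErrorString(err));
        // Up to the application: free caches, use smaller batches, abort, ...
        return 1;
    }
    cudaFree(p);
    return 0;
}
```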

  Can system RAM be used instead?

Possibly, if you have designed your application to use it. But automatically, no.
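
One common way to design for this, shown here as a hedged sketch (platform support for oversubscribing GPU memory varies; it generally needs a Pascal-or-newer GPU under Linux), is CUDA managed/unified memory: the application opts in with cudaMallocManaged, and the driver can then migrate pages between host RAM and the GPU.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

__global__ void fill(float* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = 1.0f;
}

int main()
{
    const int n = 1 << 20;
    float* data = nullptr;

    // The application opts in explicitly; nothing happens automatically.
    cudaError_t err = cudaMallocManaged((void**)&data, n * sizeof(float));
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMallocManaged: %s\n", cudaGetErrorString(err));
        return 1;
    }

    fill<<<(n + 255) / 256, 256>>>(data, n);
    cudaDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);   // host code can touch the same pointer
    cudaFree(data);
    return 0;
}
```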

  Who is responsible for this memory management?

You are.

  3. I saw some strange behavior of GPU memory: my application starts at 2.5/4 GB of GPU memory used. While running a certain function, GPU memory usage reaches the maximum (4 GB) and then immediately drops to an illogical value (less than was allocated before this function). How can this be explained?

The runtime detected an irrecoverable error (such as a kernel trying to access invalid memory as the result of a prior memory allocation failure) and destroyed the CUDA context held by your application, which releases all resources on the GPU associated with your application.
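
A sketch of how that failure mode looks from code (the invalid access is provoked deliberately with a null pointer, standing in for the "use after failed allocation" case): once the kernel faults, the error is sticky, the context is torn down, and every later runtime call in the process fails.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

__global__ void bad(float* p)
{
    p[0] = 1.0f;                      // p is intentionally invalid
}

int main()
{
    bad<<<1, 1>>>(nullptr);           // provoke an illegal memory access
    cudaError_t err = cudaDeviceSynchronize();
    printf("after kernel: %s\n", cudaGetErrorString(err));

    // The context is now unusable: even an unrelated allocation fails.
    void* p = nullptr;
    err = cudaMalloc(&p, 1 << 20);
    printf("later cudaMalloc: %s\n", cudaGetErrorString(err));
    return 0;
}
```

Watching a memory monitor while something like this runs shows the pattern described above: the application's GPU memory disappears once the context is destroyed.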

Upvotes: 1
