MyronStewart

Reputation: 21

GPU MHZ Utilization

I am developing a monitoring agent for GPU cards that is capable of providing real-time telemetry using CUDA and NVML libraries.

I want to understand a little more about GPU core operation vs how Intel/AMD CPU cores work.

One formula that can be used for CPUs to estimate workload average peak CPU utilization (in MHz) is:

((CPUSPEED * CORES) / 100) * CPULOAD = Workload average peak CPU utilization

More details are here https://vikernel.wordpress.com/tag/vmware-formulas/
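As a quick sketch, that CPU formula in Python would look like this (the CPU specs below are hypothetical, just to show the arithmetic):

```python
def cpu_mhz_utilization(cpu_speed_mhz, cores, cpu_load_pct):
    """Workload average peak CPU utilization in MHz, per the formula above."""
    return (cpu_speed_mhz * cores) / 100 * cpu_load_pct

# Hypothetical 8-core 3000 MHz CPU at 25% load:
print(cpu_mhz_utilization(3000, 8, 25))  # 6000.0
```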

So would it be correct to apply the same formula to GPUs, substituting CUDA cores/shaders for "CORES"? Or could I just multiply the current clock speed by the actual GPU clock usage, given that a GPU has one core clock shared by its thousands of cores/shaders?

For example:

((GRAPHICS_MHZ * CUDA_CORES) /100) * GPU_LOAD = GPU MHZ utilization
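In code, the proposed GPU variant would look like the sketch below (the card specs are hypothetical; whether multiplying by CUDA cores is meaningful here is exactly what the question asks):

```python
def gpu_mhz_utilization(graphics_mhz, cuda_cores, gpu_load_pct):
    """The question's proposed formula: aggregate MHz across all CUDA cores."""
    return (graphics_mhz * cuda_cores) / 100 * gpu_load_pct

# Hypothetical card: 1500 MHz graphics clock, 2048 CUDA cores, 50% load
print(gpu_mhz_utilization(1500, 2048, 50))  # 1536000.0
```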

Upvotes: 2

Views: 634

Answers (2)

MyronStewart

Reputation: 21

I think I found my answer based on how a GPU card works. Since each core runs in parallel, the cores work far more effectively than a CPU core does, from what I have read.

With a CPU core, you can use the formula above, but if you want to see the MHz used on a GPU card, you can simply use:

(GRAPHICS_MHZ / 100) * GPU_LOAD = GPU MHZ utilization

The good thing is that the GPU_LOAD you get back from a GPU card is calculated differently than the load you get from a CPU. If anyone has a different opinion, I would love to hear it.
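A minimal sketch of that simplified formula, with an optional live reading via NVML through the pynvml bindings (the live branch assumes an NVIDIA GPU and driver are present; the fallback numbers are hypothetical):

```python
def gpu_mhz_used(graphics_clock_mhz, gpu_load_pct):
    """Effective MHz in use, per the simplified formula above."""
    return graphics_clock_mhz / 100 * gpu_load_pct

try:
    # Live reading via NVML; requires an NVIDIA GPU, driver, and pynvml.
    import pynvml
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    clock = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
    load = pynvml.nvmlDeviceGetUtilizationRates(handle).gpu
    print(f"{gpu_mhz_used(clock, load):.0f} MHz in use")
    pynvml.nvmlShutdown()
except Exception:
    # No GPU / NVML available: demo with hypothetical values
    print(gpu_mhz_used(1500, 40))  # 600.0
```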

Upvotes: 0

Panos Kalatzantonakis

Reputation: 12683

Check out gpustat; it is a wrapper around nvidia-smi.


Also check GPUtil; it can fetch the maximum current relative load for a GPU.

Upvotes: 2
