Reputation: 321
I am using a GeForce GT 720 to do some basic calculations in MATLAB.
I am simply doing matrix multiplication:
A = rand(3000,3000); % Define array using CPU
tic; % Start clock
Agpu = gpuArray(A); % Transfer data to GPU
Bgpu = Agpu*Agpu; % Perform computation on GPU
time = toc; % Stop clock
In this code, the timer covers both the data transfer to the GPU and the matrix multiplication on the GPU, and I get time ~ 4 seconds. I suspect that the data transfer is taking much more time than the multiplication, so I move toc to isolate the transfer:
A = rand(3000,3000); % Define array using CPU
tic; % Start clock
Agpu = gpuArray(A); % Transfer data to GPU
time = toc; % Stop clock
Bgpu = Agpu*Agpu; % Perform computation on GPU
and indeed it takes ~ 4 seconds. However, if I comment out the last line, so that no multiplication is done, the measured time drops to ~ 0.02 seconds.
Does performing a computation with the GPU after transferring data to the GPU alter the speed of the data transfer?
Upvotes: 2
Views: 236
Reputation: 25140
I don't see this behaviour at all (R2017b, Tesla K20c) - for me, either way, the transfer takes 0.012 seconds. Note that if you're running this in a fresh MATLAB session each time, the very first time you run anything at all on the GPU takes a few seconds - perhaps that accounts for the 4 seconds?
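If it is one-time setup cost, you can rule it out by touching the GPU once before starting the timer, and by synchronising before toc. A minimal sketch (gpuDevice and wait are standard Parallel Computing Toolbox calls):
gpuDevice; % Touch the GPU once so one-time initialization isn't timed
A = rand(3000,3000); % Define array using CPU
tic; % Start clock
Agpu = gpuArray(A); % Transfer data to GPU
wait(gpuDevice); % Block until all queued GPU work has finished
time = toc; % Stop clock: transfer time only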
In general, use gputimeit to time stuff on the GPU to ensure you don't see strange results from the asynchronous nature of some GPU operations.
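For example, something along these lines would time the transfer and the multiplication separately (a sketch; gputimeit executes the function handle several times, synchronises the device, and returns a typical time):
A = rand(3000,3000); % Define array using CPU
transferTime = gputimeit(@() gpuArray(A)); % Typical host-to-GPU transfer time
Agpu = gpuArray(A); % Transfer once so the multiply can be timed on its own
multiplyTime = gputimeit(@() Agpu*Agpu); % Typical GPU matrix multiply time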
Upvotes: 2