Reputation: 1467
I'm experimenting with DSP.jl, the conv() method in particular. I'm using CUDAnative and CuArrays to create the arrays passed to conv(), so that the CUDA versions of fft(), etc. get used. I'm using BenchmarkTools to collect performance data. I find that the Julia runtime complains about running out of CPU or GPU memory under odd circumstances. Here's my test setup:
using CUDAdrv, CUDAnative, CuArrays
using DSP
using FFTW
using BenchmarkTools
N = 120
A = rand(Float32, N, N, N);
B = rand(Float32, N, N, N);
A_d = cu(A);
B_d = cu(B);
function doConv(A, B)
    C = conv(A, B)
    finalize(C)  # attempt to release the result's memory
    C = []
end
t = @benchmark doConv($A_d, $B_d)
display(t)
Here's an example of the odd behavior I mentioned. If I set N to 120, my script runs to completion. If I set N to 64, I get an out-of-memory error:

ERROR: LoadError: CUFFTError(code 2, cuFFT failed to allocate GPU or CPU memory)

I can run the smaller case first, get the error, then bump N to the larger value and have the script complete successfully.
Is there something I should be doing differently to prevent this from happening?
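For reference, here is the kind of workaround I've been experimenting with. This is only a sketch: doConvReclaim() is a hypothetical variant of my doConv() above, and I'm assuming that GC.gc() plus CuArrays.reclaim() is the intended way to hand pooled GPU memory back to the driver between runs.

```julia
using CUDAdrv, CUDAnative, CuArrays
using DSP, FFTW, BenchmarkTools

# Hypothetical variant of doConv() that drops the result and
# forces memory to be released between benchmark iterations.
function doConvReclaim(A, B)
    C = conv(A, B)
    C = nothing        # drop the only reference to the result
    GC.gc()            # run Julia's garbage collector
    CuArrays.reclaim() # return freed pooled GPU blocks to CUDA
end

N = 64
A_d = cu(rand(Float32, N, N, N));
B_d = cu(rand(Float32, N, N, N));
t = @benchmark doConvReclaim($A_d, $B_d)
display(t)
```

This slows each iteration down (a full GC pass per call), so it's only a diagnostic, not something I'd want in the benchmark loop permanently.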
Upvotes: 3
Views: 287