Reputation: 2311
I am trying to run the example add.cu (see below) from this official NVIDIA tutorial, compiling and running it with nvcc add.cu -o add_cuda; ./add_cuda, and I get Segmentation fault (core dumped).
I installed the NVIDIA CUDA toolkit on Ubuntu 18 using sudo apt install nvidia-cuda-toolkit. I have an NVIDIA GF100GL Quadro 5000 and am using the NVIDIA driver metapackage from nvidia-driver-390 (proprietary, tested).
I have little C++ experience, but the pure C++ code from the beginning of the tutorial compiled and ran correctly.
Following a comment, I added a check for the return value of cudaMallocManaged and got operation not supported.
#include <iostream>
#include <math.h>

// Kernel function to add the elements of two arrays
__global__
void add(int n, float *x, float *y)
{
  for (int i = 0; i < n; i++)
    y[i] = x[i] + y[i];
}

int main(void)
{
  int N = 1<<20;
  float *x, *y;

  // Allocate Unified Memory – accessible from CPU or GPU
  cudaMallocManaged(&x, N*sizeof(float));
  cudaMallocManaged(&y, N*sizeof(float));

  // initialize x and y arrays on the host
  for (int i = 0; i < N; i++) {
    x[i] = 1.0f;
    y[i] = 2.0f;
  }

  // Run kernel on 1M elements on the GPU
  add<<<1, 1>>>(N, x, y);

  // Wait for GPU to finish before accessing on host
  cudaDeviceSynchronize();

  // Check for errors (all values should be 3.0f)
  float maxError = 0.0f;
  for (int i = 0; i < N; i++)
    maxError = fmax(maxError, fabs(y[i]-3.0f));
  std::cout << "Max error: " << maxError << std::endl;

  // Free memory
  cudaFree(x);
  cudaFree(y);

  return 0;
}
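A minimal sketch of the kind of check mentioned above, applied to the first allocation (assuming the standard cudaError_t / cudaGetErrorString runtime API), looks like this:

cudaError_t err = cudaMallocManaged(&x, N*sizeof(float));
if (err != cudaSuccess) {
  // On this machine this prints "operation not supported"
  std::cerr << "cudaMallocManaged failed: " << cudaGetErrorString(err) << std::endl;
  return 1;
}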
Upvotes: 2
Views: 2472
Reputation: 1072
Your card belongs to the Fermi family with compute capability 2.0. It does not support Unified Memory, as stated here:
K.1.1. System Requirements
Unified Memory has two basic requirements:
a GPU with SM architecture 3.0 or higher (Kepler class or newer)
a 64-bit host application and non-embedded operating system (Linux, Windows, macOS)
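Because the managed allocation fails on such a device, the pointers x and y are never set to valid memory, and the host initialization loop then writes through invalid pointers, which is what produces the segmentation fault.

If you want to confirm the compute capability programmatically, a minimal sketch using the standard cudaGetDeviceProperties runtime call prints it for device 0:

#include <cstdio>
#include <cuda_runtime.h>

int main(void)
{
  cudaDeviceProp prop;
  cudaError_t err = cudaGetDeviceProperties(&prop, 0);  // query device 0
  if (err != cudaSuccess) {
    std::printf("cudaGetDeviceProperties failed: %s\n", cudaGetErrorString(err));
    return 1;
  }

  // Unified Memory requires SM 3.0 (Kepler) or newer
  std::printf("%s: compute capability %d.%d\n", prop.name, prop.major, prop.minor);
  if (prop.major < 3)
    std::printf("cudaMallocManaged is not supported on this device.\n");

  return 0;
}

On a Quadro 5000 this should report compute capability 2.0.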
Upvotes: 3