Reputation: 1785
The question is: is there a way to use the class vector in CUDA kernels? When I try, I get the following error:
error : calling a host function("std::vector<int, std::allocator<int> > ::push_back") from a __device__/__global__ function not allowed
So is there a way to use a vector in a __global__ function? I recently tried the following:
........ after that I was able to use the printf standard library function in my CUDA kernel.
Is there a way to use the standard library class vector in the way printf is supported in kernel code? This is an example of using printf in kernel code:
// This code just counts the 3s in an array, using CUDA.
// private_count is an array holding each thread's result separately.
__global__ void countKernel(int *a, int length, int* private_count)
{
    printf("%d\n", threadIdx.x); // prints the thread id, and it works
    // vector<int> y;
    // y.push_back(0); is there a possibility to do this?
    unsigned int offset = threadIdx.x * length;
    for (unsigned int i = offset; i < offset + length; i++)
    {
        if (a[i] == 3)
        {
            private_count[threadIdx.x]++;
            printf("%d ", a[i]);
        }
    }
}
Upvotes: 56
Views: 53807
Reputation: 131547
You can't use std::vector in device-side code. Why?

The "formal" reason is that, to use code in your device-side function or kernel, that code itself has to be in a __device__ function; and the code in the standard library, including std::vector, is not. (There's an exception for constexpr code; and in C++20, std::vector does have constexpr methods, but CUDA did not support C++20 at the time of writing; plus, that constexpr-ness is effectively limited.)
The std::vector class uses allocators to obtain more memory when it needs to grow the storage for the elements you add. By default (i.e. if you use std::vector<T> for some T), that allocation is on the heap. While this could be adapted to the GPU, it would be quite slow, and incredibly slow if each "CUDA thread" were to dynamically allocate its own memory.
Now, you could say "But I don't want to allocate memory, I just want to read from the vector!" - well, in that case, you don't need a vector per se. Just copy the data to some on-device buffer, and either pass a pointer and a size, or use a CUDA-capable span, like in cuda-api-wrappers or cuda-kat.
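Here is a minimal sketch of that "pointer and size" approach; the kernel name and data are invented for illustration, and error checking is omitted:

#include <cstdio>
#include <vector>

// Reads from a plain device buffer; no vector is needed on the GPU side.
__global__ void sumKernel(const int* data, size_t size, int* result)
{
    int sum = 0;
    for (size_t i = 0; i < size; i++) {
        sum += data[i];
    }
    *result = sum;
}

int main()
{
    std::vector<int> host_data{1, 2, 3, 3, 5, 3};

    int* device_data;
    int* device_result;
    cudaMalloc(&device_data, host_data.size() * sizeof(int));
    cudaMalloc(&device_result, sizeof(int));

    // std::vector stores its elements contiguously, so one memcpy suffices.
    cudaMemcpy(device_data, host_data.data(),
               host_data.size() * sizeof(int), cudaMemcpyHostToDevice);

    sumKernel<<<1, 1>>>(device_data, host_data.size(), device_result);

    int result;
    cudaMemcpy(&result, device_result, sizeof(int), cudaMemcpyDeviceToHost);
    printf("sum = %d\n", result);

    cudaFree(device_data);
    cudaFree(device_result);
    return 0;
}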
Another option, though a bit "heavier", is to use the NVIDIA Thrust library's device_vector class. Under the hood, though, it's quite different from the standard library vector.
Upvotes: 2
Reputation: 272
In the CUDA library Thrust, you can use thrust::device_vector<classT> to define a vector on the device, and the data transfer between a host STL vector and a device_vector is very straightforward. You can refer to this useful link to find some examples.

Note, however, that a device_vector itself cannot be used in device code either; only its pointers/iterators can be used there.
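A short sketch of that workflow, assuming a CUDA toolkit with Thrust bundled; the kernel and variable names are just for illustration:

#include <cstdio>
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/count.h>

__global__ void printKernel(const int* a, size_t length)
{
    // Only the raw pointer, not the device_vector itself, is usable here.
    if (threadIdx.x < length) {
        printf("%d\n", a[threadIdx.x]);
    }
}

int main()
{
    thrust::host_vector<int> h(4);
    h[0] = 3; h[1] = 1; h[2] = 3; h[3] = 2;

    // Host-to-device transfer is a plain assignment.
    thrust::device_vector<int> d = h;

    // Thrust algorithms operate on the device data directly.
    int threes = (int)thrust::count(d.begin(), d.end(), 3);
    printf("number of 3s: %d\n", threes);

    // Inside a kernel, pass the underlying raw pointer instead.
    printKernel<<<1, 4>>>(thrust::raw_pointer_cast(d.data()), d.size());
    cudaDeviceSynchronize();
    return 0;
}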
Upvotes: 14
Reputation: 483
I think you can implement a device vector yourself, since CUDA supports dynamic memory allocation in device code; operators new and delete are also supported. Here is an extremely simple prototype of a device vector in CUDA. It does work, but it hasn't been tested thoroughly.
template<typename T>
class LocalVector
{
private:
    T* m_begin;
    T* m_end;
    size_t capacity;
    size_t length;

    // Doubles the capacity. Note: memcpy assumes T is trivially copyable.
    __device__ void expand() {
        capacity *= 2;
        size_t tempLength = (m_end - m_begin);
        T* tempBegin = new T[capacity];
        memcpy(tempBegin, m_begin, tempLength * sizeof(T));
        delete[] m_begin;
        m_begin = tempBegin;
        m_end = m_begin + tempLength;
    }

public:
    __device__ explicit LocalVector() : length(0), capacity(16) {
        m_begin = new T[capacity];
        m_end = m_begin;
    }

    __device__ T& operator[] (unsigned int index) {
        return *(m_begin + index);
    }

    __device__ T* begin() {
        return m_begin;
    }

    __device__ T* end() {
        return m_end;
    }

    __device__ ~LocalVector()
    {
        delete[] m_begin;
        m_begin = nullptr;
    }

    __device__ void add(T t) {
        if (static_cast<size_t>(m_end - m_begin) >= capacity) {
            expand();
        }
        *m_end = t; // slots were default-constructed by new T[], so assign
        m_end++;
        length++;
    }

    __device__ T pop() {
        // The last element lives one before m_end, and its storage belongs
        // to the array, so just step back; no per-element delete.
        m_end--;
        length--;
        return *m_end;
    }

    __device__ size_t getSize() {
        return length;
    }
};
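A hypothetical usage sketch (the kernel and data layout are invented): since each thread allocates from the device heap, you may need to raise the heap limit with cudaDeviceSetLimit(cudaLimitMallocHeapSize, ...) before launching.

__global__ void collectThreesKernel(const int* a, int length)
{
    LocalVector<int> threes;           // per-thread vector on the device heap
    int offset = threadIdx.x * length;
    for (int i = offset; i < offset + length; i++) {
        if (a[i] == 3) {
            threes.add(a[i]);
        }
    }
    printf("thread %d found %llu threes\n",
           threadIdx.x, (unsigned long long)threes.getSize());
}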
Upvotes: 6
Reputation: 767
You can't use std::vector in device code; use a plain array instead.
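For example, a minimal sketch using a fixed-size per-thread array, assuming you can bound the number of results per thread (MAX_RESULTS and the kernel name are invented):

#define MAX_RESULTS 64 // invented upper bound on matches per thread

__global__ void gatherKernel(const int* a, int length, int* out, int* counts)
{
    int local[MAX_RESULTS]; // fixed-size per-thread array, no allocation
    int n = 0;
    int offset = threadIdx.x * length;
    for (int i = offset; i < offset + length && n < MAX_RESULTS; i++) {
        if (a[i] == 3) {
            local[n++] = a[i];
        }
    }
    // Copy the collected values to this thread's slice of global memory.
    for (int j = 0; j < n; j++) {
        out[offset + j] = local[j];
    }
    counts[threadIdx.x] = n;
}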
Upvotes: 7
Reputation: 773
You can't use the STL in CUDA, but you may be able to use the Thrust library to do what you want. Otherwise just copy the contents of the vector to the device and operate on it normally.
Upvotes: 26