GeekLee

Reputation: 161

warning: calling a __host__ function from a __host__ __device__ function is not allowed

I have looked through almost all the similar questions but did not find an answer. Error checking is recommended by many people, so I tried to use a CHECKED_CALL()-style macro to make the program more robust, but my code has run into two problems:

  1. As the title says, I get a warning message; before I added #pragma hd_warning_disable, I instead got the error message:

    cuEntityIDBuffer.cu(9): error: identifier "stderr" is undefined in device code

  2. When I compile maintest.cpp, I get another error (EDIT: exact command and output below):

    g++ -c maintest.cpp -std=c++11
    cuEntityIDBuffer.h:1:27: fatal error: thrust/reduce.h: No such file or directory

However, everything works fine when compiling cuEntityIDBuffer.cu, which also includes cuEntityIDBuffer.h:

    nvcc -arch=sm_35 -Xcompiler '-fPIC' -dc cuEntityIDBuffer.cu

Both cuEntityIDBuffer.cu and maintest.cpp #include "cuEntityIDBuffer.h", yet only maintest.cpp produces this error, and I have no idea why.

The code is below:

cuEntityIDBuffer.h

#include <thrust/reduce.h>
#include <thrust/execution_policy.h>
#include <stdio.h>
#include <assert.h>
#include <cuda_runtime.h>

#ifdef __CUDACC__
#define CUDA_CALLABLE_MEMBER __host__ __device__
#else
#define CUDA_CALLABLE_MEMBER
#endif

class cuEntityIDBuffer
{
public:
    CUDA_CALLABLE_MEMBER cuEntityIDBuffer();
    CUDA_CALLABLE_MEMBER cuEntityIDBuffer(unsigned int* buffer);
    CUDA_CALLABLE_MEMBER void cuCallBackEntityIDBuffer(unsigned int* buffer);
    CUDA_CALLABLE_MEMBER ~cuEntityIDBuffer();
    CUDA_CALLABLE_MEMBER void cuTest();
private:
    size_t buffersize;
    unsigned int* cuBuffer;
};

cuEntityIDBuffer.cu

#include "cuEntityIDBuffer.h"
#include <stdio.h>
#pragma hd_warning_disable
#define nTPB 256
#define gpuErrchk(ans) { gpuAssert((ans), __FILE__, __LINE__); }
inline void gpuAssert(cudaError_t code, const char *file, int line, bool abort=true)
{
   if (code != cudaSuccess) 
   {
      fprintf(stderr,"GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line);
      if (abort) exit(code);
   }
}

__global__ void mykernel(unsigned int* buffer)
{
    int idx = threadIdx.x + (blockDim.x * blockIdx.x);
    buffer[idx]++;
    //other things.
}

cuEntityIDBuffer::cuEntityIDBuffer()
{
    buffersize=1024;
    gpuErrchk(cudaMalloc(&cuBuffer, buffersize * sizeof(unsigned int)));
}

cuEntityIDBuffer::cuEntityIDBuffer(unsigned int* buffer)
{
    buffersize=1024;
    gpuErrchk(cudaMalloc(&cuBuffer, buffersize * sizeof(unsigned int)));
    gpuErrchk(cudaMemcpy(cuBuffer,buffer,buffersize*sizeof(unsigned int),cudaMemcpyHostToDevice));
}

void cuEntityIDBuffer::cuCallBackEntityIDBuffer(unsigned int* buffer)
{
    gpuErrchk(cudaMemcpy(buffer,cuBuffer,buffersize*sizeof(unsigned int),cudaMemcpyDeviceToHost));
}

cuEntityIDBuffer::~cuEntityIDBuffer()
{
    gpuErrchk(cudaFree((cuBuffer)));
}

void cuEntityIDBuffer::cuTest()
{
    mykernel<<<((buffersize+nTPB-1)/nTPB),nTPB>>>(cuBuffer);
    gpuErrchk(cudaPeekAtLastError());
}

maintest.cpp

#include "cuEntityIDBuffer.h"
#include <iostream>
#include <cstdlib>  // malloc

int main(int argc, char const *argv[])
{
    unsigned int *h_buf;
    h_buf = static_cast<unsigned int*>(malloc(1024 * sizeof(unsigned int))); // explicit cast needed in C++
    cuEntityIDBuffer d_buf(h_buf);
    d_buf.cuTest();
    d_buf.cuCallBackEntityIDBuffer(h_buf);
    return 0;
}

Am I using the CHECKED_CALL()-style macro the wrong way, or is there a problem with how my code is organized? Any suggestion is appreciated.

Upvotes: 4

Views: 6934

Answers (1)

Robin Thoni

Reputation: 1731

Your methods are declared __host__ __device__, which means they are compiled once for the CPU and once for the device. I don't see any major issue with the CPU version, but the device version has two problems:

  • cuEntityIDBuffer.cu(9): error: identifier "stderr" is undefined in device code is clear: you are trying to use the host-side variable stderr in device code.

  • warning: calling a __host__ function from a __host__ __device__ function is not allowed is the same kind of problem: without a __host__, __device__, or __global__ attribute, a function is implicitly __host__, so the device version of your methods is trying to call gpuAssert, which exists only on the host side. One way around this is sketched below.
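
A minimal sketch of one workaround, assuming the methods really must stay __host__ __device__: make gpuAssert itself __host__ __device__ in cuEntityIDBuffer.cu and guard the host-only calls with __CUDA_ARCH__, which is defined only during the device compilation pass.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

#define gpuErrchk(ans) { gpuAssert((ans), __FILE__, __LINE__); }

__host__ __device__
inline void gpuAssert(cudaError_t code, const char *file, int line, bool abort = true)
{
#ifndef __CUDA_ARCH__
    // Host compilation pass: report on stderr and optionally abort, as in the original macro.
    if (code != cudaSuccess)
    {
        fprintf(stderr, "GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line);
        if (abort) exit(code);
    }
#else
    // Device compilation pass: stderr and exit() are unavailable here,
    // so fall back to device-side printf with the raw error code.
    (void)abort;
    if (code != cudaSuccess)
    {
        printf("GPUassert: error %d at %s:%d\n", static_cast<int>(code), file, line);
    }
#endif
}

That said, if these methods only ever call host-side runtime functions such as cudaMalloc and cudaMemcpy, the simpler fix is to declare them __host__ only and drop the __device__ qualifier.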

For cuEntityIDBuffer.h:1:27: fatal error: thrust/reduce.h: No such file or directory, as @Talonmies pointed out, any Thrust code has to be built using nvcc.
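
Building on that, here is a minimal sketch (an assumption about your build setup, not your original code) of how the header could avoid pulling Thrust into maintest.cpp at all: the class declaration uses no Thrust types, so the Thrust includes can live in cuEntityIDBuffer.cu, which is the only file compiled by nvcc.

// cuEntityIDBuffer.h -- sketch: no Thrust or CUDA runtime headers needed here
#ifndef CU_ENTITY_ID_BUFFER_H
#define CU_ENTITY_ID_BUFFER_H

#include <cstddef>  // size_t

#ifdef __CUDACC__
#define CUDA_CALLABLE_MEMBER __host__ __device__
#else
#define CUDA_CALLABLE_MEMBER
#endif

class cuEntityIDBuffer
{
public:
    CUDA_CALLABLE_MEMBER cuEntityIDBuffer();
    CUDA_CALLABLE_MEMBER cuEntityIDBuffer(unsigned int* buffer);
    CUDA_CALLABLE_MEMBER void cuCallBackEntityIDBuffer(unsigned int* buffer);
    CUDA_CALLABLE_MEMBER ~cuEntityIDBuffer();
    CUDA_CALLABLE_MEMBER void cuTest();
private:
    size_t buffersize;
    unsigned int* cuBuffer;
};

#endif // CU_ENTITY_ID_BUFFER_H

With #include <thrust/reduce.h> and #include <thrust/execution_policy.h> moved into cuEntityIDBuffer.cu, the existing nvcc command still compiles the device code as before, while g++ -c maintest.cpp -std=c++11 no longer trips over thrust/reduce.h.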

Upvotes: 3
