Jane

Reputation: 1

How to compile C++ with the CUB library?

I am using the CUB device function just like the example here (https://forums.developer.nvidia.com/t/cub-library/37675/2). I was able to compile the .cu source file in the above example using nvcc.

However, I wonder if it is possible to call CUB device functions from a .cpp source file and compile that .cpp file (using nvcc or g++)? I know it is possible for Thrust, since the example here works for me.

Currently I simply moved the main function into a separate .cpp file and included the CUB header there, but compiling that file fails with the same errors under both nvcc and g++. Part of the error message:

/home/xx/cub/cub/block/specializations/../../block/../util_type.cuh:261:5: error: ‘__host__’ does not name a type; did you mean ‘__loff_t’?
         __host__ __device__ __forceinline__ NullType& operator =(const T&) { return *this; }
         ^~~~~~~~
         __loff_t
/home/xx/cub/cub/block/specializations/../../block/../util_type.cuh:316:19: error: ‘short4’ was not declared in this scope
     __CUB_ALIGN_BYTES(short4, 8)
                       ^
/home/xx/cub/cub/block/specializations/../../block/../util_type.cuh:314:52: error: ISO C++ forbids declaration of ‘__align__’ with no type [-fpermissive]
         { enum { ALIGN_BYTES = b }; typedef __align__(b) t Type; };
                                                        ^
/home/xx/cub/cub/block/specializations/../../block/../util_type.cuh:545:9: error: ‘__host__’ does not name a type; did you mean ‘__loff_t’?
             __host__ __device__ __forceinline__ CubVector operator+(const CubVector &other) const {         \
             ^
/home/xx/cub/cub/block/specializations/../../block/../util_arch.cuh:64:38: error: ‘__host__’ does not name a type; did you mean ‘CUhostFn’?
         #define CUB_RUNTIME_FUNCTION __host__ __device__
                                      ^
/home/xx/cub/cub/device/../iterator/arg_index_input_iterator.cuh:144:25: error: ‘__forceinline__’ does not name a type; did you mean ‘__thrust_forceinline__’?
     __host__ __device__ __forceinline__ ArgIndexInputIterator(
                         ^~~~~~~~~~~~~~~
                         __thrust_forceinline__
/home/xx/cub/cub/device/device_reduce.cuh:148:12: error: ‘cudaError_t’ does not name a type; did you mean ‘cudaError_enum’?
     static cudaError_t Reduce(
            ^~~~~~~~~~~
            cudaError_enum

Here are my source files:

device.h

#pragma once

#include <cub/cub.cuh>

void scan_on_device();

device.cu

#include "device.h"

void scan_on_device()
{
  // Declare, allocate, and initialize device pointers for input and output
  int num_items = 7;
  int *d_in;
  int h_in[]  = {8, 6, 7, 5, 3, 0, 9};
  int sz = sizeof(h_in)/sizeof(h_in[0]);
  int *d_out; // e.g., [ , , , , , , ]
  cudaMalloc(&d_in,  sz*sizeof(h_in[0]));
  cudaMalloc(&d_out, sz*sizeof(h_in[0]));
  cudaMemcpy(d_in, h_in, sz*sizeof(h_in[0]), cudaMemcpyHostToDevice);
  printf("\nInput:\n");
  for (int i = 0; i < sz; i++) printf("%d ", h_in[i]);
  // Determine temporary device storage requirements
  void *d_temp_storage = NULL;
  size_t temp_storage_bytes = 0;
  cub::DeviceScan::InclusiveSum(d_temp_storage, temp_storage_bytes, d_in, d_out, num_items);
  // Allocate temporary storage
  cudaMalloc(&d_temp_storage, temp_storage_bytes);
  // Run inclusive prefix sum
  cub::DeviceScan::InclusiveSum(d_temp_storage, temp_storage_bytes, d_in, d_out, num_items);
  // d_out <-- [8, 14, 21, 26, 29, 29, 38]
  cudaMemcpy(h_in, d_out, sz*sizeof(h_in[0]), cudaMemcpyDeviceToHost);
  printf("\nOutput:\n");
  for (int i = 0; i < sz; i++) printf("%d ", h_in[i]);
  printf("\n");
}

host.cpp

#include "device.h"
#include <cub/cub.cuh>

int main(void)
{
    scan_on_device();

    return 0;
}

I tried to compile them in three steps:

nvcc -O2 -c device.cu -I/home/xx/cub
g++  -O2 -c host.cpp  -I/usr/local/cuda/include/ -I/home/xx/cub
g++ -o tester device.o host.o -L/usr/local/cuda/lib64 -lcudart

The first step went well, but the second step gives the above errors. Any ideas are appreciated. Maybe I messed up some include paths or links (to CUDA or CUB)?

Upvotes: 0

Views: 1701

Answers (1)

Robert Crovella

Reputation: 152279

The cub header library (e.g. cub.cuh) contains CUDA C++ code. Such code cannot be compiled by an ordinary host compiler like g++. You will get compile errors if you try to do so.

However, your project doesn't require cub.cuh to be included in your device.h header file, nor does it require cub.cuh to be compiled by g++. The only thing needed in device.h is the function prototype for scan_on_device().

Therefore, if you include the cub header file only in the implementation file device.cu and remove it everywhere else in your project (device.h and host.cpp), your code will compile, as sketched below.
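A minimal sketch of the rearranged files, based on the suggestion above (same file names as in the question; only the location of the #include <cub/cub.cuh> changes):

device.h

#pragma once

// Plain C++ header: only the prototype, no CUDA-specific code,
// so it is safe to include from g++-compiled sources.
void scan_on_device();

device.cu

#include "device.h"
#include <cub/cub.cuh>  // CUB is included only here, compiled by nvcc

void scan_on_device()
{
  // ... same body as in the question ...
}

host.cpp

#include "device.h"  // no cub.cuh here, so g++ can compile this file

int main(void)
{
    scan_on_device();

    return 0;
}

The compile commands from the question then work unchanged, except that host.cpp no longer needs the CUB (or CUDA) include paths:

nvcc -O2 -c device.cu -I/home/xx/cub
g++  -O2 -c host.cpp
g++  -o tester device.o host.o -L/usr/local/cuda/lib64 -lcudart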

Upvotes: 1
