redolent

Reputation: 4259

Is there a way to specify __device__ for an entire file? (Nvidia Cuda Compiler)

I am importing a library and I get this error when compiling:

go.cu(61): error: calling a __host__ function("TinyJS::Interpreter::Interpreter()") from a __global__ function("capnduk_kernel") is not allowed

...is there a way to port an entire file (TinyJS) to run on the device?

I've checked the compiler documentation, and it doesn't look like there's a way to do this. I'm guessing the only way is to rewrite the file by hand, which is a can of worms.

Upvotes: 0

Views: 334

Answers (2)

einpoklum

Reputation: 132096

While NVCC does not support this (as Robert points out), it is possible for run-time compilation via the NVRTC library:

The NVRTC documentation lists the following compilation option:

--device-as-default-execution-space (-default-device)

Treat entities with no execution space annotation as __device__ entities.

Notes:

  • Since NVRTC already supports this, I would consider submitting a feature request to NVIDIA asking them to add the same option to NVCC.
  • clang++ supports compiling CUDA; perhaps it has such a flag as well.
  • NVRTC is also supported by the Modern-C++ wrappers library for CUDA, which is more convenient to use than the raw NVRTC API. (Caveat: that's my own library.)
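To illustrate, here is a minimal sketch of passing that option through the NVRTC C API. This assumes the CUDA toolkit is installed (link with -lnvrtc); the source string and kernel names are made up for the example, and error handling is reduced to a single check:

```cpp
#include <nvrtc.h>
#include <cstdio>
#include <vector>

int main() {
    // add() has no execution-space annotation; with the option below,
    // NVRTC treats it as a __device__ function, so calling it from a
    // __global__ kernel compiles without error.
    const char* src =
        "int add(int a, int b) { return a + b; }\n"
        "__global__ void kernel(int* out) { *out = add(1, 2); }\n";

    nvrtcProgram prog;
    nvrtcCreateProgram(&prog, src, "example.cu", 0, nullptr, nullptr);

    // The option the answer refers to: unannotated entities become __device__.
    const char* opts[] = { "--device-as-default-execution-space" };
    nvrtcResult res = nvrtcCompileProgram(prog, 1, opts);
    if (res != NVRTC_SUCCESS) {
        size_t logSize;
        nvrtcGetProgramLogSize(prog, &logSize);
        std::vector<char> log(logSize);
        nvrtcGetProgramLog(prog, log.data());
        std::fprintf(stderr, "%s\n", log.data());
        return 1;
    }

    // Retrieve the generated PTX, ready to be loaded with the CUDA driver API.
    size_t ptxSize;
    nvrtcGetPTXSize(prog, &ptxSize);
    std::vector<char> ptx(ptxSize);
    nvrtcGetPTX(prog, ptx.data());
    std::printf("compiled %zu bytes of PTX\n", ptxSize);

    nvrtcDestroyProgram(&prog);
    return 0;
}
```

Without the option, the same source reproduces the question's error (calling a __host__ function from a __global__ function).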

Upvotes: 0

Robert Crovella

Reputation: 152143

There isn't a way to do this with nvcc. It will require manual effort.

Upvotes: 2
