Reputation: 732
I want to use the GPU in the Magick++ functions of my project. I've built ImageMagick using this tutorial (OpenCL enabled). The convert.exe file produced by the build reports that OpenCL is enabled. I've included the necessary headers and libs (CORE_RL_Magick++_.lib, CORE_RL_MagickCore_.lib, CORE_RL_MagickWand_.lib). I've also set an IMAGEMAGICK_OPENCL_CACHE_DIR system variable with a path (to store the necessary cache file). I'm using Windows and Visual Studio.
This is my code:
...
#include <Magick++.h>
#include <MagickCore/MagickCore.h>
#include <MagickWand/MagickWand.h>
#include <MagickCore/accelerate-private.h>
...
using namespace Magick;
int main(int argc, char *argv[])
{
    EnableOpenCL();
    InitializeMagick(NULL);
    EnableOpenCL(); // Called again after InitializeMagick() to be sure; maybe it needs to run after InitializeMagick().
    Image img;
    img.read("F:/tmp/test/22/7.png");
    // These two functions should use the GPU, but they don't:
    img.gaussianBlur(15, 3);
    img.edge();
}
The code runs without errors, and as you can see it calls functions that ImageMagick can accelerate on the GPU. But it doesn't use the GPU; it only uses the CPU. Also, ImageMagick generates no files in the path set by IMAGEMAGICK_OPENCL_CACHE_DIR.
What part of it is wrong?
Edit:
C:\Users\User1>convert -version
Version: ImageMagick 7.0.8-4 Q16 x64 2018-06-29 http://www.imagemagick.org
Copyright: Copyright (C) 1999-2018 ImageMagick Studio LLC
License: http://www.imagemagick.org/script/license.php
Visual C++: 190024210
Features: Cipher DPC HDRI Modules OpenCL OpenMP
Delegates (built-in): bzlib cairo flif freetype gslib heic jng jp2 jpeg lcms lqr lzma openexr pangocairo png ps raw rsvg tiff webp xml zlib
Also, my GPU supports OpenCL.
Upvotes: 0
Views: 930
Reputation: 24419
Updated Answer
If setting MAGICK_OCL_DEVICE has no effect, and/or EnableOpenCL returns false, then the OpenCL kernels cannot be loaded on the targeted platform/device. In that case ImageMagick emits a DelegateWarning (not an error) and falls back to the CPU.
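For example, a minimal sketch of that check (in ImageMagick 7 the Magick++ EnableOpenCL() wrapper returns a bool, so the silent CPU fallback can at least be detected):
#include <Magick++.h>
#include <iostream>

int main(int argc, char *argv[])
{
    Magick::InitializeMagick(NULL);

    // A false return means no OpenCL device could be initialized;
    // ImageMagick will silently fall back to its CPU code paths.
    if (!Magick::EnableOpenCL())
        std::cerr << "OpenCL unavailable, running on the CPU" << std::endl;

    return 0;
}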
I suspect that this is the case, and that the reasons why are logged to a file named magick_badcl_build.log, while the actual OpenCL code that failed is written to magick_badcl.cl. Both files should be posted to the developer forums for review.
Also, the environment variable should be MAGICK_OPENCL_CACHE_DIR, not IMAGEMAGICK_OPENCL_CACHE_DIR. I think the documentation is wrong.
From opencl.c
home=GetEnvironmentValue("MAGICK_OPENCL_CACHE_DIR");
if (home == (char *) NULL)
  {
    home=GetEnvironmentValue("XDG_CACHE_HOME");
    if (home == (char *) NULL)
      home=GetEnvironmentValue("LOCALAPPDATA");
    if (home == (char *) NULL)
      home=GetEnvironmentValue("APPDATA");
    if (home == (char *) NULL)
      home=GetEnvironmentValue("USERPROFILE");
  }
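As a sketch, one way to guarantee the variable is visible before ImageMagick initializes is to set it in-process (assuming MSVC's _putenv_s, since you're on Windows; the cache path here is just an example):
#include <Magick++.h>
#include <cstdlib>

int main(int argc, char *argv[])
{
    // Use the name ImageMagick actually reads (see opencl.c above),
    // not IMAGEMAGICK_OPENCL_CACHE_DIR. The path is an example.
    _putenv_s("MAGICK_OPENCL_CACHE_DIR", "F:\\tmp\\magick_ocl_cache");

    Magick::InitializeMagick(NULL);
    // ... rest of the program
    return 0;
}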
Original Answer
Use the method InitImageMagickOpenCL to control OpenCL device management, not EnableOpenCL (which auto-selects the best device, and the best device is usually the CPU).
// Requires the OpenCL host headers (e.g. <CL/cl.h>) in addition to Magick++.
cl_uint platformCount = 0;
cl_platform_id platforms[4];
cl_uint deviceCount = 0;
cl_device_id devices[8];
cl_device_id *user_selected_GPU = nullptr;

clGetPlatformIDs(4, platforms, &platformCount);

// Grab the first GPU off the first platform.
// !!! Check docs as this is a _very_ bad example. !!!
if (platformCount > 0) {
    clGetDeviceIDs(platforms[0], CL_DEVICE_TYPE_GPU, 8, devices, &deviceCount);
    if (deviceCount > 0) {
        // InitImageMagickOpenCL expects a pointer to the cl_device_id.
        user_selected_GPU = &devices[0];
    }
}

if (user_selected_GPU) {
    MagickCore::ExceptionInfo *exception = MagickCore::AcquireExceptionInfo();
    MagickCore::InitImageMagickOpenCL(
        MagickCore::MAGICK_OPENCL_DEVICE_SELECT_USER,
        user_selected_GPU,
        nullptr,
        exception
    );
} else {
    // No GPU found...
}
Usually you won't need to bother defining which device to run on, as ImageMagick will pull system & environment information. On my Mac, for example, the CPU will always be selected out of the pool of devices. If I'd rather use the GPU directly, I can specify that at runtime with an environment variable:
MAGICK_OCL_DEVICE=GPU ./my_opencl_enabled_magick_application
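On Windows (your platform), the equivalent from a command prompt would be:
set MAGICK_OCL_DEVICE=GPU
my_opencl_enabled_magick_application.exe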
Upvotes: 1