Reputation: 21
In a Windows environment, I am building onnxruntime with `--use_dml`. My build command is:

.\build.bat --update --build --build_shared_lib --build_wheel --config RelWithDebInfo --use_dml --cmake_generator "Visual Studio 16 2019" --parallel --skip_tests
In the end the build is successful and the whl file is generated, so I install it with

pip3 install *.whl

and can then use Python for inference. But a warning appears. My code is:

session_1 = onnxruntime.InferenceSession(model_path, options)

The warning is:

2021-03-10 19:03:59.3618026 [W:onnxruntime:, inference_session.cc:411 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.

And the inference time is very long. Can anyone tell me why?
Upvotes: 1
Views: 1408
Reputation: 165
According to the docs (https://onnxruntime.ai/docs/execution-providers/DirectML-ExecutionProvider.html#configuration-options), you should set the following two options on your session options before creating the session (here `rt` is the imported `onnxruntime` module):

options.enable_mem_pattern = False
options.execution_mode = rt.ExecutionMode.ORT_SEQUENTIAL
Upvotes: 0