Vladimir

Reputation: 1

Bazel build error with CUDA on Windows 10, how to resolve it?

I'm trying to build TensorFlow from source with CUDA support (I need compute capability 8.6 for a GeForce RTX 3090) on Windows 10 64-bit via Bazel, following these instructions: https://www.tensorflow.org/install/source_windows

I use this build command:

    bazel build --config=opt --config=cuda --define=no_tensorflow_py_deps=true //tensorflow/tools/pip_package:build_pip_package
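For reference, the Python and CUDA settings that this build picks up come from the `.tf_configure.bazelrc` written by `configure.py`. Reconstructed from the log below (the exact quoting in the generated file may differ), mine looks roughly like this:

    D:\Python\tensorflow>type .tf_configure.bazelrc
    build --action_env PYTHON_BIN_PATH="C:/Program Files/Python37/python.exe"
    build --action_env PYTHON_LIB_PATH="C:/Program Files/Python37/lib/site-packages"
    build --python_path="C:/Program Files/Python37/python.exe"
    build --config=xla
    build --action_env CUDA_TOOLKIT_PATH="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.1"
    build --action_env TF_CUDA_COMPUTE_CAPABILITIES=5.2,8.6
    build --config=cuda
    build --action_env TF_CONFIGURE_IOS=0
    build:opt --copt=/arch:SSE4.2
    build:opt --define with_default_optimizations=true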

I get an error like the one below.

How can I resolve this?

Thanks.

    D:\Python\tensorflow>bazel build --config=opt --config=cuda --define=no_tensorflow_py_deps=true //tensorflow/tools/pip_package:build_pip_package
    Extracting Bazel installation...
    WARNING: Ignoring JAVA_HOME, because it must point to a JDK, not a JRE.
    Starting local Bazel server and connecting to it...
    WARNING: The following configs were expanded more than once: [cuda, using_cuda]. For repeatable flags, repeats are counted twice and may lead to unexpected behavior.
    INFO: Options provided by the client:
      Inherited 'common' options: --isatty=1 --terminal_columns=237
    INFO: Reading rc options for 'build' from d:\python\tensorflow\.bazelrc:
      Inherited 'common' options: --experimental_repo_remote_exec
    INFO: Options provided by the client:
      'build' options: --python_path=C:/Program Files/Python37/python.exe
    INFO: Reading rc options for 'build' from d:\python\tensorflow\.bazelrc:
      'build' options: --apple_platform_type=macos --define framework_shared_object=true --define open_source_build=true --java_toolchain=//third_party/toolchains/java:tf_java_toolchain --host_java_toolchain=//third_party/toolchains/java:tf_java_toolchain --define=tensorflow_enable_mlir_generated_gpu_kernels=0 --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --noincompatible_prohibit_aapt1 --enable_platform_specific_config --config=short_logs --config=v2
    INFO: Reading rc options for 'build' from d:\python\tensorflow\.tf_configure.bazelrc:
      'build' options: --action_env PYTHON_BIN_PATH=C:/Program Files/Python37/python.exe --action_env PYTHON_LIB_PATH=C:/Program Files/Python37/lib/site-packages --python_path=C:/Program Files/Python37/python.exe --config=xla --action_env CUDA_TOOLKIT_PATH=C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.1 --action_env TF_CUDA_COMPUTE_CAPABILITIES=5.2,8.6 --config=cuda --action_env TF_CONFIGURE_IOS=0
    INFO: Found applicable config definition build:short_logs in file d:\python\tensorflow\.bazelrc: --output_filter=DONT_MATCH_ANYTHING
    INFO: Found applicable config definition build:v2 in file d:\python\tensorflow\.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
    INFO: Found applicable config definition build:xla in file d:\python\tensorflow\.bazelrc: --define=with_xla_support=true
    INFO: Found applicable config definition build:cuda in file d:\python\tensorflow\.bazelrc: --config=using_cuda --define=using_cuda_nvcc=true
    INFO: Found applicable config definition build:using_cuda in file d:\python\tensorflow\.bazelrc: --define=using_cuda=true --action_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --define=tensorflow_enable_mlir_generated_gpu_kernels=1
    INFO: Found applicable config definition build:opt in file d:\python\tensorflow\.tf_configure.bazelrc: --copt=/arch:SSE4.2 --define with_default_optimizations=true
    INFO: Found applicable config definition build:cuda in file d:\python\tensorflow\.bazelrc: --config=using_cuda --define=using_cuda_nvcc=true
    INFO: Found applicable config definition build:using_cuda in file d:\python\tensorflow\.bazelrc: --define=using_cuda=true --action_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --define=tensorflow_enable_mlir_generated_gpu_kernels=1
    INFO: Found applicable config definition build:windows in file d:\python\tensorflow\.bazelrc: --copt=/W0 --copt=/D_USE_MATH_DEFINES --host_copt=/D_USE_MATH_DEFINES --cxxopt=/std:c++14 --host_cxxopt=/std:c++14 --config=monolithic --copt=-DWIN32_LEAN_AND_MEAN --host_copt=-DWIN32_LEAN_AND_MEAN --copt=-DNOGDI --host_copt=-DNOGDI --copt=/experimental:preprocessor --host_copt=/experimental:preprocessor --linkopt=/DEBUG --host_linkopt=/DEBUG --linkopt=/OPT:REF --host_linkopt=/OPT:REF --linkopt=/OPT:ICF --host_linkopt=/OPT:ICF --experimental_strict_action_env=true --verbose_failures --distinct_host_configuration=false
    INFO: Found applicable config definition build:monolithic in file d:\python\tensorflow\.bazelrc: --define framework_shared_object=false
    DEBUG: Rule 'io_bazel_rules_go' indicated that a canonical reproducible form can be obtained by modifying arguments shallow_since = "1557349968 -0400"
    DEBUG: Repository io_bazel_rules_go instantiated at:
      no stack (--record_rule_instantiation_callstack not enabled)
    Repository rule git_repository defined at:
      C:/users/vladimir/_bazel_vladimir/sd7zocps/external/bazel_tools/tools/build_defs/repo/git.bzl:195:33: in <toplevel>
    INFO: Repository local_config_cuda instantiated at:
      no stack (--record_rule_instantiation_callstack not enabled)
    Repository rule cuda_configure defined at:
      D:/python/tensorflow/third_party/gpus/cuda_configure.bzl:1418:33: in <toplevel>

------------------------------------------------------------------------
    ERROR: An error occurred during the fetch of repository 'local_config_cuda':
       Traceback (most recent call last):
            File "D:/python/tensorflow/third_party/gpus/cuda_configure.bzl", line 1388
                    _create_local_cuda_repository(<1 more arguments>)
            File "D:/python/tensorflow/third_party/gpus/cuda_configure.bzl", line 1064, in _create_local_cuda_repository
                    _find_libs(repository_ctx, <2 more arguments>)
            File "D:/python/tensorflow/third_party/gpus/cuda_configure.bzl", line 599, in _find_libs
                    _check_cuda_libs(repository_ctx, <2 more arguments>)
            File "D:/python/tensorflow/third_party/gpus/cuda_configure.bzl", line 501, in _check_cuda_libs
                    execute(repository_ctx, <1 more arguments>)
            File "D:/python/tensorflow/third_party/remote_config/common.bzl", line 217, in execute
                    fail(<1 more arguments>)
    Repository command failed
    "C:/Program" is not an internal or external command, executable program, or batch file.

---------------------------------------------------------------------------

    ERROR: Skipping '//tensorflow/tools/pip_package:build_pip_package': no such package '@local_config_cuda//cuda': Traceback (most recent call last):
            File "D:/python/tensorflow/third_party/gpus/cuda_configure.bzl", line 1388
                    _create_local_cuda_repository(<1 more arguments>)
            File "D:/python/tensorflow/third_party/gpus/cuda_configure.bzl", line 1064, in _create_local_cuda_repository
                    _find_libs(repository_ctx, <2 more arguments>)
            File "D:/python/tensorflow/third_party/gpus/cuda_configure.bzl", line 599, in _find_libs
                    _check_cuda_libs(repository_ctx, <2 more arguments>)
            File "D:/python/tensorflow/third_party/gpus/cuda_configure.bzl", line 501, in _check_cuda_libs
                    execute(repository_ctx, <1 more arguments>)
            File "D:/python/tensorflow/third_party/remote_config/common.bzl", line 217, in execute
                    fail(<1 more arguments>)
    Repository command failed
    "C:/Program" is not an internal or external command, executable program, or batch file.
    WARNING: Target pattern parsing failed.
    ERROR: no such package '@local_config_cuda//cuda': Traceback (most recent call last):
            File "D:/python/tensorflow/third_party/gpus/cuda_configure.bzl", line 1388
                    _create_local_cuda_repository(<1 more arguments>)
            File "D:/python/tensorflow/third_party/gpus/cuda_configure.bzl", line 1064, in _create_local_cuda_repository
                    _find_libs(repository_ctx, <2 more arguments>)
            File "D:/python/tensorflow/third_party/gpus/cuda_configure.bzl", line 599, in _find_libs
                    _check_cuda_libs(repository_ctx, <2 more arguments>)
            File "D:/python/tensorflow/third_party/gpus/cuda_configure.bzl", line 501, in _check_cuda_libs
                    execute(repository_ctx, <1 more arguments>)
            File "D:/python/tensorflow/third_party/remote_config/common.bzl", line 217, in execute
                    fail(<1 more arguments>)
    Repository command failed
    "C:/Program" is not an internal or external command, executable program, or batch file.
    INFO: Elapsed time: 47.934s
    INFO: 0 processes.
    FAILED: Build did NOT complete successfully (0 packages loaded)
        currently loading: tensorflow/tools/pip_package

Upvotes: 0

Views: 2883

Answers (1)

Vladimir

Reputation: 1

Summary: I resolved this problem in two steps:

1. Move Python from C:\Program Files\Python... to C:\Python... and then update the corresponding entry in the PATH variable (see the sketch below).
2. Check for and remove any other bash installation except msys64; the path C:\msys64\usr\bin must be present in your PATH variable.
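A rough command-prompt sketch of both steps; C:\Python37 is an assumed target location for the moved Python installation, so adjust it to your setup:

    REM 1) After moving/reinstalling Python to a path without spaces (C:\Python37 assumed),
    REM    update the Python entries in the PATH variable
    REM    (System Properties > Environment Variables), then verify the result:
    where python
    echo %PATH%

    REM 2) Make sure no other bash/sh installation shadows MSYS2 and that
    REM    C:\msys64\usr\bin is present in PATH:
    where bash
    where sh

    REM Re-run the TensorFlow configuration so .tf_configure.bazelrc records the new,
    REM space-free Python path, then retry the build
    REM (bazel clean --expunge forces the local_config_cuda repository rule to be re-evaluated):
    cd /d D:\Python\tensorflow
    python configure.py
    bazel clean --expunge
    bazel build --config=opt --config=cuda --define=no_tensorflow_py_deps=true //tensorflow/tools/pip_package:build_pip_package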

Upvotes: 0
