Reputation: 2140
I'm experiencing something weird. When I run a TensorFlow program,
it prints out this information before running:
Colocations handled automatically by placer.
2019-07-10 10:36:53.985595: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-07-10 10:36:54.011139: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3192000000 Hz
2019-07-10 10:36:54.011914: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x562dbc64bb10 executing computations on platform Host. Devices:
2019-07-10 10:36:54.011928: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): <undefined>, <undefined>
2019-07-10 10:36:54.113358: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-07-10 10:36:54.114017: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x562dbf2935a0 executing computations on platform CUDA. Devices:
2019-07-10 10:36:54.114028: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): GeForce GTX 1080 Ti, Compute Capability 6.1
2019-07-10 10:36:54.114235: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:01:00.0
totalMemory: 10.91GiB freeMemory: 10.19GiB
2019-07-10 10:36:54.114245: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-07-10 10:36:54.115348: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-07-10 10:36:54.115355: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-07-10 10:36:54.115359: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-07-10 10:36:54.115505: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9911 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
WARNING:tensorflow:From /home/sgnbx/anaconda3/envs/py3t2/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
As you can see, it says:
totalMemory: 10.91GiB freeMemory: 10.19GiB
However, when I check how much memory I have using this command:
free -g
I see this output:
               total        used        free      shared  buff/cache   available
Mem:              31           5          24           0           1          25
Swap:              0           0           0
Why does TensorFlow not have access to the whole memory? I may have missed something; please let me know.
Upvotes: 1
Views: 121
Reputation: 16104
The TensorFlow log line at tensorflow/core/common_runtime/gpu/gpu_device.cc:1433
is printing out the info on your GPU (device 0; also note the source file name, gpu_device.cc):
2019-07-10 10:36:54.114235: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with
properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:01:00.0
totalMemory: 10.91GiB freeMemory: 10.19GiB
The GeForce GTX 1080 Ti has 11 GB of memory.
The free
command displays the amount of free and used memory in the system (RAM), not on your graphics card.
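To make the distinction concrete, here is a minimal sketch (assuming a Linux system) that reads total system RAM from /proc/meminfo, which is the same source free reads, and separately queries the card's memory through the NVIDIA driver via nvidia-smi (only present if the driver is installed; the try/except handles its absence):

```python
import subprocess

# System RAM: /proc/meminfo is what `free` reads under the hood on Linux.
with open("/proc/meminfo") as f:
    meminfo = dict(line.split(":", 1) for line in f)
total_ram_kib = int(meminfo["MemTotal"].strip().split()[0])
print(f"System RAM: {total_ram_kib / 1024**2:.2f} GiB")

# GPU memory lives on the card and is only visible through the driver,
# e.g. via nvidia-smi; `free` never sees it.
try:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.total", "--format=csv,noheader"],
        text=True,
    )
    print(f"GPU memory: {out.strip()}")
except (FileNotFoundError, subprocess.CalledProcessError):
    print("nvidia-smi not available; GPU memory not queried")
```

On your machine the first line should report roughly 31 GiB (matching free -g), while the GPU query would report about 11 GB, matching the TensorFlow log.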
Upvotes: 1