TseHsien

Reputation: 73

Cannot load .pb file input model: SavedModel format load failure: '_UserObject' object has no attribute 'add_slot'

I converted my .h5 file to a .pb file using load_model and model.save, as follows:

from tensorflow.keras.models import load_model

model = load_model("model_190-1.00.h5")
model.summary()
model.save("saved_model.pb")
# Reload the exported model to check that it has the same structure
converted = load_model(r"C:\Users\Hsien\Desktop\NCS2\OCT")
converted.summary()

Then I used model.summary() to make sure my original .h5 model and the converted model share the same structure.

But I got an error message when I tried to use mo_tf.py to convert the .pb file to IR format:

hcl-lab@hcllab-SYS-5049A-TR:/opt/intel/openvino_2021.4.689/deployment_tools/model_optimizer$ sudo python mo_tf.py --saved_model_dir /home/hcl-lab/NCS2/OCT --input_shape [1,256,256,3] --data_type FP16
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      None
        - Path for generated IR:        /opt/intel/openvino_2021.4.689/deployment_tools/model_optimizer/.
        - IR output name:       saved_model
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         [1,256,256,3]
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP16
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       None
        - Reverse input channels:       False
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  None
        - Use the config file:  None
        - Inference Engine found in:    /opt/intel/openvino_2021.4.689/python/python3.6/openvino
Inference Engine version:       2021.4.1-3926-14e67d86634-releases/2021/4
Model Optimizer version:        2021.4.1-3926-14e67d86634-releases/2021/4
2021-11-24 02:43:53.158753: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
/usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/impl/api.py:22: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
2021-11-24 02:43:54.273204: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-11-24 02:43:54.273881: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2021-11-24 02:43:54.306060: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties:
pciBusID: 0000:1b:00.0 name: RTX A5000 computeCapability: 8.6
coreClock: 1.695GHz coreCount: 64 deviceMemorySize: 23.69GiB deviceMemoryBandwidth: 715.34GiB/s
2021-11-24 02:43:54.306553: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 1 with properties:
pciBusID: 0000:1c:00.0 name: RTX A4000 computeCapability: 8.6
coreClock: 1.56GHz coreCount: 48 deviceMemorySize: 15.74GiB deviceMemoryBandwidth: 417.29GiB/s
2021-11-24 02:43:54.306574: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-11-24 02:43:54.308779: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2021-11-24 02:43:54.308821: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2021-11-24 02:43:54.309547: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2021-11-24 02:43:54.309755: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2021-11-24 02:43:54.311122: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2021-11-24 02:43:54.311697: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2021-11-24 02:43:54.311815: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2021-11-24 02:43:54.314219: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0, 1
2021-11-24 02:43:54.314443: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-11-24 02:43:54.315010: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-11-24 02:43:54.438124: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties:
pciBusID: 0000:1b:00.0 name: RTX A5000 computeCapability: 8.6
coreClock: 1.695GHz coreCount: 64 deviceMemorySize: 23.69GiB deviceMemoryBandwidth: 715.34GiB/s
2021-11-24 02:43:54.438626: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 1 with properties:
pciBusID: 0000:1c:00.0 name: RTX A4000 computeCapability: 8.6
coreClock: 1.56GHz coreCount: 48 deviceMemorySize: 15.74GiB deviceMemoryBandwidth: 417.29GiB/s
2021-11-24 02:43:54.438656: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-11-24 02:43:54.438675: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2021-11-24 02:43:54.438683: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2021-11-24 02:43:54.438690: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2021-11-24 02:43:54.438698: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2021-11-24 02:43:54.438706: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2021-11-24 02:43:54.438714: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2021-11-24 02:43:54.438722: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2021-11-24 02:43:54.440675: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0, 1
2021-11-24 02:43:54.440721: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-11-24 02:43:55.014738: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-11-24 02:43:55.014773: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267]      0 1
2021-11-24 02:43:55.014779: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1280] 0:   N N
2021-11-24 02:43:55.014782: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1280] 1:   N N
2021-11-24 02:43:55.017101: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1406] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 22272 MB memory) -> physical GPU (device: 0, name: RTX A5000, pci bus id: 0000:1b:00.0, compute capability: 8.6)
2021-11-24 02:43:55.018332: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1406] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 14704 MB memory) -> physical GPU (device: 1, name: RTX A4000, pci bus id: 0000:1c:00.0, compute capability: 8.6)
[ FRAMEWORK ERROR ]  Cannot load input model: SavedModel format load failure: '_UserObject' object has no attribute 'add_slot'

I tried the same procedure on both Windows 10 and Ubuntu 18.04, with openvino_2021.4.689 and openvino_2021.4.752.

(I have put the .pb-related files in the proper folder.)

Is it possible that I converted the files (.h5 to .pb) the wrong way?

Upvotes: 0

Views: 1665

Answers (2)

TseHsien

Reputation: 73

Thanks to the Intel devs for their help.

I finally found the root cause.

I should use tf.saved_model.save rather than model.save when converting the .h5 file to a .pb (SavedModel) file.

The former creates only a saved_model.pb file, while the latter creates both keras_metadata.pb and saved_model.pb, which leads to the SavedModel format load failure in mo_tf.py.
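
For reference, a minimal sketch of the export step (the model filename and path are the ones from my question; adapt them to your setup):

import tensorflow as tf
from tensorflow.keras.models import load_model

# Load the trained Keras model from the .h5 checkpoint
model = load_model("model_190-1.00.h5")

# tf.saved_model.save writes only saved_model.pb (plus the variables/ and
# assets/ folders) into the target directory, without keras_metadata.pb
tf.saved_model.save(model, r"C:\Users\Hsien\Desktop\NCS2\OCT")

Pointing mo_tf.py at that directory with --saved_model_dir should then no longer hit the load failure.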

Upvotes: 1

Rommel_Intel

Reputation: 1413

You need to place the saved_model.pb file inside the saved_model folder, because the --saved_model_dir argument must point to the SavedModel directory itself.

For instance, since your current location is C:\Users\Hsien\Desktop\NCS2\OCT, move the model to C:\Users\Hsien\Desktop\NCS2\saved_model.
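
As a quick sanity check (just a sketch; the folder name below is the example path from this answer), the directory you pass to --saved_model_dir should itself contain saved_model.pb and the variables subfolder:

import os

# Hypothetical check: list the SavedModel directory that --saved_model_dir points to
saved_model_dir = r"C:\Users\Hsien\Desktop\NCS2\saved_model"
print(os.listdir(saved_model_dir))
# Expected output looks like: ['saved_model.pb', 'variables', 'assets']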

Upvotes: 0
