Reputation: 614
When running launch_benchmark.py from the Intel Model Zoo GitHub repository (https://github.com/IntelAI/models) with the arguments below:
python launch_benchmark.py \
  --data-location /home/user/coco/output/ \
  --in-graph /home/user/ssd_resnet34_fp32_bs1_pretrained_model.pb \
  --model-source-dir /home/user/tensorflow/models \
  --model-name ssd-resnet34 \
  --framework tensorflow \
  --precision fp32 \
  --mode inference \
  --socket-id 0 \
  --batch-size=1 \
  --docker-image gcr.io/deeplearning-platform-release/tf-cpu.1-14 \
  --accuracy-only
I get the following error:
Inference for accuracy check.
Traceback (most recent call last):
File "/tmp/benchmarks/scripts/tf_cnn_benchmarks/models/ssd_model.py", line 507, in postprocess
import coco_metric # pylint: disable=g-import-not-at-top
File "/tmp/benchmarks/scripts/tf_cnn_benchmarks/coco_metric.py", line 32, in <module>
from pycocotools.coco import COCO
File "/workspace/models/research/pycocotools/coco.py", line 55, in <module>
from . import mask as maskUtils
File "/workspace/models/research/pycocotools/mask.py", line 3, in <module>
import pycocotools._mask as _mask
ImportError: No module named 'pycocotools._mask'
The PYTHONPATH is "/home/user/Tensorflowmodels/models/research:/home/user/Tensorflowmodels/models/research/slim".
The COCO API in /home/user/cocoapi/PythonAPI was compiled with Python 3.6, and the resulting pycocotools directory was copied to /home/user/Tensorflowmodels/models/research.
/home/user/IntelModelsAI/benchmarks/launch_benchmark.py is also run with Python 3.6.
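One way to narrow down this kind of ImportError is to check, from the interpreter that actually runs the benchmark (inside the container), whether the compiled extension is resolvable and from where. This is only a diagnostic sketch; the helper name is my own, not part of the model zoo:

```python
# Diagnostic sketch: report whether a module (e.g. the compiled
# pycocotools._mask extension) is importable, and if not, show the
# search paths the interpreter is actually using.
import importlib.util
import sys


def find_extension(name):
    """Return the file backing a module, or None if it cannot be found."""
    try:
        spec = importlib.util.find_spec(name)
    except ModuleNotFoundError:
        # Raised when a parent package (here: pycocotools) is missing.
        return None
    return spec.origin if spec else None


if __name__ == "__main__":
    origin = find_extension("pycocotools._mask")
    if origin is None:
        print("pycocotools._mask not importable; sys.path is:")
        for p in sys.path:
            print(" ", p)
    else:
        print("pycocotools._mask found at", origin)
```

If the extension is missing or was built against a different Python version than the one in the container, the import fails exactly as in the traceback above.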
Upvotes: 1
Views: 103
Reputation: 614
There is a model workload container for SSD-ResNet34 FP32 inference.
This container has all the code, dependencies, and pretrained models needed to run the model. All you need to provide is the path to the preprocessed COCO dataset and an output directory where the log files will be written. The container also includes quick start scripts for common use cases. In your case, you can use the fp32_accuracy.sh script, which uses the same parameters mentioned above (batch size 1, socket 0, and accuracy only).
Here's an example of how the container can be used to run the accuracy test for SSD-ResNet34:
DATASET_DIR=/home/user/coco/output
OUTPUT_DIR=/home/user/logs
docker run \
  --env DATASET_DIR=${DATASET_DIR} \
  --env OUTPUT_DIR=${OUTPUT_DIR} \
  --volume ${DATASET_DIR}:${DATASET_DIR} \
  --volume ${OUTPUT_DIR}:${OUTPUT_DIR} \
  --privileged --init -t \
  intel/object-detection:tf-2.3.0-imz-2.2.0-ssd-resnet34-fp32-inference \
  /bin/bash quickstart/fp32_accuracy.sh
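One pre-flight check worth doing before the run (a sketch, using the same paths as above; adjust for your machine): verify that both host directories actually exist, because docker creates a missing bind-mount source path as an empty root-owned directory, which would make the accuracy run fail later with a less obvious error.

```shell
# Pre-flight sketch: confirm the bind-mount source directories exist
# before launching the container.
DATASET_DIR=/home/user/coco/output
OUTPUT_DIR=/home/user/logs
for d in "$DATASET_DIR" "$OUTPUT_DIR"; do
  [ -d "$d" ] || echo "missing directory: $d" >&2
done
```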
Upvotes: 0