Reputation: 1087
How can I find the input size of an onnx model? I would eventually like to script it from python.
With tensorflow I can recover the graph definition, find input candidate nodes from it and then obtain their size. Can I do something similar with ONNX (or even simpler)?
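For reference, this is roughly what I do with TensorFlow (a sketch assuming a frozen GraphDef and TF 1.x-style graph access; model.pb stands in for the real path):

import tensorflow as tf

# sketch: load a frozen graph and look for Placeholder ops,
# whose "shape" attribute holds the input size
graph_def = tf.compat.v1.GraphDef()
with open("model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
for node in graph_def.node:
    if node.op == "Placeholder":
        # node.attr["shape"].shape is a TensorShapeProto
        print(node.name, node.attr["shape"].shape)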
Upvotes: 15
Views: 29397
Reputation: 198
Please do NOT use input as a variable name, because it's a built-in function.
The first idea that comes to mind is to use the google.protobuf.json_format.MessageToDict() method when I need the name, data_type, or other properties of a protobuf object. For example:
import onnx
from google.protobuf.json_format import MessageToDict

model = onnx.load("path/to/model.onnx")
for _input in model.graph.input:
    print(MessageToDict(_input))
will give output like:
{'name': '0', 'type': {'tensorType': {'elemType': 2, 'shape': {'dim': [{'dimValue': '4'}, {'dimValue': '3'}, {'dimValue': '384'}, {'dimValue': '640'}]}}}}
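As a side note, the elemType integer is the value of the TensorProto.DataType protobuf enum, so you can decode it with the standard protobuf enum helpers (here 2 is UINT8):

import onnx

# decode the elemType integer from the example output above
print(onnx.TensorProto.DataType.Name(2))  # UINT8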
I'm not completely sure whether every model.graph.input is a RepeatedCompositeContainer object or not, but the for loop is necessary when it is one.
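If you want to verify that yourself, a quick introspection sketch:

import onnx

model = onnx.load("path/to/model.onnx")
# a repeated protobuf field is iterable whatever the concrete container class is
print(type(model.graph.input))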
Then you need to get the shape information from the dim field.
import onnx
from google.protobuf.json_format import MessageToDict

model = onnx.load("path/to/model.onnx")
for _input in model.graph.input:
    m_dict = MessageToDict(_input)
    dim_info = m_dict.get("type").get("tensorType").get("shape").get("dim")  # ugly, but we have to live with this when using a dict
    input_shape = [d.get("dimValue") for d in dim_info]  # ['4', '3', '384', '640']
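Note that MessageToDict serializes int64 fields as strings (that's the proto3 JSON mapping), so convert the dims if you need integers:

input_shape = [int(v) for v in input_shape]  # [4, 3, 384, 640]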
If you only need the dims, use the message object directly instead.
import onnx
from google.protobuf.json_format import MessageToDict

model = onnx.load("path/to/model.onnx")
for _input in model.graph.input:
    dim = _input.type.tensor_type.shape.dim
    input_shape = [MessageToDict(d).get("dimValue") for d in dim]  # ['4', '3', '384', '640']
    # if you prefer the python naming style, use the line below
    # input_shape = [MessageToDict(d, preserving_proto_field_name=True).get("dim_value") for d in dim]
One-line version:
import onnx

model = onnx.load("path/to/model.onnx")
input_shapes = [[d.dim_value for d in _input.type.tensor_type.shape.dim] for _input in model.graph.input]
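One caveat with this one-liner: dim_value defaults to 0 for symbolic or unknown dimensions, so a sketch that falls back to the symbolic name could look like this:

import onnx

model = onnx.load("path/to/model.onnx")
input_shapes = [
    # keep the int when present, otherwise fall back to the symbolic name
    [d.dim_value if d.HasField("dim_value") else d.dim_param
     for d in _input.type.tensor_type.shape.dim]
    for _input in model.graph.input
]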
Refs:
https://github.com/googleapis/python-vision/issues/70
Upvotes: 16
Reputation: 443
If you use onnxruntime instead of onnx for inference, try the code below:
import onnxruntime as ort
model = ort.InferenceSession("model.onnx", providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
input_shape = model.get_inputs()[0].shape
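The same session object also exposes the input names and element types, so a slightly fuller sketch (still the same get_inputs() API) would be:

import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=['CPUExecutionProvider'])
for inp in session.get_inputs():
    # shape entries may be ints or symbolic names such as 'batch_size'
    print(inp.name, inp.shape, inp.type)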
Upvotes: 16
Reputation: 211
Yes, provided the input model has the information. Note that inputs of an ONNX model may have an unknown rank or may have a known rank with dimensions that are fixed (like 100) or symbolic (like "N") or completely unknown. You can access this as below:
import onnx

model = onnx.load(r"model.onnx")

# The model is represented as a protobuf structure and it can be accessed
# using the standard python-for-protobuf methods

# iterate through inputs of the graph
for input in model.graph.input:
    print(input.name, end=": ")
    # get type of input tensor
    tensor_type = input.type.tensor_type
    # check if it has a shape:
    if tensor_type.HasField("shape"):
        # iterate through dimensions of the shape:
        for d in tensor_type.shape.dim:
            # the dimension may have a definite (integer) value or a symbolic identifier or neither:
            if d.HasField("dim_value"):
                print(d.dim_value, end=", ")  # known dimension
            elif d.HasField("dim_param"):
                print(d.dim_param, end=", ")  # unknown dimension with symbolic name
            else:
                print("?", end=", ")  # unknown dimension with no name
    else:
        print("unknown rank", end="")
    print()
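Since you want to script this from Python, here is a sketch that wraps the same logic into a function that returns the shapes instead of printing them (get_input_shapes is my own name, not an onnx API):

import onnx

def get_input_shapes(path):
    """Return {input_name: shape} where each dim is an int, a symbolic
    name, or None; the whole shape is None for an unknown rank."""
    model = onnx.load(path)
    shapes = {}
    for graph_input in model.graph.input:
        tensor_type = graph_input.type.tensor_type
        if not tensor_type.HasField("shape"):
            shapes[graph_input.name] = None  # unknown rank
            continue
        dims = []
        for d in tensor_type.shape.dim:
            if d.HasField("dim_value"):
                dims.append(d.dim_value)      # known dimension
            elif d.HasField("dim_param"):
                dims.append(d.dim_param)      # symbolic dimension
            else:
                dims.append(None)             # unknown dimension with no name
        shapes[graph_input.name] = dims
    return shapes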
Upvotes: 18