Reputation: 67
I'm currently reading an image with Deeplearning4j, and passing it to ONNX Runtime is giving me problems.
INDArray ndArray = loader.asMatrix(mat3).permute(0,2,3,1);
OnnxTensor tensor = OnnxTensor.createTensor(env, ndArray.data().asNioFloat(), ndArray.shape());
The problem is that they are not the same. The INDArray looks like this:
[[[[ 103.0000 41.0000 155.0000]
[ 103.0000 42.0000 154.0000]
[ 102.0000 44.0000 153.0000]
[ 101.0000 45.0000 152.0000]
[ 101.0000 45.0000 152.0000]
.....
And the ONNX tensor is like this:
[103.0, 103.0, 102.0]
[101.0, 101.0, 101.0]
[101.0, 101.0, 101.0]
[101.0, 101.0, 101.0]
[101.0, 101.0, 101.0]
I don't think the overall shape is the problem (that's just how it's printed), but the columns of the INDArray show up as rows in the ONNX tensor.
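One likely explanation is that ND4J's `permute` returns a view: it changes strides without moving any data, so `ndArray.data()` still exposes the underlying buffer in the original NCHW order even though the shape reads NHWC. A plain-Java sketch (no ND4J, hypothetical helper name `toNhwc`) of the difference between reusing the original buffer and physically relaying it out, roughly what `INDArray.dup('c')` after a permute would do:

```java
public class PermuteDemo {
    // Physically copy a flat NCHW buffer into NHWC order.
    // Without a copy like this, the raw buffer keeps its old layout,
    // so a consumer reading it as NHWC sees channels as rows.
    static float[] toNhwc(float[] nchw, int c, int h, int w) {
        float[] out = new float[nchw.length];
        int i = 0;
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                for (int ch = 0; ch < c; ch++)
                    out[i++] = nchw[ch * h * w + y * w + x];
        return out;
    }

    public static void main(String[] args) {
        // R, G, B planes of a tiny 2x2 image, stored NCHW
        float[] nchw = {103, 103, 102, 101,   // R plane
                        41,  42,  44,  45,    // G plane
                        155, 154, 153, 152};  // B plane
        float[] nhwc = toNhwc(nchw, 3, 2, 2);
        // First pixel is now interleaved as (R, G, B)
        System.out.println(nhwc[0] + " " + nhwc[1] + " " + nhwc[2]);
        // -> 103.0 41.0 155.0
    }
}
```

If this is indeed the cause, calling something like `dup('c')` on the permuted array before grabbing its data buffer should make the buffer match the printed NHWC values.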
Upvotes: 0
Views: 210
Reputation: 3205
You can check our nd4j-onnxruntime module for that. We don't use the standard Java bindings directly; we use JavaCPP's onnxruntime, but the concept should be similar:
// Hand onnxruntime a pointer to the array's raw buffer, plus its byte size
Pointer inputTensorValues = ndArray.data().pointer();
long sizeInBytes = ndArray.length() * ndArray.data().getElementSize();
/**
* static Value CreateTensor(const OrtMemoryInfo* info, void* p_data, size_t p_data_byte_count, const int64_t* shape, size_t shape_len,
* ONNXTensorElementDataType type)
*/
LongPointer dims = new LongPointer(ndArray.shape());
Value ret = Value.CreateTensor(
memoryInfo.asOrtMemoryInfo(),
inputTensorValues,
sizeInBytes,
dims,
ndArray.rank(),
onnxTypeForDataType(ndArray.dataType()));
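Since `CreateTensor` receives only a raw pointer, a byte count, and a dims array, the buffer must already be laid out C-contiguously in the order the dims describe; the byte count must equal the product of the dims times the element size. A small plain-Java sketch of that bookkeeping (the 1x224x224x3 shape is just an illustrative assumption):

```java
public class TensorSize {
    // The byte count passed to CreateTensor must equal
    // product(shape) * elementSize, and the dims array must
    // describe the buffer's actual memory layout.
    static long sizeInBytes(long[] shape, long elementSize) {
        long n = 1;
        for (long d : shape) n *= d;
        return n * elementSize;
    }

    public static void main(String[] args) {
        long[] dims = {1, 224, 224, 3};            // NHWC, hypothetical image size
        System.out.println(sizeInBytes(dims, 4));  // 4 bytes per float32
    }
}
```

If the buffer is still in NCHW order while the dims say NHWC, onnxruntime will happily accept it and silently misread the data, which matches the symptom in the question.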
Note that using `asNioFloat()` and similar NIO-based conversions creates unnecessary copying overhead. Please consider using our interop module if possible; as you can see above, with our own interop the data should just work as-is.
I don't have much experience with the original onnxruntime bindings, but I would check 2 points:
Upvotes: 1