Reputation: 127
I need to perform certain operations on an Eigen tensor, but I could not find any examples or documentation.
I have two tensors:
Eigen::Tensor<float,3> feature_buffer(K,45,7);
feature_buffer.setZero();
VectorXi number_buffer(K);
I need to perform the operation below on the tensor:
feature_buffer[:, :, -3:] = feature_buffer[:, :, :3] - \
    feature_buffer[:, :, :3].sum(axis=1, keepdims=True) / number_buffer.reshape(K, 1, 1)
The above code is numpy. I have done everything else, but I am stuck at this final step.
Can someone please help me with this? I have been stuck on it the whole day.
Thanks in advance.
Upvotes: 1
Views: 576
Reputation: 516
I believe the numpy operation only works because numpy broadcasts implicitly in two places where the dimensions don't literally match up: the sum with keepdims=True has shape (K, 1, 3), and numpy silently replicates the (K, 1, 1) divisor and the (K, 1, 3) quotient wherever needed. I'm not super familiar with numpy ndarray operations, so take that with a grain of salt, but Eigen's tensor module does no such implicit broadcasting, so each replication has to be spelled out in C++.
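To make that concrete before diving in, here is a minimal sketch of what explicit broadcasting looks like with Eigen tensors (the names a, b, c are mine, purely for illustration):
#include <unsupported/Eigen/CXX11/Tensor>
#include <array>
int main(){
// numpy would divide a (2,3) array by a (2,1) array implicitly;
// Eigen tensors require an explicit broadcast() that replicates the data.
Eigen::Tensor<float,2> a(2,3);
Eigen::Tensor<float,2> b(2,1);
a.setConstant(6.f);
b.setConstant(2.f);
std::array<long,2> bcast = {1,3}; // replicate b 3 times along dimension 1 -> shape (2,3)
Eigen::Tensor<float,2> c = a / b.broadcast(bcast); // every entry of c becomes 3
}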
That said, I get the gist of what you are trying to accomplish, so I wrote down the equivalent C++ code below step by step. The main difference from numpy is that every broadcast has to be made explicit; if the result is not exactly the same operation, I hope just reading through the syntax clears things up.
#include <unsupported/Eigen/CXX11/Tensor>
int main(){
long d0 = 10; // This is "K"
long d1 = 10;
long d2 = 10;
Eigen::Tensor<float,3> feature_buffer(d0,d1,d2);
Eigen::Tensor<float,1> number_buffer(d0);
feature_buffer.setRandom();
number_buffer.setRandom();
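// Note: the question declares number_buffer as VectorXi; I use a float tensor here so the
// division below type-checks directly. An Eigen::Tensor<int,1> would first need a
// .cast<float>() before dividing.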
// Step 1) Define numpy "feature_buffer[:,:,-3:]" in C++
std::array<long,3> offsetA = {0, 0, d2-3};
std::array<long,3> extentA = {d0,d1,3};
auto feature_sliceA = feature_buffer.slice(offsetA,extentA);
// Note: feature_sliceA is a "slice" object: it does not own the data in feature_buffer,
// it merely points to a rectangular subregion inside of feature_buffer.
// If you'd rather make a copy of that data, replace "auto" with "Eigen::Tensor<float,3>".
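// For example, an owning copy would look like (the name is just for illustration):
// Eigen::Tensor<float,3> feature_copyA = feature_buffer.slice(offsetA,extentA);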
// Step 2) Define numpy "feature_buffer[:, :, :3]" in C++
std::array<long,3> offsetB = {0, 0, 0};
std::array<long,3> extentB = {d0,d1,3};
auto feature_sliceB = feature_buffer.slice(offsetB,extentB);
// Step 3) Perform the numpy operation "feature_buffer[:, :, :3].sum(axis=1, keepdims=True)"
std::array<long,1> sumDims = {1};
std::array<long,3> newDims = {d0,1,3}; // This takes care of "keepdims=True": d1 is summed over, then kept as size 1.
Eigen::Tensor<float,3> feature_sum = feature_sliceB.sum(sumDims).reshape(newDims);
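// feature_sum now has dimensions (d0, 1, 3); assigning the expression to a concrete
// Eigen::Tensor forces it to be evaluated here.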
// Step 4) The numpy division "feature_buffer[:, :, :3].sum(axis=1, keepdims=True)/number_buffer.reshape(K, 1, 1)"
// relies on implicit broadcasting: numpy replicates the (K, 1, 1) divisor along dimension 2
// to match the (K, 1, 3) sum. Eigen tensors do not broadcast implicitly, so we replicate
// number_buffer ourselves with reshape() followed by broadcast():
std::array<long,3> numBcast = {1,1,3};
std::array<long,3> numDims = {d0,1,1};
Eigen::Tensor<float,3> number_bcast = number_buffer.reshape(numDims).broadcast(numBcast);
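// number_bcast now has dimensions (d0, 1, 3), matching feature_sum element for element.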
// Step 5) Perform the division operation
Eigen::Tensor<float,3> feature_div = feature_sum/number_bcast;
// Step 6) Perform the numpy subtraction
// "feature_buffer[:, :, :3] - feature_buffer[:, :, :3].sum(axis=1, keepdims=True)/number_buffer.reshape(K, 1, 1)"
// which in our current program corresponds to
// "feature_sliceB - feature_div"
// Here numpy broadcasts implicitly once more, since:
// feature_sliceB has dimensions (d0, d1, 3) = (10, 10, 3)
// feature_div has dimensions (d0, 1, 3) = (10, 1, 3)
//
// In Eigen we use broadcast() again to replicate the contents of feature_div d1 times along dimension 1:
std::array<long,3> divBcast = {1,d1,1};
Eigen::Tensor<float,3> feature_div_bcast = feature_div.broadcast(divBcast);
// Step 7) Perform the main assignment operation
feature_sliceA = feature_sliceB - feature_div_bcast;
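// Note: this assignment reads from the first 3 "columns" of dimension 2 and writes to the
// last 3; since d2 >= 6 here, the two slices do not overlap and there is no aliasing issue.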
}
You can see the same code working on godbolt.
I did not consider performance here at all; I'm sure you can find ways of writing this more neatly.
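For instance, since Eigen tensor expressions compose lazily, the whole computation can be collapsed into a single assignment. This is just a sketch reusing the offset/extent/reshape/broadcast arrays defined above, and it assumes d2 >= 6 so the source and destination slices don't overlap:
feature_buffer.slice(offsetA,extentA) =
    feature_buffer.slice(offsetB,extentB)
    - (feature_buffer.slice(offsetB,extentB).sum(sumDims).reshape(newDims)
       / number_buffer.reshape(numDims).broadcast(numBcast))
      .broadcast(divBcast);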
Upvotes: 1