Reputation: 4721
In Eigen, one can quite easily do tensor contractions using:
#include <unsupported/Eigen/CXX11/Tensor>

Eigen::Tensor<double, 1> tensor1(10);
Eigen::Tensor<double, 2> tensor2(5, 10);
// fill with data so that
// tensor1 has dimensions [10] and tensor2 has dimensions [5,10]
std::array<Eigen::IndexPair<int>, 1> product_dims1 = { Eigen::IndexPair<int>(1, 0) };
Eigen::Tensor<double, 1> tensor = tensor2.contract(tensor1, product_dims1);
// now tensor has dimensions [5]
I am looking for a method that does the opposite of contraction: it takes two tensors A and B, say of dimensions 5 x 10 and 3 x 2, and defines a new tensor C of dimensions 5 x 10 x 3 x 2 such that
C_ijkl = A_ij * B_kl
I could easily write such a method myself if necessary, but I suspect a native Eigen method would be better optimized. I also want to be able to use GPU support, which is quite easy with Eigen if you stick to the native methods.
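For reference, the hand-written version I have in mind is roughly the following (just a naive sketch, with a made-up outer_product helper and explicit loops):

#include <unsupported/Eigen/CXX11/Tensor>

// Naive outer product C_ijkl = A_ij * B_kl with explicit loops.
Eigen::Tensor<double, 4> outer_product(const Eigen::Tensor<double, 2>& A,
                                       const Eigen::Tensor<double, 2>& B) {
  Eigen::Tensor<double, 4> C(A.dimension(0), A.dimension(1),
                             B.dimension(0), B.dimension(1));
  for (Eigen::Index i = 0; i < A.dimension(0); ++i)
    for (Eigen::Index j = 0; j < A.dimension(1); ++j)
      for (Eigen::Index k = 0; k < B.dimension(0); ++k)
        for (Eigen::Index l = 0; l < B.dimension(1); ++l)
          C(i, j, k, l) = A(i, j) * B(k, l);
  return C;
}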
Thanks.
Upvotes: 5
Views: 2075
Reputation: 516
The solution is perhaps too simple: You have to contract over no indices.
#include <unsupported/Eigen/CXX11/Tensor>

// Contracting over an empty list of index pairs yields the outer product.
Eigen::array<Eigen::IndexPair<long>, 0> empty_index_list = {};
Eigen::Tensor<double, 2> A_ij(4, 4);
Eigen::Tensor<double, 2> B_kl(4, 4);
Eigen::Tensor<double, 4> C_ijkl = A_ij.contract(B_kl, empty_index_list);
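If you want to convince yourself that this really is the outer product, a small standalone check (my own sanity test, using setRandom() to fill the inputs) looks like this:

#include <cassert>
#include <cmath>
#include <unsupported/Eigen/CXX11/Tensor>

int main() {
  Eigen::Tensor<double, 2> A_ij(4, 4), B_kl(4, 4);
  A_ij.setRandom();
  B_kl.setRandom();

  // Contract over no indices: the result is the tensor (outer) product.
  Eigen::array<Eigen::IndexPair<long>, 0> empty_index_list = {};
  Eigen::Tensor<double, 4> C_ijkl = A_ij.contract(B_kl, empty_index_list);

  // The output dimensions are those of A_ij followed by those of B_kl,
  // and each entry is the plain product of the corresponding input entries.
  assert(C_ijkl.dimension(0) == 4 && C_ijkl.dimension(3) == 4);
  assert(std::abs(C_ijkl(1, 2, 3, 0) - A_ij(1, 2) * B_kl(3, 0)) < 1e-12);

  return 0;
}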
Upvotes: 3
Reputation: 1
You can achieve an outer product by reshaping the input tensors, padding their dimensions with additional size-one dimensions, and then broadcasting over the new dimensions.
For two rank-2 inputs and a rank-4 result with C_ijkl = A_ij * B_kl,
it would look like:
#include <Eigen/Core>
#include <unsupported/Eigen/CXX11/Tensor>

using namespace Eigen;

int main() {
  Tensor<double, 2> A_ij(4, 4);
  Tensor<double, 2> B_kl(4, 4);
  Tensor<double, 4> C_ijkl(4, 4, 4, 4);

  // Pad each input with size-one dimensions, then broadcast over them.
  Tensor<double, 4>::Dimensions A_pad(4, 4, 1, 1);
  array<int, 4> A_bcast = {1, 1, 4, 4};
  Tensor<double, 4>::Dimensions B_pad(1, 1, 4, 4);
  array<int, 4> B_bcast = {4, 4, 1, 1};

  C_ijkl = A_ij.reshape(A_pad).broadcast(A_bcast) *
           B_kl.reshape(B_pad).broadcast(B_bcast);
}
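Note that the reshape/broadcast product is a lazy expression: nothing is materialized until it is assigned to C_ijkl, at which point every entry is computed as A_ij(i, j) * B_kl(k, l), exactly as in the empty-contraction answer above.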
Upvotes: 0