Reputation: 706
I am developing an algorithm that has a CPU path using NumPy and a GPU path using PyTorch. The object will almost always be 4D. Two versions of the object are as follows.
import numpy as np
import torch

B = [
[[[0.5000, 0.5625],
[0.5000, 0.5625]],
[[1.2500, 0.5000],
[0.5625, 0.6875]],
[[0.5625, 0.6250],
[0.5000, 0.5625]]]
]
B_array = np.array(B)
B_tensor = torch.Tensor(B)
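For reference, both objects have shape (1, 3, 2, 2); the three 2x2 matrices are indexed along axis 1:
B_array.shape  # (1, 3, 2, 2)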
I want to take the max of each 2D matrix, such that I get a result of:
max_array_fn(B_array) # returns array([0.5625, 1.2500, 0.6250])
max_tensor_fn(B_tensor) # returns tensor([0.5625, 1.2500, 0.6250])
Part of the solution was discussed here, but this is only for NumPy on CPU:
Max of each 2D matrix in 4D NumPy array
However, on the GPU, it seems that PyTorch does not use the same convention as NumPy.
If the object is defined as a NumPy array, we can solve the problem using np.max(B_array, axis=(0, 2, 3)). Passing a tuple of axes to torch.max in the same way is not supported, as suggested here:
PyTorch torch.max over multiple dimensions
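For reference, a minimal sketch of the NumPy multi-axis reduction, using the B defined above (the printed values match the desired output):

import numpy as np

B_array = np.array(B)                 # shape (1, 3, 2, 2)
result = B_array.max(axis=(0, 2, 3))  # reduce every axis except axis 1
print(result)                         # [0.5625 1.25   0.625 ]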
Is there an alternative vectorized approach? Why can this be vectorized on the CPU with NumPy but not on the GPU with PyTorch?
The correct solution should not use any loops and should ideally be a single function call.
Upvotes: 0
Views: 2627
Reputation: 114986
Not the most elegant way:
# each .max(dim=...) returns a (values, indices) tuple, hence the [0]
B_tensor.max(dim=3)[0].max(dim=2)[0].max(dim=0)[0]  # tensor([0.5625, 1.2500, 0.6250])
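As a side note, if your PyTorch version is 1.7 or newer, torch.amax accepts a tuple of dims, which reduces this to a single call (a sketch, assuming the B_tensor from the question):

# torch.amax reduces over several dims at once and returns only the
# values (no indices), so no [0] indexing is needed
B_tensor.amax(dim=(0, 2, 3))  # tensor([0.5625, 1.2500, 0.6250])

This works the same on CPU and GPU tensors.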
Upvotes: 2