LogicOnAbstractions

Reputation: 149

Reshaping tensors in a 3D numpy matrix

I'm essentially trying to accomplish this and then this, but with a 3D volume, say of shape (128,128,60,6). The 4th dimension is a vector that represents the diffusion tensor at each voxel, e.g.:

d[30,30,30,:] = [dxx, dxy, dxz, dyy, dyz, dzz] = D_array

where dxx etc. are the diffusion coefficients along particular directions. D_array can also be seen as the upper triangle of a symmetric matrix (since dxy == dyx etc.), so I can use those 2 other answers to get from D_array to D_square, e.g.:

D_square = [[dxx, dxy, dxz], [dyx, dyy, dyz],[dzx, dzy, dzz]]

I can't seem to figure out the next step, however: how to apply that single-tensor transformation from D_array to D_square across the whole 3D volume.

Here's the code snippet that works on a single tensor:

# this solves a linear eq. that provides us with diffusion arrays at each voxel in 3D space
D = np.einsum('ijkt,tl->ijkl',X,bi_plus)

# our issue at this point: we have a vector that represents a triangular matrix.
# first make a triangular matrix from the vector, testing on a single tensor
D_tri = np.zeros((3, 3))
D_array = D[30, 30, 30]
D_tri[np.triu_indices(3)] = D_array
# then get the full square (symmetric) matrix
D_square = D_tri.T + D_tri
np.fill_diagonal(D_square, np.diag(D_tri))

So what would be the numpy way of applying that single-tensor transformation of the diffusion vector to the whole 3D volume at once?
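For reference, here's the straightforward per-voxel loop version I'd like to avoid (the function name is just illustrative, not from the snippets above):

```python
import numpy as np

def squareform_loop(d, n=3):
    # Apply the single-tensor recipe above at every voxel (slow reference version).
    # d has shape (..., n*(n+1)//2); the result has shape (..., n, n).
    out = np.zeros(d.shape[:-1] + (n, n), dtype=d.dtype)
    iu = np.triu_indices(n)
    for idx in np.ndindex(d.shape[:-1]):
        tri = np.zeros((n, n), dtype=d.dtype)
        tri[iu] = d[idx]
        sq = tri.T + tri
        np.fill_diagonal(sq, np.diag(tri))
        out[idx] = sq
    return out
```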

Upvotes: 1

Views: 658

Answers (2)

Divakar

Reputation: 221744

Approach #1

Here's one using the row, col indices from triu_indices to index along the last two axes of an initialized output array -

def squareformnd_rowcol_integer(ar, n=3):
    out_shp = ar.shape[:-1] + (n,n)
    out = np.empty(out_shp, dtype=ar.dtype)

    row,col = np.triu_indices(n)

    # Get a "rolled-axis" view with which the last two axes come to the front
    # so that we could index into them just like for a 2D case
    out_rolledaxes_view = out.transpose(np.roll(range(out.ndim),2,0))    

    # Assign permuted version of input array into rolled output version
    arT = np.moveaxis(ar,-1,0)
    out_rolledaxes_view[row,col] = arT
    out_rolledaxes_view[col,row] = arT
    return out
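To see the rolled-axes trick in isolation, here's a quick run on a tiny (2, 2, 6) stand-in volume (this check is mine, not part of the answer):

```python
import numpy as np

ar = np.arange(2 * 2 * 6, dtype=float).reshape(2, 2, 6)  # tiny stand-in volume
n = 3
out = np.empty(ar.shape[:-1] + (n, n), dtype=ar.dtype)
row, col = np.triu_indices(n)

# Bring the trailing (n, n) axes to the front so out_view[row, col]
# addresses matrix entries across all voxels at once
out_view = out.transpose(np.roll(range(out.ndim), 2, 0))
arT = np.moveaxis(ar, -1, 0)  # shape (6, 2, 2)
out_view[row, col] = arT
out_view[col, row] = arT

# Each voxel now holds a symmetric 3x3 matrix built from its 6-vector
```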

Approach #2

Another one with the last two axes merged into one and then indexing with linear indices -

def squareformnd_linear_integer(ar, n=3):
    out_shp = ar.shape[:-1] + (n,n)
    out = np.empty(out_shp, dtype=ar.dtype)

    row,col = np.triu_indices(n)
    idx0 = row*n+col
    idx1 = col*n+row

    ar2D = ar.reshape(-1,ar.shape[-1])
    out.reshape(-1,n**2)[:,idx0] = ar2D
    out.reshape(-1,n**2)[:,idx1] = ar2D
    return out
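To make the linear indices concrete (a small illustration, not part of the answer), here is where the 6 upper-triangle entries land in a flattened row-major (3, 3) matrix, and their mirrored lower-triangle slots:

```python
import numpy as np

n = 3
row, col = np.triu_indices(n)
idx0 = row * n + col  # flat positions of the upper triangle
idx1 = col * n + row  # mirrored lower-triangle positions

# e.g. d_xy (second vector element) lands at flat slot 1 -> (0,1)
# and at flat slot 3 -> (1,0); diagonal entries map to themselves
```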

Approach #3

Finally, a new method using masking, which should be better performance-wise, as most masking-based approaches are when it comes to indexing -

def squareformnd_masking(ar, n=3):
    out = np.empty((n,n)+ar.shape[:-1] , dtype=ar.dtype)

    r = np.arange(n)
    m = r[:,None]<=r

    arT = np.moveaxis(ar,-1,0)
    out[m] = arT
    out.swapaxes(0,1)[m] = arT
    new_axes = list(range(out.ndim))[2:] + [0, 1]  # list() so this works on Python 3
    return out.transpose(new_axes)
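A quick sanity run of the masking approach on a tiny (2, 6) input (my own check; the transpose axes are built as a list since Python 3 range objects can't be concatenated):

```python
import numpy as np

ar = np.arange(2 * 6, dtype=float).reshape(2, 6)  # two voxels' vectors
n = 3
out = np.empty((n, n) + ar.shape[:-1], dtype=ar.dtype)

r = np.arange(n)
m = r[:, None] <= r  # upper-triangle boolean mask, diagonal included

arT = np.moveaxis(ar, -1, 0)   # shape (6, 2)
out[m] = arT                   # fill the upper triangle for all voxels
out.swapaxes(0, 1)[m] = arT    # mirror into the lower triangle
res = out.transpose(list(range(2, out.ndim)) + [0, 1])  # back to (2, 3, 3)
```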

Timings on (128,128,60,6) shaped random array -

In [635]: ar = np.random.rand(128,128,60,6)

In [636]: %timeit squareformnd_linear_integer(ar, n=3)
     ...: %timeit squareformnd_rowcol_integer(ar, n=3)
     ...: %timeit squareformnd_masking(ar, n=3)
10 loops, best of 3: 103 ms per loop
10 loops, best of 3: 103 ms per loop
10 loops, best of 3: 53.6 ms per loop

Upvotes: 2

A vectorized way to do it:

# Gets the triangle matrix (note: np.zeros takes the shape as a tuple)
d_tensor = np.zeros((128, 128, 60, 3, 3))
triu_idx = np.triu_indices(3)
d_tensor[:, :, :, triu_idx[0], triu_idx[1]] = d
# Make it symmetric
diagonal = np.zeros((128, 128, 60, 3, 3))
idx = np.arange(3)
diagonal[:, :, :, idx, idx] = d_tensor[:, :, :, idx, idx]
d_tensor = np.transpose(d_tensor, (0, 1, 2, 4, 3)) + d_tensor - diagonal
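The same recipe on a tiny shape, as a sanity check that each voxel ends up with a symmetric matrix (shapes shrunk by me for illustration):

```python
import numpy as np

d = np.arange(2 * 2 * 1 * 6, dtype=float).reshape(2, 2, 1, 6)  # tiny stand-in for d
d_tensor = np.zeros(d.shape[:-1] + (3, 3))
triu_idx = np.triu_indices(3)
d_tensor[..., triu_idx[0], triu_idx[1]] = d

# Mirror the upper triangle; subtract the diagonal once so it isn't doubled
diagonal = np.zeros_like(d_tensor)
idx = np.arange(3)
diagonal[..., idx, idx] = d_tensor[..., idx, idx]
d_tensor = np.transpose(d_tensor, (0, 1, 2, 4, 3)) + d_tensor - diagonal
```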

Upvotes: 1
