Reputation: 134
I have 440 images, each of size 924 × 640 with three channels. I load them via:

import os
from glob import iglob
import matplotlib.pyplot as plt

image_data = []
for filename in iglob(os.path.join(store, '*.jpg')):
    image_data.append(plt.imread(filename))
Then I make a NumPy ndarray from this list:

image_np_orig = np.array(image_data)

This array has shape (440,) and each of its elements has shape (924, 640, 3). I want to run a t-SNE transformation on this array of images, so I want to reshape it so that its shape looks like (440, 1):

image_np = image_np_orig.reshape(image_np_orig.shape[0], -1)
I expect to get an array image_np of shape (440, 1) where each element along the first dimension (axis=0) is an array of shape (924, 640, 3). However, I get an array image_np of shape (440, 1) where each element along the first dimension is an array of shape (1,), and within those arrays each element is of shape (924, 640, 3).
I've tried

image_np = image_np_orig[:, np.newaxis]

with the same result.
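As a minimal sketch of the behaviour described above, using small stand-in dimensions (3 "images" of 4 × 5 × 3 instead of 440 of 924 × 640 × 3), a dtype=object array of images indexed with np.newaxis only wraps each element in another length-1 object array:

import numpy as np

# Build a dtype=object array of same-shaped images, as in the question.
imgs = np.empty(3, dtype=object)
for i in range(3):
    imgs[i] = np.zeros((4, 5, 3), dtype=np.uint8)

col = imgs[:, np.newaxis]
print(col.shape)        # (3, 1)
print(col[0].shape)     # (1,)  -- a length-1 object array, not the image
print(col[0][0].shape)  # (4, 5, 3)

This mirrors the (440, 1) / (1,) / (924, 640, 3) nesting seen in the question: reshaping an object array rearranges the object references, it never looks inside the elements.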
I've also tried

image_np = np.stack(image_np_orig)

which leads to image_np with shape (440, 924, 640, 3), but then the t-SNE transform fails:

from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, init='pca')
X_tsne = tsne.fit_transform(image_np)

This raises ValueError: Found array with dim 4. Estimator expected <= 2.
It may be relevant that image_np_orig has dtype object while image_np_orig[0] has dtype uint8. If that matters, how can I reshape arrays of different dtypes?
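A minimal sketch of the situation, again with small stand-in dimensions (3 images of 4 × 5 × 3): the dtype=object array cannot be reshaped into the image dimensions, but np.stack builds a proper 4D uint8 array, which can then be flattened to the 2D shape scikit-learn estimators expect:

import numpy as np

# dtype=object array of same-shaped images, as in the question
images = np.empty(3, dtype=object)
for i in range(3):
    images[i] = np.zeros((4, 5, 3), dtype=np.uint8)

print(images.shape)     # (3,)
print(images[0].shape)  # (4, 5, 3)

# reshape cannot "look inside" the object elements, so (3,) can only
# become (3, 1); np.stack copies the elements into one 4D array instead.
stacked = np.stack(images)
print(stacked.shape)    # (3, 4, 5, 3)

# flatten each image into one feature row: 4 * 5 * 3 = 60 features
flat = stacked.reshape(len(stacked), -1)
print(flat.shape)       # (3, 60)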
Upvotes: 3
Views: 3352
Reputation: 3722
From what I understand, you have an array of shape (440, 1, 924, 640, 3), but you actually need (440, 924, 640, 3). Try:

image_np = image_np_orig.squeeze()

This will squeeze out the unwanted dimension.
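A minimal sketch with small stand-in dimensions (5 images of 9 × 6 × 3 instead of 440 of 924 × 640 × 3) showing what squeeze does to a proper numeric array with an extra length-1 axis:

import numpy as np

# hypothetical 5D array with a spurious length-1 axis at position 1
a = np.zeros((5, 1, 9, 6, 3), dtype=np.uint8)

b = a.squeeze(axis=1)  # drop only that length-1 axis
print(b.shape)         # (5, 9, 6, 3)

Passing axis=1 explicitly is slightly safer than a bare squeeze(), which would also drop any other length-1 dimension the array happens to have.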
Upvotes: 2
Reputation: 150735
I'm not sure why the first approach doesn't work for you, but since image_np = np.stack(image_np_orig) returns the 4D data, you can go on from there:

image_np = np.stack(image_np_orig).reshape(len(image_np_orig), -1)
Upvotes: 0