blue-sky

Reputation: 53786

Get matrix dimensions from pytorch layers

Here is an autoencoder I created from the PyTorch tutorials:

epochs = 1000
from pylab import plt
plt.style.use('seaborn')
import torch.utils.data as data_utils
import torch
import torchvision
import torch.nn as nn
from torch.autograd import Variable

cuda = torch.cuda.is_available()
FloatTensor = torch.cuda.FloatTensor if cuda else torch.FloatTensor
import numpy as np
import pandas as pd
import datetime as dt


features = torch.tensor(np.array([ [1,2,3],[1,2,3],[100,200,500] ]))

print(features)

batch = 10
data_loader = torch.utils.data.DataLoader(features, batch_size=2, shuffle=False)

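# nn.Linear(3, batch) maps the 3 input features to a 10-dimensional code ('batch' is really the code size here),
# and nn.Linear(batch, 3) maps the code back to the 3 original features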
encoder = nn.Sequential(nn.Linear(3,batch), nn.Sigmoid())
decoder = nn.Sequential(nn.Linear(batch,3), nn.Sigmoid())
autoencoder = nn.Sequential(encoder, decoder)

optimizer = torch.optim.Adam(params=autoencoder.parameters(), lr=0.001)

encoded_images = []
for i in range(epochs):
    for j, images in enumerate(data_loader):
    #     images = images.view(images.size(0), -1) 
        images = Variable(images).type(FloatTensor)
        optimizer.zero_grad()
        reconstructions = autoencoder(images)
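        # torch.dist returns the p-norm of the difference (p=2 by default),
        # i.e. the Euclidean distance between the inputs and their reconstructions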
        loss = torch.dist(images, reconstructions)
        loss.backward()
        optimizer.step()

#     encoded_images.append(encoder(images))

# print(decoder(torch.tensor(np.array([1,2,3])).type(FloatTensor)))

encoded_images = []
for j, images in enumerate(data_loader):
    images = images.view(images.size(0), -1) 
    images = Variable(images).type(FloatTensor)

    encoded_images.append(encoder(images))
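    # each appended tensor has 10 columns: the 10-dimensional code produced by the encoder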

I can see that the encoded images do have the newly created dimension of 10. In order to understand the matrix operations going on under the hood, I am attempting to print the matrix dimensions of encoder and decoder, but shape is not available on nn.Sequential.

How can I print the matrix dimensions of an nn.Sequential?

Upvotes: 0

Views: 492

Answers (1)

Shai

Reputation: 114786

An nn.Sequential is not a "layer", but rather a "container": it can store several layers and manages their execution (along with some other functionality).
In your case, each nn.Sequential holds both a linear layer and the non-linear nn.Sigmoid activation. To get the shape of the weights of the first layer in an nn.Sequential, you can simply do:

encoder[0].weight.shape
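
More generally, here is a minimal sketch (not part of the original answer, and assuming the encoder and decoder from the question are in scope) that loops over both containers and prints the weight and bias shapes of every layer that actually has learnable weights; the nn.Sigmoid layers have none and are skipped:

# a sketch: walk each nn.Sequential and report the shapes of its learnable tensors
for name, module in [('encoder', encoder), ('decoder', decoder)]:
    for idx, layer in enumerate(module):
        if hasattr(layer, 'weight'):
            print(name, idx, layer.__class__.__name__,
                  'weight:', tuple(layer.weight.shape),
                  'bias:', tuple(layer.bias.shape))

With nn.Linear(3, 10) in the encoder and nn.Linear(10, 3) in the decoder, this prints weight shapes of (10, 3) and (3, 10) respectively, since nn.Linear stores its weight as (out_features, in_features).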

Upvotes: 2
