Reputation: 397
learn = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.7)
learn.fit_one_cycle(1, 1e-2)
I have trained a fastai model as above, and I can get predictions as below:
preds, targets = learn.get_preds()
But instead I want the penultimate-layer embeddings from the model learn
(this practice is common for CNN models). Could you help me with how to do it?
Upvotes: 0
Views: 741
Reputation: 1105
I'm not sure if you want a classifier, but anyway:
learn.model
gives you back the model architecture. Then learn.model[0]
is the encoder, and learn.model[1]
is the other part of the model (the head).
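The same indexing works on any sequential-style PyTorch container, so here is a minimal sketch with plain PyTorch (the toy two-part module below is an assumption for illustration, not the actual AWD_LSTM architecture):

```python
import torch
import torch.nn as nn

# Toy model mimicking fastai's (encoder, head) split:
# indexing with [0] gives the encoder, [1] the classifier head.
model = nn.Sequential(
    nn.Sequential(                       # stand-in "encoder"
        nn.Embedding(100, 16),
        nn.LSTM(16, 32, batch_first=True),
    ),
    nn.Linear(32, 2),                    # stand-in "head"
)

encoder = model[0]  # analogous to learn.model[0]
head = model[1]     # analogous to learn.model[1]
```

Indexing into a module like this just returns the submodule object, which you can then inspect or attach hooks to.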
Example:
To access the first linear layer in SequentialEx (architecture shown below), you would use the following command:
learn.model[0].layers[0].ff.layers[0]
SequentialRNN(
  (0): TransformerXL(
    (encoder): Embedding(60004, 410)
    (pos_enc): PositionalEncoding()
    (drop_emb): Dropout(p=0.03)
    (layers): ModuleList(
      (0): DecoderLayer(
        (mhra): MultiHeadRelativeAttention(
          (attention): Linear(in_features=410, out_features=1230, bias=False)
          (out): Linear(in_features=410, out_features=410, bias=False)
          (drop_att): Dropout(p=0.03)
          (drop_res): Dropout(p=0.03)
          (ln): LayerNorm(torch.Size([410]), eps=1e-05, elementwise_affine=True)
          (r_attn): Linear(in_features=410, out_features=410, bias=False)
        )
        (ff): SequentialEx(
          (layers): ModuleList(
            (0): Linear(in_features=410, out_features=2100, bias=True)
            (1): ReLU(inplace)
            (2): Dropout(p=0.03)
            (3): Linear(in_features=2100, out_features=410, bias=True)
            (4): Dropout(p=0.03)
            (5): MergeLayer()
            (6): LayerNorm(torch.Size([410]), eps=1e-05, elementwise_affine=True)
          )
        )
      )
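To actually pull the penultimate-layer embeddings out (what the question asks for), the usual PyTorch approach is a forward hook on the final layer: the hook's input is the activation that feeds that layer. A hedged sketch with a small stand-in classifier (the real target would be the relevant submodule of learn.model, reached via the indexing shown above):

```python
import torch
import torch.nn as nn

# Stand-in classifier; not the AWD_LSTM architecture.
model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Linear(32, 2),  # final layer; its *input* is the penultimate embedding
)

captured = {}

def hook(module, inputs, output):
    # inputs is a tuple; inputs[0] is the tensor fed into the final layer
    captured["emb"] = inputs[0].detach()

handle = model[2].register_forward_hook(hook)
with torch.no_grad():
    model(torch.randn(4, 10))   # a normal forward pass triggers the hook
handle.remove()

print(captured["emb"].shape)    # torch.Size([4, 32])
```

With a fastai learner you would register the hook on the chosen submodule of learn.model and then run learn.get_preds() (or a manual forward pass) to populate the captured activations.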
Upvotes: 1