afsara_ben

Reputation: 672

In transformers' _greedy_search(), generate the next token's hidden state from the previous decoder hidden state

In transformers' utils.py, the _greedy_search() function generates sequences of token ids for models with a language-modeling head using greedy decoding. The input_ids (integer token ids) are passed in as model_inputs to generate the next token. I want to generate the next hidden state directly from the previous decoder hidden state and skip generating tokens altogether. Here is the forward pass that produces the next token:

    outputs = self(
        **model_inputs,
        return_dict=True,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
    )
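For context, here is a minimal sketch of the loop that surrounds this forward call, with gpt2 as an assumed stand-in model; the real _greedy_search adds stopping criteria, logits processors, and KV caching, but the data flow is the same:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(5):
            outputs = model(input_ids, return_dict=True, output_hidden_states=True)
            # The lm_head has already projected the last hidden state to
            # vocabulary logits; greedy decoding takes the argmax over them.
            next_tokens = torch.argmax(outputs.logits[:, -1, :], dim=-1)
            # The integer id is fed back through the embedding layer on the
            # next step, so the loop cannot run on hidden states alone.
            input_ids = torch.cat([input_ids, next_tokens[:, None]], dim=-1)

    print(tokenizer.decode(input_ids[0]))

As far as I can tell, the per-step cost of token generation in this loop is the lm_head projection plus the argmax; the embedding lookup of the returned id is what makes the integer id itself unavoidable here.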

I want to pass only output_hidden_states and skip generating input_ids. Is this possible, or will tokens always be generated in this function? My goal is to see whether skipping the integer token generation (and producing only the hidden states) reduces computation. I only need the decoder hidden states for my downstream task.
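For reference, this is roughly how I collect the decoder hidden states today, assuming t5-small as a stand-in encoder-decoder model (a sketch: the full token-generation loop still runs, it just records the hidden states along the way):

    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("t5-small")
    model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

    inputs = tokenizer("translate English to German: Hello world", return_tensors="pt")
    out = model.generate(
        **inputs,
        max_new_tokens=10,
        do_sample=False,              # greedy decoding
        output_hidden_states=True,
        return_dict_in_generate=True,
    )
    # out.decoder_hidden_states has one entry per generated token; each entry
    # is a tuple of per-layer tensors of shape (batch, 1, hidden_size).
    last_layer_states = [step[-1] for step in out.decoder_hidden_states]
    print(len(last_layer_states), last_layer_states[0].shape)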

Upvotes: 0

Views: 65

Answers (0)
