Chris M

Reputation: 33

Why is encoder.json not found when running the GPT-2 small model?

Good evening,

Caveat: I'm not a Python or machine learning expert.

I'm trying to run the small GPT-2 model; after all the hype I wanted to check it out. So far I've downloaded all the prerequisites (Python, regex, tensorflow, etc.), but when it comes to running the script to generate a sample from the model, I'm thrown the following error:

File "C:*****\F******y\Desktop\Python\gpt-2\src\encoder.py", line 109, in get_encoder
    with open(os.path.join(models_dir, model_name, 'encoder.json'), 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'models\124M\encoder.json'

When I call the script, I switch into the directory that holds the file and run generate_unconditional_samples.py --top_k 40 from the command line.
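For reference, since the error shows a relative path, here is a small check (just an illustration, using the repo's default models folder name) of where that path resolves from the directory the script is run in:

import os

# Illustration only: the traceback shows a relative path, so open() resolves it
# against the current working directory of the process running the script.
models_dir = 'models'      # the layout the GPT-2 repo expects by default
model_name = '124M'

target = os.path.join(models_dir, model_name, 'encoder.json')
print('working directory:', os.getcwd())
print('looking for:      ', os.path.abspath(target))
print('exists:           ', os.path.exists(target))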

The script itself looks like this:

#!/usr/bin/env python3

import fire
import json
import os
import numpy as np
import tensorflow as tf

import model, sample, encoder

def sample_model(
    model_name='124M',
    seed=None,
    nsamples=0,
    batch_size=1,
    length=None,
    temperature=1,
    top_k=0,
    top_p=1,
    models_dir='U**r\F****y\Desktop\Python\gpt-2\models',
):
    """
    Run the sample_model
    :model_name=124M : String, which model to use
    :seed=None : Integer seed for random number generators, fix seed to
     reproduce results
    :nsamples=0 : Number of samples to return, if 0, continues to
     generate samples indefinitely.
    :batch_size=1 : Number of batches (only affects speed/memory).
    :length=None : Number of tokens in generated text, if None (default), is
     determined by model hyperparameters
    :temperature=1 : Float value controlling randomness in boltzmann
     distribution. Lower temperature results in less random completions. As the
     temperature approaches zero, the model will become deterministic and
     repetitive. Higher temperature results in more random completions.
    :top_k=0 : Integer value controlling diversity. 1 means only 1 word is
     considered for each step (token), resulting in deterministic completions,
     while 40 means 40 words are considered at each step. 0 (default) is a
     special setting meaning no restrictions. 40 generally is a good value.
     :models_dir : path to parent folder containing model subfolders
     (i.e. contains the <model_name> folder)
    """
    models_dir = os.path.expanduser(os.path.expandvars(models_dir))
    enc = encoder.get_encoder(model_name, models_dir)
    hparams = model.default_hparams()
    with open(os.path.join(models_dir, model_name, 'hparams.json')) as f:
        hparams.override_from_dict(json.load(f))

    if length is None:
        length = hparams.n_ctx
    elif length > hparams.n_ctx:
        raise ValueError("Can't get samples longer than window size: %s" % hparams.n_ctx)

    with tf.Session(graph=tf.Graph()) as sess:
        np.random.seed(seed)
        tf.set_random_seed(seed)

        output = sample.sample_sequence(
            hparams=hparams, length=length,
            start_token=enc.encoder['<|endoftext|>'],
            batch_size=batch_size,
            temperature=temperature, top_k=top_k, top_p=top_p
        )[:, 1:]

        saver = tf.train.Saver()
        ckpt = tf.train.latest_checkpoint(os.path.join(models_dir, model_name))
        saver.restore(sess, ckpt)

        generated = 0
        while nsamples == 0 or generated < nsamples:
            out = sess.run(output)
            for i in range(batch_size):
                generated += batch_size
                text = enc.decode(out[i])
                print("=" * 40 + " SAMPLE " + str(generated) + " " + "=" * 40)
                print(text)

if __name__ == '__main__':
    fire.Fire(sample_model)


Can anyone advise what I might be doing wrong? I'm sure it's really obvious, but I've been trying all sorts of things for about 4 hours with no luck.

Any advice is much appreciated.

Upvotes: 1

Views: 2068

Answers (1)

Livmortis

Reputation: 151

You have to download the model before running the script:

python3 download_model.py 124M

The download_model.py script is in the root directory of the project.
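As a rough sketch (the file names below are what download_model.py fetches for the 124M model, assuming the stock layout of the gpt-2 repository), you can confirm the download worked before re-running the sampler:

import os

# Sketch: check that the files the encoder and checkpoint loader need
# exist under models/124M after running download_model.py.
model_dir = os.path.join('models', '124M')
expected = [
    'checkpoint', 'encoder.json', 'hparams.json',
    'model.ckpt.data-00000-of-00001', 'model.ckpt.index',
    'model.ckpt.meta', 'vocab.bpe',
]

for fname in expected:
    path = os.path.join(model_dir, fname)
    print(path, '->', 'OK' if os.path.exists(path) else 'MISSING')

Then run the sampler from the repository root so the relative models path resolves, or pass --models_dir explicitly (fire exposes the sample_model parameters as command-line flags).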

Upvotes: 0
