Salma Bouzid

Reputation: 65

How to increase the training speed of this NER model trained from scratch on 1 million labeled sentences

I would like to use spaCy's NER model to train a model from scratch on 1 million sentences. The model has only two entity types. This is the code I am using. Since I can't share the data, I created a dummy dataset.

My main issue is that the model is taking too long to train. I would appreciate it if you could point out any errors in my code or suggest other methods to speed up training.

TRAIN_DATA = [ ('Ich bin in Bremen', {'entities': [(11, 17, 'loc')]})] * 1000000



import spacy
import random
from spacy.util import minibatch, compounding

def train_spacy(data,iterations):
    TRAIN_DATA = data
    nlp = spacy.blank('de')  # start from a blank German pipeline
    if 'ner' not in nlp.pipe_names:
        ner = nlp.create_pipe('ner')
        nlp.add_pipe(ner, last=True)


    # add labels
    for _, annotations in TRAIN_DATA:
        for ent in annotations.get('entities'):
            ner.add_label(ent[2])

    other_pipes = [pipe for pipe in nlp.pipe_names if pipe != 'ner']
    with nlp.disable_pipes(*other_pipes):  # train only the NER component
        optimizer = nlp.begin_training()
        for itn in range(iterations):
            print("Statring iteration " + str(itn))
            random.shuffle(TRAIN_DATA)
            losses = {}  
            batches = minibatch(TRAIN_DATA, size=compounding(100, 64.0, 1.001))
            for batch in batches:        
                texts, annotations = zip(*batch)
                nlp.update(texts, annotations, sgd=optimizer, drop=0.35, losses=losses)
            print("Losses", losses)

    return nlp



model = train_spacy(TRAIN_DATA, 20)



Upvotes: 2

Views: 294

Answers (1)

Resul Saparov

Reputation: 58

Maybe you can try this:

batches = minibatch(TRAIN_DATA, size=compounding(1, 512, 1.001))
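For reference, `compounding(start, stop, compound)` from `spacy.util` (spaCy v2) yields a batch-size schedule that starts at `start` and is multiplied by `compound` on each step until it reaches `stop`, so training begins with very small batches and grows them gradually. A minimal sketch of what the suggested schedule produces (purely illustrative, assuming spaCy v2):

from itertools import islice
from spacy.util import compounding

# Peek at the first few batch sizes the schedule yields; minibatch()
# truncates each value to an int, so early batches hold a single example
# and later batches grow toward 512 examples per update.
schedule = compounding(1, 512, 1.001)
print(list(islice(schedule, 5)))   # roughly [1.0, 1.001, 1.002, ...]

Whether this alone fixes the runtime depends on your data, but starting small and compounding the batch size upward is the batching pattern used in spaCy v2's own training examples (typically with smaller bounds).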

Upvotes: 1
