BIPLAB ROY

Reputation: 11

Why is using LIME with BERT giving a memory error?

I am using LIME to visualize predictions from my fine-tuned BERT model. I don't know why it uses so much memory that the process is killed by the system. Here is my code:

from torch.nn import Softmax
from transformers import BertTokenizer, BertForSequenceClassification
from lime.lime_text import LimeTextExplainer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

model = BertForSequenceClassification.from_pretrained(f'{BASE_PATH}results/{MODEL}/', num_labels=4)

def _proba(texts):
    # LIME calls this with a list of perturbed texts and expects class probabilities
    encodings = tokenizer(texts, truncation=True, padding=True, max_length=250, return_tensors='pt')
    pred = model(**encodings)
    softmax = Softmax(dim=1)
    prob = softmax(pred.logits).detach().numpy()
    return prob

explainer = LimeTextExplainer(class_names=['A', 'B', 'C', 'D'])


idx = 0
exp = explainer.explain_instance(test_texts[idx], _proba, num_features=4)

exp.save_to_file('/lime_vis.html')

I am running this code on a server with 64 GB of RAM. I am curious how it can still give a memory error.

I have also tried running it on Colab and Kaggle; it uses up all the available memory even for a single example.
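If I understand LIME correctly, explain_instance generates num_samples perturbed texts (5000 by default) and hands the whole list to _proba in a single call, so the code above encodes and forwards thousands of sequences through BERT as one giant batch, with gradient tracking still enabled. Below is a minimal sketch of a batched, no-grad prediction function I am considering; the batch_size of 32 and the reduced num_samples are illustrative guesses, not values I have tested with this model:

import numpy as np
import torch
from torch.nn.functional import softmax

def _proba(texts, batch_size=32):
    # Process LIME's perturbed texts in small chunks so only one
    # batch of activations is in memory at a time.
    probs = []
    model.eval()
    with torch.no_grad():  # skip storing activations for backprop
        for i in range(0, len(texts), batch_size):
            batch = texts[i:i + batch_size]
            encodings = tokenizer(batch, truncation=True, padding=True,
                                  max_length=250, return_tensors='pt')
            logits = model(**encodings).logits
            probs.append(softmax(logits, dim=1).numpy())
    return np.concatenate(probs)

exp = explainer.explain_instance(test_texts[idx], _proba,
                                 num_features=4, num_samples=1000)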

Upvotes: 1

Views: 139

Answers (0)
