Reputation: 1
I'm trying to fine-tune the LLaMA 3.1 8B model with QLoRA, loading it in 4-bit via bitsandbytes, on a mental health counseling conversations dataset from Hugging Face. However, when I run the code I get a torch.cuda.OutOfMemoryError. I've tried using multiple GPUs as well as GPUs with more memory, but the error persists.
Here's my code:
import torch
from torch.utils.data import DataLoader
from transformers import Trainer, AutoModelForCausalLM, AutoTokenizer
from datasets import load_dataset
from peft import get_peft_model, LoraConfig, TaskType
import numpy as np
from transformers import BitsAndBytesConfig, TrainingArguments
# BitsAndBytes configuration which loads in 4-bit
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_storage=torch.bfloat16,
)
# Load model and tokenizer using huggingface
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct",
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
# Load mental health counseling dataset
dataset = load_dataset("Amod/mental_health_counseling_conversations")
# Data preprocessing functions
def generate_prompt(Context, Response):
    return f"""
You are supposed to reply to the questions as a professional therapist
Question: {Context}
Answer: {Response}
"""
def format_for_llama(example):
    prompt = generate_prompt(example['Context'], example['Response'])
    return {
        "text": prompt.strip()
    }
formatted_dataset = dataset['train'].map(format_for_llama)
tokenizer.pad_token = tokenizer.eos_token
# Tokenize the formatted text (tokenized_dataset is used below); fixed-length padding
# is assumed here (max_length=512) so that torch.stack in collate_fn gets equal-length tensors
def tokenize_function(example):
    return tokenizer(example["text"], truncation=True, padding="max_length", max_length=512)

tokenized_dataset = formatted_dataset.map(tokenize_function, remove_columns=formatted_dataset.column_names)
tokenized_dataset.set_format("torch")
# Collate function for DataLoader
def collate_fn(examples):
    input_ids = torch.stack([example['input_ids'] for example in examples])
    attention_mask = torch.stack([example['attention_mask'] for example in examples])
    return {
        'input_ids': input_ids,
        'attention_mask': attention_mask,
        # labels are needed so the model returns a loss for causal-LM training
        'labels': input_ids.clone()
    }
train_dataloader = DataLoader(tokenized_dataset, collate_fn=collate_fn, batch_size=10)
# PEFT configuration (adding trainable adapters)
peft_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    inference_mode=False,
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=[
        "q_proj",
        "k_proj",
        "v_proj",
        "o_proj",
        "gate_proj",
        "up_proj",
        "down_proj"
    ]
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
# Training arguments hyperparameters
args = TrainingArguments(
    output_dir="./models",
    save_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=10,
    num_train_epochs=10,
    weight_decay=0.01,
    logging_dir='logs',
    logging_strategy="epoch",
    remove_unused_columns=False,
    eval_strategy="no",
    load_best_model_at_end=False,
)
# Trainer initialization and training
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized_dataset,
    data_collator=collate_fn
)
trainer.train()
When I run this code, I get the following error:
OutOfMemoryError Traceback (most recent call last)
Cell In[42], line 7
1 trainer = Trainer(
2 model=model,
3 args=args,
4 train_dataset=tokenized_dataset,
5 data_collator=collate_fn
6 )
----> 7 trainer.train()
File /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1938, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1936 hf_hub_utils.enable_progress_bars()
1937 else:
-> 1938 return inner_training_loop(
1939 args=args,
1940 resume_from_checkpoint=resume_from_checkpoint,
1941 trial=trial,
1942 ignore_keys_for_eval=ignore_keys_for_eval,
1943 )
File /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:2279, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
2276 self.control = self.callback_handler.on_step_begin(args, self.state, self.control)
2278 with self.accelerator.accumulate(model):
-> 2279 tr_loss_step = self.training_step(model, inputs)
...
1857 else:
-> 1858 ret = input.softmax(dim, dtype=dtype)
1859 return ret
OutOfMemoryError: CUDA out of memory. Tried to allocate 1.25 GiB. GPU 0 has a total capacty of 47.54 GiB of which 1.05 GiB is free. Process 3704361 has 46.47 GiB memory in use. Of the allocated memory 45.37 GiB is allocated by PyTorch, and 808.03 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I have also tried multi-GPU instances on RunPod, but I run into the same error.
How can I resolve this?
Upvotes: 0
Views: 315
Reputation: 3182
Which types of GPU are you using?
Some things I would try:

- Set bnb_4bit_use_double_quant=True in your BitsAndBytesConfig.
- Lower r in your LoraConfig, e.g. to 8 or 4.
- Lower per_device_train_batch_size (and per_device_eval_batch_size, if you evaluate), e.g. to 4 or 2.
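For illustration, here is a minimal sketch of those changes, reusing the other values from your snippet (the exact numbers are just starting points to experiment with):

import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, TaskType

# 1) Double quantization shaves a bit more off the 4-bit weight memory
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# 2) A smaller LoRA rank means fewer trainable parameters and less optimizer state
peft_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    inference_mode=False,
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# 3) Smaller per-device batches are usually the biggest lever, since activation
#    memory grows roughly linearly with batch size
args = TrainingArguments(
    output_dir="./models",
    save_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    num_train_epochs=10,
    weight_decay=0.01,
    logging_dir="logs",
    logging_strategy="epoch",
    remove_unused_columns=False,
    eval_strategy="no",
    load_best_model_at_end=False,
)

If the smaller batch hurts convergence, you can raise gradient_accumulation_steps in TrainingArguments to keep the effective batch size up without the extra activation memory.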
Upvotes: 0