Niklas Heidloff

Reputation: 962

LoRA Fine-Tuning on CPU Error: Using `load_in_8bit=True` requires Accelerate

I'm trying to follow the instructions from the great blog post Efficient Large Language Model training with LoRA and Hugging Face to fine-tune a small model via LoRA. To prepare as much as possible, I'm trying to run everything locally via CPU first.

When running `AutoModelForSeq2SeqLM.from_pretrained` I get the following error:

ImportError: Using `load_in_8bit=True` requires Accelerate: pip install accelerate and the latest version of bitsandbytes: pip install -i https://test.pypi.org/simple/ bitsandbytes or pip install bitsandbytes
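For context, the call that triggers the error looks roughly like this; the model id and the exact keyword arguments are assumptions based on the blog post, not a verbatim copy of my notebook:

from transformers import AutoModelForSeq2SeqLM

# load_in_8bit=True is the flag that pulls in accelerate and bitsandbytes
model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/flan-t5-small",   # placeholder; any seq2seq checkpoint reproduces the error
    load_in_8bit=True,
    device_map="auto",
)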

I've tried various ways to install and import accelerate and bitsandbytes, but without success. The fine-tuning code without LoRA works (Fine-tune FLAN-T5 for chat & dialogue summarization); that one doesn't use accelerate.

Does anyone know how to fix this issue? Or is accelerate not supported on CPU?

Here is what I've tried:

!pip install "peft"
!pip install "transformers" "datasets" "accelerate" "evaluate" "bitsandbytes" loralib --upgrade --quiet
or
!pip install "peft==0.2.0"
!pip install "transformers==4.27.2" "datasets==2.9.0" "accelerate==0.17.1" "evaluate==0.4.0" "bitsandbytes==0.37.1" loralib --upgrade --quiet
or
!pip install -q -U bitsandbytes
!pip install -q -U git+https://github.com/huggingface/transformers.git 
!pip install -q -U git+https://github.com/huggingface/peft.git
!pip install -q -U git+https://github.com/huggingface/accelerate.git
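For what it's worth, a quick sanity check like the one below (standard library only, nothing project-specific assumed) confirms whether the packages are actually importable in the running kernel; in a notebook the kernel usually has to be restarted after pip install before new packages are picked up:

import importlib.metadata as md

# Print the installed version of each package, or flag it as missing.
for pkg in ("accelerate", "bitsandbytes", "transformers", "peft"):
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "is NOT installed in this environment")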

Update Nov 27th: I get the same error when running on a V100 GPU.

Upvotes: 0

Views: 695

Answers (1)

Niklas Heidloff

Reputation: 962

This works now:

pip install "accelerate==0.17.1"
pip install "peft==0.2.0"
pip install "transformers==4.27.2" "datasets" "evaluate==0.4.0" "bitsandbytes==0.41.2.post2" loralib
pip install rouge-score tensorboard py7zr scipy
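With those versions installed, the LoRA setup from the blog post loads for me. A rough sketch of that setup is below; the model id and the LoRA hyperparameters are placeholders along the lines of the blog post, and load_in_8bit=True still needs a CUDA GPU for bitsandbytes:

from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model, prepare_model_for_int8_training

model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/flan-t5-small",   # placeholder; the blog post uses a sharded flan-t5-xxl
    load_in_8bit=True,
    device_map="auto",
)

# Freeze the int8 base weights and cast layer norms to fp32 for stable training
model = prepare_model_for_int8_training(model)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q", "v"],   # attention projections in T5
    lora_dropout=0.05,
    bias="none",
    task_type=TaskType.SEQ_2_SEQ_LM,
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()   # only the LoRA adapters are trainable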

Upvotes: 0
