Worker1432

Reputation: 141

Cannot download Llama 3.2 3B model using Unsloth and Hugging Face

I want to fine-tune the Llama 3.2 3B model locally on my own dataset and then save the fine-tuned model locally as well. I have an Anaconda setup and I'm working in the base environment, where I can clearly see that both unsloth and the Hugging Face libraries are installed.
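To double-check the environment, here is how the packages visible to the base interpreter can be listed (standard-library only; the exact package names are my assumption about what Unsloth pulls in):

```python
from importlib.metadata import version, PackageNotFoundError

def check_packages(names):
    """Return {package: installed version, or None if missing} for the active environment."""
    found = {}
    for pkg in names:
        try:
            found[pkg] = version(pkg)
        except PackageNotFoundError:
            found[pkg] = None
    return found

# Run against the stack the code below relies on:
print(check_packages(["unsloth", "transformers", "huggingface_hub", "bitsandbytes", "torch"]))
```

Running this in the same interpreter that fails to download the model shows whether any of these resolve to None, i.e. are missing from the environment actually in use.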

However, every time I try to load the model, I get a series of errors. Here are the errors, in order:

Here's the code:

from unsloth import FastLanguageModel
import torch

max_seq_length = 2048
dtype = None          # None = auto-detect (float16 or bfloat16 depending on the GPU)
load_in_4bit = True   # use 4-bit quantization to reduce memory usage

# Pre-quantized 4-bit checkpoints Unsloth provides (listed for reference; unused below)
fourbit_models = [
    "unsloth/Llama-3.2-1B-bnb-4bit",
    "unsloth/Llama-3.2-1B-Instruct-bnb-4bit",
    "unsloth/Llama-3.2-3B-bnb-4bit",
    "unsloth/Llama-3.2-3B-Instruct-bnb-4bit",
]

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Llama-3.2-3B-Instruct",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
    token = "hf_somethingprivate",  # Hugging Face access token (redacted)
)

I have already:

What's wrong, and how can I resolve it?

Upvotes: 0

Views: 124

Answers (0)
