Khoi V

Reputation: 662

How to use decapoda-research/llama-7b-hf with a LoRA fine-tune in llama.cpp?

I have fine-tuned the decapoda-research/llama-7b-hf model with this tool: https://github.com/zetavg/LLaMA-LoRA-Tuner. Now I am trying to use it in llama.cpp, following this tutorial: https://github.com/ggerganov/llama.cpp/discussions/1166

As far as I know, I need to convert the LoRA model to GGML to use it. But decapoda-research/llama-7b-hf consists of 33 files.

So how can I merge the multiple .bin files into one and load the fine-tuned weights?

Upvotes: 1

Views: 1023

Answers (1)

user160357

Reputation: 1526

You would need to use the "HF to GGUF" converter script that is available in the llama.cpp repo. It reads the sharded HF checkpoint directly, so you do not need to merge the .bin files by hand.
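A rough sketch of the workflow (script and binary names have changed across llama.cpp versions, and the paths below are placeholders for your own directories, so adjust to match your checkout):

```shell
# 1. Convert the sharded HF base model to GGUF.
#    The converter picks up all 33 .bin shards from the model directory
#    automatically; there is no manual merge step.
python convert_hf_to_gguf.py /path/to/llama-7b-hf --outfile llama-7b.gguf

# 2. Convert the LoRA adapter to GGUF as well
#    (older llama.cpp versions used convert-lora-to-ggml.py instead):
python convert_lora_to_gguf.py /path/to/lora-adapter \
    --base /path/to/llama-7b-hf --outfile lora-adapter.gguf

# 3. Run inference with the base model plus the adapter:
./llama-cli -m llama-7b.gguf --lora lora-adapter.gguf -p "Hello"
```

Alternatively, you can merge the LoRA weights into the base model with the Hugging Face `peft` library before step 1, and then convert the merged model as a single plain HF checkpoint.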

Upvotes: 1
