Fine-tuning an open LLM model on a local machine

I would like to use an open-source or free LLM that is good at text processing and summarization, and that can also engage in chat to extract meaningful content. Are there free or open-source tools that can help me fine-tune such a model on my own documents?

I am running llama2 and mistral with the Ollama Docker container. How do I fine-tune these models locally on my text data? Should I use APIs to train them with prompts?

Upvotes: 1

Views: 1267

Answers (2)

aliarda

Reputation: 3

You can also use LLaMA-Factory; it is one of the simplest yet most effective tools I have tried. You just edit a .yaml file (which is really simple) and then run:

llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
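For reference, the config it points at looks roughly like the sketch below. The key names follow the files under examples/train_lora/ in the LLaMA-Factory repo, but treat the model path, dataset name, and hyperparameter values as placeholders you would replace with your own:

```yaml
# Minimal sketch of a LLaMA-Factory LoRA SFT config (values are placeholders)
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
dataset: identity            # a dataset registered in data/dataset_info.json
template: llama3
output_dir: saves/llama3-8b/lora/sft
per_device_train_batch_size: 1
learning_rate: 1.0e-4
num_train_epochs: 3.0
```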

And if you are on a Mac, you can try mlx-lm for an even simpler tool. It looks like this:

python lora.py --model <path_to_model> \
           --train \
           --iters 600
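The MLX LoRA example reads its training data as JSONL files (e.g. train.jsonl), one `{"text": ...}` object per line. As a hypothetical illustration, a short script like this would convert a list of plain-text documents into that layout (the file name and document strings here are made up):

```python
import json

# Example documents you want to fine-tune on (placeholders)
documents = [
    "First document to fine-tune on.",
    "Second document to fine-tune on.",
]

# Write one JSON object per line, as the MLX LoRA script expects
with open("train.jsonl", "w") as f:
    for doc in documents:
        f.write(json.dumps({"text": doc}) + "\n")
```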

You can find more info about both of these from their github pages:

https://github.com/hiyouga/LLaMA-Factory

https://github.com/ml-explore/mlx-examples/tree/main/lora

Upvotes: 0

MarioZ

Reputation: 318

The easiest way to fine-tune any LLM on your own data is with Hugging Face's AutoTrain command-line app. Something like this:

autotrain llm \
--train \
--model ${MODEL_NAME} \
--project-name ${PROJECT_NAME} \
--data-path data/ \
--text-column text \
--lr ${LEARNING_RATE} \

....
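Since the command above points `--data-path` at a `data/` directory and `--text-column` at a column named `text`, your data needs to be a tabular file with that column. As a hypothetical sketch (file name and document strings are made up), you could assemble it like this:

```python
import csv
import os

# Placeholder documents to fine-tune on
documents = ["Document one ...", "Document two ..."]

# Build data/train.csv with a single "text" column,
# matching --data-path data/ and --text-column text
os.makedirs("data", exist_ok=True)
with open("data/train.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["text"])
    for doc in documents:
        writer.writerow([doc])
```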

You will find more information here: https://huggingface.co/docs/autotrain/llm_finetuning

Upvotes: 0
