Fine-tuning scripts for Llama3.2-Vision series

#27 by 2U1

https://github.com/2U1/Llama3.2-Vision-Ft

I wrote code for fine-tuning Llama3.2-Vision.
It supports:

  • LoRA/QLoRA
  • flexible freezing of modules
  • setting different learning rates for each module

(See the sketch below for roughly what these features look like.)
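For context, here is a minimal sketch of how those three features can be wired up with Hugging Face transformers + peft + bitsandbytes. This is not the repo's actual training code; the checkpoint name, LoRA target modules, the `vision_model`/`cross_attn` name-matching substrings, and the learning rates are all illustrative assumptions.

```python
import torch
from transformers import AutoProcessor, BitsAndBytesConfig, MllamaForConditionalGeneration
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"  # assumed checkpoint

# QLoRA-style loading: quantize the frozen base weights to 4-bit NF4.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)
model = prepare_model_for_kbit_training(model)

# LoRA adapters on the attention projections (illustrative target-module choice).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)

# Flexible freezing: e.g. keep the vision tower (and its adapters) frozen so
# only the language-side adapters are updated.
for name, param in model.named_parameters():
    if "vision_model" in name:
        param.requires_grad = False

# Per-module learning rates via optimizer parameter groups: here, a lower LR
# for the cross-attention adapters than for the rest (illustrative values).
cross_attn_params, other_params = [], []
for name, param in model.named_parameters():
    if not param.requires_grad:
        continue
    (cross_attn_params if "cross_attn" in name else other_params).append(param)

optimizer = torch.optim.AdamW(
    [
        {"params": other_params, "lr": 2e-4},
        {"params": cross_attn_params, "lr": 5e-5},
    ]
)
```

The point is just that freezing and per-module learning rates reduce to `requires_grad` flags and optimizer parameter groups, which can then be passed to whatever training loop or Trainer the repo uses.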

However, it still needs work on some other features. Feedback and issues are welcome.
PRs and help are also welcome!

Meta has released fine-tuning recipes for vision models as well. Check it out: https://github.com/meta-llama/llama-recipes/blob/main/recipes/quickstart/finetuning/finetune_vision_model.md. It might help improve your recipe.

Meta Llama org

@2U1 I'm part of the team working on the llama-recipes repo, please feel free to learn from the script above for your implementation and reach out if you have any Qs!

@doitbuildit @Sanyam Thanks, I'll take a look at it.
