---
language:
- ko
- en
license: mit
---

# Model Card for free-evo-qwen72b-v0.8

## 1st place: 4th May 2024 - avg. 81.28

Ranked first on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), but the entry was later removed. Perhaps the explanation of the method was not sufficient.

## Method

- We were inspired by this [Sakana project](https://sakana.ai/evolutionary-model-merge/) on evolutionary model merging.

## Process

1. Two models with the same architecture are needed, so fine-tune a model to create a gap between the two of them.
2. Merge the original model with the fine-tuned one.
3. Evaluate the merged model.
4. Merge the result with the original model again.
5. Evaluate again.
6. Keep going until the average evaluation score is higher than the original model's.

That's it. Simple.

## Base Architecture

- QWEN2

## Base Models

- several QWEN2-based models
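The iterative merge-and-evaluate loop above can be sketched as follows. This is a minimal illustration, not the actual pipeline: model weights are simplified to plain dicts of floats, the merge is a simple linear average, and `evaluate` is a stand-in scoring function supplied by the caller.

```python
# Illustrative sketch of the merge/evaluate loop from the Process section.
# Weights are plain dicts of floats; real merges operate on full tensors.

def linear_merge(a, b, alpha=0.5):
    """Weighted average of two same-architecture weight dicts (assumed merge method)."""
    return {k: alpha * a[k] + (1 - alpha) * b[k] for k in a}

def evolve(original, finetuned, evaluate, max_rounds=10):
    """Merge, evaluate, and keep folding the original back in
    until the merged model scores higher than the original."""
    baseline = evaluate(original)
    candidate = linear_merge(original, finetuned)      # step 2: first merge
    for _ in range(max_rounds):
        score = evaluate(candidate)                    # steps 3 and 5: evaluate
        if score > baseline:                           # step 6: stop when better
            return candidate, score
        candidate = linear_merge(candidate, original)  # step 4: merge again
    return candidate, evaluate(candidate)

# Toy demo: the "benchmark" rewards weights close to 1.0.
original = {"w": 0.0}
finetuned = {"w": 2.0}
score_fn = lambda m: -abs(m["w"] - 1.0)
merged, score = evolve(original, finetuned, score_fn)
```

In practice the merge would use a toolkit such as mergekit and the evaluation would run the actual benchmark suite; the loop structure is the same.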