Jacaranda committed
Commit 6775e74
1 Parent(s): 17f6a94

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -15,7 +15,7 @@ pipeline_tag: question-answering
 ## Model Details
 UlizaLlama is a 7B-parameter language model that builds upon the foundation of [Jacaranda/kiswallama-pretrained](https://huggingface.co/Jacaranda/kiswallama-pretrained). Jacaranda/kiswallama-pretrained is a large language model continually pretrained with 321,530,045 Swahili tokens and a customized tokenizer with a Swahili vocabulary of 20,000 tokens to extend the capabilities of [Meta/Llama2](https://huggingface.co/meta-llama/Llama-2-7b). It offers significant improvements in both encoding and decoding for Swahili text, surpassing the Swahili performance of [Meta/Llama2](https://huggingface.co/meta-llama/Llama-2-7b). Moreover, Jacaranda/kiswallama-pretrained excels in providing accurate next-word completions in Swahili, a capability which [Meta/Llama2](https://huggingface.co/meta-llama/Llama-2-7b) falls short of.
 ### Model Description
-- Origin: Adaptation of the Jacaranda/kiswallama-pretrained model.
+- Origin: Adaptation of the Jacaranda/kiswallama-pretrained model, which is continually pretrained from Meta/Llama2.
 - Data: Instructional dataset in Swahili and English consisting of prompt-response pairs.
 - Training: Alignment to standard methodologies, incorporation of task-centric heads, neural network weight optimization via backpropagation, and task-specific adjustments.
 - Fine-tuning: Utilized the LoRA approach, refining two low-rank matrices that mirror the main weight matrix from [Jacaranda/kiswallama-pretrained](https://huggingface.co/Jacaranda/kiswallama-pretrained). This Low-Rank Adapter (LoRA) was vital for instruction-focused fine-tuning. After training, the adapter was extracted, and Hugging Face's merge_and_unload() function was used to merge the adapter weights into the base model. This fusion enables standalone inference with the merged model.
 
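The README's claim about improved Swahili encoding can be illustrated with a quick tokenizer comparison. The sketch below is not part of this commit; it assumes both repositories are accessible and ship `transformers`-format tokenizers, uses the `-hf` variant of the linked Llama-2 repo for compatibility, and picks an arbitrary Swahili sentence.

```python
# Hedged sketch (not from the commit): compare how many tokens each tokenizer
# needs for the same Swahili sentence. A tokenizer extended with a 20,000-token
# Swahili vocabulary should generally produce fewer, more meaningful tokens.
from transformers import AutoTokenizer

base_tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")            # original Llama-2 tokenizer
swahili_tok = AutoTokenizer.from_pretrained("Jacaranda/kiswallama-pretrained")  # extended Swahili tokenizer

text = "Habari za asubuhi, karibu sana nyumbani kwetu."  # arbitrary example sentence

print("Llama-2 tokens:    ", len(base_tok.tokenize(text)))
print("Kiswallama tokens: ", len(swahili_tok.tokenize(text)))
```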
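The Fine-tuning bullet describes extracting the LoRA adapter and folding it into the base model with merge_and_unload(). A minimal sketch of that workflow with the PEFT library follows; the adapter path is a hypothetical placeholder, and loading details (dtype, device, access to gated repos) will vary.

```python
# Hedged sketch (not from the commit): merge a LoRA adapter into the base model
# so the result can be used for standalone inference, as described above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Jacaranda/kiswallama-pretrained"     # continually pretrained base model
adapter_id = "path/to/ulizallama-lora-adapter"  # hypothetical adapter location

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)

# Attach the instruction-tuning LoRA adapter, then fold its weights into the base.
model = PeftModel.from_pretrained(base_model, adapter_id)
model = model.merge_and_unload()  # plain transformers model; no adapter needed at inference time

# Save the merged model so it can later be loaded directly with AutoModelForCausalLM.
model.save_pretrained("ulizallama-merged")
tokenizer.save_pretrained("ulizallama-merged")
```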