emillykkejensen committed
Commit
561d5c3
1 Parent(s): 6f09d04

Update README.md

Files changed (1):
  1. README.md (+8 −3)
README.md CHANGED

@@ -5,11 +5,16 @@ tags:
 - trl
 - sft
 - generated_from_trainer
+- danish
 datasets:
-- generator
+- kobprof/skolegpt-instruct
 model-index:
 - name: Llama-3-instruct-dansk
   results: []
+language:
+- da
+library_name: transformers
+pipeline_tag: text-generation
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -18,7 +23,7 @@ should probably proofread and complete it, then remove this comment. -->
 [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/emillykkejensen/LLM-instruct/runs/v8wdcn55)
 # Llama-3-instruct-dansk
 
-This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the generator dataset.
+This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the [kobprof/skolegpt-instruct](https://huggingface.co/datasets/kobprof/skolegpt-instruct) dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.9477
@@ -61,4 +66,4 @@ The following hyperparameters were used during training:
 - Transformers 4.41.0.dev0
 - Pytorch 2.2.0
 - Datasets 2.19.0
-- Tokenizers 0.19.1
+- Tokenizers 0.19.1
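The commit's main effect is on the card's YAML front matter, which the Hub reads to set the model's language, library, and pipeline widget. A minimal stdlib-only sketch of reading those scalar fields from the post-commit front matter; the parser below is illustrative (the Hub uses a full YAML parser), and the front-matter text is reproduced from this diff, not fetched live:

```python
# Front matter of README.md after this commit, as shown in the diff above.
FRONT_MATTER = """\
tags:
- trl
- sft
- generated_from_trainer
- danish
datasets:
- kobprof/skolegpt-instruct
model-index:
- name: Llama-3-instruct-dansk
  results: []
language:
- da
library_name: transformers
pipeline_tag: text-generation
"""

def top_level_scalars(text: str) -> dict:
    """Collect `key: value` pairs at the top indentation level.

    List items ("- ...") and indented lines are skipped, so only the
    scalar fields this commit adds (library_name, pipeline_tag) remain.
    """
    out = {}
    for line in text.splitlines():
        if line and not line.startswith((" ", "-")) and ":" in line:
            key, _, value = line.partition(":")
            if value.strip():
                out[key.strip()] = value.strip()
    return out

meta = top_level_scalars(FRONT_MATTER)
print(meta["library_name"])  # transformers
print(meta["pipeline_tag"])  # text-generation
```

With `pipeline_tag: text-generation` and `library_name: transformers` set, the Hub knows to surface the model under text-generation and to suggest loading it with the transformers library.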