tyzhu committed
Commit 9cf6849
1 Parent(s): 6a00ffe

Model save

Files changed (1)
  1. README.md +61 -75
README.md CHANGED
@@ -1,26 +1,13 @@
 ---
-license: other
-base_model: Qwen/Qwen1.5-4B
+license: llama2
+base_model: meta-llama/Llama-2-7b-hf
 tags:
 - generated_from_trainer
-datasets:
-- tyzhu/lmind_nq_train6000_eval6489_v1_qa
 metrics:
 - accuracy
 model-index:
 - name: lmind_nq_train6000_eval6489_v1_qa_3e-5_lora2
-  results:
-  - task:
-      name: Causal Language Modeling
-      type: text-generation
-    dataset:
-      name: tyzhu/lmind_nq_train6000_eval6489_v1_qa
-      type: tyzhu/lmind_nq_train6000_eval6489_v1_qa
-    metrics:
-    - name: Accuracy
-      type: accuracy
-      value: 0.5511794871794872
-library_name: peft
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -28,10 +15,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # lmind_nq_train6000_eval6489_v1_qa_3e-5_lora2
 
-This model is a fine-tuned version of [Qwen/Qwen1.5-4B](https://huggingface.co/Qwen/Qwen1.5-4B) on the tyzhu/lmind_nq_train6000_eval6489_v1_qa dataset.
+This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 2.8024
-- Accuracy: 0.5512
+- Loss: 2.4443
+- Accuracy: 0.5966
 
 ## Model description
 
@@ -66,64 +53,63 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Accuracy |
-|:-------------:|:-------:|:----:|:---------------:|:--------:|
-| 1.963 | 0.9973 | 187 | 1.6439 | 0.5695 |
-| 1.6099 | 2.0 | 375 | 1.6183 | 0.5733 |
-| 1.524 | 2.9973 | 562 | 1.6164 | 0.5744 |
-| 1.3938 | 4.0 | 750 | 1.6376 | 0.5729 |
-| 1.2685 | 4.9973 | 937 | 1.6846 | 0.5699 |
-| 1.1591 | 6.0 | 1125 | 1.7547 | 0.5673 |
-| 1.0444 | 6.9973 | 1312 | 1.8395 | 0.5643 |
-| 0.9535 | 8.0 | 1500 | 1.9008 | 0.5613 |
-| 0.8235 | 8.9973 | 1687 | 2.0268 | 0.5592 |
-| 0.7635 | 10.0 | 1875 | 2.0937 | 0.5568 |
-| 0.6978 | 10.9973 | 2062 | 2.1558 | 0.5570 |
-| 0.6615 | 12.0 | 2250 | 2.2400 | 0.5552 |
-| 0.6262 | 12.9973 | 2437 | 2.2687 | 0.5556 |
-| 0.5958 | 14.0 | 2625 | 2.3582 | 0.5537 |
-| 0.5778 | 14.9973 | 2812 | 2.3960 | 0.5534 |
-| 0.5661 | 16.0 | 3000 | 2.4322 | 0.5534 |
-| 0.5277 | 16.9973 | 3187 | 2.4828 | 0.5515 |
-| 0.5211 | 18.0 | 3375 | 2.5106 | 0.5516 |
-| 0.5189 | 18.9973 | 3562 | 2.5706 | 0.5515 |
-| 0.5166 | 20.0 | 3750 | 2.5422 | 0.5526 |
-| 0.5132 | 20.9973 | 3937 | 2.5948 | 0.5509 |
-| 0.5115 | 22.0 | 4125 | 2.6048 | 0.5512 |
-| 0.5083 | 22.9973 | 4312 | 2.5811 | 0.5521 |
-| 0.5081 | 24.0 | 4500 | 2.5662 | 0.5513 |
-| 0.4862 | 24.9973 | 4687 | 2.6429 | 0.5522 |
-| 0.4845 | 26.0 | 4875 | 2.6020 | 0.5534 |
-| 0.4869 | 26.9973 | 5062 | 2.6339 | 0.5522 |
-| 0.4862 | 28.0 | 5250 | 2.6162 | 0.5524 |
-| 0.4856 | 28.9973 | 5437 | 2.6764 | 0.5526 |
-| 0.4871 | 30.0 | 5625 | 2.6703 | 0.5526 |
-| 0.4863 | 30.9973 | 5812 | 2.6787 | 0.5533 |
-| 0.4884 | 32.0 | 6000 | 2.6848 | 0.5528 |
-| 0.467 | 32.9973 | 6187 | 2.6689 | 0.5531 |
-| 0.4694 | 34.0 | 6375 | 2.7013 | 0.5525 |
-| 0.4712 | 34.9973 | 6562 | 2.7065 | 0.5521 |
-| 0.4733 | 36.0 | 6750 | 2.6707 | 0.5523 |
-| 0.4752 | 36.9973 | 6937 | 2.6757 | 0.5532 |
-| 0.4744 | 38.0 | 7125 | 2.7016 | 0.5534 |
-| 0.4759 | 38.9973 | 7312 | 2.7263 | 0.5526 |
-| 0.4759 | 40.0 | 7500 | 2.7360 | 0.5525 |
-| 0.4569 | 40.9973 | 7687 | 2.7580 | 0.5524 |
-| 0.4585 | 42.0 | 7875 | 2.7459 | 0.5521 |
-| 0.4602 | 42.9973 | 8062 | 2.7965 | 0.5522 |
-| 0.4631 | 44.0 | 8250 | 2.7995 | 0.5516 |
-| 0.4615 | 44.9973 | 8437 | 2.7972 | 0.5519 |
-| 0.4647 | 46.0 | 8625 | 2.8381 | 0.5519 |
-| 0.4663 | 46.9973 | 8812 | 2.7762 | 0.5535 |
-| 0.4672 | 48.0 | 9000 | 2.8142 | 0.5526 |
-| 0.4505 | 48.9973 | 9187 | 2.7870 | 0.5528 |
-| 0.4537 | 49.8667 | 9350 | 2.8024 | 0.5512 |
+| Training Loss | Epoch | Step | Validation Loss | Accuracy |
+|:-------------:|:-----:|:----:|:---------------:|:--------:|
+| 2.0369 | 1.0 | 187 | 1.2953 | 0.6128 |
+| 1.2821 | 2.0 | 375 | 1.2741 | 0.6146 |
+| 1.1987 | 3.0 | 562 | 1.2715 | 0.6162 |
+| 1.066 | 4.0 | 750 | 1.3011 | 0.6151 |
+| 0.9381 | 5.0 | 937 | 1.3728 | 0.6126 |
+| 0.8238 | 6.0 | 1125 | 1.4599 | 0.6091 |
+| 0.7289 | 7.0 | 1312 | 1.5455 | 0.6064 |
+| 0.6559 | 8.0 | 1500 | 1.6359 | 0.6026 |
+| 0.5733 | 9.0 | 1687 | 1.7149 | 0.6006 |
+| 0.5336 | 10.0 | 1875 | 1.8006 | 0.5989 |
+| 0.5116 | 11.0 | 2062 | 1.8851 | 0.5982 |
+| 0.4934 | 12.0 | 2250 | 1.9262 | 0.5982 |
+| 0.4823 | 13.0 | 2437 | 1.9413 | 0.5974 |
+| 0.47 | 14.0 | 2625 | 2.0121 | 0.5967 |
+| 0.4661 | 15.0 | 2812 | 2.0250 | 0.5968 |
+| 0.462 | 16.0 | 3000 | 1.9805 | 0.5990 |
+| 0.4357 | 17.0 | 3187 | 2.0656 | 0.5976 |
+| 0.4348 | 18.0 | 3375 | 2.0308 | 0.5979 |
+| 0.4331 | 19.0 | 3562 | 2.0629 | 0.5990 |
+| 0.4341 | 20.0 | 3750 | 2.0815 | 0.5983 |
+| 0.434 | 21.0 | 3937 | 2.1253 | 0.5968 |
+| 0.4335 | 22.0 | 4125 | 2.1789 | 0.5971 |
+| 0.4346 | 23.0 | 4312 | 2.1455 | 0.5952 |
+| 0.4326 | 24.0 | 4500 | 2.1990 | 0.5971 |
+| 0.4139 | 25.0 | 4687 | 2.1890 | 0.5976 |
+| 0.4139 | 26.0 | 4875 | 2.1939 | 0.5968 |
+| 0.4162 | 27.0 | 5062 | 2.2190 | 0.5965 |
+| 0.4177 | 28.0 | 5250 | 2.2781 | 0.5955 |
+| 0.4173 | 29.0 | 5437 | 2.2681 | 0.5976 |
+| 0.4187 | 30.0 | 5625 | 2.2996 | 0.5959 |
+| 0.4199 | 31.0 | 5812 | 2.2395 | 0.5981 |
+| 0.4213 | 32.0 | 6000 | 2.2991 | 0.5957 |
+| 0.4015 | 33.0 | 6187 | 2.3223 | 0.5952 |
+| 0.4058 | 34.0 | 6375 | 2.3266 | 0.5957 |
+| 0.4056 | 35.0 | 6562 | 2.3779 | 0.5946 |
+| 0.4078 | 36.0 | 6750 | 2.3453 | 0.5951 |
+| 0.4097 | 37.0 | 6937 | 2.3379 | 0.5965 |
+| 0.4105 | 38.0 | 7125 | 2.3624 | 0.5969 |
+| 0.4116 | 39.0 | 7312 | 2.3846 | 0.5962 |
+| 0.4121 | 40.0 | 7500 | 2.3748 | 0.5945 |
+| 0.3973 | 41.0 | 7687 | 2.3797 | 0.5956 |
+| 0.3985 | 42.0 | 7875 | 2.3599 | 0.5967 |
+| 0.4014 | 43.0 | 8062 | 2.3475 | 0.5971 |
+| 0.4032 | 44.0 | 8250 | 2.3937 | 0.5987 |
+| 0.4028 | 45.0 | 8437 | 2.3863 | 0.5967 |
+| 0.4027 | 46.0 | 8625 | 2.4195 | 0.5956 |
+| 0.4046 | 47.0 | 8812 | 2.3832 | 0.5970 |
+| 0.4067 | 48.0 | 9000 | 2.3805 | 0.5973 |
+| 0.3923 | 49.0 | 9187 | 2.4460 | 0.5957 |
+| 0.3949 | 49.87 | 9350 | 2.4443 | 0.5966 |
 
 
 ### Framework versions
 
-- PEFT 0.5.0
-- Transformers 4.41.1
+- Transformers 4.34.0
 - Pytorch 2.1.0+cu121
-- Datasets 2.19.1
-- Tokenizers 0.19.1
+- Datasets 2.18.0
+- Tokenizers 0.14.1
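The earlier revision of this card lists `library_name: peft` and PEFT 0.5.0, and the model name ends in `lora2`, so the checkpoint is presumably a LoRA adapter to be loaded on top of the base model named in the card rather than a standalone set of weights. Below is a minimal loading and generation sketch under that assumption; the adapter repo id is inferred from the model name, the prompt format is illustrative only, and neither is stated in the card.

```python
# Hedged usage sketch (not from the card): load the base model named in this
# revision and apply the LoRA adapter with PEFT. The adapter repo id and the
# prompt format below are assumptions, not documented facts.
import math

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "meta-llama/Llama-2-7b-hf"  # base model listed in this revision of the card
ADAPTER_REPO = "tyzhu/lmind_nq_train6000_eval6489_v1_qa_3e-5_lora2"  # assumed adapter repo id

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER_REPO)
model.eval()

# Illustrative QA-style prompt; the training prompt format is not documented here.
prompt = "Question: who wrote the Declaration of Independence?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))

# If the reported eval loss of 2.4443 is the usual mean token-level cross-entropy
# (in nats), it corresponds to an evaluation perplexity of roughly exp(2.4443) ≈ 11.5.
print(f"approximate eval perplexity: {math.exp(2.4443):.1f}")
```

When reproducing results, matching the framework versions listed above (Transformers 4.34.0, Datasets 2.18.0, Tokenizers 0.14.1, PyTorch 2.1.0+cu121) is the safest starting point.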