shimmyshimmer committed on
Commit
3ee63d4
1 Parent(s): 2cab55e

Update README.md

Files changed (1)
  1. README.md +207 -655
README.md CHANGED
@@ -2,7 +2,7 @@
  language:
  - en
  library_name: transformers
- license: llama3.1
  tags:
  - llama-3
  - llama
@@ -16,7 +16,7 @@ tags:
 
  We have a free Google Colab Tesla T4 notebook for Llama 3.1 (8B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing
 
- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk)
  [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
 
  ## ✨ Finetune for Free
@@ -25,6 +25,7 @@ All notebooks are **beginner friendly**! Add your dataset, click "Run All", and
 
  | Unsloth supports | Free Notebooks | Performance | Memory use |
  |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
  | **Llama-3 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing) | 2.4x faster | 58% less |
  | **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less |
  | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
@@ -41,13 +42,19 @@ All notebooks are **beginner friendly**! Add your dataset, click "Run All", and
  ## Special Thanks
  A huge thank you to the Meta and Llama team for creating and releasing these models.
 
- ## Model Information
 
- The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.
 
- **Model developer**: Meta
 
- **Model Architecture:** Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
 
 
  <table>
@@ -58,10 +65,6 @@ The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a
  </td>
  <td><strong>Params</strong>
  </td>
- <td><strong>Input modalities</strong>
- </td>
- <td><strong>Output modalities</strong>
- </td>
  <td><strong>Context length</strong>
  </td>
  <td><strong>GQA</strong>
@@ -72,94 +75,72 @@ The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a
  </td>
  </tr>
  <tr>
- <td rowspan="3" >Llama 3.1 (text only)
  </td>
- <td rowspan="3" >A new mix of publicly available online data.
  </td>
  <td>8B
  </td>
- <td>Multilingual Text
- </td>
- <td>Multilingual Text and code
- </td>
- <td>128k
  </td>
  <td>Yes
  </td>
- <td rowspan="3" >15T+
  </td>
- <td rowspan="3" >December 2023
  </td>
  </tr>
  <tr>
  <td>70B
  </td>
- <td>Multilingual Text
- </td>
- <td>Multilingual Text and code
- </td>
- <td>128k
  </td>
  <td>Yes
  </td>
- </tr>
- <tr>
- <td>405B
- </td>
- <td>Multilingual Text
- </td>
- <td>Multilingual Text and code
- </td>
- <td>128k
- </td>
- <td>Yes
  </td>
  </tr>
  </table>
 
 
- **Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.
 
- **Llama 3.1 family of models**. Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
 
- **Model Release Date:** July 23, 2024.
 
- **Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
 
- **License:** A custom commercial license, the Llama 3.1 Community License, is available at: [https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE)
-
- **Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
 
 
  ## Intended Use
 
- **Intended Use Cases** Llama 3.1 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. The Llama 3.1 model collection also supports the ability to leverage the outputs of its models to improve other models, including synthetic data generation and distillation. The Llama 3.1 Community License allows for these use cases.
 
- **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.1 Community License. Use in languages beyond those explicitly referenced as supported in this model card.
 
- **<span style="text-decoration:underline;">Note</span>**: Llama 3.1 has been trained on a broader collection of languages than the 8 supported languages. Developers may fine-tune Llama 3.1 models for languages beyond the 8 supported languages provided they comply with the Llama 3.1 Community License and the Acceptable Use Policy, and in such cases are responsible for ensuring that any use of Llama 3.1 in additional languages is done in a safe and responsible manner.
 
  ## How to use
 
- This repository contains two versions of Meta-Llama-3.1-8B-Instruct, for use with transformers and with the original `llama` codebase.
 
  ### Use with transformers
 
- Starting with `transformers >= 4.43.0` onward, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
-
- Make sure to update your transformers installation via `pip install --upgrade transformers`.
 
152
  ```python
  import transformers
  import torch
 
- model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
 
  pipeline = transformers.pipeline(
      "text-generation",
      model=model_id,
      model_kwargs={"torch_dtype": torch.bfloat16},
-     device_map="auto",
  )
 
  messages = [
@@ -167,119 +148,106 @@ messages = [
      {"role": "user", "content": "Who are you?"},
  ]
 
  outputs = pipeline(
-     messages,
      max_new_tokens=256,
  )
- print(outputs[0]["generated_text"][-1])
  ```
 
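The prose above also mentions running inference through the Auto classes with `generate()`; the following is a minimal sketch of that route (an illustration added here, assuming the same gated checkpoint, a logged-in Hugging Face account, and a recent `transformers` release):

```python
# Minimal Auto-classes sketch (illustrative; assumes transformers >= 4.43 and
# access to the gated meta-llama/Meta-Llama-3.1-8B-Instruct repository).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Who are you?"}]
# apply_chat_template builds the Llama 3.1 chat prompt format for us.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```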
177
- Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generations, quantised and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes)
-
- ### Use with `llama`
 
- Please follow the instructions in the [repository](https://github.com/meta-llama/llama).
 
  To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
 
  ```
- huggingface-cli download meta-llama/Meta-Llama-3.1-8B-Instruct --include "original/*" --local-dir Meta-Llama-3.1-8B-Instruct
  ```
 
189
- ## Hardware and Software
 
- **Training Factors** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure.
-
- **Training utilized a cumulative of** 39.3M GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
 
- **Training Greenhouse Gas Emissions** Estimated total location-based greenhouse gas emissions were **11,390** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy, therefore the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
 
199
  <table>
    <tr><td></td><td><strong>Training Time (GPU hours)</strong></td><td><strong>Training Power Consumption (W)</strong></td><td><strong>Training Location-Based Greenhouse Gas Emissions (tons CO2eq)</strong></td><td><strong>Training Market-Based Greenhouse Gas Emissions (tons CO2eq)</strong></td></tr>
    <tr><td>Llama 3.1 8B</td><td>1.46M</td><td>700</td><td>420</td><td>0</td></tr>
    <tr><td>Llama 3.1 70B</td><td>7.0M</td><td>700</td><td>2,040</td><td>0</td></tr>
    <tr><td>Llama 3.1 405B</td><td>30.84M</td><td>700</td><td>8,930</td><td>0</td></tr>
    <tr><td>Total</td><td>39.3M</td><td></td><td>11,390</td><td>0</td></tr>
  </table>
 
268
 
269
 
270
- The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.
 
 
  ## Training Data
 
- **Overview:** Llama 3.1 was pretrained on ~15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 25M synthetically generated examples.
 
- **Data Freshness:** The pretraining data has a cutoff of December 2023.
 
- ## Benchmark scores
 
- In this section, we report the results for Llama 3.1 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library.
 
284
  ### Base pretrained models
285
 
@@ -290,241 +258,175 @@ In this section, we report the results for Llama 3.1 models on standard automati
  <table>
    <tr><td><strong>Category</strong></td><td><strong>Benchmark</strong></td><td><strong># Shots</strong></td><td><strong>Metric</strong></td><td><strong>Llama 3 8B</strong></td><td><strong>Llama 3.1 8B</strong></td><td><strong>Llama 3 70B</strong></td><td><strong>Llama 3.1 70B</strong></td><td><strong>Llama 3.1 405B</strong></td></tr>
    <tr><td rowspan="7">General</td><td>MMLU</td><td>5</td><td>macro_avg/acc_char</td><td>66.7</td><td>66.7</td><td>79.5</td><td>79.3</td><td>85.2</td></tr>
    <tr><td>MMLU-Pro (CoT)</td><td>5</td><td>macro_avg/acc_char</td><td>36.2</td><td>37.1</td><td>55.0</td><td>53.8</td><td>61.6</td></tr>
    <tr><td>AGIEval English</td><td>3-5</td><td>average/acc_char</td><td>47.1</td><td>47.8</td><td>63.0</td><td>64.6</td><td>71.6</td></tr>
    <tr><td>CommonSenseQA</td><td>7</td><td>acc_char</td><td>72.6</td><td>75.0</td><td>83.8</td><td>84.1</td><td>85.8</td></tr>
    <tr><td>Winogrande</td><td>5</td><td>acc_char</td><td>-</td><td>60.5</td><td>-</td><td>83.3</td><td>86.7</td></tr>
    <tr><td>BIG-Bench Hard (CoT)</td><td>3</td><td>average/em</td><td>61.1</td><td>64.2</td><td>81.3</td><td>81.6</td><td>85.9</td></tr>
    <tr><td>ARC-Challenge</td><td>25</td><td>acc_char</td><td>79.4</td><td>79.7</td><td>93.1</td><td>92.9</td><td>96.1</td></tr>
    <tr><td>Knowledge reasoning</td><td>TriviaQA-Wiki</td><td>5</td><td>em</td><td>78.5</td><td>77.6</td><td>89.7</td><td>89.8</td><td>91.8</td></tr>
    <tr><td rowspan="4">Reading comprehension</td><td>SQuAD</td><td>1</td><td>em</td><td>76.4</td><td>77.0</td><td>85.6</td><td>81.8</td><td>89.3</td></tr>
    <tr><td>QuAC (F1)</td><td>1</td><td>f1</td><td>44.4</td><td>44.9</td><td>51.1</td><td>51.1</td><td>53.6</td></tr>
    <tr><td>BoolQ</td><td>0</td><td>acc_char</td><td>75.7</td><td>75.0</td><td>79.0</td><td>79.4</td><td>80.0</td></tr>
    <tr><td>DROP (F1)</td><td>3</td><td>f1</td><td>58.4</td><td>59.5</td><td>79.7</td><td>79.6</td><td>84.8</td></tr>
  </table>
@@ -536,531 +438,181 @@ In this section, we report the results for Llama 3.1 models on standard automati
 
  <table>
    <tr><td><strong>Category</strong></td><td><strong>Benchmark</strong></td><td><strong># Shots</strong></td><td><strong>Metric</strong></td><td><strong>Llama 3 8B Instruct</strong></td><td><strong>Llama 3.1 8B Instruct</strong></td><td><strong>Llama 3 70B Instruct</strong></td><td><strong>Llama 3.1 70B Instruct</strong></td><td><strong>Llama 3.1 405B Instruct</strong></td></tr>
    <tr><td rowspan="4">General</td><td>MMLU</td><td>5</td><td>macro_avg/acc</td><td>68.5</td><td>69.4</td><td>82.0</td><td>83.6</td><td>87.3</td></tr>
    <tr><td>MMLU (CoT)</td><td>0</td><td>macro_avg/acc</td><td>65.3</td><td>73.0</td><td>80.9</td><td>86.0</td><td>88.6</td></tr>
    <tr><td>MMLU-Pro (CoT)</td><td>5</td><td>micro_avg/acc_char</td><td>45.5</td><td>48.3</td><td>63.4</td><td>66.4</td><td>73.3</td></tr>
    <tr><td>IFEval</td><td></td><td></td><td>76.8</td><td>80.4</td><td>82.9</td><td>87.5</td><td>88.6</td></tr>
    <tr><td rowspan="2">Reasoning</td><td>ARC-C</td><td>0</td><td>acc</td><td>82.4</td><td>83.4</td><td>94.4</td><td>94.8</td><td>96.9</td></tr>
    <tr><td>GPQA</td><td>0</td><td>em</td><td>34.6</td><td>30.4</td><td>39.5</td><td>41.7</td><td>50.7</td></tr>
    <tr><td rowspan="4">Code</td><td>HumanEval</td><td>0</td><td>pass@1</td><td>60.4</td><td>72.6</td><td>81.7</td><td>80.5</td><td>89.0</td></tr>
    <tr><td>MBPP ++ base version</td><td>0</td><td>pass@1</td><td>70.6</td><td>72.8</td><td>82.5</td><td>86.0</td><td>88.6</td></tr>
    <tr><td>Multipl-E HumanEval</td><td>0</td><td>pass@1</td><td>-</td><td>50.8</td><td>-</td><td>65.5</td><td>75.2</td></tr>
    <tr><td>Multipl-E MBPP</td><td>0</td><td>pass@1</td><td>-</td><td>52.4</td><td>-</td><td>62.0</td><td>65.7</td></tr>
    <tr><td rowspan="2">Math</td><td>GSM-8K (CoT)</td><td>8</td><td>em_maj1@1</td><td>80.6</td><td>84.5</td><td>93.0</td><td>95.1</td><td>96.8</td></tr>
    <tr><td>MATH (CoT)</td><td>0</td><td>final_em</td><td>29.1</td><td>51.9</td><td>51.0</td><td>68.0</td><td>73.8</td></tr>
    <tr><td rowspan="4">Tool Use</td><td>API-Bank</td><td>0</td><td>acc</td><td>48.3</td><td>82.6</td><td>85.1</td><td>90.0</td><td>92.0</td></tr>
    <tr><td>BFCL</td><td>0</td><td>acc</td><td>60.3</td><td>76.1</td><td>83.0</td><td>84.8</td><td>88.5</td></tr>
    <tr><td>Gorilla Benchmark API Bench</td><td>0</td><td>acc</td><td>1.7</td><td>8.2</td><td>14.7</td><td>29.7</td><td>35.3</td></tr>
    <tr><td>Nexus (0-shot)</td><td>0</td><td>macro_avg/acc</td><td>18.1</td><td>38.5</td><td>47.8</td><td>56.7</td><td>58.7</td></tr>
    <tr><td>Multilingual</td><td>Multilingual MGSM (CoT)</td><td>0</td><td>em</td><td>-</td><td>68.9</td><td>-</td><td>86.9</td><td>91.6</td></tr>
  </table>
878
- #### Multilingual benchmarks
 
  <table>
    <tr><td><strong>Category</strong></td><td><strong>Benchmark</strong></td><td><strong>Language</strong></td><td><strong>Llama 3.1 8B</strong></td><td><strong>Llama 3.1 70B</strong></td><td><strong>Llama 3.1 405B</strong></td></tr>
    <tr><td rowspan="7"><strong>General</strong></td><td rowspan="7"><strong>MMLU (5-shot, macro_avg/acc)</strong></td><td>Portuguese</td><td>62.12</td><td>80.13</td><td>84.95</td></tr>
    <tr><td>Spanish</td><td>62.45</td><td>80.05</td><td>85.08</td></tr>
    <tr><td>Italian</td><td>61.63</td><td>80.4</td><td>85.04</td></tr>
    <tr><td>German</td><td>60.59</td><td>79.27</td><td>84.36</td></tr>
    <tr><td>French</td><td>62.34</td><td>79.82</td><td>84.66</td></tr>
    <tr><td>Hindi</td><td>50.88</td><td>74.52</td><td>80.31</td></tr>
    <tr><td>Thai</td><td>50.32</td><td>72.95</td><td>78.21</td></tr>
  </table>
 
971
 
972
 
973
- ## Responsibility & Safety
974
 
975
- As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks:
976
 
 
977
 
 
978
 
979
- * Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama.
980
- * Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm.
981
- * Provide protections for the community to help prevent the misuse of our models.
982
 
 
983
 
984
- ### Responsible deployment
985
 
986
- Llama is a foundational technology designed to be used in a variety of use cases, examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models enabling the world to benefit from the technology power, by aligning our model safety for the generic use cases addressing a standard set of harms. Developers are then in the driver seat to tailor safety for their use case, defining their own policy and deploying the models with the necessary safeguards in their Llama systems. Llama 3.1 was developed following the best practices outlined in our Responsible Use Guide, you can refer to the [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to learn more.
987
 
 
988
 
989
- #### Llama 3.1 instruct
990
 
991
- Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. For more details on the safety mitigations implemented please read the Llama 3 paper.
992
 
993
- **Fine-tuning data**
994
 
995
- We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
996
 
997
- **Refusals and Tone**
998
 
999
- Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
1000
 
 
1001
 
1002
- #### Llama 3.1 systems
1003
 
1004
- **Large language models, including Llama 3.1, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional safety guardrails as required.** Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools.
1005
 
1006
- As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard 3, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.
1007
 
1008
 
1009
- #### New capabilities
1010
 
1011
- Note that this release introduces new capabilities, including a longer context window, multilingual inputs and outputs and possible integrations by developers with third party tools. Building with these new capabilities requires specific considerations in addition to the best practices that generally apply across all Generative AI use cases.
1012
 
1013
- **Tool-use**: Just like in standard software development, developers are responsible for the integration of the LLM with the tools and services of their choice. They should define a clear policy for their use case and assess the integrity of the third party services they use to be aware of the safety and security limitations when using this capability. Refer to the Responsible Use Guide for best practices on the safe deployment of the third party safeguards.
1014
 
1015
- **Multilinguality**: Llama 3.1 supports 7 languages in addition to English: French, German, Hindi, Italian, Portuguese, Spanish, and Thai. Llama may be able to output text in other languages than those that meet performance thresholds for safety and helpfulness. We strongly discourage developers from using this model to converse in non-supported languages without implementing finetuning and system controls in alignment with their policies and the best practices shared in the Responsible Use Guide.
1016
 
1017
 
1018
- ### Evaluations
 
1019
 
1020
- We evaluated Llama models for common use cases as well as specific capabilities. Common use cases evaluations measure safety risks of systems for most commonly built applications including chat bot, coding assistant, tool calls. We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Llama Guard 3 to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case. Prompt Guard and Code Shield are also available if relevant to the application.
1021
 
1022
- Capability evaluations measure vulnerabilities of Llama models inherent to specific capabilities, for which we crafted dedicated benchmarks, including long context, multilingual, tool calls, coding, and memorization.
1023
 
1024
- **Red teaming**
1025
 
1026
- For both scenarios, we conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets.
1027
 
1028
- We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets.
1029
 
 
1030
 
1031
- ### Critical and other risks
1032
 
1033
- We specifically focused our efforts on mitigating the following critical risk areas:
1034
 
1035
- **1. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive materials) helpfulness**
1036
 
1037
- To assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons.
1038
 
1039
 
1040
- **2. Child Safety**
1041
 
1042
- Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
1043
 
1044
- **3. Cyber attack enablement**
1045
 
1046
- Our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.
1047
 
1048
- Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention.
1049
 
1050
- Our study of Llama-3.1-405B’s social engineering uplift for cyber attackers was conducted to assess the effectiveness of AI models in aiding cyber threat actors in spear phishing campaigns. Please read our Llama 3.1 Cyber security whitepaper to learn more.
1051
 
 
1052
 
1053
- ### Community
1054
 
1055
- Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
1056
 
1057
- We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
1058
 
1059
- Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
1060
 
 
1061
 
1062
- ## Ethical Considerations and Limitations
1063
-
1064
- The core values of Llama 3.1 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.1 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
1065
 
1066
- But Llama 3.1 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.1’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.1 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
 
  language:
  - en
  library_name: transformers
+ license: llama3
  tags:
  - llama-3
  - llama
 
 
  We have a free Google Colab Tesla T4 notebook for Llama 3.1 (8B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing
 
+ [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
  [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
 
  ## ✨ Finetune for Free
 
 
  | Unsloth supports | Free Notebooks | Performance | Memory use |
  |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
+ | **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
  | **Llama-3 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing) | 2.4x faster | 58% less |
  | **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less |
  | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
 
  ## Special Thanks
  A huge thank you to the Meta and Llama team for creating and releasing these models.
 
+ ## Model Details
 
+ Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8B and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
 
+ **Model developers** Meta
 
+ **Variations** Llama 3 comes in two sizes, 8B and 70B parameters, in pre-trained and instruction tuned variants.
+
+ **Input** Models input text only.
+
+ **Output** Models generate text and code only.
+
+ **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
 
 
  <table>
    <tr><td></td><td><strong>Training Data</strong></td><td><strong>Params</strong></td><td><strong>Context length</strong></td><td><strong>GQA</strong></td><td><strong>Token count</strong></td><td><strong>Knowledge cutoff</strong></td></tr>
    <tr><td rowspan="2">Llama 3</td><td rowspan="2">A new mix of publicly available online data.</td><td>8B</td><td>8k</td><td>Yes</td><td rowspan="2">15T+</td><td>March, 2023</td></tr>
    <tr><td>70B</td><td>8k</td><td>Yes</td><td>December, 2023</td></tr>
  </table>
 
 
106
+ **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
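As a rough illustration of what Grouped-Query Attention means in practice, the sketch below shows how a Llama-style configuration in `transformers` exposes fewer key/value heads than query heads, which is what shrinks the KV cache at inference time. The head counts here are illustrative assumptions, not values taken from this card:

```python
# Illustrative GQA sketch: example head counts, not the published Llama 3 values.
from transformers import LlamaConfig

config = LlamaConfig(
    hidden_size=4096,
    num_attention_heads=32,   # query heads
    num_key_value_heads=8,    # shared key/value heads used by groups of query heads
)
# Each KV head is shared by num_attention_heads / num_key_value_heads query heads.
print(config.num_attention_heads // config.num_key_value_heads)  # -> 4
```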
107
 
108
+ **Model Release Date** April 18, 2024.
109
 
110
+ **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
111
 
112
+ **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
113
 
114
+ **Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
 
 
115
 
116
 
117
  ## Intended Use
118
 
119
+ **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
120
 
121
+ **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English.
122
 
123
+ **Note:** Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
124
 
125
  ## How to use
126
 
127
+ This repository contains two versions of Meta-Llama-3-70B-Instruct, for use with transformers and with the original `llama3` codebase.
128
 
129
  ### Use with transformers
130
 
131
+ See the snippet below for usage with Transformers:
 
 
132
 
133
  ```python
  import transformers
  import torch
 
+ model_id = "meta-llama/Meta-Llama-3-70B-Instruct"
 
  pipeline = transformers.pipeline(
      "text-generation",
      model=model_id,
      model_kwargs={"torch_dtype": torch.bfloat16},
+     device="auto",
  )
 
  messages = [
      {"role": "user", "content": "Who are you?"},
  ]
 
+ prompt = pipeline.tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ )
+
+ terminators = [
+     pipeline.tokenizer.eos_token_id,
+     pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
+ ]
+
  outputs = pipeline(
+     prompt,
      max_new_tokens=256,
+     eos_token_id=terminators,
+     do_sample=True,
+     temperature=0.6,
+     top_p=0.9,
  )
+ print(outputs[0]["generated_text"][len(prompt):])
  ```
 
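If the bf16 weights are too large for your hardware, a 4-bit loading variant along these lines may help. This is a sketch beyond the original snippet; it assumes the `bitsandbytes` and `accelerate` packages are installed and a CUDA GPU is available:

```python
# Optional memory-saving variant (assumption: bitsandbytes + accelerate installed).
import torch
import transformers
from transformers import BitsAndBytesConfig

model_id = "meta-llama/Meta-Llama-3-70B-Instruct"

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # store weights in 4-bit
    bnb_4bit_compute_dtype=torch.bfloat16,   # compute in bf16
)

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"quantization_config": quant_config},
    device_map="auto",  # place layers automatically across available GPUs
)
```

Note that `device_map="auto"` is the commonly used option for sharding a large checkpoint across devices.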
173
+ ### Use with `llama3`
 
+ Please follow the instructions in the [repository](https://github.com/meta-llama/llama3).
 
  To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
 
  ```
+ huggingface-cli download meta-llama/Meta-Llama-3-70B-Instruct --include "original/*" --local-dir Meta-Llama-3-70B-Instruct
  ```
 
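The same download can also be scripted from Python with `huggingface_hub` (a small sketch, assuming you have accepted the license and are logged in, e.g. via `huggingface-cli login`):

```python
# Python equivalent of the CLI command above.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="meta-llama/Meta-Llama-3-70B-Instruct",
    allow_patterns=["original/*"],          # only the original (non-HF-format) checkpoint files
    local_dir="Meta-Llama-3-70B-Instruct",
)
```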
 
183
+ For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
184
 
185
+ ## Hardware and Software
 
 
186
 
187
+ **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
188
 
189
+ **Carbon Footprint Pretraining utilized a cumulative** 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.
190
 
191
 
192
  <table>
193
  <tr>
194
  <td>
195
  </td>
196
+ <td><strong>Time (GPU hours)</strong>
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
197
  </td>
198
+ <td><strong>Power Consumption (W)</strong>
199
  </td>
200
+ <td><strong>Carbon Emitted(tCO2eq)</strong>
 
 
201
  </td>
202
  </tr>
203
  <tr>
204
+ <td>Llama 3 8B
205
  </td>
206
+ <td>1.3M
207
  </td>
208
  <td>700
209
  </td>
210
+ <td>390
 
 
211
  </td>
212
  </tr>
213
  <tr>
214
+ <td>Llama 3 70B
215
  </td>
216
+ <td>6.4M
217
  </td>
218
  <td>700
219
  </td>
220
+ <td>1900
 
 
221
  </td>
222
  </tr>
223
  <tr>
224
  <td>Total
225
  </td>
226
+ <td>7.7M
 
 
 
 
227
  </td>
228
+ <td>
229
  </td>
230
+ <td>2290
231
  </td>
232
  </tr>
233
  </table>
234
 
235
 
236
 
237
+ **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
238
 
239
 
240
  ## Training Data
241
 
242
+ **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
243
+
244
+ **Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.
245
 
 
246
 
247
+ ## Benchmarks
248
 
249
+ In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).
250
 
 
251
 
252
  ### Base pretrained models
 
  <table>
    <tr><td><strong>Category</strong></td><td><strong>Benchmark</strong></td><td><strong>Llama 3 8B</strong></td><td><strong>Llama2 7B</strong></td><td><strong>Llama2 13B</strong></td><td><strong>Llama 3 70B</strong></td><td><strong>Llama2 70B</strong></td></tr>
    <tr><td rowspan="6">General</td><td>MMLU (5-shot)</td><td>66.6</td><td>45.7</td><td>53.8</td><td>79.5</td><td>69.7</td></tr>
    <tr><td>AGIEval English (3-5 shot)</td><td>45.9</td><td>28.8</td><td>38.7</td><td>63.0</td><td>54.8</td></tr>
    <tr><td>CommonSenseQA (7-shot)</td><td>72.6</td><td>57.6</td><td>67.6</td><td>83.8</td><td>78.7</td></tr>
    <tr><td>Winogrande (5-shot)</td><td>76.1</td><td>73.3</td><td>75.4</td><td>83.1</td><td>81.8</td></tr>
    <tr><td>BIG-Bench Hard (3-shot, CoT)</td><td>61.1</td><td>38.1</td><td>47.0</td><td>81.3</td><td>65.7</td></tr>
    <tr><td>ARC-Challenge (25-shot)</td><td>78.6</td><td>53.7</td><td>67.6</td><td>93.0</td><td>85.3</td></tr>
    <tr><td>Knowledge reasoning</td><td>TriviaQA-Wiki (5-shot)</td><td>78.5</td><td>72.1</td><td>79.6</td><td>89.7</td><td>87.5</td></tr>
    <tr><td rowspan="4">Reading comprehension</td><td>SQuAD (1-shot)</td><td>76.4</td><td>72.2</td><td>72.1</td><td>85.6</td><td>82.6</td></tr>
    <tr><td>QuAC (1-shot, F1)</td><td>44.4</td><td>39.6</td><td>44.9</td><td>51.1</td><td>49.4</td></tr>
    <tr><td>BoolQ (0-shot)</td><td>75.7</td><td>65.5</td><td>66.9</td><td>79.0</td><td>73.1</td></tr>
    <tr><td>DROP (3-shot, F1)</td><td>58.4</td><td>37.9</td><td>49.8</td><td>79.7</td><td>70.2</td></tr>
  </table>
 
439
  <table>
    <tr><td><strong>Benchmark</strong></td><td><strong>Llama 3 8B</strong></td><td><strong>Llama 2 7B</strong></td><td><strong>Llama 2 13B</strong></td><td><strong>Llama 3 70B</strong></td><td><strong>Llama 2 70B</strong></td></tr>
    <tr><td>MMLU (5-shot)</td><td>68.4</td><td>34.1</td><td>47.8</td><td>82.0</td><td>52.9</td></tr>
    <tr><td>GPQA (0-shot)</td><td>34.2</td><td>21.7</td><td>22.3</td><td>39.5</td><td>21.0</td></tr>
    <tr><td>HumanEval (0-shot)</td><td>62.2</td><td>7.9</td><td>14.0</td><td>81.7</td><td>25.6</td></tr>
    <tr><td>GSM-8K (8-shot, CoT)</td><td>79.6</td><td>25.7</td><td>77.4</td><td>93.0</td><td>57.5</td></tr>
    <tr><td>MATH (4-shot, CoT)</td><td>30.0</td><td>3.8</td><td>6.7</td><td>50.4</td><td>11.6</td></tr>
  </table>
 
527
 
528
+ ### Responsibility & Safety
529
 
530
+ We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
531
 
532
+ Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
533
 
534
+ Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
535
 
 
 
 
536
 
537
+ As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
538
 
 
539
 
540
+ #### Llama 3-Instruct
541
 
542
+ As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
543
 
544
+ <span style="text-decoration:underline;">Safety</span>
545
 
546
+ For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
547
 
548
+ <span style="text-decoration:underline;">Refusals</span>
549
 
550
+ In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
551
 
552
+ We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.
553
 
 
554
 
555
+ #### Responsible release
556
 
557
+ In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
558
 
559
+ Misuse
560
 
561
+ If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
562
 
563
 
564
+ #### Critical risks
565
 
566
+ <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
567
 
568
+ We have conducted a twofold assessment of the safety of the model in this area:
569
 
 
570
 
571
 
572
+ * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
573
+ * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
574
 
 
575
 
576
+ ### <span style="text-decoration:underline;">Cyber Security </span>
577
 
578
+ We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).
579
 
 
580
 
581
+ ### <span style="text-decoration:underline;">Child Safety</span>
582
 
583
+ Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
584
 
 
585
 
586
+ ### Community
587
 
588
+ Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
589
 
590
+ Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
591
 
592
 
593
+ ## Ethical Considerations and Limitations
594
 
595
+ The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
596
 
597
+ But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
598
 
599
+ Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
600
 
 
601
 
602
+ ## Citation instructions
 
+ @article{llama3modelcard,
+   title={Llama 3 Model Card},
+   author={AI@Meta},
+   year={2024},
+   url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
+ }
 
616
+ ## Contributors
 
 
617
 
618
+ Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos