Tristan committed on
Commit a69b4fb
1 Parent(s): b7e2e18

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -9,7 +9,7 @@ language: en
  This is a more up-to-date version of the [original BERT](https://huggingface.co/bert-base-cased) and [original RoBERTa](https://huggingface.co/roberta-base).
  In addition to being more up-to-date, it also tends to perform better than the original BERT on standard benchmarks.
  We think it is fair to directly compare our model to the original BERT because our model was trained with about the same level of compute as the original BERT, and the architecture of BERT and RoBERTa are basically the same.
- The original RoBERTa takes an order of magnitude more compute, although our model is also not that different in performance from RoBERTa on standard benchmarks.
+ The original RoBERTa takes an order of magnitude more compute, although our model is also not that different in performance from the original RoBERTa on many standard benchmarks.
  Our model was trained on a cleaned October 2022 snapshot of Common Crawl and Wikipedia.

  This model was created as part of the OLM project, which has the goal of continuously training and releasing models that are up-to-date and comparable in standard language model performance to their static counterparts.
@@ -74,7 +74,7 @@ The model was trained according to the OLM BERT/RoBERTa instructions at this [re

  ## Evaluation results

- The model achieves the following results after being tuned on GLUE tasks:
+ The model achieves the following results after tuning on GLUE tasks:

  | Task | Metric | Original BERT | OLM RoBERTa Oct 2022 (Ours) |
  |:-----|:---------|----------------:|----------------------------:|
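For context on the GLUE tuning the diff refers to, here is a minimal sketch of fine-tuning this checkpoint on one GLUE task (SST-2) with `transformers` and `datasets`. The repo id `olm/olm-roberta-base-oct-2022` and the training arguments are assumptions for illustration, not values taken from this commit or the OLM training repo.

```python
# Minimal sketch: fine-tune the OLM RoBERTa checkpoint on GLUE SST-2.
# The checkpoint name below is an assumption; substitute the actual
# repo id from this model card.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

checkpoint = "olm/olm-roberta-base-oct-2022"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# SST-2: single-sentence binary sentiment classification.
dataset = load_dataset("glue", "sst2")

def tokenize(batch):
    # Padding is left to the Trainer's default collator (per-batch padding).
    return tokenizer(batch["sentence"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="olm-roberta-sst2", num_train_epochs=3),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,
)
trainer.train()
# Without a compute_metrics function this reports only the eval loss.
print(trainer.evaluate())
```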