elmadany committed on
Commit ad72751
1 Parent(s): 18507ab

Update README.md

Files changed (1):
  README.md (+1, −1)
README.md CHANGED
@@ -7,7 +7,7 @@
  In addition, we provide the three models in two architectures, small and base. For all models, we use a learning rate of 0.01, a batch size of 128 sequences, and a maximum sequence length of 512, except for AraT5-tweet, where a maximum sequence length of 128 is used. We use the original implementation of T5 in the TensorFlow framework to train the models, for 1M steps. Training took ∼80 days on one Google Cloud TPU with 8 cores (v3-8) from the TensorFlow Research Cloud (TFRC).
 
  # How to use AraT5 models
- This is an example for fine-tuning **AraT5-base** for News Title Generation on the Aranews dataset [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1GFOGolWPIfDvYdSNdGFrOXwu3Gu28k2b?usp=sharing)
+ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1GFOGolWPIfDvYdSNdGFrOXwu3Gu28k2b?usp=sharing) This is an example of fine-tuning **AraT5-base** for News Title Generation on the AraNews dataset.

  For more details, please visit our [GitHub repository](https://github.com/UBC-NLP/araT5).
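
For a quick start outside the Colab notebook, the sketch below loads an AraT5 checkpoint with the Hugging Face `transformers` seq2seq API. This is a minimal sketch, not the authors' pipeline: the `UBC-NLP/AraT5-base` model ID, the sample input, and all generation settings here are assumptions for illustration.

```python
# Minimal usage sketch (assumed setup, not the authors' recipe):
# load an AraT5 checkpoint and generate text with the standard
# Hugging Face seq2seq API. Model ID and settings are illustrative.
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "UBC-NLP/AraT5-base"  # assumed checkpoint name on the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# Illustrative Arabic input; for title generation you would feed an
# article body and train/fine-tune the model to emit a headline.
article = "أعلنت الشركة عن نتائجها المالية للربع الأول من العام الجاري"
inputs = tokenizer(article, return_tensors="pt",
                   truncation=True, max_length=512)

# Beam search settings below are placeholders, not tuned values.
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For the full fine-tuning workflow (data preparation, training loop, and evaluation), the linked Colab notebook and the GitHub repository above are the authoritative references.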