Maltehb committed
Commit: 378a3cc
Parent: f98d7ee

Update README.md

Files changed (1): README.md (+4 -4)
README.md CHANGED
@@ -25,15 +25,15 @@ Here is an example on how to load both the cased and the uncased Ælæctra model
  ```python
  from transformers import AutoTokenizer, AutoModelForPreTraining
 
- tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-cased")
- model = AutoModelForPreTraining.from_pretrained("Maltehb/-l-ctra-cased")
+ tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-danish-electra-small-cased")
+ model = AutoModelForPreTraining.from_pretrained("Maltehb/-l-ctra-danish-electra-small-cased")
  ```
 
  ```python
  from transformers import AutoTokenizer, AutoModelForPreTraining
 
- tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-uncased")
- model = AutoModelForPreTraining.from_pretrained("Maltehb/-l-ctra-uncased")
+ tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-danish-electra-small-uncased")
+ model = AutoModelForPreTraining.from_pretrained("Maltehb/-l-ctra-danish-electra-small-uncased")
  ```
 
  ### Evaluation of current Danish Language Models
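
For context, a minimal usage sketch built on the updated identifiers from the hunk above. It assumes those repository IDs resolve on the Hugging Face Hub, and the Danish example sentence is illustrative only. For an ELECTRA checkpoint, `AutoModelForPreTraining` loads the discriminator, so the returned logits score each token as original versus replaced.

```python
import torch
from transformers import AutoTokenizer, AutoModelForPreTraining

# Repository ID taken from the diff above; swap in the uncased variant if preferred.
tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-danish-electra-small-cased")
model = AutoModelForPreTraining.from_pretrained("Maltehb/-l-ctra-danish-electra-small-cased")

# Illustrative Danish sentence.
inputs = tokenizer("Ælæctra er en dansk sprogmodel.", return_tensors="pt")

with torch.no_grad():
    # ELECTRA discriminator head: one replaced-token score per input token.
    logits = model(**inputs).logits

print(logits.shape)  # (1, sequence_length)
```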