pszemraj committed
Commit fa732c4
1 Parent(s): 4016509

Update README.md

Files changed (1)
1. README.md +21 -0
README.md CHANGED
@@ -103,6 +103,27 @@ The intent is to create a text2text language model that successfully completes "
 
Compare some of the heavier-error examples on [other grammar correction models](https://huggingface.co/models?dataset=dataset:jfleg) to see the difference :)
 
+ ### ONNX Checkpoint
+
+ This model has been converted to ONNX and can be loaded/used with Hugging Face's `optimum` library.
+
+ You first need to [install optimum](https://huggingface.co/docs/optimum/installation):
+
+ ```bash
+ pip install optimum[onnxruntime]
+ # ^ if you want to use a different runtime, read their docs
+ ```
+
+ Load with the optimum `pipeline`:
+
+ ```python
+ from optimum.pipelines import pipeline
+
+ corrector = pipeline(
+     "text2text-generation", model=corrector_model_name, accelerator="ort"
+ )
+ # use as normal
+ ```
+
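For reference, a minimal usage sketch of the pipeline created above, assuming `corrector_model_name` is set to this repository's model id (written out below as `pszemraj/flan-t5-large-grammar-synthesis`) and using a made-up example sentence:

```python
from optimum.pipelines import pipeline

# assumption: the model id of this repository
corrector_model_name = "pszemraj/flan-t5-large-grammar-synthesis"

# run the ONNX weights with ONNX Runtime ("ort")
corrector = pipeline(
    "text2text-generation", model=corrector_model_name, accelerator="ort"
)

# behaves like a regular transformers text2text pipeline
raw_text = "me and him goes to the store for buy some breads yesterday"  # made-up example
print(corrector(raw_text)[0]["generated_text"])
```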
### Other checkpoints
 
If trading a slight decrease in grammatical correction quality for faster inference speed makes sense for your use case, check out the **[base](https://huggingface.co/pszemraj/grammar-synthesis-base)** and **[small](https://huggingface.co/pszemraj/grammar-synthesis-small)** checkpoints fine-tuned from the relevant t5 checkpoints.
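As a sketch of that swap (assuming the smaller checkpoints expose the same text2text interface), the small checkpoint can be loaded with the plain `transformers` pipeline:

```python
from transformers import pipeline

# hypothetical swap: the small checkpoint trades some correction quality for speed
corrector = pipeline(
    "text2text-generation",
    model="pszemraj/grammar-synthesis-small",
)

print(corrector("me and him goes to the store yesterday")[0]["generated_text"])
```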