pszemraj committed
Commit 0c542e3
1 parent: d7df3d3

Update README.md

Files changed (1)
  1. README.md +8 -2
README.md CHANGED
@@ -61,16 +61,20 @@ parameters:

A fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) for grammar correction on an expanded version of the [JFLEG](https://paperswithcode.com/dataset/jfleg) dataset.

+
## Example

![example](https://i.imgur.com/PIhrc7E.png)

+ Compare vs. the original [grammar-synthesis-large](https://huggingface.co/pszemraj/grammar-synthesis-large).
+
---

## usage in Python

+ > There's a colab notebook that already has this basic version implemented (_click on the Open in Colab button_)

- after `pip install transformers`):
+ After `pip install transformers` run the following code:

```python
from transformers import pipeline
@@ -84,7 +88,9 @@ results = corrector(raw_text)
print(results)
```

- Compare vs. the original [grammar-synthesis-large](https://huggingface.co/pszemraj/grammar-synthesis-large).
+ **For Batch Inference:** see [this discussion thread](https://huggingface.co/pszemraj/flan-t5-large-grammar-synthesis/discussions/1) for details, but essentially the dataset consists of several sentences at a time, and so I'd recommend running inference **in the same fashion:** batches of 64-96 tokens ish (or, 2-3 sentences split with regex)
+
+

  ---
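
The batch-inference note added in the second hunk stays at the level of prose, so here is a minimal sketch of what it describes: split the input into chunks of 2-3 sentences with a regex and run each chunk through the same pipeline the usage snippet sets up. The `text2text-generation` task name and the `pszemraj/flan-t5-large-grammar-synthesis` model id are taken from the discussion-thread URL rather than from the hunks shown here, and the sample text is purely illustrative.

```python
import re

from transformers import pipeline

# Assumed setup: a text2text-generation pipeline with the model id from the
# discussion-thread URL above; the README's own snippet is only partially
# visible between the two hunks.
corrector = pipeline(
    "text2text-generation",
    model="pszemraj/flan-t5-large-grammar-synthesis",
)

# Purely illustrative multi-sentence input.
raw_text = (
    "the quick brown fox jumpd over the lazy dogs but it wasnt fast enough. "
    "he dont know what hes doing half the time. "
    "they was going to the store yesterday. "
    "i seen him there last week and he seemed fine."
)

# Naive regex sentence split, then group up to 3 sentences per chunk as the
# batch-inference note recommends.
sentences = re.split(r"(?<=[.!?])\s+", raw_text.strip())
chunks = [" ".join(sentences[i : i + 3]) for i in range(0, len(sentences), 3)]

# Correct each chunk and stitch the results back together.
corrected = " ".join(corrector(chunk)[0]["generated_text"] for chunk in chunks)
print(corrected)
```

Chunks of 2-3 sentences stay roughly within the 64-96 token window the note mentions, which is what it means by running inference "in the same fashion" as the dataset.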