K00B404 committed on
Commit a68b8cb
Parent: 7f5b650

Update README.md

Files changed (1)
  1. README.md +5 -141
README.md CHANGED
@@ -1,14 +1,6 @@
  # Model Card for GPT_2_CODE
-
- <!-- Provide a quick summary of what the model is/does. [Optional] -->
- WIP,
- Goal is to create a small GPT2 python coder
-
-
-
-
+ -Goal is to create a small GPT2 python coder
  # Table of Contents
-
  - [Model Card for GPT_2_CODE](#model-card-for--model_id-)
  - [Table of Contents](#table-of-contents)
  - [Table of Contents](#table-of-contents-1)
@@ -44,16 +36,9 @@ Goal is to create a small GPT2 python coder
  - [Model Card Authors [optional]](#model-card-authors-optional)
  - [Model Card Contact](#model-card-contact)
  - [How to Get Started with the Model](#how-to-get-started-with-the-model)
-
-
  # Model Details
-
  ## Model Description
-
- <!-- Provide a longer summary of what this model is/does. -->
- WIP,
- Goal is to create a small GPT2 python coder
-
+ WIP,Goal is to create a small GPT2 python coder
  - **Developed by:** C, o, d, e, M, o, n, k, e, y
  - **Shared by [Optional]:** More information needed
  - **Model type:** Language model
@@ -63,186 +48,65 @@ Goal is to create a small GPT2 python coder
  - **Resources for more information:** More information needed
  - [GitHub Repo](None)
  - [Associated Paper](None)
-
  # Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
+ coding assistant
  ## Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
- <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
-
  generate python code snippets
-
-
  ## Downstream Use [Optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
- <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
-
  semi auto coder
-
-
  ## Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
- <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
-
  describe code
-
-
- # Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
-
-
- ## Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
-
  Keep Finetuning on question/python datasets
-
-
  # Training Details

  ## Training Data
-
- <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
  flytech/python-codes-25k
  espejelomar/code_search_net_python_10000_examples
-
-
  ## Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
+ Train/Val/Scheduler
  ### Preprocessing
-
  More information needed
-
  ### Speeds, Sizes, Times
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
  Epochs 3
  flytech/python-codes-25k (4600)
  Training Loss: 0.4007
  Validation Loss: 0.5526
-
  Epochs 3
  espejelomar/code_search_net_python_10000_examples (4800)
  Training Loss: 1.5355
  Validation Loss: 1.1723
-
  # Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ## Testing Data, Factors & Metrics
-
+ Manual comparison with base model
  ### Testing Data
-
- <!-- This should link to a Data Card if possible. -->
-
  flytech/python-codes-25k
  espejelomar/code_search_net_python_10000_examples
-
-
  ### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
  80/20 train/val
-
  ### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
  train/validate
  lr scheduling
-
  ## Results
-
  Better in python code generation as base gpt2-medium model
-
  # Model Examination
-
  More information needed
-
  # Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
  - **Hardware Type:** CPU and Colab T4
  - **Hours used:** 4
  - **Cloud Provider:** Google Colab
  - **Compute Region:** NL
- - **Carbon Emitted:** ???
-
- # Technical Specifications [optional]
-
  ## Model Architecture and Objective
-
  gpt2
-
  ## Compute Infrastructure
-
  More information needed
-
  ### Hardware
-
  CPU and Colab T4
-
  ### Software
-
  pytorch, custom python
-
- # Citation
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- More information needed
-
- **APA:**
-
- More information needed
-
- # Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- More information needed
-
  # More Information [optional]
-
  Experimental
-
  # Model Card Authors [optional]
-
- <!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. -->
-
  CodeMonkeyXL
-
  # Model Card Contact
-
  K00B404 huggingface
-
  # How to Get Started with the Model
-
  Use the code below to get started with the model.
-
- <details>
- <summary> Click to expand </summary>
-
- More information needed
-
- </details>
 
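The card's Training Data section names flytech/python-codes-25k and espejelomar/code_search_net_python_10000_examples but does not show how they were pulled in. A minimal sketch with the Hugging Face `datasets` library; the `train` split name is an assumption, and the schema should be inspected at runtime since the card does not document it.

```python
from datasets import load_dataset

# Load the two Hub datasets named under Training Data.
# The "train" split and the column layout are assumptions; print the
# datasets to check their actual schema before wiring up training.
python_codes = load_dataset("flytech/python-codes-25k", split="train")
code_search = load_dataset(
    "espejelomar/code_search_net_python_10000_examples", split="train"
)

print(python_codes)      # row count and column names
print(code_search)
print(python_codes[0])   # peek at one raw example
```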
 
 
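The Training Procedure and Metrics entries ("Train/Val/Scheduler", "80/20 train/val", "lr scheduling", "Epochs 3", "pytorch, custom python") suggest a plain PyTorch loop with a linear learning-rate schedule. The sketch below is one way such a loop could look, not the author's actual script: the text column name "output", the gpt2-medium starting checkpoint, batch size, learning rate, warmup steps, and the simplification of computing loss on padding tokens are all assumptions.

```python
import torch
from torch.utils.data import DataLoader
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          get_linear_schedule_with_warmup)

# Hypothetical fine-tuning loop matching the card's hints:
# 80/20 train/val split, 3 epochs, linear LR schedule.
tokenizer = AutoTokenizer.from_pretrained("gpt2-medium")
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2-medium")

raw = load_dataset("flytech/python-codes-25k", split="train")
raw = raw.train_test_split(test_size=0.2, seed=42)  # 80/20 train/val

def tokenize(batch):
    # "output" is an assumed column name; adjust to the dataset's schema.
    enc = tokenizer(batch["output"], truncation=True, max_length=512,
                    padding="max_length")
    enc["labels"] = enc["input_ids"].copy()          # causal LM labels
    return enc

train_ds = raw["train"].map(tokenize, batched=True,
                            remove_columns=raw["train"].column_names)
val_ds = raw["test"].map(tokenize, batched=True,
                         remove_columns=raw["test"].column_names)
train_ds.set_format("torch")
val_ds.set_format("torch")

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
train_loader = DataLoader(train_ds, batch_size=4, shuffle=True)
val_loader = DataLoader(val_ds, batch_size=4)

epochs = 3
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=100,
    num_training_steps=epochs * len(train_loader))

for epoch in range(epochs):
    model.train()
    for batch in train_loader:
        batch = {k: v.to(device) for k, v in batch.items()}
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()

    # validate once per epoch, as implied by the reported validation losses
    model.eval()
    val_loss = 0.0
    with torch.no_grad():
        for batch in val_loader:
            batch = {k: v.to(device) for k, v in batch.items()}
            val_loss += model(**batch).loss.item()
    print(f"epoch {epoch + 1}: val loss {val_loss / len(val_loader):.4f}")
```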
 
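Evaluation is described only as a manual comparison against the base model. One way to reproduce that kind of spot check is to generate from both checkpoints on the same prompt and compare the outputs by eye; the repo id "K00B404/GPT_2_CODE" is a placeholder guessed from the card title, not something the card confirms.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Side-by-side spot check: same prompt through the base gpt2-medium
# checkpoint and through the fine-tuned model, outputs compared manually.
prompt = "# Write a function that reverses a string\ndef reverse_string(s):"

for name in ("gpt2-medium", "K00B404/GPT_2_CODE"):   # second id is a guess
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False,
                             pad_token_id=tokenizer.eos_token_id)
    print(f"--- {name} ---")
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```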
 
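Finally, the "How to Get Started with the Model" section says to use the code below, but this commit removes the placeholder without adding any. A plausible minimal example with the transformers `pipeline` API, again assuming the hypothetical repo id "K00B404/GPT_2_CODE".

```python
from transformers import pipeline

# Minimal getting-started sketch; "K00B404/GPT_2_CODE" is a placeholder
# repo id inferred from the card title, substitute the real checkpoint.
generator = pipeline("text-generation", model="K00B404/GPT_2_CODE")
result = generator(
    "# Python function to compute the factorial of n\ndef factorial(n):",
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
)
print(result[0]["generated_text"])
```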