Anustup committed on
Commit
038becc
1 Parent(s): 0ba81ce

Upload 3 files

Files changed (3)
  1. requirements.txt +3 -0
  2. tf.csv +2048 -0
  3. utils.py +84 -0
requirements.txt ADDED
@@ -0,0 +1,3 @@
1
+ streamlit~=1.32.0
2
+ requests~=2.31.0
3
+ pandas
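
For orientation, a minimal usage sketch (not taken from this commit — utils.py is listed above but its contents are not shown here): the pinned dependencies suggest a small Streamlit app that loads tf.csv with pandas. The column names used below come from the CSV header that follows; the file name and all app logic are assumptions.

```python
# Hypothetical sketch only -- not the utils.py uploaded in this commit.
import pandas as pd
import streamlit as st

# tf.csv header (see the added file below): Issue Title,Description,Created At,Comments
df = pd.read_csv("tf.csv")

st.title("TensorFlow issue browser")
query = st.text_input("Filter issues by title")
if query:
    df = df[df["Issue Title"].str.contains(query, case=False, na=False)]
st.dataframe(df[["Issue Title", "Created At", "Comments"]])
```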
tf.csv ADDED
@@ -0,0 +1,2048 @@
1
+ Issue Title,Description,Created At,Comments
2
+ [xla:gpu] Extend collective-permute decomposer to also make decision for,"[xla:gpu] Extend collective-permute decomposer to also make decision for
3
+ Send-Recv pipeling and record the decision with frontend attributes.
4
+
5
+ We first use a simple heuristics to decide on the decomposition of which
6
+ CollectivePermute operations will be pipelined. We will only pipeline
7
+ CollectivePermute that sends loop input data, and pick the first
8
+ pipelineable CollectivePermute for pipelining. Then, if there is another
9
+ pipelineable CollectivePermute that forms a cycle with the to-be-pipelined
10
+ CollectivePermute, we will pipeline both CollectivePermute. Otherwise, we will
11
+ only pipeline one CollectivePermute.
12
+
13
+ Then, when we decompose CollectivePermute operations, we add a frontend
14
+ attribute to the Send/Recv operation to represent the pipelining decision.
15
+
16
+ Add tests.
17
+ ",2024-03-11T05:16:45Z,0
18
+ Microoptmize the conditions in IsArrayType.,"Microoptmize the conditions in IsArrayType.
19
+ ",2024-03-11T04:30:26Z,0
20
+ Do not call Shape::is_static when unnecessary.,"Do not call Shape::is_static when unnecessary.
21
+ ",2024-03-11T04:26:26Z,0
22
+ Eliminate unnecessary copies for HloSharding.,"Eliminate unnecessary copies for HloSharding.
23
+ ",2024-03-11T04:25:26Z,0
24
+ Add Dynamic Range Quantized op support for `op_stat_pass.cc`.,"Add Dynamic Range Quantized op support for `op_stat_pass.cc`.
25
+
26
+ - Cleanup header imports as well.
27
+ ",2024-03-11T03:12:47Z,0
28
+ Add check conditions in `quantization_driver_test.cc`.,"Add check conditions in `quantization_driver_test.cc`.
29
+
30
+ - Adds more rigorous checks for desired states in intermediate testing stages.
31
+ - Renames and rewrites `IsEmpty` and `HasQuantParams` for clarity.
32
+ ",2024-03-11T02:17:30Z,0
33
+ 2.16.1 libtensorflow binary,"### Issue type
34
+
35
+ Support
36
+
37
+ ### Have you reproduced the bug with TensorFlow Nightly?
38
+
39
+ Yes
40
+
41
+ ### Source
42
+
43
+ binary
44
+
45
+ ### TensorFlow version
46
+
47
+ 2.16.1
48
+
49
+ ### Custom code
50
+
51
+ No
52
+
53
+ ### OS platform and distribution
54
+
55
+ Linux
56
+
57
+ ### Mobile device
58
+
59
+ _No response_
60
+
61
+ ### Python version
62
+
63
+ _No response_
64
+
65
+ ### Bazel version
66
+
67
+ _No response_
68
+
69
+ ### GCC/compiler version
70
+
71
+ _No response_
72
+
73
+ ### CUDA/cuDNN version
74
+
75
+ _No response_
76
+
77
+ ### GPU model and memory
78
+
79
+ Yes
80
+
81
+ ### Current behavior?
82
+
83
+ Hi!
84
+
85
+ Tensorflow 2.16.1 has been [released](https://github.com/tensorflow/tensorflow/releases/tag/v2.16.1) recently. However, the latest archive with the `libtensorflow` on the official website [is still 2.15](https://www.tensorflow.org/install/lang_c). Where can I get the latest 2.16.1 `libtensorflow` with GPU support for Linux?
86
+
87
+ ### Standalone code to reproduce the issue
88
+
89
+ ```shell
90
+ -
91
+ ```
92
+
93
+
94
+ ### Relevant log output
95
+
96
+ _No response_",2024-03-10T20:56:00Z,0
97
+ Make function loading more concurrent with `TF_ENABLE_EAGER_CLIENT_STREAMING_ENQUEUE` set to `false`,"Make function loading more concurrent with `TF_ENABLE_EAGER_CLIENT_STREAMING_ENQUEUE` set to `false`
98
+ ",2024-03-10T19:12:58Z,0
99
+ Testing a temporary code change.,"Testing a temporary code change.
100
+ ",2024-03-10T18:13:15Z,0
101
+ [XLA:Python] Port py_values to nanobind.,"[XLA:Python] Port py_values to nanobind.
102
+ ",2024-03-10T15:11:31Z,0
103
+ tf.tensor_scatter_nd_add: Aborted (core dumped),"### Issue type
104
+
105
+ Bug
106
+
107
+ ### Have you reproduced the bug with TensorFlow Nightly?
108
+
109
+ Yes
110
+
111
+ ### Source
112
+
113
+ binary
114
+
115
+ ### TensorFlow version
116
+
117
+ tf 2.15
118
+
119
+ ### Custom code
120
+
121
+ Yes
122
+
123
+ ### OS platform and distribution
124
+
125
+ Ubuntu 20.04
126
+
127
+ ### Mobile device
128
+
129
+ _No response_
130
+
131
+ ### Python version
132
+
133
+ 3.9
134
+
135
+ ### Bazel version
136
+
137
+ _No response_
138
+
139
+ ### GCC/compiler version
140
+
141
+ _No response_
142
+
143
+ ### CUDA/cuDNN version
144
+
145
+ _No response_
146
+
147
+ ### GPU model and memory
148
+
149
+ _No response_
150
+
151
+ ### Current behavior?
152
+
153
+ Under specific input, `tf.tensor_scatter_nd_add` encounters ""Aborted (core dumped)"".
154
+
155
+ ### Standalone code to reproduce the issue
156
+
157
+ ```shell
158
+ import tensorflow as tf
159
+
160
+ # Generate input data
161
+ input_tensor = tf.zeros([15, 15, 15])
162
+ indices = tf.constant([[[0, 0, 0], [1, 1, 1]], [[2, 2, 2], [3, 3, 3]], [[4, 4, 4], [5, 5, 5]], [[6, 6, 6], [7, 7, 7]], [[8, 8, 8], [9, 9, 9]], [[10, 10, 10], [11, 11, 11]], [[12, 12, 12], [13, 13, 13]], [[14, 14, 14], [0, 0, 0]], [[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]], [[5, 5, 5], [6, 6, 6]], [[7, 7, 7], [8, 8, 8]], [[9, 9, 9], [10, 10, 10]], [[11, 11, 11], [12, 12, 12]], [[13, 13, 13], [14, 14, 14]]])
163
+ updates = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0]) # Cast updates to float
164
+
165
+ # Invoke tf.tensor_scatter_nd_add
166
+ result = tf.tensor_scatter_nd_add(input_tensor, indices, updates)
167
+
168
+ # Print the result
169
+ print(result)
170
+ ```
171
+
172
+
173
+ ### Relevant log output
174
+
175
+ ```shell
176
+ 2024-03-10 14:59:51.853766: F tensorflow/core/framework/tensor_shape.cc:357] Check failed: d < dims() (1 vs. 1)
177
+ Aborted (core dumped)
178
+ ```
179
+ ",2024-03-10T15:00:49Z,0
180
+ tf.raw_ops.UnicodeEncode: Segmentation fault (core dumped),"### Issue type
181
+
182
+ Bug
183
+
184
+ ### Have you reproduced the bug with TensorFlow Nightly?
185
+
186
+ Yes
187
+
188
+ ### Source
189
+
190
+ binary
191
+
192
+ ### TensorFlow version
193
+
194
+ tf 2.15
195
+
196
+ ### Custom code
197
+
198
+ Yes
199
+
200
+ ### OS platform and distribution
201
+
202
+ Ubuntu 20.04
203
+
204
+ ### Mobile device
205
+
206
+ _No response_
207
+
208
+ ### Python version
209
+
210
+ 3.9
211
+
212
+ ### Bazel version
213
+
214
+ _No response_
215
+
216
+ ### GCC/compiler version
217
+
218
+ _No response_
219
+
220
+ ### CUDA/cuDNN version
221
+
222
+ _No response_
223
+
224
+ ### GPU model and memory
225
+
226
+ _No response_
227
+
228
+ ### Current behavior?
229
+
230
+ Under specific input, `tf.raw_ops.UnicodeEncode` encounters ""Segmentation fault (core dumped)"".
231
+
232
+ ### Standalone code to reproduce the issue
233
+
234
+ ```shell
235
+ import tensorflow as tf
236
+
237
+ # Generate input data
238
+ input_values = tf.constant([72, 101, 108, 108, 111, 32, 87, 111, 114, 108, 100]) # Unicode codepoints for ""Hello World""
239
+ input_splits = tf.constant([[0, 5, 11]]) # Split indices for the input_values with two dimensions
240
+ output_encoding = ""UTF-8""
241
+
242
+ # Invoke tf.raw_ops.unicode_encode
243
+ output = tf.raw_ops.UnicodeEncode(input_values=input_values, input_splits=input_splits, output_encoding=output_encoding)
244
+
245
+ # Print the output
246
+ print(output)
247
+ ```
248
+
249
+
250
+ ### Relevant log output
251
+
252
+ ```shell
253
+ Segmentation fault (core dumped)
254
+ ```
255
+ ",2024-03-10T14:59:08Z,0
256
+ tf.raw_ops.TensorScatterSub: Aborted (core dumped),"### Issue type
257
+
258
+ Bug
259
+
260
+ ### Have you reproduced the bug with TensorFlow Nightly?
261
+
262
+ Yes
263
+
264
+ ### Source
265
+
266
+ binary
267
+
268
+ ### TensorFlow version
269
+
270
+ tf 2.15
271
+
272
+ ### Custom code
273
+
274
+ Yes
275
+
276
+ ### OS platform and distribution
277
+
278
+ Ubuntu 20.04
279
+
280
+ ### Mobile device
281
+
282
+ _No response_
283
+
284
+ ### Python version
285
+
286
+ 3.9
287
+
288
+ ### Bazel version
289
+
290
+ _No response_
291
+
292
+ ### GCC/compiler version
293
+
294
+ _No response_
295
+
296
+ ### CUDA/cuDNN version
297
+
298
+ _No response_
299
+
300
+ ### GPU model and memory
301
+
302
+ _No response_
303
+
304
+ ### Current behavior?
305
+
306
+ Under specific input, `tf.raw_ops.TensorScatterSub` encounters ""Aborted (core dumped)"".
307
+
308
+ ### Standalone code to reproduce the issue
309
+
310
+ ```shell
311
+ import tensorflow as tf
312
+
313
+ # Generate input data
314
+ tensor = tf.constant([1, 2, 3, 4, 5])
315
+ indices = tf.constant([[[1], [3]], [[0], [2]]]) # Nested structure for indices
316
+ updates = tf.constant([10, 20])
317
+
318
+ # Invoke tf.raw_ops.TensorScatterSub
319
+ result = tf.raw_ops.TensorScatterSub(tensor=tensor, indices=indices, updates=updates)
320
+
321
+ # Print the result
322
+ print(result)
323
+ ```
324
+
325
+
326
+ ### Relevant log output
327
+
328
+ ```shell
329
+ 2024-03-10 14:55:41.958738: F tensorflow/core/framework/tensor_shape.cc:357] Check failed: d < dims() (1 vs. 1)
330
+ Aborted (core dumped)
331
+ ```
332
+ ",2024-03-10T14:57:36Z,0
333
+ tf.raw_ops.SparseConcat: Overflow bug ,"### Issue type
334
+
335
+ Bug
336
+
337
+ ### Have you reproduced the bug with TensorFlow Nightly?
338
+
339
+ Yes
340
+
341
+ ### Source
342
+
343
+ binary
344
+
345
+ ### TensorFlow version
346
+
347
+ tf 2.15
348
+
349
+ ### Custom code
350
+
351
+ Yes
352
+
353
+ ### OS platform and distribution
354
+
355
+ Ubuntu 20.04
356
+
357
+ ### Mobile device
358
+
359
+ _No response_
360
+
361
+ ### Python version
362
+
363
+ 3.9
364
+
365
+ ### Bazel version
366
+
367
+ _No response_
368
+
369
+ ### GCC/compiler version
370
+
371
+ _No response_
372
+
373
+ ### CUDA/cuDNN version
374
+
375
+ _No response_
376
+
377
+ ### GPU model and memory
378
+
379
+ _No response_
380
+
381
+ ### Current behavior?
382
+
383
+ Under specific input, `tf.raw_ops.SparseConcat` encounters overflow bug.
384
+
385
+ ### Standalone code to reproduce the issue
386
+
387
+ ```shell
388
+ import tensorflow as tf
389
+
390
+ # Generate input data
391
+ indices1 = tf.constant([[0, 0], [1, 2]], dtype=tf.int64)
392
+ values1 = tf.constant([1, 2], dtype=tf.float32)
393
+ shape1 = tf.constant([3, 4], dtype=tf.int64)
394
+
395
+ indices2 = tf.constant([[0, 1], [2, 3]], dtype=tf.int64)
396
+ values2 = tf.constant([3, 4], dtype=tf.float32)
397
+ shape2 = tf.constant([-1, 4], dtype=tf.int64) # Mutated shape with the negative bit set
398
+
399
+ # Invoke tf.raw_ops.SparseConcat
400
+ concatenated_sparse = tf.raw_ops.SparseConcat(
401
+ indices=[indices1, indices2],
402
+ values=[values1, values2],
403
+ shapes=[shape1, shape2],
404
+ concat_dim=0
405
+ )
406
+
407
+ print(concatenated_sparse)
408
+ ```
409
+
410
+
411
+ ### Relevant log output
412
+
413
+ ```shell
414
+ tensorflow.python.framework.errors_impl.InternalError: {{function_node __wrapped__SparseConcat_N_2_device_/job:localhost/replica:0/task:0/device:CPU:0}} Encountered overflow from large input shape. [Op:SparseConcat] name:
415
+ ```
416
+ ",2024-03-10T14:55:13Z,0
417
+ tf.raw_ops.FusedPadConv2D: Aborted (core dumped),"### Issue type
418
+
419
+ Bug
420
+
421
+ ### Have you reproduced the bug with TensorFlow Nightly?
422
+
423
+ Yes
424
+
425
+ ### Source
426
+
427
+ binary
428
+
429
+ ### TensorFlow version
430
+
431
+ tf 2.15
432
+
433
+ ### Custom code
434
+
435
+ Yes
436
+
437
+ ### OS platform and distribution
438
+
439
+ Ubuntu 20.04
440
+
441
+ ### Mobile device
442
+
443
+ _No response_
444
+
445
+ ### Python version
446
+
447
+ 3.9
448
+
449
+ ### Bazel version
450
+
451
+ _No response_
452
+
453
+ ### GCC/compiler version
454
+
455
+ _No response_
456
+
457
+ ### CUDA/cuDNN version
458
+
459
+ _No response_
460
+
461
+ ### GPU model and memory
462
+
463
+ _No response_
464
+
465
+ ### Current behavior?
466
+
467
+ Under specific input, `tf.raw_ops.FusedPadConv2D` encounters ""Aborted (core dumped)"".
468
+
469
+ ### Standalone code to reproduce the issue
470
+
471
+ ```shell
472
+ import tensorflow as tf
473
+
474
+ # Generate input data
475
+ input_data = tf.random.normal([3, 10, 10])
476
+
477
+ # Define paddings
478
+ paddings = tf.constant([[0, 0], [1, 1], [1, 1]])
479
+
480
+ # Define filter
481
+ filter = tf.random.normal([3, 3, 3, 16])
482
+
483
+ # Define mode
484
+ mode = ""REFLECT"" # Change mode to ""REFLECT"" or ""SYMMETRIC""
485
+
486
+ # Define strides
487
+ strides = [1, 1, 1, 1]
488
+
489
+ # Define padding
490
+ padding = ""VALID""
491
+
492
+ # Invoke tf.raw_ops.FusedPadConv2D
493
+ output = tf.raw_ops.FusedPadConv2D(input=input_data, paddings=paddings, filter=filter, mode=mode, strides=strides, padding=padding)
494
+
495
+ print(output)
496
+ ```
497
+
498
+
499
+ ### Relevant log output
500
+
501
+ ```shell
502
+ 2024-03-10 14:49:28.555826: F tensorflow/core/framework/tensor_shape.cc:357] Check failed: d < dims() (3 vs. 3)
503
+ Aborted (core dumped)
504
+ ```
505
+ ",2024-03-10T14:51:07Z,0
506
+ tf.tensor_scatter_nd_update: Aborted (core dumped),"### Issue type
507
+
508
+ Bug
509
+
510
+ ### Have you reproduced the bug with TensorFlow Nightly?
511
+
512
+ Yes
513
+
514
+ ### Source
515
+
516
+ binary
517
+
518
+ ### TensorFlow version
519
+
520
+ tf 2.15
521
+
522
+ ### Custom code
523
+
524
+ Yes
525
+
526
+ ### OS platform and distribution
527
+
528
+ Ubuntu 20.04
529
+
530
+ ### Mobile device
531
+
532
+ _No response_
533
+
534
+ ### Python version
535
+
536
+ 3.9
537
+
538
+ ### Bazel version
539
+
540
+ _No response_
541
+
542
+ ### GCC/compiler version
543
+
544
+ _No response_
545
+
546
+ ### CUDA/cuDNN version
547
+
548
+ _No response_
549
+
550
+ ### GPU model and memory
551
+
552
+ _No response_
553
+
554
+ ### Current behavior?
555
+
556
+ Under specific input, `tf.tensor_scatter_nd_update` encounters ""Aborted (core dumped)"".
557
+
558
+ ### Standalone code to reproduce the issue
559
+
560
+ ```shell
561
+ import tensorflow as tf
562
+
563
+ # Generate input data
564
+ input_tensor = tf.zeros([2, 2, 2]) # A tensor that contains other tensors, creating a nested structure
565
+ indices = tf.constant([[[0, 0, 0], [1, 1, 1]], [[1, 0, 1], [0, 1, 0]]])
566
+ updates = tf.constant([1, 2], dtype=tf.float32) # Cast updates to float
567
+
568
+ # Invoke tf.tensor_scatter_nd_update
569
+ result = tf.tensor_scatter_nd_update(input_tensor, indices, updates)
570
+
571
+ # Print the result
572
+ print(result)
573
+ ```
574
+
575
+
576
+ ### Relevant log output
577
+
578
+ ```shell
579
+ 2024-03-10 14:36:43.315650: F tensorflow/core/framework/tensor_shape.cc:357] Check failed: d < dims() (1 vs. 1)
580
+ Aborted (core dumped)
581
+ ```
582
+ ",2024-03-10T14:48:19Z,0
583
+ failed to compile a tensorflow C++ example. # Error incompatible with your Protocol Buffer headers ,"### Issue type
584
+
585
+ Bug
586
+
587
+ ### Have you reproduced the bug with TensorFlow Nightly?
588
+
589
+ No
590
+
591
+ ### Source
592
+
593
+ source
594
+
595
+ ### TensorFlow version
596
+
597
+ tf 2.15.0
598
+
599
+ ### Custom code
600
+
601
+ No
602
+
603
+ ### OS platform and distribution
604
+
605
+ Linux Ubuntu 22.04
606
+
607
+ ### Mobile device
608
+
609
+ _No response_
610
+
611
+ ### Python version
612
+
613
+ 3.10.12
614
+
615
+ ### Bazel version
616
+
617
+ 6.1.0
618
+
619
+ ### GCC/compiler version
620
+
621
+ 11.4.0
622
+
623
+ ### CUDA/cuDNN version
624
+
625
+ 12.2/8.9.7
626
+
627
+ ### GPU model and memory
628
+
629
+ GTX 3090/24G
630
+
631
+ ### Current behavior?
632
+
633
+ I first compiled TensorFlow using Bazel according to the official documentation, these are my operations:
634
+ `git clone https://github.com/tensorflow/tensorflow`
635
+ `cd tensorflow`
636
+ `git checkout r2.15`
637
+ `./configure `
638
+ and information is:
639
+
640
+ >
641
+ > You have bazel 6.1.0 installed.
642
+ > Please specify the location of python. [Default is /usr/bin/python3]:
643
+ >
644
+ >
645
+ > Found possible Python library paths:
646
+ > /usr/lib/python3/dist-packages
647
+ > /usr/local/lib/python3.10/dist-packages
648
+ > Please input the desired Python library path to use. Default is [/usr/lib/python3/dist-packages]
649
+ >
650
+ > Do you wish to build TensorFlow with ROCm support? [y/N]: n
651
+ > No ROCm support will be enabled for TensorFlow.
652
+ >
653
+ > Do you wish to build TensorFlow with CUDA support? [y/N]: y
654
+ > CUDA support will be enabled for TensorFlow.
655
+ >
656
+ > Do you wish to build TensorFlow with TensorRT support? [y/N]: n
657
+ > No TensorRT support will be enabled for TensorFlow.
658
+ >
659
+ > Found CUDA 12.2 in:
660
+ > /usr/local/cuda-12.2/targets/x86_64-linux/lib
661
+ > /usr/local/cuda-12.2/targets/x86_64-linux/include
662
+ > Found cuDNN 8 in:
663
+ > /usr/lib/x86_64-linux-gnu
664
+ > /usr/include
665
+ >
666
+ >
667
+ > Please specify a list of comma-separated CUDA compute capabilities you want to build with.
668
+ > You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus. Each capability can be specified as ""x.y"" or ""compute_xy"" to include both virtual and binary GPU code, or as ""sm_xy"" to only include the binary code.
669
+ > Please note that each additional compute capability significantly increases your build time and binary size, and that TensorFlow only supports compute capabilities >= 3.5 [Default is: 8.6]:
670
+ >
671
+ >
672
+ > Do you want to use clang as CUDA compiler? [Y/n]: n
673
+ > nvcc will be used as CUDA compiler.
674
+ >
675
+ > Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:
676
+ >
677
+ >
678
+ > Please specify optimization flags to use during compilation when bazel option ""--config=opt"" is specified [Default is -Wno-sign-compare]:
679
+ >
680
+ >
681
+ > Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: n
682
+ > Not configuring the WORKSPACE for Android builds.
683
+ >
684
+ > Preconfigured Bazel build configs. You can use any of the below by adding ""--config=<>"" to your build command. See .bazelrc for more details.
685
+ > --config=mkl # Build with MKL support.
686
+ > --config=mkl_aarch64 # Build with oneDNN and Compute Library for the Arm Architecture (ACL).
687
+ > --config=monolithic # Config for mostly static monolithic build.
688
+ > --config=numa # Build with NUMA support.
689
+ > --config=dynamic_kernels # (Experimental) Build kernels into separate shared objects.
690
+ > --config=v1 # Build with TensorFlow 1 API instead of TF 2 API.
691
+ > Preconfigured Bazel build configs to DISABLE default on features:
692
+ > --config=nogcp # Disable GCP support.
693
+ > --config=nonccl # Disable NVIDIA NCCL support.
694
+ > Configuration finished
695
+
696
+ and I then compile with bazel:
697
+ `bazel build --config=cuda tensorflow:tensorflow_cc`
698
+ `bazel build tensorflow:install_headers`
699
+
700
+ There were no issues, I successfully compiled the header files and link libraries I wanted in the `bazel-bin` folder.
701
+ But when I try to compile a C++ sample:
702
+ ```
703
+ #include <tensorflow/core/platform/env.h>
704
+ #include <tensorflow/core/public/session.h>
705
+
706
+ #include <iostream>
707
+
708
+ using namespace std;
709
+ using namespace tensorflow;
710
+
711
+ int main()
712
+ {
713
+ Session* session;
714
+ Status status = NewSession(SessionOptions(), &session);
715
+ if (!status.ok()) {
716
+ cout << status.ToString() << ""\n"";
717
+ return 1;
718
+ }
719
+ cout << ""Session successfully created.\n"";
720
+ }
721
+
722
+ ```
723
+
724
+ command is
725
+ `g++ -std=c++14 -o tf_example -I/home/wangchen/tensorflow/bazel-bin/tensorflow/include -L/home/wangchen/tensorflow/bazel-bin/tensorflow/libtensorflow_cc -L/home/wangchen/tensorflow/bazel-bin/tensorflow/libtensorflow_framework -ltensorflow_framework -ltensorflow_cc tf_example.cpp `
726
+
727
+ I got an error #error This file was generated by an older version of protoc which is incompatible with your Protocol Buffer headers. Please regenerate this file with a newer version of protoc.
728
+
729
+ My protobuf is compiled from official repo, the versions are:
730
+ ```
731
+ {
732
+ ""23.x"": {
733
+ ""protoc_version"": ""23.4"",
734
+ ""lts"": false,
735
+ ""date"": ""2023-07-05"",
736
+ ""languages"": {
737
+ ""cpp"": ""4.23.4"",
738
+ ""csharp"": ""3.23.4"",
739
+ ""java"": ""3.23.4"",
740
+ ""javascript"": ""3.23.4"",
741
+ ""objectivec"": ""3.23.4"",
742
+ ""php"": ""3.23.4"",
743
+ ""python"": ""4.23.4"",
744
+ ""ruby"": ""3.23.4""
745
+ }
746
+ }
747
+ }
748
+ ```
749
+ I suspect there might be some protobuf versions that are incompatible with my TensorFlow.
750
+ What methods should I use to obtain the correct version?
751
+ I would greatly appreciate any proposed solutions.
752
+
753
+
754
+ ### Standalone code to reproduce the issue
755
+
756
+ ```shell
757
+ #include <tensorflow/core/platform/env.h>
758
+ #include <tensorflow/core/public/session.h>
759
+
760
+ #include <iostream>
761
+
762
+ using namespace std;
763
+ using namespace tensorflow;
764
+
765
+ int main()
766
+ {
767
+ Session* session;
768
+ Status status = NewSession(SessionOptions(), &session);
769
+ if (!status.ok()) {
770
+ cout << status.ToString() << ""\n"";
771
+ return 1;
772
+ }
773
+ cout << ""Session successfully created.\n"";
774
+ }
775
+
776
+ ```
777
+ ```
778
+
779
+
780
+ ### Relevant log output
781
+
782
+ ```shell
783
+ wangchen@wc:~/tfc++test$ g++ -std=c++14 -o tf_example -I/home/wangchen/tensorflow/bazel-bin/tensorflow/include -L/home/wangchen/tensorflow/bazel-bin/tensorflow/libtensorflow_cc -L/home/wangchen/tensorflow/bazel-bin/tensorflow/libtensorflow_framework -ltensorflow_framework -ltensorflow_cc tf_example.cpp
784
+ In file included from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tsl/platform/status.h:39,
785
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/platform/status.h:23,
786
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/platform/errors.h:27,
787
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/platform/env.h:27,
788
+ from tf_example.cpp:1:
789
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tsl/protobuf/error_codes.pb.h:17:2: error: #error This file was generated by an older version of protoc which is
790
+ 17 | #error This file was generated by an older version of protoc which is
791
+ | ^~~~~
792
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tsl/protobuf/error_codes.pb.h:18:2: error: #error incompatible with your Protocol Buffer headers. Please
793
+ 18 | #error incompatible with your Protocol Buffer headers. Please
794
+ | ^~~~~
795
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tsl/protobuf/error_codes.pb.h:19:2: error: #error regenerate this file with a newer version of protoc.
796
+ 19 | #error regenerate this file with a newer version of protoc.
797
+ | ^~~~~
798
+ In file included from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/public/session.h:24,
799
+ from tf_example.cpp:2:
800
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/device_attributes.pb.h:17:2: error: #error This file was generated by an older version of protoc which is
801
+ 17 | #error This file was generated by an older version of protoc which is
802
+ | ^~~~~
803
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/device_attributes.pb.h:18:2: error: #error incompatible with your Protocol Buffer headers. Please
804
+ 18 | #error incompatible with your Protocol Buffer headers. Please
805
+ | ^~~~~
806
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/device_attributes.pb.h:19:2: error: #error regenerate this file with a newer version of protoc.
807
+ 19 | #error regenerate this file with a newer version of protoc.
808
+ | ^~~~~
809
+ In file included from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/public/session.h:25,
810
+ from tf_example.cpp:2:
811
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/graph.pb.h:17:2: error: #error This file was generated by an older version of protoc which is
812
+ 17 | #error This file was generated by an older version of protoc which is
813
+ | ^~~~~
814
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/graph.pb.h:18:2: error: #error incompatible with your Protocol Buffer headers. Please
815
+ 18 | #error incompatible with your Protocol Buffer headers. Please
816
+ | ^~~~~
817
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/graph.pb.h:19:2: error: #error regenerate this file with a newer version of protoc.
818
+ 19 | #error regenerate this file with a newer version of protoc.
819
+ | ^~~~~
820
+ In file included from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/graph.pb.h:33,
821
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/public/session.h:25,
822
+ from tf_example.cpp:2:
823
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/function.pb.h:17:2: error: #error This file was generated by an older version of protoc which is
824
+ 17 | #error This file was generated by an older version of protoc which is
825
+ | ^~~~~
826
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/function.pb.h:18:2: error: #error incompatible with your Protocol Buffer headers. Please
827
+ 18 | #error incompatible with your Protocol Buffer headers. Please
828
+ | ^~~~~
829
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/function.pb.h:19:2: error: #error regenerate this file with a newer version of protoc.
830
+ 19 | #error regenerate this file with a newer version of protoc.
831
+ | ^~~~~
832
+ In file included from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/function.pb.h:36,
833
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/graph.pb.h:33,
834
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/public/session.h:25,
835
+ from tf_example.cpp:2:
836
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/attr_value.pb.h:17:2: error: #error This file was generated by an older version of protoc which is
837
+ 17 | #error This file was generated by an older version of protoc which is
838
+ | ^~~~~
839
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/attr_value.pb.h:18:2: error: #error incompatible with your Protocol Buffer headers. Please
840
+ 18 | #error incompatible with your Protocol Buffer headers. Please
841
+ | ^~~~~
842
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/attr_value.pb.h:19:2: error: #error regenerate this file with a newer version of protoc.
843
+ 19 | #error regenerate this file with a newer version of protoc.
844
+ | ^~~~~
845
+ In file included from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/attr_value.pb.h:36,
846
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/function.pb.h:36,
847
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/graph.pb.h:33,
848
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/public/session.h:25,
849
+ from tf_example.cpp:2:
850
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/tensor.pb.h:17:2: error: #error This file was generated by an older version of protoc which is
851
+ 17 | #error This file was generated by an older version of protoc which is
852
+ | ^~~~~
853
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/tensor.pb.h:18:2: error: #error incompatible with your Protocol Buffer headers. Please
854
+ 18 | #error incompatible with your Protocol Buffer headers. Please
855
+ | ^~~~~
856
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/tensor.pb.h:19:2: error: #error regenerate this file with a newer version of protoc.
857
+ 19 | #error regenerate this file with a newer version of protoc.
858
+ | ^~~~~
859
+ In file included from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/tensor.pb.h:33,
860
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/attr_value.pb.h:36,
861
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/function.pb.h:36,
862
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/graph.pb.h:33,
863
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/public/session.h:25,
864
+ from tf_example.cpp:2:
865
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/resource_handle.pb.h:17:2: error: #error This file was generated by an older version of protoc which is
866
+ 17 | #error This file was generated by an older version of protoc which is
867
+ | ^~~~~
868
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/resource_handle.pb.h:18:2: error: #error incompatible with your Protocol Buffer headers. Please
869
+ 18 | #error incompatible with your Protocol Buffer headers. Please
870
+ | ^~~~~
871
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/resource_handle.pb.h:19:2: error: #error regenerate this file with a newer version of protoc.
872
+ 19 | #error regenerate this file with a newer version of protoc.
873
+ | ^~~~~
874
+ In file included from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/resource_handle.pb.h:33,
875
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/tensor.pb.h:33,
876
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/attr_value.pb.h:36,
877
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/function.pb.h:36,
878
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/graph.pb.h:33,
879
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/public/session.h:25,
880
+ from tf_example.cpp:2:
881
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/tensor_shape.pb.h:17:2: error: #error This file was generated by an older version of protoc which is
882
+ 17 | #error This file was generated by an older version of protoc which is
883
+ | ^~~~~
884
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/tensor_shape.pb.h:18:2: error: #error incompatible with your Protocol Buffer headers. Please
885
+ 18 | #error incompatible with your Protocol Buffer headers. Please
886
+ | ^~~~~
887
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/tensor_shape.pb.h:19:2: error: #error regenerate this file with a newer version of protoc.
888
+ 19 | #error regenerate this file with a newer version of protoc.
889
+ | ^~~~~
890
+ In file included from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/resource_handle.pb.h:34,
891
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/tensor.pb.h:33,
892
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/attr_value.pb.h:36,
893
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/function.pb.h:36,
894
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/graph.pb.h:33,
895
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/public/session.h:25,
896
+ from tf_example.cpp:2:
897
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/types.pb.h:17:2: error: #error This file was generated by an older version of protoc which is
898
+ 17 | #error This file was generated by an older version of protoc which is
899
+ | ^~~~~
900
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/types.pb.h:18:2: error: #error incompatible with your Protocol Buffer headers. Please
901
+ 18 | #error incompatible with your Protocol Buffer headers. Please
902
+ | ^~~~~
903
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/types.pb.h:19:2: error: #error regenerate this file with a newer version of protoc.
904
+ 19 | #error regenerate this file with a newer version of protoc.
905
+ | ^~~~~
906
+ In file included from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/function.pb.h:37,
907
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/graph.pb.h:33,
908
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/public/session.h:25,
909
+ from tf_example.cpp:2:
910
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/node_def.pb.h:17:2: error: #error This file was generated by an older version of protoc which is
911
+ 17 | #error This file was generated by an older version of protoc which is
912
+ | ^~~~~
913
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/node_def.pb.h:18:2: error: #error incompatible with your Protocol Buffer headers. Please
914
+ 18 | #error incompatible with your Protocol Buffer headers. Please
915
+ | ^~~~~
916
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/node_def.pb.h:19:2: error: #error regenerate this file with a newer version of protoc.
917
+ 19 | #error regenerate this file with a newer version of protoc.
918
+ | ^~~~~
919
+ In file included from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/node_def.pb.h:37,
920
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/function.pb.h:37,
921
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/graph.pb.h:33,
922
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/public/session.h:25,
923
+ from tf_example.cpp:2:
924
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/full_type.pb.h:17:2: error: #error This file was generated by an older version of protoc which is
925
+ 17 | #error This file was generated by an older version of protoc which is
926
+ | ^~~~~
927
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/full_type.pb.h:18:2: error: #error incompatible with your Protocol Buffer headers. Please
928
+ 18 | #error incompatible with your Protocol Buffer headers. Please
929
+ | ^~~~~
930
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/full_type.pb.h:19:2: error: #error regenerate this file with a newer version of protoc.
931
+ 19 | #error regenerate this file with a newer version of protoc.
932
+ | ^~~~~
933
+ In file included from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/function.pb.h:38,
934
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/graph.pb.h:33,
935
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/public/session.h:25,
936
+ from tf_example.cpp:2:
937
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/op_def.pb.h:17:2: error: #error This file was generated by an older version of protoc which is
938
+ 17 | #error This file was generated by an older version of protoc which is
939
+ | ^~~~~
940
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/op_def.pb.h:18:2: error: #error incompatible with your Protocol Buffer headers. Please
941
+ 18 | #error incompatible with your Protocol Buffer headers. Please
942
+ | ^~~~~
943
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/op_def.pb.h:19:2: error: #error regenerate this file with a newer version of protoc.
944
+ 19 | #error regenerate this file with a newer version of protoc.
945
+ | ^~~~~
946
+ In file included from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/graph.pb.h:34,
947
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/public/session.h:25,
948
+ from tf_example.cpp:2:
949
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/graph_debug_info.pb.h:17:2: error: #error This file was generated by an older version of protoc which is
950
+ 17 | #error This file was generated by an older version of protoc which is
951
+ | ^~~~~
952
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/graph_debug_info.pb.h:18:2: error: #error incompatible with your Protocol Buffer headers. Please
953
+ 18 | #error incompatible with your Protocol Buffer headers. Please
954
+ | ^~~~~
955
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/graph_debug_info.pb.h:19:2: error: #error regenerate this file with a newer version of protoc.
956
+ 19 | #error regenerate this file with a newer version of protoc.
957
+ | ^~~~~
958
+ In file included from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/graph.pb.h:36,
959
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/public/session.h:25,
960
+ from tf_example.cpp:2:
961
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/versions.pb.h:17:2: error: #error This file was generated by an older version of protoc which is
962
+ 17 | #error This file was generated by an older version of protoc which is
963
+ | ^~~~~
964
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/versions.pb.h:18:2: error: #error incompatible with your Protocol Buffer headers. Please
965
+ 18 | #error incompatible with your Protocol Buffer headers. Please
966
+ | ^~~~~
967
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/versions.pb.h:19:2: error: #error regenerate this file with a newer version of protoc.
968
+ 19 | #error regenerate this file with a newer version of protoc.
969
+ | ^~~~~
970
+ In file included from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/public/session.h:30,
971
+ from tf_example.cpp:2:
972
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/protobuf/config.pb.h:17:2: error: #error This file was generated by an older version of protoc which is
973
+ 17 | #error This file was generated by an older version of protoc which is
974
+ | ^~~~~
975
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/protobuf/config.pb.h:18:2: error: #error incompatible with your Protocol Buffer headers. Please
976
+ 18 | #error incompatible with your Protocol Buffer headers. Please
977
+ | ^~~~~
978
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/protobuf/config.pb.h:19:2: error: #error regenerate this file with a newer version of protoc.
979
+ 19 | #error regenerate this file with a newer version of protoc.
980
+ | ^~~~~
981
+ In file included from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/protobuf/config.pb.h:37,
982
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/public/session.h:30,
983
+ from tf_example.cpp:2:
984
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/cost_graph.pb.h:17:2: error: #error This file was generated by an older version of protoc which is
985
+ 17 | #error This file was generated by an older version of protoc which is
986
+ | ^~~~~
987
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/cost_graph.pb.h:18:2: error: #error incompatible with your Protocol Buffer headers. Please
988
+ 18 | #error incompatible with your Protocol Buffer headers. Please
989
+ | ^~~~~
990
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/cost_graph.pb.h:19:2: error: #error regenerate this file with a newer version of protoc.
991
+ 19 | #error regenerate this file with a newer version of protoc.
992
+ | ^~~~~
993
+ In file included from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/protobuf/config.pb.h:39,
994
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/public/session.h:30,
995
+ from tf_example.cpp:2:
996
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/step_stats.pb.h:17:2: error: #error This file was generated by an older version of protoc which is
997
+ 17 | #error This file was generated by an older version of protoc which is
998
+ | ^~~~~
999
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/step_stats.pb.h:18:2: error: #error incompatible with your Protocol Buffer headers. Please
1000
+ 18 | #error incompatible with your Protocol Buffer headers. Please
1001
+ | ^~~~~
1002
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/step_stats.pb.h:19:2: error: #error regenerate this file with a newer version of protoc.
1003
+ 19 | #error regenerate this file with a newer version of protoc.
1004
+ | ^~~~~
1005
+ In file included from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/step_stats.pb.h:36,
1006
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/protobuf/config.pb.h:39,
1007
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/public/session.h:30,
1008
+ from tf_example.cpp:2:
1009
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/allocation_description.pb.h:17:2: error: #error This file was generated by an older version of protoc which is
1010
+ 17 | #error This file was generated by an older version of protoc which is
1011
+ | ^~~~~
1012
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/allocation_description.pb.h:18:2: error: #error incompatible with your Protocol Buffer headers. Please
1013
+ 18 | #error incompatible with your Protocol Buffer headers. Please
1014
+ | ^~~~~
1015
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/allocation_description.pb.h:19:2: error: #error regenerate this file with a newer version of protoc.
1016
+ 19 | #error regenerate this file with a newer version of protoc.
1017
+ | ^~~~~
1018
+ In file included from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/step_stats.pb.h:37,
1019
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/protobuf/config.pb.h:39,
1020
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/public/session.h:30,
1021
+ from tf_example.cpp:2:
1022
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/tensor_description.pb.h:17:2: error: #error This file was generated by an older version of protoc which is
1023
+ 17 | #error This file was generated by an older version of protoc which is
1024
+ | ^~~~~
1025
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/tensor_description.pb.h:18:2: error: #error incompatible with your Protocol Buffer headers. Please
1026
+ 18 | #error incompatible with your Protocol Buffer headers. Please
1027
+ | ^~~~~
1028
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/framework/tensor_description.pb.h:19:2: error: #error regenerate this file with a newer version of protoc.
1029
+ 19 | #error regenerate this file with a newer version of protoc.
1030
+ | ^~~~~
1031
+ In file included from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/protobuf/config.pb.h:40,
1032
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/public/session.h:30,
1033
+ from tf_example.cpp:2:
1034
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/protobuf/cluster.pb.h:17:2: error: #error This file was generated by an older version of protoc which is
1035
+ 17 | #error This file was generated by an older version of protoc which is
1036
+ | ^~~~~
1037
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/protobuf/cluster.pb.h:18:2: error: #error incompatible with your Protocol Buffer headers. Please
1038
+ 18 | #error incompatible with your Protocol Buffer headers. Please
1039
+ | ^~~~~
1040
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/protobuf/cluster.pb.h:19:2: error: #error regenerate this file with a newer version of protoc.
1041
+ 19 | #error regenerate this file with a newer version of protoc.
1042
+ | ^~~~~
1043
+ In file included from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/protobuf/config.pb.h:41,
1044
+ from /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/public/session.h:30,
1045
+ from tf_example.cpp:2:
1046
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/protobuf/debug.pb.h:17:2: error: #error This file was generated by an older version of protoc which is
1047
+ 17 | #error This file was generated by an older version of protoc which is
1048
+ | ^~~~~
1049
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/protobuf/debug.pb.h:18:2: error: #error incompatible with your Protocol Buffer headers. Please
1050
+ 18 | #error incompatible with your Protocol Buffer headers. Please
1051
+ | ^~~~~
1052
+ /home/wangchen/tensorflow/bazel-bin/tensorflow/include/tensorflow/core/protobuf/debug.pb.h:19:2: error: #error regenerate this file with a newer version of protoc.
1053
+ 19 | #error regenerate this file with a newer version of protoc.
1054
+ | ^~~~~
1055
+ ```
1056
+ ",2024-03-10T04:22:46Z,0
1057
+ Saved model won't load: Unable to synchronously open object (bad local heap signature),"### Issue type
1058
+
1059
+ Bug
1060
+
1061
+ ### Have you reproduced the bug with TensorFlow Nightly?
1062
+
1063
+ Yes
1064
+
1065
+ ### Source
1066
+
1067
+ binary
1068
+
1069
+ ### TensorFlow version
1070
+
1071
+ 2.16.1
1072
+
1073
+ ### Custom code
1074
+
1075
+ Yes
1076
+
1077
+ ### OS platform and distribution
1078
+
1079
+ windows 10
1080
+
1081
+ ### Mobile device
1082
+
1083
+ _No response_
1084
+
1085
+ ### Python version
1086
+
1087
+ 3.12
1088
+
1089
+ ### Bazel version
1090
+
1091
+ _No response_
1092
+
1093
+ ### GCC/compiler version
1094
+
1095
+ _No response_
1096
+
1097
+ ### CUDA/cuDNN version
1098
+
1099
+ _No response_
1100
+
1101
+ ### GPU model and memory
1102
+
1103
+ _No response_
1104
+
1105
+ ### Current behavior?
1106
+
1107
+ Model saved from Python 3.12 tensorflow 2.16.1
1108
+ model.save('my_model.keras', overwrite=True)
1109
+
1110
+ After this the model does not load
1111
+
1112
+ ### Standalone code to reproduce the issue
1113
+
1114
+ ```shell
1115
+ model=tf.keras.models.load_model('my_model.keras', custom_objects=None, compile=True, safe_mode=True)
1116
+ ```
1117
+
1118
+
1119
+ ### Relevant log output
1120
+
1121
+ ```shell
1122
+ Traceback (most recent call last):
1123
+ File ""D:\Project\main.py"", line 391, in <module>
1124
+ model=tf.keras.models.load_model('my_model.keras', custom_objects=None, compile=True, safe_mode=True)
1125
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1126
+ File ""D:\Project\venv\Lib\site-packages\keras\src\saving\saving_api.py"", line 176, in load_model
1127
+ return saving_lib.load_model(
1128
+ ^^^^^^^^^^^^^^^^^^^^^^
1129
+ File ""D:\Project\venv\Lib\site-packages\keras\src\saving\saving_lib.py"", line 192, in load_model
1130
+ _raise_loading_failure(error_msgs)
1131
+ File ""D:\Project\venv\Lib\site-packages\keras\src\saving\saving_lib.py"", line 273, in _raise_loading_failure
1132
+ raise ValueError(msg)
1133
+ ValueError: A total of 13 objects could not be loaded. Example error message for object <Sequential name=sequential, built=True>:
1134
+
1135
+ 'Unable to synchronously open object (bad local heap signature)'
1136
+
1137
+ List of objects that could not be loaded:
1138
+ [<Sequential name=sequential, built=True>, <TextVectorization name=text_vectorization, built=True>, <StringLookup name=string_lookup_1, built=False>, <Embedding name=embedding, built=True>, <Conv1D name=conv1d, built=True>, <Dropout name=dropout, built=True>, <Conv1D name=conv1d_1, built=True>, <Dropout name=dropout_1, built=True>, <GlobalMaxPooling1D name=global_max_pooling1d, built=True>, <Dense name=dense, built=True>, <Dropout name=dropout_2, built=True>, <Dense name=dense_1, built=True>, <keras.src.optimizers.adam.Adam object at 0x000001C5026B24E0>]
1139
+ ```
1140
+ ",2024-03-10T04:07:46Z,1
1141
+ Tensorflow import error,"### Issue type
1142
+
1143
+ Build/Install
1144
+
1145
+ ### Have you reproduced the bug with TensorFlow Nightly?
1146
+
1147
+ Yes
1148
+
1149
+ ### Source
1150
+
1151
+ source
1152
+
1153
+ ### TensorFlow version
1154
+
1155
+ tf 2.13.0
1156
+
1157
+ ### Custom code
1158
+
1159
+ Yes
1160
+
1161
+ ### OS platform and distribution
1162
+
1163
+ Win 11
1164
+
1165
+ ### Mobile device
1166
+
1167
+ _No response_
1168
+
1169
+ ### Python version
1170
+
1171
+ 3.9.7
1172
+
1173
+ ### Bazel version
1174
+
1175
+ _No response_
1176
+
1177
+ ### GCC/compiler version
1178
+
1179
+ _No response_
1180
+
1181
+ ### CUDA/cuDNN version
1182
+
1183
+ _No response_
1184
+
1185
+ ### GPU model and memory
1186
+
1187
+ _No response_
1188
+
1189
+ ### Current behavior?
1190
+
1191
+ I intalled tensorflow, but it gives an error when I try to import it.
1192
+
1193
+ ### Standalone code to reproduce the issue
1194
+
1195
+ ```shell
1196
+ import tensorflow as tf
1197
+ ```
1198
+
1199
+
1200
+ ### Relevant log output
1201
+
1202
+ ```shell
1203
+ runfile('X:/Nano-Photonics and Quantum Optics Lab!/ML Project/Tkinter learning/Tkinter Git - GitLab/Inverse_Design_Periodic_GUI_CustomModern.py', wdir='X:/Nano-Photonics and Quantum Optics Lab!/ML Project/Tkinter learning/Tkinter Git - GitLab')
1204
+ Traceback (most recent call last):
1205
+
1206
+ File ""X:\Nano-Photonics and Quantum Optics Lab!\ML Project\Tkinter learning\Tkinter Git - GitLab\Inverse_Design_Periodic_GUI_CustomModern.py"", line 20, in <module>
1207
+ import tensorflow as tf #print(tf.__version__)
1208
+
1209
+ File ""C:\Users\athen\anaconda3\lib\site-packages\tensorflow\__init__.py"", line 469, in <module>
1210
+ _keras._load()
1211
+
1212
+ File ""C:\Users\athen\anaconda3\lib\site-packages\tensorflow\python\util\lazy_loader.py"", line 41, in _load
1213
+ module = importlib.import_module(self.__name__)
1214
+
1215
+ File ""C:\Users\athen\anaconda3\lib\importlib\__init__.py"", line 127, in import_module
1216
+ return _bootstrap._gcd_import(name[level:], package, level)
1217
+
1218
+ File ""C:\Users\athen\anaconda3\lib\site-packages\keras\__init__.py"", line 20, in <module>
1219
+ from keras import distribute
1220
+
1221
+ File ""C:\Users\athen\anaconda3\lib\site-packages\keras\distribute\__init__.py"", line 18, in <module>
1222
+ from keras.distribute import sidecar_evaluator
1223
+
1224
+ File ""C:\Users\athen\anaconda3\lib\site-packages\keras\distribute\sidecar_evaluator.py"", line 22, in <module>
1225
+ from keras.optimizers.optimizer_experimental import (
1226
+
1227
+ File ""C:\Users\athen\anaconda3\lib\site-packages\keras\optimizers\__init__.py"", line 25, in <module>
1228
+ from keras import backend
1229
+
1230
+ File ""C:\Users\athen\anaconda3\lib\site-packages\keras\backend\__init__.py"", line 3, in <module>
1231
+ from keras.backend import experimental
1232
+
1233
+ File ""C:\Users\athen\anaconda3\lib\site-packages\keras\backend\experimental\__init__.py"", line 3, in <module>
1234
+ from keras.src.backend import disable_tf_random_generator
1235
+
1236
+ File ""C:\Users\athen\anaconda3\lib\site-packages\keras\src\__init__.py"", line 21, in <module>
1237
+ from keras.src import applications
1238
+
1239
+ File ""C:\Users\athen\anaconda3\lib\site-packages\keras\src\applications\__init__.py"", line 18, in <module>
1240
+ from keras.src.applications.convnext import ConvNeXtBase
1241
+
1242
+ File ""C:\Users\athen\anaconda3\lib\site-packages\keras\src\applications\convnext.py"", line 28, in <module>
1243
+ from keras.src import backend
1244
+
1245
+ File ""C:\Users\athen\anaconda3\lib\site-packages\keras\src\backend.py"", line 35, in <module>
1246
+ from keras.src.engine import keras_tensor
1247
+
1248
+ File ""C:\Users\athen\anaconda3\lib\site-packages\keras\src\engine\keras_tensor.py"", line 19, in <module>
1249
+ from keras.src.utils import object_identity
1250
+
1251
+ File ""C:\Users\athen\anaconda3\lib\site-packages\keras\src\utils\__init__.py"", line 20, in <module>
1252
+ from keras.src.saving.serialization_lib import deserialize_keras_object
1253
+
1254
+ File ""C:\Users\athen\anaconda3\lib\site-packages\keras\src\saving\serialization_lib.py"", line 28, in <module>
1255
+ from keras.src.saving.legacy.saved_model.utils import in_tf_saved_model_scope
1256
+
1257
+ File ""C:\Users\athen\anaconda3\lib\site-packages\keras\src\saving\legacy\saved_model\utils.py"", line 30, in <module>
1258
+ from keras.src.utils.layer_utils import CallFunctionSpec
1259
+
1260
+ File ""C:\Users\athen\anaconda3\lib\site-packages\keras\src\utils\layer_utils.py"", line 26, in <module>
1261
+ from keras.src import initializers
1262
+
1263
+ File ""C:\Users\athen\anaconda3\lib\site-packages\keras\src\initializers\__init__.py"", line 23, in <module>
1264
+ from keras.src.initializers import initializers_v1
1265
+
1266
+ File ""C:\Users\athen\anaconda3\lib\site-packages\keras\src\initializers\initializers_v1.py"", line 32, in <module>
1267
+ keras_export(v1=[""keras.initializers.Zeros"", ""keras.initializers.zeros""])(
1268
+
1269
+ File ""C:\Users\athen\anaconda3\lib\site-packages\tensorflow\python\util\tf_export.py"", line 348, in __call__
1270
+ self.set_attr(undecorated_func, api_names_attr, self._names)
1271
+
1272
+ File ""C:\Users\athen\anaconda3\lib\site-packages\tensorflow\python\util\tf_export.py"", line 363, in set_attr
1273
+ raise SymbolAlreadyExposedError(
1274
+
1275
+ SymbolAlreadyExposedError: Symbol Zeros is already exposed as ().
1276
+ ```
1277
+ ",2024-03-10T01:09:44Z,2
1278
+ TF 2.16.1 Fails to work with GPUs,"### Issue type
1279
+
1280
+ Bug
1281
+
1282
+ ### Have you reproduced the bug with TensorFlow Nightly?
1283
+
1284
+ No
1285
+
1286
+ ### Source
1287
+
1288
+ binary
1289
+
1290
+ ### TensorFlow version
1291
+
1292
+ TF 2.16.1
1293
+
1294
+ ### Custom code
1295
+
1296
+ No
1297
+
1298
+ ### OS platform and distribution
1299
+
1300
+ Linux Ubuntu 22.04.4 LTS
1301
+
1302
+ ### Mobile device
1303
+
1304
+ _No response_
1305
+
1306
+ ### Python version
1307
+
1308
+ 3.10.12
1309
+
1310
+ ### Bazel version
1311
+
1312
+ _No response_
1313
+
1314
+ ### GCC/compiler version
1315
+
1316
+ _No response_
1317
+
1318
+ ### CUDA/cuDNN version
1319
+
1320
+ 12.4
1321
+
1322
+ ### GPU model and memory
1323
+
1324
+ _No response_
1325
+
1326
+ ### Current behavior?
1327
+
1328
+ I created a python venv in which I installed TF 2.16.1 following your instructions: pip install tensorflow
1329
+ When I run python, import tf, and issue tf.config.list_physical_devices('GPU')
1330
+ I get an empty list [ ]
1331
+
1332
+ I created another python venv, installed TF 2.16.1, only this time with the instructions:
1333
+
1334
+ python3 -m pip install tensorflow[and-cuda]
1335
+
1336
+ When I run that version, import tensorflow as tf, and issue
1337
+
1338
+ tf.config.list_physical_devices('GPU')
1339
+
1340
+ I also get an empty list.
1341
+
1342
+ BTW, I have no problems running TF 2.15.1 with GPUs on my box. Julia also works just fine with GPUs, and so does PyTorch.
1343
+
1345
+
1346
+
1347
+ ### Standalone code to reproduce the issue
1348
+
1349
+ ```shell
1350
+ Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] on linux
1351
+ Type ""help"", ""copyright"", ""credits"" or ""license"" for more information.
1352
+ >>> import tensorflow as tf
1353
+ 2024-03-09 19:15:45.018171: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
1354
+ To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
1355
+ 2024-03-09 19:15:50.412646: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
1356
+ >>> tf.__version__
1357
+ '2.16.1'
1358
+
1359
+ tf.config.list_physical_devices('GPU')
1360
+ 2024-03-09 19:16:28.923792: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
1361
+ 2024-03-09 19:16:29.078379: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2251] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
1362
+ Skipping registering GPU devices...
1363
+ []
1364
+ >>>
1365
+ ```
1366
+
1367
+
1368
+ ### Relevant log output
1369
+
1370
+ _No response_",2024-03-10T00:17:36Z,6
1371
+ Replace `RemoteTensorHandle` with `TensorProto` for scalars in an `EnqueueRequest` except for `DT_RESOURCE`,"Replace `RemoteTensorHandle` with `TensorProto` for scalars in an `EnqueueRequest` except for `DT_RESOURCE`
1372
+ ",2024-03-09T20:18:30Z,0
1373
+ tensorflow 2.16.1 build error: Compiling xla/service/cpu/onednn_matmul.cc failed,"### Issue type
1374
+
1375
+ Bug
1376
+
1377
+ ### Have you reproduced the bug with TensorFlow Nightly?
1378
+
1379
+ No
1380
+
1381
+ ### Source
1382
+
1383
+ source
1384
+
1385
+ ### TensorFlow version
1386
+
1387
+ 2.16.1
1388
+
1389
+ ### Custom code
1390
+
1391
+ No
1392
+
1393
+ ### OS platform and distribution
1394
+
1395
+ Linux Ubuntu 22.04
1396
+
1397
+ ### Mobile device
1398
+
1399
+ _No response_
1400
+
1401
+ ### Python version
1402
+
1403
+ 3.11.8
1404
+
1405
+ ### Bazel version
1406
+
1407
+ 6.5.0
1408
+
1409
+ ### GCC/compiler version
1410
+
1411
+ 11.4.0
1412
+
1413
+ ### CUDA/cuDNN version
1414
+
1415
+ 12.4/9.0.0.312
1416
+
1417
+ ### GPU model and memory
1418
+
1419
+ NVIDIA GeForce 940MX
1420
+
1421
+ ### Current behavior?
1422
+
1423
+ INFO: Reading 'startup' options from ~/Documents/dev/git/tensorflow/.bazelrc: --windows_enable_symlinks
1424
+ INFO: Options provided by the client:
1425
+ Inherited 'common' options: --isatty=1 --terminal_columns=211
1426
+ INFO: Reading rc options for 'build' from ~/Documents/dev/git/tensorflow/.bazelrc:
1427
+ Inherited 'common' options: --experimental_repo_remote_exec
1428
+ INFO: Reading rc options for 'build' from ~/Documents/dev/git/tensorflow/.bazelrc:
1429
+ 'build' options: --define framework_shared_object=true --define tsl_protobuf_header_only=true --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --features=-force_no_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --define=no_aws_support=true --define=no_hdfs_support=true --experimental_cc_shared_library --experimental_link_static_libraries_once=false --incompatible_enforce_config_setting_visibility
1430
+ INFO: Reading rc options for 'build' from ~/Documents/dev/git/tensorflow/.tf_configure.bazelrc:
1431
+ 'build' options: --action_env PYTHON_BIN_PATH=~/Documents/dev/programs/miniconda3/envs/tf/bin/python3 --action_env PYTHON_LIB_PATH=~/Documents/dev/programs/miniconda3/envs/tf/lib/python3.11/site-packages --python_path=~/Documents/dev/programs/miniconda3/envs/tf/bin/python3 --action_env CUDA_TOOLKIT_PATH=/usr/local/cuda-12.3 --action_env TF_CUDA_COMPUTE_CAPABILITIES=5.0 --action_env LD_LIBRARY_PATH=/usr/lib/libreoffice/program:/usr/local/cuda/targets/x86_64-linux/lib:/usr/lib/x86_64-linux-gnu --action_env GCC_HOST_COMPILER_PATH=/usr/bin/x86_64-linux-gnu-gcc-11 --config=cuda
1432
+ INFO: Found applicable config definition build:short_logs in file ~/Documents/dev/git/tensorflow/.bazelrc: --output_filter=DONT_MATCH_ANYTHING
1433
+ INFO: Found applicable config definition build:v2 in file ~/Documents/dev/git/tensorflow/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
1434
+ INFO: Found applicable config definition build:cuda in file ~/Documents/dev/git/tensorflow/.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --@local_config_cuda//:enable_cuda
1435
+ INFO: Found applicable config definition build:mkl in file ~/Documents/dev/git/tensorflow/.bazelrc: --define=build_with_mkl=true --define=enable_mkl=true --define=tensorflow_mkldnn_contraction_kernel=0 --define=build_with_openmp=true -c opt
1436
+ INFO: Found applicable config definition build:opt in file ~/Documents/dev/git/tensorflow/.tf_configure.bazelrc: --copt=-Wno-sign-compare --host_copt=-Wno-sign-compare
1437
+ INFO: Found applicable config definition build:linux in file ~/Documents/dev/git/tensorflow/.bazelrc: --host_copt=-w --copt=-Wno-all --copt=-Wno-extra --copt=-Wno-deprecated --copt=-Wno-deprecated-declarations --copt=-Wno-ignored-attributes --copt=-Wno-array-bounds --copt=-Wunused-result --copt=-Werror=unused-result --copt=-Wswitch --copt=-Werror=switch --copt=-Wno-error=unused-but-set-variable --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include --cxxopt=-std=c++17 --host_cxxopt=-std=c++17 --config=dynamic_kernels --experimental_guard_against_concurrent_changes
1438
+ INFO: Found applicable config definition build:dynamic_kernels in file ~/Documents/dev/git/tensorflow/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS
1439
+ INFO: Analyzed target //tensorflow/tools/pip_package:build_pip_package (711 packages loaded, 51601 targets configured).
1440
+ INFO: Found 1 target...
1441
+ ERROR: ~/.cache/bazel/_bazel_vyepishov/cf67b2b2e967476eb2b1ee98e33ab5bd/external/local_xla/xla/service/cpu/BUILD:1638:11: Compiling xla/service/cpu/onednn_matmul.cc failed: (Exit 1): crosstool_wrapper_driver_is_not_gcc failed: error executing command (from target @local_xla//xla/service/cpu:onednn_matmul) external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc -MD -MF bazel-out/k8-opt/bin/external/local_xla/xla/service/cpu/_objs/onednn_matmul/onednn_matmul.pic.d ... (remaining 229 arguments skipped)
1442
+ In file included from external/local_xla/xla/shape.h:28,
1443
+ from external/local_xla/xla/service/cpu/onednn_matmul.h:21,
1444
+ from external/local_xla/xla/service/cpu/onednn_matmul.cc:18:
1445
+ external/local_xla/xla/layout.h:377:18: warning: ‘xla::Layout::DimInfo::dim_level_type’ is too small to hold all values of ‘enum xla::DimLevelType’
1446
+ 377 | DimLevelType dim_level_type : 6;
1447
+ | ^~~~~~~~~~~~~~
1448
+ external/local_xla/xla/layout.h:389:17: warning: ‘xla::Layout::index_primitive_type_’ is too small to hold all values of ‘enum xla::PrimitiveType’
1449
+ 389 | PrimitiveType index_primitive_type_ : 8;
1450
+ | ^~~~~~~~~~~~~~~~~~~~~
1451
+ external/local_xla/xla/layout.h:390:17: warning: ‘xla::Layout::pointer_primitive_type_’ is too small to hold all values of ‘enum xla::PrimitiveType’
1452
+ 390 | PrimitiveType pointer_primitive_type_ : 8;
1453
+ | ^~~~~~~~~~~~~~~~~~~~~~~
1454
+ external/local_xla/xla/service/cpu/onednn_matmul.cc: In function ‘void xla::cpu::__xla_cpu_runtime_OneDnnMatMul(void*, void**)’:
1455
+ external/local_xla/xla/service/cpu/onednn_matmul.cc:186:68: error: cannot convert ‘std::unique_ptr<tsl::OneDnnThreadPool>::pointer’ {aka ‘tsl::OneDnnThreadPool*’} to ‘dnnl::threadpool_interop::threadpool_iface*’
1456
+ 186 | auto onednn_stream = MakeOneDnnStream(cpu_engine, thread_pool.get());
1457
+ | ~~~~~~~~~~~~~~~^~
1458
+ | |
1459
+ | std::unique_ptr<tsl::OneDnnThreadPool>::pointer {aka tsl::OneDnnThreadPool*}
1460
+ external/local_xla/xla/service/cpu/onednn_matmul.cc:148:49: note: initializing argument 2 of ‘dnnl::stream xla::cpu::{anonymous}::MakeOneDnnStream(const dnnl::engine&, dnnl::threadpool_interop::threadpool_iface*)’
1461
+ 148 | dnnl::threadpool_interop::threadpool_iface* thread_pool) {
1462
+ | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~
1463
+ external/local_xla/xla/service/cpu/onednn_matmul.cc: In function ‘void xla::cpu::__xla_cpu_runtime_OneDnnMatMulReorder(void*, void**)’:
1464
+ external/local_xla/xla/service/cpu/onednn_matmul.cc:322:68: error: cannot convert ‘std::unique_ptr<tsl::OneDnnThreadPool>::pointer’ {aka ‘tsl::OneDnnThreadPool*’} to ‘dnnl::threadpool_interop::threadpool_iface*’
1465
+ 322 | auto onednn_stream = MakeOneDnnStream(cpu_engine, thread_pool.get());
1466
+ | ~~~~~~~~~~~~~~~^~
1467
+ | |
1468
+ | std::unique_ptr<tsl::OneDnnThreadPool>::pointer {aka tsl::OneDnnThreadPool*}
1469
+ external/local_xla/xla/service/cpu/onednn_matmul.cc:148:49: note: initializing argument 2 of ‘dnnl::stream xla::cpu::{anonymous}::MakeOneDnnStream(const dnnl::engine&, dnnl::threadpool_interop::threadpool_iface*)’
1470
+ 148 | dnnl::threadpool_interop::threadpool_iface* thread_pool) {
1471
+ | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~
1472
+ Target //tensorflow/tools/pip_package:build_pip_package failed to build
1473
+ Use --verbose_failures to see the command lines of failed build steps.
1474
+ INFO: Elapsed time: 16142.186s, Critical Path: 328.40s
1475
+ INFO: 25824 processes: 8831 internal, 16993 local.
1476
+ FAILED: Build did NOT complete successfully
1477
+
1478
+ ### Standalone code to reproduce the issue
1479
+
1480
+ ```shell
1481
+ bazel build --config=mkl --config=opt //tensorflow/tools/pip_package:build_pip_package
1482
+ ```
1483
+
1484
+
1485
+ ### Relevant log output
1486
+
1487
+ ```shell
1488
+ INFO: Reading 'startup' options from ~/Documents/dev/git/tensorflow/.bazelrc: --windows_enable_symlinks
1489
+ INFO: Options provided by the client:
1490
+ Inherited 'common' options: --isatty=1 --terminal_columns=211
1491
+ INFO: Reading rc options for 'build' from ~/Documents/dev/git/tensorflow/.bazelrc:
1492
+ Inherited 'common' options: --experimental_repo_remote_exec
1493
+ INFO: Reading rc options for 'build' from ~/Documents/dev/git/tensorflow/.bazelrc:
1494
+ 'build' options: --define framework_shared_object=true --define tsl_protobuf_header_only=true --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --features=-force_no_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --define=no_aws_support=true --define=no_hdfs_support=true --experimental_cc_shared_library --experimental_link_static_libraries_once=false --incompatible_enforce_config_setting_visibility
1495
+ INFO: Reading rc options for 'build' from ~/Documents/dev/git/tensorflow/.tf_configure.bazelrc:
1496
+ 'build' options: --action_env PYTHON_BIN_PATH=~/Documents/dev/programs/miniconda3/envs/tf/bin/python3 --action_env PYTHON_LIB_PATH=~/Documents/dev/programs/miniconda3/envs/tf/lib/python3.11/site-packages --python_path=~/Documents/dev/programs/miniconda3/envs/tf/bin/python3 --action_env CUDA_TOOLKIT_PATH=/usr/local/cuda-12.3 --action_env TF_CUDA_COMPUTE_CAPABILITIES=5.0 --action_env LD_LIBRARY_PATH=/usr/lib/libreoffice/program:/usr/local/cuda/targets/x86_64-linux/lib:/usr/lib/x86_64-linux-gnu --action_env GCC_HOST_COMPILER_PATH=/usr/bin/x86_64-linux-gnu-gcc-11 --config=cuda
1497
+ INFO: Found applicable config definition build:short_logs in file ~/Documents/dev/git/tensorflow/.bazelrc: --output_filter=DONT_MATCH_ANYTHING
1498
+ INFO: Found applicable config definition build:v2 in file ~/Documents/dev/git/tensorflow/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
1499
+ INFO: Found applicable config definition build:cuda in file ~/Documents/dev/git/tensorflow/.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --@local_config_cuda//:enable_cuda
1500
+ INFO: Found applicable config definition build:mkl in file ~/Documents/dev/git/tensorflow/.bazelrc: --define=build_with_mkl=true --define=enable_mkl=true --define=tensorflow_mkldnn_contraction_kernel=0 --define=build_with_openmp=true -c opt
1501
+ INFO: Found applicable config definition build:opt in file ~/Documents/dev/git/tensorflow/.tf_configure.bazelrc: --copt=-Wno-sign-compare --host_copt=-Wno-sign-compare
1502
+ INFO: Found applicable config definition build:linux in file ~/Documents/dev/git/tensorflow/.bazelrc: --host_copt=-w --copt=-Wno-all --copt=-Wno-extra --copt=-Wno-deprecated --copt=-Wno-deprecated-declarations --copt=-Wno-ignored-attributes --copt=-Wno-array-bounds --copt=-Wunused-result --copt=-Werror=unused-result --copt=-Wswitch --copt=-Werror=switch --copt=-Wno-error=unused-but-set-variable --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include --cxxopt=-std=c++17 --host_cxxopt=-std=c++17 --config=dynamic_kernels --experimental_guard_against_concurrent_changes
1503
+ INFO: Found applicable config definition build:dynamic_kernels in file ~/Documents/dev/git/tensorflow/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS
1504
+ INFO: Analyzed target //tensorflow/tools/pip_package:build_pip_package (711 packages loaded, 51601 targets configured).
1505
+ INFO: Found 1 target...
1506
+ ERROR: ~/.cache/bazel/_bazel_vyepishov/cf67b2b2e967476eb2b1ee98e33ab5bd/external/local_xla/xla/service/cpu/BUILD:1638:11: Compiling xla/service/cpu/onednn_matmul.cc failed: (Exit 1): crosstool_wrapper_driver_is_not_gcc failed: error executing command (from target @local_xla//xla/service/cpu:onednn_matmul) external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc -MD -MF bazel-out/k8-opt/bin/external/local_xla/xla/service/cpu/_objs/onednn_matmul/onednn_matmul.pic.d ... (remaining 229 arguments skipped)
1507
+ In file included from external/local_xla/xla/shape.h:28,
1508
+ from external/local_xla/xla/service/cpu/onednn_matmul.h:21,
1509
+ from external/local_xla/xla/service/cpu/onednn_matmul.cc:18:
1510
+ external/local_xla/xla/layout.h:377:18: warning: ‘xla::Layout::DimInfo::dim_level_type’ is too small to hold all values of ‘enum xla::DimLevelType’
1511
+ 377 | DimLevelType dim_level_type : 6;
1512
+ | ^~~~~~~~~~~~~~
1513
+ external/local_xla/xla/layout.h:389:17: warning: ‘xla::Layout::index_primitive_type_’ is too small to hold all values of ‘enum xla::PrimitiveType’
1514
+ 389 | PrimitiveType index_primitive_type_ : 8;
1515
+ | ^~~~~~~~~~~~~~~~~~~~~
1516
+ external/local_xla/xla/layout.h:390:17: warning: ‘xla::Layout::pointer_primitive_type_’ is too small to hold all values of ‘enum xla::PrimitiveType’
1517
+ 390 | PrimitiveType pointer_primitive_type_ : 8;
1518
+ | ^~~~~~~~~~~~~~~~~~~~~~~
1519
+ external/local_xla/xla/service/cpu/onednn_matmul.cc: In function ‘void xla::cpu::__xla_cpu_runtime_OneDnnMatMul(void*, void**)’:
1520
+ external/local_xla/xla/service/cpu/onednn_matmul.cc:186:68: error: cannot convert ‘std::unique_ptr<tsl::OneDnnThreadPool>::pointer’ {aka ‘tsl::OneDnnThreadPool*’} to ‘dnnl::threadpool_interop::threadpool_iface*’
1521
+ 186 | auto onednn_stream = MakeOneDnnStream(cpu_engine, thread_pool.get());
1522
+ | ~~~~~~~~~~~~~~~^~
1523
+ | |
1524
+ | std::unique_ptr<tsl::OneDnnThreadPool>::pointer {aka tsl::OneDnnThreadPool*}
1525
+ external/local_xla/xla/service/cpu/onednn_matmul.cc:148:49: note: initializing argument 2 of ‘dnnl::stream xla::cpu::{anonymous}::MakeOneDnnStream(const dnnl::engine&, dnnl::threadpool_interop::threadpool_iface*)’
1526
+ 148 | dnnl::threadpool_interop::threadpool_iface* thread_pool) {
1527
+ | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~
1528
+ external/local_xla/xla/service/cpu/onednn_matmul.cc: In function ‘void xla::cpu::__xla_cpu_runtime_OneDnnMatMulReorder(void*, void**)’:
1529
+ external/local_xla/xla/service/cpu/onednn_matmul.cc:322:68: error: cannot convert ‘std::unique_ptr<tsl::OneDnnThreadPool>::pointer’ {aka ‘tsl::OneDnnThreadPool*’} to ‘dnnl::threadpool_interop::threadpool_iface*’
1530
+ 322 | auto onednn_stream = MakeOneDnnStream(cpu_engine, thread_pool.get());
1531
+ | ~~~~~~~~~~~~~~~^~
1532
+ | |
1533
+ | std::unique_ptr<tsl::OneDnnThreadPool>::pointer {aka tsl::OneDnnThreadPool*}
1534
+ external/local_xla/xla/service/cpu/onednn_matmul.cc:148:49: note: initializing argument 2 of ‘dnnl::stream xla::cpu::{anonymous}::MakeOneDnnStream(const dnnl::engine&, dnnl::threadpool_interop::threadpool_iface*)’
1535
+ 148 | dnnl::threadpool_interop::threadpool_iface* thread_pool) {
1536
+ | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~
1537
+ Target //tensorflow/tools/pip_package:build_pip_package failed to build
1538
+ Use --verbose_failures to see the command lines of failed build steps.
1539
+ INFO: Elapsed time: 16142.186s, Critical Path: 328.40s
1540
+ INFO: 25824 processes: 8831 internal, 16993 local.
1541
+ FAILED: Build did NOT complete successfully
1542
+ ```
1543
+ ",2024-03-09T20:04:58Z,0
1544
+ Fix SegFault in Python InterpreterWrapper,"If `InterpreterWrapper::TensorSparsityParameters` encounters Tensors which do not have a `block_map`, a `nullptr` is dereferenced causing AccViol/SegFault.
1545
+
1546
+ Add a check for `nullptr`.
1547
+
1548
+ Attempts to fix #62058",2024-03-09T19:57:47Z,0
1549
+ Force an extra step from pred to u32 before then converting to f32 as that can fail on TGP,"Force an extra step from pred to u32 before then converting to f32 as that can fail on TGP
1550
+ ",2024-03-09T19:43:15Z,0
1551
+ Build error related to XLA and absl,"### Issue type
1552
+
1553
+ Build/Install
1554
+
1555
+ ### Have you reproduced the bug with TensorFlow Nightly?
1556
+
1557
+ No
1558
+
1559
+ ### Source
1560
+
1561
+ source
1562
+
1563
+ ### TensorFlow version
1564
+
1565
+ 2.16.1
1566
+
1567
+ ### Custom code
1568
+
1569
+ No
1570
+
1571
+ ### OS platform and distribution
1572
+
1573
+ Linux Ubuntu 22.04
1574
+
1575
+ ### Mobile device
1576
+
1577
+ _No response_
1578
+
1579
+ ### Python version
1580
+
1581
+ 3.11.7
1582
+
1583
+ ### Bazel version
1584
+
1585
+ 6.5.0
1586
+
1587
+ ### GCC/compiler version
1588
+
1589
+ 11.4.0
1590
+
1591
+ ### CUDA/cuDNN version
1592
+
1593
+ 11.8.0/8.9.7.29
1594
+
1595
+ ### GPU model and memory
1596
+
1597
+ _No response_
1598
+
1599
+ ### Current behavior?
1600
+
1601
+ When building TF from source using the Spack package manager, I see the following build failure:
1602
+ ```
1603
+ ERROR: /tmp/spackkiy_sjk0/dfa266778fb055fec5b77ad2acb73759/external/local_xla/xla/service/gpu/kernels/BUILD:157:13: Compiling xla/service/gpu/kernels/topk_kernel_bfloat16.cu.cc failed: (Exit 1): crosstool_wrapper_driver_is_not_gcc failed: error executing command (from target @local_xla//xla/service/gpu/kernels:topk_kernel_cuda)
1604
+ ...
1605
+ external/com_google_absl/absl/strings/internal/str_format/bind.h: In constructor ‘absl::lts_20230802::str_format_internal::FormatSpecTemplate<Args>::FormatSpecTemplate(const absl::lts_20230802::str_format_internal::ExtendedParsedFormat<absl::lts_20230802::FormatConversionCharSet(C)...>&)’:
1606
+ external/com_google_absl/absl/strings/internal/str_format/bind.h:172:1: error: parse error in template argument list
1607
+ 172 | CheckArity<sizeof...(C), sizeof...(Args)>();
1608
+ | ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1609
+ external/com_google_absl/absl/strings/internal/str_format/bind.h:172:63: error: expected ‘;’ before ‘)’ token
1610
+ 172 | CheckArity<sizeof...(C), sizeof...(Args)>();
1611
+ | ^
1612
+ external/com_google_absl/absl/strings/internal/str_format/bind.h:173:147: error: template argument 1 is invalid
1613
+ 173 | CheckMatches<C...>(absl::make_index_sequence<sizeof...(C)>{});
1614
+ | ^
1615
+ external/com_google_absl/absl/strings/internal/str_format/bind.h:173:151: error: expected primary-expression before ‘{’ token
1616
+ 173 | CheckMatches<C...>(absl::make_index_sequence<sizeof...(C)>{});
1617
+ | ^
1618
+ external/com_google_absl/absl/strings/internal/str_format/bind.h:173:151: error: expected ‘;’ before ‘{’ token
1619
+ external/com_google_absl/absl/strings/internal/str_format/bind.h:173:153: error: expected primary-expression before ‘)’ token
1620
+ 173 | CheckMatches<C...>(absl::make_index_sequence<sizeof...(C)>{});
1621
+ | ^
1622
+ Target //tensorflow/tools/pip_package:build_pip_package failed to build
1623
+ INFO: Elapsed time: 1238.631s, Critical Path: 57.51s
1624
+ INFO: 17066 processes: 6004 internal, 11062 local.
1625
+ FAILED: Build did NOT complete successfully
1626
+ ```
1627
+
1628
+ ### Standalone code to reproduce the issue
1629
+
1630
+ See the below build log for steps to reproduce the issue.
1631
+
1632
+ ### Relevant log output
1633
+
1634
+ * [build log](https://github.com/tensorflow/tensorflow/files/14547197/spack-build-out.txt)
1635
+ * [build env](https://github.com/tensorflow/tensorflow/files/14547196/spack-build-env-mods.txt)
1636
+ ",2024-03-09T17:20:29Z,1
1637
+ core dumped with tf.raw_ops.FakeQuantWithMinMaxVarsPerChannel,"### Issue type
1638
+
1639
+ Bug
1640
+
1641
+ ### Have you reproduced the bug with TensorFlow Nightly?
1642
+
1643
+ Yes
1644
+
1645
+ ### Source
1646
+
1647
+ binary
1648
+
1649
+ ### TensorFlow version
1650
+
1651
+ tf 2.15
1652
+
1653
+ ### Custom code
1654
+
1655
+ Yes
1656
+
1657
+ ### OS platform and distribution
1658
+
1659
+ Ubuntu 20.04
1660
+
1661
+ ### Mobile device
1662
+
1663
+ _No response_
1664
+
1665
+ ### Python version
1666
+
1667
+ 3.9
1668
+
1669
+ ### Bazel version
1670
+
1671
+ _No response_
1672
+
1673
+ ### GCC/compiler version
1674
+
1675
+ _No response_
1676
+
1677
+ ### CUDA/cuDNN version
1678
+
1679
+ _No response_
1680
+
1681
+ ### GPU model and memory
1682
+
1683
+ _No response_
1684
+
1685
+ ### Current behavior?
1686
+
1687
+ core dumped error with specific input parameters.
1688
+
1689
+ ### Standalone code to reproduce the issue
1690
+
1691
+ ```shell
1692
+ import tensorflow as tf
1693
+
1694
+ # Generate input data
1695
+ input_data = tf.constant([[1.5, 2.5, 3.5], [4.5, 5.5, 6.5]])
1696
+
1697
+ # Define min and max values per channel
1698
+ min_per_channel = tf.constant([1.0, 2.0, 3.0])
1699
+ max_per_channel = tf.constant([2.0, 3.0, 4.0])
1700
+
1701
+ # Invoke tf.raw_ops.FakeQuantWithMinMaxVarsPerChannel with inputs as 0-dimensional tensor and max as a 1x3 tensor
1702
+ quantized_output = tf.raw_ops.FakeQuantWithMinMaxVarsPerChannel(inputs=tf.constant(0.0), min=min_per_channel, max=max_per_channel, num_bits=8, narrow_range=False)
1703
+
1704
+ # Print the quantized output
1705
+ print(quantized_output)
1706
+ ```
1707
+
1708
+
1709
+ ### Relevant log output
1710
+
1711
+ ```shell
1712
+ 2024-03-09 15:02:07.858055: F tensorflow/core/framework/tensor_shape.cc:356] Check failed: d >= 0 (0 vs. -1)
1713
+ Aborted (core dumped)
1714
+ ```
1715
+ ",2024-03-09T15:03:18Z,0
1716
+ core dumped with tf.raw_ops.DrawBoundingBoxes and tf.raw_ops.DrawBoundingBoxesV2,"### Issue type
1717
+
1718
+ Bug
1719
+
1720
+ ### Have you reproduced the bug with TensorFlow Nightly?
1721
+
1722
+ Yes
1723
+
1724
+ ### Source
1725
+
1726
+ binary
1727
+
1728
+ ### TensorFlow version
1729
+
1730
+ tf 2.15
1731
+
1732
+ ### Custom code
1733
+
1734
+ Yes
1735
+
1736
+ ### OS platform and distribution
1737
+
1738
+ Ubuntu 20.04
1739
+
1740
+ ### Mobile device
1741
+
1742
+ _No response_
1743
+
1744
+ ### Python version
1745
+
1746
+ 3.9
1747
+
1748
+ ### Bazel version
1749
+
1750
+ _No response_
1751
+
1752
+ ### GCC/compiler version
1753
+
1754
+ _No response_
1755
+
1756
+ ### CUDA/cuDNN version
1757
+
1758
+ _No response_
1759
+
1760
+ ### GPU model and memory
1761
+
1762
+ _No response_
1763
+
1764
+ ### Current behavior?
1765
+
1766
+ core dumped error with specific input parameters.
1767
+
1768
+ ### Standalone code to reproduce the issue
1769
+
1770
+ 1. The code of `tf.raw_ops.DrawBoundingBoxes`:
1771
+ ```shell
1772
+ import tensorflow as tf
1773
+ import numpy as np
1774
+
1775
+ # Generate input data
1776
+ batch_size = 1
1777
+ image_height = 100
1778
+ image_width = 100
1779
+ num_channels = 3
1780
+ num_boxes = 2
1781
+
1782
+ images = np.random.rand(image_height, image_width, num_channels).astype(np.float32) # Remove the batch dimension
1783
+ boxes = np.random.rand(batch_size, num_boxes, 4).astype(np.float32)
1784
+
1785
+ # Invoke tf.raw_ops.DrawBoundingBoxes with a zero-dimensional tensor for images
1786
+ drawn_images = tf.raw_ops.DrawBoundingBoxes(images=tf.convert_to_tensor(images),
1787
+ boxes=tf.convert_to_tensor(boxes))
1788
+
1789
+ # Print the result
1790
+ print(drawn_images)
1791
+ ```
1792
+
1793
+ 2. The code of `tf.raw_ops.DrawBoundingBoxesV2`:
1794
+ ```
1795
+ import tensorflow as tf
1796
+ import numpy as np
1797
+
1798
+ # Generate input data
1799
+ image_height = 100
1800
+ image_width = 100
1801
+ num_channels = 3
1802
+ num_boxes = 2
1803
+
1804
+ images = tf.random.uniform((image_height, image_width, num_channels)) # Change the shape to satisfy the requirement of a zero-dimensional tensor
1805
+ boxes = tf.random.uniform((1, num_boxes, 4))
1806
+ colors = tf.constant([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]) # Define colors for each bounding box
1807
+
1808
+ # Invoke tf.raw_ops.DrawBoundingBoxesV2
1809
+ output_images = tf.raw_ops.DrawBoundingBoxesV2(images=images, boxes=boxes, colors=colors)
1810
+
1811
+ # Print the output images
1812
+ print(output_images)
1813
+ ```
1814
+
1815
+
1816
+ ### Relevant log output
1817
+
1818
+ ```shell
1819
+ 2024-03-09 14:55:53.834849: F tensorflow/core/framework/tensor_shape.cc:357] Check failed: d < dims() (3 vs. 3)
1820
+ Aborted (core dumped)
1821
+ ```
1822
+ ",2024-03-09T14:57:17Z,2
1823
+ Aborted (core dumped) with tf.raw_ops.AvgPoolGrad,"### Issue type
1824
+
1825
+ Bug
1826
+
1827
+ ### Have you reproduced the bug with TensorFlow Nightly?
1828
+
1829
+ Yes
1830
+
1831
+ ### Source
1832
+
1833
+ binary
1834
+
1835
+ ### TensorFlow version
1836
+
1837
+ tf 2.15
1838
+
1839
+ ### Custom code
1840
+
1841
+ Yes
1842
+
1843
+ ### OS platform and distribution
1844
+
1845
+ Ubuntu 20.04
1846
+
1847
+ ### Mobile device
1848
+
1849
+ _No response_
1850
+
1851
+ ### Python version
1852
+
1853
+ 3.9
1854
+
1855
+ ### Bazel version
1856
+
1857
+ _No response_
1858
+
1859
+ ### GCC/compiler version
1860
+
1861
+ _No response_
1862
+
1863
+ ### CUDA/cuDNN version
1864
+
1865
+ _No response_
1866
+
1867
+ ### GPU model and memory
1868
+
1869
+ _No response_
1870
+
1871
+ ### Current behavior?
1872
+
1873
+ core dumped error with specific input parameters.
1874
+
1875
+ ### Standalone code to reproduce the issue
1876
+
1877
+ ```shell
1878
+ import tensorflow as tf
1879
+
1880
+ # Generate input data
1881
+ input_data = tf.random.normal([1, 28, 28, 3])
1882
+ grad = tf.random.normal([1, 14, 14, 6]) # Change the number of channels in grad tensor
1883
+
1884
+ # Perform average pooling
1885
+ result = tf.nn.avg_pool2d(input_data, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID', data_format='NHWC')
1886
+
1887
+ # Compute gradient
1888
+ grad_result = tf.raw_ops.AvgPoolGrad(orig_input_shape=tf.shape(input_data), grad=grad, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID', data_format='NHWC')
1889
+
1890
+ print(grad_result)
1891
+ ```
1892
+
1893
+
1894
+ ### Relevant log output
1895
+
1896
+ ```shell
1897
+ free(): corrupted unsorted chunks
1898
+ Aborted (core dumped)
1899
+ ```
1900
+ ",2024-03-09T14:54:40Z,0
1901
+ Segmentation fault with tf.raw_ops.AudioSpectrogram,"### Issue type
1902
+
1903
+ Bug
1904
+
1905
+ ### Have you reproduced the bug with TensorFlow Nightly?
1906
+
1907
+ Yes
1908
+
1909
+ ### Source
1910
+
1911
+ binary
1912
+
1913
+ ### TensorFlow version
1914
+
1915
+ tf 2.15
1916
+
1917
+ ### Custom code
1918
+
1919
+ Yes
1920
+
1921
+ ### OS platform and distribution
1922
+
1923
+ Ubuntu 20.04
1924
+
1925
+ ### Mobile device
1926
+
1927
+ _No response_
1928
+
1929
+ ### Python version
1930
+
1931
+ 3.9
1932
+
1933
+ ### Bazel version
1934
+
1935
+ _No response_
1936
+
1937
+ ### GCC/compiler version
1938
+
1939
+ _No response_
1940
+
1941
+ ### CUDA/cuDNN version
1942
+
1943
+ _No response_
1944
+
1945
+ ### GPU model and memory
1946
+
1947
+ _No response_
1948
+
1949
+ ### Current behavior?
1950
+
1951
+ Segmentation fault error with specific input parameters.
1952
+
1953
+ ### Standalone code to reproduce the issue
1954
+
1955
+ ```shell
1956
+ import tensorflow as tf
1957
+
1958
+ # Generate input data
1959
+ input_data = tf.random.normal([1, 44100], dtype=tf.float32)
1960
+
1961
+ # Invoke tf.raw_ops.AudioSpectrogram with a negative window_size
1962
+ spectrogram = tf.raw_ops.AudioSpectrogram(input=input_data, window_size=-1024, stride=64, magnitude_squared=False)
1963
+
1964
+ # Print the spectrogram
1965
+ print(spectrogram)
1966
+ ```
1967
+
1968
+
1969
+ ### Relevant log output
1970
+
1971
+ ```shell
1972
+ Segmentation fault (core dumped)
1973
+ ```
1974
+ ",2024-03-09T14:50:26Z,1
1975
+ core dumped with tf.quantization.fake_quant_with_min_max_vars_per_channel,"### Issue type
1976
+
1977
+ Bug
1978
+
1979
+ ### Have you reproduced the bug with TensorFlow Nightly?
1980
+
1981
+ Yes
1982
+
1983
+ ### Source
1984
+
1985
+ binary
1986
+
1987
+ ### TensorFlow version
1988
+
1989
+ tf 2.15
1990
+
1991
+ ### Custom code
1992
+
1993
+ Yes
1994
+
1995
+ ### OS platform and distribution
1996
+
1997
+ Ubuntu 20.04
1998
+
1999
+ ### Mobile device
2000
+
2001
+ _No response_
2002
+
2003
+ ### Python version
2004
+
2005
+ 3.9
2006
+
2007
+ ### Bazel version
2008
+
2009
+ _No response_
2010
+
2011
+ ### GCC/compiler version
2012
+
2013
+ _No response_
2014
+
2015
+ ### CUDA/cuDNN version
2016
+
2017
+ _No response_
2018
+
2019
+ ### GPU model and memory
2020
+
2021
+ _No response_
2022
+
2023
+ ### Current behavior?
2024
+
2025
+ core dumped error with specific input parameters.
2026
+
2027
+ ### Standalone code to reproduce the issue
2028
+
2029
+ ```shell
2030
+ import tensorflow as tf
2031
+
2032
+ input_data = tf.constant(3.0)
2033
+
2034
+ min_per_channel = tf.constant(2.0)
2035
+ max_per_channel = tf.constant(4.0)
2036
+
2037
+ quantized_data = tf.quantization.fake_quant_with_min_max_vars_per_channel(input_data, min_per_channel, max_per_channel)
2038
+ print(quantized_data)
2039
+ ```
2040
+
2041
+
2042
+ ### Relevant log output
2043
+
2044
+ ```shell
2045
+ 2024-03-09 14:43:28.826225: F tensorflow/core/framework/tensor_shape.cc:356] Check failed: d >= 0 (0 vs. -1)
2046
+ Aborted (core dumped)
2047
+ ```
2048
+ ",2024-03-09T14:47:46Z,0
utils.py ADDED
@@ -0,0 +1,84 @@
+ import requests
+ import json
+ import csv
+
+ from constants import OPENAI_API_KEY, OPENAI_API_BASE_URL, TEXT_MODEL_ENGINE, GITHUB_AUTH_KEY
+
+
+ def create_open_ai_query(input_query, system_message=None, model_engine=TEXT_MODEL_ENGINE,
+                          functions=None, function_call=None):
+     """Send a chat-completion request to the OpenAI API and return the reply text."""
+     openai_url = f"{OPENAI_API_BASE_URL}/chat/completions"
+     headers = {'Authorization': f'Bearer {OPENAI_API_KEY}', 'Content-Type': 'application/json'}
+     messages = []
+     if system_message:
+         messages.append({"role": "system", "content": system_message})
+     messages.append({"role": "user", "content": input_query})
+     payload = {
+         'model': model_engine,
+         'messages': messages,
+         'response_format': {"type": "json_object"}
+     }
+     if functions:
+         payload['functions'] = functions
+         payload['function_call'] = function_call
+     response = requests.post(openai_url, headers=headers, data=json.dumps(payload))
+     if response.status_code == 200 and 'choices' in response.json():
+         # Function-calling replies carry the text in `function_call.arguments`;
+         # plain completions carry it in `message.content`.
+         if functions:
+             content_text = response.json()['choices'][0]['message']['function_call']['arguments'].strip()
+         else:
+             content_text = response.json()['choices'][0]['message']['content'].strip()
+         return {"success": True, "data": content_text, "response_json": response.json()}
+     return {"success": False, "error": response.text}
+
+
+ def generate_issues_json(repo_url):
+     """Fetch the issues of a repository from the GitHub REST API as a JSON list."""
+     # headers = {'Authorization': f'{GITHUB_AUTH_KEY}'}
+     response = requests.get(repo_url)
+     if response.status_code == 200:
+         return {'success': True, 'data': response.json()}
+     else:
+         return {'success': False, 'message': f'Request failed with status code: {response.status_code}'}
+
+
+ def convert_json_to_structured_csv(response_from_github_api, csv_filename):
+     """Write the fetched issues to a CSV with title, body, creation date and comment count."""
+     fieldnames = ['Issue Title', 'Description', 'Created At', 'Comments']
+     with open(csv_filename, 'w', newline='') as csvfile:
+         writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
+         writer.writeheader()
+         try:
+             for issue_data in response_from_github_api:
+                 issue_title = issue_data.get('title', '')
+                 description = issue_data.get('body', '')
+                 created_at = issue_data.get('created_at', '')
+                 comments = issue_data.get('comments', '')
+                 writer.writerow({
+                     'Issue Title': issue_title,
+                     'Description': description,
+                     'Created At': created_at,
+                     'Comments': comments
+                 })
+             return {"success": True, "csv_data": f"{csv_filename}"}
+         except Exception as e:
+             return {"success": False, "error": f"{e}"}
+
+
+ def get_issues_csv(repo_url, csv_file_name):
+     """Fetch issues from the given API URL and write them to `csv_file_name`."""
+     list_of_github_issues = generate_issues_json(repo_url)
+     print(list_of_github_issues)
+     if list_of_github_issues["success"]:
+         print(type(list_of_github_issues["data"]))
+         generate_issues_csv = convert_json_to_structured_csv(list_of_github_issues["data"], csv_file_name)
+         print(generate_issues_csv)
+         if generate_issues_csv["success"]:
+             return {"success": True, "csv_data": f"{csv_file_name}"}
+         else:
+             return {"success": False}
+     else:
+         return {"success": False}
+
+
+ def convert_repo_url_to_git_api_url(github_repo_url):
+     """Turn a github.com repository URL into the corresponding issues API endpoint."""
+     parts = github_repo_url.strip("/").split("/")
+     owner, repo = parts[-2], parts[-1]
+     api_url = f"https://api.github.com/repos/{owner}/{repo}/issues"
+     return api_url
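
Taken together, these helpers form a small pipeline: `convert_repo_url_to_git_api_url` builds the issues endpoint, `get_issues_csv` downloads the issues and writes them to a CSV such as tf.csv, and `create_open_ai_query` can then be used to analyse individual issues. A minimal usage sketch, assuming `constants.py` defines the imported OPENAI_API_KEY, OPENAI_API_BASE_URL, TEXT_MODEL_ENGINE and GITHUB_AUTH_KEY values; the repository URL, file name and prompt are illustrative:

```python
# Sketch: scrape a repository's issues into a CSV, then classify one issue.
from utils import (convert_repo_url_to_git_api_url, get_issues_csv,
                   create_open_ai_query)

api_url = convert_repo_url_to_git_api_url("https://github.com/tensorflow/tensorflow")
# api_url == "https://api.github.com/repos/tensorflow/tensorflow/issues"

result = get_issues_csv(api_url, "tf.csv")
if result["success"]:
    issue_text = "TF 2.16.1 fails to list GPUs after pip install tensorflow[and-cuda]"
    reply = create_open_ai_query(
        input_query=f"Classify this TensorFlow issue and answer as a JSON object: {issue_text}",
        system_message="You are a GitHub issue triage assistant.",
    )
    print(reply["data"] if reply["success"] else reply["error"])
```

Note that `generate_issues_json` calls the GitHub API unauthenticated (the auth header is commented out), so it only returns the default first page of issues and is subject to the unauthenticated rate limit.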