andrijdavid committed
Commit 2b5bca8
Parent: 2e5c21b

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,17 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Q2_K/Q2_K-00001-of-00001.gguf filter=lfs diff=lfs merge=lfs -text
+ Q3_K_L/Q3_K_L-00001-of-00001.gguf filter=lfs diff=lfs merge=lfs -text
+ Q3_K_M/Q3_K_M-00001-of-00001.gguf filter=lfs diff=lfs merge=lfs -text
+ Q3_K_S/Q3_K_S-00001-of-00001.gguf filter=lfs diff=lfs merge=lfs -text
+ Q4_0/Q4_0-00001-of-00001.gguf filter=lfs diff=lfs merge=lfs -text
+ Q4_1/Q4_1-00001-of-00001.gguf filter=lfs diff=lfs merge=lfs -text
+ Q4_K_M/Q4_K_M-00001-of-00001.gguf filter=lfs diff=lfs merge=lfs -text
+ Q4_K_S/Q4_K_S-00001-of-00001.gguf filter=lfs diff=lfs merge=lfs -text
+ Q5_0/Q5_0-00001-of-00001.gguf filter=lfs diff=lfs merge=lfs -text
+ Q5_1/Q5_1-00001-of-00001.gguf filter=lfs diff=lfs merge=lfs -text
+ Q5_K_M/Q5_K_M-00001-of-00001.gguf filter=lfs diff=lfs merge=lfs -text
+ Q5_K_S/Q5_K_S-00001-of-00001.gguf filter=lfs diff=lfs merge=lfs -text
+ Q6_K/Q6_K-00001-of-00001.gguf filter=lfs diff=lfs merge=lfs -text
+ Q8_0/Q8_0-00001-of-00001.gguf filter=lfs diff=lfs merge=lfs -text
Q2_K/Q2_K-00001-of-00001.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c51409210d735a6643df49f4079b47e269b0d79e1bb3d05e2db875b29650500a
+ size 3179130816
Q3_K_L/Q3_K_L-00001-of-00001.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:073f7bfe288d425fece4f61a61926e5cba0be77a4ffe15aa009b8aacad0ce3b2
+ size 4321955776
Q3_K_M/Q3_K_M-00001-of-00001.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d13e35aaa9688b8feaa3723a869dac4c55949e8d9a2561725705c949227f5451
+ size 4018917312
Q3_K_S/Q3_K_S-00001-of-00001.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7ddbc92e374293e17d1affbd1bfe90ff66c6c4b977cf1fc7031738e22ce49bf7
+ size 3664498624
Q4_0/Q4_0-00001-of-00001.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c656be3ec689c49ebae8f026d86138ebae08253331ec1009ccc1abad33d06204
+ size 4661211072
Q4_1/Q4_1-00001-of-00001.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d27001d7888fa710224425af0ec8573323facff019ed4ff42a6bb2ef373252c6
+ size 5130252224
Q4_K_M/Q4_K_M-00001-of-00001.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:42b5715ed16f79940c42cab4171b5440b12a093dc913609462034fd42ef64551
+ size 4920733632
Q4_K_S/Q4_K_S-00001-of-00001.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:640b3fd96fce9ba9590d212ff7ea9c43232397be315ac4429447f99d0ae3fda4
+ size 4692668352
Q5_0/Q5_0-00001-of-00001.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:39e7f0f6db3ca38d01c828f776343df9eb8f9defd7c8a043b6e6555002d76334
+ size 5599293376
Q5_1/Q5_1-00001-of-00001.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bc2693bb9aca696e8417771751026e1a8d7f84bdbe6a536141e8a2b5a5b6412d
+ size 6068334528
Q5_K_M/Q5_K_M-00001-of-00001.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fbfbf2b87bc017ed93c47bf6333151cb21be68d6ceaba6d625e4607de2a16454
+ size 5732986816
Q5_K_S/Q5_K_S-00001-of-00001.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8555805b4812446be06e15193ee312da9c4d83e579dadace4f7681f6c2ddc672
+ size 5599293376
Q6_K/Q6_K-00001-of-00001.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:37cc94da218d8e5b6b393135b3fba6590a6de262ead72ffe5dda8c442f05b0ba
+ size 6596005824
Q8_0/Q8_0-00001-of-00001.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:77c7562dd0c1c638b10ee7f49fea4177b64d77dc41427108a62ef76523a0b061
+ size 8540770240
README.md ADDED
@@ -0,0 +1,297 @@
---
language:
- tr
license: llama3
tags:
- Turkish
- turkish
- Llama
- Llama3
- GGUF
base_model: meta-llama/Meta-Llama-3-8B
pipeline_tag: text-generation
quantized_by: andrijdavid
---
# Turkish-Llama-8b-v0.1-GGUF
- Original model: [Turkish-Llama-8b-v0.1](https://huggingface.co/ytu-ce-cosmos/Turkish-Llama-8b-v0.1)

<!-- description start -->
## Description

This repo contains GGUF format model files for [Turkish-Llama-8b-v0.1](https://huggingface.co/ytu-ce-cosmos/Turkish-Llama-8b-v0.1).

<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui). The most widely used web UI, with many features and powerful extensions, and support for GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama). A lightweight and extensible framework for building and running language models locally, with a simple API for creating, managing, and executing models, plus a library of pre-built models for use in various applications (see the Modelfile sketch after this list).
* [KoboldCpp](https://github.com/LostRuins/koboldcpp). A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io). A free and open-source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/). An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/). An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). A Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle). A Rust-based ML framework focused on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers). A Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [localGPT](https://github.com/PromtEngineer/localGPT). An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
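
For instance, running a downloaded GGUF file with Ollama takes only a minimal Modelfile (a sketch; the model name `turkish-llama` is an arbitrary example):

```shell
# Create a Modelfile pointing at the local GGUF file, register it, then run it
echo 'FROM ./Q4_0/Q4_0-00001-of-00001.gguf' > Modelfile
ollama create turkish-llama -f Modelfile
ollama run turkish-llama
```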

<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
<!-- compatibility_gguf end -->
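
The bpw figures above give a quick way to estimate file sizes: multiply the parameter count by bpw and divide by 8. A rough sketch (the 8.03B parameter count is an assumption for a Llama 3 8B model; the actual files run somewhat larger because some tensors are kept at higher precision and the file carries metadata):

```python
# Lower-bound file-size estimate from bits-per-weight
params = 8.03e9  # assumed parameter count for a Llama 3 8B model
for name, bpw in [("Q2_K", 2.5625), ("Q3_K", 3.4375), ("Q4_K", 4.5),
                  ("Q5_K", 5.5), ("Q6_K", 6.5625)]:
    print(f"{name}: ~{params * bpw / 8 / 1e9:.2f} GB")
```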

<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple quantisation formats are provided, and most users only want to pick and download a single folder.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: `LiteLLMs/Turkish-Llama-8b-v0.1-GGUF` and, below it, a specific filename to download, such as: `Q4_0/Q4_0-00001-of-00001.gguf`.

Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download LiteLLMs/Turkish-Llama-8b-v0.1-GGUF Q4_0/Q4_0-00001-of-00001.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download LiteLLMs/Turkish-Llama-8b-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
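
The same download can also be done from Python (a minimal sketch using the `huggingface_hub` API; adjust `local_dir` as needed):

```python
# Download a single quant file via the huggingface_hub Python API
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="LiteLLMs/Turkish-Llama-8b-v0.1-GGUF",
    filename="Q4_0/Q4_0-00001-of-00001.gguf",
    local_dir=".",  # save into the current directory
)
```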

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install 'huggingface_hub[hf_transfer]'
```

And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/Turkish-Llama-8b-v0.1-GGUF Q4_0/Q4_0-00001-of-00001.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
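
In PowerShell, the equivalent is:

```shell
$env:HF_HUB_ENABLE_HF_TRANSFER = 1
huggingface-cli download LiteLLMs/Turkish-Llama-8b-v0.1-GGUF Q4_0/Q4_0-00001-of-00001.gguf --local-dir . --local-dir-use-symlinks False
```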
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00001.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p "<PROMPT>"` argument with `-i -ins`, for example:
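
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00001.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -i -ins
```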
129
+
130
+ For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
131
+
132
+ ## How to run in `text-generation-webui`
133
+
134
+ Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
135
+
136
+ ## How to run from Python code
137
+
138
+ You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, set the CMAKE_ARGS variable in PowerShell before installing; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
165
+
166
+ #### Simple llama-cpp-python example code
167
+
168
+ ```python
169
+ from llama_cpp import Llama
170
+ # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
171
+ llm = Llama(
172
+ model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
173
+ n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
174
+ n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
175
+ n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
176
+ )
177
+ # Simple inference example
178
+ output = llm(
179
+ "<PROMPT>", # Prompt
180
+ max_tokens=512, # Generate up to 512 tokens
181
+ stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
182
+ echo=True # Whether to echo the prompt
183
+ )
184
+ # Chat Completion API
185
+ llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
186
+ llm.create_chat_completion(
187
+ messages = [
188
+ {"role": "system", "content": "You are a story writing assistant."},
189
+ {
190
+ "role": "user",
191
+ "content": "Write a story about llamas."
192
+ }
193
+ ]
194
+ )
195
+ ```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain (a minimal llama-cpp-python sketch follows the list):

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
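
A minimal sketch of the llama-cpp-python route (assumes the `langchain-community` package is installed; the class location may vary across LangChain versions):

```python
# Use the GGUF file through LangChain's LlamaCpp wrapper
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./Q4_0/Q4_0-00001-of-00001.gguf",
    n_gpu_layers=35,  # set to 0 without GPU acceleration
    n_ctx=8192,
    temperature=0.7,
)
print(llm.invoke("<PROMPT>"))
```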

<!-- README_GGUF.md-how-to-run end -->

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: Turkish-Llama-8b-v0.1

<img src="./CosmosLlaMa.png" width="400px"/>

# Cosmos LLaMa

This model is a fully fine-tuned version of the LLaMA-3 8B model, trained on a 30GB Turkish dataset.

Cosmos LLaMa is designed for text generation tasks, providing the ability to continue a given text snippet in a coherent and contextually relevant manner. Because the training data is diverse - including websites, books, and other text sources - the model can exhibit biases. Users should be aware of these biases and use the model responsibly.

## Example Usage

Here is an example of how to use the model in Colab:

```python
!pip install -U accelerate bitsandbytes
```

```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_name = "ytu-ce-cosmos/Turkish-Llama-8b-v0.1"

# 8-bit quantization config; llm_int8_enable_fp32_cpu_offload lets modules that
# don't fit in VRAM spill over to the CPU in fp32
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_enable_fp32_cpu_offload=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    quantization_config=bnb_config,
)
```

```python
# The model is already placed on devices via device_map above, so the pipeline
# takes it as-is
text_generator = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    temperature=0.3,
    repetition_penalty=1.1,
    top_p=0.9,
    max_length=610,
    do_sample=True,
    return_full_text=False,
    min_new_tokens=32
)
```

```python
text = """Yapay zeka hakkında 3 tespit yaz.\n"""  # "Write 3 observations about artificial intelligence."

r = text_generator(text)

print(r[0]['generated_text'])

# Example output (Turkish):
"""
1. Yapay Zeka (AI), makinelerin insan benzeri bilişsel işlevleri gerçekleştirmesini sağlayan bir teknoloji alanıdır.

2. Yapay zekanın geliştirilmesi ve uygulanması, sağlık hizmetlerinden eğlenceye kadar çeşitli sektörlerde çok sayıda fırsat sunmaktadır.

3. Yapay zeka teknolojisinin potansiyel faydaları önemli olsa da mahremiyet, işten çıkarma ve etik hususlar gibi konularla ilgili endişeler de var.
"""
# Roughly: 1. AI is a field of technology that enables machines to perform human-like cognitive functions.
# 2. Developing and applying AI offers many opportunities across sectors, from healthcare to entertainment.
# 3. While the potential benefits of AI are significant, there are also concerns around privacy, job displacement, and ethics.
```

# Acknowledgments
- Thanks to the generous support from the Hugging Face team, it is possible to download models from their S3 storage 🤗
- Computing resources used in this work were provided by the National Center for High Performance Computing of Turkey (UHeM) under grant numbers 1016912023 and 1018512024
- Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC)

### Contact
COSMOS AI Research Group, Yildiz Technical University Computer Engineering Department <br>
https://cosmos.yildiz.edu.tr/ <br>
cosmos@yildiz.edu.tr

<!-- original-model-card end -->