Dataset schema (column name, dtype, and summary statistics):

| Column | dtype | Summary |
| --- | --- | --- |
| modelId | stringlengths | 5–122 chars |
| author | stringlengths | 2–42 chars |
| last_modified | unknown | n/a |
| downloads | int64 | 0–195M |
| likes | int64 | 0–6.47k |
| library_name | stringclasses | 327 values |
| tags | sequencelengths | 1–4.05k items |
| pipeline_tag | stringclasses | 51 values |
| createdAt | unknown | n/a |
| card | stringlengths | 1–913k chars |
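The rows below follow this schema. As a minimal sketch (assuming the dump lives in a Hugging Face dataset repository; the repository id here is a placeholder, not the real dataset name), the rows can be streamed with the `datasets` library:

```python
# Minimal sketch: iterate over rows with the columns described above.
# "your-namespace/model-card-dump" is a placeholder id, not the real dataset name.
from datasets import load_dataset

ds = load_dataset("your-namespace/model-card-dump", split="train", streaming=True)
for row in ds:
    # Each row carries the metadata columns plus the raw model card text.
    print(row["modelId"], row["downloads"], row["likes"], row["pipeline_tag"])
    card_text = row["card"]  # full model card as a single string (may be "Entry not found")
    break
```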
modelId: peakji/qwen2.5-1.5b-instruct-trim
author: peakji
last_modified: "2024-09-19T01:03:51Z"
downloads: 0
likes: 0
library_name: null
tags: [ "safetensors", "qwen2", "region:us" ]
pipeline_tag: null
createdAt: "2024-09-19T01:02:16Z"
card:
Entry not found
modelId: utahnlp/newsqa_t5-3b_seed-1
author: utahnlp
last_modified: "2024-09-19T01:05:40Z"
downloads: 0
likes: 0
library_name: null
tags: [ "safetensors", "t5", "region:us" ]
pipeline_tag: null
createdAt: "2024-09-19T01:02:26Z"
card:
Entry not found
modelId: tronsdds/Qwen-Qwen1.5-1.8B-1726707773
author: tronsdds
last_modified: "2024-09-19T01:03:05Z"
downloads: 0
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "base_model:adapter:Qwen/Qwen1.5-1.8B", "region:us" ]
pipeline_tag: null
createdAt: "2024-09-19T01:02:53Z"
card:
--- base_model: Qwen/Qwen1.5-1.8B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
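The adapter card above leaves its "How to Get Started with the Model" section as [More Information Needed]. A minimal sketch, assuming the repository contains a standard PEFT (LoRA) adapter for Qwen/Qwen1.5-1.8B as the row's tags indicate; the prompt and generation settings are illustrative:

```python
# Minimal sketch: attach the PEFT adapter in this repo to its Qwen1.5-1.8B base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen1.5-1.8B"
adapter_id = "tronsdds/Qwen-Qwen1.5-1.8B-1726707773"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # loads the adapter weights

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```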
modelId: satvik26/sdxl-satvik-20
author: satvik26
last_modified: "2024-09-19T01:02:58Z"
downloads: 0
likes: 0
library_name: null
tags: [ "region:us" ]
pipeline_tag: null
createdAt: "2024-09-19T01:02:58Z"
card:
--- base_model: stablediffusionapi/epicrealism-xl library_name: diffusers license: openrail++ tags: - text-to-image - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers instance_prompt: picture of s1skcj7vs3vsdg8 person widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - satvik26/sdxl-satvik-20 <Gallery /> ## Model description These are satvik26/sdxl-satvik-20 LoRA adaption weights for stablediffusionapi/epicrealism-xl. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use picture of s1skcj7vs3vsdg8 person to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](satvik26/sdxl-satvik-20/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
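The card's "How to use" block is still a TODO. A minimal sketch, assuming the standard diffusers SDXL + LoRA loading path with the base model, VAE, and trigger prompt named in the card; the output filename is a placeholder:

```python
# Minimal sketch: load the epicrealism-xl base, the fp16-fix VAE, and this DreamBooth LoRA.
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stablediffusionapi/epicrealism-xl", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("satvik26/sdxl-satvik-20")  # LoRA weights from this repository

# The card's trigger phrase must appear in the prompt.
image = pipe(prompt="picture of s1skcj7vs3vsdg8 person", num_inference_steps=30).images[0]
image.save("sample.png")  # placeholder output path
```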
modelId: Krabat/google-gemma-2b-1726707796
author: Krabat
last_modified: "2024-09-19T01:03:20Z"
downloads: 0
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-2b", "base_model:adapter:google/gemma-2b", "region:us" ]
pipeline_tag: null
createdAt: "2024-09-19T01:03:16Z"
card:
--- base_model: google/gemma-2b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
modelId: peakji/qwen2.5-3b-instruct-trim
author: peakji
last_modified: "2024-09-19T01:05:59Z"
downloads: 0
likes: 0
library_name: null
tags: [ "safetensors", "qwen2", "region:us" ]
pipeline_tag: null
createdAt: "2024-09-19T01:03:54Z"
card:
Entry not found
modelId: tronsdds/google-gemma-2b-1726707881
author: tronsdds
last_modified: "2024-09-19T01:05:16Z"
downloads: 0
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-2b", "base_model:adapter:google/gemma-2b", "region:us" ]
pipeline_tag: null
createdAt: "2024-09-19T01:04:41Z"
card:
--- base_model: google/gemma-2b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
modelId: yunhuijang/qnu3tc47
author: yunhuijang
last_modified: "2024-09-19T01:04:47Z"
downloads: 0
likes: 0
library_name: null
tags: [ "region:us" ]
pipeline_tag: null
createdAt: "2024-09-19T01:04:47Z"
card:
Entry not found
modelId: SALUTEASD/Qwen-Qwen1.5-1.8B-1726707908
author: SALUTEASD
last_modified: "2024-09-19T01:05:07Z"
downloads: 0
likes: 0
library_name: null
tags: [ "region:us" ]
pipeline_tag: null
createdAt: "2024-09-19T01:05:07Z"
card:
Entry not found
modelId: SHLIM05/VIT
author: SHLIM05
last_modified: "2024-09-19T01:05:15Z"
downloads: 0
likes: 0
library_name: null
tags: [ "pytorch", "vit", "region:us" ]
pipeline_tag: null
createdAt: "2024-09-19T01:05:08Z"
card:
Entry not found
modelId: dogssss/Qwen-Qwen1.5-1.8B-1726707935
author: dogssss
last_modified: "2024-09-19T01:05:38Z"
downloads: 0
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "base_model:adapter:Qwen/Qwen1.5-1.8B", "region:us" ]
pipeline_tag: null
createdAt: "2024-09-19T01:05:36Z"
card:
--- base_model: Qwen/Qwen1.5-1.8B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
modelId: peakji/qwen2.5-7b-instruct-trim
author: peakji
last_modified: "2024-09-19T01:08:30Z"
downloads: 0
likes: 0
library_name: null
tags: [ "safetensors", "qwen2", "region:us" ]
pipeline_tag: null
createdAt: "2024-09-19T01:06:02Z"
card:
Entry not found
modelId: jan-hq/llama3-s-instruct-v0.3-checkpoint-7000-phase-3
author: jan-hq
last_modified: "2024-09-19T01:11:38Z"
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:jan-hq/llama3-s-instruct-v0.3-checkpoint-7000", "base_model:finetune:jan-hq/llama3-s-instruct-v0.3-checkpoint-7000", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: "2024-09-19T01:06:10Z"
card:
--- base_model: jan-hq/llama3-s-instruct-v0.3-checkpoint-7000 language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- # Uploaded model - **Developed by:** jan-hq - **License:** apache-2.0 - **Finetuned from model :** jan-hq/llama3-s-instruct-v0.3-checkpoint-7000 This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
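For completeness, a minimal sketch of running this checkpoint as an ordinary transformers text-generation model, assuming the repository holds full merged weights as the row's tags (transformers, llama, text-generation) suggest; the prompt is illustrative:

```python
# Minimal sketch: text generation with the uploaded checkpoint via the transformers pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="jan-hq/llama3-s-instruct-v0.3-checkpoint-7000-phase-3",
    device_map="auto",
)
print(generator("Hello, how are you?", max_new_tokens=64)[0]["generated_text"])
```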
modelId: utahnlp/newsqa_t5-3b_seed-2
author: utahnlp
last_modified: "2024-09-19T01:09:31Z"
downloads: 0
likes: 0
library_name: null
tags: [ "safetensors", "t5", "region:us" ]
pipeline_tag: null
createdAt: "2024-09-19T01:06:16Z"
card:
Entry not found
modelId: jerseyjerry/google-gemma-2b-it-1726707984
author: jerseyjerry
last_modified: "2024-09-19T01:06:33Z"
downloads: 0
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-2b-it", "base_model:adapter:google/gemma-2b-it", "region:us" ]
pipeline_tag: null
createdAt: "2024-09-19T01:06:24Z"
card:
--- base_model: google/gemma-2b-it library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
modelId: Krabat/google-gemma-7b-1726708003
author: Krabat
last_modified: "2024-09-19T01:06:46Z"
downloads: 0
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-7b", "base_model:adapter:google/gemma-7b", "region:us" ]
pipeline_tag: null
createdAt: "2024-09-19T01:06:44Z"
card:
--- base_model: google/gemma-7b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
modelId: latiao1999/Qwen-Qwen1.5-0.5B-1726708016
author: latiao1999
last_modified: "2024-09-19T01:07:06Z"
downloads: 0
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "base_model:adapter:Qwen/Qwen1.5-0.5B", "region:us" ]
pipeline_tag: null
createdAt: "2024-09-19T01:06:57Z"
card:
--- base_model: Qwen/Qwen1.5-0.5B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
modelId: FunPang/whisper-large-V3-QLoRA-Cantones
author: FunPang
last_modified: "2024-09-19T01:07:28Z"
downloads: 0
likes: 0
library_name: peft
tags: [ "peft", "tensorboard", "safetensors", "generated_from_trainer", "dataset:common_voice_13_0", "base_model:openai/whisper-large-v3", "base_model:adapter:openai/whisper-large-v3", "license:apache-2.0", "region:us" ]
pipeline_tag: null
createdAt: "2024-09-19T01:07:02Z"
card:
--- base_model: openai/whisper-large-v3 datasets: - common_voice_13_0 library_name: peft license: apache-2.0 tags: - generated_from_trainer model-index: - name: whisper-large-V3-QLoRA-Cantones results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-large-V3-QLoRA-Cantones This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the common_voice_13_0 dataset. It achieves the following results on the evaluation set: - Loss: 2.8906 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.0483 | 1.0 | 1753 | 2.8906 | ### Framework versions - PEFT 0.12.1.dev0 - Transformers 4.45.0.dev0 - Pytorch 2.4.0+cu118 - Datasets 2.21.0 - Tokenizers 0.19.1
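A minimal sketch of using this QLoRA adapter for inference on top of openai/whisper-large-v3, assuming a standard PEFT adapter layout; the audio filename is a placeholder and the clip is assumed to already be 16 kHz mono:

```python
# Minimal sketch: apply the QLoRA adapter to whisper-large-v3 and transcribe one clip.
import soundfile as sf
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base_id = "openai/whisper-large-v3"
adapter_id = "FunPang/whisper-large-V3-QLoRA-Cantones"

processor = WhisperProcessor.from_pretrained(base_id)
base = WhisperForConditionalGeneration.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # loads the adapter weights

waveform, sr = sf.read("cantonese_sample.wav")  # placeholder file, assumed 16 kHz mono
features = processor(waveform, sampling_rate=sr, return_tensors="pt").input_features
features = features.to(base.device, dtype=torch.float16)
print(processor.batch_decode(model.generate(features), skip_special_tokens=True)[0])
```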
modelId: amac720/loratrainingamac720
author: amac720
last_modified: "2024-09-19T01:08:06Z"
downloads: 0
likes: 0
library_name: null
tags: [ "region:us" ]
pipeline_tag: null
createdAt: "2024-09-19T01:08:06Z"
card:
Entry not found
modelId: solidrust/Replete-Reflection-llama-3.1-8b-AWQ
author: solidrust
last_modified: "2024-09-19T01:08:08Z"
downloads: 0
likes: 0
library_name: null
tags: [ "region:us" ]
pipeline_tag: null
createdAt: "2024-09-19T01:08:07Z"
card:
--- base_model: Replete-AI/Replete-Reflection-llama-3.1-8b library_name: transformers tags: - 4-bit - AWQ - text-generation - autotrain_compatible - endpoints_compatible pipeline_tag: text-generation inference: false quantized_by: Suparious --- # Replete-AI/Replete-Reflection-llama-3.1-8b AWQ - Model creator: [Replete-AI](https://huggingface.co/Replete-AI) - Original model: [Replete-Reflection-llama-3.1-8b](https://huggingface.co/Replete-AI/Replete-Reflection-llama-3.1-8b) ## How to use ### Install the necessary packages ```bash pip install --upgrade autoawq autoawq-kernels ``` ### Example Python code ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer, TextStreamer model_path = "solidrust/Replete-Reflection-llama-3.1-8b-AWQ" system_message = "You are Replete-Reflection-llama-3.1-8b, incarnated as a powerful AI. You were created by Replete-AI." # Load model model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) # Convert prompt to tokens prompt_template = """\ <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant""" prompt = "You're standing on the surface of the Earth. "\ "You walk one mile south, one mile west and one mile north. "\ "You end up exactly where you started. Where are you?" tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt), return_tensors='pt').input_ids.cuda() # Generate output generation_output = model.generate(tokens, streamer=streamer, max_new_tokens=512) ``` ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
modelId: huazi123/google-gemma-2b-1726708091
author: huazi123
last_modified: "2024-09-19T01:08:14Z"
downloads: 0
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-2b", "base_model:adapter:google/gemma-2b", "region:us" ]
pipeline_tag: null
createdAt: "2024-09-19T01:08:09Z"
card:
--- base_model: google/gemma-2b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
modelId: tronsdds/google-gemma-7b-1726708103
author: tronsdds
last_modified: "2024-09-19T01:09:12Z"
downloads: 0
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-7b", "base_model:adapter:google/gemma-7b", "region:us" ]
pipeline_tag: null
createdAt: "2024-09-19T01:08:23Z"
card:
--- base_model: google/gemma-7b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
modelId: peakji/qwen2.5-14b-instruct-trim
author: peakji
last_modified: "2024-09-19T01:13:18Z"
downloads: 0
likes: 0
library_name: null
tags: [ "safetensors", "qwen2", "region:us" ]
pipeline_tag: null
createdAt: "2024-09-19T01:08:34Z"
card:
Entry not found
modelId: Gsr32/Oguiadeboaesposaem2024
author: Gsr32
last_modified: "2024-09-19T01:08:36Z"
downloads: 0
likes: 0
library_name: null
tags: [ "region:us" ]
pipeline_tag: null
createdAt: "2024-09-19T01:08:36Z"
card:
Entry not found
modelId: SALUTEASD/google-gemma-2b-1726708136
author: SALUTEASD
last_modified: "2024-09-19T01:08:55Z"
downloads: 0
likes: 0
library_name: null
tags: [ "region:us" ]
pipeline_tag: null
createdAt: "2024-09-19T01:08:55Z"
card:
Entry not found
modelId: dogssss/Qwen-Qwen1.5-0.5B-1726708207
author: dogssss
last_modified: "2024-09-19T01:10:10Z"
downloads: 0
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "base_model:adapter:Qwen/Qwen1.5-0.5B", "region:us" ]
pipeline_tag: null
createdAt: "2024-09-19T01:10:07Z"
card:
--- base_model: Qwen/Qwen1.5-0.5B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
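The PEFT adapter entries in this listing (this one and the similar Qwen/gemma adapter rows below) leave the card's "How to Get Started with the Model" section as "[More Information Needed]". The following is a minimal, hedged sketch of how such an adapter is typically loaded on top of its declared base model; it is not taken from the card, and it assumes the repository contains a standard adapter saved with `peft`.

```python
# Hedged sketch (not from the card): load a PEFT/LoRA adapter on top of its base model.
# Assumes the repo holds a standard adapter checkpoint saved with peft.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen1.5-0.5B"                        # base_model declared in the card metadata
adapter_id = "dogssss/Qwen-Qwen1.5-0.5B-1726708207"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attaches the adapter weights

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```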
utahnlp/newsqa_t5-3b_seed-3
utahnlp
"2024-09-19T01:13:23Z"
0
0
null
[ "safetensors", "t5", "region:us" ]
null
"2024-09-19T01:10:08Z"
Entry not found
tronsdds/Qwen-Qwen1.5-1.8B-1726708235
tronsdds
"2024-09-19T01:10:48Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "base_model:adapter:Qwen/Qwen1.5-1.8B", "region:us" ]
null
"2024-09-19T01:10:35Z"
--- base_model: Qwen/Qwen1.5-1.8B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
juungwon/Llava-1.5-construction_3
juungwon
"2024-09-19T01:10:40Z"
0
0
null
[ "region:us" ]
null
"2024-09-19T01:10:40Z"
Invalid username or password.
yinong333/finetuned_MiniLM
yinong333
"2024-09-19T01:11:05Z"
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:760", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:sentence-transformers/all-MiniLM-L6-v2", "base_model:finetune:sentence-transformers/all-MiniLM-L6-v2", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2024-09-19T01:10:53Z"
--- base_model: sentence-transformers/all-MiniLM-L6-v2 library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 - dot_accuracy@1 - dot_accuracy@3 - dot_accuracy@5 - dot_accuracy@10 - dot_precision@1 - dot_precision@3 - dot_precision@5 - dot_precision@10 - dot_recall@1 - dot_recall@3 - dot_recall@5 - dot_recall@10 - dot_ndcg@10 - dot_mrr@10 - dot_map@100 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:760 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss widget: - source_sentence: Why is it important to establish clear timelines for data retention, and what should happen to data once those timelines are reached? sentences: - "Technology \nDignari \nDouglas Goddard \nEdgar Dworsky \nElectronic Frontier\ \ Foundation \nElectronic Privacy Information \nCenter, Center for Digital \n\ Democracy, and Consumer \nFederation of America \nFaceTec \nFight for the Future\ \ \nGanesh Mani \nGeorgia Tech Research Institute \nGoogle \nHealth Information\ \ Technology \nResearch and Development \nInteragency Working Group \nHireVue\ \ \nHR Policy Association \nID.me \nIdentity and Data Sciences \nLaboratory at\ \ Science Applications \nInternational Corporation \nInformation Technology and\ \ \nInnovation Foundation \nInformation Technology Industry \nCouncil \nInnocence\ \ Project \nInstitute for Human-Centered \nArtificial Intelligence at Stanford\ \ \nUniversity \nIntegrated Justice Information \nSystems Institute \nInternational\ \ Association of Chiefs \nof Police \nInternational Biometrics + Identity \nAssociation\ \ \nInternational Business Machines \nCorporation \nInternational Committee of\ \ the Red \nCross \nInventionphysics \niProov \nJacob Boudreau \nJennifer K. Wagner,\ \ Dan Berger," - "new privacy risks and implementing appropriate mitigation measures, which may\ \ include express consent. \nClear timelines for data retention should be established,\ \ with data deleted as soon as possible in accordance \nwith legal or policy-based\ \ limitations. Determined data retention timelines should be documented and justi­\n\ fied. \nRisk identification and mitigation. Entities that collect, use, share,\ \ or store sensitive data should \nattempt to proactively identify harms and seek\ \ to manage them so as to avoid, mitigate, and respond appropri­\nately to identified\ \ risks. Appropriate responses include determining not to process data when the\ \ privacy risks \noutweigh the benefits or implementing measures to mitigate acceptable\ \ risks. Appropriate responses do not \ninclude sharing or transferring the privacy\ \ risks to users via notice or consent requests where users could not \nreasonably\ \ be expected to understand the risks without further support." - '55. Data & Trust Alliance. Algorithmic Bias Safeguards for Workforce: Overview. Jan. 2022. https:// dataandtrustalliance.org/Algorithmic_Bias_Safeguards_for_Workforce_Overview.pdf 56. Section 508.gov. IT Accessibility Laws and Policies. Access Board. https://www.section508.gov/ manage/laws-and-policies/ 67' - source_sentence: What is the purpose of the NIST AI Risk Management Framework? 
sentences: - "TABLE OF CONTENTS\nFROM PRINCIPLES TO PRACTICE: A TECHNICAL COMPANION TO THE\ \ BLUEPRINT \nFOR AN AI BILL OF RIGHTS \n \nUSING THIS TECHNICAL COMPANION\n \n\ SAFE AND EFFECTIVE SYSTEMS\n \nALGORITHMIC DISCRIMINATION PROTECTIONS\n \nDATA\ \ PRIVACY\n \nNOTICE AND EXPLANATION\n \nHUMAN ALTERNATIVES, CONSIDERATION, AND\ \ FALLBACK\nAPPENDIX\n \nEXAMPLES OF AUTOMATED SYSTEMS\n \nLISTENING TO THE AMERICAN\ \ PEOPLE\nENDNOTES \n12\n14\n15\n23\n30\n40\n46\n53\n53\n55\n63\n13" - "health diagnostic systems. \nThe Blueprint for an AI Bill of Rights recognizes\ \ that law enforcement activities require a balancing of \nequities, for example,\ \ between the protection of sensitive law enforcement information and the principle\ \ of \nnotice; as such, notice may not be appropriate, or may need to be adjusted\ \ to protect sources, methods, and \nother law enforcement equities. Even in contexts\ \ where these principles may not apply in whole or in part, \nfederal departments\ \ and agencies remain subject to judicial, privacy, and civil liberties oversight\ \ as well as \nexisting policies and safeguards that govern automated systems,\ \ including, for example, Executive Order 13960, \nPromoting the Use of Trustworthy\ \ Artificial Intelligence in the Federal Government (December 2020). \nThis white\ \ paper recognizes that national security (which includes certain law enforcement\ \ and \nhomeland security activities) and defense activities are of increased\ \ sensitivity and interest to our nation’s" - "mitigate risks posed by the use of AI to companies’ reputation, legal responsibilities,\ \ and other product safety \nand effectiveness concerns. \nThe Office of Management\ \ and Budget (OMB) has called for an expansion of opportunities \nfor meaningful\ \ stakeholder engagement in the design of programs and services. OMB also \npoints\ \ to numerous examples of effective and proactive stakeholder engagement, including\ \ the Community-\nBased Participatory Research Program developed by the National\ \ Institutes of Health and the participatory \ntechnology assessments developed\ \ by the National Oceanic and Atmospheric Administration.18\nThe National Institute\ \ of Standards and Technology (NIST) is developing a risk \nmanagement framework\ \ to better manage risks posed to individuals, organizations, and \nsociety by\ \ AI.19 The NIST AI Risk Management Framework, as mandated by Congress, is intended\ \ for \nvoluntary use to help incorporate trustworthiness considerations into\ \ the design, development, use, and" - source_sentence: What were the main topics discussed in the panel focused on consumer rights and protections in an automated society? sentences: - "context, or may be more speculative and therefore uncertain. \nAI risks can differ\ \ from or intensify traditional software risks. Likewise, GAI can exacerbate existing\ \ AI \nrisks, and creates unique risks. GAI risks can vary along many dimensions:\ \ \n• \nStage of the AI lifecycle: Risks can arise during design, development,\ \ deployment, operation, \nand/or decommissioning. \n• \nScope: Risks may exist\ \ at individual model or system levels, at the application or implementation \n\ levels (i.e., for a specific use case), or at the ecosystem level – that is, beyond\ \ a single system or \norganizational context. 
Examples of the latter include\ \ the expansion of “algorithmic \nmonocultures,3” resulting from repeated use\ \ of the same model, or impacts on access to \nopportunity, labor markets, and\ \ the creative economies.4 \n• \nSource of risk: Risks may emerge from factors\ \ related to the design, training, or operation of the" - "specific and empirically well-substantiated negative risk to public safety (or\ \ has \nalready caused harm). \nCBRN Information or Capabilities; \nDangerous,\ \ Violent, or Hateful \nContent \nAI Actor Tasks: Governance and Oversight" - "theme, exploring current challenges and concerns and considering what an automated\ \ society that \nrespects democratic values should look like. These discussions\ \ focused on the topics of consumer \nrights and protections, the criminal justice\ \ system, equal opportunities and civil justice, artificial \nintelligence and\ \ democratic values, social welfare and development, and the healthcare system.\ \ \nSummaries of Panel Discussions: \nPanel 1: Consumer Rights and Protections.\ \ This event explored the opportunities and challenges for \nindividual consumers\ \ and communities in the context of a growing ecosystem of AI-enabled consumer\ \ \nproducts, advanced platforms and services, “Internet of Things” (IoT) devices,\ \ and smart city products and \nservices. \nWelcome:\n•\nRashida Richardson, Senior\ \ Policy Advisor for Data and Democracy, White House Office of Science and\nTechnology\ \ Policy\n•\nKaren Kornbluh, Senior Fellow and Director of the Digital Innovation\ \ and Democracy Initiative, German\nMarshall Fund" - source_sentence: How did the input from various stakeholders contribute to the development of the Blueprint for an AI Bill of Rights? sentences: - "SECTION TITLE\nAPPENDIX\nListening to the American People \nThe White House Office\ \ of Science and Technology Policy (OSTP) led a yearlong process to seek and distill\ \ \ninput from people across the country – from impacted communities to industry\ \ stakeholders to \ntechnology developers to other experts across fields and sectors,\ \ as well as policymakers across the Federal \ngovernment – on the issue of algorithmic\ \ and data-driven harms and potential remedies. Through panel \ndiscussions, public\ \ listening sessions, private meetings, a formal request for information, and\ \ input to a \npublicly accessible and widely-publicized email address, people\ \ across the United States spoke up about \nboth the promises and potential harms\ \ of these technologies, and played a central role in shaping the \nBlueprint\ \ for an AI Bill of Rights. \nPanel Discussions to Inform the Blueprint for An\ \ AI Bill of Rights" - "About this Document \nThe Blueprint for an AI Bill of Rights: Making Automated\ \ Systems Work for the American People was \npublished by the White House Office\ \ of Science and Technology Policy in October 2022. This framework was \nreleased\ \ one year after OSTP announced the launch of a process to develop “a bill of\ \ rights for an AI-powered \nworld.” Its release follows a year of public engagement\ \ to inform this initiative. 
The framework is available \nonline at: https://www.whitehouse.gov/ostp/ai-bill-of-rights\ \ \nAbout the Office of Science and Technology Policy \nThe Office of Science\ \ and Technology Policy (OSTP) was established by the National Science and Technology\ \ \nPolicy, Organization, and Priorities Act of 1976 to provide the President\ \ and others within the Executive Office \nof the President with advice on the\ \ scientific, engineering, and technological aspects of the economy, national" - "Technology Policy\n•\nKaren Kornbluh, Senior Fellow and Director of the Digital\ \ Innovation and Democracy Initiative, German\nMarshall Fund\nModerator: \nDevin\ \ E. Willis, Attorney, Division of Privacy and Identity Protection, Bureau of\ \ Consumer Protection, Federal \nTrade Commission \nPanelists: \n•\nTamika L.\ \ Butler, Principal, Tamika L. Butler Consulting\n•\nJennifer Clark, Professor\ \ and Head of City and Regional Planning, Knowlton School of Engineering, Ohio\n\ State University\n•\nCarl Holshouser, Senior Vice President for Operations and\ \ Strategic Initiatives, TechNet\n•\nSurya Mattu, Senior Data Engineer and Investigative\ \ Data Journalist, The Markup\n•\nMariah Montgomery, National Campaign Director,\ \ Partnership for Working Families\n55" - source_sentence: What legal action did the Federal Trade Commission take against Kochava regarding data tracking? sentences: - "DATA PRIVACY \nEXTRA PROTECTIONS FOR DATA RELATED TO SENSITIVE\nDOMAINS\n•\n\ Continuous positive airway pressure machines gather data for medical purposes,\ \ such as diagnosing sleep\napnea, and send usage data to a patient’s insurance\ \ company, which may subsequently deny coverage for the\ndevice based on usage\ \ data. Patients were not aware that the data would be used in this way or monitored\n\ by anyone other than their doctor.70 \n•\nA department store company used predictive\ \ analytics applied to collected consumer data to determine that a\nteenage girl\ \ was pregnant, and sent maternity clothing ads and other baby-related advertisements\ \ to her\nhouse, revealing to her father that she was pregnant.71\n•\nSchool audio\ \ surveillance systems monitor student conversations to detect potential \"stress\ \ indicators\" as\na warning of potential violence.72 Online proctoring systems\ \ claim to detect if a student is cheating on an" - 'ENDNOTES 75. See., e.g., Sam Sabin. Digital surveillance in a post-Roe world. Politico. May 5, 2022. https:// www.politico.com/newsletters/digital-future-daily/2022/05/05/digital-surveillance-in-a-post-roe­ world-00030459; Federal Trade Commission. FTC Sues Kochava for Selling Data that Tracks People at Reproductive Health Clinics, Places of Worship, and Other Sensitive Locations. Aug. 29, 2022. https:// www.ftc.gov/news-events/news/press-releases/2022/08/ftc-sues-kochava-selling-data-tracks-people­ reproductive-health-clinics-places-worship-other 76. Todd Feathers. This Private Equity Firm Is Amassing Companies That Collect Data on America’s Children. The Markup. Jan. 11, 2022. https://themarkup.org/machine-learning/2022/01/11/this-private-equity-firm-is-amassing-companies­ that-collect-data-on-americas-children 77. Reed Albergotti. Every employee who leaves Apple becomes an ‘associate’: In job databases used by' - 'ENDNOTES 1.The Executive Order On Advancing Racial Equity and Support for Underserved Communities Through the Federal Government. 
https://www.whitehouse.gov/briefing-room/presidential-actions/2021/01/20/executive order-advancing-racial-equity-and-support-for-underserved-communities-through-the-federal-government/ 2. The White House. Remarks by President Biden on the Supreme Court Decision to Overturn Roe v. Wade. Jun. 24, 2022. https://www.whitehouse.gov/briefing-room/speeches-remarks/2022/06/24/remarks-by-president­ biden-on-the-supreme-court-decision-to-overturn-roe-v-wade/ 3. The White House. Join the Effort to Create A Bill of Rights for an Automated Society. Nov. 10, 2021. https:// www.whitehouse.gov/ostp/news-updates/2021/11/10/join-the-effort-to-create-a-bill-of-rights-for-an­ automated-society/ 4. U.S. Dept. of Health, Educ. & Welfare, Report of the Sec’y’s Advisory Comm. on Automated Pers. Data Sys.,' model-index: - name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2 results: - task: type: information-retrieval name: Information Retrieval dataset: name: Unknown type: unknown metrics: - type: cosine_accuracy@1 value: 0.7214285714285714 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.8785714285714286 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.9428571428571428 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9642857142857143 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.7214285714285714 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2928571428571428 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.1885714285714285 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09642857142857142 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.7214285714285714 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.8785714285714286 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.9428571428571428 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9642857142857143 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.8453118147428804 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.8063690476190476 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.8087038619275461 name: Cosine Map@100 - type: dot_accuracy@1 value: 0.7214285714285714 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.8785714285714286 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.9428571428571428 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.9642857142857143 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.7214285714285714 name: Dot Precision@1 - type: dot_precision@3 value: 0.2928571428571428 name: Dot Precision@3 - type: dot_precision@5 value: 0.1885714285714285 name: Dot Precision@5 - type: dot_precision@10 value: 0.09642857142857142 name: Dot Precision@10 - type: dot_recall@1 value: 0.7214285714285714 name: Dot Recall@1 - type: dot_recall@3 value: 0.8785714285714286 name: Dot Recall@3 - type: dot_recall@5 value: 0.9428571428571428 name: Dot Recall@5 - type: dot_recall@10 value: 0.9642857142857143 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.8453118147428804 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.8063690476190476 name: Dot Mrr@10 - type: dot_map@100 value: 0.8087038619275461 name: Dot Map@100 --- # SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). 
It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision 8b3219a92973c328a8e22fadcfa821b5dc75636a --> - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 384 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("yinong333/finetuned_MiniLM") # Run inference sentences = [ 'What legal action did the Federal Trade Commission take against Kochava regarding data tracking?', 'ENDNOTES\n75. See., e.g., Sam Sabin. Digital surveillance in a post-Roe world. Politico. May 5, 2022. https://\nwww.politico.com/newsletters/digital-future-daily/2022/05/05/digital-surveillance-in-a-post-roe\xad\nworld-00030459; Federal Trade Commission. FTC Sues Kochava for Selling Data that Tracks People at\nReproductive Health Clinics, Places of Worship, and Other Sensitive Locations. Aug. 29, 2022. https://\nwww.ftc.gov/news-events/news/press-releases/2022/08/ftc-sues-kochava-selling-data-tracks-people\xad\nreproductive-health-clinics-places-worship-other\n76. Todd Feathers. This Private Equity Firm Is Amassing Companies That Collect Data on America’s\nChildren. The Markup. Jan. 11, 2022.\nhttps://themarkup.org/machine-learning/2022/01/11/this-private-equity-firm-is-amassing-companies\xad\nthat-collect-data-on-americas-children\n77. Reed Albergotti. Every employee who leaves Apple becomes an ‘associate’: In job databases used by', 'DATA PRIVACY \nEXTRA PROTECTIONS FOR DATA RELATED TO SENSITIVE\nDOMAINS\n•\nContinuous positive airway pressure machines gather data for medical purposes, such as diagnosing sleep\napnea, and send usage data to a patient’s insurance company, which may subsequently deny coverage for the\ndevice based on usage data. 
Patients were not aware that the data would be used in this way or monitored\nby anyone other than their doctor.70 \n•\nA department store company used predictive analytics applied to collected consumer data to determine that a\nteenage girl was pregnant, and sent maternity clothing ads and other baby-related advertisements to her\nhouse, revealing to her father that she was pregnant.71\n•\nSchool audio surveillance systems monitor student conversations to detect potential "stress indicators" as\na warning of potential violence.72 Online proctoring systems claim to detect if a student is cheating on an', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.7214 | | cosine_accuracy@3 | 0.8786 | | cosine_accuracy@5 | 0.9429 | | cosine_accuracy@10 | 0.9643 | | cosine_precision@1 | 0.7214 | | cosine_precision@3 | 0.2929 | | cosine_precision@5 | 0.1886 | | cosine_precision@10 | 0.0964 | | cosine_recall@1 | 0.7214 | | cosine_recall@3 | 0.8786 | | cosine_recall@5 | 0.9429 | | cosine_recall@10 | 0.9643 | | cosine_ndcg@10 | 0.8453 | | cosine_mrr@10 | 0.8064 | | **cosine_map@100** | **0.8087** | | dot_accuracy@1 | 0.7214 | | dot_accuracy@3 | 0.8786 | | dot_accuracy@5 | 0.9429 | | dot_accuracy@10 | 0.9643 | | dot_precision@1 | 0.7214 | | dot_precision@3 | 0.2929 | | dot_precision@5 | 0.1886 | | dot_precision@10 | 0.0964 | | dot_recall@1 | 0.7214 | | dot_recall@3 | 0.8786 | | dot_recall@5 | 0.9429 | | dot_recall@10 | 0.9643 | | dot_ndcg@10 | 0.8453 | | dot_mrr@10 | 0.8064 | | dot_map@100 | 0.8087 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 760 training samples * Columns: <code>sentence_0</code> and <code>sentence_1</code> * Approximate statistics based on the first 760 samples: | | sentence_0 | sentence_1 | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 11 tokens</li><li>mean: 20.96 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 167.91 tokens</li><li>max: 256 tokens</li></ul> | * Samples: | sentence_0 | sentence_1 | |:--------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>What is the purpose of the AI Bill of Rights mentioned in the context?</code> | <code>BLUEPRINT FOR AN <br>AI BILL OF <br>RIGHTS <br>MAKING AUTOMATED <br>SYSTEMS WORK FOR <br>THE AMERICAN PEOPLE <br>OCTOBER 2022</code> | | <code>When was the Blueprint for an AI Bill of Rights published?</code> | <code>BLUEPRINT FOR AN <br>AI BILL OF <br>RIGHTS <br>MAKING AUTOMATED <br>SYSTEMS WORK FOR <br>THE AMERICAN PEOPLE <br>OCTOBER 2022</code> | | <code>What was the purpose of the Blueprint for an AI Bill of Rights published by the White House Office of Science and Technology Policy?</code> | <code>About this Document <br>The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People was <br>published by the White House Office of Science and Technology Policy in October 2022. This framework was <br>released one year after OSTP announced the launch of a process to develop “a bill of rights for an AI-powered <br>world.” Its release follows a year of public engagement to inform this initiative. 
The framework is available <br>online at: https://www.whitehouse.gov/ostp/ai-bill-of-rights <br>About the Office of Science and Technology Policy <br>The Office of Science and Technology Policy (OSTP) was established by the National Science and Technology <br>Policy, Organization, and Priorities Act of 1976 to provide the President and others within the Executive Office <br>of the President with advice on the scientific, engineering, and technological aspects of the economy, national</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 384, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 20 - `per_device_eval_batch_size`: 20 - `num_train_epochs`: 5 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 20 - `per_device_eval_batch_size`: 20 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 5 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - 
`hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `eval_use_gather_object`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | cosine_map@100 | |:------:|:----:|:--------------:| | 1.0 | 38 | 0.7697 | | 1.3158 | 50 | 0.7851 | | 2.0 | 76 | 0.8109 | | 2.6316 | 100 | 0.8065 | | 3.0 | 114 | 0.8105 | | 3.9474 | 150 | 0.8115 | | 4.0 | 152 | 0.8114 | | 5.0 | 190 | 0.8087 | ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.1.0 - Transformers: 4.44.2 - PyTorch: 2.4.0+cu121 - Accelerate: 0.34.2 - Datasets: 2.19.2 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
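The card above trains with MatryoshkaLoss at dimensions [384, 256, 128, 64] but its usage snippet only shows full 384-dimensional embeddings. As a hedged illustration (not part of the original card), the sketch below shows the usual way to exploit Matryoshka training: truncate embeddings to a smaller prefix via the `truncate_dim` option available in recent sentence-transformers releases, then compare them with cosine similarity, which re-normalizes the shortened vectors.

```python
# Hedged sketch: truncated Matryoshka embeddings from the fine-tuned model above.
# Assumes sentence-transformers >= 2.7, where truncate_dim is supported.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("yinong333/finetuned_MiniLM", truncate_dim=128)

sentences = [
    "Why are clear data retention timelines important?",
    "Clear timelines for data retention should be established, with data deleted as soon as possible.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)  # (2, 128) -- only the first 128 Matryoshka dimensions are kept

# Cosine similarity normalizes internally, so truncated vectors can be compared directly.
print(model.similarity(embeddings, embeddings))
```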
huazi123/Qwen-Qwen1.5-0.5B-1726708296
huazi123
"2024-09-19T01:11:37Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "base_model:adapter:Qwen/Qwen1.5-0.5B", "region:us" ]
null
"2024-09-19T01:11:34Z"
--- base_model: Qwen/Qwen1.5-0.5B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
tronsdds/google-gemma-2b-1726708343
tronsdds
"2024-09-19T01:12:58Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-2b", "base_model:adapter:google/gemma-2b", "region:us" ]
null
"2024-09-19T01:12:23Z"
--- base_model: google/gemma-2b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
peakji/qwen2.5-32b-instruct-trim
peakji
"2024-09-19T01:22:37Z"
0
0
null
[ "safetensors", "qwen2", "region:us" ]
null
"2024-09-19T01:13:21Z"
Entry not found
latiao1999/Qwen-Qwen1.5-1.8B-1726708443
latiao1999
"2024-09-19T01:14:11Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "base_model:adapter:Qwen/Qwen1.5-1.8B", "region:us" ]
null
"2024-09-19T01:14:03Z"
--- base_model: Qwen/Qwen1.5-1.8B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
mradermacher/Odor-llama-3.1-8B-Instruct_colab_v2-GGUF
mradermacher
"2024-09-19T01:31:34Z"
0
0
null
[ "gguf", "region:us" ]
null
"2024-09-19T01:15:02Z"
--- base_model: tuannn17/Odor-llama-3.1-8B-Instruct_colab_v2 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - llama - trl --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/tuannn17/Odor-llama-3.1-8B-Instruct_colab_v2 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Odor-llama-3.1-8B-Instruct_colab_v2-GGUF/resolve/main/Odor-llama-3.1-8B-Instruct_colab_v2.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Odor-llama-3.1-8B-Instruct_colab_v2-GGUF/resolve/main/Odor-llama-3.1-8B-Instruct_colab_v2.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Odor-llama-3.1-8B-Instruct_colab_v2-GGUF/resolve/main/Odor-llama-3.1-8B-Instruct_colab_v2.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Odor-llama-3.1-8B-Instruct_colab_v2-GGUF/resolve/main/Odor-llama-3.1-8B-Instruct_colab_v2.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Odor-llama-3.1-8B-Instruct_colab_v2-GGUF/resolve/main/Odor-llama-3.1-8B-Instruct_colab_v2.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Odor-llama-3.1-8B-Instruct_colab_v2-GGUF/resolve/main/Odor-llama-3.1-8B-Instruct_colab_v2.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Odor-llama-3.1-8B-Instruct_colab_v2-GGUF/resolve/main/Odor-llama-3.1-8B-Instruct_colab_v2.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Odor-llama-3.1-8B-Instruct_colab_v2-GGUF/resolve/main/Odor-llama-3.1-8B-Instruct_colab_v2.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Odor-llama-3.1-8B-Instruct_colab_v2-GGUF/resolve/main/Odor-llama-3.1-8B-Instruct_colab_v2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Odor-llama-3.1-8B-Instruct_colab_v2-GGUF/resolve/main/Odor-llama-3.1-8B-Instruct_colab_v2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Odor-llama-3.1-8B-Instruct_colab_v2-GGUF/resolve/main/Odor-llama-3.1-8B-Instruct_colab_v2.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Odor-llama-3.1-8B-Instruct_colab_v2-GGUF/resolve/main/Odor-llama-3.1-8B-Instruct_colab_v2.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Odor-llama-3.1-8B-Instruct_colab_v2-GGUF/resolve/main/Odor-llama-3.1-8B-Instruct_colab_v2.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Odor-llama-3.1-8B-Instruct_colab_v2-GGUF/resolve/main/Odor-llama-3.1-8B-Instruct_colab_v2.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | 
[GGUF](https://huggingface.co/mradermacher/Odor-llama-3.1-8B-Instruct_colab_v2-GGUF/resolve/main/Odor-llama-3.1-8B-Instruct_colab_v2.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
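For anyone unsure how to actually run one of the GGUF files listed above, here is a minimal sketch using llama-cpp-python together with huggingface_hub. The choice of runtime is an assumption (any llama.cpp-compatible tool works), and the Q4_K_M filename is simply the "fast, recommended" entry from the table; the prompt, context size, and GPU-offload settings are illustrative only.

```python
# Minimal sketch: download one static quant from the table above and run it
# with llama-cpp-python. Assumes `pip install huggingface_hub llama-cpp-python`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Q4_K_M is the "fast, recommended" entry in the table above.
gguf_path = hf_hub_download(
    repo_id="mradermacher/Odor-llama-3.1-8B-Instruct_colab_v2-GGUF",
    filename="Odor-llama-3.1-8B-Instruct_colab_v2.Q4_K_M.gguf",
)

llm = Llama(
    model_path=gguf_path,
    n_ctx=4096,       # context window; adjust to your hardware
    n_gpu_layers=-1,  # offload all layers if a GPU-enabled build is installed
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly introduce yourself."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Larger quants (Q6_K, Q8_0) trade more RAM for quality; the same snippet applies, only the filename changes.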
dogssss/Qwen-Qwen1.5-1.8B-1726708510
dogssss
"2024-09-19T01:15:15Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "base_model:adapter:Qwen/Qwen1.5-1.8B", "region:us" ]
null
"2024-09-19T01:15:10Z"
--- base_model: Qwen/Qwen1.5-1.8B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
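The card above leaves the "How to Get Started" code as a placeholder. As a hedged sketch only: the repository metadata declares a PEFT adapter for Qwen/Qwen1.5-1.8B, so a standard transformers + peft load along the following lines should apply; the prompt and generation settings are illustrative, and the adapter's actual contents are unverified.

```python
# Minimal sketch: attach the PEFT adapter to its declared base model.
# Assumes `pip install transformers peft` and that the repo holds a standard
# adapter for Qwen/Qwen1.5-1.8B, as its metadata indicates.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen1.5-1.8B"
adapter_id = "dogssss/Qwen-Qwen1.5-1.8B-1726708510"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # loads adapter_config.json + weights

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern applies to the other PEFT adapter cards in this listing; only the base and adapter repository ids change.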
SALUTEASD/Qwen-Qwen1.5-0.5B-1726708536
SALUTEASD
"2024-09-19T01:15:40Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "base_model:adapter:Qwen/Qwen1.5-0.5B", "region:us" ]
null
"2024-09-19T01:15:35Z"
--- base_model: Qwen/Qwen1.5-0.5B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
tronsdds/google-gemma-7b-1726708565
tronsdds
"2024-09-19T01:16:54Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-7b", "base_model:adapter:google/gemma-7b", "region:us" ]
null
"2024-09-19T01:16:05Z"
--- base_model: google/gemma-7b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
utahnlp/newsqa_t5-11b_seed-1
utahnlp
"2024-09-19T01:23:58Z"
0
0
null
[ "safetensors", "t5", "region:us" ]
null
"2024-09-19T01:16:16Z"
Entry not found
jerseyjerry/Qwen-Qwen2-1.5B-1726708647
jerseyjerry
"2024-09-19T01:17:33Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen2-1.5B", "base_model:adapter:Qwen/Qwen2-1.5B", "region:us" ]
null
"2024-09-19T01:17:27Z"
--- base_model: Qwen/Qwen2-1.5B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
huazi123/Qwen-Qwen1.5-1.8B-1726708701
huazi123
"2024-09-19T01:18:22Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "base_model:adapter:Qwen/Qwen1.5-1.8B", "region:us" ]
null
"2024-09-19T01:18:19Z"
--- base_model: Qwen/Qwen1.5-1.8B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
tronsdds/Qwen-Qwen1.5-1.8B-1726708699
tronsdds
"2024-09-19T01:18:32Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "base_model:adapter:Qwen/Qwen1.5-1.8B", "region:us" ]
null
"2024-09-19T01:18:19Z"
--- base_model: Qwen/Qwen1.5-1.8B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
Krabat/Qwen-Qwen1.5-0.5B-1726708784
Krabat
"2024-09-19T01:19:47Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "base_model:adapter:Qwen/Qwen1.5-0.5B", "region:us" ]
null
"2024-09-19T01:19:44Z"
--- base_model: Qwen/Qwen1.5-0.5B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
dogssss/Qwen-Qwen1.5-0.5B-1726708785
dogssss
"2024-09-19T01:19:50Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "base_model:adapter:Qwen/Qwen1.5-0.5B", "region:us" ]
null
"2024-09-19T01:19:45Z"
--- base_model: Qwen/Qwen1.5-0.5B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
tronsdds/google-gemma-2b-1726708808
tronsdds
"2024-09-19T01:20:43Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-2b", "base_model:adapter:google/gemma-2b", "region:us" ]
null
"2024-09-19T01:20:08Z"
--- base_model: google/gemma-2b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
gohsyi/Meta-Llama-3.1-8B-sft-metamath
gohsyi
"2024-09-19T01:20:19Z"
0
0
null
[ "region:us" ]
null
"2024-09-19T01:20:19Z"
Entry not found
SALUTEASD/Qwen-Qwen1.5-1.8B-1726708824
SALUTEASD
"2024-09-19T01:20:30Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "base_model:adapter:Qwen/Qwen1.5-1.8B", "region:us" ]
null
"2024-09-19T01:20:23Z"
--- base_model: Qwen/Qwen1.5-1.8B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
traversaal-llm-regional-languages/Unsloth_Urdu_Llama3_1_4bit_PF100
traversaal-llm-regional-languages
"2024-09-19T01:22:43Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/Meta-Llama-3.1-8B-bnb-4bit", "base_model:finetune:unsloth/Meta-Llama-3.1-8B-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-09-19T01:21:12Z"
--- base_model: unsloth/Meta-Llama-3.1-8B-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** traversaal-llm-regional-languages - **License:** apache-2.0 - **Finetuned from model:** unsloth/Meta-Llama-3.1-8B-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
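Since the card above names Unsloth and a 4-bit base model but gives no loading code, here is a minimal inference sketch using Unsloth's FastLanguageModel loader. Whether the repository holds merged weights or an adapter is an assumption (the loader handles both in typical setups), and the sequence length, prompt, and generation settings are illustrative; a CUDA GPU is required.

```python
# Minimal sketch: load the fine-tune for inference with Unsloth.
# Assumes `pip install unsloth` and a CUDA-capable GPU.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="traversaal-llm-regional-languages/Unsloth_Urdu_Llama3_1_4bit_PF100",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's faster generation path

inputs = tokenizer("Translate to Urdu: Good morning!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```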
yunwoerte/wde
yunwoerte
"2024-09-19T01:22:13Z"
0
0
null
[ "region:us" ]
null
"2024-09-19T01:22:13Z"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
peakji/qwen2.5-72b-instruct-trim
peakji
"2024-09-19T01:22:40Z"
0
0
null
[ "region:us" ]
null
"2024-09-19T01:22:40Z"
Entry not found
jerseyjerry/Qwen-Qwen2-1.5B-Instruct-1726708981
jerseyjerry
"2024-09-19T01:23:06Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen2-1.5B-Instruct", "base_model:adapter:Qwen/Qwen2-1.5B-Instruct", "region:us" ]
null
"2024-09-19T01:23:01Z"
--- base_model: Qwen/Qwen2-1.5B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
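The card's "How to Get Started with the Model" section is left as [More Information Needed]; as a minimal, non-authoritative sketch, a PEFT adapter for the listed base model Qwen/Qwen2-1.5B-Instruct could be loaded as below. The adapter repository id is a placeholder, not this entry's actual id, and the snippet assumes the adapter was saved in the standard PEFT format.

```py
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2-1.5B-Instruct"
adapter_id = "your-username/your-peft-adapter"  # placeholder repo id, not this entry's actual id

# Load the frozen base model and tokenizer, then attach the PEFT adapter weights on top.
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)

# Quick generation check with the adapted model.
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```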
lichorosario/flux-lora-gliff-tosti-vector-1
lichorosario
"2024-09-19T01:27:20Z"
0
0
diffusers
[ "diffusers", "text-to-image", "fluxlora", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-schnell", "base_model:finetune:black-forest-labs/FLUX.1-schnell", "license:other", "region:us" ]
text-to-image
"2024-09-19T01:23:25Z"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora base_model: black-forest-labs/FLUX.1-schnell pipeline_tag: text-to-image instance_prompt: in a dark fantasy style, grainy library_name: diffusers inference: parameters: width: 1024 height: 1024 widget: - text: A monkey. in a dark fantasy style, grainy output: url: images/example_ewve6k2r9.png - text: >- This is a playful digital cartoon illustration featuring a young boy and a white cat. The boy has a cheerful expression, with wide brown eyes and an open mouth, showing his teeth in a happy, excited manner. His brown hair is short and styled with a slightly angular cut, with a lighter patch of brown forming a beard along his jawline. He is wearing a bright orange long-sleeved shirt, which contrasts nicely against the green background. The white cat is nestled closely against the boy, with its front paws affectionately draped over his shoulder as though it's hugging him. The cat's large yellow eyes, with narrow, black vertical pupils, give it a curious yet calm expression. Its ears are pointed, and its pink nose and whiskers are drawn simply but add to its cute, friendly appearance. The background is a solid green, which provides a clean, colorful backdrop that allows the figures of the boy and cat to stand out. The illustration is rendered in a modern, vector art style, characterized by bold lines, smooth shapes, and vibrant colors, giving it a fun and lively feel. The interaction between the boy and the cat suggests a strong bond, adding warmth and charm to the image.. in a dark fantasy style, grainy output: url: images/example_ryhmxqlxi.png - text: >- This is a digital cartoon illustration that portrays a character reminiscent of a horror or dark fantasy figure. The central figure is a pale, human-like face with an eerie, menacing expression. The character's skin is stark white, creating a ghostly appearance, and is crisscrossed with red lines forming a grid pattern on the head. At each intersection of the grid, there are metal nails or pins, all protruding outward in a symmetrical fashion, emphasizing a mechanical or tortured aesthetic. The eyes are dark and sunken with heavy, dark red and black shading around them, creating an ominous stare. The character's mouth is open, revealing sharp teeth with a distinct gap between the top and bottom sets, further adding to the unsettling look. The nose is thin, with blue-tinted shadows around it, enhancing the cold, inhuman feel of the face. The figure is dressed in black, with only the high collar visible, further isolating the attention on the face and head. The background is a gradient of dark gray to black, which contributes to the foreboding tone of the image. The overall style uses clean, solid lines and smooth gradients, typical of modern vector art, but the subject matter and atmosphere are much darker and gothic compared to typical cartoon illustrations. The image draws upon visual cues from horror characters, using sharp contrast, exaggerated facial features, and symmetrical patterns to evoke unease. The pins and grid pattern across the head give it a painful and torturous look, likely referencing themes of body modification or mechanical horror. 
in a dark fantasy style, grainy output: url: images/example_h3hu05oko.png - text: >- This is a digital cartoon illustration that portrays a character reminiscent of a horror or dark fantasy figure. The central figure is a pale, human-like face with an eerie, menacing expression. The character's skin is stark white, creating a ghostly appearance, and is crisscrossed with red lines forming a grid pattern on the head. At each intersection of the grid, there are metal nails or pins, all protruding outward in a symmetrical fashion, emphasizing a mechanical or tortured aesthetic. The eyes are dark and sunken with heavy, dark red and black shading around them, creating an ominous stare. The character's mouth is open, revealing sharp teeth with a distinct gap between the top and bottom sets, further adding to the unsettling look. The nose is thin, with blue-tinted shadows around it, enhancing the cold, inhuman feel of the face. The figure is dressed in black, with only the high collar visible, further isolating the attention on the face and head. The background is a gradient of dark gray to black, which contributes to the foreboding tone of the image. The overall style uses clean, solid lines and smooth gradients, typical of modern vector art, but the subject matter and atmosphere are much darker and gothic compared to typical cartoon illustrations. The image draws upon visual cues from horror characters, using sharp contrast, exaggerated facial features, and symmetrical patterns to evoke unease. The pins and grid pattern across the head give it a painful and torturous look, likely referencing themes of body modification or mechanical horror. in a dark fantasy style, grainy output: url: images/example_imwioqk2y.png - text: >- This is a colorful and exaggerated digital cartoon illustration of a woman with a dramatic and expressive facial expression. The character's large, bright green eyes are wide open, with heavy, long eyelashes that are stylized to the point of being almost comical. The eye makeup is bold, featuring light green eyeshadow that contrasts with the vibrant yellow-green irises. Her eyebrows are thick and arched in a way that adds to the intense expression. Her mouth is open wide, revealing bright white teeth, a red tongue, and exaggerated lips painted in glossy pink, suggesting that she's either screaming or shouting in surprise. Her rosy cheeks and the deep lines on her forehead further emphasize the animated emotion of shock or fear. The woman's hair is voluminous and pure white, styled in large, flowing waves that frame her face. The hair has a sharp, graphic quality, with defined highlights and shadows that enhance its thickness and give it a dynamic appearance. Her skin tone is a vibrant orange-tan, giving her a sun-kissed or possibly artificial look, consistent with the exaggerated style of the image. Her shoulders and upper chest are visible, drawn with similarly smooth shading to give a sense of muscle or form, though there’s no detail beyond the neck. The background is a solid dark blue, which contrasts sharply with the bright and bold colors of the figure, making her stand out. The overall style is highly exaggerated and graphic, typical of pop art or caricature, with an emphasis on bold colors, sharp contrasts, and overstated features to evoke a larger-than-life presence. The image feels playful and theatrical, with a sense of drama conveyed through the character's wide-open eyes and mouth.. 
in a dark fantasy style, grainy output: url: images/example_9ef2ql3o4.png - text: >- This is a digital cartoon illustratiThis digital cartoon illustration features a male character with a neutral expression. He is wearing a black helmet with two visible ventilation holes on top and a white logo resembling a cluster of circles. The helmet has chin straps on both sides, secured with buckles, adding a protective, sporty look. The man has glasses with rectangular frames, clear brown eyes, and a neatly trimmed beard and mustache, which frame his face symmetrically. His hair, partially visible under the helmet, is black and straight. The character wears a black shirt with a pointed collar, and a small part of a white undershirt is visible at the neckline, adding contrast to his dark outfit. His eyebrows are arched slightly, giving him a calm, thoughtful appearance. The background is a solid, bright yellow, which contrasts sharply with the black and dark tones of his helmet, beard, and clothing, making the character stand out prominently. The illustration uses smooth shading and bold, clean lines, typical of vector art. The overall tone is modern, simple, and slightly playful due to the bright background and clean design elements. in a dark fantasy style, grainy output: url: images/example_5aql2vlz8.png - text: >- This is a digital cartoon illustratiThis digital cartoon illustration features a male character with a neutral expression. He is wearing a black helmet with two visible ventilation holes on top and a white logo resembling a cluster of circles. The helmet has chin straps on both sides, secured with buckles, adding a protective, sporty look. The man has glasses with rectangular frames, clear brown eyes, and a neatly trimmed beard and mustache, which frame his face symmetrically. His hair, partially visible under the helmet, is black and straight. The character wears a black shirt with a pointed collar, and a small part of a white undershirt is visible at the neckline, adding contrast to his dark outfit. His eyebrows are arched slightly, giving him a calm, thoughtful appearance. The background is a solid, bright yellow, which contrasts sharply with the black and dark tones of his helmet, beard, and clothing, making the character stand out prominently. The illustration uses smooth shading and bold, clean lines, typical of vector art. The overall tone is modern, simple, and slightly playful due to the bright background and clean design elements. in a dark fantasy style, grainy output: url: images/example_to33oz50s.png - text: >- This digital cartoon illustration features a female character with a neutral expression. sHe is wearing a pink helmet with two visible ventilation holes on top and a white logo resembling a cluster of circles. The helmet has chin straps on both sides, secured with buckles, adding a protective, sporty look. The girl has glasses with rectangular frames, clear brown eyes, which frame his face symmetrically. Her hair, partially visible under the helmet, is black and straight. The character wears a black shirt with a pointed collar, and a small part of a white undershirt is visible at the neckline, adding contrast to his dark outfit. Her eyebrows are arched slightly, giving her a calm, thoughtful appearance. The background is a solid, bright yellow, which contrasts sharply with the black and dark tones of her helmet, and clothing, making the character stand out prominently. The illustration uses smooth shading and bold, clean lines, typical of vector art. 
The overall tone is modern, simple, and slightly playful due to the bright background and clean design elements. in a dark fantasy style, grainy output: url: images/example_nwlg1vmir.png - text: >- This digital cartoon illustration features a cat dressed as an astronaut. The cat has a sleek, dark gray coat with white fur on its chest and a small pink nose. Its large yellow eyes, with narrow black pupils, give it an alert, focused expression. Long, white whiskers extend outward from its face, enhancing its feline appearance. The cat is wearing an orange space suit, which features detailed patches and a zipper down the middle. The patches include a black circular one with yellow details and a rectangular black patch with yellow stripes, giving the suit an authentic astronaut feel. The suit's collar is gray, adding contrast to the bright orange. The cat holds an astronaut's helmet under its arm, which is primarily white with a large black visor, reflecting two small blue ovals, suggesting the reflection of a light source. The background is a gradient of blue-gray, adding a subtle, futuristic atmosphere to the image. The overall style is smooth, with clean vector lines and solid colors typical of modern digital illustrations. The combination of the cat and the astronaut suit creates a fun and whimsical concept, blending space exploration with a playful, animal twist. output: url: images/example_4yglf2g8f.png - text: >- This digital cartoon illustration presents a stylized portrait of a bald man with a serious, intense expression. The image has a minimalist, geometric feel, with smooth shading and angular shapes that define the contours of the man's face. His skin is pale with subtle hues of gray, purple, and beige that create depth, giving the face a slightly futuristic, abstract quality. The man has a neatly trimmed, dark brown beard that frames his face, and his eyebrows are thick and sharply defined, contributing to his focused, intense gaze. His large eyes are prominent, with light reflections in them that draw attention to their clarity. The shading around the eyes adds to the intensity of his expression. The figure is wearing a simple black shirt, which blends into the darker tones of the image, keeping the focus on his face. The background is a deep gradient of dark purple fading to black, which creates a moody, dramatic atmosphere. The contrast between the dark background and the lighter tones of his face amplifies the sense of seriousness or contemplation. The overall style is clean and modern, with an emphasis on minimalism and bold contrasts. The sharp lines and smooth gradients give the portrait a sophisticated, almost digital feel, as if the character exists in a high-tech or virtual world. The illustration's simplicity and striking color choices make it visually impactful. output: url: images/example_fz5fd4w3g.png - text: >- This digital cartoon illustration features a cute, stylized character with exaggerated proportions and a playful, toy-like appearance. The character has a large, round head with pale pink skin, and is sporting a distinctive hairstyle with two high ponytails, each curving outward. The hair is black with simple gray highlights, and yellow bands secure the ponytails, giving the character a youthful and whimsical vibe. The character's facial features are minimal but expressive. They are wearing large, round black glasses, and their eyes are closed with a slight upward curve, suggesting a cheerful or content expression. 
The lips are oversized and painted bright red, standing out as a prominent feature on the face. The character is dressed in a black top, paired with shorts that are black with a red and white stripe at the bottom. The body is small and stubby, with tiny arms and legs, further emphasizing the toy-like or chibi style. The background is simple, with a warm orange color at the top and a teal floor beneath, dotted with purple and green circular shapes. This colorful and minimal setting adds to the playful and lighthearted mood of the illustration. The overall art style is smooth and geometric, typical of modern vector art, with thick outlines and bold, flat colors. The character’s design is charming and fun, with a focus on simplicity and cuteness. output: url: images/example_wi3sx2kdb.png - text: >- This digital cartoon illustration depicts a character who appears to be a tech-savvy individual or gamer. The figure has a neutral yet focused expression, with thick black eyebrows and black hair that’s styled in short, angular layers. The character wears black, rectangular glasses with white lenses, giving them a techie or hacker persona. The individual is also wearing a bright lime-green headset with a microphone extending from the earpiece, positioned in front of their mouth. The headset's vivid color contrasts against the darker tones of the rest of the image, making it stand out. The character is dressed in a simple dark blue shirt, adding to the casual, tech-focused vibe of the illustration. The background is inspired by the visual aesthetic of "The Matrix" with a dark, computer screen-like backdrop filled with cascading green digital characters and symbols. These symbols, in various fonts and sizes, are laid out in a grid pattern, evoking the sense of being immersed in a digital world or cyberspace. The art style is clean and sharp, typical of vector illustrations, with bold outlines and smooth gradients. The overall atmosphere of the image suggests a person engaged in programming, gaming, or hacking, with the background amplifying the sense of a high-tech, virtual environment. output: url: images/example_12emtt5cf.png - text: >- This is a simple, stylized digital cartoon illustration of a happy, young character with a large, toothy grin. The character’s face is expressive, with tightly closed eyes forming curved lines and thick, raised eyebrows indicating joy or laughter. The mouth is wide open, showing a row of white teeth with some gaps, emphasizing the childlike and playful nature of the character. The character has short, light brown hair, drawn in smooth, angular shapes, and their skin tone is a soft beige. They are wearing a light green hood pulled up over their head, framing their face. The hood contrasts with a navy blue jacket that is visible around their shoulders. Underneath the jacket, the character wears a plain white shirt, further contributing to the casual and playful tone. The background is a solid, deep red color, which adds warmth to the illustration without detracting from the focus on the character. The overall art style is minimalist, with clean lines and flat colors, typical of vector-based illustrations. The simplicity of the design, paired with the character’s exaggerated expression, gives the image a fun and lighthearted feel, as though the character is mid-laugh or enjoying a moment of pure happiness. output: url: images/example_iuooej9ju.png - text: >- This is a simple, stylized digital cartoon illustration of a happy, young character with a large, toothy grin. 
The character’s face is expressive, with tightly closed eyes forming curved lines and thick, raised eyebrows indicating joy or laughter. The mouth is wide open, showing a row of white teeth with some gaps, emphasizing the childlike and playful nature of the character. The character has short, light brown hair, drawn in smooth, angular shapes, and their skin tone is a soft beige. They are wearing a light green hood pulled up over their head, framing their face. The hood contrasts with a navy blue jacket that is visible around their shoulders. Underneath the jacket, the character wears a plain white shirt, further contributing to the casual and playful tone. The background is a solid, deep red color, which adds warmth to the illustration without detracting from the focus on the character. The overall art style is minimalist, with clean lines and flat colors, typical of vector-based illustrations. The simplicity of the design, paired with the character’s exaggerated expression, gives the image a fun and lighthearted feel, as though the character is mid-laugh or enjoying a moment of pure happiness. output: url: images/example_rlvn5dg7c.png - text: >- This is a charming digital cartoon illustration of an adorable mink animal, designed in a cute and playful style. The creature has large, expressive blue eyes, giving it a sweet and innocent look. Its face is framed by a mane of fluffy, light brown fur, which extends around its head and slightly onto its body, reminiscent of a lion cub. The animal’s small, rounded ears are lined with a darker color on the inside, and its whiskers, along with a small black nose and smiling mouth, add to its endearing expression. The body is drawn with simple, smooth lines, featuring tan fur and a fluffy, bushy tail that curves behind it. Its legs are short and stout, ending in tiny black paws, which give the character a youthful, chibi-like appearance. The background is a soft lavender color, and the ground the creature sits on is a muted green, allowing the figure to stand out. The overall style is clean and cartoony, with bold outlines and soft gradients that give the image a friendly and approachable feel. The animal’s playful demeanor, large eyes, and fluffy fur make it especially appealing, evoking a sense of warmth and cuteness. output: url: images/example_g5659dyra.png - text: >- This is a striking digital illustration of a character with a bold and intense look, blending elements of indigenous warrior symbolism and contemporary political imagery. The figure’s face is painted in a cracked, black-and-white pattern, resembling a mask or war paint, which runs vertically across their face in large, jagged lines. This cracked texture gives the illustration a gritty, weathered appearance, as though the paint is part of the skin. The character wears a green bandana tied around their forehead, with white text on it. The visible text includes a large white symbol and phrases like "#Qu" and "DERECHO A DECIDIR," which translates to "Right to Decide," likely referencing political or activist themes. The bandana stands out against the otherwise dark and muted tones of the image. Long, dark black hair frames the face, hanging down with simple, smooth strands. On the right side of the head, there are decorative white beads woven into the hair, adding a tribal or ceremonial aspect. On the left side, a brown, arrow-like feather or strip of material is tucked into the bandana, enhancing the character’s warrior-like appearance. 
The overall expression of the figure is serious and stoic, with piercing eyes that give a sense of strength and determination. The background is a soft blue, which contrasts with the dark tones of the figure's hair and face, making the character stand out. The style of the illustration is highly detailed and uses bold lines, clean shapes, and a mix of textures, creating a powerful, visually engaging composition. output: url: images/example_qbgqh9osz.png - text: >- This digital cartoon illustration features Kermit the frog. The illustration uses smooth shading and bold, clean lines, typical of vector art. The overall tone is modern, simple, and slightly playful due to the bright background and clean design elements. in a dark fantasy style, grainy output: url: images/example_ycqh2x9oc.png - text: >- This digital cartoon illustration features E.T. the extra terrestrial. The illustration uses smooth shading and bold, clean lines, typical of vector art. The overall tone is modern, simple, and slightly playful due to the bright background and clean design elements. in a dark fantasy style, grainy output: url: images/example_nthpo1egh.png - text: >- This digital cartoon illustration features E.T. the extra terrestrial color brown in front of a night sky with a full moon. The illustration uses smooth shading and bold, clean lines, typical of vector art. The overall tone is modern, simple, and slightly playful due to the bright background and clean design elements. in a dark fantasy style, grainy output: url: images/example_52pquonxj.png - text: >- This digital cartoon illustration depicts a shirtless man with a confident and relaxed expression. He has short, brown hair styled in a slightly tousled manner, with a well-groomed beard and mustache that frames his face. His eyebrows are thick and dark, matching his hair, and his eyes are bright green, giving him a friendly and approachable look. The skin tone is warm, with smooth shading that highlights the contours of his face, shoulders, and upper chest. The shading is simple yet effective, with a minimalist style that uses flat colors and gradients to create depth. His smile is subtle, adding to the relaxed and natural demeanor of the character. The background is a gradient of teal, transitioning from darker tones at the edges to lighter shades near the center, providing a calming contrast to the figure's skin tones. The clean lines and smooth transitions of color are characteristic of vector art, giving the image a polished and modern feel. The overall vibe of the illustration is laid-back, with a focus on simplicity and warmth. output: url: images/example_nodezl8jd.png - text: >- This digital cartoon illustration depicts a shirtless man with a confident and relaxed expression. He has short, brown hair styled in a slightly tousled manner, with a well-groomed beard and mustache that frames his face. His eyebrows are thick and dark, matching his hair, and his eyes are bright green, giving him a friendly and approachable look. The skin tone is warm, with smooth shading that highlights the contours of his face, shoulders, and upper chest. The shading is simple yet effective, with a minimalist style that uses flat colors and gradients to create depth. His smile is subtle, adding to the relaxed and natural demeanor of the character. The background is a gradient of teal, transitioning from darker tones at the edges to lighter shades near the center, providing a calming contrast to the figure's skin tones. 
The clean lines and smooth transitions of color are characteristic of vector art, giving the image a polished and modern feel. The overall vibe of the illustration is laid-back, with a focus on simplicity and warmth. output: url: images/example_jwpws58lv.png - text: >- This digital cartoon illustration features a cheerful chef, depicted in a playful, stylized manner. The chef has a wide, friendly smile, showcasing prominent white teeth with a small gap between the top two. His face is round and expressive, with large, bright eyes that exude warmth and approachability. He sports a short, black mustache that neatly complements his facial features, along with a gray and white goatee, adding character to his face. The chef is dressed in a classic white chef's uniform, complete with a tall, traditional chef’s hat that extends upward, giving him an authoritative but approachable appearance. The uniform includes black buttons along the left side, typical of professional chef attire. His skin tone is a warm brown, and his thick black eyebrows are slightly arched, further enhancing his welcoming expression. The background is a soft beige, which keeps the focus on the chef's vibrant personality. The overall style is simple and clean, with bold lines and flat colors typical of vector art. The design, while minimal, conveys a sense of joy and professionalism, capturing the essence of a friendly, experienced chef who likely enjoys his work. output: url: images/example_kjmn1xc7u.png - text: >- This digital cartoon illustration features a man with a surreal and humorous twist. The central focus of the image is the man’s head, which has been partially "opened" to reveal a stylized pink brain. The top of his head, including his hair, is shown being lifted off like a lid, with the hand holding it above the brain. This playful concept gives the illustration a quirky and imaginative feel. The man has a clean-shaven face with light stubble, dark, expressive eyes with heavy lids, and a neutral smile, giving him a calm, almost philosophical demeanor. His hair is short and brown, styled neatly but shown detached from his head in the quirky design. The brain is bright pink with smooth, rounded folds, drawn in a simplified, cartoonish style that contrasts with the otherwise realistic facial features. He is wearing a simple black shirt, which keeps the focus on the surreal aspect of the head. The background is white, further emphasizing the central figure. The illustration is clean, with bold lines and soft shading typical of modern vector art, and the overall vibe is playful and slightly absurd, blending realism with a fun, imaginative twist. The open head concept suggests themes of creativity, thinking, or humor. output: url: images/example_qkjot73yu.png - text: >- This is a playful and quirky digital cartoon illustration of a bee character with a humorous twist. The bee has a classic yellow and black striped body, small wings with a light blue tint, and an overall chubby, rounded form. Its head is large and features exaggerated, oversized round glasses with thick black frames, giving the bee a slightly nerdy and surprised expression. The wide-open mouth, with two visible buck teeth, adds to the bee’s quirky personality. The bee sports a unique hairstyle, with red hair styled in a smooth, swooping fashion, further anthropomorphizing the character and adding to its comedic charm. The wings are simple, outlined in black with smooth blue shading, giving them a semi-transparent, glossy look. 
The background is a bright lavender pink, which enhances the playful and whimsical nature of the illustration, making the character pop visually. The overall style is clean and minimalist, with bold lines and flat colors, typical of modern vector art. The combination of the bee’s funny expression, glasses, and unusual hairstyle creates a lighthearted and engaging character, blending elements of both human and insect traits in a humorous way. output: url: images/example_6xma96q7l.png - text: >- This is a digital cartoon illustration that portrays a snake. in a dark fantasy style, grainy output: url: images/example_8955ro59h.png - text: >- This digital cartoon illustration depicts a shirtless man with a confident and relaxed expression. He has short, brown hair styled in a slightly tousled manner, with a well-groomed beard and mustache that frames his face. His eyebrows are thick and dark, matching his hair, and his eyes are bright green, giving him a friendly and approachable look. The skin tone is warm, with smooth shading that highlights the contours of his face, shoulders, and upper chest. The shading is simple yet effective, with a minimalist style that uses flat colors and gradients to create depth. His smile is subtle, adding to the relaxed and natural demeanor of the character. The background is a gradient of teal, transitioning from darker tones at the edges to lighter shades near the center, providing a calming contrast to the figure's skin tones. The clean lines and smooth transitions of color are characteristic of vector art, giving the image a polished and modern feel. The overall vibe of the illustration is laid-back, with a focus on simplicity and warmth. output: url: images/example_1x3lq4pq0.png - text: >- This digital cartoon illustration features a cute, chubby creature with a round, soft appearance and a slightly melancholic expression. The character has a large, white face with a smooth, oval shape, and small, black eyes that are encircled by bright orange rings, adding contrast and drawing focus to its sad-looking face. Above the eyes are two tiny black dots acting as nostrils, while a thin black curved line below them suggests a subtle, frowning mouth, enhancing the creature's downcast demeanor. The body of the creature is teal, with short arms and large, rounded feet that feature stubby, light-colored toes. Its arms are simple, and its body has a plush, soft look, as if it's designed to be squishy. Small, fin-like protrusions stick out from the sides of its head, adding a subtle aquatic or amphibian vibe to the character. The overall design is minimalist, with smooth lines and clean shapes, making the creature appear endearing and approachable despite its sad expression. The background is solid black, which highlights the character’s light colors and gives the illustration a strong visual contrast. The style is typical of modern vector art, with simple shading and bold outlines that keep the focus on the character’s expression and form. The overall mood of the image is one of gentle sadness or calmness, evoking sympathy or affection for the adorable, pouty creature. output: url: images/example_sam9votmf.png - text: >- This digital cartoon illustration features a cute, chubby creature with a round, soft appearance and a slightly melancholic expression. The character has a large, white face with a smooth, oval shape, and small, black eyes that are encircled by bright orange rings, adding contrast and drawing focus to its sad-looking face. 
Above the eyes are two tiny black dots acting as nostrils, while a thin black curved line below them suggests a subtle, frowning mouth, enhancing the creature's downcast demeanor. The body of the creature is teal, with short arms and large, rounded feet that feature stubby, light-colored toes. Its arms are simple, and its body has a plush, soft look, as if it's designed to be squishy. Small, fin-like protrusions stick out from the sides of its head, adding a subtle aquatic or amphibian vibe to the character. The overall design is minimalist, with smooth lines and clean shapes, making the creature appear endearing and approachable despite its sad expression. The background is solid black, which highlights the character’s light colors and gives the illustration a strong visual contrast. The style is typical of modern vector art, with simple shading and bold outlines that keep the focus on the character’s expression and form. The overall mood of the image is one of gentle sadness or calmness, evoking sympathy or affection for the adorable, pouty creature. output: url: images/example_i21t8q2vw.png - text: >- This digital cartoon illustration features a young boy with bright orange hair, standing happily with two large, friendly-looking green dinosaurs beside him. The boy has a wide smile, revealing a gap between his teeth, and his expression is cheerful and content. His face is lightly shaded with simple gradients, giving it a soft and realistic appearance, while his short, wavy orange hair adds a playful touch to his overall look. His reddish-brown eyebrows complement his hair color, and his eyes are bright and expressive, further enhancing his joyful demeanor. Surrounding the boy are two green dinosaurs with long necks, resembling cartoonish versions of a Brachiosaurus. The larger dinosaur is gently leaning its head over the boy’s head, as if playfully interacting with him, while the smaller dinosaur appears in the background on the right side, looking on curiously. Both dinosaurs have small, round black eyes, simple smooth textures, and friendly, non-threatening appearances, which adds to the whimsical and fun tone of the illustration. The background is a plain white, keeping the focus on the characters, and the overall style is clean and polished, with bold lines and soft shading typical of modern vector art. The illustration creates a sense of playful companionship between the boy and the dinosaurs, evoking a lighthearted, imaginative atmosphere. output: url: images/example_fqz3ruc0r.png - text: >- This digital cartoon illustration features a man depicted in bold, stylized colors with a modern, minimalist design. The man’s skin is shaded in tones of cool blue, contrasting sharply with his black hair and goatee. His expression is somewhat skeptical or curious, with raised eyebrows and eyes looking off to the side, as if pondering something. His short, dark hair is neatly styled, and his facial hair—a small mustache and goatee—adds a touch of personality to his appearance. He is wearing a plain white T-shirt, drawn with smooth, sharp lines that accentuate the folds in the fabric, giving the illustration a sense of depth and movement. The man is holding a lit cigar or vape pen in his right hand, from which a small, pink flame or vapor is rising, adding a pop of bright color to the otherwise cool-toned figure. His posture is relaxed, with his left arm by his side and his right arm raised, casually holding the cigar or vape. 
The background is a solid, bold red, which contrasts sharply with the blue tones of the man's skin and the white of his shirt, making him stand out prominently in the composition. The illustration style is clean and graphic, with simple shading and flat colors, typical of modern vector art. The overall mood is casual and reflective, with a hint of playfulness introduced by the pink flame. output: url: images/example_lnquc0yb8.png - text: >- This digital cartoon illustration presents a surreal and whimsical character blending human and geographical features. The character’s face is shaped like the Earth, with landmasses resembling continents mapped onto the head. Brown patches outline areas like North America, South America, and parts of Europe, creating the impression of a "world head." The figure's skin tone is beige, and the continents blend seamlessly into the face, which adds to the imaginative and quirky design. The character has vibrant red, curly hair, drawn in stylized, swirling waves that frame the face, giving a sense of dynamic movement. The green eyes are wide and expressive, with long eyelashes, and the eyebrows are thick and bright red, matching the hair. Below the eyes, the character wears a large, playful smile, showing gapped teeth, adding a touch of humor and friendliness to the otherwise odd appearance. The character is dressed in a purple top, accessorized with a large, pearl necklace around the neck, adding an element of elegance. The background is a dark blue, with subtle radial patterns, which contrasts with the bright colors of the figure, making the character stand out. The overall style is bold and cartoonish, with smooth lines, bright colors, and playful surrealism. The combination of human features and world geography gives the image a creative, out-of-the-box feel, blending elements of fantasy and humor. output: url: images/example_dscynogz3.png - text: >- This digital cartoon illustration portrays a fierce, tribal warrior with a bold and powerful presence. The character's dark brown skin is complemented by intense facial features, including a wide-open mouth showing sharp teeth, which adds to the aggressive and commanding expression. The warrior's eyes are wide, with sharp, angular black eyebrows giving a sense of strength and intensity. The figure is adorned with traditional warrior attire, including a large, golden collar that sits around the neck, styled with broad, horizontal bands. Three large, sharp white tusks or teeth are attached to the collar, further enhancing the character’s intimidating appearance. These tusks add an element of raw, primal power to the warrior’s look, emphasizing a connection to nature or animals. The warrior wears a helmet or headpiece made of a greenish-brown material, shaped to fit snugly around the head. The helmet has a central rounded crest on top, adding a sense of status or importance, suggesting this character could be a leader or chief within their tribe. The background is a simple, muted brown, which helps focus attention on the detailed and striking figure. The art style is clean and sharp, with smooth lines and flat colors typical of vector illustrations, giving the character a bold and distinct appearance. The overall mood of the image is one of strength, tradition, and authority, capturing the essence of a powerful warrior. output: url: images/example_6rdemz8a4.png - text: >- This digital cartoon illustration depicts a humorous and whimsical portrait of a man wearing a classic novelty disguise. 
The character’s head is large and prominently featured, with short, spiky white hair. His face is adorned with a fake, oversized black mustache, thick bushy eyebrows, and a comically large nose—all part of a playful disguise, reminiscent of the classic Groucho Marx glasses. The man is also wearing sunglasses, which cover most of his eyes, further adding to the humor and lighthearted tone of the image. A cigarette is perched in his mouth, completing the playful look, with a puff of smoke rising from the end. The details of the face, including wrinkles and shading, are stylized in a textured, almost crumpled paper-like effect, giving the illustration an added layer of visual interest. The background is a vibrant blue with a radial burst pattern, emanating outward in darker and lighter shades, which adds dynamic energy to the composition. The color contrast between the cool blue background and the neutral tones of the face and disguise elements makes the character pop visually. The overall style is bold, fun, and cartoonish, with clean lines and a clear focus on humor. The image evokes a playful, carefree vibe, capturing the essence of a comedic and lighthearted character who doesn’t take themselves too seriously. output: url: images/example_rzjre9psd.png - text: >- This digital cartoon-style portrait features a serious-looking individual with a calm and composed expression. The person has short, neatly styled black hair, with some strands falling slightly across the forehead, adding a sense of naturalness to the look. Their facial features are strong, with high cheekbones, a defined jawline, and prominent eyebrows that are thick and neatly shaped, giving the character a focused and thoughtful demeanor. The skin tone is a rich, deep brown, and the shading is smooth and subtle, enhancing the natural contours of the face. The eyes are slightly narrowed, giving the impression of concentration or introspection, and the lips are painted a dark maroon, which adds a touch of elegance and formality to the appearance. The individual is dressed in a high-collared turtleneck, light gray in color, which contrasts with the dark teal-green suit jacket. The formal attire, combined with the composed facial expression, suggests a professional or authoritative figure. The background is a soft yellow, which contrasts gently with the figure’s darker tones, allowing the character to stand out while maintaining a neutral and balanced composition. The overall style is clean and minimalist, typical of modern vector art, with an emphasis on smooth shading and bold shapes. The portrait conveys a sense of quiet strength, confidence, and professionalism. output: url: images/example_njkwus3me.png - text: >- This digital cartoon-style portrait features a woman with a distinctive and elegant appearance. She has straight, jet-black hair, styled in a blunt fringe that perfectly frames her pale face. The rest of her hair is pulled back into a tight bun at the top of her head, with a few long strands falling down the sides, adding a touch of sophistication and sleekness to her overall look. Her face is sharply defined with soft pink tones and precise shading, giving it a minimalist, modern appearance. The eyes are large and slightly downturned, with soft pink irises and long, subtle lashes, contributing to a delicate and introspective expression. The woman’s lips are painted in a muted red-pink color, adding a subtle warmth to the cool palette of the image. 
She wears long, elegant black drop earrings that complement her sleek hairstyle and formal attire. Her clothing is dark and textured, consisting of a thick, charcoal-gray turtleneck sweater with a subtle knit pattern. The high collar frames her neck and provides a sense of warmth and coziness, while the dark tones of her outfit contrast against her light skin and the vibrant blue background. The illustration is rendered in a clean, geometric vector art style, with smooth lines, flat colors, and sharp angles. The use of symmetry in her facial features and the sleekness of her overall design create a sense of calm and refinement. The mood of the portrait is sophisticated, modern, and slightly melancholic, evoking a quiet elegance in its simplicity and composition. output: url: images/example_mx4vemwrr.png - text: >- This digital cartoon-style portrait depicts a woman with a modern, minimalist aesthetic. She has a sleek, angular bob hairstyle that frames her face, with straight black hair featuring subtle gray highlights. The blunt fringe sits just above her eyebrows, giving her a polished and symmetrical look. Her facial features are soft yet defined, with a pale complexion and light shading that adds dimension to her face. Her large, almond-shaped eyes are highlighted by soft pink tones in the irises, giving her a thoughtful, calm expression. Her lips are painted in a muted pink, which complements her delicate features without overwhelming the overall subtlety of the portrait. The nose is narrow, and the contours of her face are sharp and symmetrical, contributing to a sense of balance and poise. She is dressed in a simple white top, possibly a tank top, with thin straps. The top reveals part of her chest and shoulders, keeping the focus on her face and hairstyle. The background is a deep, muted red, which contrasts with the cool tones of her hair and skin, making her stand out more distinctly. The overall illustration style is clean and geometric, with smooth lines and flat colors typical of vector art. The mood of the portrait is serene and sophisticated, with an emphasis on simplicity and symmetry. The minimalistic details in her expression and clothing give the character a sense of quiet confidence and modern elegance. output: url: images/example_86v6v99i4.png - text: >- This digital cartoon-style portrait depicts a palestinian woman with a modern, minimalist aesthetic. She has a sleek, angular bob hairstyle that frames her face, with straight black hair featuring subtle gray highlights. The blunt fringe sits just above her eyebrows, giving her a polished and symmetrical look. Her facial features are soft yet defined, with a pale complexion and light shading that adds dimension to her face. Her large, almond-shaped eyes are highlighted by soft pink tones in the irises, giving her a thoughtful, calm expression. Her lips are painted in a muted pink, which complements her delicate features without overwhelming the overall subtlety of the portrait. The nose is narrow, and the contours of her face are sharp and symmetrical, contributing to a sense of balance and poise. She is dressed in a simple white top, possibly a tank top, with thin straps. The top reveals part of her chest and shoulders, keeping the focus on her face and hairstyle. The background is a deep, muted red, which contrasts with the cool tones of her hair and skin, making her stand out more distinctly. The overall illustration style is clean and geometric, with smooth lines and flat colors typical of vector art. 
The mood of the portrait is serene and sophisticated, with an emphasis on simplicity and symmetry. The minimalistic details in her expression and clothing give the character a sense of quiet confidence and modern elegance. output: url: images/example_cu8abwpw4.png - text: >- This digital cartoon-style portrait features a youthful and vibrant character with a bold, futuristic look. The person has short, platinum-blonde hair styled in an edgy, asymmetrical cut, with long bangs covering one eye and shorter layers peeking out in the back. The bright, almost neon blonde hair contrasts with the colorful background, giving the character a modern, eye-catching appearance. The makeup is striking, with one eye featuring bold pink eyeshadow extending toward the brow, and black eyeliner framing the eye for a dramatic effect. On the opposite side, the person has small white dots decorating the skin just below the eye, adding a playful, creative element to the look. Their lips are lightly glossed in a soft pink, complementing the overall color palette while maintaining the focus on the vibrant eye makeup. The character wears a green top with pink and red accents on the shoulders, and a large, chunky gold necklace around their neck, adding a touch of bold fashion to the portrait. The clothing is modern and casual yet edgy, perfectly fitting the character's vibrant and confident style. The background is a gradient of dark blues and purples with soft glowing streaks of light, resembling a nightclub or futuristic setting, enhancing the lively and electric mood of the portrait. The overall art style is smooth and clean, typical of vector illustrations, with sharp lines and bright, neon-like colors. The character exudes confidence and individuality, with a fashion-forward, avant-garde aesthetic that feels both trendy and creative. output: url: images/example_0zrkfmzz4.png - text: >- This is a playful and cartoonish digital illustration of a broccoli character brought to life with exaggerated and humorous features. The broccoli's "head" consists of a large, fluffy green crown, representing the florets, with soft shading to give it depth and texture. The stalk below is a lighter green, and from this stalk emerge the character's comical facial features. The broccoli has two large, wide eyes with oversized black pupils, giving it a surprised and whimsical expression. Below the eyes is a wide, toothy grin, with a set of perfectly straight, white teeth and a slight gap between the two front ones. The mouth is open, with a hint of a red tongue inside, further adding to the character's friendly and playful demeanor. The background is a bright, cheerful yellow, which contrasts nicely with the various shades of green in the broccoli and makes the character pop. The overall art style is clean, simple, and bold, typical of modern vector art, with smooth lines and a minimalistic approach that emphasizes the humor and charm of the character. This adorable broccoli is both silly and fun, making vegetables look lively and engaging! output: url: images/example_xqv4pc0y6.png - text: >- This digital cartoon-style illustration features a young girl with an innocent, wide-eyed expression. She has a round face and short, light brown hair with neat bangs that frame her forehead. The hair is drawn in simple, smooth lines, giving it a soft, childlike appearance. Her eyes are oversized and shiny, with large black pupils and small white reflections, giving her an adorable and curious look. 
The simplicity of her eyes, along with the exaggerated size, enhances the charm and innocence of the character. Her small nose and slightly open mouth show a sweet smile, with a couple of baby teeth visible, adding to her youthful appearance. She is wearing a red dress with a matching red collar that has subtle decorative details, such as small dot patterns, making the outfit look quaint and appropriate for a young child. The collar is outlined in white, providing a soft contrast against the red tones of the dress. The background is left plain white, keeping the focus entirely on the girl. The art style is minimalist, with clean lines and flat colors typical of vector illustrations. The overall mood of the image is cheerful and lighthearted, capturing the innocence and sweetness of a young child. output: url: images/example_rwy3xy8mh.png - text: >- This digital cartoon-style illustration features an elderly man dressed as a ranger or outdoorsman, evoking a sense of adventure and nature. The man has a kind, slightly weathered face with pale skin and short, neatly combed white hair. He sports a small white mustache, adding to his distinguished appearance. His facial expression is calm and relaxed, with half-open eyes that give him a wise and experienced look. He is wearing a wide-brimmed ranger hat in olive green, which matches the natural theme of the image. His outfit consists of a light yellow-green coat with a high collar, and he has a red neckerchief or tie tucked under his collar, adding a touch of formality to his otherwise practical attire. The background is filled with overlapping green leaves, creating a rich, natural environment. The leaves vary in shades of green, providing depth and texture while maintaining the overall simplicity of the design. The pattern reinforces the outdoorsy, nature-oriented theme of the character. The illustration uses flat, clean colors and smooth lines typical of vector art. The character’s gentle demeanor, along with his traditional ranger outfit, suggests a wise, approachable figure, possibly someone who is experienced in nature or conservation work. The overall vibe is peaceful and grounded, capturing the essence of an experienced outdoorsman at home in nature. output: url: images/example_031o5buxc.png - text: >- This digital cartoon-style illustration features a young, stylish man enjoying a glass of milk. He has a modern, edgy look with dark brown hair styled into a high, voluminous quiff, while the sides of his head are shaved in a buzz-cut pattern. His facial hair is a neatly trimmed beard that gives him a well-groomed appearance. His expression is relaxed and confident, with raised eyebrows and a slight smirk as he sips from the glass. The man wears a sleeveless black tank top, showcasing his muscular arms. He has a silver hoop earring in his right ear, adding to his trendy and bold style. The shaved side of his head is detailed with small stubble, providing contrast to his fuller hair on top. He holds the glass of milk close to his mouth, mid-sip, and the hand gripping the glass is well-drawn with clear details on his fingers. The milk inside the glass is bright white, standing out against the warmer tones of his skin and the dark background. The background is a deep gradient of purple and maroon, giving the image a vibrant, nighttime feel that adds a sense of energy and coolness to the scene. The overall art style is clean and smooth, typical of vector illustrations, with bold colors and sharp lines. 
The image conveys a laid-back, confident vibe, combining a modern aesthetic with a casual activity. output: url: images/example_6eyqeo8uj.png - text: >- This playful digital cartoon illustration portrays a gorilla with a human-like twist. The gorilla has a calm, confident demeanor, wearing a sleek black business suit paired with a white shirt and a magenta tie, adding a touch of formality to the character. Its large, round eyes are expressive, with a warm amber color that adds a sense of intelligence and emotion to its face. Adding to the character's cool, laid-back vibe is a lit cigarette dangling from its mouth, with a small trail of smoke rising, contributing to a more rebellious or nonchalant attitude. The gorilla's face is drawn with smooth lines and subtle shading, accentuating its thick fur and prominent features such as the wide nose and strong jawline. The background is a solid, bright lime green, which contrasts sharply with the darker tones of the gorilla and its suit, making the character stand out vividly. The overall art style is clean, bold, and cartoonish, with smooth, polished lines typical of vector illustrations. The combination of the formal attire, casual smoking pose, and expressive eyes creates a unique and humorous character, blending the primal strength of a gorilla with the swagger of a businessperson. output: url: images/example_f6z89msfl.png - text: >- This digital cartoon-style illustration depicts an elderly man with a distinctive and cheerful appearance. He has a large, bushy white mustache that curves outward, giving him a friendly and grandfatherly look. His facial features are soft and rounded, with prominent, rosy cheeks and a big nose that adds to his warm expression. The man’s eyes are wide and expressive, featuring a subtle sparkle, while the skin around them shows light orange shading, possibly suggesting a bit of aging or sun exposure. His hair is a light brown, slightly wavy, and styled in a simple, classic manner, with white eyebrows that match his mustache. He is dressed in a bright yellow collared shirt, adding a pop of color to his otherwise neutral tones, and giving him a lively and approachable appearance. The background is a dark navy blue, which contrasts nicely with the bright yellow of his shirt and the white of his mustache, making the character stand out. The overall style is clean and cartoonish, with bold lines, flat colors, and smooth shading typical of vector art. The character radiates warmth and friendliness, suggesting someone with a gentle, welcoming personality. output: url: images/example_5k4gmkqep.png - text: >- This digital cartoon-style illustration features a colorful toucan standing against a vibrant blue background. The toucan has a distinctive, large beak that transitions from bright orange to yellow with a black tip, capturing the iconic appearance of this tropical bird. The beak is exaggerated in size, adding a playful and whimsical touch to the character. The bird’s body is mostly black, with a bright white patch on its chest. Its small, round eye is blue with a yellow ring around it, giving the bird a lively, curious expression. The toucan’s tail and wings are simple and black, complementing its sleek, cartoonish form. What stands out is the bird’s vibrant, circular head crest, which is composed of bold stripes in green, yellow, and red, resembling a stylized hat or headpiece. This adds a fun and creative twist to the toucan's design, enhancing its tropical vibe. 
The bird stands on orange feet with three toes, and its simple shadow below adds depth to the otherwise flat, colorful illustration. The overall art style is clean, smooth, and minimalist, with bold lines and bright colors typical of modern vector artwork. The illustration radiates energy and playfulness, bringing the exotic toucan to life in a cheerful and imaginative way. output: url: images/example_2i34emxlf.png - text: >- This digital cartoon-style illustration features a whimsical, fantasy-inspired cat with a unique and slightly fierce appearance. The cat has a round head with large, bright teal eyes that are wide open, giving it a curious and expressive look. The pupils are black and elongated, typical of a cat, but the eyes are exaggerated in size, adding a playful, endearing quality to the character. The cat’s fur is primarily brown, with darker and lighter shades used to add texture and depth. Its face is adorned with long, prominent whiskers that fan out from its snout, drawn in a light beige color, further emphasizing the feline characteristics. The ears are large and pointed, sticking up from the top of the head in a typical cat-like fashion. What sets this cat apart are its two long, sharp fangs that extend down from its upper jaw, giving it a somewhat fierce or mythical appearance, reminiscent of a saber-toothed tiger. The fangs are white and shiny, contrasting with the dark brown of the cat’s fur. Below the nose, the cat has a tiny pink tongue peeking out, which softens the overall look and adds a touch of cuteness. The background is a muted green, which contrasts nicely with the brown fur and vibrant eyes, making the character stand out. The overall style is clean and polished, with smooth lines and flat colors typical of vector art. The combination of the oversized eyes, sharp fangs, and soft fur creates a mix of both adorable and fierce, giving the cat a unique personality that feels both mythical and charming. output: url: images/example_a8bsc7p42.png --- # Tosti vector 1 (3000 steps) Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `tostiok`. ## Trigger words You should use `in a dark fantasy style, grainy` to trigger the image generation. <Gallery /> ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('lichorosario/flux-samhtr-remastered', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
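As a minimal usage sketch (the example prompt, resolution, and output file name below are illustrative assumptions, not values taken from this card), the trigger phrase is simply appended to the text prompt once the pipeline from the snippet above has been set up:

```py
# Assumes `pipeline` was built and the LoRA loaded exactly as in the snippet above.
trigger = "in a dark fantasy style, grainy"
prompt = f"This digital cartoon illustration features a colorful toucan, {trigger}"

image = pipeline(
    prompt,
    width=1024,   # illustrative resolution; adjust as needed
    height=1024,
).images[0]
image.save("toucan_dark_fantasy.png")
```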
tronsdds/google-gemma-7b-1726709031
tronsdds
"2024-09-19T01:24:40Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-7b", "base_model:adapter:google/gemma-7b", "region:us" ]
null
"2024-09-19T01:23:51Z"
--- base_model: google/gemma-7b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
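Since the "How to Get Started with the Model" section above is still a placeholder, the following is a minimal sketch, assuming this repository (`tronsdds/google-gemma-7b-1726709031`) contains a standard PEFT adapter trained on top of `google/gemma-7b` as the card metadata declares; the prompt and generation settings are illustrative only:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base_id = "google/gemma-7b"  # base model from the card metadata (may require accepting the Gemma license on the Hub)
adapter_id = "tronsdds/google-gemma-7b-1726709031"  # this repository, assumed to hold the PEFT adapter weights

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")

# Attach the adapter on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```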
utahnlp/squad_roberta-base_seed-1
utahnlp
"2024-09-19T01:24:20Z"
0
0
null
[ "safetensors", "roberta", "region:us" ]
null
"2024-09-19T01:24:03Z"
Entry not found
lichorosario/flux-lora-gliff-tosti-vector-1-1500s
lichorosario
"2024-09-19T01:30:36Z"
0
0
diffusers
[ "diffusers", "text-to-image", "fluxlora", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-schnell", "base_model:finetune:black-forest-labs/FLUX.1-schnell", "license:other", "region:us" ]
text-to-image
"2024-09-19T01:24:19Z"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora base_model: black-forest-labs/FLUX.1-schnell pipeline_tag: text-to-image instance_prompt: in a dark fantasy style, grainy library_name: diffusers inference: parameters: width: 1024 height: 1024 widget: - text: >- This is a playful digital cartoon illustration featuring a young boy and a white cat. The boy has a cheerful expression, with wide brown eyes and an open mouth, showing his teeth in a happy, excited manner. His brown hair is short and styled with a slightly angular cut, with a lighter patch of brown forming a beard along his jawline. He is wearing a bright orange long-sleeved shirt, which contrasts nicely against the green background. The white cat is nestled closely against the boy, with its front paws affectionately draped over his shoulder as though it's hugging him. The cat's large yellow eyes, with narrow, black vertical pupils, give it a curious yet calm expression. Its ears are pointed, and its pink nose and whiskers are drawn simply but add to its cute, friendly appearance. The background is a solid green, which provides a clean, colorful backdrop that allows the figures of the boy and cat to stand out. The illustration is rendered in a modern, vector art style, characterized by bold lines, smooth shapes, and vibrant colors, giving it a fun and lively feel. The interaction between the boy and the cat suggests a strong bond, adding warmth and charm to the image.. in a dark fantasy style, grainy output: url: images/example_xi42rsvku.png - text: >- This is a digital cartoon illustration that portrays a character reminiscent of a horror or dark fantasy figure. The central figure is a pale, human-like face with an eerie, menacing expression. The character's skin is stark white, creating a ghostly appearance, and is crisscrossed with red lines forming a grid pattern on the head. At each intersection of the grid, there are metal nails or pins, all protruding outward in a symmetrical fashion, emphasizing a mechanical or tortured aesthetic. The eyes are dark and sunken with heavy, dark red and black shading around them, creating an ominous stare. The character's mouth is open, revealing sharp teeth with a distinct gap between the top and bottom sets, further adding to the unsettling look. The nose is thin, with blue-tinted shadows around it, enhancing the cold, inhuman feel of the face. The figure is dressed in black, with only the high collar visible, further isolating the attention on the face and head. The background is a gradient of dark gray to black, which contributes to the foreboding tone of the image. The overall style uses clean, solid lines and smooth gradients, typical of modern vector art, but the subject matter and atmosphere are much darker and gothic compared to typical cartoon illustrations. The image draws upon visual cues from horror characters, using sharp contrast, exaggerated facial features, and symmetrical patterns to evoke unease. The pins and grid pattern across the head give it a painful and torturous look, likely referencing themes of body modification or mechanical horror. in a dark fantasy style, grainy output: url: images/example_q27aeqwdr.png - text: >- This digital cartoon illustration features a male character with a neutral expression. 
He is wearing a black helmet with two visible ventilation holes on top and a white logo resembling a cluster of circles. The helmet has chin straps on both sides, secured with buckles, adding a protective, sporty look. The man has glasses with rectangular frames, clear brown eyes, and a neatly trimmed beard and mustache, which frame his face symmetrically. His hair, partially visible under the helmet, is black and straight. The character wears a black shirt with a pointed collar, and a small part of a white undershirt is visible at the neckline, adding contrast to his dark outfit. His eyebrows are arched slightly, giving him a calm, thoughtful appearance. The background is a solid, bright yellow, which contrasts sharply with the black and dark tones of his helmet, beard, and clothing, making the character stand out prominently. The illustration uses smooth shading and bold, clean lines, typical of vector art. The overall tone is modern, simple, and slightly playful due to the bright background and clean design elements. in a dark fantasy style, grainy output: url: images/example_jakumppz5.png - text: >- This digital cartoon illustration features a female character with a neutral expression. sHe is wearing a pink helmet with two visible ventilation holes on top and a white logo resembling a cluster of circles. The helmet has chin straps on both sides, secured with buckles, adding a protective, sporty look. The girl has glasses with rectangular frames, clear brown eyes, which frame his face symmetrically. Her hair, partially visible under the helmet, is black and straight. The character wears a black shirt with a pointed collar, and a small part of a white undershirt is visible at the neckline, adding contrast to his dark outfit. Her eyebrows are arched slightly, giving her a calm, thoughtful appearance. The background is a solid, bright yellow, which contrasts sharply with the black and dark tones of her helmet, and clothing, making the character stand out prominently. The illustration uses smooth shading and bold, clean lines, typical of vector art. The overall tone is modern, simple, and slightly playful due to the bright background and clean design elements. in a dark fantasy style, grainy output: url: images/example_crabx93rh.png - text: >- This digital cartoon illustration features a cat dressed as an astronaut. The cat has a sleek, dark gray coat with white fur on its chest and a small pink nose. Its large yellow eyes, with narrow black pupils, give it an alert, focused expression. Long, white whiskers extend outward from its face, enhancing its feline appearance. The cat is wearing an orange space suit, which features detailed patches and a zipper down the middle. The patches include a black circular one with yellow details and a rectangular black patch with yellow stripes, giving the suit an authentic astronaut feel. The suit's collar is gray, adding contrast to the bright orange. The cat holds an astronaut's helmet under its arm, which is primarily white with a large black visor, reflecting two small blue ovals, suggesting the reflection of a light source. The background is a gradient of blue-gray, adding a subtle, futuristic atmosphere to the image. The overall style is smooth, with clean vector lines and solid colors typical of modern digital illustrations. The combination of the cat and the astronaut suit creates a fun and whimsical concept, blending space exploration with a playful, animal twist. 
output: url: images/example_0fm90d3uv.png - text: >- This digital cartoon illustration presents a stylized portrait of a bald man with a serious, intense expression. The image has a minimalist, geometric feel, with smooth shading and angular shapes that define the contours of the man's face. His skin is pale with subtle hues of gray, purple, and beige that create depth, giving the face a slightly futuristic, abstract quality. The man has a neatly trimmed, dark brown beard that frames his face, and his eyebrows are thick and sharply defined, contributing to his focused, intense gaze. His large eyes are prominent, with light reflections in them that draw attention to their clarity. The shading around the eyes adds to the intensity of his expression. The figure is wearing a simple black shirt, which blends into the darker tones of the image, keeping the focus on his face. The background is a deep gradient of dark purple fading to black, which creates a moody, dramatic atmosphere. The contrast between the dark background and the lighter tones of his face amplifies the sense of seriousness or contemplation. The overall style is clean and modern, with an emphasis on minimalism and bold contrasts. The sharp lines and smooth gradients give the portrait a sophisticated, almost digital feel, as if the character exists in a high-tech or virtual world. The illustration's simplicity and striking color choices make it visually impactful. output: url: images/example_iqd0f8w71.png - text: >- This digital cartoon illustration features a cute, stylized character with exaggerated proportions and a playful, toy-like appearance. The character has a large, round head with pale pink skin, and is sporting a distinctive hairstyle with two high ponytails, each curving outward. The hair is black with simple gray highlights, and yellow bands secure the ponytails, giving the character a youthful and whimsical vibe. The character's facial features are minimal but expressive. They are wearing large, round black glasses, and their eyes are closed with a slight upward curve, suggesting a cheerful or content expression. The lips are oversized and painted bright red, standing out as a prominent feature on the face. The character is dressed in a black top, paired with shorts that are black with a red and white stripe at the bottom. The body is small and stubby, with tiny arms and legs, further emphasizing the toy-like or chibi style. The background is simple, with a warm orange color at the top and a teal floor beneath, dotted with purple and green circular shapes. This colorful and minimal setting adds to the playful and lighthearted mood of the illustration. The overall art style is smooth and geometric, typical of modern vector art, with thick outlines and bold, flat colors. The character’s design is charming and fun, with a focus on simplicity and cuteness. output: url: images/example_2ii7rvg3r.png - text: >- This digital cartoon illustration depicts a character who appears to be a tech-savvy individual or gamer. The figure has a neutral yet focused expression, with thick black eyebrows and black hair that’s styled in short, angular layers. The character wears black, rectangular glasses with white lenses, giving them a techie or hacker persona. The individual is also wearing a bright lime-green headset with a microphone extending from the earpiece, positioned in front of their mouth. The headset's vivid color contrasts against the darker tones of the rest of the image, making it stand out. 
The character is dressed in a simple dark blue shirt, adding to the casual, tech-focused vibe of the illustration. The background is inspired by the visual aesthetic of "The Matrix" with a dark, computer screen-like backdrop filled with cascading green digital characters and symbols. These symbols, in various fonts and sizes, are laid out in a grid pattern, evoking the sense of being immersed in a digital world or cyberspace. The art style is clean and sharp, typical of vector illustrations, with bold outlines and smooth gradients. The overall atmosphere of the image suggests a person engaged in programming, gaming, or hacking, with the background amplifying the sense of a high-tech, virtual environment. output: url: images/example_wrcoy5d38.png - text: >- This is a simple, stylized digital cartoon illustration of a happy, young character with a large, toothy grin. The character’s face is expressive, with tightly closed eyes forming curved lines and thick, raised eyebrows indicating joy or laughter. The mouth is wide open, showing a row of white teeth with some gaps, emphasizing the childlike and playful nature of the character. The character has short, light brown hair, drawn in smooth, angular shapes, and their skin tone is a soft beige. They are wearing a light green hood pulled up over their head, framing their face. The hood contrasts with a navy blue jacket that is visible around their shoulders. Underneath the jacket, the character wears a plain white shirt, further contributing to the casual and playful tone. The background is a solid, deep red color, which adds warmth to the illustration without detracting from the focus on the character. The overall art style is minimalist, with clean lines and flat colors, typical of vector-based illustrations. The simplicity of the design, paired with the character’s exaggerated expression, gives the image a fun and lighthearted feel, as though the character is mid-laugh or enjoying a moment of pure happiness. output: url: images/example_mua6g6e24.png - text: >- This digital cartoon illustration features a male swimmer character full body. The illustration uses smooth shading and bold, clean lines, typical of vector art. The overall tone is modern, simple, and slightly playful due to the bright background and clean design elements. in a dark fantasy style, grainy output: url: images/example_93vuevahd.png - text: >- This digital cartoon illustration features an underwater diver character full body. The illustration uses smooth shading and bold, clean lines, typical of vector art. The overall tone is modern, simple, and slightly playful due to the bright background and clean design elements. in a dark fantasy style, grainy output: url: images/example_dq3vee0b7.png - text: >- This digital cartoon illustration features Kermit the frog. The illustration uses smooth shading and bold, clean lines, typical of vector art. The overall tone is modern, simple, and slightly playful due to the bright background and clean design elements. in a dark fantasy style, grainy output: url: images/example_lw1ws1c7x.png - text: >- This digital cartoon illustration depicts a shirtless man with a confident and relaxed expression. He has short, brown hair styled in a slightly tousled manner, with a well-groomed beard and mustache that frames his face. His eyebrows are thick and dark, matching his hair, and his eyes are bright green, giving him a friendly and approachable look. 
The skin tone is warm, with smooth shading that highlights the contours of his face, shoulders, and upper chest. The shading is simple yet effective, with a minimalist style that uses flat colors and gradients to create depth. His smile is subtle, adding to the relaxed and natural demeanor of the character. The background is a gradient of teal, transitioning from darker tones at the edges to lighter shades near the center, providing a calming contrast to the figure's skin tones. The clean lines and smooth transitions of color are characteristic of vector art, giving the image a polished and modern feel. The overall vibe of the illustration is laid-back, with a focus on simplicity and warmth. output: url: images/example_0x4xzh0bh.png - text: >- This digital cartoon illustration features a cheerful chef, depicted in a playful, stylized manner. The chef has a wide, friendly smile, showcasing prominent white teeth with a small gap between the top two. His face is round and expressive, with large, bright eyes that exude warmth and approachability. He sports a short, black mustache that neatly complements his facial features, along with a gray and white goatee, adding character to his face. The chef is dressed in a classic white chef's uniform, complete with a tall, traditional chef’s hat that extends upward, giving him an authoritative but approachable appearance. The uniform includes black buttons along the left side, typical of professional chef attire. His skin tone is a warm brown, and his thick black eyebrows are slightly arched, further enhancing his welcoming expression. The background is a soft beige, which keeps the focus on the chef's vibrant personality. The overall style is simple and clean, with bold lines and flat colors typical of vector art. The design, while minimal, conveys a sense of joy and professionalism, capturing the essence of a friendly, experienced chef who likely enjoys his work. output: url: images/example_u4qy4lok3.png - text: >- This digital cartoon illustration features a cheerful chef, depicted in a playful, stylized manner. The chef has a wide, friendly smile, showcasing prominent white teeth with a small gap between the top two. His face is round and expressive, with large, bright eyes that exude warmth and approachability. He sports a short, black mustache that neatly complements his facial features, along with a gray and white goatee, adding character to his face. The chef is dressed in a classic white chef's uniform, complete with a tall, traditional chef’s hat that extends upward, giving him an authoritative but approachable appearance. The uniform includes black buttons along the left side, typical of professional chef attire. His skin tone is a warm brown, and his thick black eyebrows are slightly arched, further enhancing his welcoming expression. The background is a soft beige, which keeps the focus on the chef's vibrant personality. The overall style is simple and clean, with bold lines and flat colors typical of vector art. The design, while minimal, conveys a sense of joy and professionalism, capturing the essence of a friendly, experienced chef who likely enjoys his work. output: url: images/example_q508ygk4d.png - text: >- This digital cartoon illustration features a wounded centaur, mythical creature. The illustration uses smooth shading and bold, clean lines, typical of vector art. The overall tone is modern, simple, and slightly playful due to the bright background and clean design elements. 
in a dark fantasy style, grainy output: url: images/example_ub9vnasbt.png - text: >- This digital cartoon illustration features a man with a surreal and humorous twist. The central focus of the image is the man’s head, which has been partially "opened" to reveal a stylized pink brain. The top of his head, including his hair, is shown being lifted off like a lid, with the hand holding it above the brain. This playful concept gives the illustration a quirky and imaginative feel. The man has a clean-shaven face with light stubble, dark, expressive eyes with heavy lids, and a neutral smile, giving him a calm, almost philosophical demeanor. His hair is short and brown, styled neatly but shown detached from his head in the quirky design. The brain is bright pink with smooth, rounded folds, drawn in a simplified, cartoonish style that contrasts with the otherwise realistic facial features. He is wearing a simple black shirt, which keeps the focus on the surreal aspect of the head. The background is white, further emphasizing the central figure. The illustration is clean, with bold lines and soft shading typical of modern vector art, and the overall vibe is playful and slightly absurd, blending realism with a fun, imaginative twist. The open head concept suggests themes of creativity, thinking, or humor. output: url: images/example_nge09vqqc.png - text: >- This is a playful and quirky digital cartoon illustration of a bee character with a humorous twist. The bee has a classic yellow and black striped body, small wings with a light blue tint, and an overall chubby, rounded form. Its head is large and features exaggerated, oversized round glasses with thick black frames, giving the bee a slightly nerdy and surprised expression. The wide-open mouth, with two visible buck teeth, adds to the bee’s quirky personality. The bee sports a unique hairstyle, with red hair styled in a smooth, swooping fashion, further anthropomorphizing the character and adding to its comedic charm. The wings are simple, outlined in black with smooth blue shading, giving them a semi-transparent, glossy look. The background is a bright lavender pink, which enhances the playful and whimsical nature of the illustration, making the character pop visually. The overall style is clean and minimalist, with bold lines and flat colors, typical of modern vector art. The combination of the bee’s funny expression, glasses, and unusual hairstyle creates a lighthearted and engaging character, blending elements of both human and insect traits in a humorous way. output: url: images/example_r4uylpfx5.png - text: >- This is a digital cartoon illustration that portrays a snake. in a dark fantasy style, grainy output: url: images/example_k9nhfizxr.png - text: >- This digital cartoon illustration features a cute, chubby creature with a round, soft appearance and a slightly melancholic expression. The character has a large, white face with a smooth, oval shape, and small, black eyes that are encircled by bright orange rings, adding contrast and drawing focus to its sad-looking face. Above the eyes are two tiny black dots acting as nostrils, while a thin black curved line below them suggests a subtle, frowning mouth, enhancing the creature's downcast demeanor. The body of the creature is teal, with short arms and large, rounded feet that feature stubby, light-colored toes. Its arms are simple, and its body has a plush, soft look, as if it's designed to be squishy. 
Small, fin-like protrusions stick out from the sides of its head, adding a subtle aquatic or amphibian vibe to the character. The overall design is minimalist, with smooth lines and clean shapes, making the creature appear endearing and approachable despite its sad expression. The background is solid black, which highlights the character’s light colors and gives the illustration a strong visual contrast. The style is typical of modern vector art, with simple shading and bold outlines that keep the focus on the character’s expression and form. The overall mood of the image is one of gentle sadness or calmness, evoking sympathy or affection for the adorable, pouty creature. output: url: images/example_jskn5w4so.png - text: >- This digital cartoon illustration features a young boy with bright orange hair, standing happily with two large, friendly-looking green dinosaurs beside him. The boy has a wide smile, revealing a gap between his teeth, and his expression is cheerful and content. His face is lightly shaded with simple gradients, giving it a soft and realistic appearance, while his short, wavy orange hair adds a playful touch to his overall look. His reddish-brown eyebrows complement his hair color, and his eyes are bright and expressive, further enhancing his joyful demeanor. Surrounding the boy are two green dinosaurs with long necks, resembling cartoonish versions of a Brachiosaurus. The larger dinosaur is gently leaning its head over the boy’s head, as if playfully interacting with him, while the smaller dinosaur appears in the background on the right side, looking on curiously. Both dinosaurs have small, round black eyes, simple smooth textures, and friendly, non-threatening appearances, which adds to the whimsical and fun tone of the illustration. The background is a plain white, keeping the focus on the characters, and the overall style is clean and polished, with bold lines and soft shading typical of modern vector art. The illustration creates a sense of playful companionship between the boy and the dinosaurs, evoking a lighthearted, imaginative atmosphere. output: url: images/example_4fud0tze1.png - text: >- This digital cartoon illustration features a young boy with bright orange hair, standing happily with two large, friendly-looking green dinosaurs beside him. The boy has a wide smile, revealing a gap between his teeth, and his expression is cheerful and content. His face is lightly shaded with simple gradients, giving it a soft and realistic appearance, while his short, wavy orange hair adds a playful touch to his overall look. His reddish-brown eyebrows complement his hair color, and his eyes are bright and expressive, further enhancing his joyful demeanor. Surrounding the boy are two green dinosaurs with long necks, resembling cartoonish versions of a Brachiosaurus. The larger dinosaur is gently leaning its head over the boy’s head, as if playfully interacting with him, while the smaller dinosaur appears in the background on the right side, looking on curiously. Both dinosaurs have small, round black eyes, simple smooth textures, and friendly, non-threatening appearances, which adds to the whimsical and fun tone of the illustration. The background is a plain white, keeping the focus on the characters, and the overall style is clean and polished, with bold lines and soft shading typical of modern vector art. The illustration creates a sense of playful companionship between the boy and the dinosaurs, evoking a lighthearted, imaginative atmosphere. 
output: url: images/example_np0lwxbt8.png - text: >- This digital cartoon illustration features a man depicted in bold, stylized colors with a modern, minimalist design. The man’s skin is shaded in tones of cool blue, contrasting sharply with his black hair and goatee. His expression is somewhat skeptical or curious, with raised eyebrows and eyes looking off to the side, as if pondering something. His short, dark hair is neatly styled, and his facial hair—a small mustache and goatee—adds a touch of personality to his appearance. He is wearing a plain white T-shirt, drawn with smooth, sharp lines that accentuate the folds in the fabric, giving the illustration a sense of depth and movement. The man is holding a lit cigar or vape pen in his right hand, from which a small, pink flame or vapor is rising, adding a pop of bright color to the otherwise cool-toned figure. His posture is relaxed, with his left arm by his side and his right arm raised, casually holding the cigar or vape. The background is a solid, bold red, which contrasts sharply with the blue tones of the man's skin and the white of his shirt, making him stand out prominently in the composition. The illustration style is clean and graphic, with simple shading and flat colors, typical of modern vector art. The overall mood is casual and reflective, with a hint of playfulness introduced by the pink flame. output: url: images/example_opsa2vq8y.png - text: >- This digital cartoon illustration presents a surreal and whimsical character blending human and geographical features. The character’s face is shaped like the Earth, with landmasses resembling continents mapped onto the head. Brown patches outline areas like North America, South America, and parts of Europe, creating the impression of a "world head." The figure's skin tone is beige, and the continents blend seamlessly into the face, which adds to the imaginative and quirky design. The character has vibrant red, curly hair, drawn in stylized, swirling waves that frame the face, giving a sense of dynamic movement. The green eyes are wide and expressive, with long eyelashes, and the eyebrows are thick and bright red, matching the hair. Below the eyes, the character wears a large, playful smile, showing gapped teeth, adding a touch of humor and friendliness to the otherwise odd appearance. The character is dressed in a purple top, accessorized with a large, pearl necklace around the neck, adding an element of elegance. The background is a dark blue, with subtle radial patterns, which contrasts with the bright colors of the figure, making the character stand out. The overall style is bold and cartoonish, with smooth lines, bright colors, and playful surrealism. The combination of human features and world geography gives the image a creative, out-of-the-box feel, blending elements of fantasy and humor. output: url: images/example_xys9oxbc6.png - text: >- This digital cartoon illustration presents a surreal and whimsical character blending human and geographical features. The character’s face is shaped like the Earth, with landmasses resembling continents mapped onto the head. Brown patches outline areas like North America, South America, and parts of Europe, creating the impression of a "world head." The figure's skin tone is beige, and the continents blend seamlessly into the face, which adds to the imaginative and quirky design. The character has vibrant red, curly hair, drawn in stylized, swirling waves that frame the face, giving a sense of dynamic movement. 
The green eyes are wide and expressive, with long eyelashes, and the eyebrows are thick and bright red, matching the hair. Below the eyes, the character wears a large, playful smile, showing gapped teeth, adding a touch of humor and friendliness to the otherwise odd appearance. The character is dressed in a purple top, accessorized with a large, pearl necklace around the neck, adding an element of elegance. The background is a dark blue, with subtle radial patterns, which contrasts with the bright colors of the figure, making the character stand out. The overall style is bold and cartoonish, with smooth lines, bright colors, and playful surrealism. The combination of human features and world geography gives the image a creative, out-of-the-box feel, blending elements of fantasy and humor. output: url: images/example_55hapj7qs.png - text: >- This digital cartoon illustration portrays a fierce, tribal warrior with a bold and powerful presence. The character's dark brown skin is complemented by intense facial features, including a wide-open mouth showing sharp teeth, which adds to the aggressive and commanding expression. The warrior's eyes are wide, with sharp, angular black eyebrows giving a sense of strength and intensity. The figure is adorned with traditional warrior attire, including a large, golden collar that sits around the neck, styled with broad, horizontal bands. Three large, sharp white tusks or teeth are attached to the collar, further enhancing the character’s intimidating appearance. These tusks add an element of raw, primal power to the warrior’s look, emphasizing a connection to nature or animals. The warrior wears a helmet or headpiece made of a greenish-brown material, shaped to fit snugly around the head. The helmet has a central rounded crest on top, adding a sense of status or importance, suggesting this character could be a leader or chief within their tribe. The background is a simple, muted brown, which helps focus attention on the detailed and striking figure. The art style is clean and sharp, with smooth lines and flat colors typical of vector illustrations, giving the character a bold and distinct appearance. The overall mood of the image is one of strength, tradition, and authority, capturing the essence of a powerful warrior. output: url: images/example_ka2enotmh.png - text: >- This digital cartoon illustration depicts a humorous and whimsical portrait of a man wearing a classic novelty disguise. The character’s head is large and prominently featured, with short, spiky white hair. His face is adorned with a fake, oversized black mustache, thick bushy eyebrows, and a comically large nose—all part of a playful disguise, reminiscent of the classic Groucho Marx glasses. The man is also wearing sunglasses, which cover most of his eyes, further adding to the humor and lighthearted tone of the image. A cigarette is perched in his mouth, completing the playful look, with a puff of smoke rising from the end. The details of the face, including wrinkles and shading, are stylized in a textured, almost crumpled paper-like effect, giving the illustration an added layer of visual interest. The background is a vibrant blue with a radial burst pattern, emanating outward in darker and lighter shades, which adds dynamic energy to the composition. The color contrast between the cool blue background and the neutral tones of the face and disguise elements makes the character pop visually. The overall style is bold, fun, and cartoonish, with clean lines and a clear focus on humor. 
The image evokes a playful, carefree vibe, capturing the essence of a comedic and lighthearted character who doesn’t take themselves too seriously. output: url: images/example_3x0ewdrc1.png - text: >- This digital cartoon-style portrait features a serious-looking individual with a calm and composed expression. The person has short, neatly styled black hair, with some strands falling slightly across the forehead, adding a sense of naturalness to the look. Their facial features are strong, with high cheekbones, a defined jawline, and prominent eyebrows that are thick and neatly shaped, giving the character a focused and thoughtful demeanor. The skin tone is a rich, deep brown, and the shading is smooth and subtle, enhancing the natural contours of the face. The eyes are slightly narrowed, giving the impression of concentration or introspection, and the lips are painted a dark maroon, which adds a touch of elegance and formality to the appearance. The individual is dressed in a high-collared turtleneck, light gray in color, which contrasts with the dark teal-green suit jacket. The formal attire, combined with the composed facial expression, suggests a professional or authoritative figure. The background is a soft yellow, which contrasts gently with the figure’s darker tones, allowing the character to stand out while maintaining a neutral and balanced composition. The overall style is clean and minimalist, typical of modern vector art, with an emphasis on smooth shading and bold shapes. The portrait conveys a sense of quiet strength, confidence, and professionalism. output: url: images/example_zappl0fnu.png - text: >- This digital cartoon-style portrait features a woman with a distinctive and elegant appearance. She has straight, jet-black hair, styled in a blunt fringe that perfectly frames her pale face. The rest of her hair is pulled back into a tight bun at the top of her head, with a few long strands falling down the sides, adding a touch of sophistication and sleekness to her overall look. Her face is sharply defined with soft pink tones and precise shading, giving it a minimalist, modern appearance. The eyes are large and slightly downturned, with soft pink irises and long, subtle lashes, contributing to a delicate and introspective expression. The woman’s lips are painted in a muted red-pink color, adding a subtle warmth to the cool palette of the image. She wears long, elegant black drop earrings that complement her sleek hairstyle and formal attire. Her clothing is dark and textured, consisting of a thick, charcoal-gray turtleneck sweater with a subtle knit pattern. The high collar frames her neck and provides a sense of warmth and coziness, while the dark tones of her outfit contrast against her light skin and the vibrant blue background. The illustration is rendered in a clean, geometric vector art style, with smooth lines, flat colors, and sharp angles. The use of symmetry in her facial features and the sleekness of her overall design create a sense of calm and refinement. The mood of the portrait is sophisticated, modern, and slightly melancholic, evoking a quiet elegance in its simplicity and composition. output: url: images/example_gfid6ml5h.png - text: >- This digital cartoon-style portrait features a woman with a distinctive and elegant appearance. She has straight, jet-black hair, styled in a blunt fringe that perfectly frames her pale face. 
The rest of her hair is pulled back into a tight bun at the top of her head, with a few long strands falling down the sides, adding a touch of sophistication and sleekness to her overall look. Her face is sharply defined with soft pink tones and precise shading, giving it a minimalist, modern appearance. The eyes are large and slightly downturned, with soft pink irises and long, subtle lashes, contributing to a delicate and introspective expression. The woman’s lips are painted in a muted red-pink color, adding a subtle warmth to the cool palette of the image. She wears long, elegant black drop earrings that complement her sleek hairstyle and formal attire. Her clothing is dark and textured, consisting of a thick, charcoal-gray turtleneck sweater with a subtle knit pattern. The high collar frames her neck and provides a sense of warmth and coziness, while the dark tones of her outfit contrast against her light skin and the vibrant blue background. The illustration is rendered in a clean, geometric vector art style, with smooth lines, flat colors, and sharp angles. The use of symmetry in her facial features and the sleekness of her overall design create a sense of calm and refinement. The mood of the portrait is sophisticated, modern, and slightly melancholic, evoking a quiet elegance in its simplicity and composition. output: url: images/example_46n1987wp.png - text: >- This digital cartoon-style portrait depicts a woman with a modern, minimalist aesthetic. She has a sleek, angular bob hairstyle that frames her face, with straight black hair featuring subtle gray highlights. The blunt fringe sits just above her eyebrows, giving her a polished and symmetrical look. Her facial features are soft yet defined, with a pale complexion and light shading that adds dimension to her face. Her large, almond-shaped eyes are highlighted by soft pink tones in the irises, giving her a thoughtful, calm expression. Her lips are painted in a muted pink, which complements her delicate features without overwhelming the overall subtlety of the portrait. The nose is narrow, and the contours of her face are sharp and symmetrical, contributing to a sense of balance and poise. She is dressed in a simple white top, possibly a tank top, with thin straps. The top reveals part of her chest and shoulders, keeping the focus on her face and hairstyle. The background is a deep, muted red, which contrasts with the cool tones of her hair and skin, making her stand out more distinctly. The overall illustration style is clean and geometric, with smooth lines and flat colors typical of vector art. The mood of the portrait is serene and sophisticated, with an emphasis on simplicity and symmetry. The minimalistic details in her expression and clothing give the character a sense of quiet confidence and modern elegance. output: url: images/example_xg7x1mfh6.png - text: >- This digital cartoon-style portrait features a youthful and vibrant character with a bold, futuristic look. The person has short, platinum-blonde hair styled in an edgy, asymmetrical cut, with long bangs covering one eye and shorter layers peeking out in the back. The bright, almost neon blonde hair contrasts with the colorful background, giving the character a modern, eye-catching appearance. The makeup is striking, with one eye featuring bold pink eyeshadow extending toward the brow, and black eyeliner framing the eye for a dramatic effect. 
On the opposite side, the person has small white dots decorating the skin just below the eye, adding a playful, creative element to the look. Their lips are lightly glossed in a soft pink, complementing the overall color palette while maintaining the focus on the vibrant eye makeup. The character wears a green top with pink and red accents on the shoulders, and a large, chunky gold necklace around their neck, adding a touch of bold fashion to the portrait. The clothing is modern and casual yet edgy, perfectly fitting the character's vibrant and confident style. The background is a gradient of dark blues and purples with soft glowing streaks of light, resembling a nightclub or futuristic setting, enhancing the lively and electric mood of the portrait. The overall art style is smooth and clean, typical of vector illustrations, with sharp lines and bright, neon-like colors. The character exudes confidence and individuality, with a fashion-forward, avant-garde aesthetic that feels both trendy and creative. output: url: images/example_5hfhfbf52.png - text: >- This digital cartoon-style portrait features a youthful and vibrant character with a bold, futuristic look. The person has short, platinum-blonde hair styled in an edgy, asymmetrical cut, with long bangs covering one eye and shorter layers peeking out in the back. The bright, almost neon blonde hair contrasts with the colorful background, giving the character a modern, eye-catching appearance. The makeup is striking, with one eye featuring bold pink eyeshadow extending toward the brow, and black eyeliner framing the eye for a dramatic effect. On the opposite side, the person has small white dots decorating the skin just below the eye, adding a playful, creative element to the look. Their lips are lightly glossed in a soft pink, complementing the overall color palette while maintaining the focus on the vibrant eye makeup. The character wears a green top with pink and red accents on the shoulders, and a large, chunky gold necklace around their neck, adding a touch of bold fashion to the portrait. The clothing is modern and casual yet edgy, perfectly fitting the character's vibrant and confident style. The background is a gradient of dark blues and purples with soft glowing streaks of light, resembling a nightclub or futuristic setting, enhancing the lively and electric mood of the portrait. The overall art style is smooth and clean, typical of vector illustrations, with sharp lines and bright, neon-like colors. The character exudes confidence and individuality, with a fashion-forward, avant-garde aesthetic that feels both trendy and creative. output: url: images/example_e9rn43rth.png - text: >- This is a playful and cartoonish digital illustration of a broccoli character brought to life with exaggerated and humorous features. The broccoli's "head" consists of a large, fluffy green crown, representing the florets, with soft shading to give it depth and texture. The stalk below is a lighter green, and from this stalk emerge the character's comical facial features. The broccoli has two large, wide eyes with oversized black pupils, giving it a surprised and whimsical expression. Below the eyes is a wide, toothy grin, with a set of perfectly straight, white teeth and a slight gap between the two front ones. The mouth is open, with a hint of a red tongue inside, further adding to the character's friendly and playful demeanor. 
The background is a bright, cheerful yellow, which contrasts nicely with the various shades of green in the broccoli and makes the character pop. The overall art style is clean, simple, and bold, typical of modern vector art, with smooth lines and a minimalistic approach that emphasizes the humor and charm of the character. This adorable broccoli is both silly and fun, making vegetables look lively and engaging! output: url: images/example_ujh3a55p2.png - text: >- This is a playful and cartoonish digital illustration of a broccoli character brought to life with exaggerated and humorous features. The broccoli's "head" consists of a large, fluffy green crown, representing the florets, with soft shading to give it depth and texture. The stalk below is a lighter green, and from this stalk emerge the character's comical facial features. The broccoli has two large, wide eyes with oversized black pupils, giving it a surprised and whimsical expression. Below the eyes is a wide, toothy grin, with a set of perfectly straight, white teeth and a slight gap between the two front ones. The mouth is open, with a hint of a red tongue inside, further adding to the character's friendly and playful demeanor. The background is a bright, cheerful yellow, which contrasts nicely with the various shades of green in the broccoli and makes the character pop. The overall art style is clean, simple, and bold, typical of modern vector art, with smooth lines and a minimalistic approach that emphasizes the humor and charm of the character. This adorable broccoli is both silly and fun, making vegetables look lively and engaging! output: url: images/example_aplbc5hzj.png - text: >- This digital cartoon-style illustration features a young girl with an innocent, wide-eyed expression. She has a round face and short, light brown hair with neat bangs that frame her forehead. The hair is drawn in simple, smooth lines, giving it a soft, childlike appearance. Her eyes are oversized and shiny, with large black pupils and small white reflections, giving her an adorable and curious look. The simplicity of her eyes, along with the exaggerated size, enhances the charm and innocence of the character. Her small nose and slightly open mouth show a sweet smile, with a couple of baby teeth visible, adding to her youthful appearance. She is wearing a red dress with a matching red collar that has subtle decorative details, such as small dot patterns, making the outfit look quaint and appropriate for a young child. The collar is outlined in white, providing a soft contrast against the red tones of the dress. The background is left plain white, keeping the focus entirely on the girl. The art style is minimalist, with clean lines and flat colors typical of vector illustrations. The overall mood of the image is cheerful and lighthearted, capturing the innocence and sweetness of a young child. output: url: images/example_e883h75eg.png - text: >- This digital cartoon-style illustration features an elderly man dressed as a ranger or outdoorsman, evoking a sense of adventure and nature. The man has a kind, slightly weathered face with pale skin and short, neatly combed white hair. He sports a small white mustache, adding to his distinguished appearance. His facial expression is calm and relaxed, with half-open eyes that give him a wise and experienced look. He is wearing a wide-brimmed ranger hat in olive green, which matches the natural theme of the image. 
His outfit consists of a light yellow-green coat with a high collar, and he has a red neckerchief or tie tucked under his collar, adding a touch of formality to his otherwise practical attire. The background is filled with overlapping green leaves, creating a rich, natural environment. The leaves vary in shades of green, providing depth and texture while maintaining the overall simplicity of the design. The pattern reinforces the outdoorsy, nature-oriented theme of the character. The illustration uses flat, clean colors and smooth lines typical of vector art. The character’s gentle demeanor, along with his traditional ranger outfit, suggests a wise, approachable figure, possibly someone who is experienced in nature or conservation work. The overall vibe is peaceful and grounded, capturing the essence of an experienced outdoorsman at home in nature. output: url: images/example_dn6bx0smg.png - text: >- This digital cartoon-style illustration features a young, stylish man enjoying a glass of milk. He has a modern, edgy look with dark brown hair styled into a high, voluminous quiff, while the sides of his head are shaved in a buzz-cut pattern. His facial hair is a neatly trimmed beard that gives him a well-groomed appearance. His expression is relaxed and confident, with raised eyebrows and a slight smirk as he sips from the glass. The man wears a sleeveless black tank top, showcasing his muscular arms. He has a silver hoop earring in his right ear, adding to his trendy and bold style. The shaved side of his head is detailed with small stubble, providing contrast to his fuller hair on top. He holds the glass of milk close to his mouth, mid-sip, and the hand gripping the glass is well-drawn with clear details on his fingers. The milk inside the glass is bright white, standing out against the warmer tones of his skin and the dark background. The background is a deep gradient of purple and maroon, giving the image a vibrant, nighttime feel that adds a sense of energy and coolness to the scene. The overall art style is clean and smooth, typical of vector illustrations, with bold colors and sharp lines. The image conveys a laid-back, confident vibe, combining a modern aesthetic with a casual activity. output: url: images/example_urxnxnjlr.png - text: >- This playful digital cartoon illustration portrays a gorilla with a human-like twist. The gorilla has a calm, confident demeanor, wearing a sleek black business suit paired with a white shirt and a magenta tie, adding a touch of formality to the character. Its large, round eyes are expressive, with a warm amber color that adds a sense of intelligence and emotion to its face. Adding to the character's cool, laid-back vibe is a lit cigarette dangling from its mouth, with a small trail of smoke rising, contributing to a more rebellious or nonchalant attitude. The gorilla's face is drawn with smooth lines and subtle shading, accentuating its thick fur and prominent features such as the wide nose and strong jawline. The background is a solid, bright lime green, which contrasts sharply with the darker tones of the gorilla and its suit, making the character stand out vividly. The overall art style is clean, bold, and cartoonish, with smooth, polished lines typical of vector illustrations. The combination of the formal attire, casual smoking pose, and expressive eyes creates a unique and humorous character, blending the primal strength of a gorilla with the swagger of a businessperson. 
output: url: images/example_c9wam3g4v.png - text: >- This digital cartoon-style illustration depicts an elderly man with a distinctive and cheerful appearance. He has a large, bushy white mustache that curves outward, giving him a friendly and grandfatherly look. His facial features are soft and rounded, with prominent, rosy cheeks and a big nose that adds to his warm expression. The man’s eyes are wide and expressive, featuring a subtle sparkle, while the skin around them shows light orange shading, possibly suggesting a bit of aging or sun exposure. His hair is a light brown, slightly wavy, and styled in a simple, classic manner, with white eyebrows that match his mustache. He is dressed in a bright yellow collared shirt, adding a pop of color to his otherwise neutral tones, and giving him a lively and approachable appearance. The background is a dark navy blue, which contrasts nicely with the bright yellow of his shirt and the white of his mustache, making the character stand out. The overall style is clean and cartoonish, with bold lines, flat colors, and smooth shading typical of vector art. The character radiates warmth and friendliness, suggesting someone with a gentle, welcoming personality. output: url: images/example_7zzdu4kss.png - text: >- This digital cartoon-style illustration features a colorful toucan standing against a vibrant blue background. The toucan has a distinctive, large beak that transitions from bright orange to yellow with a black tip, capturing the iconic appearance of this tropical bird. The beak is exaggerated in size, adding a playful and whimsical touch to the character. The bird’s body is mostly black, with a bright white patch on its chest. Its small, round eye is blue with a yellow ring around it, giving the bird a lively, curious expression. The toucan’s tail and wings are simple and black, complementing its sleek, cartoonish form. What stands out is the bird’s vibrant, circular head crest, which is composed of bold stripes in green, yellow, and red, resembling a stylized hat or headpiece. This adds a fun and creative twist to the toucan's design, enhancing its tropical vibe. The bird stands on orange feet with three toes, and its simple shadow below adds depth to the otherwise flat, colorful illustration. The overall art style is clean, smooth, and minimalist, with bold lines and bright colors typical of modern vector artwork. The illustration radiates energy and playfulness, bringing the exotic toucan to life in a cheerful and imaginative way. output: url: images/example_4l3txxl1k.png --- # Tosti vector 1 (1500 steps) Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `tostiok`. ## Trigger words You should use `in a dark fantasy style, grainy` to trigger the image generation. <Gallery /> ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('lichorosario/flux-samhtr-remastered', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
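A small, hedged follow-up to the card's own diffusers snippet: the repo ids and the trigger phrase below are taken verbatim from the card above, while the subject text in the prompt is a made-up example; the point is only that the stated trigger words presumably need to appear in the prompt for the LoRA style to take effect.

```python
from diffusers import AutoPipelineForText2Image
import torch

# Same setup as the card's snippet; repo ids copied from the card above.
pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights('lichorosario/flux-samhtr-remastered', weight_name='lora.safetensors')

# Hypothetical subject plus the card's trigger phrase appended to the prompt.
prompt = "a lone knight on a windswept cliff, in a dark fantasy style, grainy"
image = pipeline(prompt).images[0]
image.save('example.png')
```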
utahnlp/squad_roberta-base_seed-2
utahnlp
"2024-09-19T01:24:44Z"
0
0
null
[ "safetensors", "roberta", "region:us" ]
null
"2024-09-19T01:24:23Z"
Entry not found
cstinson/chuck-lora
cstinson
"2024-09-19T01:24:25Z"
0
0
null
[ "region:us" ]
null
"2024-09-19T01:24:25Z"
Entry not found
utahnlp/squad_roberta-base_seed-3
utahnlp
"2024-09-19T01:25:10Z"
0
0
null
[ "safetensors", "roberta", "region:us" ]
null
"2024-09-19T01:24:47Z"
Entry not found
dogssss/Qwen-Qwen1.5-1.8B-1726709091
dogssss
"2024-09-19T01:24:55Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "base_model:adapter:Qwen/Qwen1.5-1.8B", "region:us" ]
null
"2024-09-19T01:24:52Z"
--- base_model: Qwen/Qwen1.5-1.8B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
SALUTEASD/google-gemma-2b-1726709112
SALUTEASD
"2024-09-19T01:26:38Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-2b", "base_model:adapter:google/gemma-2b", "region:us" ]
null
"2024-09-19T01:25:11Z"
--- base_model: google/gemma-2b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
utahnlp/squad_roberta-large_seed-1
utahnlp
"2024-09-19T01:26:08Z"
0
0
null
[ "safetensors", "roberta", "region:us" ]
null
"2024-09-19T01:25:16Z"
Entry not found
shc0110/xlm-roberta-base-finetuned-panx-de
shc0110
"2024-09-19T01:32:53Z"
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2024-09-19T01:26:02Z"
--- library_name: transformers license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1356 - F1: 0.8547 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 263 | 0.1551 | 0.8191 | | 0.2139 | 2.0 | 526 | 0.1359 | 0.8465 | | 0.2139 | 3.0 | 789 | 0.1356 | 0.8547 | ### Framework versions - Transformers 4.44.2 - Pytorch 1.13.1+cu116 - Datasets 2.21.0 - Tokenizers 0.19.1
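As a quick, hedged usage sketch for the record above (not part of the original card): assuming the fine-tuned checkpoint is actually published on the Hub under the repo id shown in this record, it should load through a standard `transformers` token-classification pipeline; the German example sentence and the `aggregation_strategy` setting are illustrative choices, not taken from the card.

```python
from transformers import pipeline

# Assumes the checkpoint is available under the repo id shown in this record.
ner = pipeline(
    "token-classification",
    model="shc0110/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

# Made-up German sentence, in the spirit of the PAN-X German NER task.
print(ner("Angela Merkel besuchte das Brandenburger Tor in Berlin."))
```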
tronsdds/Qwen-Qwen1.5-1.8B-1726709169
tronsdds
"2024-09-19T01:26:22Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "base_model:adapter:Qwen/Qwen1.5-1.8B", "region:us" ]
null
"2024-09-19T01:26:09Z"
--- base_model: Qwen/Qwen1.5-1.8B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
utahnlp/squad_roberta-large_seed-2
utahnlp
"2024-09-19T01:27:13Z"
0
0
null
[ "safetensors", "roberta", "region:us" ]
null
"2024-09-19T01:26:13Z"
Entry not found
xpabloms/xpabloms
xpabloms
"2024-09-19T01:26:15Z"
0
0
null
[ "region:us" ]
null
"2024-09-19T01:26:15Z"
Entry not found
huazi123/google-gemma-2b-1726709234
huazi123
"2024-09-19T01:27:18Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-2b", "base_model:adapter:google/gemma-2b", "region:us" ]
null
"2024-09-19T01:27:12Z"
--- base_model: google/gemma-2b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
utahnlp/squad_roberta-large_seed-3
utahnlp
"2024-09-19T01:28:17Z"
0
0
null
[ "safetensors", "roberta", "region:us" ]
null
"2024-09-19T01:27:18Z"
Entry not found
xuetaogz/Qwen-Qwen1.5-1.8B-1726709238
xuetaogz
"2024-09-19T01:27:22Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "base_model:adapter:Qwen/Qwen1.5-1.8B", "region:us" ]
null
"2024-09-19T01:27:18Z"
--- base_model: Qwen/Qwen1.5-1.8B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
jerseyjerry/Qwen-Qwen1.5-1.8B-1726709265
jerseyjerry
"2024-09-19T01:27:51Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "base_model:adapter:Qwen/Qwen1.5-1.8B", "region:us" ]
null
"2024-09-19T01:27:45Z"
--- base_model: Qwen/Qwen1.5-1.8B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
GRBalance8/Amateur_Photography_v4
GRBalance8
"2024-09-19T01:29:04Z"
0
0
null
[ "region:us" ]
null
"2024-09-19T01:27:50Z"
Entry not found
tronsdds/google-gemma-2b-1726709276
tronsdds
"2024-09-19T01:28:31Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-2b", "base_model:adapter:google/gemma-2b", "region:us" ]
null
"2024-09-19T01:27:57Z"
--- base_model: google/gemma-2b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
ZeeeWP/segformer-b0-finetuned-segments-satellite-terrain
ZeeeWP
"2024-09-19T01:27:58Z"
0
0
null
[ "region:us" ]
null
"2024-09-19T01:27:58Z"
--- library_name: transformers license: other base_model: nvidia/mit-b0 tags: - vision - image-segmentation - generated_from_trainer model-index: - name: segformer-b0-finetuned-segments-satellite-terrain results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b0-finetuned-segments-satellite-terrain This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the ZeeeWP/terrain_map dataset. It achieves the following results on the evaluation set: - Loss: 1.6840 - Mean Iou: 0.2213 - Mean Accuracy: 0.3591 - Overall Accuracy: 0.5524 - Accuracy Unlabeled: nan - Accuracy Sand: 0.6831 - Accuracy Cliff: 0.7355 - Accuracy Bedrock flat: 0.5851 - Accuracy Bedrock lowhill: 0.0878 - Accuracy Bedrock highhill: 0.0 - Accuracy Gravel low hill: 0.4134 - Accuracy Gravel high hill: 0.0086 - Iou Unlabeled: 0.0 - Iou Sand: 0.5501 - Iou Cliff: 0.4902 - Iou Bedrock flat: 0.3157 - Iou Bedrock lowhill: 0.0705 - Iou Bedrock highhill: 0.0 - Iou Gravel low hill: 0.3403 - Iou Gravel high hill: 0.0037 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Sand | Accuracy Cliff | Accuracy Bedrock flat | Accuracy Bedrock lowhill | Accuracy Bedrock highhill | Accuracy Gravel low hill | Accuracy Gravel high hill | Iou Unlabeled | Iou Sand | Iou Cliff | Iou Bedrock flat | Iou Bedrock lowhill | Iou Bedrock highhill | Iou Gravel low hill | Iou Gravel high hill | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:-------------:|:--------------:|:---------------------:|:------------------------:|:-------------------------:|:------------------------:|:-------------------------:|:-------------:|:--------:|:---------:|:----------------:|:-------------------:|:--------------------:|:-------------------:|:--------------------:| | 1.9213 | 2.5 | 20 | 2.0018 | 0.1657 | 0.3442 | 0.4439 | nan | 0.3991 | 0.7817 | 0.4519 | 0.0293 | 0.4048 | 0.2201 | 0.1225 | 0.0 | 0.3675 | 0.4547 | 0.2506 | 0.0263 | 0.0125 | 0.1916 | 0.0222 | | 1.5399 | 5.0 | 40 | 1.8057 | 0.1919 | 0.3421 | 0.5081 | nan | 0.5153 | 0.7518 | 0.6825 | 0.0661 | 0.1095 | 0.2280 | 0.0418 | 0.0 | 0.4524 | 0.4784 | 0.3195 | 0.0587 | 0.0152 | 0.1955 | 0.0156 | | 1.5445 | 7.5 | 60 | 1.7148 | 0.2205 | 0.3572 | 0.5462 | nan | 0.6329 | 0.7368 | 0.5898 | 0.1010 | 0.0 | 0.4201 | 0.0195 | 0.0 | 0.5329 | 0.4856 | 0.3152 | 0.0793 | 0.0 | 0.3423 | 0.0088 | | 1.4825 | 10.0 | 80 | 1.6840 | 0.2213 | 0.3591 | 0.5524 | nan | 0.6831 | 0.7355 | 0.5851 | 0.0878 | 0.0 | 0.4134 | 0.0086 | 0.0 | 0.5501 | 0.4902 | 0.3157 | 0.0705 | 0.0 | 0.3403 | 0.0037 | ### Framework versions - Transformers 4.44.1 - Pytorch 2.3.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
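As a hedged inference sketch for the record above (not from the original card): assuming the checkpoint is published under the repo id shown here, the standard SegFormer classes in `transformers` should load it; the local image path is a hypothetical placeholder, and the upsampling step reflects SegFormer's usual quarter-resolution logits rather than anything stated in the card.

```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

# Assumes the fine-tuned checkpoint is available under this repo id.
repo = "ZeeeWP/segformer-b0-finetuned-segments-satellite-terrain"
processor = SegformerImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("tile.png")  # hypothetical local satellite tile
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)

# Upsample to the input resolution and take the per-pixel argmax to get a class map.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation = upsampled.argmax(dim=1)[0]  # (H, W) tensor of terrain class ids
```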
utahnlp/squad_microsoft_deberta-v3-base_seed-1
utahnlp
"2024-09-19T01:28:52Z"
0
0
null
[ "safetensors", "deberta-v2", "region:us" ]
null
"2024-09-19T01:28:21Z"
Entry not found
utahnlp/squad_microsoft_deberta-v3-base_seed-2
utahnlp
"2024-09-19T01:29:32Z"
0
0
null
[ "safetensors", "deberta-v2", "region:us" ]
null
"2024-09-19T01:28:57Z"
Entry not found
DrNicefellow/Qwen2.5-32B-Instruct-4.5bpw-exl2
DrNicefellow
"2024-09-19T01:30:05Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-09-19T01:29:19Z"
--- license: apache-2.0 --- This is a 4.5bpw quantized version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) made with [exllamav2](https://github.com/turboderp/exllamav2). ## License This model is available under the Apache 2.0 License. ## Discord Server Join our Discord server [here](https://discord.gg/xhcBDEM3). ## Feeling Generous? 😊 Eager to buy me a $2 cup of coffee or iced tea? 🍵☕ Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note saying which one you'd like me to drink.
anthonymeo/Glaive-Q4_K_M-GGUF
anthonymeo
"2024-09-19T01:29:44Z"
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:anthonymeo/Glaive", "base_model:quantized:anthonymeo/Glaive", "region:us" ]
null
"2024-09-19T01:29:21Z"
--- base_model: anthonymeo/Glaive tags: - llama-cpp - gguf-my-repo --- # anthonymeo/Glaive-Q4_K_M-GGUF This model was converted to GGUF format from [`anthonymeo/Glaive`](https://huggingface.co/anthonymeo/Glaive) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/anthonymeo/Glaive) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo anthonymeo/Glaive-Q4_K_M-GGUF --hf-file glaive-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo anthonymeo/Glaive-Q4_K_M-GGUF --hf-file glaive-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, LLAMA_CUDA=1 for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo anthonymeo/Glaive-Q4_K_M-GGUF --hf-file glaive-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo anthonymeo/Glaive-Q4_K_M-GGUF --hf-file glaive-q4_k_m.gguf -c 2048 ```
dogssss/Qwen-Qwen1.5-0.5B-1726709363
dogssss
"2024-09-19T01:29:26Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "base_model:adapter:Qwen/Qwen1.5-0.5B", "region:us" ]
null
"2024-09-19T01:29:23Z"
--- base_model: Qwen/Qwen1.5-0.5B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
utahnlp/squad_microsoft_deberta-v3-base_seed-3
utahnlp
"2024-09-19T01:30:08Z"
0
0
null
[ "safetensors", "deberta-v2", "region:us" ]
null
"2024-09-19T01:29:37Z"
Entry not found
GRBalance8/Antiblur
GRBalance8
"2024-09-19T01:30:44Z"
0
0
null
[ "region:us" ]
null
"2024-09-19T01:29:53Z"
Entry not found
SALUTEASD/Qwen-Qwen1.5-0.5B-1726709399
SALUTEASD
"2024-09-19T01:30:17Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "base_model:adapter:Qwen/Qwen1.5-0.5B", "region:us" ]
null
"2024-09-19T01:29:58Z"
--- base_model: Qwen/Qwen1.5-0.5B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
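The card above leaves its quick-start section as a placeholder. Since the record's metadata declares `library_name: peft` and `base_model: Qwen/Qwen1.5-0.5B`, a minimal sketch for loading the adapter with the standard peft API might look like the following; whether the stored adapter weights actually load this way is an assumption, and the prompt is purely illustrative. The same pattern applies to the other PEFT adapter records in this listing, with the repository and base-model ids swapped in.

```python
# Minimal sketch, assuming this repo holds a standard PEFT (LoRA-style) adapter
# for the declared base model; the card itself does not document usage.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen1.5-0.5B"                          # from the card's base_model field
adapter_id = "SALUTEASD/Qwen-Qwen1.5-0.5B-1726709399"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the adapter weights

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```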
utahnlp/squad_microsoft_deberta-v3-large_seed-1
utahnlp
"2024-09-19T01:31:30Z"
0
0
null
[ "safetensors", "deberta-v2", "region:us" ]
null
"2024-09-19T01:30:15Z"
Entry not found
mradermacher/CSCupcakeCoder-GGUF
mradermacher
"2024-09-19T01:31:42Z"
0
0
null
[ "region:us" ]
null
"2024-09-19T01:30:17Z"
---
base_model: CSgaoshouGroup/CSCupcakeCoder
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
Static quants of https://huggingface.co/CSgaoshouGroup/CSCupcakeCoder

<!-- provided-files -->
Weighted/imatrix quants do not appear to be available from me at this time. If they do not show up a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CSCupcakeCoder-GGUF/resolve/main/CSCupcakeCoder.Q2_K.gguf) | Q2_K | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/CSCupcakeCoder-GGUF/resolve/main/CSCupcakeCoder.IQ3_XS.gguf) | IQ3_XS | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/CSCupcakeCoder-GGUF/resolve/main/CSCupcakeCoder.Q3_K_S.gguf) | Q3_K_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/CSCupcakeCoder-GGUF/resolve/main/CSCupcakeCoder.IQ3_S.gguf) | IQ3_S | 1.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/CSCupcakeCoder-GGUF/resolve/main/CSCupcakeCoder.IQ3_M.gguf) | IQ3_M | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/CSCupcakeCoder-GGUF/resolve/main/CSCupcakeCoder.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CSCupcakeCoder-GGUF/resolve/main/CSCupcakeCoder.IQ4_XS.gguf) | IQ4_XS | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/CSCupcakeCoder-GGUF/resolve/main/CSCupcakeCoder.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/CSCupcakeCoder-GGUF/resolve/main/CSCupcakeCoder.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CSCupcakeCoder-GGUF/resolve/main/CSCupcakeCoder.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CSCupcakeCoder-GGUF/resolve/main/CSCupcakeCoder.Q5_K_S.gguf) | Q5_K_S | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/CSCupcakeCoder-GGUF/resolve/main/CSCupcakeCoder.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/CSCupcakeCoder-GGUF/resolve/main/CSCupcakeCoder.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/CSCupcakeCoder-GGUF/resolve/main/CSCupcakeCoder.Q8_0.gguf) | Q8_0 | 3.3 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and for providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
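The usage notes in this card assume some familiarity with GGUF tooling. As a concrete illustration, a minimal sketch for fetching one of the listed quants and running it with llama-cpp-python might look like this; the choice of quant comes from the table above, while the prompt and generation settings are placeholders.

```python
# Minimal sketch: download a quant listed above and run it locally.
# Assumes `pip install huggingface_hub llama-cpp-python`; prompt/settings are illustrative only.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/CSCupcakeCoder-GGUF",
    filename="CSCupcakeCoder.Q4_K_S.gguf",  # "fast, recommended" per the table above
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Write a short Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```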
huazi123/Qwen-Qwen1.5-0.5B-1726709444
huazi123
"2024-09-19T01:30:44Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "base_model:adapter:Qwen/Qwen1.5-0.5B", "region:us" ]
null
"2024-09-19T01:30:41Z"
--- base_model: Qwen/Qwen1.5-0.5B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
timaeus/tetrahedron-saes-full-attn
timaeus
"2024-09-19T01:31:28Z"
0
0
saelens
[ "saelens", "region:us" ]
null
"2024-09-19T01:30:54Z"
---
library_name: saelens
---
# SAEs for use with the SAELens library

This repository contains the following SAEs:
- blocks.0.attn.hook_z
- blocks.1.attn.hook_z
- blocks.0.hook_resid_pre
- blocks.0.hook_resid_post
- blocks.1.hook_resid_post

Load these SAEs using SAELens as below:

```python
from sae_lens import SAE

sae, cfg_dict, sparsity = SAE.from_pretrained("timaeus/tetrahedron-saes-full-attn", "<sae_id>")
```
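As a follow-up to the loading snippet in the card above, here is a minimal sketch of how one of these SAEs might be applied to activations. The choice of `sae_id`, the `cfg_dict["d_in"]` key, and the `encode`/`decode` methods are assumptions about current SAELens releases rather than anything this repository documents.

```python
# Minimal sketch, assuming a recent sae_lens release exposing SAE.encode / SAE.decode
# and that the hook point name doubles as the sae_id; tensor shapes are illustrative.
import torch
from sae_lens import SAE

sae, cfg_dict, _ = SAE.from_pretrained(
    "timaeus/tetrahedron-saes-full-attn",
    "blocks.0.hook_resid_post",  # one of the hook points listed in the card
)

# Fake activations with the SAE's expected input width (cfg_dict["d_in"] is assumed to exist).
acts = torch.randn(8, cfg_dict["d_in"])
features = sae.encode(acts)         # sparse feature activations
reconstruction = sae.decode(features)
print(features.shape, reconstruction.shape)
```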
brandonshit/Qwen-Qwen1.5-0.5B-1726709464
brandonshit
"2024-09-19T01:31:10Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "base_model:adapter:Qwen/Qwen1.5-0.5B", "region:us" ]
null
"2024-09-19T01:31:06Z"
--- base_model: Qwen/Qwen1.5-0.5B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
kennyhv2712/thya-lora
kennyhv2712
"2024-09-19T01:31:41Z"
0
0
null
[ "region:us" ]
null
"2024-09-19T01:31:15Z"
Entry not found
amonig/dippy_8365716660
amonig
"2024-09-19T01:31:18Z"
0
0
null
[ "region:us" ]
null
"2024-09-19T01:31:18Z"
Entry not found
GRBalance8/FluxRealSkin_v2
GRBalance8
"2024-09-19T01:31:52Z"
0
0
null
[ "region:us" ]
null
"2024-09-19T01:31:24Z"
Entry not found
utahnlp/squad_microsoft_deberta-v3-large_seed-2
utahnlp
"2024-09-19T01:32:51Z"
0
0
null
[ "safetensors", "deberta-v2", "region:us" ]
null
"2024-09-19T01:31:39Z"
Entry not found
tronsdds/google-gemma-7b-1726709500
tronsdds
"2024-09-19T01:32:29Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-7b", "base_model:adapter:google/gemma-7b", "region:us" ]
null
"2024-09-19T01:31:40Z"
--- base_model: google/gemma-7b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
Anil14349/medquad-text-generation-gpt2
Anil14349
"2024-09-19T01:32:54Z"
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-09-19T01:31:44Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
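This record's card is likewise an unfilled template, but its metadata tags the repository as a `transformers` GPT-2 checkpoint with the `text-generation` pipeline tag, so a minimal sketch of the standard pipeline call might look like the following. The medical-question prompt is a guess based on the "medquad" name, and the quality of the generated output is not documented.

```python
# Minimal sketch: load the checkpoint through the standard text-generation pipeline.
# The prompt is illustrative; output quality is not documented by the card.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Anil14349/medquad-text-generation-gpt2",
)
result = generator("What are the symptoms of glaucoma?", max_new_tokens=60, do_sample=True)
print(result[0]["generated_text"])
```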
jerseyjerry/google-gemma-2b-1726709535
jerseyjerry
"2024-09-19T01:32:24Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-2b", "base_model:adapter:google/gemma-2b", "region:us" ]
null
"2024-09-19T01:32:15Z"
--- base_model: google/gemma-2b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
GRBalance8/Phlux
GRBalance8
"2024-09-19T01:32:45Z"
0
0
null
[ "region:us" ]
null
"2024-09-19T01:32:26Z"
Entry not found
pkmadhyastha/Disease_Detection
pkmadhyastha
"2024-09-19T01:32:34Z"
0
0
null
[ "license:llama2", "region:us" ]
null
"2024-09-19T01:32:34Z"
--- license: llama2 ---
Krabat/Qwen-Qwen1.5-1.8B-1726709556
Krabat
"2024-09-19T01:32:38Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "base_model:adapter:Qwen/Qwen1.5-1.8B", "region:us" ]
null
"2024-09-19T01:32:36Z"
--- base_model: Qwen/Qwen1.5-1.8B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
utahnlp/squad_microsoft_deberta-v3-large_seed-3
utahnlp
"2024-09-19T01:34:13Z"
0
0
null
[ "safetensors", "deberta-v2", "region:us" ]
null
"2024-09-19T01:32:59Z"
Entry not found
xuetaogz/google-gemma-2b-1726709609
xuetaogz
"2024-09-19T01:33:35Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-2b", "base_model:adapter:google/gemma-2b", "region:us" ]
null
"2024-09-19T01:33:29Z"
--- base_model: google/gemma-2b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
brandonshit/Qwen-Qwen1.5-1.8B-1726709631
brandonshit
"2024-09-19T01:33:57Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "base_model:adapter:Qwen/Qwen1.5-1.8B", "region:us" ]
null
"2024-09-19T01:33:52Z"
--- base_model: Qwen/Qwen1.5-1.8B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
GRBalance8/JG_Sept
GRBalance8
"2024-09-19T01:33:57Z"
0
0
null
[ "region:us" ]
null
"2024-09-19T01:33:57Z"
Entry not found
SALUTEASD/Qwen-Qwen1.5-1.8B-1726709638
SALUTEASD
"2024-09-19T01:33:57Z"
0
0
null
[ "region:us" ]
null
"2024-09-19T01:33:57Z"
--- base_model: Qwen/Qwen1.5-1.8B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0