---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: Rashed-vit-model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Rashed-vit-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set (these values match the step-600 checkpoint in the results table below):
- Loss: 0.0047
- Accuracy: 1.0

Note that the accuracy increments in the results table (0.9889, 0.9778, 0.9556, 0.9111) are consistent with an evaluation set of roughly 90 images, so the perfect score should be read with that small sample size in mind.
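
The card was generated without usage instructions, so here is a minimal inference sketch. The Hub repo id `Rashed-Mamdi/Rashed-vit-model` and the image filename are assumptions; substitute the actual repo id or a local checkpoint directory.

```python
from transformers import pipeline

# Minimal inference sketch -- the repo id below is an assumption based
# on this card's name, not a confirmed Hub path.
classifier = pipeline(
    "image-classification",
    model="Rashed-Mamdi/Rashed-vit-model",
)

# ViT-base/16 expects 224x224 inputs; the pipeline's image processor
# handles resizing and normalization automatically.
predictions = classifier("example.jpg")  # hypothetical input image
for p in predictions:
    print(f"{p['label']}: {p['score']:.4f}")
```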
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
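
The data is undocumented beyond its type, `imagefolder` (images organized in class-named subdirectories). For reference, such a dataset is typically loaded as sketched below; `path/to/images` is a placeholder, not the actual training data.

```python
from datasets import load_dataset

# Generic imagefolder loading sketch -- data_dir is a placeholder; the
# dataset actually used for this model is not documented in the card.
dataset = load_dataset("imagefolder", data_dir="path/to/images")

# Class labels are inferred from the subdirectory names.
print(dataset["train"].features["label"].names)
```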
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
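
As a rough reconstruction, these values map onto `TrainingArguments` as sketched below. The `output_dir` is assumed, and the 200-step evaluation cadence is inferred from the step column of the results table rather than stated in the card.

```python
from transformers import TrainingArguments

# Approximate reconstruction of the run configuration from the values
# listed above; output_dir and eval_steps are assumptions.
training_args = TrainingArguments(
    output_dir="Rashed-vit-model",  # assumed
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    eval_strategy="steps",
    eval_steps=200,  # inferred from the step column in the results table
    # Adam betas=(0.9, 0.999) and epsilon=1e-8 match the library's
    # default optimizer settings, so no explicit override is needed.
)
```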
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.2279 | 1.9048 | 200 | 0.4485 | 0.9111 |
| 0.1335 | 3.8095 | 400 | 0.0680 | 0.9889 |
| 0.0061 | 5.7143 | 600 | 0.0047 | 1.0 |
| 0.0025 | 7.6190 | 800 | 0.0606 | 0.9778 |
| 0.0624 | 9.5238 | 1000 | 0.2500 | 0.9556 |
| 0.0013 | 11.4286 | 1200 | 0.0868 | 0.9889 |
| 0.001 | 13.3333 | 1400 | 0.0908 | 0.9889 |
| 0.0008 | 15.2381 | 1600 | 0.0935 | 0.9889 |
| 0.0006 | 17.1429 | 1800 | 0.0960 | 0.9889 |
| 0.0005 | 19.0476 | 2000 | 0.0979 | 0.9889 |
| 0.0004 | 20.9524 | 2200 | 0.0996 | 0.9889 |
| 0.0004 | 22.8571 | 2400 | 0.1013 | 0.9889 |
| 0.0003 | 24.7619 | 2600 | 0.1027 | 0.9889 |
| 0.0003 | 26.6667 | 2800 | 0.1040 | 0.9889 |
| 0.0003 | 28.5714 | 3000 | 0.1054 | 0.9889 |
| 0.0002 | 30.4762 | 3200 | 0.1065 | 0.9889 |
| 0.0002 | 32.3810 | 3400 | 0.1076 | 0.9889 |
| 0.0002 | 34.2857 | 3600 | 0.1085 | 0.9889 |
| 0.0002 | 36.1905 | 3800 | 0.1094 | 0.9889 |
| 0.0002 | 38.0952 | 4000 | 0.1102 | 0.9889 |
| 0.0002 | 40.0 | 4200 | 0.1109 | 0.9889 |
| 0.0001 | 41.9048 | 4400 | 0.1115 | 0.9889 |
| 0.0001 | 43.8095 | 4600 | 0.1120 | 0.9889 |
| 0.0001 | 45.7143 | 4800 | 0.1124 | 0.9889 |
| 0.0001 | 47.6190 | 5000 | 0.1126 | 0.9889 |
| 0.0001 | 49.5238 | 5200 | 0.1128 | 0.9889 |
### Framework versions
- Transformers 4.43.3
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1