nalindew committed on
Commit
769f2bc
1 Parent(s): 1807e1c

Model save

README.md ADDED
@@ -0,0 +1,88 @@
---
license: mit
base_model: shi-labs/nat-mini-in1k-224
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
- f1
model-index:
- name: nat-mini-in1k-224-finetuned-breakhis
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: image_folder
      type: image_folder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9669421487603306
    - name: F1
      type: f1
      value: 0.9612429172231991
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# nat-mini-in1k-224-finetuned-breakhis

This model is a fine-tuned version of [shi-labs/nat-mini-in1k-224](https://huggingface.co/shi-labs/nat-mini-in1k-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0983
- Accuracy: 0.9669
- F1: 0.9612
- Roc Auc: 0.9648

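As a quick orientation, a typical way to load such a checkpoint for inference is sketched below. The repository id, image path, and label mapping are assumptions rather than details taken from this card, and NAT backbones additionally require the `natten` package to be installed alongside `transformers`.

```python
# Minimal inference sketch (assumed repo id and image path; NAT models
# also need the `natten` package installed alongside transformers).
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "nalindew/nat-mini-in1k-224-finetuned-breakhis"  # assumed repository id
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("histopathology_patch.png").convert("RGB")  # placeholder image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_label = model.config.id2label[logits.argmax(-1).item()]
print(predicted_label)
```
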
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5

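For reference, the list above maps onto a `TrainingArguments` configuration roughly as follows. This is a sketch assuming a standard `Trainer` run, not the exact script behind this card; `output_dir` is a placeholder.

```python
# Sketch of TrainingArguments matching the listed hyperparameters.
# output_dir is a placeholder; the original training script is not part of this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="nat-mini-in1k-224-finetuned-breakhis",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,  # 32 * 4 = 128 total train batch size
    num_train_epochs=5,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
)
```

The Adam betas and epsilon listed above match the Trainer's default optimizer settings, so they do not need to be passed explicitly.
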
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:-------:|
| 0.3247        | 0.99  | 59   | 0.2084          | 0.9157   | 0.8968 | 0.8836  |
| 0.1338        | 2.0   | 119  | 0.1686          | 0.9355   | 0.9266 | 0.9437  |
| 0.1078        | 2.99  | 178  | 0.0986          | 0.9694   | 0.9636 | 0.9597  |
| 0.0795        | 4.0   | 238  | 0.0957          | 0.9719   | 0.9668 | 0.9660  |
| 0.0522        | 4.96  | 295  | 0.0983          | 0.9669   | 0.9612 | 0.9648  |

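The Accuracy, F1, and Roc Auc columns can be produced by a `compute_metrics` callback along the lines of the sketch below. It assumes the `evaluate` library and binary labels (BreakHis is typically framed as benign vs. malignant) and is not the original evaluation code.

```python
# Hypothetical compute_metrics producing the columns reported above
# (assumes the `evaluate` library and binary labels; not the original script).
import numpy as np
import evaluate

accuracy_metric = evaluate.load("accuracy")
f1_metric = evaluate.load("f1")
roc_auc_metric = evaluate.load("roc_auc")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # Softmax probability of the positive class, used for ROC AUC.
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = exp / exp.sum(axis=-1, keepdims=True)
    return {
        "accuracy": accuracy_metric.compute(predictions=preds, references=labels)["accuracy"],
        "f1": f1_metric.compute(predictions=preds, references=labels)["f1"],
        "roc_auc": roc_auc_metric.compute(prediction_scores=probs[:, 1], references=labels)["roc_auc"],
    }
```
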

### Framework versions

- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
runs/Mar18_09-23-06_4a1998b12619/events.out.tfevents.1710753825.4a1998b12619.129.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f61c369ac4d71bc2cb6cf8019a3241a64c3955c60782d3d645f27a738de8d8f6
- size 11423
+ oid sha256:090cbcc11a81ae51503b1fc9dbd205c41c92b5ffe5f0b677ca43d76ca2846621
+ size 13463