---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- SKMTEA
thumbnail: null
tags:
- image-segmentation
- DynUNet
- ATOMMIC
- pytorch
model-index:
- name: SEG_DynUNet_SKMTEA
  results: []
---

## Model Overview

DynUNet for MRI segmentation on the SKM-TEA dataset.

## ATOMMIC: Training

To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.

```bash
pip install atommic['all']
```

## How to Use this Model

The model is available for use in ATOMMIC and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.

Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/SEG/SKMTEA/conf).

### Automatically instantiate the model

```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/SEG_DynUNet_SKMTEA/blob/main/SEG_DynUNet_SKMTEA.atommic
mode: test
```

### Usage

You need to download the SKM-TEA dataset to effectively use this model. Check the [SKMTEA](https://github.com/wdika/atommic/blob/main/projects/SEG/SKMTEA/README.md) page for more information.


## Model Architecture
```yaml
model:
  model_name: SEGMENTATIONDYNUNET
  segmentation_module: DYNUNet
  segmentation_module_input_channels: 1
  segmentation_module_output_channels: 4
  segmentation_module_channels:
  - 32
  - 64
  - 128
  - 256
  - 512
  segmentation_module_kernel_size:
  - 3
  - 3
  - 3
  - 3
  - 1
  segmentation_module_strides:
  - 1
  - 1
  - 1
  - 1
  - 1
  segmentation_module_dropout: 0.0
  segmentation_module_norm: instance
  segmentation_module_activation: leakyrelu
  segmentation_module_deep_supervision: true
  segmentation_module_deep_supervision_levels: 2
  segmentation_module_normalize: false
  segmentation_module_norm_groups: 2
  segmentation_loss:
    dice: 1.0
  dice_loss_include_background: true # always set to true if the background is removed
  dice_loss_to_onehot_y: false
  dice_loss_sigmoid: false
  dice_loss_softmax: false
  dice_loss_other_act: none
  dice_loss_squared_pred: false
  dice_loss_jaccard: false
  dice_loss_flatten: false
  dice_loss_reduction: mean_batch
  dice_loss_smooth_nr: 1e-5
  dice_loss_smooth_dr: 1e-5
  dice_loss_batch: true
  dice_metric_include_background: true # always set to true if the background is removed
  dice_metric_to_onehot_y: false
  dice_metric_sigmoid: false
  dice_metric_softmax: false
  dice_metric_other_act: none
  dice_metric_squared_pred: false
  dice_metric_jaccard: false
  dice_metric_flatten: false
  dice_metric_reduction: mean_batch
  dice_metric_smooth_nr: 1e-5
  dice_metric_smooth_dr: 1e-5
  dice_metric_batch: true
  segmentation_classes_thresholds: [0.5, 0.5, 0.5, 0.5]
  segmentation_activation: sigmoid
  magnitude_input: true
  log_multiple_modalities: false # log all modalities in the same image, e.g. T1, T2, T1ce, FLAIR will be concatenated
  normalization_type: minmax
  normalize_segmentation_output: true
  complex_data: false
```
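The configuration above applies a sigmoid over the four output channels (`segmentation_activation: sigmoid`) and binarizes each class at its threshold from `segmentation_classes_thresholds`, while the Dice terms use the `smooth_nr`/`smooth_dr` stabilizers. A minimal NumPy sketch of that post-processing and the smoothed Dice score — the function names are illustrative, not ATOMMIC's API:

```python
import numpy as np

def postprocess(logits, thresholds=(0.5, 0.5, 0.5, 0.5)):
    """Sigmoid activation followed by per-class thresholding, as in the config."""
    probs = 1.0 / (1.0 + np.exp(-logits))           # segmentation_activation: sigmoid
    thr = np.asarray(thresholds).reshape(-1, 1, 1)  # one threshold per output channel
    return (probs >= thr).astype(np.float32)        # binary mask per class

def dice_score(pred, target, smooth_nr=1e-5, smooth_dr=1e-5):
    """Smoothed Dice, matching dice_loss_smooth_nr / dice_loss_smooth_dr."""
    intersection = (pred * target).sum()
    return (2.0 * intersection + smooth_nr) / (pred.sum() + target.sum() + smooth_dr)

logits = np.random.randn(4, 64, 64)  # 4 classes (segmentation_module_output_channels)
masks = postprocess(logits)
print(dice_score(masks, masks))      # identical masks give a Dice of 1.0
```

The smoothing constants keep the ratio defined when a class is absent from both prediction and ground truth.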

## Training
```yaml
optim:
  name: adam
  lr: 1e-4
  betas:
  - 0.9
  - 0.98
  weight_decay: 0.0
  sched:
    name: InverseSquareRootAnnealing
    min_lr: 0.0
    last_epoch: -1
    warmup_ratio: 0.1

trainer:
  strategy: ddp_find_unused_parameters_false
  accelerator: gpu
  devices: 1
  num_nodes: 1
  max_epochs: 20
  precision: 16-mixed # '16-mixed', 'bf16-mixed', '32-true', '64-true', '64', '32', '16', 'bf16'
  enable_checkpointing: false
  logger: false
  log_every_n_steps: 50
  check_val_every_n_epoch: -1
  max_steps: -1
```
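The `InverseSquareRootAnnealing` schedule warms the learning rate up for the first `warmup_ratio` fraction of steps and then decays it proportionally to the inverse square root of the step count. A rough standalone sketch of that shape, under the assumption of a linear warmup (the exact ATOMMIC implementation may differ in details such as the decay normalization):

```python
def inverse_sqrt_annealing(step, max_steps, base_lr=1e-4, warmup_ratio=0.1, min_lr=0.0):
    """Linear warmup followed by inverse-square-root decay (illustrative sketch)."""
    warmup_steps = max(1, int(max_steps * warmup_ratio))
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps  # linear warmup to base_lr
    # decay ~ 1/sqrt(step), scaled so the curve is continuous at warmup_steps
    lr = base_lr * (warmup_steps / (step + 1)) ** 0.5
    return max(lr, min_lr)

# learning rate rises during the first 10% of steps, then decays toward min_lr
schedule = [inverse_sqrt_annealing(s, max_steps=1000) for s in range(1000)]
```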

## Performance

Evaluation can be performed using the segmentation [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/segmentation.py) script for the segmentation task, with `--evaluation_type per_slice`.
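With `--evaluation_type per_slice`, metrics are computed on every 2D slice separately and then aggregated, rather than once over each 3D volume. A hedged sketch of the difference (an illustration of the aggregation, not the evaluation script itself):

```python
import numpy as np

def dice(pred, target, eps=1e-5):
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def dice_per_slice(pred_vol, target_vol):
    """Score every 2D slice separately, then report mean +/- std (per_slice)."""
    scores = [dice(p, t) for p, t in zip(pred_vol, target_vol)]
    return np.mean(scores), np.std(scores)

def dice_per_volume(pred_vol, target_vol):
    """Single score over the whole 3D volume."""
    return dice(pred_vol, target_vol)

vol = np.ones((8, 16, 16))  # toy 8-slice volume
mean_d, std_d = dice_per_slice(vol, vol)
```

Per-slice aggregation weights thin and thick anatomy equally and yields the mean +/- std figures reported below.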

Results
-------

Evaluation
----------

| DICE | F1 | HD95 | IOU |
| --- | --- | --- | --- |
| 0.6888 +/- 0.1359 | 0.05911 +/- 0.2638 | 8.973 +/- 4.507 | 0.01517 +/- 0.06638 |

## References

[1] [ATOMMIC](https://github.com/wdika/atommic)

[2] Desai AD, Schmidt AM, Rubin EB, et al. SKM-TEA: A Dataset for Accelerated MRI Reconstruction with Dense Image Labels for Quantitative Clinical Evaluation. 2022.