LLM - Detect AI Generated Text
[21st Solution] Secret Sauce [0.986 Public - Selected Private: 0.932, Best: 0.957]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you <strong>ALL</strong> for such a great time and competition :-) </p> <p><strong>LB Solution - The Secret Sauce</strong></p> <p><strong>Phase 1: Selecting datasets.</strong></p> <p>I used the following datasets: </p> <ul> <li>DAIGT V2 Train Dataset by <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a></li> <li>LLM: Mistral-7B Instruct texts by <a href="https://www.kaggle.com/carlmcbrideellis" target="_blank">@carlmcbrideellis</a></li> <li>LLM Extra dataset (private, generated by me), which I generated from Mistral-7B in the same way that <a href="https://www.kaggle.com/carlmcbrideellis" target="_blank">@carlmcbrideellis</a> did.</li> <li>Gemini Pro LLM - DAIGT by me (public)</li> <li>Competition dataset.</li> </ul> <p><strong>Phase 2: Dataset preparation.</strong><br> All 15 prompts were kept (at this phase).<br> The datasets were merged into 3 datasets: the Train dataset (all data), LLM-data (only the Mistral data plus the Gemini data, all with label 1), and the original competition dataset (which is ~99% human, label 0), which I named train_old (don't know why!). </p> <p><strong>Phase 3: Training a LogisticRegression for prompt_name classification.</strong><br> A simple and quick phase, but needed as insurance. Why? <br> I trained the model to predict the prompt_name of each test text.<br> I counted the unique number of prompt_ids (I know it's 5, but I wanted to make it a more general solution).<br> I selected the top N most frequent prompt_names (where N is the unique number of prompt_ids) from the 9000+ test set.<br> In this way, I know for sure which prompt_names are in the test set (this can be applied to different datasets as well). </p> <p><strong>Phase 4: Selecting the training datasets.</strong><br> Now, after finding out the prompts used, I selected their corresponding texts from the Train dataset only. <br> I didn’t do this for the LLM-data set or train_old (I wanted to keep more general data for less overfitting). </p> <p><strong>Phase 5: Spelling phase.</strong><br> Added later: Levenshtein-distance-based error correction, based on the discussion thread by <a href="https://www.kaggle.com/piotrkoz" target="_blank">@piotrkoz</a> and the method posted by <a href="https://www.kaggle.com/wickkey2" target="_blank">@wickkey2</a> (which saved me some time writing a similar one). </p> <p>The error correction was done on all train datasets and the test dataset.</p> <p><strong>Phase 6: The magic of Byte Pair Encoding, thanks to <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a>'s golden approach.</strong><br> I used a vocab size of only 5000, which was enough. <br> I created 3 tokenized datasets: train, test, and test+llm_data, which is used for fitting.</p> <p><strong>Phase 7: TF-IDF</strong> with a 3 to 6/7 n-gram range and min_df = 2 (which showed the best results). <br> <strong>Phase 8: MaxAbsScaler + Ridge/LinearSVR is all you need.</strong></p> <p>Up to here the public score was around 0.976/7 (with spell checking) and 0.972/3 without, and as far as I can tell between 0.894 and 0.905 on private.</p> <p>I knew that this was not enough and that I would most likely take a hit on the private LB, so I went to dig more. 
</p> <p><strong>Phase 9: I trained different transformer models</strong> with stemmed words, POS, lemmas, and mixup sentences … I tried a lot, but nothing worked well until <a href="https://www.kaggle.com/mustafakeser4" target="_blank">@mustafakeser4</a> shared his results with distilroberta.<br> So I retried with distilroberta and was able to reach 0.915. But in the end, I used the 0.913 version of his work (the 0.927 didn’t show higher results on the LB, so I selected the weaker model). I didn’t choose mine because it showed slightly lower ensembling results. </p> <p><strong>Phase 10 Ensembling:</strong> I ensembled the distilroberta results with my Ridge model results and pushed the score from 0.972 to 0.975 on the public leaderboard.</p> <p><strong>Phase 11: Secret Sauce and the reason behind my 0.932 and 0.957 private scores.</strong></p> <p>Since the test data is manipulated in some way and it's hard to come up with a dataset similar enough, I thought: why not train with the test set! <br> I ranked the results of my top ensemble notebook (0.975 back then), selected the top X rows as AI generated (high scores) and the bottom Y rows as human (low scores) (in the first experiment X and Y were 1000), added them to the training phase, and rescored; this pushed the score from 0.975 to 0.982.<br> I repeated the process 4 times (in the last experiment I repeated it 5 times); each time I increased the X and Y selection by 200/250 rows, and I was able to score 0.983 public and 0.922 private.</p> <p><strong>Phase 12: More Sauce:</strong><br> This time I selected the 50 median rows from the test set (rows with scores around the median).</p> <ul> <li>I ran a similarity search against the train data to find the top X (X was set to 10) train rows for those test rows.</li> <li>I retrained the Ridge model and updated the scores of those 50 rows.</li> <li>This pushed the public LB to 0.984 and private to 0.929.</li> </ul> <p><strong>Phase 13: I thought that this was not enough and that I needed more transformers:</strong><br> The data selected in Phase 11 (the top X and bottom Y rows) was used as validation data for distilroberta (I needed something fast - I was training while submitting!). <br> I ran it for a couple of epochs (the best was 4.0) and the best scoring model was selected. <br> I generated new scores, which I ensembled with the final results of Phase 12 and Phase 11; this gave an LB score of 0.982 and a private score of 0.957 (without Phase 11) and 0.953 with Phase 11.</p> <p>That summarizes my LB solution. </p> <p>Best Private Score Notebook (needs cleaning): <a href="https://www.kaggle.com/asalhi/modelsxrunv1deeplearning" target="_blank">https://www.kaggle.com/asalhi/modelsxrunv1deeplearning</a><br> Chosen Notebook: <a href="https://www.kaggle.com/code/asalhi/secretsauceartridge" target="_blank">https://www.kaggle.com/code/asalhi/secretsauceartridge</a></p> <p>I didn't choose my best work :/ this is very frustrating, but that's a mistake I have to accept and move on from.</p> <p>I am not sure about the efficiency prize score yet! I am not getting my hopes up :/</p>
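<p>To make the core pipeline (Phases 6-8) and the Phase 11 pseudo-labeling loop concrete, here is a minimal sketch. It assumes lists <code>train_texts</code>/<code>test_texts</code> and labels <code>y_train</code>; these names, the Ridge alpha, and the 1000-row cutoff are illustrative placeholders, not the author's exact code.</p> <pre><code>import numpy as np
from scipy.sparse import vstack
from tokenizers import Tokenizer, models, pre_tokenizers, trainers
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import MaxAbsScaler
from sklearn.linear_model import Ridge

# Phase 6: fit a small BPE tokenizer on the test texts so the vocabulary matches the target domain.
bpe = Tokenizer(models.BPE(unk_token="[UNK]"))
bpe.pre_tokenizer = pre_tokenizers.Whitespace()
bpe.train_from_iterator(test_texts, trainers.BpeTrainer(vocab_size=5000, special_tokens=["[UNK]"]))

# Phases 7-8: TF-IDF over BPE tokens (3-6 grams, min_df=2), MaxAbsScaler, Ridge as the scorer.
vectorizer = TfidfVectorizer(tokenizer=lambda s: bpe.encode(s).tokens,
                             preprocessor=lambda s: s,
                             ngram_range=(3, 6), min_df=2, lowercase=False, sublinear_tf=True)
X_all = vectorizer.fit_transform(list(train_texts) + list(test_texts))
X_train, X_test = X_all[:len(train_texts)], X_all[len(train_texts):]
scaler = MaxAbsScaler().fit(X_all)
ridge = Ridge(alpha=1.0).fit(scaler.transform(X_train), y_train)
scores = ridge.predict(scaler.transform(X_test))

# Phase 11 ("secret sauce"): one pseudo-labeling round - treat the highest scored test rows as AI,
# the lowest scored rows as human, add them to the training data, and retrain.
order = np.argsort(scores)
top_ai, bottom_human = order[-1000:], order[:1000]
X_aug = vstack([X_train, X_test[top_ai], X_test[bottom_human]])
y_aug = np.concatenate([np.asarray(y_train), np.ones(1000), np.zeros(1000)])
scores = Ridge(alpha=1.0).fit(scaler.transform(X_aug), y_aug).predict(scaler.transform(X_test))</code></pre> <p>In the write-up this loop is repeated several times, growing the pseudo-labeled set by 200-250 rows per round.</p>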
LLM - Detect AI Generated Text
6th place solution with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Many thanks to Kaggle and the organizers for creating the competition.</p> <p>Link to training and inference code: <a href="https://www.kaggle.com/code/davidecozzolino/coder-one2" target="_blank">https://www.kaggle.com/code/davidecozzolino/coder-one2</a> <br> Link to github repository: <a href="https://github.com/davin11/entropy-based-text-detector" target="_blank">https://github.com/davin11/entropy-based-text-detector</a><br> Link to model summary document: <a href="https://github.com/davin11/entropy-based-text-detector/blob/main/Documentation.pdf" target="_blank">https://github.com/davin11/entropy-based-text-detector/blob/main/Documentation.pdf</a></p> <p>Solution:</p> <ol> <li>A pre-trained Large Language Model (LLM) is used to compute entropy-based synthetic features.</li> <li>Starting from feature vectors of a few elements, a One-Class SVM is trained using only the human-written essays provided by the organizers as the training set.</li> </ol> <p>Note:</p> <ul> <li>I used <a target="_blank">DAIGT-V4-TRAIN-DATASET</a> to select the best features.</li> <li>I tried different LLMs; phi-2 proved to be the best.</li> </ul>
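<p>The idea in steps 1-2 can be sketched as follows: score each essay with a pre-trained causal LM, turn the per-token predictive entropies and log-probabilities into a small feature vector, and fit a One-Class SVM on the human essays only. The model name follows the write-up (phi-2), but the specific summary statistics, the <code>nu</code> value, and the variable names (<code>human_train_texts</code>, <code>test_texts</code>) are illustrative assumptions, not the author's exact feature set.</p> <pre><code>import numpy as np
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
lm = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype=torch.float16).cuda().eval()

@torch.no_grad()
def entropy_features(text):
    ids = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024).input_ids.cuda()
    logits = lm(ids).logits[0, :-1]                        # next-token distribution at each position
    logp = torch.log_softmax(logits.float(), dim=-1)
    entropy = -(logp.exp() * logp).sum(-1)                 # per-token entropy of the LM's predictive distribution
    token_logp = logp.gather(-1, ids[0, 1:, None])[:, 0]   # log-probability of the tokens actually written
    e, l = entropy.cpu().numpy(), token_logp.cpu().numpy()
    return np.array([e.mean(), e.std(), np.quantile(e, 0.25), np.quantile(e, 0.75), l.mean(), l.std()])

# Fit only on the organizer-provided human essays; higher test scores = more anomalous = more likely AI.
X_human = np.stack([entropy_features(t) for t in human_train_texts])
scaler = StandardScaler().fit(X_human)
ocsvm = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(scaler.transform(X_human))
X_test = np.stack([entropy_features(t) for t in test_texts])
ai_scores = -ocsvm.decision_function(scaler.transform(X_test))</code></pre>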
Child Mind Institute - Detect Sleep States
🥉 146th place – 9 models 1h inference - Segmentation through UNet
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Short description of the solution below:</p> <p>As I only spent 1-3 days on modifying, training and tuning, the solution stayed at 9 models with 1h inference, but it could easily have been extended with more modifications and more models for a better score.</p> <p>For the problem I used the base code from</p> <ul> <li><a href="https://www.kaggle.com/code/tubotubo/cmi-submit" target="_blank">https://www.kaggle.com/code/tubotubo/cmi-submit</a></li> <li><a href="https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states" target="_blank">https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states</a></li> <li><a href="https://www.kaggle.com/code/tubotubo/cmi-code" target="_blank">https://www.kaggle.com/code/tubotubo/cmi-code</a></li> </ul> <p>The original code had resnet34 as the default, and that was also the public pretrained model. Based on that, I trained several different models with Spec2DCNN (segmentation through UNet): resnet18, efficientnet-b0 and resnext101_32x4d.</p> <p>Then I created the validation notebook below to get the validation score for some of the public pretrained models and also my own trained models.</p> <ul> <li><a href="https://www.kaggle.com/kirderf/eval-child-mind-institute-detect-sleep-states" target="_blank">https://www.kaggle.com/kirderf/eval-child-mind-institute-detect-sleep-states</a></li> </ul> <p>The final inference solution was a torch stacked mean ensemble of 9 models using 1h of test-time inference with the same duration, code below.</p> <ul> <li><a href="https://www.kaggle.com/kirderf/cmi-submit-9-models-1h-inference" target="_blank">https://www.kaggle.com/kirderf/cmi-submit-9-models-1h-inference</a>.</li> </ul> <p>With more time, the following would have been tested: different architectures, more models, augmentation, using different durations…</p> <p>That’s it!<br> Happy Kaggling!</p>
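<p>The "torch stacked mean ensemble" amounts to running every trained model on the same input window and averaging their per-step outputs; a minimal sketch (the model list, batch format and sigmoid activation are assumptions, not the exact notebook code):</p> <pre><code>import torch

@torch.no_grad()
def ensemble_predict(models, batch):
    # each model returns per-step logits of shape (batch, n_steps, n_events)
    preds = [m(batch).sigmoid() for m in models]
    # stack along a new "model" dimension and take the mean over the 9 models
    return torch.stack(preds, dim=0).mean(dim=0)</code></pre>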
LLM - Detect AI Generated Text
12th place solution: DeBERTa + TF-IDF
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers for hosting such an interesting competition.<br> The final solution used an ensemble of DeBERTa and TF-IDF.</p> <h2>Key point</h2> <ol> <li>TF-IDF uses the test dataset</li> <li>As broad a dataset as possible</li> </ol> <p>Let me share my thoughts on TF-IDF. This method only considers the n-grams from the test set, which is the set that needs to be predicted. This means that the n-grams we obtain from the test set are definitely clean and useful features. Using these features to train on the training set is both sound and efficient. From another perspective, this is tantamount to leaking data (features) from the test set. In Kaggle competitions, data leakage can be particularly useful, so there's no reason not to employ it.</p> <h2>Dataset</h2> <p>In the early stage, data was generated for each prompt in the Persuade corpus. However, when I trained the DeBERTa model on this data, I found that it performed poorly on the LB. Therefore, I gave up using this part of the dataset for training and used it as my local CV dataset. This competition made me think of Stable Diffusion - Image to Prompts, so I started considering trying to use a more extensive dataset.</p> <p>In the process of searching for datasets, I found that there were many datasets to choose from. After careful consideration, I selected the tiiuae/falcon-refinedweb dataset. The reason for choosing this dataset is mainly that it has undergone detailed filtering and performed better than other datasets in Falcon's paper.</p> <p>Because my goal was to find the most comprehensive data possible, I considered the entire text of the first chunk file in the dataset as the starting text pool. For the text in the other chunk files, I calculated the similarity (using BAAI/bge-base-en-v1.5) between the current text and all the text in the existing text pool. Only when the similarity is below the threshold (0.7) is the text added to the text pool.</p> <p>After length filtering (&gt;768 tokens), the final dataset contains approximately 600k documents.</p> <p>For each text, I randomly extract a part of the text. A piece of random length (128~384) at the beginning is taken as context guidance, while the remaining text is used as training data (generated=0). The extracted context is fed to different LLMs to continue writing the text based on the style of the context (generated=1).</p> <p>Based on my experience and observation, the quality of text generated by all models other than Mistral and Llama 2 is not very high, so I mainly used these two LLMs and their variants. </p> <p>Some LLMs generate very short sequences, and most of these short sequences are of low quality, so I added a length filter (&gt;170 tokens) afterwards.</p> <p>In the end, the LLMs generated approximately 410k documents, so the final dataset consists of 820k texts with a 1:1 ratio of positive and negative samples.</p> <p>For a given context, I have a real text and a text generated by an LLM, with the lengths of the two texts varying significantly. 
Therefore, during the training process of DeBERTa, the text is randomly truncated to randint(128, min(origin_len, LLM_len)), which ensures that the length distribution of texts with different labels is consistent during training, preventing the model from learning the irrelevant variable of text length.</p> <h2>Model</h2> <h3>deberta</h3> <ul> <li>model id: deberta-v3-large</li> <li>train &amp; infer length: 512</li> <li>epochs: 3</li> <li>head: mean-pooling</li> <li>LLRD</li> </ul> <h3>TF-IDF</h3> <p>copied from <a href="https://www.kaggle.com/code/batprem/llm-daigt-analyse-edge-cases" target="_blank">LLM DAIGT Analyse edge cases</a></p> <p>I attempted to make some modifications, but the improvement was not significant enough for me to adopt them. <br> There must be a huge difference between the distribution of the local data and the test set. In order to avoid overfitting on local data and to better utilize the test set, I decided not to make any modifications.</p> <h3>Ensemble</h3> <p>TF-IDF : deberta = 0.85 : 0.15</p> <p>The weights were determined based on the average of local and LB performance.</p> <h2>Abandoned ideas</h2> <p>The following ideas were evaluated after initial attempts and found to be possibly beyond my capabilities or to require a significant amount of computation, so they were abandoned halfway through. Perhaps there are better methods among these ideas, but I didn't have the time/GPU to try them out.</p> <ol> <li><p>Finetuning an LLM to mimic the style of Persuade</p> <p>After fine-tuning, I found that the quality of the LLM's generations decreased a lot, and they did not become harder for DeBERTa to distinguish, so I abandoned that approach.</p></li> <li><p>Adversarial learning</p> <p>Let the LLM generate data, have DeBERTa select difficult-to-distinguish hard samples, let the LLM continue training on the hard samples and generate again…</p> <p>After evaluating the required computation, I decisively gave up.</p></li> <li><p>Finetuning an LLM to classify</p> <p>It needs too much computing power. I sampled 82k from the 820k dataset to train Mistral-7b. Each epoch takes more than 10 hours on a 3090 Ti, but the score on the LB was only 0.642. I didn't have enough resources to keep trying.</p></li> </ol> <h2>What didn't work for me</h2> <ol> <li>soft labels</li> <li>text classification based on spelling errors</li> <li>fixing spelling errors; I tried many solutions, distance-based and neural-network-based, but none of them worked</li> <li>prompt engineering on deberta</li> </ol> <h2>Acknowledgements</h2> <p>Thanks to all the enthusiastic members of the Kaggle community for providing datasets.</p> <p>Thanks to <a href="https://www.kaggle.com/batprem" target="_blank">@batprem</a> for providing a high-scoring notebook. Special thanks to <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a> for providing interesting insights. </p>
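<p>The length-balancing truncation described above can be sketched as follows (the token lists and the pairing of a human text with its LLM continuation are assumptions about the data layout, not the author's exact code):</p> <pre><code>import random

def truncate_pair(human_tokens, llm_tokens, min_len=128):
    """Truncate both texts of a (human, LLM) pair that share a context to the same random length,
    so the label cannot be inferred from text length alone."""
    upper = max(min(len(human_tokens), len(llm_tokens)), min_len)
    target = random.randint(min_len, upper)
    return human_tokens[:target], llm_tokens[:target]</code></pre>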
LLM - Detect AI Generated Text
Learning log from the competition 🥈 - draft to be updated
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Just publishing what we learnt from the comp. Though we didn't select our best submission, there was a lot to learn from all the contributors of great notebooks, datasets and discussions. Some of our quick findings / things we should have done .. 😃</p> <p>To the team .. Thx bogo, <a href="https://www.kaggle.com/treesky" target="_blank">@treesky</a>, <a href="https://www.kaggle.com/alejopaullier" target="_blank">@alejopaullier</a> and Ravi for bearing with me.</p> <h2>1 <strong>Catboost overfitted the least on the tfidf model and linear models were surprisingly good</strong>.</h2> <p>Adding CatBoost helped the linear models the most; for the rest, we observed that SGD and MNB overfit a lot in some leave-one-prompt-out validations. The downside was a lower public LB score, but we should maybe have selected this one, which gave around 0.919-0.92 as our best private LB.</p> <p>Catboost: low on CV but balanced the ensemble<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F768166%2F4263f683bbb69672e69f35607a7b77c3%2FScreenshot%202024-01-22%20at%209.31.56PM.png?generation=1705977171010050&amp;alt=media"> </p> <p><strong>Experiment with the ["Exploring Venus", "The Face on Mars"] prompts left out for validation</strong></p> <pre><code># MNB and SGD configs</code></pre> <p><strong>LGB</strong></p> <pre><code># LGB with early stopping
# LGB with early stopping, looks better: 3000 iterations, 0.7800, 0.8000, 6743, learning_rate 0.00581909898961407, fixed random_state</code></pre> <p><strong>CAT</strong> low CV but doesn't overfit like SGD/MNB, which train very fast; <strong>so it was really not that good individually but did balance the overfitting from SGD and MNB</strong> </p> <pre><code># CatBoost config: iterations, subsample, learning_rate</code></pre> <h2>2 <strong>LOGODDS helped in ensemble</strong>:</h2> <p>We also observed that our second-best private sub was a log-odds ensemble, which we think was helpful in ensembling disparate models. Due to its low public LB, unfortunately we didn't select it.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F768166%2F133fc6d8976b1b4bbf92e329aea6811f%2FScreenshot%202024-01-22%20at%209.21.31PM.png?generation=1705976520756254&amp;alt=media"></p> <h2>3 <strong>distilbert was the best of our transformer-based models and LLMs didn't help, deberta a close second</strong>.</h2> <p>LLMs, especially Mistral, were miserable for us on both public and private. 
But distilbert, thanks to the public notebook and tuned by us, scored really well, <code>0.887</code> on the private LB, which we should have added to our ensemble.<br> <a href="https://www.kaggle.com/code/gauravbrills/inference-detectai-distilroberta" target="_blank">https://www.kaggle.com/code/gauravbrills/inference-detectai-distilroberta</a><br> This was trained on feedback data, drcat and several public PaLM, Mistral and Gemini datasets.</p> <p>Mistral was bad :( <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F768166%2F992a71f6ccf070f05c69f6a71d3dd635%2FScreenshot%202024-01-22%20at%209.29.52PM.png?generation=1705977084984276&amp;alt=media"></p> <h2>4 <strong>Stacking with pseudo labelling did work but we didn't persevere</strong></h2> <p>Initially, one of us tried a technique to rank the preds, pseudo-label the top and bottom, and do a stacking approach. This did give a considerable bump, but we did not persevere on it, and it proved to be the <code>secret sauce</code> in many top solutions. This was mainly because CAT was too slow, and hence we could not ensemble more models.</p> <h2>5 <strong>Tried paper approaches; a few worked, a few seem interesting to try</strong></h2> <p>We tried a few paper approaches. <br> Things from papers which worked: <br> -- cleaner for spaces from <a href="https://github.com/YuchuanTian/AIGC_text_detector/blob/13ab846c6540c37e1eb3091edb2f2b99682a3303/corpus_cleaning_kit.py" target="_blank">https://github.com/YuchuanTian/AIGC_text_detector/blob/13ab846c6540c37e1eb3091edb2f2b99682a3303/corpus_cleaning_kit.py</a><br> -- <strong>PHD intrinsic dimension</strong> seems really interesting; we could not finish testing it and may give it a try; not sure if anyone tried this <a href="https://arxiv.org/pdf/2306.04723.pdf" target="_blank">https://arxiv.org/pdf/2306.04723.pdf</a> </p>
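<p>On point 2, log-odds ensembling just averages the models in logit space, which puts differently calibrated models on a common additive scale before blending; a small sketch (weights and the clipping value are illustrative):</p> <pre><code>import numpy as np

def logodds_ensemble(prob_lists, weights=None, eps=1e-6):
    """Average per-model probabilities in log-odds space, then map back to a probability."""
    probs = np.clip(np.asarray(prob_lists, dtype=float), eps, 1 - eps)  # shape (n_models, n_samples)
    logodds = np.log(probs / (1 - probs))
    avg = np.average(logodds, axis=0, weights=weights)
    return 1.0 / (1.0 + np.exp(-avg))

# e.g. blended = logodds_ensemble([p_tfidf, p_catboost, p_distilroberta], weights=[0.5, 0.2, 0.3])</code></pre>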
NeurIPS 2023 - Machine Unlearning
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: NeurIPS 2023 - Machine Unlearning <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, thanks to Kaggle for hosting such an interesting competition, and to every Kaggler. It’s been a truly memorable experience for me, not only because of the active discussion zone, which provided interesting and useful directions for thought and practice, but also because of the meaningful topic of unlearning. I'm surprised to have this chance to introduce my solution here.<br> My solution is based on a 2-stage training method, which contains a one-epoch forgetting stage on the forget dataset and an 8-epoch adversarial fine-tuning stage on both the forget dataset and the retain dataset. Here are the details:</p> <h2>Overview:</h2> <p>(1). In order to let the model initially forget in the logits space, we optimize the KL-divergence between the output logits and a uniform pseudo-label.<br> (2). An adversarial fine-tuning stage: <br> <strong>Forget Round:</strong><br> Instance-scale SCL (self-supervised contrastive learning) can be a beneficial method when facing unsupervised learning tasks such as clustering feature embeddings. Self-supervised contrastive learning aims to pull positive samples and their augmented versions closer together, and to push positive samples further away from all other samples. My assumption is that we need to make the forget samples as uniform as possible in feature space, which can be seen as pushing any forget sample further away from all retain samples. Hence, I proposed two kinds of contrastive loss here:</p> <ol> <li><p>For a forget sample x, use its augmented version x’ as the positive pair, and use all samples in the retain batch as negative pairs. The final loss formula can be written as this: <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9888178%2F5ad52ea1d39d433629469ce7663e740b%2FKER3I(PZHAPA4Q0CBUN.png?generation=1701420464802101&amp;alt=media" alt=""></p></li> <li><p>Each retain sample has the same probability of being chosen as the positive sample for any forget sample, which means they all have the same weight in the contrastive loss formula. The final loss formula can be written as this:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9888178%2Fa49dca72b4fc934c0ba13cb732080025%2FE8Q9FVJMZ2IQYR7FYX.png?generation=1701414982494286&amp;alt=media" alt=""><br> I used the second loss function in the final submission version (because the first loss function did not perform as well as the second in preliminary tests, I gave up on that idea due to the time limitation). 
I set the temperature coefficient t=1.15 here.<br> The forget training round code is here:</p></li> </ol> <pre><code># Reconstructed from the stripped snippet; the model variable (net) and batch key ("image") are assumptions.
for sample_forget, sample_retain in zip(forget_loader, retain_ld4fgt):
    t = 1.15  ## temperature coefficient
    inputs_forget, inputs_retain = sample_forget["image"], sample_retain["image"]
    inputs_forget, inputs_retain = inputs_forget.to(DEVICE), inputs_retain.to(DEVICE)
    optimizer_forget.zero_grad()
    outputs_forget, outputs_retain = net(inputs_forget), net(inputs_retain).detach()
    loss = (-1 * nn.LogSoftmax(dim=-1)(outputs_forget @ outputs_retain.T / t)).mean()  ## weighted CL loss
    loss.backward()
    optimizer_forget.step()</code></pre> <p>Due to the different numbers of steps in forget_loader and retain_loader (here referred to as retain_ld4fgt, a new dataloader for contrastive learning), there is an unexpected random shuffle at different checkpoints, which can be viewed as an ensembling method.<br> <strong>Retain Round:</strong><br> To enhance performance on the retain dataset, fine-tuning on it is unavoidable. Here, I simply retrain the model using cross-entropy loss.</p> <h2>Some useful tricks:</h2> <ol> <li>Actually, because of the competition time limit, the number of epochs could be at most 6 when the retain batch size is 64 (this conclusion continues to hold even for adversarial training, because the forget round takes little time compared with the retain round). I tried to add more epochs by increasing the batch size of the retain dataloader and the learning rate. Results show that eight epochs of training with a batch size of 256 performs best on the public leaderboard.</li> <li>CosineAnnealingLR could also be useful in forget rounds (simply adding this, the public LB score increased from 0.084 to 0.091).</li> </ol> <h2>Some directions I have not investigated deeply:</h2> <ol> <li>Add more forget rounds in an epoch. For example, to intensify the adversarial stage (as a fiercer confrontation may enhance forgetting performance), adding another forget round could increase the total steps of forgetting.</li> <li>Replace the first stage with a contrastive learning loss.</li> </ol> <h2>Conclusion:</h2> <p>During the competition, my thinking went through multiple stages, and at each stage I had a different understanding of the competition and its tasks. In fact, many excellent notebooks and discussions in the forum have been of great help to me. For example, <a href="https://www.kaggle.com/Maria" target="_blank">@Maria</a> Gorinova's <a href="https://www.kaggle.com/competitions/neurips-2023-machine-unlearning/discussion/454949" target="_blank">evaluation notebook</a> is an excellent piece of work that helps you quickly understand the evaluation metrics. I have also seen many vastly different ideas on the forum. I believe that each idea has its own merits, but the competition settings are limited and idealized. While high-scoring methods may have limitations in practical scenarios, their underlying innovative approaches are universally applicable. I always look forward to seeing any creative solution.</p> <p>Solution link: <a href="https://www.kaggle.com/code/fanchuan/2nd-place-machine-unlearning-solution?kernelSessionId=153137657" target="_blank">2nd place Solution</a></p>
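<p>For completeness, the first stage (forgetting in logit space) can be sketched as a KL-divergence against a uniform pseudo-label; the helper below reuses the placeholder names from the snippet above (net, forget_loader, DEVICE) and is an illustrative sketch rather than the author's exact code:</p> <pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

kl_loss = nn.KLDivLoss(reduction="batchmean")

def forgetting_stage(net, forget_loader, optimizer, num_classes, device):
    """Stage 1: push the model's predictions on the forget set toward a uniform distribution."""
    net.train()
    for sample in forget_loader:
        inputs = sample["image"].to(device)  ## batch key assumed, as above
        optimizer.zero_grad()
        log_probs = F.log_softmax(net(inputs), dim=-1)
        uniform = torch.full_like(log_probs, 1.0 / num_classes)  ## uniform pseudo-label
        loss = kl_loss(log_probs, uniform)
        loss.backward()
        optimizer.step()</code></pre>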
AI Village Capture the Flag @ DEFCON31
1188th Place Solution for the AI Village Capture the Flag @DEFCON31 Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: AI Village Capture the Flag @ DEFCON31 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I will be honest, I had no idea how to progress in this competition. I was into machine learning but had never even heard of cybersecurity or CTFs on ML models. In short, I was new and learned a lot.</p> <p>I could not complete any of the challenges except the test one, whose solution the host provided. In this discussion, I will provide links on getting started with or casually exploring cybersecurity in ML for beginners like me who didn't understand anything. I will also list a very short analysis of how the top solutions solved the challenges. Please take them as hints if you get stuck somewhere, and don't just implement them without understanding the concept.</p> <p>Must-watch links:</p> <ul> <li><a href="https://www.youtube.com/watch?v=a92SWvqgMCE" target="_blank">AI poisoning</a></li> <li><a href="https://www.bucks.edu/media/bcccmedialibrary/con-ed/itacademy/fos2022/Casey-AI-Poisoning-pdf.pdf" target="_blank">PPT on AI poisoning</a></li> <li><a href="https://medium.com/swlh/how-to-fool-artificial-intelligence-fcf230bf37e" target="_blank">Fooling AI article</a></li> </ul> <p>HINTS SECTION (try all challenges first, only then read)</p> <ul> <li>Test - answer given in the demo notebook</li> <li>Clusters 1 - Try to find a high-scoring numerical value in the dataset and use it as a key to get the flag.</li> <li>Clusters 2 - The count of clusters is asked. One easy solution is to just check all values from 1 … solution. Credits: 3rd place solution.</li> <li>Cluster 3 - Use the tsne_model with config tsne_model = TSNE(perplexity=17, n_components=2, init='pca', n_iter=2500, random_state=23) and plot the output as a scatterplot with annotations.</li> <li>MNIST - The count of pixel values just throws the solution at you.</li> <li>Count CIFAR - Unsolved</li> <li>Granny 1 - Difficult challenge (I would suggest solving it last). The hint would be to use a model that gives similar output to the given model (ultra hint: modified MobileNetV2), then use the Basic Iterative Method / FGSM.</li> <li>Granny 2 - Similar to Granny 1 - Basic Iterative Method / FGSM again. Refer to the 3rd place solution <a href="https://www.kaggle.com/code/mrmldjr/granny-1-and-2-solutions-ai-village-ctf/" target="_blank">here</a></li> <li>Granny 3 - Unsolved</li> <li>Passphrase - Similar to Granny: find a model that gives similar output to the example. Solution <a href="https://www.kaggle.com/code/mrmldjr/passphrase-solution-ai-village-ctf-defcon31" target="_blank">here</a></li> <li>Pixelated - Try sending the server different HTML-like queries as input; you will get the answer in a few tries.</li> <li>Spanglish - Send the flag as input ;D</li> <li>Hush - The answer is a quote. Try giving multiple words/letters as input to pyttsx3.</li> <li>Pirate Flag - Multiple solutions; try asking the model directly for the solution as if asking another human.</li> <li>semantle - Try a bunch of English words; if you get a good score, try synonyms. 
Ans = rocks in the sky.</li> <li>semantle 2 - Try the same solution with 5-word sets.</li> <li>inversion - A bit difficult, but it can be solved using the guide <a href="https://www.kaggle.com/code/cdeotte/solutions-d3fc0n-ctf-lb-0-894#Inference-(id:-inference)----400-Points" target="_blank">here</a>. If you could not solve it, hint: "We then take the first column response[:,0] and reshape this into a 32x32 image. This is our first letter! We display all 8 letters below". Still difficult? Find the solution <a href="https://www.kaggle.com/code/mrmldjr/inversion-solution-ai-village-ctf-defcon31" target="_blank">here</a></li> <li>Flag 1-6 - Prompt injection. Ask the model for the flag.</li> <li>guess who's back - Just give a punchline from after the phrase "guess who's back".</li> <li>Pickle - Change the pickled RCE object to a safe pickle.</li> <li>What's my IP? - Change the IP of <a href="mailto:joe@simpson.com">joe@simpson.com</a>.</li> <li>What's my IP? 2 - Find the whole set this time.</li> </ul> <p>This concludes all the found solutions. <br> Special reference:</p> <ul> <li><a href="https://www.kaggle.com/code/asalhi/ctf-23-flags-solution" target="_blank">Solutions Notebook 1</a></li> <li><a href="https://www.kaggle.com/competitions/ai-village-capture-the-flag-defcon31/discussion/454720" target="_blank">Solution Place 3</a></li> </ul> <p>Best of luck hacking!!</p>
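<p>For the Granny hints above, the Basic Iterative Method is just FGSM applied repeatedly with a small step size and an epsilon-ball projection, run against a surrogate model (e.g. a MobileNetV2 variant) that mimics the target. A minimal targeted sketch; the step sizes, iteration count and surrogate model are illustrative assumptions:</p> <pre><code>import torch
import torch.nn.functional as F

def basic_iterative_method(model, image, target_class, eps=8/255, alpha=1/255, steps=20):
    """Iteratively nudge the image toward target_class on a surrogate model, staying within an eps-ball."""
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), torch.tensor([target_class], device=adv.device))
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() - alpha * grad.sign()           # step toward the target class
        adv = image + torch.clamp(adv - image, -eps, eps)  # project back into the eps-ball around the original
        adv = adv.clamp(0, 1)
    return adv.detach()</code></pre>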
Child Mind Institute - Detect Sleep States
159th solution 🥉
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I started working on this competition just after <a href="https://www.kaggle.com/tubotubo" target="_blank">@tubotubo</a> published his solution, to learn how to solve this kind of time series problem.<br> I learned a lot from his approach and appreciate him sharing great ideas!</p> <p><a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/452940" target="_blank">https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/452940</a></p> <p>My solution is basically the same as the referenced one.<br> Here are my ideas to improve the score.</p> <h1>Continuous time features</h1> <p>The original code uses hour_sin/hour_cos features to tell the model what time of day it is looking at.<br> I changed this feature to seconds in a day, so the feature has a continuous value rather than changing abruptly at hour boundaries. This boosted my CV score by about ~0.01.</p> <h1>Smaller threshold and larger distance of the peaks</h1> <p>Considering the competition metric, I changed the post-processing parameters.</p> <ul> <li>peak distance: 100 steps</li> <li>peak threshold: 0.005</li> </ul> <p>The larger peak distance decreases false positives that are too close to each other.<br> The smaller peak threshold increases true positives without adding false positives that affect the MAP metric (additional peaks with relatively small scores are never counted as TP/FP before the higher peaks when calculating the MAP score, so setting a smaller threshold is a reasonable strategy.)</p> <h1>Excluding predictions in periodic times</h1> <p>There are periodic time spans without any labeled events in the training data.<br> Predictions in these periods should be removed so that the number of FPs decreases.<br> I compared the series features around each predicted event with those of the preceding and following day, and excluded the event if it was identical to at least one of them.<br> This had little effect on my CV score but boosted the public score by about +0.007 (private: +0.016)!</p> <h1>etc.</h1> <p>Actually I had no idea how to improve the model architecture, which is already sophisticated, so I worked on minor changes like post-processing. I will catch up on the high-rankers' solutions and learn how to construct models!</p>
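<p>The peak post-processing above maps directly onto scipy's find_peaks with the stated parameters; the probability array and the helper names below are assumptions for illustration:</p> <pre><code>import numpy as np
from scipy.signal import find_peaks

def extract_events(prob, distance=100, height=0.005):
    """Turn a per-step event probability curve into (step index, score) candidates."""
    peaks, props = find_peaks(prob, distance=distance, height=height)
    return list(zip(peaks.tolist(), props["peak_heights"].tolist()))

def time_of_day_features(seconds_in_day):
    """Continuous time-of-day feature: seconds since midnight, sin/cos encoded so it wraps smoothly."""
    angle = 2 * np.pi * np.asarray(seconds_in_day) / 86400.0
    return np.sin(angle), np.cos(angle)</code></pre>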
LLM - Detect AI Generated Text
5th place solution: 1.7 million training examples + domain adaptation
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the competition organizers for creating such an interesting competition! I learned a lot. And congrats to the other winners!</p> <p>Here's an explanation of the 5th place solution…</p> <h1>Overview</h1> <p>Finetuned <a href="https://huggingface.co/microsoft/deberta-v3-large" target="_blank">deberta-v3-large</a> and <a href="https://huggingface.co/state-spaces/mamba-790m" target="_blank">mamba-790m</a> on varying mixtures of the following datasets:</p> <table> <thead> <tr> <th>Dataset</th> <th># Human documents</th> <th># Generated documents</th> </tr> </thead> <tbody> <tr> <td>PERSUADE essays</td> <td>25,996</td> <td>327,268</td> </tr> <tr> <td>Uncopyrighted Pile Completions</td> <td>512,371</td> <td>512,371</td> </tr> <tr> <td>SlimPajama Completions</td> <td>233,146</td> <td>233,146</td> </tr> <tr> <td>Tricky Crawl</td> <td>125,192</td> <td>0</td> </tr> </tbody> </table> <p></p>Exact sampling proportions varied across models in the ensemble, but a reoccurring pattern is that everything except the Pile data was undersampled. The best models used 62 - 99% Pile data. It was our highest quality dataset. The Persuade essays were always 1% of the data mixture.<p></p> <p>Then, at test time, our inference notebook does the following:</p> <ol> <li><strong>Teacher model inference:</strong> The test dataset is labeled with 1 DeBERTa model + 2 Mamba models with a context length of 1024 tokens. Final soft labels are a weighted average with 90% of the weight given to the DeBERTa model. <strong>The use of Mamba hurt us here, we would have gotten 3rd place (0.977) without it.</strong></li> <li><strong>Student model training:</strong> Finetune 2 "short context student" models to imitate the teacher ensemble's predictions based on short randomly selected chunks of the test documents. One student model uses a context length of 128 <em>characters</em> (~32 tokens), the other uses 256 characters. These student models are pretrained on 1 million documents from the datasets above, mostly Pile data, and are both deberta-v3-large based.</li> <li><strong>Student model inference:</strong> The finetuned student models make predictions on overlapping chunks of the test documents with a stride equal to half their context length. Final predictions for each document are the average of all these predictions, with 60% of the weight given to predictions from the model with a context length of 128 characters, 40% to the one with a context length of 256 characters.</li> </ol> <p>My intuition for why this domain adaptation strategy is beneficial is that the short-context student models learn to look for dataset-specific wording quirks that are correlated with whatever the long-context models look for in the broader documents. 
This strategy was primarily inspired by <a href="https://arxiv.org/abs/2002.05709" target="_blank">SimCLR</a>, but in hindsight I realize it is also similar to <a href="https://arxiv.org/abs/1911.04252" target="_blank">Noisy Student Training</a> and <a href="https://arxiv.org/abs/1904.12848" target="_blank">UDA</a></p> <p>On its own, the full-context DeBERTa model we used scores 0.970 on the private LB, the Mamba models score ~0.957. The final student models score 0.977 without Mamba, 0.972 with Mamba. We didn't pick the best submission as one of our final 3, so we got 5th place instead of 3rd. Also, the DeBERTa model we used wasn't the best… we had one score 0.976 on its own, without domain adaptation.</p> <p><strong>Links:</strong></p> <ul> <li><strong>Inference/domain adaptation notebook:</strong> <a href="https://www.kaggle.com/jsday96/multi-context-students/" target="_blank">https://www.kaggle.com/jsday96/multi-context-students/</a></li> <li><strong>Data:</strong> <a href="https://www.kaggle.com/datasets/jsday96/ai-content-detection" target="_blank">https://www.kaggle.com/datasets/jsday96/ai-content-detection</a></li> <li><strong>Local training &amp; data generation code:</strong> <a href="https://github.com/jday96314/AiContentDetection" target="_blank">https://github.com/jday96314/AiContentDetection</a></li> </ul> <h1>Data</h1> <h2>Things all datasets had in common</h2> <ul> <li>Ran <a href="https://github.com/vllm-project/vllm" target="_blank">vLLM</a> on 2x RTX 3090s + 1x RTX 4090 to generate examples of AI-authored text.</li> <li>Used 4-bit AWQ quantization to conserve GPU memory for any model with &gt;= 13 billion parameters.</li> <li>Sampling temperatures randomly varied from 0 - 2 (used different randomly selected temperature for each vLLM API request).</li> <li>Top-K filter randomly selected from [disabled, 20, 40] for each vLLM API request.</li> <li>Top-P filter randomly varied from 0.5 - 1 for each vLLM API request.</li> <li>Frequency penalty randomly varied from 0 - 0.5 for each vLLM API request.</li> </ul> <h2>Uncopyrighted Pile Completions</h2> <p>The general strategy for creating this dataset was to randomly select documents from <a href="https://pile.eleuther.ai/" target="_blank">The Pile</a>, randomly truncate them, and prompt locally hosted LLMs to generate plausible continuations for the last 25 - 75% of each document. I initially used an old version of the pile which contains copyrighted data and is no longer publicly available, but filtered it to only contain document completions based on subsets of The Pile which are free of copyright restrictions and are <a href="https://huggingface.co/datasets/monology/pile-uncopyrighted" target="_blank">still publicly available</a>. </p> <p>Document completions were generated with the following models. 
Document counts below are from after the filtering which ran as a post-processing step (and discarded ~16% of the data if I recall correctly).</p> <table> <thead> <tr> <th>Model</th> <th>Document count</th> </tr> </thead> <tbody> <tr> <td>Airoboros-L2-13B-2.1-AWQ</td> <td>46,358</td> </tr> <tr> <td>CodeLlama-34B-AWQ</td> <td>29,885</td> </tr> <tr> <td>falcon-7b</td> <td>11,178</td> </tr> <tr> <td>Llama-2-13B-AWQ</td> <td>61,531</td> </tr> <tr> <td>Llama-2-13B-chat-AWQ</td> <td>43,145</td> </tr> <tr> <td>Llama-2-70B-AWQ</td> <td>31,297</td> </tr> <tr> <td>Mistral-7B-Instruct</td> <td>46,825</td> </tr> <tr> <td>Mistral-7B-v0.1</td> <td>61,851</td> </tr> <tr> <td>mpt-7b</td> <td>12,232</td> </tr> <tr> <td>OpenHermes-2.5-Mistral-7B</td> <td>21,221</td> </tr> <tr> <td>StableBeluga2-70B-AWQ</td> <td>21,046</td> </tr> <tr> <td>WizardCoder-Python-34B-V1.0-AWQ</td> <td>41,716</td> </tr> <tr> <td>WizardLM-70B-V1.0-AWQ</td> <td>41,979</td> </tr> <tr> <td>zephyr-7b-beta</td> <td>42,093</td> </tr> </tbody> </table> <p></p> For foundation models, the prompts were just document prefixes. For models that were instruction or chat tuned, I randomly broke the document prefixes into two parts, used the first part to form "user" instructions requesting a plausible continuation to the document and used the second part as "leading words" in a part of the token sequence that would ordinarily be filled in by the model (after something like a "### Response:" tag which varies a bit depending on what format the model was trained to expect). The leading words help to ensure the models generate plausible looking document completions, without any weird prefixes at the start of their responses or refusals.<p></p> <p>This data was generated in 2 stages. My observations from training models on the early data and the changes I made for the last ~15% are described below.</p> <ul> <li>Models trained on my initial data were worst at identifying text generated with a sampling temperature close to 1. In response, I switched from picking the sampling temperature from a uniform distribution ranging from 0 - 2, to instead using a gaussian distribution centered at 1 with a standard deviation of 0.2, clipped to be in the range 0 - 2. This helped to concentrate more of my hardware resources on the tricky data my classification models were struggling with.</li> <li>My models were best at identifying AI generated text related to the medical subsets of The Pile (PubMed Central &amp; PubMed Abstracts). As a result, I figured adding more medical related text to the training dataset would be a waste of hardware resources and started filtering it out (before the LLM is prompted, not as a post-processing step like what I did to discard the data with copyright issues).</li> <li>My models were worst at identifying AI generated code. This data didn't seem too relevant to the competition, so I started filtering out everything from the GitHub slice of The Pile, similar to what I did for PubMed.</li> </ul> <h2>SlimPajama Completions</h2> <p>The way I created this dataset is very similar to what I did for the the last of the Pile completions, just with source documents from <a href="https://huggingface.co/datasets/cerebras/SlimPajama-627B" target="_blank">SlimPajama</a> instead of The Pile and a slightly different combination of models. This data was generated somewhat preemptively to hopefully mitigate any accuracy lost from filtering out the problematic (copyrighted) parts of The Pile. 
It didn't really help on the public LB, but might have on the private LB - I'd need to look back at the scores more closely. </p> <p>The mixture of models I used to create this dataset is outlined below.</p> <table> <thead> <tr> <th>Model</th> <th>Document count</th> </tr> </thead> <tbody> <tr> <td>airoboros-l2-70B-gpt4-1.4.1-AWQ</td> <td>23,399</td> </tr> <tr> <td>deepseek-coder-33B-base-AWQ</td> <td>20,718</td> </tr> <tr> <td>dolphin-2.6-mistral-7b</td> <td>24,168</td> </tr> <tr> <td>Mistral-7B-Instruct-v0.2</td> <td>24,813</td> </tr> <tr> <td>Mistral-7B-v0.1</td> <td>24,618</td> </tr> <tr> <td>mixtral-8x7b-v0.1-AWQ</td> <td>24,609</td> </tr> <tr> <td>Mixtral-8x7B-Instruct-v0.1-GPTQ</td> <td>24,581</td> </tr> <tr> <td>Nous-Hermes-2-SOLAR-10.7B-AWQ</td> <td>22,833</td> </tr> <tr> <td>Nous-Hermes-Llama2-AWQ</td> <td>18,741</td> </tr> <tr> <td>SOLAR-10.7B-v1.0-AWQ</td> <td>24,623</td> </tr> </tbody> </table> <p></p>One concern I had about SlimPajama (and why I didn't use it initially) is that it contains text from the internet that was collected <em>after</em> the release of ChatGPT, so there might be some AI-generated text mislabeled as "human" text when I train on it. It is plausible this might be why it seemed to be lower quality than The Pile data.<p></p> <h2>PERSUADE essays</h2> <p>I generated a bit over 300k essays for the same assignments as the human essays in the PERSUADE corpus. These essays were divided into 3 subsets with varying generation strategies:</p> <ul> <li><strong>Basic:</strong> ~137k essays using variety of prompting strategies. 0-shot or 1-shot with ~140 different sets of instructions used to alter the LLM's writing style. Used a combination of 11 different models.</li> <li><strong>Student Imitator:</strong> In an attempt to make the AI-generated essays harder to identify, I finetuned Mistral 7B and Llama 2 13B to imitate the writing style of the students in the PERSUADE corpus. I used the resulting models to generate ~237k essays, ~177k of which were used for training and cross-validation purposes. The rest were discarded due to issues during finetuning that caused the LLMs to output incoherent text.</li> <li><strong>Adversarial:</strong> I adversarially generated ~12k essays that confuse a mixture of 17 "victim classifiers". The main strategy for generating these essays to back-track and try again whenever the in-progress essay starts to sound "too AI generated". However, some LLMs are seemingly incapable of generating introductions that fool my classifiers for certain assignments, so many of these essays begin with authentic human text in order to "get past the intro". However, even with that in place these essays were still ~100x more computationally expensive to generate than "normal" essays, so I don't have many of them.</li> </ul> <p>Unfortunately, these essays were not very useful in comparison to the Pile data. My classifiers seem to rapidly overfit to it and need more diverse data to generalize well.</p> <p>I feel the exact combinations of models &amp; instructions I used would be a bit too much to include here, so I suggest looking at the dataset I linked to in the introduction &amp; my data generation code for details.</p> <h2>Tricky Crawl</h2> <p>This was created by filtering the Common Crawl subset of The Pile (Pile-CC) to pull out human-authored text that was misclassified as AI generated by one of 2 moderately strong general-purpose AI content detection models. 
One of the classifiers was based on deberta-v3-base, the other on deberta-v3-xsmall.</p> <p>The version of this dataset used to train the models in the final submission was created by filtering 1.5 million documents to pull out the ~125k most confusing to the victim classifiers. </p> <p>The general idea behind this dataset was to reduce the number of false positives by boosting the amount of tricky human text in the training dataset. Mixing this into the training dataset with a 5% sampling proportion (so 5% of the training examples are from this dataset) seemed beneficial for the short-context models, but didn't make much difference for the full-context ones. </p> <h1>Data augmentation</h1> <p>The competition organizer's data seems to be a bit corrupted, so the models in our solution were trained with the following data augmentation steps to make them more robust.</p> <ul> <li><strong>Buggy spell check</strong> similar to the implementation described by <a href="https://www.kaggle.com/deltawi" target="_blank">@deltawi</a> in <a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/456142" target="_blank">this discussion thread</a>. However, I used a regex which seems to handle ' characters in a way more consistent with the competition data and used <a href="https://github.com/bakwc/JamSpell" target="_blank">jamspell</a> instead of pyspellchecker because jamspell is faster. This executes with 70% probability for persuade documents, 20% for all other datasets.</li> <li><strong>Blacklisted character removal</strong> with the same blacklist suggested by <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> in <a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/452172" target="_blank">this discussion thread</a>. This executes with 70% probability for persuade documents, 20% for all other datasets.</li> <li><strong>Typos are added.</strong> More specifically…<ul> <li>All types of typos supported by the <a href="https://github.com/ranvijaykumar/typo" target="_blank">typo library</a> have 2 opportunities to be added to each training example, with 10% probability each time.</li> <li>Capitalization of random characters is flipped: 2 opportunities to decapitalize an upper case letter, 1 opportunity to capitalize a lower case letter. All these "opportunities" for typos to be added have a 10% chance of occurring for each document.</li></ul></li> </ul> <h1>Mamba vs. DeBERTa</h1> <h2>Efficiency</h2> <p><code>mamba-790m</code> is roughly as fast as <code>deberta-v3-large</code> and consumes less GPU memory during training despite having over 2x as many parameters. It seems to have some major efficiency advantages due to being a structured state space model instead of a transformer.</p> <h2>Accuracy (and stability problems)</h2> <p><code>mamba-790m</code> did <em>almost</em> as well as DeBERTa during local CV on the Pile data (0.001 - 0.002 lower) and <em>almost</em> as well on the public LB (~0.003 lower than an "unlucky" DeBERTa run, ~0.006 lower than a "good" DeBERTa run). The main difference arose on the private LB. Mamba's score stayed roughly the same from public to private while most DeBERTa models jumped by ~0.01. 
As a result, Mamba seemed to be slightly useful on the public LB when included in an ensemble with low weight, but wound up dragging us down on the private LB.</p> <p>I think the weakness of Mamba here wasn't really due to architectural flaws or because structured state space models are inherently inferior to transformers, but rather because this was my first time using Mamba and I rushed to train it in the last 3-4 days of the competition. Additionally, the <a href="https://github.com/state-spaces/mamba" target="_blank">mamba-ssm</a> library seems to be primarily intended for text generation, not classification, so it isn't super straightforward to use for workloads like this.</p> <p>To elaborate on the <strong>mistakes I made</strong> when training it, I generally initialized the Mamba model like</p> <pre><code># reconstructed from context; the exact call and arguments may differ
model = MambaLMHeadModel.from_pretrained("state-spaces/mamba-790m")
model.lm_head = nn.Linear(model.config.d_model, 2)  # replace the LM head with a 2-class head</code></pre> <p>which causes it to output predictions with shape (batch size, token count, 2). During early training runs, I then used the output from the last token in each sequence as class label predictions using an operation like</p> <pre><code>class_label_predictions = output.logits[:, -1, :]</code></pre> <p>The problem with this approach is that the token sequences are padded and the padding was impacting the predictions. Getting predictions from the last token before the start of padding with an operation like</p> <pre><code># index of the last non-padded token in each sequence (reconstructed; helper calls may differ slightly)
last_token_indices = torch.clamp(attention_masks.sum(dim=1) - 1, min=0)
raw_predictions = torch.gather(
    logits, dim=1,
    index=last_token_indices.unsqueeze(-1).unsqueeze(-1).expand(-1, -1, logits.shape[-1])
).squeeze(1)</code></pre> <p>seemed to increase the optimal learning rate by a factor of roughly 8x and allowed for much faster convergence to a more accurate model during relatively small-scale training runs with a single epoch over "only" 100,000 example documents. However, it was unstable when I attempted to scale up to more data with a learning rate that high. I didn't really have time to train it properly after figuring this out. As a result, the 2 Mamba models used by the 5th place solution were trained using the following non-ideal setups:</p> <ul> <li>1.25 million training documents with the original logit selection approach ([:, -1, :]) and a relatively low learning rate (2e-6 with a batch size of 4). This training run executed in the background while I used other GPUs to figure out how to train Mamba "properly".</li> <li>350,000 training documents with the max lr set to a problematically high value (1.6e-5) for the first ~70% of training, followed by repeated issues in which the loss became nan and the model started outputting garbage, thereby requiring repeated restarts from old checkpoints with much lower learning rates. I wound up manually dropping the learning rate all the way down to 5e-7 in the last 30% of training and did not have time to train it on as much data as most other models. I have a suspicion I would have gotten better results if I started the run from the beginning with a well-configured learning rate schedule and used a larger amount of data.</li> </ul>
I suspect this is because DeBERTa uses non-standard <code>StableDropout</code> layers that appear to do some sort of internal normalization that the model was learning to rely on, thereby forcing me to keep dropout enabled for best results. Needless to say, randomly dropping-out connections in the model at test time isn't good for accuracy. I got better results with dropout disabled both at training time and at test time, even in comparison to averaging across multiple forward passes through a model with dropout enabled.</li> <li>The <code>mamba-ssm</code> library doesn't support dropout.</li> </ul> </li> <li><strong>Learning rate schedule:</strong> Linear warm-up followed by linear decay. Generally warmed up for the first 5% of training during the local training runs, 30% during domain adaptation at test time. <strong>Longer warmup during domain adaptation seemed to make it more stable</strong>, with good results more consistently, but I didn't experiment with this much.</li> <li><strong>Batch size:</strong> For the models with a context length of 1024 tokens, I used a batch size of 2 for DeBERTa, 4 for Mamba, primarily because <i>Mamba is more memory efficient</i>. I generally used a batch size of 16 for the shorter context models used in domain adaptation (32 - 96 tokens).</li> <li><strong>Data mixtures:</strong> <ul> <li><strong>DeBERTa model with 1024 token context:</strong> 99% Pile, 1% Persuade <ul> <li>The other datasets didn't seem to help on the public LB with long-context models, but our best single model on the private LB scored 0.976 with the data mixture used for the final submission's shorter-context DeBERTa models.</li> </ul> </li> <li><strong>Short-context "student" DeBERTa models:</strong> 62% Pile, 32% Pajama, 5% Tricky Crawl, 1% Persuade</li> <li><strong>Mamba:</strong> 77% Pile, 20% Pajama, 2% Tricky Crawl, 1% Persuade</li> </ul> </li> <li><strong>Data volumes:</strong> Typically just did a single epoch with some subsets of the available data undersampled. <ul> <li><strong>Full-context DeBERTa:</strong> 1.4 million training documents (too impatient to wait &gt;2 days for it to train on more than this and doubt it would help much without an increase in <i>data diversity</i>)</li> <li><strong>Short-context DeBERTa:</strong> 1 million training documents (these models are finetuned at test time, so I didn't see any benefit pretraining longer)</li> <li><strong>Mamba:</strong> 350k - 1.25M training documents (would have trained on more if I had time - see section about DeBERTa vs. 
Mamba for rant about how this was rushed and non-ideal)</li> </ul> </li> </ul> <h1>Alternatives we experimented with</h1> <ul> <li>1D conv ResNets (scored &lt;= 0.87 on the LB, but EXTREMELY fast, ~10 minutes to evaluate on the leaderboard without GPU acceleration - might have scored better if I scaled it up, but DeBERTa gave better results right off the bat)</li> <li>Training deep learning based solutions using only Persuade related data (they didn't generalize well to the LB data - scaling up to bigger transformers doesn't help with this)</li> <li>Ensembling DeBERTa models with tf-idf (scored well on the public LB but got wrecked during the shakeup) </li> <li>Smaller DeBERTa models (they don't score as well)</li> <li>Just using models with smaller context windows (they don't score as well)</li> <li>Tried an iterative pseudolabeling strategy similar to the one used in <a href="https://www.kaggle.com/asalhi" target="_blank">@asalhi</a>'s 21st place solution (<a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/470148" target="_blank">https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/470148</a>). Discarded this due to big random score variations that seemed to depend on how the training/test data is shuffled. The domain adaptation strategy used in our 5th place solution scores well more consistently. A generic sketch of this kind of pseudolabeling loop is shown below.</li> </ul>
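<p>To make the iterative pseudolabeling idea in the last bullet a bit more concrete, here is a generic, hedged sketch of such a loop. This is my own illustration rather than the referenced implementation; the vectorizer, classifier, number of rounds, and confidence threshold are all placeholders.</p> <pre><code>import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def iterative_pseudolabel(train_texts, train_labels, test_texts, rounds=3, conf=0.9):
    # Fit features on the labeled data only; everything below is illustrative
    vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5), min_df=2)
    X_train = vec.fit_transform(train_texts)
    X_test = vec.transform(test_texts)
    y = np.asarray(train_labels, dtype=int)

    proba = None
    for _ in range(rounds):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X_train, y)
        proba = clf.predict_proba(X_test)[:, 1]
        # Treat very confident test predictions as extra (pseudo-labeled) training data
        confident = (proba &gt; conf) | (proba &lt; 1 - conf)
        X_train = vstack([vec.transform(train_texts), X_test[confident]])
        y = np.concatenate([np.asarray(train_labels, dtype=int),
                            (proba[confident] &gt; 0.5).astype(int)])
    return proba
</code></pre> <p>The instability mentioned above typically comes from which test documents happen to cross the confidence threshold in the first round, which then compounds over later rounds.</p>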
UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN)
13th Place Solution for the UBC-OCEAN Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all we would like to thank Kaggle and the sponsors for this interesting research competition.</p> <h1>Context</h1> <ul> <li>Business context: <a href="https://www.kaggle.com/competitions/UBC-OCEAN/overview" target="_blank">https://www.kaggle.com/competitions/UBC-OCEAN/overview</a></li> <li>Data context: <a href="https://www.kaggle.com/competitions/UBC-OCEAN/data" target="_blank">https://www.kaggle.com/competitions/UBC-OCEAN/data</a></li> </ul> <h1>Overview of the approach</h1> <p>Our final solution was a combination of separated high scoring TMA and WSI models. For WSI models we’ve used some self-supervised learning (SSL) pretrained features extractor executed on 224 pixels tiles followed by Multiple Instance Learning (MIL) models. Same for TMA model with additional regular Transformer and CNN backbones. Outliers detection takes place in postprocessing and relies on both embedding distance and probabilities distributions.</p> <h1>Detail of the submission</h1> <p><strong>WSI models:</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F698363%2Fe18dd035e408c85875e7b74d4862bc2e%2Fwsi_model.png?generation=1704326923139467&amp;alt=media" alt=""><br> <strong>TMA models:</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F698363%2F5ba1a216d5328e17c20cc55e866fcbb5%2Ftma_model.png?generation=1704326948322245&amp;alt=media" alt=""><br> <strong>Post processing:</strong><br> TMA models have been trained with ArcFace loss and an ArcMarginProduct sub center module to try to separate embeddings space as much as possible per class. It allows detecting outliers based on a similarity distance. Our distance threshold has been fine tuned on public LB as we didn’t have any solid sample of real outlier/rare class. <a href="https://github.com/facebookresearch/faiss" target="_blank">Faiss</a> package has been used to find nearest neighbors.<br> We also performed another thresholding based on probabilities distributions. When all probabilities are low enough we switch the predicted class as outlier. The threshold has been calibrated on CV to avoid more than 10% outliers and checked on public LB.</p> <h1>WSI model training</h1> <p>Before training the WSI model, our medical intuition supported by this article led us to hypothesize that the relevant information for the subtype prediction was probably more located at low level. All the 513 available WSI in the training set were thus downscaled to 10x magnification. Since all these WSI had a black unicolor background, we then performed an otsu thresholding, in order to discard all the background tiles. We then tiled all the detected tissue into N non-overlapping 224px tiles. 
Insofar as it was not possible to infer tumor segmentation during submission because of the time limitation, we decided to keep all the tumor and the non-tumor tiles for training.<br> All these N tiles were then encoded with CTransPath and Lunit-DINO, 2 features extractors trained using self-supervised learning on diverse pathology dataset. According to this article about the robustness of these models to stain variations, we did not perform any kind of augmentation or normalization preprocessing.<br> We then trained and evaluated several MIL architecture into a weighted ensemble. The best CV were obtained by combining three of them : </p> <ul> <li>Clustering-constrained attention MIL (<a href="https://github.com/mahmoodlab/CLAM" target="_blank">https://github.com/mahmoodlab/CLAM</a>)</li> <li>Dual-stream MIL (<a href="https://github.com/binli123/dsmil-wsi" target="_blank">https://github.com/binli123/dsmil-wsi</a>)</li> <li>A weighted sum of the embeddings.<br> MIL training procedure and parameters:</li> <li>CV4, Stratified Group KFold</li> <li>No augmentation</li> <li>No normalization</li> <li>Batch size =1, epochs = 32</li> <li>AdamW optimizer, CosineAnnealingLR, LR=5e-3</li> <li>Cross-Entropy Loss</li> </ul> <h1>TMA model training</h1> <p>The UBC training dataset was coming with only 25 TMA samples and we know, according to the description, that TMA in the test set are the majority. We’ve detected around 65% to 70% of images with sides less than 6000 pixels. We’ve decided to generate some TMA based on the WSI provided in the training set. We’ve developed a custom augmentation that is detailed in this notebook: <a href="https://www.kaggle.com/code/mpware/ubc-tma-generator-from-wsi" target="_blank">https://www.kaggle.com/code/mpware/ubc-tma-generator-from-wsi</a></p> <pre><code> (): A.Compose([ SimulateTMA((-, -), radius_ratio=(, ), ellipse_ratio=(, ), angle=(-, ), background_color=(-, -, -), background_color_ratio=(, ), noise_level=(/, /), black_replacement_color=, p=, always_apply=), A.OneOf([ Stainer(ref_images=tma_images, method=, luminosity=, p=), Stainer(ref_images=, method=, luminosity=, p=), Stainer(ref_images=tma_images, method=, luminosity=, p=), ], p=), ], p=p) </code></pre> <p>The idea is to identify tiles with tumoral tissue and crop an ellipse shape as could be a real TMA picked by an operator.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F698363%2Fd37aaf25fc882b4f0c477f975514b388%2Ftma_v1.png?generation=1704327065491575&amp;alt=media" alt=""><br> The crops are then augmented with stains based on the 25 TMA as references. As the WSI magnification is mainly x20 the generated TMA are also x20. Here are some generated samples:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F698363%2F7d4e508fdbfd96b7f04f8356408803d2%2Ftma_v2.png?generation=1704327082748102&amp;alt=media" alt=""><br> A final step was to review them manually to drop bad generated TMA (especially when tumor mask was not available / complete). Our best private LB (0.58) was with such validated TMA. 
Unfortunately we did not select it as final submission.<br> <strong>MIL</strong> training procedure and parameters:</p> <ul> <li>CV4, Stratified Group KFold</li> <li>Random batch sampler to balance samples</li> <li>Augmentations: Stain: Vahadane, Macenko, Reinhard</li> <li>Mask on attention, batch size = 32, epochs = 32</li> <li>EMA</li> <li>AdamW optimizer, CosineAnnealingLR, LR=1e-3</li> <li>Cross Entropy Loss<br> <br><br> <strong>Transformer/CNN</strong> training procedure and parameters:</li> <li>ImageNet pretrained backbones (<a href="https://github.com/huggingface/pytorch-image-models" target="_blank">Timm</a>):<ul> <li>tiny_vit_21m_512.dist_in22k_ft_in1k</li> <li>tf_efficientnetv2_s_in21ft1k</li></ul></li> <li>Augmentations: <ul> <li>H/V flips, Rot90</li> <li>Stain: Vahadane, Macenko, Reinhard</li> <li>Random BrightnessContrast/Gamma, HueSaturationValue, ColorJitter, CLAHE</li> <li>GaussianBlur, MotionBlur, GaussNoise</li> <li>Cut Mix, DropOut</li></ul></li> <li>EMA</li> <li>Batch size = 32, Epochs = 32</li> <li>AdamW optimizer, CosineAnnealingLR, LR=1e-4</li> <li>Cross Entropy Loss<br> Models have been trained with full data after checking stability on cross validation.<br> Here is a 2D t-SNE projection of TinyVit trained embeddings on generated 23k TMA:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F698363%2Fe0859f7f5adffe00c3e30cb246b40101%2Ftma_training_embeddings.png?generation=1704327121454397&amp;alt=media" alt=""></li> </ul> <h1>Other useful strategies or approaches</h1> <p>Validation was quite difficult, MIL models were overfitting quite fast. Using EMA helped to limit it. </p> <h1>Model inference</h1> <p>Most of the inference time was lost in image loading/tiling. We’ve implemented a multiprocess inference to benefit from all CPUs but optimized to balance the memory issues due to concurrent large images loading. It reduced the loading + tiling of all images to around 5h30-6h. Features exaction was the most time consuming task that is why we’ve limited ourselves to the two best ones. We’ve limited the number of tiles to 350 max and at the end our inference ran in around 11h15-30min.</p> <h1>What did not work or improve?</h1> <p>A quick sum up of what did not work or not improve:<br> Resnet50-based features extractors such as RetCCL and Lunit-BT.<br> External data:</p> <ul> <li><a href="https://portal.gdc.cancer.gov/repository?facetTab=cases&amp;filters=%7B%22op%22%3A%22and%22%2C%22content%22%3A%5B%7B%22op%22%3A%22in%22%2C%22content%22%3A%7B%22field%22%3A%22cases.primary_site%22%2C%22value%22%3A%5B%22ovary%22%5D%7D%7D%2C%7B%22op%22%3A%22in%22%2C%22content%22%3A%7B%22field%22%3A%22files.data_type%22%2C%22value%22%3A%5B%22Slide%20Image%22%5D%7D%7D%5D%7D" target="_blank">TCGA</a></li> <li><a href="https://www.cancerimagingarchive.net/collection/ovarian-bevacizumab-response/" target="_blank">ATEC</a><br> <br><br> Usually adding more data is always better but here it did not help on both CV and LB. However the quality of many slides was very bad and could explain it. Also the labeling of some was not obvious.<br> Train a WSI model based on ImageNet pre-trained backbone. It worked but Ctranspath and LunitDINO outperformed it.<br> Down scale to x5 for WSI models (instead of x10)<br> Train a tumor segmentation model in order to sample tumor TMA, but since subtypes have significant morphological variations, we preferred to train a stroma segmentation model and predict the carcinoma mask by complementarity. 
Finally it was impossible to set up a unique threshold for TMA selection because of high variation in epithelial surface area between solid and mucinous architecture. <br> Pseudo labeling has not been tried.</li> </ul> <h1>Sources</h1> <ul> <li>1) Deep Learning for Detecting BRCA Mutations in High-Grade Ovarian Cancer Based on an Innovative Tumor Segmentation Method From Whole Slide Images: <a href="https://www.modernpathology.org/article/S0893-3952(23)00209-0/fulltext" target="_blank">https://www.modernpathology.org/article/S0893-3952(23)00209-0/fulltext</a></li> <li>2) A Good Feature Extractor Is All You Need for Weakly Supervised Learning in Histopathology: <a href="https://arxiv.org/pdf/2311.11772.pdf" target="_blank">https://arxiv.org/pdf/2311.11772.pdf</a><br> <br><br> We had a lot of fun solving this kaggle. It was a lot of data to handle, in addition to the ML challenge it was an optimization challenge to make the inference fast.<br> <br><br> Raphaël Bourgade and MPWARE</li> </ul>
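<p>To make the outlier handling described in the post-processing section above more concrete, here is a minimal, hedged sketch of the two checks it combines: an embedding-distance test against the training embeddings (nearest neighbour via Faiss) and a "no confident class" probability test. The thresholds, variable names, and the cosine-similarity formulation are placeholders, not the team's actual values.</p> <pre><code>import faiss
import numpy as np

def predict_with_outliers(test_emb, test_probs, ref_emb, classes,
                          sim_thresh=0.5, prob_thresh=0.3):
    # L2-normalize so that inner product equals cosine similarity
    ref = (ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)).astype(np.float32)
    query = (test_emb / np.linalg.norm(test_emb, axis=1, keepdims=True)).astype(np.float32)

    index = faiss.IndexFlatIP(ref.shape[1])
    index.add(ref)
    sims, _ = index.search(query, 1)  # similarity to the closest training embedding

    preds = []
    for sim, probs in zip(sims[:, 0], test_probs):
        if sim &lt; sim_thresh or probs.max() &lt; prob_thresh:
            preds.append("Other")  # far from the training manifold, or no confident class
        else:
            preds.append(classes[int(probs.argmax())])
    return preds
</code></pre>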
SenNet + HOA - Hacking the Human Vasculature in 3D
9th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to the host, I learned a lot from this competition!<br> Also, special thanks to <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>. I referred to <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>'s discussion topics many times.</p> <h1>TL;DR</h1> <ul> <li>MaxVit tiny</li> <li>xy, xz, yz inference</li> <li>heavy regularization</li> <li>probability threshold</li> </ul> <h1>Solution</h1> <h2>Model</h2> <ul> <li>MaxVit tiny</li> </ul> <pre><code># Unet-style model with a timm encoder plus an extra full-resolution stem.
# Conv2dReLU, UnetDecoder and SegmentationHead follow segmentation_models_pytorch
# (Conv2dReLU appears to be a customized variant). Keyword names and a few constants
# that were lost in formatting are filled in with reasonable placeholders.
class SenUNetStem(nn.Module):
    def __init__(self, encoder_name, classes=1, activation=None, in_chans=1,
                 encoder_depth=5, decoder_channels=(256, 128, 64, 32, 16),
                 decoder_use_batchnorm=True, decoder_attention_type=None,
                 n_time=1, pickup_index=0):
        super(SenUNetStem, self).__init__()
        kwargs = dict(
            in_chans=in_chans,
            features_only=True,
            pretrained=True,
            out_indices=tuple(range(encoder_depth)),
        )
        self.conv_stem = Conv2dReLU(in_chans, 32, 3, use_layernorm=False, padding=1)
        self.encoder = timm.create_model(encoder_name, **kwargs)
        self._out_channels = [32, ] + self.encoder.feature_info.channels()
        self.decoder = UnetDecoder(
            encoder_channels=self._out_channels,
            decoder_channels=decoder_channels,
            n_blocks=encoder_depth,
            use_batchnorm=decoder_use_batchnorm,
            center=encoder_name.startswith("vgg"),
            attention_type=decoder_attention_type,
        )
        self.segmentation_head = SegmentationHead(
            in_channels=decoder_channels[-1],
            out_channels=classes,
            activation=activation,
            kernel_size=3,
        )
        self.n_time = n_time
        self.pickup_index = pickup_index

    def forward(self, x):
        B, C, H, W = x.shape
        # crop so the spatial size is divisible by the encoder stride
        h = (H // 32) * 32
        w = (W // 32) * 32
        x = x[:, :, :h, :w]
        stem = self.conv_stem(x)
        features = self.encoder(x)
        features = [stem, ] + features
        decoder_output = self.decoder(*features)
        masks = self.segmentation_head(decoder_output)
        # pad the prediction back to the original resolution
        masks = F.pad(masks, [0, W - w, 0, H - h, 0, 0, 0, 0], mode="constant", value=0)
        return masks[:, 0]


model = SenUNetStem(
    encoder_name="maxvit_tiny_tf_512",  # placeholder timm name for "MaxVit tiny"
    classes=1,
    activation=None,
)
</code></pre> <h2>Dataset</h2> <ul> <li>kidney_1 and kidney_3 datasets.</li> </ul> <h2>Training Tricks</h2> <p>I focused on regularization tricks, because this competition has an unstable CV and the public LB is not well correlated with it. Moreover, the host shared images with public/private LB information, and I think they also indicate instability.</p> <ul> <li>EMA</li> <li>50 epochs</li> <li>AdamW (weight decay 1e-2)</li> <li>CutMix (until epoch 25)</li> <li>MixUp (until epoch 25)</li> <li>DiceLoss(smooth_factor=0.1)</li> <li>Heavy augmentation</li> </ul> <pre><code># Keyword names that were lost in formatting are restored as best guesses; values are as given.
train_aug = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.RandomBrightness(limit=0.1, p=0.7),
    A.OneOf([
        A.GaussNoise(var_limit=[10, 50]),
        A.GaussianBlur(),
        A.MotionBlur(),
        A.MedianBlur(blur_limit=3),
    ], p=0.4),
    A.OneOf([
        A.GridDistortion(num_steps=5, distort_limit=0.3, p=1.0),
        A.OpticalDistortion(distort_limit=1., p=1.0)
    ], p=0.2),
    A.ShiftScaleRotate(p=0.7, shift_limit=0.5, scale_limit=0.2, rotate_limit=30),
    A.CoarseDropout(max_holes=1, max_height=0.25, max_width=0.25),
    ToTensorV2(transpose_mask=True)
])
</code></pre> <ul> <li>Crop (512)</li> </ul> <h2>Inference</h2> <p>Inference is done along the xy, xz, and yz axes, with 512 crops and a stride of 256.</p> <h2>Post-Process</h2> <p>Probability threshold (= sigmoid output). I didn't use the percentile thresholding that many public notebooks and past segmentation competitions (e.g. Volcano) used.<br> When I checked the percentile threshold on local CV, it was not stable, so 
I didn't use it.</p> <h2>What did not work</h2> <ul> <li>Bigger models (maxvit base, small)</li> <li>Larger inference size (1024); 512 is enough for this competition.</li> <li>Rotate90</li> <li>Pretraining on other volumes (kidney_2/kidney_1_volumes)</li> </ul> <p><a href="https://www.kaggle.com/code/tereka/simpleunet-xy-xz-yz-v2-nbp-b749ff/notebook" target="_blank">https://www.kaggle.com/code/tereka/simpleunet-xy-xz-yz-v2-nbp-b749ff/notebook</a></p>
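<p>As a small, hedged illustration of the xy/xz/yz inference described above: the same 2D model is run over slices taken along each axis of the volume and the three probability volumes are averaged. The 512-crop / 256-stride sliding window is omitted for brevity, and a single-channel float32 volume is assumed.</p> <pre><code>import numpy as np
import torch

@torch.no_grad()
def predict_three_axes(model, volume):
    """volume: (D, H, W) float32 array; returns the averaged probability volume."""
    prob = np.zeros(volume.shape, dtype=np.float32)
    for axis in range(3):
        vol = np.moveaxis(volume, axis, 0)              # slice along the chosen axis
        out = np.zeros(vol.shape, dtype=np.float32)
        for i in range(vol.shape[0]):
            x = torch.from_numpy(np.ascontiguousarray(vol[i]))[None, None].float()
            out[i] = torch.sigmoid(model(x))[0, 0].cpu().numpy()
        prob += np.moveaxis(out, 0, axis)               # put the axis back in place
    return prob / 3.0
</code></pre>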
SenNet + HOA - Hacking the Human Vasculature in 3D
4th place solution. Boundary DoU Loss is all you need!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to start my solution description with a few important words:</p> <p><em>I would like to thank the Armed Forces of Ukraine, the Security Service of Ukraine, Defence Intelligence of Ukraine, and the State Emergency Service of Ukraine for providing safety and security to participate in this great competition, complete this work, and help science, technology, and business not to stop but to move forward.</em></p> <h1>Context</h1> <ul> <li>Business context: <a href="https://www.kaggle.com/competitions/blood-vessel-segmentation" target="_blank">https://www.kaggle.com/competitions/blood-vessel-segmentation</a> </li> <li>Data context: <a href="https://www.kaggle.com/competitions/blood-vessel-segmentation/data" target="_blank">https://www.kaggle.com/competitions/blood-vessel-segmentation/data</a></li> </ul> <h1>Overview of the approach:</h1> <p>My final model is a mixture of 2d and 3d models with d4 tta. For the 2d model, the multiview tta was applied. All models were trained in a 2-fold setup with kidney_2 and kidney_3_dense selected as validation sets. The ensembling was performed with equal weights for both 2d and 3d models.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F207760%2F0e15deeafd39981c337888fdaf27e2c9%2FScreenshot%202024-02-07%20at%2002.40.31.png?generation=1707266453616337&amp;alt=media"></p> <h1>Details of the submission</h1> <h2>Data preparation and training data and validation scheme</h2> <p>All final (3d and 2d) models were trained on kidney_1_dense, kidney_2, kidney_3_dense, kidney_3_sparse and pseudo labels <a href="http://human-organ-atlas.esrf.eu" target="_blank">50um_LADAF-2020-31_kidney_pag-0.01_0.02_jp2_</a>. Initially, I used slice-wise normalization to normalize images but later switched to stack-wise normalization based on percentiles.</p> <p>The 2D model was trained in a multiview setup: all images were stacked in a tensor and sliced in different axes afterward. During the training, the set of augmentations and sampling strategy was crucial. The weighted sampling was based on sparsity percentage: dense samples had a weight of 1, while sparse samples had a weight equal to their sparsity. For pseudo labels, I chose the same weight as for kidney_2, e.g.: </p> <pre><code>kidney_1_dense: , kidney_2: , kidney_3_dense: , kidney_3_sparse: , 50um_LADAF--31_kidney_pag-_jp2_: . </code></pre> <p>The augmentation scheme was the next one, with a chance of 0.5 CutMix augmentation being applied. The cropping was performed from the same organ and the same projection axis. Afterward, on top of CutMix, the next augmentation pipeline was applied:</p> <pre><code>A.Compose( [ A.PadIfNeeded(*crop_size), A.CropNonEmptyMaskIfExists(*crop_size, p=), A.ShiftScaleRotate(scale_limit=), A.HorizontalFlip(p=), A.VerticalFlip(p=), A.RandomRotate90(p=), A.OneOf([ A.RandomBrightnessContrast(), A.RandomBrightness(), A.RandomGamma(), ],p=,), ],p=,) </code></pre> <p>The crop size was set to 512. I’ve also tried higher resolution, but it performs +- the same result. 
</p> <p>I did some experiments with 2.5d approaches (3 and 5 channels), but they produced the same result or worse. </p> <p>The 3d model augmentation scheme contained only d4 augmentations and random crops. The cropping was performed with a 0.5 probability of an empty mask. This was motivated by false positives that appeared outside the kidney volume. It could be improved by incorporating two-class 3D segmentation, but I didn’t have the time and resources to perform such an experiment, so I decided to create a post-processing step that would handle this. <br> The crop size for the 3d model was 192x192x192.</p> <p>Both models were trained in a 2-fold setup where, as validation, I used kidney_2 (fold_1) and kidney_3_dense (fold_0). Removal of kidney_1 from the training set caused a degradation in performance in both CV and LB, so I dropped fold_2 and didn't perform training in that setup.</p> <h2>Model setup</h2> <p>The best results I was able to get were with the efficientnet family of models with a UnetPlusPlus decoder and SCSE attention from the segmentation_models_pytorch library. I’ve tried the resnet50 model, like it was mentioned in the discussion section, different transformers and seresnext models, but could not overcome the performance of efficientnet-b5 (which performed the best on both CV and LB). On my local validation, the best score I was able to get was with the efficientnet_b7 and mit_b5 encoders, but on the LB the score was significantly lower.<br> The training was performed for 30 epochs with a cosine LR scheduler going from 3e-4 down to 1e-6. I saved the top 3 checkpoints and used the best-last checkpoint for the submission.<br> The model efficientnet_b5_UnetPlusPlus trained in such a setup was able to score 0.878 on the public LB and 0.714 on the private LB at a 0.05 threshold. </p> <p>Here yellow is TP, green is FP, and red is FN.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F207760%2F0484711261976f944aca4341305752f2%2FindividualImage.png?generation=1707265123800657&amp;alt=media"><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F207760%2F157fa4895ef81f6e1fc0dbb8e685c17d%2FindividualImage-2.png?generation=1707265135912886&amp;alt=media"></p> <p>The 3d model was heavily inspired by the nnUnet architecture and was pretty much the same. Instead of the native nnUnet model, I used DynUnet from the monai library with an almost default configuration and trained it in almost the same setup as nnUnet. As the optimizer, I used SGD with an initial LR of 0.01 and the Cosine Annealing LR scheme instead of LinearLR, and trained for 500 epochs with 2000 samples per epoch. <br> This model scored 0.869 (0.868 and 0.866 -- 0 and 1 folds respectively) on the public LB and 0.694 on the private LB (0.758 and 0.663 -- 0 and 1 folds respectively). </p> <p>Both models were trained using the BoundaryDOULoss (<a href="https://arxiv.org/pdf/2308.00220.pdf" target="_blank">https://arxiv.org/pdf/2308.00220.pdf</a>), which performed the best. I’ve tried to modify it to perform better on sparse data but failed. </p> <h2>Pseudo labeling</h2> <p>Based on the preprint, I downloaded the additional data from the <a href="http://human-organ-atlas.esrf.eu" target="_blank">http://human-organ-atlas.esrf.eu</a> site (2 datasets). It appeared that one of the datasets overlaps with kidney_3, so I dropped it to prevent leakage. I used the other one to generate pseudo labels. 
For pseudo labeling, I used an ensemble of 2d models (efficientnet-b5 and efficientnet-b6 with UnetPlusPlus) trained with the same setup but without CutMix. The correct setup of CutMix as well as the 3d model I was able to discover close to the competition deadline, so I didn’t retrain the original ensemble and stick to the first version of pseudos. </p> <h2>Inference setup and Post-processing</h2> <p>The inference for both models was performed using sliding_window_inference from monai library. Additionally, for 2d model I performed multi-view tta, which helped to detect small vessels and improve overall performance. </p> <p>For the 2d model, the crop size was 800 pix, while for the 3d – 256 pix with 0.25 overlap and Gaussian merging. All models used d4_transform from ttach library. I’ve forked the ttach repository and implemented the logic for 3d images, but the inference time increased significantly, and there was no major boost in performance, so I’ve sticked with 2d d4_transform for both 2d and 3d models :)</p> <p>As I mentioned before, the 3d model had decent performance on the non-empty cubes, while empty ones were confusing the model. To handle this issue, decided to experiment with post-processing. The idea was the next one: let's try to find ROI where the vessels were presented. Since the 2d model didn’t have such a problem I’ve decided to find a bounding polygon for vessels for each 2d slice. Having a mask of ROI, I multiplied it with 3d model predictions and got a boost from 0.869 to 0.881 public LB and 0.701 private LB for a single 3d model.</p> <p>Ensembling the 2d model and 3d model predictions with weights 1 and 1, I was able to improve the score from 0.881 to 0.884 on the public LB and 0.712 on the private LB.</p> <p>Another post-processing approach that I’ve tried is to use Canny filters from cv2 to segment the kidney. This segmentation algorithm was not perfect, but applying such post-processing boosted my score from 0.884 to 0.892 on the public LB while failing on the private LB, scoring just 0.313.</p> <h2>What didn’t work</h2> <ul> <li>nnUnet out of the box. At the beginning of the challenge and after the pre-print reading, I tried to reproduce the result with nnUnet. The local score was promising, but the LB was 0. My intuition behind this issue related to the data normalization and spacing (scale), but I didn’t try to fix it and decided to build my own solution.</li> <li>BCE and Focal Loss.</li> <li>Transformers in both 2d and 3d model</li> <li>Zoom and brightness augmentation for 3d images</li> <li>Pseudo on top of sparse datasets. I’ve tried to fulfill the sparsity of the dataset by pseudo labeling and aggregation, but it didn’t improve the score.</li> <li>Additional projections. I’ve performed experiments with 2d models and additional slices generated from the 3d stack, but LB performance dropped by 20% while CV was about the same. </li> <li>Auxiliary outputs such as distance transform or center of mass. </li> <li><strong>and the most important: validation</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F207760%2Fcd379db59f3575778ebc4eca63ef0189%2FScreenshot%202024-02-07%20at%2002.29.01.png?generation=1707265761683297&amp;alt=media"></li> </ul> <p>P.S. If you were able to read all of this, the top score on the private LB was a simple mix of 2d and 2.5d models with 1 and 3 channels :) </p> <p>P.P.S. 
Thank you for reading!</p> <h2>Links</h2> <ul> <li><a href="https://www.kaggle.com/datasets/igorkrashenyi/50um-ladaf-2020-31-kidney-pag-0-01-0-02-jp2" target="_blank">Pseudo labels </a></li> <li><a href="https://github.com/burnmyletters/blood-vessel-segmentation-public" target="_blank">Source code</a></li> <li>Inference code <a href="https://www.kaggle.com/code/igorkrashenyi/4th-place-solution/notebook" target="_blank">https://www.kaggle.com/code/igorkrashenyi/4th-place-solution/notebook</a> + <a href="https://www.kaggle.com/code/igorkrashenyi/fork-of-multiview-2-5-sennet-hoa-inference-v3" target="_blank">https://www.kaggle.com/code/igorkrashenyi/fork-of-multiview-2-5-sennet-hoa-inference-v3</a></li> </ul>
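<p>For reference, here is a hedged sketch of how the 2D inference described above can be wired together with monai's <code>sliding_window_inference</code> and ttach's d4 transforms. <code>model</code> and <code>image_batch</code> are assumed to exist; the crop size and overlap follow the numbers quoted in the write-up, and the Gaussian blending and sigmoid placement are illustrative choices.</p> <pre><code>import torch
import ttach as tta
from monai.inferers import sliding_window_inference

# Wrap the segmentation model so every window is averaged over the 8 d4 flips/rotations
tta_model = tta.SegmentationTTAWrapper(model, tta.aliases.d4_transform(), merge_mode="mean")

with torch.no_grad():
    pred = sliding_window_inference(
        inputs=image_batch,                        # (B, C, H, W) tensor
        roi_size=(800, 800),
        sw_batch_size=4,
        predictor=lambda x: torch.sigmoid(tta_model(x)),
        overlap=0.25,
        mode="gaussian",
    )
</code></pre>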
Child Mind Institute - Detect Sleep States
34th Place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to thank the organizers and kaggle staff for organizing the competition. I learned a lot through the competition.</p> <h1>Summary</h1> <ul> <li>based on 213tubo's repository</li> <li>Data cleansing</li> <li>Ensemble models</li> <li>Reduce FP by post-processing</li> </ul> <h1>Data cleansing</h1> <p>When I visualized <code>train_series.parquet</code> and <code>train_events.csv</code> by series_id as shown below, I noticed that the label data was dirty.</p> <p>Labels were not assigned where they should have been, or were assigned where they should not have been (where the organizer may have filled in values for missing data).</p> <p>There were many areas of concern, such as labels assigned at times that were probably tens of minutes or more off from the subjects' actual onset/wakeup times.</p> <p>We thought that such dirty label data would be an obstacle in training the model, so we examined the data for every series_id and cleansed the label data by shifting or adding label positions.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4472591%2F54084323b61ae128f11ae023af23f952%2F1.png?generation=1701821199709357&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4472591%2F094a8b23da8b1a11b44d8518081a552d%2F2.png?generation=1701821213143683&amp;alt=media" alt=""></p> <h1>Model</h1> <ul> <li>Unet</li> <li>encoder<ul> <li>maxvit_tiny</li> <li>eca_nfnet_l1</li> <li>dm_nfnet_f0</li></ul></li> <li>Augment<ul> <li>Mixup</li> <li>Spec_augment</li></ul></li> <li>Label smoothing (sleep)</li> <li>Sliding inference</li> <li>Not using both ends of the inference window</li> </ul> <h1>Post-processing</h1> <p>When I visualized the data, I noticed that some of it showed unnatural periodic patterns, such as <code>1e6717d93c1d</code>.</p> <p>I considered these unnatural periodic patterns to be invalid because the organizers had filled in common values for each time of day for periods when no data were measured.</p> <p>The model may output onset/wakeup predictions within these periodic patterns, but they are considered false positives.</p> <p>We post-processed the model output to remove predictions within these periodic patterns.</p> <p>The criteria for periodic patterns are as follows:</p> <ul> <li>When anglez and enmo are shifted back and forth by 17280*n steps, if both anglez and enmo values are consistent, the prediction is considered invalid.</li> <li>However, if they do not match for two or more consecutive steps, they are not judged invalid (to account for coincidental matches).</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4472591%2F2f9154d75cc09c12501a037c8fe0b2d6%2F3.png?generation=1701821229161164&amp;alt=media" alt=""></p>
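<p>A minimal sketch of the periodicity check described above (my own illustration, not the author's code): a candidate event is dropped if the raw signals one day (17280 steps) earlier or later are essentially identical at that step. The tolerance is a placeholder, the "two or more consecutive steps" exception is omitted for brevity, and events are assumed to be dicts with a <code>step</code> field.</p> <pre><code>import numpy as np

PERIOD = 17280  # steps per day at 5-second resolution

def in_filled_region(anglez, enmo, idx, tol=1e-6):
    # Check whether the signal repeats exactly one day earlier or later
    for shift in (-PERIOD, PERIOD):
        j = idx + shift
        if 0 &lt;= j &lt; len(anglez):
            if abs(anglez[idx] - anglez[j]) &lt; tol and abs(enmo[idx] - enmo[j]) &lt; tol:
                return True
    return False

# Drop candidate events that fall inside such periodic (imputed) regions
events = [e for e in events if not in_filled_region(anglez, enmo, e["step"])]
</code></pre>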
SenNet + HOA - Hacking the Human Vasculature in 3D
6th place solution: Luck is All You Need
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi, Kagglers!</p> <p>After finding yourself on the Leaderboard after Grand Shakeup and restoring your mental health, we can dive deep into 6th place solution, but before this, a few very important words:<br> <em>I would like to thank the Armed Forces of Ukraine, Security Service of Ukraine, Defence Intelligence of Ukraine, and the State Emergency Service of Ukraine for providing safety and security to participate in this great competition, complete this work, and help science, technology, and business not to stop but to move forward.</em></p> <h1>Validation. Not really</h1> <p>I was using for validation 2 organs (in different folds): kidney_3_dense and kidney_2. After releasing a <a href="https://www.kaggle.com/code/junkoda/fast-surface-dice-computation" target="_blank">fast version of 3D Surface Dice</a>, I was able to compute validation scores while training, and I received the next insights:</p> <ul> <li>Tracking the score on kidney_2 was useless for me. The validation score decreased from epochs 2-3 on kidney_2</li> <li>Scores on kidney_3_dense were meaningful for checking “radical” features, like additional data and new losses. But then optimal score fluctuated between 0.9-0.925 dice without any reasonable correlation with Public or Private score</li> <li>The optimal threshold on kidney_3_dense was optimal for Private, Public, and kidney_3_dense scores - 0.1 and lower <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1690820%2Fa4050c6e4e6d27662cfddc62201e2f71%2Fkidney_3_dense_val.png?generation=1707321782497972&amp;alt=media"></li> <li>Resize to constant um/voxel (I have picked 50 um/voxel) for prediction increased optimal threshold both on CV and Public but decreased optimal score dramatically. But on Private, it became one of the most robust approaches<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1690820%2F6f0a8a87cf758aa4920353ace23c11cf%2Fkidney_3_dense_resize_val.png?generation=1707323076488076&amp;alt=media"><br> In summary, Validation did not work (at least mine). 
It is not strange because of solo data point in CV, Public, and Private </li> </ul> <h1>Data</h1> <p>I was using all train data except kidney_1_voi sample<br> In order to enlarge training data I have used 50um_LADAF_2020_31_kidney_pag from <a href="https://human-organ-atlas.esrf.eu/search?organ=kidney" target="_blank">Human Organ Atlas</a> <br> For data normalization, I was using the approach proposed by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> - <a href="https://www.kaggle.com/competitions/blood-vessel-segmentation/discussion/456118#2552053" target="_blank">percentile normalization</a></p> <h1>Training setup</h1> <p>I mostly stick to the 2.5D approach with 5 slices.<br> I started from one view model and iterated along the last axis, but then I switched to a multiview and used slices by all three axes in the training<br> I have used 512 square crops with Non Empty probability of 0.5, pretty much standard augmentations and CutMix within one organ and one view with 0.5 probability and 1.0 alpha :</p> <pre><code>: : [ A.PadIfNeeded( min_height=crop_size, min_width=crop_size, always_apply=, ), A.OneOrOther( first=A.CropNonEmptyMaskIfExists(crop_size, crop_size), second=A.RandomCrop(crop_size, crop_size), p= ), ], : A.Compose( [ A.ShiftScaleRotate( scale_limit=, ), A.RandomRotate90(p=), A.HorizontalFlip(p=), A.VerticalFlip(p=), A.Transpose(p=), A.OneOf( [ A.RandomBrightnessContrast(), A.RandomBrightness(), A.RandomGamma(), ], p=, ), ToTensorV2(transpose_mask=), ] ), : , : {: , : }, </code></pre> <p>I was using Adam optimizer and reduced learning rate with CosineAnnealingLR starting from 1e-3 and ending with 1e-6<br> Regarding loss function choice, I started with classical BCE+Dice loss and then tried to implement the loss function, which will directly optimize the metric, but unfortunately, it did not work well. Luckily, I have come across BoundaryLoss, which worked firstly comparable to BCE+Dice loss and then better. Interesting fact that the best (not selected) model was trained on BoundaryLoss + 0.5 Focal Symmetric Loss and scored 0.756 on Private, 0.867 on Public, and 0.916 on kidney_3_dense, which is a pretty much balanced score (of course, comparing to all other models score distribution 🙂)<br> I was training for 30 epochs. repeating the original train set 3 times and the pseudo train set 2 times<br> I have used a batch size of 14 samples and trained with DDP on 2 GPUs, so the final batch size was 28</p> <p>After I saw <a href="https://www.kaggle.com/competitions/blood-vessel-segmentation/discussion/461213#2594751" target="_blank">the post</a> about promising results from 3D models, I started exploring 3D approaches, and they worked pretty well. In order to make it train without NaNs I have changed the optimization strategy and switched to SGD, with momentum=0.99, weight_decay=3e-5, nesterov=True and also changed starting learning rate to 1e-6 - taken from <a href="https://github.com/Project-MONAI/tutorials/tree/main/modules/dynunet_pipeline" target="_blank">monai example</a><br> As the overall image resolution of the image was increased dramatically, I had to reduce the batch to 3 on one GPU, so the aggregated batch size was 6. I was training in total for ~120K iterations<br> As for augmentations - they were pretty much the same as in the 2.5D setup, except from Zoom. 
</p> <pre><code>: : mt.Compose( [ mt.OneOf([ mt.RandRotate90d(keys=(, ), prob=, spatial_axes=(-,-)), mt.RandRotate90d(keys=(, ), prob=, spatial_axes=(-,-)), mt.RandRotate90d(keys=(, ), prob=, spatial_axes=(-,-)) ]), mt.RandFlipd(keys=(, ), prob=, spatial_axis=-), mt.RandFlipd(keys=(, ), prob=, spatial_axis=-), mt.RandFlipd(keys=(, ), prob=, spatial_axis=-), mt.RandScaleIntensityd(keys=(), prob=, factors=) ] ), </code></pre> <p>Zoom worked better on CV but worse on Public and also on Private (Why? - who knows …)</p> <h1>Neural Networks</h1> <p>I was mostly using <a href="https://arxiv.org/abs/1905.11946" target="_blank">EfficientNet family</a> as an Encoder (from noisy student weights), started from B3, then switched to B5, and unfortunately, B7 did not work well for me both on CV and Public </p> <p>Interestingly se_resnext50_32x4d performed not well on Public LB (0.852) and CV (0.909) but really well on Private (0.702)</p> <p>As for Decoder I was mostly using Unet++. I have tried <a href="https://arxiv.org/abs/2004.08790" target="_blank">Unet3+</a> but it showed considerably worse results</p> <p>As for 3D Nets, I was using <a href="https://monai-dev.readthedocs.io/en/fixes-sphinx/networks.html#dynunet" target="_blank">DynUNet</a> and adopted model architecture according to <a href="https://github.com/Project-MONAI/tutorials/blob/main/modules/dynunet_pipeline/create_network.py#L19" target="_blank">next script</a>. I have tried to use pretrained Unet from <a href="https://monai.io/model-zoo.html" target="_blank">MONAI Model Zoo</a> but it performed badly on all sets </p> <h1>Inference and Post Processing</h1> <p>I was using 512 sliding window with 0.5 overlap, flip TTA, and last checkpoint from 2 folds.<br> After switching to multi view model, I have also added multi view TTA</p> <p>The next step was the creation of a kidney mask. I have tried several approaches </p> <ol> <li>Using segmentation net, trained on <a href="https://www.kaggle.com/datasets/squidinator/sennet-hoa-kidney-13-dense-full-kidney-masks" target="_blank">this dataset</a> + slight post-processing for removing binary holes and small connected regions </li> <li>Using an algorithmic approach based on intensity thresholding, erosion, and dilation</li> </ol> <p>The first one had a pretty high FP rate but nearly zero FN rate, while the second one had a pretty high FN rate. Both of them performed nearly ideal on kidney_3, so did not really influence the fold 0 scores, but an algorithmic approach cut out kidney regions for kidney_2 and dramatically reduced the fold 1 score. BUT at the same time, the second approach improved Public score (0.874-&gt;0.882). 
I understood that it was 90% overfit to Public LB, but I have decided to take the risk</p> <h1>Final Model</h1> <p>For final submission, I have selected the following ones:</p> <ul> <li>Pure 2.5D -&gt; algorithmic post-processing<ul> <li>Public: 0.886</li> <li>Private: 0.681</li> <li>Kidney 3 dense score: 0.917 </li></ul></li> <li>2.5D (weight 3.0) blended with 3D (weight 1.0) <ul> <li>Public: 0.871</li> <li>Private: 0.676</li> <li>Kidney 3 dense score: ~0.918<br> For both models, I used 0.05 threshold </li></ul></li> </ul> <h1>The most popular rubric of this competition: Not Selected Best Submission</h1> <p>Here, I want to point out several of the most exciting approaches for me </p> <ul> <li>Pure 2.5D but add Symmetric Focal loss with 0.5 coefficient <ul> <li>Public: 0.867</li> <li>Private: 0.756</li> <li>Kidney 3 dense score: 0.916</li></ul></li> <li>Resize 2d slices to 50 um/voxel for prediction and then resize back <ul> <li>Public: 0.799</li> <li>Private: 0.753</li> <li>Kidney 3 dense score: 0.907</li></ul></li> <li>Resize the whole volume with scipy.zoom to 50 um/voxel for prediction and than resize back <ul> <li>Public: 0.7 resize back </li> <li>Public: 0.726</li> <li>Private: 0.745</li> <li>Kidney 3 dense score: Have not checked </li></ul></li> <li>Solo 3D model <ul> <li>Public: 0.849</li> <li>Private: 0.723</li> <li>Kidney 3 dense score: 0.915<br> For me, it was logical to pick first or second, but as for all other better submissions, it sounds to me like pure random.</li></ul></li> </ul> <h1>Conclusions</h1> <p>Computing metrics on one data sample leads to severe shakeups 🙂</p> <h1>Closing words</h1> <p>I hope you have not fallen asleep while reading. Finally, I want to thank the entire Kaggle community, congratulate all participants and winners. Special thanks to Indian University Bloomington, University College London, Yashvardhan Jain (@yashvrdnjain), Claire Walsh (@clairewalsh), the Kaggle Team, and other organizers.</p>
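<p>As a small, hedged illustration of the stack-wise percentile normalization mentioned in the Data section above (the exact percentiles used in the solution are not stated here, so the values below are placeholders):</p> <pre><code>import numpy as np

def percentile_normalize(volume, low=0.5, high=99.5):
    # Clip the whole stack to its percentile range, then rescale to [0, 1]
    lo, hi = np.percentile(volume, [low, high])
    volume = np.clip(volume, lo, hi)
    return (volume - lo) / (hi - lo + 1e-6)
</code></pre>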
UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN)
2nd Place Solution - UBC-OCEAN
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Preface</h1> <p>The most significant difficulty in whole slide image (WSI) classification is the extremely high resolution, which all competitors will have experienced. Although the organizers of the competition provided a data format that is difficult to process, fortunately the resolution of the data is much lower than that of typical WSI. In this discussion, we will provide a detailed introduction to our method.</p> <h1>Overview</h1> <p>Following the methods commonly used in academia, we took the following steps:</p> <ol> <li><strong>Crop</strong> an entire WSI into thousands of <strong>patches</strong>;</li> <li>Use extractors to <strong>extract the features</strong>;</li> <li>Train the <strong>MIL</strong> models.</li> </ol> <h1>External Data</h1> <p>We used two external datasets with labels. All competitors can download them without payment. We found that although more external data and the Other class were used for training, there was no significant improvement in scores. We believe this is due to quality issues with the external data or significant differences from the competition data. Just as some competitors achieved high scores without using external data, we believe that the external data is not necessary in this competition.</p> <ul> <li><a href="https://wirtualnymikroskop.mostwiedzy.pl/list/" target="_blank">https://wirtualnymikroskop.mostwiedzy.pl/list/</a></li> <li><a href="https://www.cancerimagingarchive.net/collection/ptrc-hgsoc/" target="_blank">https://www.cancerimagingarchive.net/collection/ptrc-hgsoc/</a></li> </ul> <h1>Crop Patches and Extract Features</h1> <p>We create one Dataset per WSI. 
Code is here:</p> <pre><code> (): (): ().__init__() self.data_path = data_path self.wsi_name = wsi_name self.ratio = ratio mode [, ] self.mode = mode self.wsi = pyvips.Image.new_from_file(os.path.join(data_path, , wsi_name + )) self.is_tma = self.wsi.height &lt; self.wsi.width &lt; self.patch_size = patch_size self.transform = T.Compose([T.ToTensor(), T.Resize((, ), antialias=), T.Normalize(mean=[, , ], std=[, , ])]) self.cor_list = self.get_patch() (): cor_list = [] self.is_tma: thumbnail = self.wsi : thumbnail = pyvips.Image.new_from_file(os.path.join(self.data_path, , self.wsi_name + )) wsi_width, wsi_height = self.wsi.width, self.wsi.height thu_width, thu_height = thumbnail.width, thumbnail.height h_r, w_r = wsi_height / thu_height, wsi_width / thu_width down_h, down_w = (self.patch_size / h_r), (self.patch_size / w_r) cors = [(x, y) y (, thu_height, down_h) x (, thu_width, down_w)] x, y cors: tile = thumbnail.crop(x, y, (down_w, thu_width - x), (down_h, thu_height - y)).numpy()[..., :] black_bg = np.mean(tile, axis=) &lt; tile[black_bg, :] = mask_bg = np.mean(tile, axis=) &gt; np.(mask_bg) &lt; (down_h, thu_height - y) * (down_w, thu_width - x) * (cor_list) == self.is_tma: cor_list.append(((x * w_r), (y * h_r))) self.is_tma: cor_list self.wsi.height &lt; self.wsi.width &lt; : R_ratio = self.wsi.height &lt; self.wsi.width &lt; : R_ratio = : R_ratio = random.shuffle(cor_list) cor_list = cor_list[:(((cor_list) * R_ratio), )] cor_list (): (self.cor_list) (): x, y = self.cor_list[idx] tile = self.wsi.crop(x, y, (self.patch_size, self.wsi.width - x), (self.patch_size, self.wsi.height - y)).numpy()[..., :] tile = self.transform(tile) tile </code></pre> <h1>Feature Extraction Model</h1> <p>We used <strong>dino_vit_small_patch16_200ep.torch</strong> and <strong>dino_vit_small_patch8_200ep.torch</strong>.</p> <ul> <li><a href="https://github.com/lunit-io/benchmark-ssl-pathology/releases/tag/pretrained-weights" target="_blank">https://github.com/lunit-io/benchmark-ssl-pathology/releases/tag/pretrained-weights</a></li> </ul> <h1>MIL Model</h1> <ul> <li>ABMIL</li> <li>DSMIL</li> <li>TransMIL</li> </ul> <h1>Codes</h1> <p>Simplified Version</p> <ul> <li><a href="https://www.kaggle.com/code/zznznb/wsi-train" target="_blank">https://www.kaggle.com/code/zznznb/wsi-train</a></li> <li><a href="https://www.kaggle.com/code/zznznb/wsi-inference-public-0-6-private-0-58" target="_blank">https://www.kaggle.com/code/zznznb/wsi-inference-public-0-6-private-0-58</a></li> </ul> <p>Final Version</p> <ul> <li><a href="https://www.kaggle.com/code/hustzx/2nd-0-61-train-abmil-dsmil-transmil" target="_blank">https://www.kaggle.com/code/hustzx/2nd-0-61-train-abmil-dsmil-transmil</a></li> <li><a href="https://www.kaggle.com/code/hustzx/2nd-0-61-infernece-abmil-dsmil-transmil" target="_blank">https://www.kaggle.com/code/hustzx/2nd-0-61-infernece-abmil-dsmil-transmil</a></li> </ul> <p>Feature Extraction Codes</p> <ul> <li><a href="https://github.com/z1186464862/UBC-OCEAN" target="_blank">https://github.com/z1186464862/UBC-OCEAN</a></li> </ul>
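<p>As a small, hedged illustration of how the cropped patches feed the MIL stage (names such as <code>WSIPatchDataset</code> and <code>feature_extractor</code> are placeholders, not the team's actual code): each slide's patches are pushed through the frozen SSL feature extractor and stacked into one bag of features, which the MIL model then consumes.</p> <pre><code>import torch
from torch.utils.data import DataLoader

@torch.no_grad()
def extract_bag_features(patch_dataset, feature_extractor, device="cuda", batch_size=64):
    loader = DataLoader(patch_dataset, batch_size=batch_size, num_workers=4)
    feature_extractor.eval().to(device)
    feats = [feature_extractor(tiles.to(device)).cpu() for tiles in loader]
    return torch.cat(feats, dim=0)            # (num_patches, feature_dim)

# bag = extract_bag_features(WSIPatchDataset(data_path, wsi_name), dino_vit_small)
# logits = mil_model(bag.unsqueeze(0))        # most MIL heads expect (1, N, D)
</code></pre>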
SenNet + HOA - Hacking the Human Vasculature in 3D
Rank 46 Solution - keep the CV discipline
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Our solution</h2> <p>Thanks to the organisers for the learning opportunities and thanks <a href="https://www.kaggle.com/shashwatraman" target="_blank">@shashwatraman</a> for helping me in this comp.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F12870466%2Fcd4079e4c550c311d7a36b823d1dfd17%2FScreenshot%20from%202024-02-07%2000-59-23.png?generation=1707264535063295&amp;alt=media"></p> <p><strong>Submission Picking rule:</strong><br> We played against the shakeup by keeping the CV simple and the one that made the most sense for us: validation on kidney_3 dense. We didn't trust the LB at all in this competition and decided to pick the submission that followed the following criteria:</p> <ul> <li>Tiling Model - avoid weird shapes of the private LB</li> <li>Used interpolation augs (resizing and random resize crop) - resolution shift</li> <li>Last epoch weights (no checkpointing) - avoid overconfident CV</li> </ul> <p><strong>After that, the training protocol is very simple and there is no post/preprocessing done.</strong></p> <ul> <li>2D unet</li> <li>Tiling as mentionned before (384x384)</li> <li>maxvit_small_384 single model (no ensembling) (we didn't have time xd)</li> <li>15 epochs</li> <li>volume norm</li> <li>light augs </li> <li>Low threshold: 0.1</li> </ul> <p><strong>Takeaways:</strong><br> Most of the things we tried in the comp didn't work, the things that worked best on CV was the 2.5d and 3d approaches, but I didn't want to trust that the z axis resolution was going to allow such models.<br> I thought about distilling those preds to a 2D model but i didn't have the time for it.<br> I would love to participate in another 3D comp where everything is more stable and where I can use a bit more imagination than this one, where most of the score came from having the discipline of not looking at the LB. <br> Even if I'm proud of our CV, our best submission scored 0.615, and if I really stuck to my rules, I would have picked it, but the LB for it was so low I did coward away from taking it, lesson learned.</p>
SenNet + HOA - Hacking the Human Vasculature in 3D
14th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Overview</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3823496%2F6a7d79a9543e55064e881b6961b14061%2Foverview.jpg?generation=1707324869303897&amp;alt=media" alt="overview"></p> <ul> <li>2.5D segmentation model that inputs N consecutive slices stacked in ch direction and outputs corresponding Nch masks. </li> <li>Input images are cropped to the kidney area only and then resized. </li> </ul> <h1>Pipeline Detail  </h1> <h2>1. Preprocess</h2> <h3>1-1. Normalization</h3> <ul> <li>A histogram of luminance values is calculated for the entire kidney and normalized based on minimum and maximum values. </li> <li>Normalization based on maximum and minimum values per image unit could cause variations in the appearance of images, resulting in unnatural switching of inference results. To counteract this, normalization based on the luminance distribution of the entire kidney was employed. </li> <li>The code is as follows. </li> </ul> <pre><code> (): img_paths = (glob(os.path.join(image_dir, ))) pixels = np.zeros((,), dtype=np.int64) img_path tqdm(img_paths): img = cv2.imread(img_path, cv2.IMREAD_UNCHANGED) _pixels = np.bincount(img.flatten(), minlength=) pixels += _pixels = hist = [] bins = [] i (, +, ): hist.append(pixels[i:i+].()) bins.append(i) hist = np.array(hist) hist_rate = hist/hist.() idxes = np.where(hist_rate&gt;)[] min_idx = idxes[] max_idx = idxes[-] bins[min_idx]-, bins[max_idx]+ min_val, max_val = get_min_max_val(inference_img_dir) img = cv2.imread(path, cv2.IMREAD_UNCHANGED) img = img.astype() img = np.clip(img, min_value, max_value) img = (img-min_value)/max_value </code></pre> <h3>1-2. Crop</h3> <ul> <li>Obtain a rectangle of the kidney region using a segmentation model that infers a mask of the entire kidney. <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3823496%2F68c25e3b97c72b5b599b5607e8bcff8f%2Fcrop.jpg?generation=1707324937523266&amp;alt=media" alt="crop"></li> <li>The kidney segmentation model used a single model of Unet (backbone: efficientnet_b4) and only kidney_1_dense was used as training data. </li> <li>Cropping only the kidney region eliminates wasted areas in the image and greatly improves the accuracy of vessel segmentation. </li> <li>During training, the height and width of the rectangle were stochastically increased or decreased by ±5% as part of augmentation.</li> </ul> <h3>1-3. Resize</h3> <ul> <li>Because of the strict masking requirements of this competition metric, it was important to resize the image to a larger image size. </li> <li>In my solution, I trained the model by resizing the image to as large as GPU memory would allow, in the range of 1536~1920. </li> <li>If the mask is resized by OpenCV's resize function and then resized back to the original size again, the mask pixels are shifted to the lower right, resulting in a significant loss of accuracy. 
Therefore, care should be taken in resizing.<ul> <li>In my solution, I used an affine transformation that simultaneously translates by 0.5 pixel and scales the image to prevent pixel misalignment.  </li> <li>Incidentally, this idea is strongly influenced by the contrail competition solution. <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3823496%2Fe326568835894e0923ac98172873977f%2Fresize.jpg?generation=1707324969473901&amp;alt=media"></li></ul></li> </ul> <h2>2. Vessel Segmentation</h2> <h3>Model</h3> <ul> <li>Model was Unet (using smp implementation), resnest14d, resnest50d, maxvit_tiny were used for backbone, and ensemble with equal weights was used as final sub. </li> <li>Since I wanted to use depth direction information as well, we employed a 2.5D model that takes an input image consisting of n (5 or 7) consecutive slices stacked in the ch direction and outputs the corresponding n ch masks. </li> </ul> <h3>Data</h3> <ul> <li>The data was kidney_1_dense as training data and kidney_3_dense as validation data. Some models used kidney_2 and Pseudo Labeled data for external data as training data. </li> </ul> <h3>Augmentation</h3> <ul> <li>Use augmentation on rotation, flipping, and brightness (using the albumentations implementation).</li> <li>The shape-changing type augmentaion (e.g., Distortion) was tried but was not used because it worsens the accuracy of both cv/lb.</li> </ul> <h3>Inference</h3> <ul> <li>Inference in each view in XY, XZ, and ZY directions. </li> <li>The accuracy of both cv/lb was increased by inputting a larger size than the image size used for training during inference. </li> <li>The threshold was determined based on CV and used 0.25. </li> </ul> <h3>Summary</h3> <p>The final scores are as follows. </p> <table> <thead> <tr> <th>Model</th> <th>N(ch)</th> <th>train_data</th> <th>validation_data</th> <th>input_size(train)</th> <th>input_size(inference)</th> <th>CV</th> <th>Public</th> <th>Private</th> </tr> </thead> <tbody> <tr> <td>resnest14d</td> <td>7</td> <td>kidney_1_dense</td> <td>kidney_3_dense</td> <td>1920x1920</td> <td>2304x2304</td> <td>0.909</td> <td>0.835</td> <td>0.659</td> </tr> <tr> <td>resnest50d</td> <td>5</td> <td>kideny_1_dense</td> <td>kidney_3_dense</td> <td>1536x1536</td> <td>1920x1920</td> <td>0.903</td> <td>0.818</td> <td>0.599</td> </tr> <tr> <td>maxvit_tiny</td> <td>7</td> <td>kidney_1_dense, kidney_2(pseudo label), extra_data(pseudo label)</td> <td>kidney_3_dense</td> <td>1536x1536</td> <td>2048x2048</td> <td>0.901</td> <td>0.810</td> <td>0.623</td> </tr> <tr> <td>Ensemble</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>0.913</td> <td>0.824</td> <td>0.645</td> </tr> </tbody> </table>
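<p>A minimal sketch of the half-pixel-aware resize described above (my own illustration, not the author's exact code): the scale factors and a 0.5-pixel offset are folded into a single affine warp so that pixel centers stay aligned and the resized mask does not drift toward the lower right.</p> <pre><code>import cv2
import numpy as np

def resize_center_aligned(img, out_h, out_w):
    in_h, in_w = img.shape[:2]
    sx, sy = out_w / in_w, out_h / in_h
    # Map input pixel centers to output pixel centers: x' = sx * x + 0.5 * (sx - 1)
    M = np.float32([[sx, 0, 0.5 * (sx - 1)],
                    [0, sy, 0.5 * (sy - 1)]])
    return cv2.warpAffine(img, M, (out_w, out_h), flags=cv2.INTER_LINEAR)
</code></pre> <p>For binary masks, the linearly interpolated result can simply be thresholded back to {0, 1} afterwards.</p>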
SenNet + HOA - Hacking the Human Vasculature in 3D
🥈24th Place Solution(A potential solution for achieving private 0.70)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>TLDR</h1> <p>The key factors in enhancing the model's robust performance included <strong>Tversky loss</strong>, <strong>increased inference size</strong>, and <strong>resolution augmentation</strong>. These elements ultimately played a significant role in the model's survival during shakeups. The final model is an ensemble composed of 2D U-Net models based on the RegNetY-016 architecture</p> <h1>Interesting Point</h1> <p>It was a truly challenging competition to secure a reliable validation set (which, unfortunately, I couldn't achieve). Upon reviewing the results, I discovered a significant difference between the public and private sets. Surprisingly, there were submissions from the past that would have made it into the gold zone. It's astonishing, considering I didn't think much of those submissions and didn't end up submitting them anyway.<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F8251891%2Fe5a2fef330678d164bc8ff54d06c3e34%2F2024-02-07%2021-58-54.png?generation=1707310804960891&amp;alt=media"></p> <p>In my experiments, I found that giving a higher weight to the positive class in the Tversky loss (with a larger beta) resulted in better performance on the private leaderboard. However, for my final submission, where I trained with a smaller beta value in the Tversky loss, the model performed well on the public leaderboard but surprisingly poorly on the private leaderboard. <strong>This discrepancy may be attributed to the higher resolution of the private dataset, which likely contained more detailed input information. Consequently, it suggests that a lower threshold for the model logits might have been more appropriate in this context. Furthermore, scaling the image (1.2, 1.5, …) during inference seemed to dilute input information, reducing the impact of resolution.</strong></p> <h1>Data Processing</h1> <p>Three consecutive slices were used as input for the model, and training was performed with the original image size. Despite experimenting with an increased number of slices (5, 7, 9…), the results revealed a decrease in performance.</p> <p>In the early stages of the competition, I trained the model by resizing the data to a specific size. However, as I mentioned in the <a href="https://www.kaggle.com/competitions/blood-vessel-segmentation/discussion/463121" target="_blank">discussion</a>, the Resize function in the Albumentations library applies nearest interpolation to masks, causing significant noise in fine labels and resulting in a notable decrease in performance. Therefore, I opted to use the original image size.</p> <h1>Augmentation</h1> <p>Awaring of the different resolutions between public and private data, I aimed to create a model robust to resolution variations. Additionally, understanding the existence of resizing during the binning process, I employed blur augmentation. 
</p> <pre><code>def blur_augmentation(x):
    # Downscale by a random factor and resize back, simulating the lower effective
    # resolution introduced by binning (the exact scale range was lost in formatting
    # and is an assumption here).
    h, w, _ = x.shape
    scale = np.random.uniform(0.5, 1.0)
    x = A.Resize(int(h * scale), int(w * scale))(image=x)['image']
    x = A.Resize(h, w)(image=x)['image']
    return x
</code></pre> <p>Moreover, since the channels were constructed by stacking depth, I applied the following augmentations.</p> <pre><code>def channel_augmentation(x, prob=0.5, n_channel=3):
    # Randomly reverse the order of the stacked depth channels.
    assert x.shape[2] == n_channel
    if np.random.rand() &lt; prob:
        x = np.flip(x, axis=2)
    return x
</code></pre> <p>Finally, given the inherent noise in the data annotations, I implemented strong cutout augmentation to prevent overfitting.</p> <pre><code>A.Cutout(num_holes=8, max_h_size=128, max_w_size=128, p=0.8),
</code></pre> <p>These approaches effectively contributed to performance improvement in both CV and LB.</p> <h1>Model</h1> <p>The model, like most others, employed a 2D U-Net architecture. For the CNN backbone, I utilized the lightweight model RegNetY-016. Other than that, the settings remained consistent with the default values in SMP (Segmentation Models PyTorch).</p> <p>Despite investing significant time in developing a 3D-based model, it failed to demonstrate notable score improvements in both CV and LB. Due to resource constraints, I shifted my focus to a 2D model.</p> <pre><code>class CustomModel(nn.Module):
    def __init__(self):
        super(CustomModel, self).__init__()
        self.n_classes = 1
        self.in_chans = 3
        # Backbone name and keyword arguments were lost in formatting; they are
        # reconstructed here from the description above (RegNetY-016, SMP defaults).
        self.encoder = timm.create_model(
            'regnety_016', pretrained=True, features_only=True, in_chans=self.in_chans,
        )
        encoder_channels = tuple(
            [self.in_chans]
            + [
                self.encoder.feature_info[i]['num_chs']
                for i in range(len(self.encoder.feature_info))
            ]
        )
        self.decoder = UnetDecoder(
            encoder_channels=encoder_channels,
            decoder_channels=(256, 128, 64, 32, 16),
            n_blocks=5,
            use_batchnorm=True,
            center=False,
            attention_type=None,
        )
        self.segmentation_head = SegmentationHead(
            in_channels=16,
            out_channels=self.n_classes,
            activation=None,
            kernel_size=3,
        )
        self.train_loss = smp.losses.TverskyLoss(mode='binary', alpha=0.1, beta=0.9)
        self.test_loss = smp.losses.DiceLoss(mode='binary')

    def forward(self, batch, training=False):
        # Batch keys ('image', 'mask') are assumptions; the originals were lost in formatting.
        x_in = batch['image']
        enc_out = self.encoder(x_in)
        decoder_out = self.decoder(*[x_in] + enc_out)
        x_seg = self.segmentation_head(decoder_out)
        output = {}
        one_hot_mask = batch['mask'][:, None]
        if training:
            loss = self.train_loss(x_seg, one_hot_mask.float())
        else:
            loss = self.test_loss(x_seg, one_hot_mask.float())
        output['loss'] = loss
        output['pred'] = nn.Sigmoid()(x_seg)[:, 0]
        return output
</code></pre> <h1>Train</h1> <ul> <li>Scheduler : lr_warmup_cosine_decay </li> <li>Warmup Ratio : 0.1</li> <li>Optimizer : AdamW </li> <li>Weight Decay : 0.01</li> <li>Epoch : 20</li> <li>Learning Rate : 2e-4</li> <li>Loss Function : TverskyLoss(mode='binary', alpha=0.1, beta=0.9)</li> <li>Batchsize : 4</li> <li>Gradient Accumulation : 4</li> </ul> <h1>Inference</h1> <p>Scaling the image size by 1.5x during inference consistently resulted in score improvements in CV, public LB, and private LB. This acted as a form of dilation, significantly reducing false negatives and enhancing the model's performance. <strong>Simply increasing the image size by 1.2x for inference resulted in a 0.1 improvement on the private leaderboard.</strong> (A small sketch of this scaled-inference trick is appended after the Code section below.)</p> <h1>Didn't Work</h1> <ul> <li>I attempted to enhance the utility of kidney2 and kidney3 through pseudo-labeling, but it did not result in significant score improvement.</li> <li>Efforts to create a more robust model using heavier augmentations did not yield substantial effects.</li> <li>Experimenting with larger CNN models led to issues of overfitting.</li> </ul> <h1>Code</h1> <p><a href="https://github.com/siwooyong/SenNet-HOA-Hacking-the-Human-Vasculature-in-3D" target="_blank">https://github.com/siwooyong/SenNet-HOA-Hacking-the-Human-Vasculature-in-3D</a></p>
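<p>As that appendix, here is a minimal sketch (not taken from the linked repository) of inference with input upscaling: the slice is enlarged before being fed to the model and the logits are resized back to the original resolution before thresholding. The tensor layout, the helper name, and the interpolation settings are assumptions.</p> <pre><code>import torch
import torch.nn.functional as F

@torch.no_grad()
def predict_with_upscaling(model, image, scale=1.5):
    # image: float tensor of shape (1, C, H, W); returns per-pixel probabilities at (H, W).
    _, _, h, w = image.shape
    # Upscaling the input acts like a mild dilation and reduces false negatives.
    big = F.interpolate(image, scale_factor=scale, mode='bilinear', align_corners=False)
    logits = model(big)
    # Resize the logits back to the original resolution before thresholding.
    logits = F.interpolate(logits, size=(h, w), mode='bilinear', align_corners=False)
    return torch.sigmoid(logits)[0, 0]
</code></pre>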
LLM - Detect AI Generated Text
[5th Place Solution in Efficiency Prize] 327th Place in Private LB
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would like to thank Kaggle and THE LEARNING AGENCY LAB for hosting such an exciting competition. <br> I am really happy to be in 5th place for the efficiency prize.<br> Let me explain my solution. </p> <p><strong>Solution Summary:</strong></p> <p><strong>Baseline</strong><br> I used the following public notebooks.<br> <a href="https://www.kaggle.com/code/hubert101/0-960-phrases-are-keys" target="_blank">https://www.kaggle.com/code/hubert101/0-960-phrases-are-keys</a><br> Thanks to <a href="https://www.kaggle.com/hubert101" target="_blank">@hubert101</a> and everyone who shared their notebooks!</p> <p><strong>Datasets</strong><br> The following datasets were used.<br> <a href="https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset" target="_blank">https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset</a><br> Thanks to <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a> !<br> Since this dataset contained multiple sources, we selected only those sources that pushed the LB higher and used only 10 sources. Three of the sources used only specific prompts.</p> <p><strong>Encoding</strong><br> I used the Byte-Pair Encoding tokenizer without any changes from my notebook.<br> However, I applied the following code to speed up the process. I doubt if it was effective.</p> <pre><code>num_batches = (len(dataset) + batch_size - 1) // batch_size
for batch_index in range(num_batches):
    start_index = batch_index * batch_size
    end_index = min((batch_index + 1) * batch_size, len(dataset))
    yield dataset[start_index:end_index]["text"]
</code></pre> <p><strong>Model</strong><br> The model was an ensemble of:<br> 3 types of ComplementNB,<br> 3 types of SGDClassifier, and<br> one LGBMClassifier.<br> SGD used class_weight, and LGBM's n_iter was reduced to 300 to cut inference time. (A small illustrative sketch of this kind of ensemble is appended at the end of this post.)</p> <p>I am really happy because I did not expect it.</p> <p>By the way, please let me know if you know anything about this.<br> My team name is in 5th place on the Efficiency LB, but I haven't heard anything from Kaggle staff; should I just wait for an email?<br> Do I need to contact Kaggle staff in any way?<br> This is my first prize and I am nervous every day that it might be a mistake!</p> <p>Thank you!</p> <p><strong>Code</strong><br> <a href="https://www.kaggle.com/code/ootake/efficiency-lb-5th-place-solution" target="_blank">https://www.kaggle.com/code/ootake/efficiency-lb-5th-place-solution</a></p>
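<p>Here is that sketch: a minimal, hypothetical soft-voting ensemble of ComplementNB, SGDClassifier, and LGBMClassifier on top of TF-IDF features. The vectorizer settings, hyperparameters, and weights are placeholders, not the actual configuration used.</p> <pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import ComplementNB
from sklearn.linear_model import SGDClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.pipeline import make_pipeline
from lightgbm import LGBMClassifier

vectorizer = TfidfVectorizer(ngram_range=(3, 5), sublinear_tf=True)

ensemble = VotingClassifier(
    estimators=[
        ("cnb", ComplementNB(alpha=0.02)),
        ("sgd", SGDClassifier(loss="modified_huber", class_weight="balanced",
                              max_iter=8000, tol=1e-4)),
        ("lgbm", LGBMClassifier(n_estimators=300, learning_rate=0.05)),
    ],
    voting="soft",            # average predicted probabilities
    weights=[0.3, 0.4, 0.3],  # placeholder weights
    n_jobs=-1,
)

model = make_pipeline(vectorizer, ensemble)
# model.fit(train_texts, train_labels)
# probs = model.predict_proba(test_texts)[:, 1]
</code></pre>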
Child Mind Institute - Detect Sleep States
14th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This competition was a great learning experience for me. I would like to thank the organizers!</p> <h2>Overview</h2> <ul> <li>The neural network was trained to directly predict the timing of events.</li> <li>The training was performed as a regression task, with 'wakeup' set to 1 and 'onset' set to -1 and decayed before and after the occurrence of those events as the target.</li> <li>The neural network structure is a modified version of U-Net with 1D-CNN.</li> <li>Submission file was created by extracting peaks for the average of the model output in the 10-fold CV.</li> <li>Post-processing before peak extraction improved the CV score and Public LB Score, but had little effect on the Private LB Score.</li> <li>The public/private scores of my final submission are 0.755/0.821 without post-processing, and 0.770/0.824 with post-processing.</li> </ul> <p>The notebooks of my solution are as follows:</p> <ul> <li><a href="https://www.kaggle.com/code/rsakata/cmi-14th-place-solution-train" target="_blank">https://www.kaggle.com/code/rsakata/cmi-14th-place-solution-train</a></li> <li><a href="https://www.kaggle.com/code/rsakata/cmi-14th-place-solution-inference" target="_blank">https://www.kaggle.com/code/rsakata/cmi-14th-place-solution-inference</a></li> </ul> <p>The public/private scores of the above notebook are 0.748/0.822 without post-processing, and 0.768/0.822 with post-processing. The differences appear to be due to minor changes made in the process of refactoring the code or simply due to randomness. (I consider it not essential.)</p> <h2>Input</h2> <p>The data for each series_id was divided into daily segments and input to the neural network. To be precise, however, in order to avoid the influence of padding in the CNN, the data of the previous and following days were partially combined and input.</p> <p>The number of input channels to the neural network is three, as follows:</p> <ul> <li>logarithm of the 2-minute moving standard deviation of 'angelz'</li> <li>logarithm of 'enmo'</li> <li>flag to identify dummy data</li> </ul> <p>Like the others, dummy data was detected by duplicating (anglez, enmo, time) in the same series_id. Predictions for dummy data were set to 0 before calculating the loss, so as not to affect backpropagation. (Therefore, the third input is not that important.)</p> <p>Features related to time were not added to the input to the neural network, and all temporal trends were considered in post-processing.</p> <h2>Target</h2> <p>In the evaluation metrics of this competition, scores can be obtained even if the timing of event occurrence detection is slightly off, so the target should reflect not only the moment of event occurrence but also its surroundings when training.<br> On the other hand, however, insufficient target decay tended to make the timing of peaks only roughly predictable, leading to worse scores at smaller thresholds. 
After searching for the method that would yield the best validation score, I finally adopted an exponentially decaying form, as shown in the figure below.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F169364%2F40d85921f006a2b2c7c9522d8138d4f5%2F__results___5_0.png?generation=1702044292018443&amp;alt=media" alt=""></p> <h2>Model Architecture</h2> <p>The architecture of my neural network is based on the following notebook from another competition, written by K_mat (2nd place in this competition).<br> <a href="https://www.kaggle.com/code/kmat2019/u-net-1d-cnn-with-keras" target="_blank">https://www.kaggle.com/code/kmat2019/u-net-1d-cnn-with-keras</a></p> <p>The input granularity is every 5 seconds, but the output granularity is every minute, which differs from a regular U-Net. I tried various structures in terms of the number of layers, number of channels, kernel size, etc., but the search was limited and I believe that better configurations exist.<br> For details on the structure of the model, please see the notebook.</p> <h2>Post-processing</h2> <p>No time information was input to the neural network, so it was taken into account in post-processing. The minute-by-minute scores output by the neural network were multiplied by weights according to the trend by time of day.<br> Specifically, they are as follows:</p> <ul> <li>probability of each event by time of day across all series_id</li> <li>probability of each event by 'minute mod 15'</li> <li>average of scores by time of day for each series_id (to capture periodicity at the series_id level)</li> </ul> <p>In addition, as a post-processing step unrelated to time, the scores were multiplied by a coefficient based on the percentage of dummy data. This is because peaks tended to be somewhat harder to locate accurately for series_ids with more dummy data, and I aimed to improve PR-AUC by placing more confident predictions at the top of the list.</p> <p>Peak extraction using the <code>scipy.signal.find_peaks</code> function was performed on the adjusted scores. A two-step strategy was used, first extracting the larger peaks and then the smaller peaks. The former extracts approximately one peak per day, while the latter extracts many more. By amplifying the former scores, I aimed to improve the PR-AUC. (A minimal sketch of this two-step extraction follows below.)</p> <p>These post-processing steps improved the CV score by about 0.005 to 0.01, but unfortunately, as mentioned earlier, their impact on the Private LB Score was limited.</p>
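<p>Here is that sketch: a minimal illustration of two-step peak extraction with <code>scipy.signal.find_peaks</code>. The height and distance values and the amplification factor are assumptions for illustration, not the settings used in this solution.</p> <pre><code>import numpy as np
from scipy.signal import find_peaks

def extract_event_candidates(scores):
    # scores: 1D array of per-minute event scores for one series_id.
    # Step 1: confident peaks, roughly one per day (enforced by a large distance).
    big_idx, _ = find_peaks(scores, height=0.5, distance=12 * 60)
    # Step 2: all remaining smaller peaks.
    small_idx, _ = find_peaks(scores, height=0.01, distance=5)
    candidates = {int(i): float(scores[i]) for i in small_idx}
    # Amplify the confident peaks so they rank first, which helps PR-AUC.
    for i in big_idx:
        candidates[int(i)] = float(scores[i]) * 2.0
    order = sorted(candidates, key=candidates.get, reverse=True)
    return np.array(order), np.array([candidates[i] for i in order])
</code></pre>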
SenNet + HOA - Hacking the Human Vasculature in 3D
132th Solution: 2D UNet with p1-p99 normalization
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>A simple tu-seresnext50_32x4d UNet with volumes normalized on the 1st and 99th percentiles. Inference accumulated votes over 3 axes x 4 rot90 orientations of the smallest possible square, padded with reflections. Threshold = 6 of the 12 votes (i.e., 0.5).<br> <a href="https://www.kaggle.com/code/sacuscreed/smp-2d-inference-back-to-basics?scriptVersionId=161523508" target="_blank">https://www.kaggle.com/code/sacuscreed/smp-2d-inference-back-to-basics?scriptVersionId=161523508</a> <br> Training used kidney_1_dense and kidney_3_dense, with 512x512 crops centred on the label centroids, also padded with reflections, plus basic augmentations. Batches of 8 (4 from each kidney) for 50 epochs without validation:<br> <a href="https://www.kaggle.com/code/sacuscreed/143th-solution" target="_blank">https://www.kaggle.com/code/sacuscreed/143th-solution</a></p>
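<p>A minimal illustrative sketch of the percentile normalization mentioned above (not the notebook's code): clip the volume to its 1st and 99th percentile intensities and rescale to [0, 1].</p> <pre><code>import numpy as np

def normalize_p1_p99(volume):
    # Clip a CT volume to its 1st/99th percentile intensities and scale to [0, 1].
    p1, p99 = np.percentile(volume, [1, 99])
    volume = np.clip(volume, p1, p99)
    return (volume - p1) / (p99 - p1 + 1e-6)
</code></pre>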
SenNet + HOA - Hacking the Human Vasculature in 3D
7th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First and foremost, I'd like to extend my gratitude to Kaggle and the competition organizers for creating such a compelling event. Despite joining the contest relatively late, I was able to quickly get up to speed thanks to insightful discussions by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> , the informative videos by <a href="https://www.kaggle.com/yoyobar" target="_blank">@yoyobar</a> , and the vibrant exchanges within the community.</p> <h2>Overview</h2> <p>I employed a hybrid approach for the model, utilizing 2.5D images as input. The architecture combines a 2D Unet framework with 3D convolutional layers. From my research and the community's insights, it seemed that a full 3D model might offer superior results compared to 2D models. However, due to the high computational costs associated with 3D models, I choose a blend of 3D convolutions within a 2D Unet structure, striking a balance between efficiency and performance.</p> <h2>Data Preparation</h2> <h3>Generating 3D Rotational Slices</h3> <p>I augmented the dataset with 3D rotation. The process begins by assembling the images into a 3D volume, followed by rotating two axes and extracting slices along the remaining axis. The rotation angles used are as follows:</p> <pre><code>rotation_angles = [ [, ], [, -], [-, ], [-, -], [, ], [, -], [-, ], [-, -], [, ], [, -], [-, ], [-, -] ] </code></pre> <p>Post-rotation, some slices exhibited increased areas of black background. To maintain data quality, I retained only those slices where the target segmentation was present and the black background constituted less than 50% of the slice area. </p> <h4>Rotate data sample</h4> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2369671%2F2d8dc94653829769d2cc750903ab67de%2Fk1_z_pseudo_rotset7_1257.jpg?generation=1707579928485450&amp;alt=media"></p> <h3>Pseudo Labeling</h3> <p>The pseudo labeling process involved:</p> <ol> <li>Generating additional slices for kidney1 and kidney3 using the aforementioned technique.</li> <li>Training a model with the augmented dataset.</li> <li>Applying the model to pseudo label kidney2, followed by generating extra slices for it in a similar manner.</li> </ol> <h2>Model</h2> <h3>Architecture</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2369671%2F2152324997361e303fb5fdfed3811b7d%2FScreenshot%202024-02-10%20at%2023.52.55.png?generation=1707579666652474&amp;alt=media"></p> <h3>Training and Inference Details</h3> <h4>Training</h4> <p>The training setup was as follows:</p> <ol> <li>The model takes a 3-channel 2.5D image as input and outputs a 3-channel prediction.</li> <li>Normalize the input base on the std of each kidney</li> <li>I used a combination of loss functions: BCEWithLogitsLoss, DiceLoss, and FocalLoss. 
The loss for each of the three channels was calculated separately, with the middle channel assigned a higher weight of 0.9.</li> <li>Optimizer: AdamW</li> <li>Scheduler: CosineAnnealingLR</li> <li>Images were cropped to a size of 1024x1024 for processing.</li> </ol> <h4>Inference</h4> <p>For inference:</p> <ol> <li>A single model was used for predictions.</li> <li>The model operated at the original image resolution.</li> <li>Similar to training, the input comprised 3-channel 2.5D images, with the output being a 3-channel prediction, primarily focusing on the middle channel.</li> <li>The input was normalized based on the std of each kidney.</li> <li>Predictions were made along the x, y, and z axes, and the results were averaged.</li> <li>Test Time Augmentation (TTA) included horizontal flipping.</li> <li>Post-processing involved applying a threshold of 0.2, followed by a 3D closing operation (see the sketch after this list).</li> </ol>
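<p>Here is that sketch: a minimal illustration of axis-wise prediction averaging followed by thresholding and a 3D closing. The <code>predict_slices</code> helper is hypothetical (it stands for running the 2.5D model over a stack of slices), and the structuring-element size is an assumption.</p> <pre><code>import numpy as np
from scipy.ndimage import binary_closing

def predict_volume(volume, predict_slices):
    # volume: (Z, Y, X) array; predict_slices returns per-voxel probabilities
    # with the same shape as its input (hypothetical helper).
    pred_z = predict_slices(volume)                                        # slices along z
    pred_y = np.moveaxis(predict_slices(np.moveaxis(volume, 1, 0)), 0, 1)  # slices along y
    pred_x = np.moveaxis(predict_slices(np.moveaxis(volume, 2, 0)), 0, 2)  # slices along x
    prob = (pred_z + pred_y + pred_x) / 3.0
    mask = prob &gt; 0.2
    return binary_closing(mask, structure=np.ones((3, 3, 3)))
</code></pre>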
LLM - Detect AI Generated Text
[3rd Place Efficiency Prize] Simple Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, I want to thank Kaggle and THE LEARNING AGENCY LAB for holding such an interesting competition. I also want to thank <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a> for providing such a wonderful <a href="https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset" target="_blank">training dataset</a>.</p> <p>I joined this competition in late December. At that time the leaderboard scores were already very high, so I decided to go for the efficiency prize instead. Later on I found teammates in the discussions.</p> <p>The baseline model we chose is <a href="https://www.kaggle.com/code/hubert101/0-960-phrases-are-keys/notebook?scriptVersionId=153589869" target="_blank">the well-known 0.960 notebook</a>, which improves the performance of TF-IDF based on the idea from <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a>. The runtime of the baseline model is more than 2 hours, so there was a lot of room to optimize. The optimization consisted of three phases.</p> <h1><strong>Runtime optimization</strong></h1> <ol> <li>Remove super weak estimators with long execution times. After randomizing the weights of the ensemble models, it became clear that the contribution of lgb was very small. Removing lgb caused roughly 0.001 performance degradation on the public LB but saved 1 hour of runtime.</li> <li>Reduce the number of features. The (3,5) n-gram sparse matrix contains roughly 4 million features while (3,4) contains roughly half of them. Changing (3,5) to (3,4) decreased performance by roughly 0.003 but saved half the runtime.</li> <li>Reduce the training iterations of CatBoost. The other linear models train and infer almost instantly, so reducing the CatBoost iterations from 1000 to 100 and then to 10 saved most of the time while performance stayed almost the same.</li> <li>Remove another weak estimator, MNB. Removing MNB gave a 0.005 improvement in local CV and 0.003 on the public leaderboard.</li> <li>Use Bayesian optimization to search for new CatBoost parameters for 10 iterations.</li> </ol> <pre><code>sgd = SGDClassifier(max_iter=8000, tol=1e-4, loss="modified_huber", random_state=2000)
cat = CatBoostClassifier(iterations=10,
                         verbose=0,
                         depth=4,
                         l2_leaf_reg=7.34674002393291,
                         learning_rate=0.5996646210492591,
                         allow_const_label=True, loss_function='CrossEntropy')
</code></pre> <p>With all the optimizations mentioned above, the runtime was reduced to 8 minutes and the AUC was above 0.956 on the public leaderboard.</p> <h1><strong>Another ensembling</strong></h1> <p>Thanks to <a href="https://www.kaggle.com/aerdem4" target="_blank">@aerdem4</a> for sharing a brilliant idea for <a href="https://www.kaggle.com/code/aerdem4/daigt-superfast-unsupervised-baseline" target="_blank">unsupervised clustering</a>: the public score could be improved by roughly 0.0015 while the runtime only increased by 60 seconds. However, this caused a performance degradation in the private score. 
In my opinion, the degradation is possibly caused by an imbalance in the hidden test data.</p> <h1><strong>Clean code</strong></h1> <p>As mentioned above, the runtime is already very low. Cleaning the code and removing unnecessary print statements saved roughly another 20 seconds of runtime.</p> <h1><strong>Code</strong></h1> <p><a href="https://www.kaggle.com/code/chg0901/v69-957-0-955-phrases-are-keys" target="_blank">https://www.kaggle.com/code/chg0901/v69-957-0-955-phrases-are-keys</a></p>
SenNet + HOA - Hacking the Human Vasculature in 3D
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>What's happened? My name is written at 2nd place? I remember my public score was 0.43 and my place was 1052…</p> <h1>Overview</h1> <p>My solution consists of a U-Net3D (128x128x32), threshold adjustment, and <strong>post-processing</strong> to remove unconnected vessels, as they are false positives.</p> <ul> <li>Data augmentation using random rotation (and position), same as the 1st place solution.</li> <li>U-Net3D. I assumed that 3D would be more accurate because it provides more information. I think this assumption might be wrong, since the 1st place solution uses 2.5D.</li> <li>Binary focal loss, since there is little positive data.</li> <li>Threshold adjustment. Since the volume ratio of blood vessels is not so different between persons, the threshold is set according to that ratio (a minimal sketch of this idea is appended at the end of this post).</li> <li><strong>Post-processing</strong>. Since blood vessels are supposed to be connected, extract small chunks with depth-first search and remove them.</li> </ul> <p><a href="https://github.com/tail-island/blood-vessel-segmentation" target="_blank">https://github.com/tail-island/blood-vessel-segmentation</a><br> <a href="https://www.kaggle.com/code/ojimaryoji/sennet-hoa-2nd-place-solution?scriptVersionId=159388443" target="_blank">https://www.kaggle.com/code/ojimaryoji/sennet-hoa-2nd-place-solution?scriptVersionId=159388443</a></p> <h1>Data</h1> <p>To make it easier to cut out the data, I created a 3D Numpy array and adjusted the scale. I created <em>all</em> and <em>dense</em> data because I was planning to do curriculum learning in the order of <em>all</em> to <em>dense</em>. However, since it took a long time to train on my PC, I only trained on sparse data this time. Also, I did not normalize or clip the data, because I thought there should not be a big difference since the data is visible to the human eye.</p> <p><a href="https://github.com/tail-island/blood-vessel-segmentation/blob/main/src/create_volumetric_images.py" target="_blank">https://github.com/tail-island/blood-vessel-segmentation/blob/main/src/create_volumetric_images.py</a></p> <h1>Train</h1> <p>I generated data from random positions and rotations every <em>n</em> epochs. To reduce data generation time, I used multiple processes.</p> <p><a href="https://github.com/tail-island/blood-vessel-segmentation/blob/main/src/dataset.py" target="_blank">https://github.com/tail-island/blood-vessel-segmentation/blob/main/src/dataset.py</a></p> <p>The neural network is a U-Net3D.</p> <p><a href="https://github.com/tail-island/blood-vessel-segmentation/blob/main/src/model.py" target="_blank">https://github.com/tail-island/blood-vessel-segmentation/blob/main/src/model.py</a></p> <p>I used binary focal loss. The optimizer is AdamW and the learning rate is scheduled by cosine decay.</p> <p><a href="https://github.com/tail-island/blood-vessel-segmentation/blob/main/src/train_0.py" target="_blank">https://github.com/tail-island/blood-vessel-segmentation/blob/main/src/train_0.py</a></p> <h1>Submit</h1> <p>Prediction is made by tiling. 
get_candidate() finds candidates with a given ratio and <strong>get_blood_vessels()</strong> removes small unconnected chunks.</p> <p><a href="https://github.com/tail-island/blood-vessel-segmentation/blob/main/src/submit.py" target="_blank">https://github.com/tail-island/blood-vessel-segmentation/blob/main/src/submit.py</a></p> <p>Searching the big blood vessel chunk (and clip) version, the private score is 0.756793 and the public score is <strong>0.000000</strong>…</p> <p><a href="https://github.com/tail-island/blood-vessel-segmentation/blob/main/src/submit_.py" target="_blank">https://github.com/tail-island/blood-vessel-segmentation/blob/main/src/submit_.py</a></p> <p>Other scores…</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5895718%2Ff7be4975a186ddc226fd463f32632d0a%2Fscores.png?generation=1707479299719710&amp;alt=media"></p> <h1>What's happened?</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5895718%2Fc3db76cc66c0f14a28e86393c735b925%2Fpublic-private.png?generation=1707470652290411&amp;alt=media"></p> <p>What's happened?</p>
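<p>To illustrate the ratio-based threshold adjustment mentioned in the overview, here is a minimal sketch (an illustration, not the repository's code, with an assumed target ratio): the probability cutoff is chosen so that the fraction of voxels predicted as vessel matches a vessel-volume ratio estimated from the labeled kidneys.</p> <pre><code>import numpy as np

def threshold_by_volume_ratio(probabilities, target_ratio=0.005):
    # target_ratio is a placeholder for the vessel-volume ratio estimated from training data.
    flat = probabilities.ravel()
    k = max(1, int(len(flat) * target_ratio))
    # The cutoff is the k-th largest probability, so roughly target_ratio of voxels pass.
    threshold = np.partition(flat, -k)[-k]
    return probabilities &gt;= threshold
</code></pre>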
Linking Writing Processes to Writing Quality
Bronze medal journey
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Linking Writing Processes to Writing Quality <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, thanks to organizers and Kaggle team for hosting such a great competition. It was a pleasure to see many amazing ideas and notebooks shared by contestants during the last several months. Since we all had some concerns about the grading subjectiveness, metric, distribution of private test data, etc. for this competition I decided to follow several independent approaches.</p> <h1>1. Conservative model</h1> <p>This model used selected features shared in the excellent <a href="https://www.kaggle.com/code/awqatak/silver-bullet-single-model-165-features" target="_blank">Silver Bullet</a> notebook. My improvements were</p> <ul> <li>streamlining CV scheme;</li> <li>adding punctuation features (didn't work);</li> <li>adding mouse events features (didn't work);</li> <li>minor tuning of LightGBM model (worked on both CV and LB);</li> <li>designing XGBoost model (worked on CV, but didn't translate to LB);</li> <li>designing CatBoost model (worked on both CV and LB);</li> <li>trying many other things like regularization, etc. (most of them didn't work, but some did).</li> </ul> <p>Model's public/private scores: 0.581/0.576.</p> <h1>2. Advanced model</h1> <p>This model was a blend of many LightGBM, XGBoost and NN models. Most of the models were highly tuned, some with grid search, some with Optuna. This model had my best public score and I was hoping it will also perform well on private LB due to its diversity.</p> <p>Model's public/private scores: 0.577/0.570.</p> <h1>3. Random model</h1> <p>Since for this competition we could select upto three submissions, I just randomly <a href="https://www.kaggle.com/code/kononenko/lgbm-x2-nn-fusion" target="_blank">blended</a> two promising public notebooks. Ironically, this random blend performed best when going from public to private. It could even be in silver, if tuned a little bit more, but I didn't spend time on it as it was only a random guess.</p> <p>Model's public/private scores: 0.578/0.568.</p> <h1>Conclusion</h1> <p>When the private LB was revealed, my rank dropped by ~100 places. Despite the decent shake-up, Model #3 appeared to be in bronze. However, what still puzzles me is why for all of my models, and also for many shared notebooks, <code>CV score &gt; Public score &gt; Private score</code>. Were hard cases manually moved to training data or this is just a coincidence?</p> <p>Anyways, I would like to take an opportunity to thank again to host, Kaggle and community. It was a fun competition and I'm looking forward to the next ones.</p>
LLM - Detect AI Generated Text
Winning Solution 34th Solution (Possible 22nd Solution) LB Public: 0.954 LB Private: 0.92
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>This project was made as an extension to the ML@Berkeley NMEP (New Member Education Program).</strong><br> <strong>Please check out our organization if interested!</strong><br> <a href="https://ml.berkeley.edu" target="_blank">https://ml.berkeley.edu</a><br> <a href="https://www.linkedin.com/company/machine-learning-at-berkeley/mycompany/" target="_blank">https://www.linkedin.com/company/machine-learning-at-berkeley/mycompany/</a></p> <p>Private: 0.92 (Private Best: 0.929) Public: 0.954</p> <p>Here are some approaches that allowed us to achieve a 0.92 private score, ending up at 34th. A similar approach was used in a different notebook to achieve 0.929 (unfortunately this was not selected for the final submission).</p> <p>Referenced and learned a lot from:<br> <a href="https://www.kaggle.com/code/batprem/llm-daigt-excluded-prompts" target="_blank">TF-IDF Model (from public notebook):</a><br> Source: LLM DAIGT excluded prompts<br> Scores: Private - 0.895, Public - 0.963</p> <ol> <li><strong>Ensemble Random Forest</strong><br> Added a random forest model to the ensemble to increase model diversity (used a light weight, similar to MNB).</li> </ol> <pre><code># Estimator names and voting='soft' are reconstructed; the exact strings were lost in formatting.
ensemble = VotingClassifier(estimators=[('mnb', clf),
                                        ('sgd', sgd_model),
                                        ('lgb', lgb),
                                        ('cat', cat),
                                        ('rf', rf_model)],
                            weights=weights, voting='soft', n_jobs=-1)
</code></pre> <ol start="2"> <li><strong>Balance the Distribution and Add Dataset Diversity</strong><br> <a href="https://github.com/panagiotisanagnostou/AI-GA#" target="_blank">https://github.com/panagiotisanagnostou/AI-GA#</a><br> Built a well-balanced and diverse training set by adding extra training data. I checked the distribution of labels and added an additional dataset to balance it. Specifically, AI-GA (AI-Generated Abstracts dataset) was used. </li> </ol> <pre><code># Column name, bin count, and label strings are assumptions; the originals were lost in formatting.
merged_df['label'].hist(bins=3)
plt.xlabel('label')
plt.ylabel('count')
plt.title('Label distribution')
plt.show()
</code></pre> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3441149%2Fa3d01f665cddf97021049d3c65a4e91c%2FScreenshot%202024-01-23%20at%203.05.03%20PM.png?generation=1706051944583747&amp;alt=media"></p> <p>Check that the distribution is now balanced after adding data.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3441149%2F755a2da27673bbd2dd11f9a0bcaa3224%2FWell%20Balanced.png?generation=1706052016460544&amp;alt=media"></p> <p>Please feel free to ask any follow up questions.</p>
Linking Writing Processes to Writing Quality
[617th Solution Write-Up] Summary and Reflection.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Linking Writing Processes to Writing Quality <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h3>1. Conclusion</h3> <p>I am very pleased to have participated in this meaningful competition. Although I did not win a medal after the shakeup, I learned a lot in the feature type table competition. Hope to apply what I have learned next time and achieve better results. <br> My thoughts aren't of much reference value, just simply serve to put a definitive end to this competition and share some findings.</p> <h3>2. Feature Selection</h3> <p>Thanks to these excellent public notebooks: <a href="https://www.kaggle.com/code/hiarsl/feature-engineering-sentence-paragraph-features" target="_blank">Feature Engineering: Sentence &amp; paragraph features</a>, <a href="https://www.kaggle.com/code/awqatak/silver-bullet-single-model-165-features" target="_blank">Silver Bullet | Single Model | 165 Features</a>, <a href="https://www.kaggle.com/code/cody11null/lgbm-x2-nn" target="_blank">LGBM (X2) + NN</a>.</p> <ul> <li><p>Important features:</p> <ol> <li><code>sentence_features</code>, <code>word_features</code>, <code>paragraph_features</code></li></ol> <pre><code>**use word count with different lengths** for word_l in [, , , , , , , , , ]: word_agg_df[f] = df[df[] == word_l].groupby([]).count().iloc[:, ] word_agg_df[f] = word_agg_df[f].fillna() </code></pre> <ol> <li><code>pause_time_features</code></li></ol> <pre><code>** one feature** pauses_zero_sec=pl.().( pl.() &lt; ).(), </code></pre> <ol> <li><code>time-related features</code>, <code>word count features</code>, <code>cursor position features</code></li></ol> <pre><code>temp = df()( pl()(), pl(num_cols)(), pl(num_cols)(), pl(num_cols)(), pl(num_cols)(), pl(num_cols)(), pl(num_cols, )(), pl(num_cols, )() ) </code></pre> <ol> <li><code>gaps = [1]</code></li> <li><code>punctuations count</code></li></ol> <pre><code>def match_punctuations(self, df): tmp_df = df.groupby().agg({: }).reset_index() = () li in tqdm(tmp_df[].): cnt = = (Counter(li).()) item in : , v = item[], item[] in self.punctuations: cnt += v .(cnt) = pd.DataFrame({: }) </code></pre></li> <li><p>Features with not very obvious effects</p> <ol> <li><code>activity count</code>, <code>event_count</code> (Tried using Tf-idf and regular proportion calculations, there was basically no difference.)</li> <li><code>gaps = [10, 20, ..., 50, 100]</code></li></ol></li> </ul> <p>In the end, used about 210 features. I tried more features (for example, constructing 300+, 600+, and 700+ features), but the scores on the leaderboard were poor, only around 0.595+. Therefore, I did not adopt them in the final model. In fact, when there are only about 2,500 training data entries, there shouldn't be too many features.</p> <h3>3. Models</h3> <ul> <li>Used LGBM, XGB, and CB, three traditional tree models, with equal allocation in the model proportions.</li> <li>Unable to obtain desired results with NN and TabNet.</li> </ul> <h3>4. Ideas that Could Not be Realized</h3> <p>After reading this creative discussion <a href="https://www.kaggle.com/competitions/linking-writing-processes-to-writing-quality/discussion/451852" target="_blank">here</a>. 
I tried using a classification model to assist in making certain adjustments to the regression model. </p> <ul> <li>Binary Classification</li> </ul> <pre><code># The score thresholds below are assumptions; the original values were lost in formatting.
count_label0 = 0
count_label1 = 0

def create_binary_score(score):
    global count_label0, count_label1
    if score &lt;= 1.5 or score &gt;= 5.5:   # "marginal" scores
        count_label1 += 1
        return 1
    else:                              # "middle" scores
        count_label0 += 1
        return 0
</code></pre> <p>I hoped to use a binary classification method to differentiate between marginal scores and middle scores, but the final accuracy was only around 82%. After combining it with the regression model, the results were not satisfactory, so I ultimately abandoned this approach.</p> <ul> <li>Five-category classification</li> </ul> <pre><code>def convert_score_to_category(score):
    # The exact boundaries were lost in formatting; the values below are placeholders
    # that only illustrate the structure (scores grouped into five classes).
    if score &lt;= 2.0:
        return 0
    if score == 2.5:
        return 1
    if score == 3.0:
        return 2
    if score == 3.5:
        return 3
    if score == 4.0:
        return 4
    if score &gt;= 4.5:
        return 4
</code></pre> <p>Here, in order to balance the data volume of each label, I set the division method as shown above. Previously, I tried treating each score as a separate category and added weights to minimize the impact of sample imbalance. However, due to the large discrepancy, the model was ultimately unable to train properly. Even when I divided it into the five categories mentioned above, the final classification accuracy was only just over 50%.</p> <p>After reviewing others' solutions, it seems that no one used this idea, indicating that this method indeed does not work well.</p> <h3>5. Summary</h3> <ul> <li>In this competition, it seems that features are not particularly important. Many high-scoring solutions are also based on making minor modifications to the baseline.</li> <li>Extracting more information from the text, and even using language models to construct features, is a very effective approach.</li> <li>Building a trustworthy CV is crucial. In this competition, my CV consistently lacked a strong correlation with the LB, which directly led to the shake-up.</li> </ul> <h3>6. End</h3> <p>Thanks to <code>Kaggle</code> and <code>THE LEARNING AGENCY LAB</code> for hosting a very meaningful competition. The tabular competition has been a process of accumulating experience, and I have learned a lot during this process. Wish everyone good luck.</p>
Linking Writing Processes to Writing Quality
278th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Linking Writing Processes to Writing Quality <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello Kagglers.</p> <p>I placed 278th in this competition using this <a href="https://www.kaggle.com/code/shisa07/lgbm-x2-nn-fusion?scriptVersionId=157068932" target="_blank">notebook</a>. Thanks for sharing.</p> <p>I tried some feature engineering and blending, but the score didn't improve. I don't know if it's useful, but I'll share what I did.</p> <ul> <li>Feature Engineering<ul> <li>Add pause-time features (30, 60, 90 seconds)</li> <li>Changing the vectorizer (TF-IDF)</li></ul></li> <li>Blending<ul> <li>Add other models (random forest, SVM, CatBoost)</li></ul></li> </ul> <p>I assumed that students who write good essays would spend a certain amount of time thinking about the content of their essays, so I added the above-mentioned pause times as features (a small illustrative sketch follows at the end of this post).<br> The reason I used TF-IDF is that it is a statistic intended to reflect how important a word is.</p> <p>Some winners say "feature engineering is all you need", and I feel the same way.</p> <p>Thanks for reading. </p>
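<p>Here is that sketch: a minimal, hypothetical implementation of pause-time features, counting the gaps between consecutive key-down events that exceed 30, 60, and 90 seconds for each essay. The column names (<code>id</code>, <code>down_time</code> in milliseconds) follow the competition's log format, but the implementation itself is an assumption.</p> <pre><code>import pandas as pd

def pause_time_features(logs: pd.DataFrame) -&gt; pd.DataFrame:
    # logs: keystroke log with columns `id` and `down_time` (milliseconds).
    logs = logs.sort_values(["id", "down_time"])
    # Gap (in seconds) between consecutive key-down events within each essay.
    logs["gap_sec"] = logs.groupby("id")["down_time"].diff() / 1000.0
    feats = logs.groupby("id")["gap_sec"].agg(
        pauses_over_30s=lambda s: (s &gt; 30).sum(),
        pauses_over_60s=lambda s: (s &gt; 60).sum(),
        pauses_over_90s=lambda s: (s &gt; 90).sum(),
    )
    return feats.reset_index()
</code></pre>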
Linking Writing Processes to Writing Quality
[No place] Influence Graphs and GNN
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Linking Writing Processes to Writing Quality <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello everyone! just wanted to share my approach.<br> First competition that I take this seriously and it was everything I was hoping for.<br> Thanks for sharing your approaches and discussions, and to the organizers for such an interesting problem formulated.</p> <p>It ocurred to me that typography has an inherent structure based on the influence any action has over the others. That is, for every key pressed (Action); What actions taken before did it influence? Maybe we wrote a character a while ago that this action is deleting?</p> <p>So, what Action wrote that character? lets define the relation between these two actions as an Influence. We <strong>deleted</strong> a character <strong>written</strong> by an action before. It nullified the effects the action had on the text.</p> <p>So, apart of the sequence nature our data has, we count with added structural information.</p> <p>I was aiming to model the sequence of actions and their Influence relations as a directed Graph for later learning with an hybrid GNN-CNN model.</p> <p><strong>The nodes I implemented were:</strong></p> <ul> <li>Input node (Green): Input or paste action</li> <li>Deletion node (Red): Deletion action</li> <li>Replacement (Gray): Replace action</li> <li>Pause (Purple): Pause taken between two actions (Artificial node).</li> </ul> <p><strong>The relations I managed to think about were:</strong></p> <ul> <li>Sequence (Green): An action is taken before another action.</li> <li>Deletion (Red): An action deletes a character written by another action.</li> <li>Replacement (Gray): An action replaces a character written by another action.</li> <li>Position (Blue): An action ends up in the same position as another action.</li> <li>Pause (Purple): The relation taken by a pause node with the action taken after.</li> </ul> <p><strong>I proposed several features for each node:</strong></p> <ul> <li>Node type: Type of the node.</li> <li>WInput: Input written by each Input node ('.' , ',', ' ', '\n', 'q').</li> <li>Weight: number of actions that node represents (I applied a coarse method).</li> <li>Time: Time taken by the action or the pause.</li> <li>Len_ : len of the text change produced by the action.</li> <li>Anulada: If the action was nullified by another action.</li> <li>Pos_change: Magnitud of the change in position the action took(compared to the position before).</li> </ul> <p>I just kept the first four, the other ones damaged my CV. 
</p> <p>Here is an example for each graph of targets 0.5, 1.0, 1.5, 2.0 ; 4.5, 5.0, 5.5 , 6.0 (in that order left-to right).</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4335892%2Fb2737c0560efb9ed194e4f94c53e34c3%2FSuper_Tool_Collage.jpeg?generation=1705032934977888&amp;alt=media"></p> <p>In the left, node sizes were determined by Time and colored by Node type.<br> In the right, node sizes were determined by Weight and colored by WInput.</p> <p>Here is a zoom on 5.0, 5.5 and 6.0 graphs.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4335892%2F3eb8e8e26fa4a0dc831e0f29b21c2175%2F50_weights_page-0001.jpg?generation=1705032996005716&amp;alt=media"></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4335892%2F3151a2275766f5368e3949cd9ee825d3%2F55_times_page-0001.jpg?generation=1705033023513612&amp;alt=media"></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4335892%2Fd8af8642f6ed7e5ac719f3a27d08e911%2F60_times_page-0001.jpg?generation=1705033046281084&amp;alt=media"></p> <p>The model used was an adaptation of a DGCNN architecture, with a custom ordering of the sort pooling layer followed by transformers blocks, interpreting the output of ordered sort pooling as a time series.</p> <p>The intuition behind the hybrid model was that the GNN layers were going to embed the influence relations of the neighborhood into selected nodes that later will form a shorter, more informative sequence (length K), to be consumed by Transformers blocks as a time series, being able to learn influence and temporal patterns.</p> <p>I consider I lacked experience building this models, not sure if an improvement can be done easily.</p> <p>For better learning, I concatenated to the GNN a FFN architecture learning from the features engineered by the community and made them optimice together. I'm sure this step can be easily improved, I lacked all the experience on NN on tabular data and had no time to learn.</p> <p>Here are the training losses of GNN+FFN (the noisy part is when I introduce the F.E Information and the FFN Arch).</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4335892%2F922fc32a7899a7ab73311212e3cb7ab1%2FSuper_Tool_Collage%20(3).jpeg?generation=1705034671417473&amp;alt=media"></p> <ul> <li>The GNN achieved 0.65 on stratified CV.</li> <li>The FNN achieved 0.69 on stratified CV.</li> <li>The GNN+FNN achieved 0.62 on stratified CV.</li> <li>Silver Bullet achieved 0.618 on my local CV.</li> <li>Silver Bullet ensembled with GNN+FNN achieved 0.608 on my local CV.</li> <li>Ensemble with LightAutoML gave a score of 0.600 on my local CV.</li> </ul> <p>I have the suspicion that the graphs have minor errors (I'm actually sure, but I know little). The building algorithm was not an easy task, at least for me (it had to be efficient too!). Position relations, for example, were doing damage on CV.</p> <p>I lacked experience developing this kind of models. 
There may be a direct, easy improvement on the GNN-Transformer arch.<br> I believe the FFN can be easily improved.</p> <p>The size of the dataset was also a natural limit of the problem, especially influential for deep learning approaches.</p> <p>Here is an idea: maybe we could have the history of a writer's essays and their grades, with the task of predicting the score at the next time step. Knowing how a given person behaves after scoring badly or well may be a better source of predictive patterns. This could be useful if we have returning students.</p> <p>That was my approach! I may keep working on this, just wanted to share it.<br> Cheers and congrats to you all!</p>
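<p>As a small appendix to the write-up above, here is a minimal sketch of how an influence graph of this kind could be built with networkx. The node and edge types follow the scheme described earlier, but the log parsing (field names, activity values) is simplified and assumed, and real deletions would require tracking position shifts that are ignored here.</p> <pre><code>import networkx as nx

def build_influence_graph(events):
    # events: list of dicts with keys 'activity' ('Input' or 'Remove/Cut'),
    # 'cursor_position', and 'action_time' (assumed, simplified log format).
    g = nx.DiGraph()
    writer_of_pos = {}                        # text position to the node id that wrote it
    for i, ev in enumerate(events):
        kind = "input" if ev["activity"] == "Input" else "deletion"
        g.add_node(i, node_type=kind, time=ev["action_time"])
        if i &gt; 0:
            g.add_edge(i - 1, i, relation="sequence")    # temporal order
        pos = ev["cursor_position"]
        if kind == "input":
            writer_of_pos[pos] = i
        elif pos in writer_of_pos:
            # This action deletes a character written earlier: an influence edge.
            g.add_edge(i, writer_of_pos.pop(pos), relation="deletion")
    return g
</code></pre>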
SenNet + HOA - Hacking the Human Vasculature in 3D
12st Place Solution for the SenNet + HOA - Hacking the Human Vasculature in 3D
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Context</h1> <p>This solution was implemented as part of a blood vessel segmentation competition organized by the Common Fund’s Cellular Senescence Network (SenNet) Program in cooperation with the Human Organ Atlas (HOA). <br> Competition overview page: <a href="https://www.kaggle.com/competitions/blood-vessel-segmentation" target="_blank">SenNet + HOA - Hacking the Human Vasculature in 3D</a><br> Competition dataset is <a href="https://www.kaggle.com/competitions/blood-vessel-segmentation/data" target="_blank">here</a><br> Many thanks to the organizers for the opportunity!</p> <h1>Overview</h1> <p>Framework — <strong>TensorFlow</strong><br> Data pipeline — 2d, ROI, resize (<strong>1024x704</strong>), <strong>tfrecord</strong><br> Model — almost classic <strong>U-net</strong> (details below)<br> The solution is presented in two notebooks:</p> <ul> <li><a href="https://www.kaggle.com/code/pib73nl/sennet-hoa-bvs-12th-place-solution-train" target="_blank">train</a></li> <li><a href="https://www.kaggle.com/code/pib73nl/sennet-hoa-bvs-12th-place-solution-infer" target="_blank">inference</a></li> </ul> <h1>Disclaimer</h1> <p>This solution was developed in December 2023, before Santa's New Year's gift, which ultimately helped more than seven hundred participants to jump above 0.8. I left for the holidays in 238th place with a score of 0.567. When I next opened the leaderboard a week later, I had dropped over 150 positions! And that was just the beginning! ;) I worked on this approach for another week. I must say that I did this without much enthusiasm, since by improving the metric I found myself lower and lower in the ranking.<br> Finally, with a result of 0.636, which was achieved by increasing the image size and minor architecture changes, I began to look for other approaches (see below in chapter <em>„Fruitless attempts“</em>).</p> <h1>Data preparation</h1> <p>All data (except for the kidney_3_dense labels) were used as training data. The images have large fields that contain no useful information. To reduce these fields, the images were preprocessed to extract the ROI using statistical methods. </p> <pre><code># Function names, the std threshold, and the fill values are reconstructed; the
# originals were lost in formatting.
def extract_roi(image, label=None):
    # Keep only rows/columns whose intensity varies (std above a small threshold).
    row_mask = image.std(axis=1) &gt; 50
    clmn_mask = image.std(axis=0) &gt; 50
    row_mask = cleaning_mask(row_mask)
    clmn_mask = cleaning_mask(clmn_mask)
    image = image[row_mask, :][:, clmn_mask]
    if isinstance(label, np.ndarray):
        label = label[row_mask, :][:, clmn_mask]
    row_pad = (row_mask.argmax(), row_mask[::-1].argmax())
    clmn_pad = (clmn_mask.argmax(), clmn_mask[::-1].argmax())
    return image, label, (row_pad, clmn_pad)

def cleaning_mask(mask):
    # Roughly: keep only the largest contiguous block of True values.
    mask[0] = mask[-1] = False
    frames = np.nonzero(mask[:-1] != mask[1:])[0]
    delta = frames[1:] - frames[:-1]
    max_solid_block_begin = np.argmax(delta)
    garbage = np.delete(frames, [max_solid_block_begin, max_solid_block_begin + 1])
    for a, b in zip(garbage[::2], garbage[1::2]):
        mask[a + 1:b + 1] = False
    return mask
</code></pre> <p>Next, all images were reduced to a single size of 1024x704. The experiments started with a size of 384x256, and as the size increased, the result expectedly improved. 1024x704 is the maximum size that did not result in an OOM error. 
An example of the processed image is below.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F937321%2Fdc967a133b4763004a3d347af9c0af4e%2Fprepr_img.png?generation=1707740226468394&amp;alt=media" alt="Example of a processed image"><br> Every 25 images (4%) were used for validation, since the density of the labels varies greatly along the z-axis.<br> The resulting images and tags were packed into tfrecord files to organize a multi-threaded pipeline (total files - 92, 162 MB each). The maximum possible batch size for the 1024x704 shape turned out to be 32. Augmentation was not used - I just couldn’t get around to it!</p> <h1>Model</h1> <p>The more or less classical <strong>U-net</strong> architecture was used as a model.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F937321%2Fc84d61b6e342a86e18e47abafb575cb5%2Funet%20arcitect.png?generation=1707741082913852&amp;alt=media" alt="U-net architect"><br> Losses were estimated using <strong>binary crosentropy</strong>. The <strong>Adam</strong> optimizer was used for optimization. The <strong>learning rate</strong> was changed according to the <strong>cosine decay</strong> schedule with warmup.<br> Since there is a significant class imbalance, weights were used. The idea was to set the weights at the instance level, since the class ratios vary greatly as we move from the center of the kidney to the edges (along the z-axis). But to begin with, I hardcode the weights, and it worked tolerably well. I didn't return to this issue later, so there is room for improvement.<br> The model was created from scratch and trained for 60 epochs. For prediction, epochs with a minimum value of validation losses were taken.<br> The prediction result looked something like this:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F937321%2F7325fd2a8223112484047b8a02408aed%2Fpredict_res.png?generation=1707740910547839&amp;alt=media" alt="Prediction result"><br> There are quite a lot of FP here… However, it makes sense to work on the sample weights 🤔</p> <h1>Fruitless attempts</h1> <p>Obviously, given the large number of small details, any resizing harms the result. I tried to solve this problem by dividing the image into fragments (intersecting tiles of 256x256 size). I used the same model architecture. But the labels turned out to be exclusively in the places where the tiles overlapped, and having assembled a mask from the tiles, I got a blank sheet! I haven't had time to figure this out.<br> Second. I tried to solve the problem of label resizing by changing the architecture - I added another “kinda u-net” to the end of decoder - 2 convolution layers and two reconvolution ones. 
Didn't do well here either, but would have been in 67th place on the private leaderboard 😉<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F937321%2F5d62e9402997d2db29eff231c65447af%2Fscore_of_ext_model.png?generation=1707741415352143&amp;alt=media"></p> <h1>Some observations</h1> <p>Yes, yes… There was a big quake… For some reason, most of the solutions failed in suspiciously similar ways ;) This is clearly noticeable in the interval of about 100-600 places.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F937321%2Fbf0eec25d430cbc29190c3b83d5e5dfb%2FLB_quake.png?generation=1707741692231072&amp;alt=media" alt="LB shake plot"><br> My main solutions, similar in network architecture and image size to the winning one, gave stable results on a public and private dataset. On a private dataset - even a little better!<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F937321%2F3673a9e41dfdde1120b98b7f01a75017%2Fscore_of_win_model.png?generation=1707741845733027&amp;alt=media" alt="Score of the winning model"><br> The difference between the public and private data sets was 0.012 points. Such stable results in the first thousand can be counted on the fingers of one hand. In general, the variance is already normal, all that remains is to work on the bias 😁<br> Thanks to everyone who worked on the problem! It was interesting with you! Good luck! ✋</p>
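<p>For readers who want to reproduce the ROI cropping described in the data-preparation section, here is a minimal sketch of the idea (drop low-variance rows/columns, then keep the largest contiguous block). The threshold value and the exact behaviour of the mask-cleaning helper are assumptions, not the author's exact settings.</p>
<pre><code>import numpy as np

def clean_mask(mask):
    # Keep only the largest contiguous run of True values
    # (an assumption about what the original cleaning_mask helper did).
    best_len, best_start, cur_start = 0, 0, None
    for i, v in enumerate(np.append(mask, False)):
        if v and cur_start is None:
            cur_start = i
        elif not v and cur_start is not None:
            if i - cur_start &gt; best_len:
                best_len, best_start = i - cur_start, cur_start
            cur_start = None
    out = np.zeros_like(mask)
    out[best_start:best_start + best_len] = True
    return out

def extract_roi(image, label, std_thr=30.0):
    # Drop rows/columns whose intensity variation is below std_thr
    # (the threshold value here is an assumption).
    row_mask = clean_mask(image.std(axis=1) &gt; std_thr)
    col_mask = clean_mask(image.std(axis=0) &gt; std_thr)
    return image[row_mask][:, col_mask], label[row_mask][:, col_mask]
</code></pre>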
LLM - Detect AI Generated Text
113th solution: Reranking with Two-Stage Prediction
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the competition host for organizing such an interesting competition!</p> <p>Our solution uses an ensemble approach, combining the TF-IDF based prediction with the two-stage prediction. (<a href="https://www.kaggle.com/code/reiendo73/daigt-ensemble-two-stage-prediction" target="_blank">notebook</a>)</p> <p>Two-stage prediction (Inspired by: <a href="https://www.kaggle.com/code/hsinhungwu/two-stages-prompt-prediction-custom-models" target="_blank">Two-Stages: Prompt Prediction + Custom Models</a>)</p> <ul> <li>Stage 1: Prompt prediction</li> <li>Stage 2: Predicting probabilities by training models for each prompt</li> </ul> <p>In stage 2, only the data for a specific prompt is used for training, so models are created specifically for that prompt. The distribution of predicted probabilities is different for each model and each prompt.</p> <p>We therefore used ranking of probabilities per prompt instead of raw probability values.</p> <p>The TF-IDF based prediction results for the overall data are reranked based on the ranking per prompt in the two-stage prediction results, and the average values before and after the reranking are calculated.</p> <p>An example is given below: the <code>generated</code> column contains the probability and the ranking per prompt.</p> <table> <thead> <tr> <th>id</th> <th>prompt_id</th> <th>generated (TF-IDFbased)</th> <th>generated (two-stage)</th> <th>generated (combining)</th> </tr> </thead> <tbody> <tr> <td>a</td> <td>1</td> <td>0.7, <strong>1</strong></td> <td>0.5, <strong>1</strong> → 0.7</td> <td>0.7, <strong>1</strong></td> </tr> <tr> <td>b</td> <td>1</td> <td>0.6, <strong>2</strong></td> <td>0.3, <strong>3</strong> → 0.2</td> <td>0.4, <strong>2</strong></td> </tr> <tr> <td>c</td> <td>1</td> <td>0.2, <strong>3</strong></td> <td>0.4, <strong>2</strong> → 0.6</td> <td>0.4, <strong>2</strong></td> </tr> <tr> <td>d</td> <td>2</td> <td>0.5, <strong>1</strong></td> <td>0.9, <strong>1</strong> → 0.5</td> <td>0.5, <strong>1</strong></td> </tr> <tr> <td>e</td> <td>2</td> <td>0.3, <strong>2</strong></td> <td>0.1, <strong>2</strong> → 0.3</td> <td>0.3, <strong>2</strong></td> </tr> </tbody> </table> <p>This method excludes the model specific differences in probability distributions in the two-stage prediction.</p> <p>Compared to using only the TF-IDF based prediction, the private score improved from 0.907 to 0.909.</p> <p>I hope this helps with your future ideas.</p>
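<p>A minimal pandas sketch of the reranking step described above: within each prompt, the two-stage prediction keeps only its rank, that rank is mapped back onto the TF-IDF probability holding the same rank, and the two values are averaged. The column names and toy values mirror the table and are purely illustrative.</p>
<pre><code>import pandas as pd

df = pd.DataFrame({
    "id": list("abcde"),
    "prompt_id": [1, 1, 1, 2, 2],
    "p_tfidf": [0.7, 0.6, 0.2, 0.5, 0.3],      # TF-IDF based probabilities
    "p_twostage": [0.5, 0.3, 0.4, 0.9, 0.1],   # two-stage probabilities
})

def rerank_within_prompt(g):
    # Rank both predictions inside the prompt (1 = most likely generated).
    tfidf_rank = g["p_tfidf"].rank(ascending=False, method="first")
    two_rank = g["p_twostage"].rank(ascending=False, method="first")
    # Replace each two-stage prediction with the TF-IDF value that holds the
    # same rank, so the two-stage model only contributes an ordering.
    rank_to_value = dict(zip(tfidf_rank, g["p_tfidf"]))
    g = g.copy()
    g["p_reranked"] = two_rank.map(rank_to_value)
    return g

df = df.groupby("prompt_id", group_keys=False).apply(rerank_within_prompt)
df["generated"] = (df["p_tfidf"] + df["p_reranked"]) / 2
print(df[["id", "prompt_id", "generated"]])
</code></pre>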
SenNet + HOA - Hacking the Human Vasculature in 3D
1st Place Solution (code updated)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, we would like to thank Kaggle and the organizers for hosting such a great competition. And also thanks to <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> for the amazing posts, <a href="https://www.kaggle.com/junkoda" target="_blank">@junkoda</a> for the metric implementation and all other participants for sharing their experiments.</p> <h1>Overview</h1> <p>Our final submission is an ensemble of two 2.5d convnext tiny unet with 3 channels, and the only differences between these two models are augmentation and number of epochs. Actually the best scored submission is not the selected ensemble but one single model of the ensemble which is 0.835 on private lb.</p> <h1>Data Preparation</h1> <p>We used all training data <strong>including</strong> kidney_1_voi.</p> <ul> <li>Multiview slice (x, y, z)</li> <li>Normalization: No normalization, just <code>image = image / 65535.0</code></li> <li>Whole slice instead of tiles and all slices resized or cropped to 1536x1536. </li> <li>Augmentations:</li> </ul> <pre><code>A.Compose([ A.HorizontalFlip(=0.5), A.VerticalFlip(=0.5), A.Transpose(=0.5), A.Affine(scale={:(0.7, 1.3), :(0.7, 1.3)}, translate_percent={:(0, 0.1), :(0, 0.1)}, rotate=(-30, 30), shear=(-20, 20), =0.5), A.RandomBrightnessContrast(=0.4, =0.4, =0.5), A.OneOf([ A.Blur(=3, =0.2), A.MedianBlur(=3, =0.2), ], =1.0), A.OneOf([ A.ElasticTransform(=1, =50, =10, =1, =0.5), A.GridDistortion(=5, =0.1, =1, =0.5) ], =0.4), A.OneOf([ A.Resize(1536, 1536, cv2.INTER_LINEAR, =1), A.Compose([ RandomResize(1536, 1536, =0.5, =0.5, =1), A.PadIfNeeded(1536, 1536, =, =cv2.BORDER_REPLICATE, =1.0), A.RandomCrop(1536, 1536, =1.0) ], =1.0), ], =1.0), A.GaussNoise(=0.05, =0.2), ]) </code></pre> <ul> <li>Random 3D rotation to get slices that is not necessarily parallel to axes. The best scored submission used random 3d rotation for augmentation and trained for more epochs.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4240322%2F07e6746d787af196eaceb6acf5d336fe%2F3drot.png?generation=1707409376653348&amp;alt=media"></li> </ul> <h1>Modeling &amp; Training</h1> <ul> <li>We used unet from SMP with convnext tiny backbone, replaced BatchNorm and ReLU to GroupNorm and GELU and added an extra convolution stem. The input size for all models is 3x1536x1536.</li> </ul> <pre><code>self = nn( nn(in_channels, out_channels, , , ), (out_channels), ) </code></pre> <ul> <li>For loss function, we used 1.0 focal loss, 1.0 dice loss, 0.01 <a href="https://arxiv.org/abs/1812.07032" target="_blank">boundary loss</a> and 1.0 custom loss. 
The custom loss is inspired by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>'s <a href="https://www.kaggle.com/competitions/blood-vessel-segmentation/discussion/456118#2583472" target="_blank">post</a> and <a href="https://www.kaggle.com/junkoda" target="_blank">@junkoda</a>'s <a href="https://www.kaggle.com/code/junkoda/fast-surface-dice-computation" target="_blank">metric implementation</a>.</li> </ul> <pre><code></code></pre> <ul> <li>For optimization, we used AdamW and CosineAnnealingLR from 1e-4 to 0 with warmup. All models were trained for 20 epochs with a batch size of 8 and 4 gradient accumulation steps, except for the model with 3d slice rotation augmentation which was trained for 30 epochs.</li> </ul> <h1>Inference</h1> <ul> <li>Inference on 3 axes with 8xTTA.</li> <li>We tried different resize methods for inference. For the best scored submission, all slices are simply resized to 3072x3072; for the selected submission, we used a dynamic scale factor that <code>(h*scale)*(w*scale)=3200*3200</code>.</li> <li>The threshold used for submission is 0.4, and the optimal threshold based on cv and lb is about 0.4~0.5.</li> <li><code>torch.compile()</code> gave about 2x acceleration so that we were able to inference with high resolution and TTAs.</li> </ul> <h1>What didn't work</h1> <ul> <li>3d models.</li> <li>External data and pseudo labels.</li> <li>Transformers.</li> <li>Stacking more slices (&gt;3) for 2.5d model.</li> </ul> <h1>Results</h1> <table> <thead> <tr> <th></th> <th><strong>Model</strong></th> <th><strong>Slice Rotation</strong></th> <th><strong>Inference size</strong></th> <th><strong>Public Score</strong></th> <th><strong>Private Score</strong></th> </tr> </thead> <tbody> <tr> <td>1</td> <td>convnext_tiny</td> <td></td> <td>3072</td> <td>0.889</td> <td>0.682</td> </tr> <tr> <td>2</td> <td>convnext_tiny</td> <td>✓</td> <td>3072</td> <td>0.888</td> <td>0.830</td> </tr> <tr> <td>3</td> <td>convnext_tiny</td> <td>✓</td> <td>3072</td> <td>0.867</td> <td><strong>0.835</strong></td> </tr> <tr> <td>4</td> <td>ensemble(1+2)</td> <td>-</td> <td>3200</td> <td><strong>0.898</strong></td> <td>0.744(selected)</td> </tr> <tr> <td>5</td> <td>ensemble(1+2)</td> <td>-</td> <td>3200(dynamic)</td> <td>0.895</td> <td>0.774(selected)</td> </tr> </tbody> </table> <h1>Links</h1> <ul> <li><a href="https://github.com/jing1tian/blood-vessel-segmentation" target="_blank">training code</a></li> <li>inference code<ul> <li><a href="https://www.kaggle.com/code/clevert/sennet-1st-place-solution" target="_blank">final submission ensemble 0.774106</a></li> <li><a href="https://www.kaggle.com/code/clevert/sennet-unet-convnext-3d-rotation" target="_blank">3d rotate single model 0.835346</a></li></ul></li> </ul>
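<p>Here is a hedged PyTorch sketch of the two architectural tweaks described in the modeling section: the BatchNorm/ReLU to GroupNorm/GELU swap and the extra convolution stem. Kernel size, stride, padding and the number of groups are assumptions, not the team's exact values.</p>
<pre><code>import torch.nn as nn

def bn_relu_to_gn_gelu(module, groups=32):
    # Recursively swap BatchNorm2d for GroupNorm and ReLU for GELU.
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            n = child.num_features
            num_groups = groups if n % groups == 0 else 1   # keep groups divisible
            setattr(module, name, nn.GroupNorm(num_groups, n))
        elif isinstance(child, nn.ReLU):
            setattr(module, name, nn.GELU())
        else:
            bn_relu_to_gn_gelu(child, groups)
    return module

class ConvStem(nn.Module):
    # Extra convolution stem placed in front of the encoder; the 3x3 kernel,
    # stride 1 and padding 1 are assumptions.
    def __init__(self, in_channels=3, out_channels=64):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1),
            nn.GroupNorm(32, out_channels),
            nn.GELU(),
        )

    def forward(self, x):
        return self.stem(x)
</code></pre>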
LLM - Detect AI Generated Text
98th Place Solution for the DAIGT Competition: TF-IDF + fastText + Distilbert + pseudolabels + instance weighting
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>(public LB best: .970; private selected best: .909; private unselected best: .923)</p> <h1>Context</h1> <p>I had a bunch of time on my hands in December and January. I came across the <a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/overview" target="_blank">LLM Detect AI-Generated Text competition</a> and found the prospect very appealing. Over those last couple of months, I had the time to try a lot of things; some worked, a lot failed. <br> I finish at rank ~100, after having reached 15th on public LB in early Jan. But I am happy with a solo silver. I loved every moment; it has been a thrilling competition, and I look forward to the next!</p> <h1>Overview</h1> <h3>1. Data</h3> <ul> <li>My final submission used much exclusively the <a href="https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset" target="_blank">daigt V2 train dataset</a>, from which I excluded 'mistral7binstruct_v2' and 'llama2_chat' as they were harming my public score. I played a bit with other data sources but they did not help.</li> <li>I used <a href="https://www.kaggle.com/piotrkoz" target="_blank">@piotrkoz</a>'s library for typo preprocessing, and it helped my score a bit, probably by ~.001. Thanks a lot for sharing!</li> </ul> <h3>2. Models</h3> <ul> <li>The bulk of my ensemble (~75%) was inspired by the public LB TF-IDF approach and models. I improved this a bit, but in the end it is not substantially different from the published work. I included the canonical MNB, SGD, LGBM and Catboost. Credits go to <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a> for sharing this strong approach 🙏</li> <li><a href="https://fasttext.cc/" target="_blank">fastText</a>. FastText alone did not perform great (my best pure fastText run was at .85 on public LB, and surprisingly only .70 on private); however it consistently gave a good boost (maybe ~.002 AUC) to ensembles in which it was mixed. [-&gt; 12% of final weight]</li> <li><a href="https://huggingface.co/distilbert-base-uncased" target="_blank">DistilBERT</a>. I fine-tuned the pre-trained model for 5 epochs; again, this alone did not perform very well on public or private LB, but still provided a nice boost to ensembles. [-&gt; 12% of final weight]</li> </ul> <h3>3. Pseudo-labeling</h3> <ul> <li>After a first model fit, I added the bottom and top ~10% of predictions to the training data to run a second model fit. This worked fairly well, giving me a ~.002 boost in public LB.</li> </ul> <h3>4. Adversarial instance weighting</h3> <ul> <li>I trained a (fastText) model to predict whether instances come from the train set or the test set. I then gave a slightly lower weight to instances from the train set that the model deems the least likely to belong to the test set. This gave me only a small further boost in public LB; but it now looks like private LB responded well to this.</li> </ul> <h3>5. 
Things that regrettably didn't work on public LB</h3> <p>and consequently did not make it to the final submission:</p> <h4>Canine</h4> <p>I played a lot with a character-level encoder called <a href="https://huggingface.co/google/canine-c" target="_blank">Canine</a>. My intuition was that a model operating at character-level would skip the tokenization step, and therefore possibly be somewhat robust to whatever obfuscation occurred on the test set. Unfortunately, this did not perform as well as DistilBERT on public LB, so I did not keep it; however, I can now see that my best Canine did a bit better than my best DistilBERT on private LB.</p> <h4>Prompt ID</h4> <p>I tried to add a categorical feature indicating to the model which prompt an essay was written against (potentially useful only for pseudo-labelled data). My public LB score responded very poorly to this, dropping to .93, so I ran away from the idea and never looked back; however the private LB now looks like it actually kinda liked the addition, as it gave me one of my best scores (private LB .916).</p> <h4>Rank averaging</h4> <p>Again a thing that my public LB score hated, but which would have actually helped my private score (public LB drop by .007; private LB boost by .013)</p> <p>And a big bunch of other things that worked neither on public nor private LB 😅</p> <h3>6. Runtime</h3> <p>My best submissions took about 6 to 8 hours to run (the number of Catboost iterations being the adjustment variable). I gave up on the efficiency leaderboard idea early on.</p>
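<p>A minimal sketch of the pseudo-labeling step from section 3, assuming text lists and a first-pass probability vector; the ~10% cut-off mirrors the write-up, while the function name and thresholding details are illustrative.</p>
<pre><code>import numpy as np

def add_pseudo_labels(train_texts, train_labels, test_texts, test_probs, frac=0.10):
    # Append the most confident ~10% of test predictions from each tail as
    # pseudo-labelled rows, then refit the whole pipeline on the enlarged set.
    test_probs = np.asarray(test_probs, dtype=float)
    lo, hi = np.quantile(test_probs, [frac, 1.0 - frac])
    keep = (test_probs &lt;= lo) | (test_probs &gt;= hi)
    pseudo_texts = [t for t, k in zip(test_texts, keep) if k]
    pseudo_labels = (test_probs[keep] &gt;= hi).astype(int)   # 1 = AI-generated
    return train_texts + pseudo_texts, np.concatenate([np.asarray(train_labels), pseudo_labels])
</code></pre>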
Linking Writing Processes to Writing Quality
[66th Place Solution] LightGBM and Public Notebook
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Linking Writing Processes to Writing Quality <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to express my gratitude to the staff and the Kagglers who shared great information and knowledge. My result is not so good one, but I am posting this writeup in the hope that it will be helpful to someone else and as a record for myself.</p> <h2>Overview</h2> <p>I created my own LightGBM model and blended it with Public Notebook models. The best score was achieved with a submission that ensemble the Public Notebook with 0.7 weight and my LightGBM model with 0.3 weight. My model alone had scores of Public 0.582878 / Private 0.571849, which is almost the same as the public model (single model). But it seems that my model have added a bit of diversity to the ensemble, which leads to the improved score.</p> <table> <thead> <tr> <th>model</th> <th>CV</th> <th>Public Score</th> <th>Private Score</th> <th>final subs</th> <th>private best</th> </tr> </thead> <tbody> <tr> <td>My model 1</td> <td>0.6001</td> <td>0.582878</td> <td>0.571849</td> <td>[x]</td> <td>[]</td> </tr> <tr> <td>My model 2</td> <td>0.6018</td> <td>0.584491</td> <td>0.572845</td> <td>[]</td> <td>[]</td> </tr> <tr> <td>My model 2 * 0.5 + <a href="https://www.kaggle.com/code/cody11null/lgbm-x2-nn" target="_blank">Public1</a> * 0.5</td> <td>-</td> <td>0.580021</td> <td>0.568756</td> <td>[x]</td> <td>[]</td> </tr> <tr> <td>My model 1 * 0.3 + <a href="https://www.kaggle.com/code/kononenko/lgbm-x2-nn-fusion" target="_blank">Public2</a> * 0.7</td> <td>-</td> <td>0.578381</td> <td>0.568015</td> <td>[x]</td> <td>[x]</td> </tr> </tbody> </table> <p>My model 1 and 2 are almost same models with a little different features and hyperparameters.</p> <h2>My Model</h2> <p>I built my LightGBM model as follows. First, I used Stratified Kfold with 5 folds. In each fold, I trained the LightGBM model three times with different seeds. I repeated this process five times, changing the seed for Stratified Kfold. In other words, I trained LightGBM models with 5 folds * 3 model seeds * 5 fold seeds. This was made to minimize the impact of the random seed as much as possible. For inference, I used the average of these 75 models as the predicted score.</p> <p>With regard to feature engineering, I adopted <a href="https://www.kaggle.com/code/awqatak/silver-bullet-single-model-165-features" target="_blank">the Silver Bullet Notebook</a> as the baseline and made several changes. The number of features amounts to 192.</p> <ul> <li>Deleted some features that were almost identical.</li> <li>Corrected anomalies in the data that appeared to be bugs. 
This allowed me to correct data for several IDs that had logs significantly over 30 minutes.<ul> <li>Logs where the down_time was reversed by more than one second were corrected so there was no reversal.</li> <li>Logs where the difference in down_time was more than 20 minutes were corrected to a 10-second difference.</li> <li>Logs where more than 20 minutes had passed at the first log were corrected to a 2-minute lapse.</li> <li>Delete some events which are down_event==Process and have too long action_time.</li></ul></li> <li>Added and modified some features about the typing process.<ul> <li>Created another P-bursts feature, using bursts of consecutive inputs within 0.3 seconds.</li> <li>The number of characters and the maximum cursor position at the 20 and 25-minute mark.</li> <li>The number of keyboard shortcut.</li> <li>Some gap features of word_count with event_id shift 200.</li></ul></li> <li>Added many features about the essay.<ul> <li>Reconstructed the essay at 15 minutes and compared it to the final essay, add editing distance in the first 100 characters as a feature.</li> <li>Add aggreated features of words and sentences.</li> <li>The number of sentences with the first word being over six characters and containing a comma.</li> <li>The proportion of sentences with a one-letter first word.</li> <li>The number of sentences ending with a comma within the first three or four words.</li> <li>The average number of characters in sentences up to the first eight words.</li> <li>The number of sentences ending with a question mark.</li> <li>The number of sentences ending with an exclamation mark.</li> <li>The number of long sentences containing hyphens.</li> <li>The number of long sentences with a comma around the middle.</li> <li>The median of the total number of characters in 10 consecutive words.</li> <li>The number of 5 consecutive words where the total number of characters exceeds 35.</li> <li>And so on.</li></ul></li> </ul> <p>I added and removed features, checking whether it improved CV and LB. However, while CV improved, LB hardly did. Looking at the decimal scores after deadline, it seemed that LB's score was moving in the right direction, but the improvement was quite small. The hyperparameters for LightGBM were decided by optimizing the basic parameters with optuna. However, I didn't spend much time on it, as it seemed to be overfitting to the training data.</p> <h2>Some Reflections</h2> <p>It was good to know that the silver medal score was achieved with blending of public Notebook and my model that has the almost same score as public single model itself. It is interesting to see that some top solution uses NLP models and techniques, because I didn't come up with those approaches. I would like to learn them from solutions. </p> <p>My code can be found in <a href="https://github.com/KazuakiTokunaga/kaggle-LWP" target="_blank">this github repository</a>.</p>
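<p>As a sketch of the averaging scheme from the "My Model" section (5 folds x 3 model seeds x 5 fold seeds, 75 LightGBM models in total); the hyperparameters and the stratification bins are placeholders, not the author's exact settings.</p>
<pre><code>import numpy as np
import lightgbm as lgb
from sklearn.model_selection import StratifiedKFold

def seed_averaged_lgbm(X, y, X_test, fold_seeds=(0, 1, 2, 3, 4), model_seeds=(42, 7, 2023)):
    params = {"objective": "regression", "metric": "rmse",
              "learning_rate": 0.05, "n_estimators": 1000, "verbosity": -1}
    bins = (np.asarray(y) * 2).astype(int)          # stratify on the half-point scores (assumption)
    test_preds = []
    for fs in fold_seeds:
        skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=fs)
        for tr_idx, _ in skf.split(X, bins):
            for ms in model_seeds:
                model = lgb.LGBMRegressor(random_state=ms, **params)
                model.fit(X.iloc[tr_idx], y.iloc[tr_idx])
                test_preds.append(model.predict(X_test))
    return np.mean(test_preds, axis=0)              # average of 5 * 5 * 3 = 75 models
</code></pre>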
LLM - Detect AI Generated Text
A kaggle newbie's 🥈23rd solution: Just follow-up excellent public works & ideas
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the host and all kagglers with their sharing of excellent works and ideas! As a kaggle novice, I'm very fortunate to get a silver place in my first competition. Although it's a bit pity that I didn't achieve many groundbreaking original work during my trials, I'm willing to share my final submission and choices.</p> <p>Here's my <a href="https://www.kaggle.com/code/spraut23333/daigt-public-0-965-private-0-927" target="_blank">solution</a></p> <h3>Details of Solution</h3> <p><strong>1. Data</strong></p> <p><a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a> 's great <a href="https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset" target="_blank">daigt-v2 dataset</a> and <a href="https://www.kaggle.com/carlmcbrideellis" target="_blank">@carlmcbrideellis</a> 's <a href="https://www.kaggle.com/datasets/carlmcbrideellis/llm-mistral-7b-instruct-texts" target="_blank">mistral-7b dataset</a> with prompts <a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/467820" target="_blank">mentioned by</a> <a href="https://www.kaggle.com/bianshengtao" target="_blank">@bianshengtao</a> excluded. </p> <p>Data augmentation with <a href="https://www.kaggle.com/aerdem4" target="_blank">@aerdem4</a> 's <a href="https://www.kaggle.com/code/aerdem4/daigt-superfast-unsupervised-baseline" target="_blank">unsupervised method</a> to generate pseudo test label for a small subset of test data. </p> <p><strong>2. Tokenization &amp; TF-IDF vectorization</strong></p> <p>I followed the <a href="https://www.kaggle.com/code/datafan07/train-your-own-tokenizer" target="_blank">general pipeline</a> of training BPE tokenizer on the test data by <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a> and chose <a href="https://www.kaggle.com/code/verracodeguacas/sentencepiece-constructions" target="_blank">SentencePiece tokenizer</a> for final submission as introduced by <a href="https://www.kaggle.com/verracodeguacas" target="_blank">@verracodeguacas</a> .</p> <p><strong>3. Text correction</strong></p> <p>I have seen discussions about the license of different text-correction libraries,but I'm still unclear about the permission for usage of these tools since haven't seen a clear claim from the host. My final solutions include text correction on the test data with autocorrect, which is LGPL-3.0 license. But I also chose a version without any text correction for submission and it still stands on silver zone with private LB 0.923. </p> <p><strong>4. ML-Models for TF-IDF features</strong></p> <p>MultiNomialNB + SGDClassifier(linearSVC) + LightGBM(GBDT) + LightGBM(DART with GOSS) + CatBoost</p> <p>Focal loss with different gammas for the two lightgbm tree models.</p> <p>Some model parameters inspired by <a href="https://www.kaggle.com/batprem" target="_blank">@batprem</a> 's <a href="https://www.kaggle.com/code/batprem/llm-daigt-analyse-edge-cases" target="_blank">generous sharing</a>.</p> <p><strong>5. 
Transformer-based Model</strong></p> <p>I added <a href="https://www.kaggle.com/mustafakeser4" target="_blank">@mustafakeser4</a>'s <a href="https://www.kaggle.com/code/mustafakeser4/inference-detectai-distilroberta-0-927" target="_blank">DistilRoBerta</a> to my ensemble in the last few days before the deadline and got an instant boost from 0.962 to 0.965 on the public LB, which gave me much confidence and motivation.</p> <p><strong>6. Other tricks</strong></p> <p>Fixing the max features of TF-IDF to 5M.</p> <p><a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/468363" target="_blank">Post processing</a> method by <a href="https://www.kaggle.com/hyunsoolee1010" target="_blank">@hyunsoolee1010</a>.</p>
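<p>For reference, a minimal sketch of the "train a tokenizer on the test set, then run TF-IDF on its tokens" pipeline, here with a byte-level BPE tokenizer from the <code>tokenizers</code> library (the final submission used a SentencePiece tokenizer instead, and the vocabulary size and n-gram range below are placeholders).</p>
<pre><code>from tokenizers import Tokenizer, models, pre_tokenizers, trainers
from sklearn.feature_extraction.text import TfidfVectorizer

def fit_bpe_tokenizer(texts, vocab_size=30000):
    # Train a byte-level BPE tokenizer on the (test) texts themselves.
    tok = Tokenizer(models.BPE(unk_token="[UNK]"))
    tok.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)
    tok.train_from_iterator(texts, trainers.BpeTrainer(vocab_size=vocab_size, special_tokens=["[UNK]"]))
    return tok

def tfidf_features(train_texts, test_texts):
    tok = fit_bpe_tokenizer(test_texts)
    train_tokens = [tok.encode(t).tokens for t in train_texts]
    test_tokens = [tok.encode(t).tokens for t in test_texts]
    # Texts are already tokenized, so pass identity tokenizer/preprocessor to TfidfVectorizer.
    vec = TfidfVectorizer(ngram_range=(3, 5), lowercase=False, sublinear_tf=True,
                          tokenizer=lambda x: x, preprocessor=lambda x: x, token_pattern=None)
    X_test = vec.fit_transform(test_tokens)    # vocabulary fitted on the test set
    X_train = vec.transform(train_tokens)
    return X_train, X_test
</code></pre>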
Stanford Ribonanza RNA Folding
419th: Tensorflow and additionally ( Vienna RNA, Scikit Learn and XG Boost)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Summary</strong></p> <p>The rna-modelling.ipynb notebook presents a comprehensive approach to RNA sequence modeling using TensorFlow and advanced data handling techniques. The notebook focuses on utilizing TensorFlow's TPU (Tensor Processing Unit) capabilities for efficient computation and handles various types of RNA experiment data, including DMS_MaP and 2A3_MaP. It integrates RNA data from different sources and prepares it for modeling.</p> <p><strong>Introduction</strong><br> RNA modeling is crucial in understanding the complexities of biological processes. This notebook explores RNA sequence data, aiming to model and predict RNA structures and reactivities. The approach harnesses TensorFlow's distributed computing power and TensorFlow's TPU strategy, providing a robust platform for handling and analyzing large-scale RNA data. The notebook specifically addresses different experiment types in RNA data, indicating a versatile and comprehensive approach to RNA sequence modeling.</p> <p><strong>Preprocessing</strong><br> The preprocessing steps in the notebook include:</p> <p>Data Loading: RNA sequence data is read and concatenated from CSV files, ensuring a comprehensive dataset.<br> Data Integration: RNA data from the RNA Mapping Database (RMDB) and Kaggle's Stanford Ribonanza dataset are integrated.<br> Experiment Type Handling: Different experiment types like DMS_MaP and 2A3_MaP are identified and handled separately.<br> Sequence and Reactivity Extraction: Sequences and their respective reactivities are extracted and prepared for modeling.</p> <p><strong>Model</strong><br> TensorFlow and TPU Utilization: The notebook leverages TensorFlow's advanced features and TPU for efficient and scalable modeling.<br> Data Preparation for Modeling: Sequences are prepared, and their corresponding reactivities are extracted to serve as the target variables for the models.<br> Modeling Approach: While the specific details of the modeling approach are not explicitly stated in the provided excerpts, it's implied that advanced machine learning techniques, possibly involving neural networks or similar architectures, are used.</p> <p><strong>Other Models</strong><br> RNA Modeling with ViennaRNA (another-rna-modelling-with-viennarna.ipynb)<br> Features and Models:</p> <p>ViennaRNA and XGBoost: Integrates ViennaRNA for RNA folding and structure prediction with XGBoost for regression tasks.<br> Feature Engineering: Extracts features like minimum free energy (MFE), base-pairing probabilities, GC content, and AU content from RNA sequences.<br> Data Preprocessing: Utilizes pandas for data handling and scikit-learn's LabelEncoder for encoding categorical variables.<br> Model Training and Validation: Splits data into training and validation sets, trains an XGBoost regressor, and computes Mean Absolute Error (MAE) as a performance metric.<br> Submission Preparation: Generates predictions for submission based on the validation set and saves them in a CSV file.</p> <p><strong>Basic XGBoost for RNA Computing (basic-xg-boost-for-rna-computing (2).ipynb)</strong><br> Features and Models:</p> <p>XGBoost Focused: Primarily uses the XGBoost algorithm for regression tasks related to RNA
reactivity.<br> Simple Feature Engineering: Employs basic techniques like sequence length encoding and experiment type categorization.<br> Model Training: Focuses on training an XGBoost regressor with prepared features and targets.<br> Validation and Submission: Calculates the Mean Absolute Error (MAE) on the validation set, prepares mean and median predictions for submission, and saves the submission file.</p> <p><strong>Evaluation</strong><br> Performance Metrics: The notebook uses Mean Absolute Error (MAE) as its performance metric.<br> Validation and Testing: The approach involves splitting the data into training, validation, and testing sets to ensure the model's effectiveness and generalizability.<br> Practical Application: The model's predictions on RNA reactivities are crucial for understanding RNA behavior, which has practical implications in fields like bioinformatics and molecular biology.</p>
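<p>A small sketch of the ViennaRNA + XGBoost feature pipeline described under "Other Models", assuming per-sequence reactivity targets; the feature set and hyperparameters are illustrative rather than the notebook's exact configuration.</p>
<pre><code>import pandas as pd
import RNA                                   # ViennaRNA Python bindings
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

def sequence_features(seq):
    structure, mfe = RNA.fold(seq)           # dot-bracket structure and minimum free energy
    n = len(seq)
    return {"mfe": mfe,
            "length": n,
            "gc_content": (seq.count("G") + seq.count("C")) / n,
            "au_content": (seq.count("A") + seq.count("U")) / n,
            "paired_frac": 2 * structure.count("(") / n}

def fit_xgb(sequences, targets):
    X = pd.DataFrame([sequence_features(s) for s in sequences])
    X_tr, X_va, y_tr, y_va = train_test_split(X, targets, test_size=0.2, random_state=0)
    model = XGBRegressor(n_estimators=500, learning_rate=0.05, max_depth=6)
    model.fit(X_tr, y_tr)
    print("validation MAE:", mean_absolute_error(y_va, model.predict(X_va)))
    return model
</code></pre>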
LLM - Detect AI Generated Text
25th Place Solution for DAIGT (Public LB: 0.966 Private LB: 0.927)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We are very happy to share our solution and are so grateful to everyone on Kaggle for sharing your ideas, codes, and datasets.</p> <h2>Overview</h2> <p>Our final solution follows this workflow:</p> <ol> <li>Data cleaning and typo correction</li> <li>BPE tokenizer training + TF-IDF feature extraction</li> <li>Models: MultinomialNB x 1, SGDClassifier x 1, LGBMClassifier x 1, CatBoostClassifier x 1</li> <li>Ensemble and identification of samples with prediction probability between 0.05 and 0.45 as hard samples</li> <li>Use the Mistral-7B model to predict the hard samples and DistilBert to predict all samples</li> <li>Use samples with blend probabilities less than 0.05 and greater than 0.5 as labeled data for unsupervised learning</li> <li>Blend the results of steps 4, 5, and 6 for the final prediction</li> </ol> <h2>Solution Details</h2> <ol> <li><p><strong>Data cleaning and typo correction</strong></p> <p>Inspired by <a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/468767" target="_blank">piotrkoz's Discussion</a>, we delved deeper into typo correction. We began by correcting large-scale character substitutions, followed by identifying and correcting words with a Levenshtein distance of 1 to misspelled words. This process was applied to both the training and test data to ensure consistency.</p> <pre><code>text = text = correct_substitutions(text, ) text = correct_dist1(text) </code></pre> <p>These corrections provided a <strong>~0.01</strong> improvement on the private LB.</p></li> <li><p><strong>LLMs for ensemble</strong></p> <p>We fine-tuned Mistral-7B with LORA on the DAIGT-v2 dataset, which yielded a 0.884 on the public LB. Although the model did not perform as well on the private LB, we had no way of doing this before the competition ended. We also attempted to fine-tune DeBERTa-v3 but were unable to improve upon the public model. Thus, we used the public DistilRoberta model and our Mistral-7B model to test data predictions.</p> <p>We used the LLM predictions only for "hard samples" because we observed that TF-IDF features could robustly predict most samples, and LLMs might reverse predictions on some samples.</p></li> <li><p><strong>Semi-supervised Learning</strong></p> <p>We were impressed by <a href="https://www.kaggle.com/code/aerdem4/daigt-superfast-unsupervised-baseline" target="_blank">aerdem4's notebook</a>, which achieved a 0.91 on the public LB using only the test set and two "magic words". However, we questioned the method's robustness, as its performance heavily depended on the choice of magic words and the distribution of samples for each prompt ID. This could also explain the model's underperformance on the private LB. 
Ultimately, we discarded the magic words approach and instead used the TF-IDF prediction probabilities to pseudo-label confident samples for the semi-supervised step, which led to a <strong>~0.02</strong> improvement on the private LB on the last day.</p></li> </ol> <h2>Team Members</h2> <p>Gengyang Xiao <a href="https://www.kaggle.com/adrianxiao" target="_blank">@adrianxiao</a>, Jiayang Zhao <a href="https://www.kaggle.com/superxiaotang" target="_blank">@superxiaotang</a>, Junrui Wang <a href="https://www.kaggle.com/junruiwang" target="_blank">@junruiwang</a>, Zhiyang Zhang <a href="https://www.kaggle.com/zyzhang0109" target="_blank">@zyzhang0109</a> (in alphabetical order)</p> <p>Feel free to connect with any of us via Kaggle or LinkedIn.</p> <h2>Acknowledgement</h2> <p>Many thanks to <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a>, <a href="https://www.kaggle.com/aerdem4" target="_blank">@aerdem4</a>, <a href="https://www.kaggle.com/mustafakeser4" target="_blank">@mustafakeser4</a>, <a href="https://www.kaggle.com/piotrkoz" target="_blank">@piotrkoz</a>.</p> <p>We learned a lot from you. Thank you!</p>
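<p>To make the "hard sample" logic from the overview concrete, here is a minimal sketch: the TF-IDF ensemble is the backbone, DistilRoBERTa is blended everywhere, and Mistral-7B only contributes on the samples the TF-IDF ensemble is unsure about (probability between 0.05 and 0.45). The blend weights below are illustrative, not the team's exact values.</p>
<pre><code>import numpy as np

def blend(p_tfidf, p_distil, p_mistral, lo=0.05, hi=0.45, w_distil=0.2, w_mistral=0.3):
    p_tfidf = np.asarray(p_tfidf, dtype=float)
    p_final = (1 - w_distil) * p_tfidf + w_distil * np.asarray(p_distil, dtype=float)
    hard = (p_tfidf &gt; lo) &amp; (p_tfidf &lt; hi)          # "hard" samples only
    p_final[hard] = ((1 - w_mistral) * p_final[hard]
                     + w_mistral * np.asarray(p_mistral, dtype=float)[hard])
    return p_final
</code></pre>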
Linking Writing Processes to Writing Quality
5th Place Solution: Features are all you need!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Linking Writing Processes to Writing Quality <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I'd like to extend my thanks to Kaggle and the Learning Agency Lab for organizing such a fantastic competition. We've had a strong correlation between our cross-validation and leaderboard scores since the beginning, and all three selected submissions are in the gold range, which is quite reassuring.<br> I would like to thank my teammates. <a href="https://www.kaggle.com/jaideepvalani" target="_blank">@jaideepvalani</a> <a href="https://www.kaggle.com/rohitsingh9990" target="_blank">@rohitsingh9990</a> <a href="https://www.kaggle.com/mori123" target="_blank">@mori123</a> <a href="https://www.kaggle.com/phoenix9032" target="_blank">@phoenix9032</a> for their contribution and working hard till the last minute.<br> I'll try to briefly summarize what worked and what didn't.</p> <p>I spent most of my time in feature engineering and the rest on making CV and LB reliable.</p> <h1>Link to our inference notebook:</h1> <p><a href="https://www.kaggle.com/code/chaudharypriyanshu/light-automl-lgbm-22/notebook" target="_blank">https://www.kaggle.com/code/chaudharypriyanshu/light-automl-lgbm-22/notebook</a></p> <h1>Feature Engineering</h1> <p>Most of the features are based on the question<br> <strong>Q.</strong> "what exactly the evaluator is going to see ?"<br> <strong>A.</strong> The essay text. So the most important features will be derived from it's structure.</p> <p>So I created features with following categories:</p> <ol> <li>Paragraph lengths (first, second, third,…..): word counts and characters length , some cumulative length features.</li> <li>Sentence lengths (first, second, third): word counts and characters length </li> <li>Total capital letters in the text. </li> <li>total nouns in the text (Total capital letters - total_sentences).</li> <li>Count of words that are added unsequentially.</li> <li>How many sentences start with same word (first word, first two word, first three words).</li> <li>Time window based features : how many words added before (7/15/22/35) minutes.</li> <li>Total commas count.</li> <li>Tfidf features ngram (1,1), A total of 20 features.</li> <li>More Punctuation statistics. Mostly count based.</li> <li>Total questions and exclamation marks. </li> <li>Total number of more than 1 character replacements. </li> <li>Cursor positions based features, i used cursor positions that were there in reconstructed essay this way i found what was the actual standard deviation. how many times the author related the cursor with more than 1 positions.</li> <li>Total action time to write words with certain length.</li> <li>Rest of the features were adopted from public notebooks. 
</li> </ol> <h1>Modelling</h1> <p>For modelling our team utilised 4 Neural Net, 1 1DCNN based, 3 Trees based models.</p> <h3>Neural Networks</h3> <p>Most of the neural networks are adopted from lightautoML shared in public notebook <a href="https://www.kaggle.com/code/alexryzhkov/lgbm-and-nn-on-sentences" target="_blank">here</a>.<br> The architectures we used are:</p> <h5>CV LB scores</h5> <table> <thead> <tr> <th>Models</th> <th>cv</th> <th>LB</th> </tr> </thead> <tbody> <tr> <td>MLP</td> <td>0.589</td> <td>-</td> </tr> <tr> <td>Denselight</td> <td>0.590</td> <td>-</td> </tr> <tr> <td>Autoint</td> <td>0.599</td> <td>-</td> </tr> <tr> <td>NODE</td> <td>0.593</td> <td>-</td> </tr> <tr> <td>1DCNN</td> <td>0.602</td> <td>0.592</td> </tr> <tr> <td>Ensemble</td> <td>0.5868</td> <td>0.582</td> </tr> </tbody> </table> <h5>Training Strategy:</h5> <p>1) Light autoML models are trained for 10 epochs<br> 2) Since the results were a bit unstable, i used Stochastic weighted averaging and used best 3 validation scores (early stopping is used).<br> 3) Since the CV could have been over optimistic i have ensembles them separately from the models that were not using Early stopping.</p> <h3>Gradient Boosters:</h3> <h5>CV LB scores</h5> <table> <thead> <tr> <th>Models</th> <th>cv</th> <th>LB</th> </tr> </thead> <tbody> <tr> <td>LGBM</td> <td>0.598</td> <td>0.580</td> </tr> <tr> <td>CATBoost</td> <td>0.6007</td> <td>-</td> </tr> <tr> <td>XGBoost</td> <td>0.6001</td> <td>-</td> </tr> <tr> <td>Ensemble</td> <td>0.5963</td> <td>0.582</td> </tr> </tbody> </table> <h5>Training Strategy:</h5> <p>1) Trained for 1500 iterations, no early stopping is used.</p> <h1>Final CV setup</h1> <p>1) Stratified KFold by scores.<br> 2) Trained for 5 seeds [42,2022,7,4,1]<br> 3) Seeds for all models and CV are kept same throught</p> <h1>Ensembling</h1> <p>1) we independently optimised the weights of NN and Trees and gave equal weights to both.<br> 2) Optuna is used to determine the weight of each model.<br> Final CV score was 0.5858, LB = 0.578, Private = 0.560</p> <h1>What we did to avoid shake down?</h1> <ol> <li>Use same seeds for ensemble</li> <li>Not overfitting on cv, instead we tried to improve both cv and lb and average of both.</li> <li>Added diversity by using various features sets since models trained on different features will deliver diverse results.</li> <li>Separately ensembled the NN(used early stopping) and gradient boosters (do not use early stopping)).</li> <li>Reducing the difference between CV-LB. 
This was the most important.</li> <li>Training all models on the same seeds.</li> </ol> <p>As a result:</p> <ul> <li>By adopting this strategy, 12 of our top 15 reliable and up-to-date submissions were in the gold range.</li> <li>All three final selected subs were in the gold range.</li> <li>Best CV sub = best private LB sub.</li> </ul> <h1>What didn't work</h1> <p>1) TF-IDF with n-grams larger than 1.<br> 2) Data from past competitions such as Feedback.<br> 3) Word2vec features.<br> 4) SVD + TF-IDF.<br> 5) Timing-window-based essay structure features.</p> <h1>Notebooks that helped us throughout the competition</h1> <p><strong>Please give them an upvote</strong><br> 1) <a href="https://www.kaggle.com/code/hiarsl/feature-engineering-sentence-paragraph-features" target="_blank">Feature Engineering: Sentence &amp; paragraph features</a><br> 2) <a href="https://www.kaggle.com/code/alexryzhkov/lgbm-and-nn-on-sentences" target="_blank">LGBM and NN on sentences</a><br> 3) <a href="https://www.kaggle.com/code/awqatak/silver-bullet-single-model-165-features" target="_blank">Silver Bullet | Single Model | 165 Features</a><br> 4) <a href="https://www.kaggle.com/code/abdullahmeda/enter-ing-the-timeseries-space-sec-3-new-aggs" target="_blank">https://www.kaggle.com/code/abdullahmeda/enter-ing-the-timeseries-space-sec-3-new-aggs</a></p>
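<p>A minimal sketch of using Optuna to tune the ensemble weights on out-of-fold predictions, as mentioned in the ensembling section; the RMSE objective, trial count and normalisation scheme are assumptions.</p>
<pre><code>import numpy as np
import optuna
from sklearn.metrics import mean_squared_error

def optimise_blend_weights(oof_preds, y_true, n_trials=200):
    # oof_preds: dict mapping model name to its out-of-fold prediction array.
    names = list(oof_preds)

    def objective(trial):
        w = np.array([trial.suggest_float(n, 0.0, 1.0) for n in names])
        w = w / (w.sum() + 1e-9)                    # normalise weights to sum to 1
        blend = sum(wi * oof_preds[n] for wi, n in zip(w, names))
        return mean_squared_error(y_true, blend, squared=False)

    study = optuna.create_study(direction="minimize")
    study.optimize(objective, n_trials=n_trials)
    best = np.array([study.best_params[n] for n in names])
    return dict(zip(names, best / best.sum()))
</code></pre>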
Linking Writing Processes to Writing Quality
[14th place solution] Keras ensemble
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Linking Writing Processes to Writing Quality <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to thank the organizers for organizing this competition.<br> Thanks also to all the kagglers who shared information in notebooks and discussions.<br> My model could not score better than the public notebook, but by ensemble I was able to improve the public notebook's score a bit.<br> This is my first medal and I am very happy!</p> <h1>Preprocessing</h1> <p>I did some data cleaning.<br> For example, during holding down SHIFT, even if nothing changed, it seemed be counted as a new event about every 30ms, so combined them into one.</p> <h1>Feature engineering(similar to the public notebook)</h1> <p>I created 69 features from essay, 90 features from train_logs aggregation and binning on idol_time and essay-related columns.<br> (really appreciate <a href="https://www.kaggle.com/kawaiicoderuwu" target="_blank">@kawaiicoderuwu</a> for shareing <a href="https://www.kaggle.com/code/kawaiicoderuwu/essay-contructor" target="_blank">essay constructor</a>)<br> I used nltk.tokenize to split the essay and then addressed grammatical errors such as sentences connected or single symbol left.</p> <h1>Models</h1> <p>ensemble of LightGBM, XGBoost and Keras<br> 10-KFold, tuned with optuna<br> (single LightGBM | CV: 0.597, Public LB: 0.593, Private LB: 0.574)<br> (single XGBoost | CV: 0.597, didn't submit)</p> <h1>Keras model</h1> <p>simple 3-4 layers model<br> Score was scaled between (-1, 1) and output by softsign or tanh.<br> Performance of single keras was poor, but ensemble with GBDT model improved LB scores.<br> (single keras | CV: 0.620, Public LB: 0.605, Private LB: 0.577)<br> Stacking and classification did not work.<br> Since few people were using keras, it was probably just luck, probably not originally suited for this competition.</p> <h1>Ensemble</h1> <p>I used the following notebook for ensemble.<br> Thanks so much for sharing!<br> <a href="https://www.kaggle.com/code/kononenko/lgbm-x2-nn-fusion" target="_blank">LGBM (X2) + NN + Fusion</a> by <a href="https://www.kaggle.com/kononenko" target="_blank">OLEKSIY KONONENKO</a><br> <a href="https://www.kaggle.com/code/yunsuxiaozi/writing-quality-fusion-notebook" target="_blank">Writing Quality(fusion_notebook)</a> by <a href="https://www.kaggle.com/yunsuxiaozi" target="_blank">YUNSUXIAOZI</a><br> <a href="https://www.kaggle.com/code/cody11null/lgbm-x2-nn" target="_blank">LGBM (X2) + NN</a> by <a href="https://www.kaggle.com/cody11null" target="_blank">CODY_NULL</a></p> <p>Finally I blended my model and (LGBM (X2) + NN + Fusion) in 3:7 or 4:6 ratio.<br> My final Private LB Score was 0.564197, blended (keras_softsign * 0.25) + (keras_tanh * 0.05) + (LGBM (X2) + NN + Fusion * 0.7)</p> <h1> </h1> <p>Thank you to everyone who shared information in notebook and discussions!<br> Thank you for reading!</p>
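<p>A minimal Keras sketch of the idea described in the Keras model section: rescale the target score to (-1, 1), predict it through a softsign (or tanh) output, and map the prediction back afterwards. Layer sizes and dropout are assumptions; only the output activation and the scaling follow the write-up.</p>
<pre><code>import tensorflow as tf

def scale_score(y, lo=0.5, hi=6.0):
    return 2.0 * (y - lo) / (hi - lo) - 1.0          # map [0.5, 6.0] to [-1, 1]

def unscale_score(y, lo=0.5, hi=6.0):
    return (y + 1.0) * (hi - lo) / 2.0 + lo

def build_model(n_features, activation="softsign"):
    inputs = tf.keras.Input(shape=(n_features,))
    x = tf.keras.layers.Dense(256, activation="relu")(inputs)
    x = tf.keras.layers.Dropout(0.2)(x)
    x = tf.keras.layers.Dense(64, activation="relu")(x)
    outputs = tf.keras.layers.Dense(1, activation=activation)(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model
</code></pre>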
UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN)
9th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h3>Only use Competition Data, no External Data</h3> <h2><strong>Split WSI and TMA:</strong></h2> <p>WSI images have black pixels (all zeros in all three channels), while TMA images do not. Therefore, if both the image width and height are less than 6000, but the area of black pixels is greater than 5% of the image (all WSI images in the training data have more than 10% black pixels), it is classified as WSI; otherwise, it is classified as TMA.</p> <h2><strong>Make tile :</strong></h2> <p>First, reduce the size of the WSI by 0.33 times, and then divide it into 512*512 tiles. Subsequently, categorize these tiles into three levels based on the presence of bad pixels, identified by the condition "np.sum(np.ptp(tile, axis=2) &lt; 20)".<br> Inference make tile code:</p> <pre><code> (): path= p_mask= image=cv2.imread(path) image=cv2.resize(image,(,),fx=scale,fy=scale,interpolation=cv2.INTER_AREA) mask=np.load(p_mask) os.makedirs(,exist_ok=) count= (count&lt;): idxs=[(y,x) y (,image.shape[]//) x (,image.shape[]//)] random.shuffle(idxs) k, (y, x) (idxs): tile=image[y*:(y+)*,x*:(x+)*,:] bg_count=np.(np.ptp(tile,axis=)&lt;) ((bg_count/(*))&lt;=): cv2.imwrite(,tile) count+= count&gt;=: count&lt;: idxs=[(y,x) y (,image.shape[]//) x (,image.shape[]//)] random.shuffle(idxs) k, (y, x) (idxs): tile=image[y*:(y+)*,x*:(x+)*,:] bg_count=np.(np.ptp(tile,axis=)&lt;) ((bg_count/(*))&lt;=)&amp;((bg_count/(*))&gt;): cv2.imwrite(,tile) count+= count&gt;=: count&lt;: idxs=[(y,x) y (,image.shape[]//) x (,image.shape[]//)] random.shuffle(idxs) k, (y, x) (idxs): tile=image[y*:(y+)*,x*:(x+)*,:] bg_count=np.(np.ptp(tile,axis=)&lt;) ((bg_count/(*))&lt;=)&amp;((bg_count/(*))&gt;): cv2.imwrite(,tile) count+= count&gt;=: </code></pre> <h4>Training tile:</h4> <p><strong>Step 1:</strong>Using all tiles if bg_count/area less 0.5<br> <strong>Step 2:</strong>If tiles of WSI image&lt;50,add ((bg_count/area) between 0.5-0.65) tiles until there are 50 tiles.<br> <strong>Step 3:</strong>If tiles of WSI image&lt;20,add ((bg_count/area) between 0.65-0.75) tiles until there are 20 tiles.</p> <h2>Model Training:</h2> <p>Only use WSI tiles. 
Randomly select 6 tiles from each image for training in every batch.<br> Loss Function: Binary Cross-Entropy (BCE)<br> <strong>Step 1:</strong> Normal Training<br> <strong>Step 2:</strong><br> Utilize the results from Step 1 to generate auxiliary labels.If the predicted value for true label is greater than 0.3, set the auxiliary label to 1; otherwise, set it to 0.<br> Re-train the model without using the weights from Step 1.<br> Loss function: Label loss (BCE) + 0.3 * Auxiliary Label loss (BCE)<br> Learning rate: 2e-4<br> <strong>Step 3:</strong> Fine-tuning with Step 2 Weights<br> Further refine the model using the weights obtained from Step 2.<br> Loss function: Label loss (BCE) + 0.15 * Auxiliary Label loss (BCE)<br> Learning rate: 5e-5</p> <h4>Models with different backbone:</h4> <p>efficientnetb4,efficientnet_v2s,maxvit_tiny (The model settings of different backbones are slightly different.)</p> <h3>WSI</h3> <p>Use model to predict tiles.</p> <h3>Wsi tile ensemble:</h3> <pre><code>tile_df[]=np.(tile_df[[,,,,]],axis=) tile_df[]=np.argmax(tile_df[[,,,,]].values,axis=) tile_df=tile_df[[,,,]].groupby([,])[[,]].mean().reset_index() idx=tile_df.groupby([])[].idxmax() wsi_df=tile_df.loc[idx1].reset_index(drop=) </code></pre> <h3>Outliers(WSI):</h3> <p>The predicted mean value of aux_label&lt;0.5(The score is almost the same as not predict "Other",maybe+0.01)</p> <h2>tma:</h2> <p><strong>Step1.</strong>Crop tma</p> <pre><code> (): ks=((img.shape[],img.shape[])//,) mask=(img.(axis=)-img.(axis=))&gt; kernel = np.ones((ks, ks),np.uint8) mask=cv2.erode(mask.astype(np.uint8),kernel) nonzero_pixels = np.column_stack(np.where(mask &gt; )) (nonzero_pixels.size)&lt;(img.size//): img : min_y, min_x = np.(nonzero_pixels, axis=) max_y, max_x = np.(nonzero_pixels, axis=) img[(,min_y-ks):max_y+ks+,(,min_x-ks):max_x+ks+,:] </code></pre> <p><strong>Step2.</strong>Resize to 512*512(The size of tma *0.33*0.5~512, so we can directly resize to 512 to predict)<br> <strong>Step3.</strong>Using wsi training model to predict</p> <h3>Outliers(tma):</h3> <p>The predicted value of aux_label &lt;0.5 (compared to tma without predict "Other", public score +0.03, private score +0.06)</p> <h2>Ensemble different models:</h2> <p>Voting(Compared with a single model, maybe only +0.01)</p> <h2>Maybe not work:</h2> <p>segmentation</p>
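<p>A short sketch of the WSI vs. TMA split heuristic described at the top of the write-up; the 6000-pixel and 5% thresholds come from the text, the surrounding code is an assumption.</p>
<pre><code>import cv2
import numpy as np

def is_wsi(image_path, size_thr=6000, black_frac_thr=0.05):
    img = cv2.imread(image_path)
    h, w = img.shape[:2]
    if h &gt;= size_thr or w &gt;= size_thr:
        return True                               # large image: whole-slide image
    # Small image: call it WSI only if it has a sizeable pure-black background.
    black_frac = np.all(img == 0, axis=2).mean()
    return black_frac &gt; black_frac_thr            # otherwise it is a TMA
</code></pre>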
Linking Writing Processes to Writing Quality
[3rd place solution] Trust CV is all you need.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Linking Writing Processes to Writing Quality <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I want to express my gratitude to the host for organizing this great competition, and I feel incredibly fortunate to have experienced such a significant shake-up!!!!!!😄<br> I chose to trust my cv, which is about 0.580, and as I expected, the shake-up happened. While I anticipated some shake-up, I never expected to climb to the 3rd place.</p> <p>The key component of my method is to <strong>align context features generated using a pre-trained Deberta-based regressor</strong>. </p> <p>Firstly, using the reconstructed essays <a href="https://www.kaggle.com/datasets/hiarsl/writing-quality-challenge-constructed-essays" target="_blank">e.g., Essays reconstructed</a>, I trained a Deberta regressor by adding 3 nn layers at the bottom of Deberta. This procedure follows the NLP process like <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/data" target="_blank">Feedback Prize - English Language Learning</a>, except that the characters are replaced by "anonymous" character q. Please find the <a href="https://www.kaggle.com/code/leonshangguan/training-deberta-based-regressor-as-fe-extractor?scriptVersionId=158377154" target="_blank">code</a> for more details. Training the Deberta-based regressor can achieve a cv about 0.75.</p> <p>After that, I dropped the last layer of the Deberta regressor, and used the second last layer's output, which is a 128-dimensional vector as the <strong>context feature</strong> extracted by the language model (i.e., Deberta).</p> <p>Based on the features from public notebooks that performed well on the public leaderboard (mainly from <a href="https://www.kaggle.com/code/yunsuxiaozi/writing-quality-fusion-notebook" target="_blank">Writing Quality(fusion_notebook)</a> by <a href="https://www.kaggle.com/yunsuxiaozi" target="_blank">@yunsuxiaozi</a>), I trained lightgbm/xgboost/catboost and save the feature importance, then selected top 64/128/256 most important features from each model and take their union.</p> <p>Finally, the selected features from the public notebook and my context features generated using Deberta were concatenated. I trained <a href="https://www.kaggle.com/code/leonshangguan/all-fe-of-tree-models-training-pipeline-saved-fe?scriptVersionId=158376408" target="_blank">lightgbm/xgboost/catboost models</a> which give cv around 0.588 to 0.595, and <a href="https://www.kaggle.com/code/leonshangguan/all-fe-of-nn-models-training-pipeline-saved-fe?scriptVersionId=158376311" target="_blank">nn models (mlp/autoint/denselight)</a> which have cv around 0.580-0.590. For the nn models, thanks to <a href="https://www.kaggle.com/code/alexryzhkov/lightautoml-nn-test" target="_blank">this notebook</a> by <a href="https://www.kaggle.com/alexryzhkov" target="_blank">@alexryzhkov</a> .</p> <p>Due to the time limit, I only trained a Deberta-base model as the context extractor, but I believe other language models (e.g., Bert) and even llms (e.g., llama) are worth trying as well. As Deberta and nn models require GPU resources, I didn't consider the efficient track. 
</p> <p><strong>All code are listed below:</strong><br> Feature generation and processing: <a href="https://www.kaggle.com/code/leonshangguan/gpu-all-in-training-with-context-fe-generation/notebook" target="_blank">GPU all in training with context fe generation</a> The notebook shows feature engineering and tree-based model training. For convenience, I saved the processed features to a .csv and train nn/tree models separately.</p> <p>Training: For each model, I train with 10 to 15 different random seeds and 5 to 10 folds (StratifiedKFold). <br> Deberta-based regressor: <a href="https://www.kaggle.com/code/leonshangguan/training-deberta-based-regressor-as-fe-extractor?scriptVersionId=158377154" target="_blank">Training Deberta-based regressor as fe extractor</a><br> nn models: <a href="https://www.kaggle.com/code/leonshangguan/all-fe-of-nn-models-training-pipeline-saved-fe?scriptVersionId=158376311" target="_blank">All fe of nn models training pipeline saved fe</a><br> tree models: <a href="https://www.kaggle.com/code/leonshangguan/all-fe-of-tree-models-training-pipeline-saved-fe?scriptVersionId=158376408" target="_blank">All fe of tree models training pipeline saved fe</a></p> <p>Inference: <a href="https://www.kaggle.com/code/leonshangguan/gpu-all-fe-inference?scriptVersionId=158374031" target="_blank">GPU all fe inference</a> I didn't tune the weights for each model, just some random numbers based on cv. A simple postprocessing <code>scores[scores&lt;0.5] = 0.5 scores[scores&gt;6.0] = 6.0</code> was conducted before submission.</p> <p><strong>Thank you all and feel free to comment if you have any questions</strong></p>
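<p>A hedged sketch of the "DeBERTa regressor as a feature extractor" idea: a small head is trained on top of DeBERTa for score regression, then its last layer is dropped and the 128-dimensional penultimate output is used as the context feature. The pooling choice and the first head dimension are assumptions; only the 128-dimensional feature size follows the write-up.</p>
<pre><code>import torch.nn as nn
from transformers import AutoModel

class DebertaRegressor(nn.Module):
    def __init__(self, backbone="microsoft/deberta-v3-base"):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(backbone)
        hidden = self.backbone.config.hidden_size
        self.head = nn.Sequential(
            nn.Linear(hidden, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, input_ids, attention_mask, return_features=False):
        out = self.backbone(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]      # first-token pooling (assumption)
        feats = self.head[:-1](pooled)            # 128-d context features
        if return_features:
            return feats                          # concatenate these with the tabular features
        return self.head[-1](feats)               # score prediction used during training
</code></pre>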
Linking Writing Processes to Writing Quality
[3rd place solution] Blend MLM pretrained DeBERTa & GBM
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Linking Writing Processes to Writing Quality <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to kaggle and everyone involved for hosting this competition. It was interesting to see how good we can predict essay scores just from keystroke information. Special thanks to <a href="https://www.kaggle.com/kawaiicoderuwu" target="_blank">@kawaiicoderuwu</a> for sharing the <a href="https://www.kaggle.com/code/kawaiicoderuwu/essay-contructor" target="_blank">essay constructor</a> we based much of our solution on.</p> <h3>TLDR</h3> <p>Our solution is an ensemble of two parts. Firstly, we build a Gradient Boosted Trees (GBT) ensemble based on popular public features on essay level. Secondly, we build an ensemble of deberta models using the final essay text, which have been mlm-pretrained on persuade corpus where we obscured the essays similar to the competitions train set. We weight GBT-Ensemble and deberta-ensemble by a ratio of 40/60.</p> <h3>Cross validation</h3> <p>We used 8fold cross validation and got a so-so correlation between CV and LB. One important insight is that GBM models have a way better LB/ CV ratio compared to deberta models. Or in other words, Deberta worked better on CV, while GBM were relatively bad on CV, but good on LB. That indicates a slight domain shift between LB and train set. Our guess is the essays might be split by topic or student year. <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F213493%2Ff9a9768c6bbb99469e6ba717be941bbb%2Fcv_lb.png?generation=1704882140038566&amp;alt=media"></p> <h3>Data preprocessing</h3> <p>Similar to most teams we extracted the final text for each essay, and trained a model to analyse it. Deberta models were quite strong when using the final obscured essay text. We found significant improvement by replacing the obscurification character <code>q</code> with <code>i</code> or <code>X</code>. For both cases one can understand that the pretrained debertas have a better tokenization compared to using <code>q</code>. For example a lot of texts the debertas originally have been trained on, have explicit tokens for i, ii, iii, or X, XX, XXX. This also meant the tokenized sequence was shorter than using <code>q</code>. Following that rationale, we also trained a custom tokenizer. Models based on that tokenizer helped the ensemble a bit.</p> <h3>Model</h3> <h4>GBT ensemble</h4> <p>For all models within the GBT ensemble we used the same <a href="https://www.kaggle.com/code/awqatak/silver-bullet-single-model-165-features" target="_blank">165 features</a> as presented in the public kernel by <a href="https://www.kaggle.com/awqatak" target="_blank">@awqatak</a> (who deserves another special thanks 🙏). We then trained a few models similar to public kernels on those features.</p> <p>The final components were as following</p> <ul> <li>LGB (params tuned with optuna)</li> <li>XGB (params tuned with optuna)</li> <li>CatBoost (params from public kernel)</li> <li>Lightautoml (params from public kernel)</li> <li>Shallow NN</li> </ul> <h4>Deberta ensemble</h4> <p>A key component of our solution is an ensemble of transformer models (Deberta) trained on the reconstructed essay text. 
</p> <h5>Training procedure:</h5> <p>We found that pretraining the transformer on the persuade corpus with MLM-objective significantly improves CV and LB. For that, essays in the persuade corpus had to be obscured similar to the essays resulting from the train set. Then in a second step we fine-tuned those models on the training data, adding additional features like cursor position, etc and using a Squeezeformer layer to derive semantic features. Adding further keystroke features did not help, and we needed heavy dropout/augmentation on the three we added.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F213493%2F5b7318584e25c3af0099f99b368606ab%2FScreenshot%202024-01-10%20at%2010.28.21.png?generation=1704882528155672&amp;alt=media"></p> <p>The final components were as following</p> <ul> <li>deberta-v3-base trained with q replaced by i</li> <li>deberta-v3-large trained with q replaced by i (with first 12 layers frozen in finetuning, to avoid overfit)</li> <li>deberta-v3-base trained with q replaced by X</li> <li>deberta-v3-base trained with custom spm tokenizer </li> </ul> <h3>Postprocessing</h3> <p>For some models we clipped predictions at [0.5,6.] but it did not really make a difference. </p> <h3>Ensembling</h3> <p>Within the Deberta ensemble as well as within the GBM ensemble we used positive Ridge Regression on Out-of-fold (OOF) predictions to determine blending weights. The final weighting of both parts was done manually, and represented if we trust CV or LB more. For final submissions we selected 50/50 kernel and one 40/60 kernel. </p> <h3>What did not help</h3> <ul> <li>Using the deleted text</li> <li>Stacking</li> <li>Adding more keystroke features to the deberta based model. </li> <li>More squeezeformer layers</li> </ul> <h3>Used tools/ repos</h3> <ul> <li>Pytorch</li> <li>Huggingface</li> <li>Optuna</li> <li>Neptune.ai was our MLOps stack to track, compare and share models. </li> </ul> <h3>Code</h3> <p>Training code repo <a href="https://github.com/darraghdog/kaggle-linking-writing-3rd-place-solution" target="_blank">here</a>.<br> Inference script <a href="https://www.kaggle.com/code/darraghdog/3rd-place-lwq/notebook?scriptVersionId=160142140" target="_blank">here</a>.</p>
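<p>As a small illustration of the tokenization point above (this is not the team's code; the toy string and the public <code>microsoft/deberta-v3-base</code> checkpoint are assumptions), one can compare token counts after replacing the obscured character:</p> <pre><code>from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")

# toy "reconstructed" essay with every letter obscured by q
obscured = "qqq qqqqq qq qqq qqqq, qqq qqqqqqq qqqq qqqqq."

for repl in ["q", "i", "X"]:
    n_tokens = len(tok(obscured.replace("q", repl))["input_ids"])
    print(repl, n_tokens)
</code></pre> <p>Runs of "i" or "X" tend to map onto vocabulary entries the pretrained model already knows (i, ii, iii, X, XX, …), which is why the replaced text tokenizes into fewer pieces.</p>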
SenNet + HOA - Hacking the Human Vasculature in 3D
110th Place, Did Not Work: Splitting Thin/Thick Vessels Approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>110th Place, Did Not Work: Splitting Thin/Thick Vessels Approach</h1> <p>Thank you for hosting the competition.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2930242%2F6d623087b77192ed018bb4216e81f873%2Fvessel.png?generation=1707318307890041&amp;alt=media"></p> <h2>Overview</h2> <ul> <li>Employed both 2D Unet and 3D Unet approaches.</li> <li>Expectation: The 2D model was expected to capture larger vessels, while the 3D model should have been better at capturing smaller vessels.</li> <li>2D Unet 512 x 512 (xy,yz,xz)<ul> <li>backbone: SE_resnext50</li></ul></li> <li>3D Unet 128 x 128 x 128<ul> <li>backbone: Resnet18</li></ul></li> </ul> <h2>Training</h2> <ul> <li>Pre-training with all data</li> <li>Fine-tuning with dense dataset (kidney1&amp;3)<ul> <li>augmentation with monai API for multi mask</li></ul></li> </ul> <pre><code># note: the dictionary keys and the non-numeric keyword arguments were lost from this
# snippet; the key names and keep_size/mode values below are reconstructed assumptions
import numpy as np
from monai.transforms import (RandRotated, RandFlipd, RandGridDistortiond,
                              RandZoomd, RandAdjustContrastd)

keys = ["image", "mask", "thin_mask", "thick_mask"]
aug_list = [
    RandRotated(keys=keys, range_x=np.pi/180 * 90, range_y=np.pi/180 * 90,
                range_z=np.pi/180 * 90, prob=1, keep_size=True),
    RandFlipd(keys=keys, prob=1),
    RandGridDistortiond(keys=keys, prob=1, distort_limit=(-0.03, 0.03), mode="nearest"),
    RandZoomd(keys=keys, min_zoom=1, max_zoom=6/5, prob=1, keep_size=True),
    RandAdjustContrastd(keys=["image"], prob=1, gamma=(0.8, 2.5)),
]
</code></pre> <h2>Data preprocessing</h2> <ul> <li>Thin/thick vessel split approach<ul> <li>Hypothesis: The premise was that a difference in contrast, dependent on vessel thickness, would result in unstable thresholding. This challenge was similar to issues discussed in papers on retinal vessel detection.</li> <li>Reference: <a href="https://www.frontiersin.org/articles/10.3389/fbioe.2021.697915/full" target="_blank">https://www.frontiersin.org/articles/10.3389/fbioe.2021.697915/full</a><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2930242%2Fadb0df2308c1f3db2936a04bfd945f3e%2Ffbioe-09-697915-g002.jpg?generation=1707318581847217&amp;alt=media"></li> <li>But overfitted to CV…</li> <li>Source code: 3d splitting code</li></ul></li> </ul> <pre><code>import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize_3d
from scipy.spatial import cKDTree

# estimate a per-voxel vessel thickness: distance transform sampled along the skeleton
dist_transform_varied_thickness = distance_transform_edt(mask_data)
skeleton_varied_thickness = skeletonize_3d(mask_data)
z_varied_thickness, y_varied_thickness, x_varied_thickness = np.where(skeleton_varied_thickness)
num_points = len(x_varied_thickness)
thicknesses_varied_thickness = np.array(
    [dist_transform_varied_thickness[z, y, x] * 2
     for z, y, x in zip(z_varied_thickness, y_varied_thickness, x_varied_thickness)])

# propagate the skeleton thickness to every mask voxel via its nearest skeleton point
skeleton_coords_3d = np.column_stack([z_varied_thickness, y_varied_thickness, x_varied_thickness])
skeleton_thicknesses_3d = thicknesses_varied_thickness
skeleton_tree_3d = cKDTree(skeleton_coords_3d)
mask_coords_3d = np.column_stack(np.where(mask_data))
distances_3d, indices_3d = skeleton_tree_3d.query(mask_coords_3d)
colored_mask_3d = np.zeros_like(mask_data, dtype=float)
for idx, coord in enumerate(mask_coords_3d):
    colored_mask_3d[coord[0], coord[1], coord[2]] = skeleton_thicknesses_3d[indices_3d[idx]]

# split the vessel mask into thin and thick parts, each unioned with the skeleton
thickess = colored_mask_3d
skeleton = skeleton_varied_thickness
thickess_mask = np.zeros_like(thickess, dtype=np.uint8)
thickess_mask[thickess&gt;0] = 1
thickess_mask[thickess&gt;5] = 2
thick_mask = np.zeros_like(thickess, dtype=np.uint8)
thick_mask[thickess_mask==2] = 1
thick_mask = thick_mask | skeleton
thin_mask = np.zeros_like(thickess, dtype=np.uint8)
thin_mask[thickess_mask==1] = 1
thin_mask = thin_mask | skeleton
</code></pre> <h2>Inference</h2> <ul> <li>or ensemble ( 2D and 3D )</li> <li>Threshold adjusted with kidney2</li> </ul> <h1>post processing</h1> <ul> <li>merge each estimation with its own threshold</li> </ul> <pre><code># note: the variable names, channel indices and threshold scale were lost from this
# snippet; the version below is a reconstruction that keeps the original structure
import torch

def post_processing(y_pred: torch.Tensor, thr=[0.5, 0.5, 0.5], k=1.0):
    y_pred0 = y_pred[0, :] * k      # whole-vessel channel, rescaled by k
    y_pred1 = y_pred[1, :]          # thin-vessel channel
    y_pred2 = y_pred[2, :]          # thick-vessel channel
    y_pred0 = (y_pred0 &gt; thr[0])
    y_pred1 = (y_pred1 &gt; thr[1])
    y_pred2 = (y_pred2 &gt; thr[2])
    y_pred = y_pred0 | y_pred1 | y_pred2
    return y_pred
</code></pre>
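<p>The write-up does not include the model definition itself. Assuming the common <code>segmentation_models_pytorch</code> package and the three-channel mask layout used in the post-processing above (both assumptions, not stated in the original), a minimal sketch of the 2D part could look like this:</p> <pre><code>import segmentation_models_pytorch as smp

# 2D U-Net with the SE_resnext50 backbone mentioned in the overview
model_2d = smp.Unet(
    encoder_name="se_resnext50_32x4d",
    encoder_weights="imagenet",
    in_channels=1,    # one grayscale slice, taken along the xy / yz / xz views
    classes=3,        # whole / thin / thick vessel channels, matching post_processing
)
</code></pre>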
LLM - Detect AI Generated Text
[7th Place Solution] Generate Data with Non-Instruction-Tuned Models
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2><strong>Forewords</strong></h2> <p>Firstly, a big thank you to Kaggle, the Learning Agency Lab, and Vanderbilt University for orchestrating this interesting competition. Hats off to you all! 🎩 This marks my inaugural Kaggle competition, and the learning curve has been quite the journey.<br> Early in the competition, I had reservations about how well the TF-IDF approach might translate to real-world scenarios. This skepticism led me to delve into LM-based methods and resist the urge to switch back. Little did I anticipate the leaderboard shake-up would be this significant and the outcome has genuinely taken me by surprise.</p> <h2><strong>Final Solution</strong></h2> <p>My "magic sauce" is to use only non-instruction-tuned models to generate data. If we look at the 3 generated essays provided in this competition, they seem increasingly human-like. More importantly, none of them has the generic ChatGPT answer format. I believe this was a big hint by the host. <br> After reading <a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/465882" target="_blank">James' post</a> and <a href="https://www.lesswrong.com/posts/t9svvNPNmFf5Qa3TA/mysteries-of-mode-collapse-due-to-rlhf#The_one_answer_is_that_there_is_no_one_answer" target="_blank">this excellent article</a> (tl;dr instruction tuning has an adverse effect on response diversity), I generated around 400k non-essay texts from slimpajama and 25k essays with persuade 2.0. Models used are vanilla Falcon-7B, Mistral-7B, and Llama2-7B with temperatures ranging from 0.8 to 1.2, top p values from 0.8 to 0.98, and frequency penalties from 0 to 0.3.<br> Then I simply finetuned a single DeBERTa-v3-large model with this dataset at 512 context length and inference at 1024 length. One training run takes around 8 hours on a 3090. I tried training at 1024 length, but the results were worse. I couldn't find much information on why this is the case, so insights on this matter would be appreciated.<br> In my final solution, predictions falling within the 40th to 60th percentile were replaced with outcomes from a basic TF-IDF + SGD model. 
This tweak resulted in a modest increase in my CV, but had minimal impact on LB.</p> <table> <thead> <tr> <th>Model</th> <th>Dataset</th> <th>Public LB</th> <th>Private LB</th> </tr> </thead> <tbody> <tr> <td>DeBERTa-v3-base-512</td> <td>only persuade</td> <td>0.869</td> <td>0.875</td> </tr> <tr> <td>DeBERTa-v3-base-512</td> <td>persuade &amp; slimpajama</td> <td>0.921</td> <td>0.920</td> </tr> <tr> <td>DeBERTa-v3-base-1024</td> <td>persuade &amp; slimpajama</td> <td>0.910</td> <td>0.901</td> </tr> <tr> <td>DeBERTa-v3-large-512</td> <td>persuade &amp; slimpajama</td> <td>0.942</td> <td>0.965</td> </tr> <tr> <td>DeBERTa-v3-large-1024</td> <td>persuade &amp; slimpajama</td> <td>0.922</td> <td>0.957</td> </tr> <tr> <td>DeBERTa-v3-large-512 + TF-IDF</td> <td>persuade &amp; slimpajama</td> <td>0.942</td> <td>0.965</td> </tr> </tbody> </table> <h2><strong>Prompt Engineering</strong></h2> <p>My prompt for generating essays is as follows:</p> <pre><code>(100 points) [essay instruction] Your text should be around 500 words.\n\n\n\n Name: [random name based on gender] Grade: [grade] Date: [random date] [essay title] [one cleaned essay from persuade 2.0]\n\n\n\n Name: [another random name with the same gender] Grade: [same grade] Date: [same date] [essay title] </code></pre> <p>Grade and gender information were taken directly from persuade 2.0 dataset. I made 10 generations for each persuade 2.0 essay, and then filtered generations that are too similar to the sample human essay using embedding similarity and Levenstein distance. Generations were then further filtered by length and repetition.</p> <h2><strong>List of Failed Ideas</strong></h2> <ul> <li>Separate essay into list of semantic blocks, embed each block, then train a bi-LSTM on top of it. (score 0.763)</li> <li>Same architecture, but train LSTM on the differences of each pair of neighboring blocks. (score 0.685)</li> <li><a href="https://arxiv.org/pdf/2005.01229.pdf" target="_blank">Robust Encoding</a> to combat character-level adversarial attacks.</li> <li>Sliding window inference and averaging the logits. (score 0.929)</li> <li>MLM pretraining on the test set (10 epochs with deberta-v3-xsmall), then finetune on my dataset. (score 0.689)</li> <li>And a lot more ideas that didn't even pass local CV.</li> </ul> <h2><strong>Dataset and Code</strong></h2> <h3>Training and Dataset Generation Code</h3> <p><a href="https://github.com/Tailen/Kaggle-Detect-AI-Generated-Text-7th-Solution" target="_blank">https://github.com/Tailen/Kaggle-Detect-AI-Generated-Text-7th-Solution</a></p> <h3>Datasets</h3> <p><a href="https://www.kaggle.com/datasets/tailen/persuade-corpus-ai-generated-dataset" target="_blank">https://www.kaggle.com/datasets/tailen/persuade-corpus-ai-generated-dataset</a><br> <a href="https://www.kaggle.com/datasets/tailen/slimpajama-ai-generated-parallel-dataset" target="_blank">https://www.kaggle.com/datasets/tailen/slimpajama-ai-generated-parallel-dataset</a></p> <h3>Inference Code</h3> <p><a href="https://www.kaggle.com/code/tailen/daigt-deberta" target="_blank">https://www.kaggle.com/code/tailen/daigt-deberta</a></p>
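<p>The percentile-based replacement mentioned above is easy to sketch (the variable names here are illustrative, not from the original code):</p> <pre><code>import numpy as np

def blend(deberta_preds, tfidf_sgd_preds):
    # predictions in the DeBERTa model's own 40th-60th percentile band
    # (its least confident region) are swapped for the TF-IDF + SGD predictions
    lo, hi = np.percentile(deberta_preds, [40, 60])
    out = deberta_preds.copy()
    uncertain = (out &gt;= lo) &amp; (out &lt;= hi)
    out[uncertain] = tfidf_sgd_preds[uncertain]
    return out
</code></pre>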
LLM - Detect AI Generated Text
24th solution: DeBERTa & TF-IDF Vectorizer Ensemble
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the host for opening this competition. We learned valuable lessons that will undoubtedly contribute to our growth. </p> <p>Here is the brief solution of ours.<br> We used an ensemble of DeBERTa models &amp; TF-IDF Vectorizers.<br> Details are below.</p> <h1>1. DeBERTa</h1> <h3>CFG</h3> <p>Model: DeBERTa-v3-large<br> epoch: 5~6<br> train max_length: 256~300<br> inference max_length: 512</p> <h3>Data</h3> <ul> <li>Human Data: 300k++</li> <li>AI Data: 300k++ (included self-made data via ChatGPT3.5, 4)</li> <li>Used diverse Data, not only 'RDizzl3_seven'.</li> <li>AI-generated texts with more than 10 typos were all excluded; typos were checked by Pyspellchecker.</li> <li>CV split: Stratified KFOLD - 0.9:0.1 </li> </ul> <h1>2. TF-IDF Vectorizer</h1> <p>Special Thanks to <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a> for providing us the amazing public notebook! It was awesome.</p> <h3>Model</h3> <p>MultinomialNB, SGDClassifier, LGBMClassifier</p> <h3>Data</h3> <ul> <li>DAIGT-V3 by <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a> and we added our self-made data via ChatGPT3.5, 4</li> <li>[llama, ada, babbage, claude] -&gt; we excluded these data since a lot of them had too many typos.</li> <li>Used diverse Data, not only 'RDizzl3_seven'.</li> <li>AI-generated texts with more than 10 typos were all excluded; typos were checked by Pyspellchecker.</li> </ul> <h1>Ensemble</h1> <ul> <li>TF-IDF Vectorizer: MultinomialNB x 1, SGDClassifier x 3, LGBMClassifier x 3</li> <li>Deberta x 4</li> <li>TF-IDF Vectorizer : DeBERTa = 0.65:0.35</li> </ul> <p>Thanks again to the competition hosts and everyone who participated!! Good luck everyone for your next competitions!</p> <p>You can check our submission notebook here: <br> <a href="https://www.kaggle.com/code/kimseunghee/24th-place-notebook-public-0-966-private-0-927" target="_blank">https://www.kaggle.com/code/kimseunghee/24th-place-notebook-public-0-966-private-0-927</a></p> <h3>Team member</h3> <p><a href="https://www.kaggle.com/danielchae" target="_blank">@danielchae</a> <br> <a href="https://www.kaggle.com/kimseunghee" target="_blank">@kimseunghee</a> </p>
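<p>A rough sketch of the typo filter used in both Data sections above (the dataframe and column names are assumptions, and this is not the team's exact code):</p> <pre><code>from spellchecker import SpellChecker

spell = SpellChecker()

def count_typos(text):
    words = [w.strip(".,!?;:\"'()") for w in text.lower().split()]
    words = [w for w in words if w.isalpha()]
    return len(spell.unknown(words))

# keep human texts as they are; drop AI-generated texts with more than 10 typos
df = df[(df["label"] == 0) | (df["text"].map(count_typos) &lt;= 10)]
</code></pre>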
SenNet + HOA - Hacking the Human Vasculature in 3D
13th place solution: 4-panel solo model
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First I would like to thank the host for this wonderful competition, I really enjoyed participating in it! I would also like to thank <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> for his valuable and interesting comments, I learned a lot from this! As my solution was inspired by the <a href="https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/discussion/430618" target="_blank">winning solution</a> of the contrails competition 6 months ago, I want to thank <a href="https://www.kaggle.com/junkoda" target="_blank">@junkoda</a> as well! </p> <p><strong>4-panel image</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5960015%2F1cc75a5e44c828cbb2b05af1a2d91558%2FSenNetHOA23_4Panel_example.png?generation=1707287151880105&amp;alt=media"></p> <p>Inspired by <a href="https://www.kaggle.com/junkoda" target="_blank">@junkoda</a>'s winning solution, the model was trained on 4-panel images consisting of 256px sized patches of consecutive slices (creating 512px by 512px images). Therefore, I was able to keep 2.5D dimensionality in a 2D image. The idea was that the model would learn this relationship between slices and therefore would predict a more continuous segmentation.<br> The 256px patches were made with the <a href="https://github.com/Mr-TalhaIlyas/EMPatches" target="_blank">EMPatches</a> library, which made stitching the separate 256px patches from each 4-panel image a lot easier.<br> The images were normalized based on percentiles of the whole kidney. All values lower than 0.5 were clamped to 0.5, to normalize the background better:</p> <pre><code># note: the argument names and percentile values were lost from this snippet;
# the percentiles below are reconstructed assumptions
import numpy as np
import torch

lo, hi = np.percentile(kidney_volume.numpy(), (0.5, 99.5))

def preprocess_image(image, lo, hi):
    image = image.to(torch.float32)
    image = (image - lo) / (hi - lo)
    image = torch.clamp(image, min=0.5)   # clamp the background as described above
    return image
</code></pre> <p><strong>Augmentations</strong><br> I used simple augmentations for training: </p> <pre><code># note: the keyword-argument names were lost from this snippet and have been restored
train_transform = A.Compose([
    A.RandomRotate90(p=1),
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.RandomBrightness(p=1),
    A.OneOf(
        [
            A.Blur(blur_limit=3, p=1),
            A.MotionBlur(blur_limit=3, p=1),
        ],
        p=0.9,
    ),
])
</code></pre> <p>What did not work for me were the augmentations based on scaling of the image.</p> <p><strong>Submission</strong><br> For the submission part: each patch in the 4-panel image was rotated on its own. Afterwards, the mean was taken over the rotated patches, and the mean was also taken over all the patches in the separate 4-panel images. These patches were merged with the ‘max’ setting. This was performed for the xy, xz, yz rotations of the whole kidney volume.</p> <p><strong>Model</strong><br> The model that was trained on these 4-panel images was a Unet maxvit_tiny_tf_512 using segmentation models pytorch (SMP). The model was trained on 3 whole kidney volume rotations with 0.4 overlap in the patches (~490.000 different images). 
The model was trained for 9 epochs with a 1e-4 lr and then another 6 epochs with CosineAnnealingLR to 1e-6.</p> <ul> <li>Link to Kaggle submission notebook: <a href="https://www.kaggle.com/code/menno1111/sennet-hoa23-2-5d-4-panel-submission/" target="_blank">SenNet-HOA23 | 2.5D 4-panel | submission</a></li> <li>Link to the training and validation notebooks: <a href="https://github.com/Menno-Meijer/SenNet_VasculatureSegmentation_Competition" target="_blank">GitHub</a></li> </ul>
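<p>A minimal sketch of how a 4-panel image can be assembled (the 2x2 layout and array shapes are assumptions based on the 256px patches and 512x512 output described above):</p> <pre><code>import numpy as np

def make_four_panel(volume, z, y, x, size=256):
    """volume: (Z, H, W) array; tiles patches from slices z..z+3 into one 512x512 image."""
    patches = [volume[z + i, y:y + size, x:x + size] for i in range(4)]
    top = np.concatenate(patches[:2], axis=1)
    bottom = np.concatenate(patches[2:], axis=1)
    return np.concatenate([top, bottom], axis=0)
</code></pre> <p>The 2D model then sees four consecutive slices at once, which is how the 2.5D context is kept inside a plain 2D image.</p>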
Santa 2023 - The Polytope Permutation Puzzle
3rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2023 - The Polytope Permutation Puzzle <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We would like to thank the organizers for this annual optimization challenge and very interesting problem.<br> Our repo is <a href="https://github.com/Lgeu/santa23" target="_blank">here</a>.</p> <h1>Overview</h1> <p>We solved every problem with the same strategy, using beam search. For state transitions during beam search, sequences of operations that have few pieces being changed, pre-computed by Depth-First Search (DFS) and/or Breadth-First Search (BFS), were mainly used.</p> <h1>Cube</h1> <p>Like many other teams, we reduced the puzzle to normal 3x3 by solving the faces and the edges, and then used a 3x3 solver. We found it difficult to solve the faces and edges simultaneously, so we solved the faces and edges separately with beam search.</p> <p>For solving the faces, since the diversity of the state transitions during the beam search had a strong effect on the quality of the solution, we finally precomputed all sequences under 8 operations (9 operations for small cubes) with few pieces changed with DFS and pruning. The number of sequences became so enormous that more than 200 GB of memory was used when solving the 33x33 cube. On the other hand, for solving the edges, the number of effective sequences was not that large. In addition to the pre-computed sequences, transitions were added that we call "parallelize" the previous sequence. For example, when the previous transition was the sequence "f0.r2.b0," we tried changing it to "f0.r2.r1.b0" or "f0.r2.r3.b0" etc.</p> <p>The evaluation function was designed to minimize the number of pieces that were not aligned (Hamming distance). However, there were some pieces that were difficult to align, and the weights of those pieces were increased.</p> <p>The first implementation of beam search was computationally slow and could not solve 33 cube in a realistic time even when the number of state transitions was not that large. Therefore, by speeding up the process through more efficient calculation of the score during state transitions and through multi-threading (180 threads in the end), we were able to finally achieve both diversity of state transitions and beam width, both necessary for high-quality solutions.</p> <p>Key points other than beam search</p> <ul> <li>Since the parities of the edges must be consistent when attributing a large cube to a 3x3, the parities of the edges were adjusted on a rule basis separately from the beam search.</li> <li>Since the optimal solutions of HFM and QTM are often different, we used a <a href="http://kociemba.org/download.htm" target="_blank">QTM optimal solver</a>.</li> </ul> <h1>Globe</h1> <h2>Precomputation of moves</h2> <p>Initially, sequences of moves used in the state transition during beam search were generated. Depending on the problem size, tens of millions of sequences were used.</p> <h3>DFS</h3> <p>Depth-First Search (DFS) was initially used to enumerate the moves. Without loss of generality, the first flip is fixed to f0, and after enumerating the moves, the other moves were obtained by rotating the entire set. 
Pruning was applied by prohibiting consecutive operations that cancel each other out, and ensuring an even number of identical flips in the sequence, enabling search up to a depth of 10 in a practical amount of time.</p> <h3>Duplicate removal</h3> <p>Since different sequences of operations can result in the same piece permutation, these duplicates need to be removed. In this process, the shortest sequence leading to the same permutation was retained. If the lengths of the sequences were also identical, the one that comes first lexicographically was kept. This made it slightly more efficient to parallelize and cancel out the moves, which is described later.</p> <h3>Augmentation</h3> <p>The moves obtained through DFS (X) were augmented by sandwiching them between an operation A and its inverse -A (A.X.-A). During this process, the permutations resulting from the sequence were checked by rotating the Globe to determine how many pieces were changed, and those with excessive changes were discarded. This method made it possible to generate sequences of moves longer than 10, which were difficult to explore with DFS alone.</p> <h2>Beam Search</h2> <p>In beam search, the Globe was divided into units of two rows each and solved independently. If the Globe has an odd number of rows, the center one is solved simply by rotation.</p> <h3>Evaluation function</h3> <p>The current state and the solution state’s Hamming distance served as the evaluation function, using the aforementioned moves to transition states. States with smaller values of the evaluation function were retained for each number of moves used, and these states were expanded and evaluated in order of fewer moves. During state transitions, if the previous operation and the next operation to be performed were inverse operations, adjustments were made dynamically to reduce the count of moves.</p> <h3>Combining unit solutions</h3> <p>After solving each unit, their solutions were combined. Different units’ flips could be performed simultaneously to shorten final solutions similarly to the “parallelization” in Cube, so this is handled in this step. The optimal flips to share were calculated using a method similar to the one used to solve the Longest Common Subsequence problem, employing dynamic programming. When there were more than three units to combine, the same process was applied to two units at a time, combining solutions in sequence. All possible orders of combination were tried, and the shortest solution was chosen as the final solution.</p> <h3>Parity for special Globes</h3> <p>For the Globe where each piece in the solution has a different color, the parity of the permutation needs to be taken into account to obtain a solution. Before starting the search, the parity of the permutation of the initial state and solution state was checked, and if it was odd, an operation “r” was performed once to align the parity. Only even-length moves were used during the beam search, preserving the parity.</p> <h1>Wreath</h1> <p>For smaller Wreath problems, the best solutions found through exhaustive search were used. Please refer to the public code. <a href="https://www.kaggle.com/code/ruiyaoyang/optimization-problems" target="_blank">https://www.kaggle.com/code/ruiyaoyang/optimization-problems</a></p> <p>For larger problems (21, 33, 100), simple beam search was employed using unit moves {l, -l, r, -r} for state transitions. To evaluate the states during the beam search, DFS was used to search up to a certain number of moves, keeping the best score at each depth. 
To increase the depth to explore, the types of moves used for state transitions were limited based on conditions:</p> <ul> <li>Prohibit consecutive inverse moves.</li> <li>When the first intersection of rings contains A: used moves = {l, -l}</li> <li>When the first intersection of rings contains B: used moves = {r, -r}</li> <li>When the first intersection of rings contains C: used moves = {l, r}</li> </ul> <p>The score function to compute penalties for each piece not matching the solution state is as follows.</p> <ul> <li>Penalties for each ring supposed to align with A containing something other than A</li> <li>Penalties for each ring supposed to align with B containing something other than B</li> <li>Penalties when something other than C is in the intersection of rings</li> <li>Penalties when the distance between two Cs differs from the solution state</li> <li>When there are more than four wildcards, it's acceptable to ignore hard-to-match Cs, so no penalty for C-related errors</li> </ul> <p>Particularly for large-scale problems such as wreath_100/100, without sufficient depth of exploration, it led to the score plateauing and hindered further search progress. However, extending the scoring search depth to 50 facilitated the achievement of solutions.</p>
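<p>The cube and globe searches above are variations of the same pattern: expand states with precomputed move sequences and keep the best ones under a Hamming-distance score. A stripped-down sketch of that pattern (no piece weighting, state deduplication or multi-threading, all of which the write-up mentions as important; the permutation convention is an assumption):</p> <pre><code>def apply_perm(perm, state):
    # perm[i] is the index of the piece that ends up in position i
    return tuple(state[i] for i in perm)

def hamming(state, solution):
    return sum(a != b for a, b in zip(state, solution))

def beam_search(start, solution, moves, width=1000, max_steps=500):
    beam = [(hamming(start, solution), start, [])]
    for _ in range(max_steps):
        candidates = []
        for _, state, path in beam:
            for name, perm in moves.items():
                nxt = apply_perm(perm, state)
                candidates.append((hamming(nxt, solution), nxt, path + [name]))
        candidates.sort(key=lambda c: c[0])
        beam = candidates[:width]
        if beam[0][0] == 0:
            return beam[0][2]
    return None
</code></pre>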
Santa 2023 - The Polytope Permutation Puzzle
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2023 - The Polytope Permutation Puzzle <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Here is our (team PreferredSantaClaus) solution. The repository is <a href="https://github.com/hogloid/kaggle_pfsanta_public/tree/main" target="_blank">here</a>.</p> <h1>Cube &amp; Globe</h1> <p>The key ideas behind our solution for cube and globe puzzles are:</p> <ul> <li>Using a sequence of moves which permutes only a small number (like 3) of pieces (we called such sequences <strong>combo</strong>), and</li> <li>Gradually inserting moves to (possibly the middle of) the solution sequence to reduce the number of mismatches.</li> </ul> <p>For the first part, we ran a series of exhaustive searches on sequences up to length 8. Here we show some examples of combos:</p> <ul> <li>cube (<code>cube_9/9/9</code>)<ul> <li><code>f4.-d2.-f0.d2.-f4.-d2.f0.d2</code> (length 8; permuting 3 pieces)</li></ul></li> <li>globe (<code>globe_1/8</code>)<ul> <li><code>f0.r1.-r0.f0.-r1.r0</code> (length 6; permuting 4 pieces)</li> <li><code>f0.r1.f0.-r1.f0.-r0.f0.r0</code> (length 8; permuting 3 pieces)</li></ul></li> </ul> <p>For the second part, suppose we apply two moves p, q (which themselves can be a composition of moves) to a piece ordering a, denoted by q(p(a)).<br> Here, inserting a move f between p and q to make the final piece ordering q(f(p(a))) is equivalent to adding a move f' = q . f . q^-1 (here . denotes the function composition), which is a "conjugate" of f, to the end of the move sequence.<br> If f affects only a small number of pieces (just like the sequences we found in the first part do) and we precompute q, f' can be quickly computed.<br> Thus, for a sequence of moves m1 . m2 . … . mn, if we compute mn, m(n-1) . mn, …, and their inverses one by one, we can simulate inserting move(s) to any position in the sequence very efficiently.</p> <p>The benefit for inserting a move in the middle of the sequence is twofold:</p> <ul> <li>We obtain a much larger number of combos available.</li> <li>We can expect that some moves cancels out to produce shorter sequences (e.g. "f0.-f0" can be just removed).</li> </ul> <h2>The pipeline for cube puzzles</h2> <p>Here is the overall pipeline for solving cube puzzles:</p> <ol> <li>Change the color of edge pieces based on its original color and the color of the piece which is attached to it in real Rubik's cubes.</li> <li>Starting with the empty sequence, reduce the number of mismatches by repeatedly inserting a move or a combo to any position of the sequence.</li> <li>Fix the position of the very center pieces (odd-sized cubes only) and the parity mismatches.</li> <li>For each 24/48 pieces problem (detailed later), make all the pieces matched by applying combos (again, we allow inserting combos in the middle of the sequence).</li> <li>Fix the position of corner pieces to produce the final solution sequence.</li> </ol> <p>Optimization in step 2 and 4 is done by the greedy search or the beam search. </p> <h3>Center-piece alignment &amp; Parity resolution</h3> <p>For odd-sized cubes, we have to fix the position of the very center pieces. 
This can be done by applying center-row rotations and we searched for the optimal rotation sequence (if possible, it is desirable that positions of pieces except for the very center pieces are not changed) by BFS.</p> <p>We found some "parity" issues which cannot be resolved by applying combos.</p> <p>Let N be the size of the cube. For i in 2, 3, …, floor(N/2), consider all the edge pieces in row / column i and N+1-i (i=1 is omitted as it corresponds to corner pieces). Here we consider "3D" pieces (a piece and the one attached to it are considered as one) rather than "pieces" in this problem and we ignore the orientation of 3D pieces (more precisely, the orientation of a piece when placed to a slot can be uniquely determined). Then surface rotations are even permutations on these pieces, while rotations in row / column i and N+1-i are odd permutations (other rotations do not affect these pieces). Also all combos are even permutations. Therefore, if the permutation of the 3D pieces is odd, we have to add one row / column i and N+1-i rotation.</p> <p>A similar rule also applies to the middle pieces in edges (odd-sized cubes only). Here surface rotations as well as center-row rotations are odd permutations. Also we encountered another parity problem which has not been explained. We empirically found that this parity problem can be fixed by trying combinations of 2 surface rotations. To test whether this parity is resolved, we just ran the optimization process in Step 4 and checked whether the number of mismatches in edge-middle pieces became 0.</p> <p>Clearly, addition of moves for parity resolution may increase the number of mismatches. To mitigate this problem, we chose the position of insertion and the move to insert so that the increased number of mismatches is minimized.</p> <h3>24 / 48 pieces problem</h3> <p>Let (x,y) denote the piece in the y-th row and x-th column in a cube face (the orientation does not matter). Then pieces in (a,b), (b,a), (N+1-a,b), (N+1-b,a), (a,N+1-b), (b,N+1-a), (N+1-a,N+1-b), (N+1-b,N+1-a) can move around only in these 24 / 48 positions. Therefore, if we use combos which affects these pieces only, we can solve the puzzle independently for different (a,b). For a fixed (a,b), we ran a greedy / beam search to reduce the number of mismatches to 0 (just like in Step 2) allowing using such combos only.</p> <h3>Corner pieces</h3> <p>So far we have completed the puzzle except for corner pieces and now we can treat the remaining puzzle as 3x3x3 one. However, for even-sized cubes, there remains a parity issue which does not happen in 3x3x3 cubes and they should be treated as 4x4x4 ones.</p> <p>We used following solvers for this part:</p> <ul> <li><a href="https://github.com/dwalton76/rubiks-cube-NxNxN-solver" target="_blank">https://github.com/dwalton76/rubiks-cube-NxNxN-solver</a></li> <li><a href="https://github.com/cs0x7f/TPR-4x4x4-Solver" target="_blank">https://github.com/cs0x7f/TPR-4x4x4-Solver</a></li> </ul> <h2>The pipeline for globe puzzles</h2> <p>The pipeline for globe puzzles is similar to that for cube, but differs in details.</p> <ol> <li>Starting with the empty sequence, reduce the number of mismatches by repeatedly inserting a move or a combo to any position of the sequence.</li> <li>Fix parity mismatches.<ul> <li>Let us consider pieces affected by "r0" and "r(k-1)", where k is the number of rows. Then the moves "r0" and "r(k-1)" are odd permutations on these pieces. 
Also, moves "fi" (hemisphere rotation) are even (if the number of pieces in each row is a multiple of 4) or odd (otherwise). Again, all combos are even permutations. Thus we have to fix the parity mismatches as in cube puzzles.</li></ul></li> <li>Make all the pieces matched by applying combos. Here we did not split the problems into smaller ones (like the 24 / 48 pieces problem in cube).</li> <li>For difficult instances (specifically, 391 and 395; globe_3/33 and globe_33/3 with all pieces differently colored), enumerate all 3-piece rotations and apply them to make all pieces matched.<ul> <li>We note that there are combos which affect only 3 pieces. By repeatedly taking their conjugate (f -&gt; g . f . g^-1), we obtain combos for rotating any 3 pieces in "r0" and "r(k-1)". We can use these combos to make all pieces matched, as the parity issues have been resolved so far.</li></ul></li> </ol> <h1>Wreath</h1> <p>For puzzles smaller than or equal to 21/21, the optimal solution (with no wildcards) can be obtained through a bidirectional search. This entails performing a breadth-first search from both the initial state and the solution state; once they meet at a common state, the sequence of actions from the initial state is connected with the reversed sequence of actions from the solution state. Please note that this process can consume considerable memory, hence the encoding of the state into 128-bit integers is advised.</p> <p>For puzzles with 8 wildcards, we have devised a specific algorithm. Let's focus on the 100x100 puzzle (the explanation still applies, but it's simpler for a 33x33 puzzle). You may google ‘hungarian puzzle’ to see what the puzzle looks like.</p> <p>First of all, let's examine the solution state. The left ring consists of 'A' cells, while the right ring consists of 'B' cells. There are 'C' cells at the two intersection points. The distance (total number of cells + 1) between two intersections is 25 in the left ring and 26 in the right ring. Let’s say one of the intersection cells is ‘bottom’, and the other is ‘top’. To simplify, we can recolor 'C' cells as 'B', which will cost at most 4 mismatch errors.</p> <p>Let’s enumerate the indices of the 'B' cells from 0 to 99 in the clockwise order so that the 0-th cell is in the bottom intersection point, and the 26-th cell is in the top intersection point at first.<br> A cell in the right ring can be recolored as 'B' by relocating the cell to the bottom intersection point, revolving the left ring, and then returning the cell back to its original position. However, this can risk changing the state of the right cell, which is in the top intersection point.</p> <p>That said, it's possible to rectify the state of cells in the right ring in the following order of indices: 0, 26, 52, 76, 4, 30, …, 48, 74. After this sequence, only the 0th cell is wrong, because rectifying the 26*k (mod 100)-th cell risks only the 26*(k+1)-th cell, by putting the 26*k-th and 26*(k+1)-th cells in the intersection point when revolving the left ring.</p> <p>Apply the same rule for 1, 27, 53, 77, 5, 31 …, 49, 75. This process may induce at most 2 mismatch errors for even/odd cells, yielding a total of 8 mismatch errors at the most.</p> <p>Following this algorithm as is results in too many actions, considering it necessitates 26 * 100 actions for the right ring. To alleviate this, an additional preprocessing step is carried out to correct randomly chosen sets of cells in the right ring in ascending order from 0 to 99. 
That is, do the following:</p> <pre><code>toCorrect = random 0/1 array of length 100
for i in range(100):
    if toCorrect[i]:
        revolve the left ring so that the bottom intersection point becomes ‘B’
    revolve the right ring once counterclockwise
</code></pre> <p>This approach requires at most 100 steps for the right ring, and aids in reducing the number of actions in subsequent processes because, for example, after rectifying the state of the 26-th cell, if the state of the 52, 76, 4-th cells is already ‘B’, only 4 steps to revolve the right ring are required to get the 30-th cell to the bottom intersection point. Trying the toCorrect sequence about 10**6 times gives us a solution with length 681.</p>
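<p>The conjugation trick described at the start of this write-up is small enough to show directly. A self-contained sketch (the permutation convention, where <code>perm[i]</code> is the source index of position <code>i</code>, is an assumption):</p> <pre><code>def apply_perm(perm, state):
    return tuple(state[perm[i]] for i in range(len(perm)))

def inverse(perm):
    inv = [0] * len(perm)
    for i, p in enumerate(perm):
        inv[p] = i
    return tuple(inv)

def conjugate(q, f):
    # f' = q . f . q^-1: applying f' after q gives the same state as
    # inserting f just before q, so combos can be "inserted" into the middle
    qinv = inverse(q)
    return tuple(qinv[f[q[i]]] for i in range(len(q)))

# sanity check on a toy 3-element permutation
q, f, x = (1, 2, 0), (1, 0, 2), ("a", "b", "c")
assert apply_perm(conjugate(q, f), apply_perm(q, x)) == apply_perm(q, apply_perm(f, x))
</code></pre>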
Santa 2023 - The Polytope Permutation Puzzle
12th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2023 - The Polytope Permutation Puzzle <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>result: <a href="https://gist.github.com/sash2104/fc2628f2ce6770e92c827e799a9726d1#file-95484-csv" target="_blank">95484.csv</a></p> <p>First of all, thanks for organizing the competition. This is my third consecutive year participating, and I've enjoyed this problem the most.<br> I'll mainly discuss the large-size Cube problem, as it took up most of our time.</p> <h2>Overview of the Approach</h2> <p>For all wreaths and small-size cubes, solutions were obtained either through bidirectional BFS or simple beam search.</p> <p>For all globes and large-size cubes, the following steps were taken:</p> <ol> <li>Solve the special parts. <ul> <li>The main objective is to resolve parity.</li></ul></li> <li>Beam search with moves that preserve parity as candidate moves.<ul> <li>The foundational strategy involved using 3-rot commutators as candidate moves, and for further improvement, the pool of candidate moves was expanded.</li></ul></li> </ol> <p>The solutions obtained were further optimized using public notebooks such as <a href="https://www.kaggle.com/code/shitovvladimir/optimize-any-solution-with-group-theory-approach" target="_blank">here</a> and <a href="https://www.kaggle.com/code/glazed/humble-hillclimber" target="_blank">here</a>, and when applicable, the best solutions from public notebooks were adopted.<br> We are grateful for the valuable insights gained from utilizing these public notebooks.</p> <h2>Scores</h2> <p>The per-puzzle results are in the linked <code>95484.csv</code> above.</p> <h2>Terminology</h2> <ul> <li>facelet: The smallest unit of state. (e.g. There are 6534 facelets in cube_33/33/33)</li> <li>cubie: An individual small cube in a Cube puzzle. (e.g. In cube_33/33/33, there are 5766 center cubies, 372 edge cubies, and 8 corner cubies)</li> </ul> <h2>Large-size Cube Solution</h2> <ol> <li><p>Solve the special parts.</p> <ul> <li>Use the moves that appear an odd number of times in the sample_submission.csv to resolve parity.<ul> <li>Since executing the same move an even number of times does not change parity, arranging only moves that appear an odd number of times can always resolve parity.</li> <li>The sequence of moves obtained is manually fine-tuned to reduce the number of moves further (e.g., <code>r1.d1</code> is removed because it does not change parity).</li></ul></li> <li>For cubes with an odd N, also align the positions of the central six cubies.</li></ul> <p>Examples: For id=283, use <code>f0.f16.f2.f4.f5.f6.f11.f12.f13.f14</code>; for id=257, use <code>f3.r3.f1.f2</code>.</p></li> <li><p>Beam search with moves that preserve parity as candidate moves.</p> <ul> <li>The beam width varies with N, ranging from 1 to 100.</li> <li>The evaluation metric is the fewness of moves. The number of matches with the solution state on a cubie basis is considered equivalent to turns.</li></ul> <p>The candidate moves used are as follows:</p> <ul> <li>Moves that do not disturb parity in two steps.<ul> <li>Example for N=33: <code>r1.-d31</code></li></ul></li> <li>All combinations of 3-rot.<ul> <li>After enumerating all 8-move commutators of 3-rot (e.g. 
<code>R=-d1.-r0.d0.r0.d1.-r0.-d0.r0</code>), use breadth-first search to find the remaining moves in the form of <code>A.R.-A</code>.</li></ul></li> <li>Commutators of 4 to 8 moves that alter a small number of facelets.<ul> <li>Examples for N=33: <code>f1.r1.-f1.-r1</code>, <code>-d32.-r31.d22.r31.d32.-d22</code></li> <li>Adjusted based on N. For N=5, all 4, 6, and 8 moves that alter up to 24 facelets; for N=33, all 4 and 6 moves that alter up to 12 facelets.</li></ul></li> <li>Commutators of rotations of two corners and rotations of two central edges.<ul> <li>Example of corner rotation moves for N=33: <code>r0.f0.-r0.-f0.r0.f0.-r0.-f0.r32.f0.r0.-f0.-r0.f0.r0.-f0.-r0.-r32</code></li> <li>Example of edge rotation moves for N=33: <code>-r16.d0.r16.-d0.-r16.d0.d0.r16.-d32.-r16.-d0.-d0.r16.d0.-r16.-d0.r16.d32</code></li></ul></li> <li>Commutators composed of the above.<ul> <li>Compositing multiple commutators can reduce the number of moves.</li> <li>Example: <code>-r1.r2.d0.f1.-f2.-d0.r1.-r2.d0.-f1.f2.-d0</code> is a composition of four 3-rot such as <code>-r1.d0.f1.-d0.r1.d0.-f1.-d0</code> and <code>r2.d0.-f2.-d0.-r2.d0.f2.-d0</code>.</li></ul></li></ul></li> </ol> <h2>Globe Solution</h2> <ol> <li><p>Solve the special parts.</p> <ul> <li>Roughly align positions using only <code>r{i}</code> moves.</li> <li>Most cases were solved without explicitly resolving parity; for unsolvable cases, a few random moves were added before reattempting.</li></ul></li> <li><p>Beam search with moves that preserve parity as candidate moves.</p> <ul> <li>The beam width varies with N, ranging from 1 to 10.</li> <li>The evaluation metric is the fewness of moves, with the number of matches with the solution state on a facelet basis considered equivalent to turns.</li></ul> <p>The candidate moves used are as follows:</p> <ul> <li>All combinations of 3-rot. (e.g. <code>r0.f0.r1.f0.-r1.-f0.-r0.-f0</code>) </li> <li>Commutators of 4 to 12 moves that alter a small number of facelets. (e.g. <code>r1.f3.f4.-f3.-r1.-f4</code>)</li></ul></li> </ol>
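<p>The parity bookkeeping in step 1 of the Large-size Cube Solution can be sketched in a few lines (assuming the competition's move-string format, with moves joined by "." and inverses prefixed by "-"):</p> <pre><code>from collections import Counter

def odd_moves(solution_str):
    # a move and its inverse have the same parity, so count them together
    counts = Counter(m.lstrip("-") for m in solution_str.split("."))
    return [m for m, c in counts.items() if c % 2 == 1]

print(odd_moves("f0.r1.d1.r1.f0.f3"))   # ['d1', 'f3']
</code></pre>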
Santa 2023 - The Polytope Permutation Puzzle
1st Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2023 - The Polytope Permutation Puzzle <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Solving 10x10x10 cube (272) in 454 steps<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F36720%2F98429ee0f21875cfa4638910db8a2b59%2F272.gif?generation=1706750566377979&amp;alt=media" alt="Solving 10x10x10 cube (272) in 454 steps"></p> <p>Among the three types of puzzles, the wreath puzzle was not particularly focused on because a very short solution could be found using simple beam search, and it had a lower total score compared to the others, making it a lower priority. Therefore, we concentrate on the solutions for the remaining two types, the cube and the globe puzzles. The strategies for solving both puzzles are based on the same principle.</p> <p>Due to each move affecting a large number of pieces at once, simply applying moves as is can lead to a somewhat organized state, but fully solving the puzzle becomes extremely challenging. Thus, the key is to find sequences of moves that only swap a few pieces while leaving the rest unchanged. Remarkably, for both puzzles, there exist sequences of moves that only rotate three pieces and leave the rest unchanged, referred to as 3-rot.</p> <ul> <li>Example for the cube puzzle: d3.f2.d2.-f2.-d3.f2.-d2.-f2</li> <li>Example for the globe puzzle: f0.r0.f0.r1.f0.-r1.f0.-r0</li> </ul> <p>When considering the cluster decomposition of all pieces (a cluster being a set of pieces that can be interchanged through moves), it turns out that for most clusters, except for special parts like the cube's corners, there exists a 3-rot for any three elements within them.</p> <ul> <li>For the cube puzzle: Corners, centers of faces, and centers of edges are exceptions. For other clusters, taking symmetry into account, it suffices to consider the diagonal parts of a 4x4x4 (1,1), the cross parts of a 5x5x5 (1,2), the non-diagonal and non-cross parts of a 6x6x6 (1,2), and the edge parts of a 4x4x4 (0,1). For each, a bidirectional search was performed to enumerate all shortest 3-rot for any three elements, with the longest being 14 moves.</li> <li>For the globe puzzle: The central row when the number of rows is odd is an exception. For other clusters, with the number of rows as 2 and the number of columns as 2c, R=f0.r0.f0.r1.f0.-r1.f0.-r0 corresponds to a 3-rot of ((0,0) (1,c) (1,c-1)). A 3-rot for any three pieces can be found by first finding a sequence of moves A that moves those three pieces to a state where R can be applied (where other pieces can move freely), using breadth-first search, and then the 3-rot can be obtained by A.R.-A.</li> </ul> <p>When solving each cluster using 3-rot, it is important to note that all 3-rot are even permutations, so the overall permutation must also be even. 
Combining these insights, we can use the following approach:</p> <ol> <li>Solve the special parts.</li> <li>Operate without disrupting the solved parts so that all remaining clusters become even permutations.</li> <li>Solve each cluster independently using 3-rot.</li> </ol> <p>To achieve shorter solutions, we employ the following key ideas:</p> <ul> <li>Since 3-rot requires at least 8 moves and is lengthy, it's more efficient to bring the puzzle to a somewhat solved state using elementary moves or short sequences before employing 3-rot for the final touches.</li> <li>When adding a new sequence B to an existing sequence A, canceling out the end of A with the beginning of B can shorten the overall sequence. For example, if A=A'.ri and B=-ri.B', then A.B becomes A'.ri.-ri.B'=A'.B', thus saving 2 moves.</li> <li>If the current sequence is A=a[0]…a[T-1], instead of appending a 3-rot B for (i j k) at the end to form a[0]…a[T-1].B, at arbitrary time t, inserting a 3-rot B' for some (i' j' k') to achieve a[0]…a[t-1].B'.a[t]…a[T-1] results in the same state. Selecting the appropriate time t can make B' shorter than B or result in more cancellations. Therefore, rather than constructing the sequence from the front, it is better to try the insertions for all times and select the best time.</li> </ul> <p>Based on these ideas, we use the following approach:</p> <ol> <li>Solve the special parts.</li> <li>Use elementary moves to make all clusters even while roughly aligning them.</li> <li>Bring to a somewhat solved state using short sequences.</li> <li>Insert 3-rots at arbitrary times.</li> </ol> <p>The details of each part and the score table can be found in our repository: <a href="https://github.com/wata-orz/santa2023_permutation_puzzle" target="_blank">https://github.com/wata-orz/santa2023_permutation_puzzle</a></p>
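<p>The cancellation idea above is simple to express on move strings (a minimal sketch, not the authors' implementation):</p> <pre><code>def inv(move):
    return move[1:] if move.startswith("-") else "-" + move

def append_with_cancellation(a, b):
    # when appending sequence b to a, drop trailing/leading pairs that cancel out
    a, b = list(a), list(b)
    while a and b and b[0] == inv(a[-1]):
        a.pop()
        b.pop(0)
    return a + b

print(append_with_cancellation(["d3", "f2", "r1"], ["-r1", "-f2", "d0"]))   # ['d3', 'd0']
</code></pre>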
Santa 2023 - The Polytope Permutation Puzzle
Silver Medal Solution (15th)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2023 - The Polytope Permutation Puzzle <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This year too, we won a silver medal in the top group. You may want to read the already published <a href="https://www.kaggle.com/competitions/santa-2023/Discussion/472405" target="_blank">#1 solution</a> and solutions from other winners. <br> But please allow me to make my <a href="https://github.com/k-harada/santa2023" target="_blank">code</a> public and write the solution, to prove that we are not cheating. </p> <h3>Cubes:</h3> <p>We can solve regular patterns by using the <a href="https://github.com/dwalton76/rubiks-cube-NxNxN-solver" target="_blank">nxnxn solver</a>, but the way of counting moves is different between this solver and the competition, so if you use it as is, it will be less efficient. For example, in the worst case, 16Uw2 is 1 move for the solver (and for humans), but it is counted as 32 moves in this competition. </p> <p>We first aligned the edges and corners by repeatedly using the solver on a 5x5x5 problem. The approach to extracting 3x3x3 subcubes with corners and centers is shown in the first solution. By repeating 5x5x5, we can solve all the diagonal parts ((1,1) of 4x4x4), the cross parts ((1,2) of 5x5x5), and the edge parts ((0,1) of 4x4x4). <br> All that is left to do is align the inside ((1,2) of 6x6x6). We did a greedy search to find efficient next 3-rots. (We called it "the magic of swapping three points" :-) ). <br> Using a 5x5 solver is also not efficient, but we didn't have an efficient way to align the edges, so we did it this way. </p> <p>For the cube N0;N1;.., by coloring it and performing the same method, the edges and corners will be aligned. The rest is essentially the same. </p> <h3>Globes:</h3> <p>I solved it by reducing m x n to multiple 1 x n. Since using f_k freely will affect other layers, I limited it to f_0 and f_n. <br> In 1 x n globes, there is also "the magic of swapping 3 points", so we can get a solution with sufficient performance.</p> <h3>Wreaths:</h3> <p>I was able to get a good enough solution using a simple heuristic.</p> <p>Post-processing by <a href="https://www.kaggle.com/tomokiyoshida" target="_blank">@tomokiyoshida</a> and parallel execution with different parameters by <a href="https://www.kaggle.com/sfujiwara" target="_blank">@sfujiwara</a> significantly improved the score.</p>
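<p>To make the move-counting gap concrete (the wide-move notation handling below is an assumption about the solver's output format, but the 16Uw2 example comes from the Cubes section above), a tiny helper:</p> <pre><code>import re

def competition_cost(solver_move):
    # e.g. "16Uw2": the 16 outer U layers turned twice = 16 * 2 = 32 competition moves
    m = re.match(r"(\d*)([URFDLB])(w?)(\d?)('?)", solver_move)
    layers = int(m.group(1)) if m.group(1) else (2 if m.group(3) else 1)
    turns = int(m.group(4)) if m.group(4) else 1
    return layers * turns

print(competition_cost("16Uw2"))   # 32, versus 1 "move" from the solver's point of view
print(competition_cost("U'"))      # 1
</code></pre>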
Santa 2023 - The Polytope Permutation Puzzle
25th place solution + thought process🥈
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2023 - The Polytope Permutation Puzzle <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This post outlines our approach which got us 25th</p> <p>First of all, thanks to <a href="https://www.kaggle.com/codicon" target="_blank">@codicon</a>, <a href="https://www.kaggle.com/timothygao" target="_blank">@timothygao</a>, <a href="https://www.kaggle.com/alvaroborras" target="_blank">@alvaroborras</a> and <a href="https://www.kaggle.com/marksix" target="_blank">@marksix</a> for teaming up and contributing ideas throughout the competition, and Kaggle for organizing.</p> <p><a href="https://www.kaggle.com/code/yeoyunsianggeremie/santa-2023-25th-place-moves-distribution/notebook" target="_blank"><strong>Notebook: Our final scores by puzzle type</strong></a></p> <h2><strong>Cube</strong></h2> <p>Initially, we utilised public repos, DWalton for edges and RCube for centers. Parity trick was used for AB cubes. These alone got us 23k for 281 and 282, and a 614k 3rd place submission 1 week into the comp. However, it is unable to solve N-type cubes</p> <p>N-type cubes are more challenging. We first tried to solve it like a normal cube, by recoloring (i.e. for cube 4/4/4, N1 to N16 are labelled as A, N17 to N32 are labelled as B, etc). After solving, we notice that the edges of the cubes are completely solved, only the centers are left. This brings us to the idea of using <strong>cube commutators</strong>.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5256933%2F29eef6a5efea2fd0a13b0b53b4bbccd3%2FScreenshot%202024-02-01%20174550.png?generation=1706780860616174&amp;alt=media"></p> <p>For example, let’s take those squares underlined in blue. There’s 4 of them in each face. In total 24 in the entire cube. Same goes for the squares underlined in green. <strong>For the centers only, the cube can be reduced into solving many sub-problems of 24 squares.</strong></p> <p>We first tried to solve the sub-problems using bidirectional BFS, but that ended up too slow and not feasible, and hence we switched to beam search. Beam Search helped us reduce puzzle 283 from 108k to 27k moves, and also slightly improved the other cubes.</p> <p>Final solution: Use the DWalton solver to solve the edges (and the centermost square for odd N), then perform beam search on the commutators to solve the centers. </p> <p><strong>Total score for all cubes ~ 132k</strong></p> <h2><strong>Globe</strong></h2> <p>Initial Constructive Solution</p> <ol> <li>Split the N/M globes into sub-globes of size 2/M. Then solve each 2/M using (2) and (3).</li> <li>Move top pieces in the bottom to the top and vice versa. You can move 1 piece from the bottom to the top and one from the top to the bottom via f0.r0.f0 for example. </li> <li>Cyclically rotate the top by 1 if the parity of inversions for top and bottom don’t match up. Next, remove inversions via the last commutator in the resource, which swaps a pair in both the top and bottom.</li> <li>Finally, merge solutions by using r0 and -r0 to make all the fx moves f0 moves (fx = x r0’s + f0 + x -r0’s). <br> This improved 3/33 (32k -&gt; 13k each). 
</li> </ol> <p>To further improve the large globes, we implemented a modified version of the Minkwitz algorithm. The paper for the original Minkwitz algorithm can be found <a href="https://core.ac.uk/download/pdf/82526418.pdf" target="_blank">here</a>, and terms from that paper will be in “quotes”. </p> <p>The key differences are as follows:</p> <ol> <li>For a vastly faster “Improve” function with comparable results, only choose a single “j”. We chose this j randomly via a weighted distribution where 0..n-1 had weights n..1 respectively.</li> <li>Iterate over “i” in reverse for the “Fill Orbits” function so that words found can be used for subsequent i (this only helped a bit).</li> <li>For the starting word “t”, pick a random word length from 1 to max_start_word_length and then find a random word of that length. The starting words would be too short to be maximally useful otherwise, and repeat words can be useful.</li> </ol> <p>At first, we tried to do a greedy “base” selection, but this led to short term gains at the cost of exponentially increasing “word” lengths for later “table” entries. Instead, the best base was the elements when iterating column by column and bottom to top in each column (we tried a zigzag alternating top to bottom and bottom to top, but this is worse probably due to less order).</p> <p>Implementation wise, we used C++ for maximum speed. Furthermore, we implemented permutation operations like inverse in place to avoid memory allocation (3x speedup).</p> <p>Table filling was done until the table appeared to converge as follows. Let the notation for hyperparameters be: (rounds, improvement rounds (“s”), max_start_word_length, new Improve or the paper’s Improve). The reason we switch to the paper’s Improve later is because it tries more combinations of existing short words. This is necessary since it becomes increasingly difficult to find new short words.</p> <p>3/33:<br> (1e9, 1e6, 4, fast Improve) (2 hours)<br> (1e8, 1e6, 32, paper’s Improve) (3 hours)<br> (1e8, 1e6, 8, paper’s Improve) (2 hours)</p> <p>8/25:<br> (4e8, 1e6, 4, fast Improve) (2 hours)<br> (3e7, 1e6, 32, fast Improve) (20 minutes) (realized it was time to switch to paper’s Improve)<br> (1e8, 1e6, 32, paper’s Improve) (4 hours)<br> (1e8, 1e6, 8, paper’s Improve) (2 hours)</p> <p>All other globes:<br> (1e9, 1e5, 8, fast Improve) (&lt; 2 hours)</p> <p>After filling the tables, a solution can be quickly determined via factorization, so we can again use randomization. We applied 1-16 random initial moves for 8/25 and 3/33 and 1-8 for the rest before factorization. We used ~1e7 runs per puzzle, greatly reducing moves (e.g. ~70% for 3/4 and ~20% for 8/25 and 3/33).</p> <p>This algorithm solves each 3/33 in ~1700 moves, and each 8/25 in ~2500 moves.</p> <p><strong>Total score for all globes ~ 26k</strong></p> <h2><strong>Wreath</strong></h2> <p>We found the <a href="https://www.kaggle.com/code/glazed/humble-hillclimber" target="_blank">public hillclimbing notebook</a> from <a href="https://www.kaggle.com/glazed" target="_blank">@glazed</a> was extremely useful. It improved almost all of our wreaths, and got the wreath 100/100 down to 2500 moves after many runs, with some code modifications.</p> <p><strong>Total score for all wreaths ~ 3.6k</strong></p>
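<p>For the Cube part above, the sub-problems come from the small set of positions a given center piece can ever occupy. A small helper to enumerate one face's worth of them (1-indexed coordinates; this is an illustration, not the team's code):</p> <pre><code>def orbit_positions(a, b, n):
    # the 8 positions on an n x n face that an (a, b)-type center piece can occupy;
    # this collapses to 4 per face (24 per cube) when a == b
    return {
        (a, b), (b, a),
        (n + 1 - a, b), (n + 1 - b, a),
        (a, n + 1 - b), (b, n + 1 - a),
        (n + 1 - a, n + 1 - b), (n + 1 - b, n + 1 - a),
    }

print(sorted(orbit_positions(2, 3, 6)))   # 8 positions on one 6x6 face
</code></pre>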
Santa 2023 - The Polytope Permutation Puzzle
ML approach for all puzzles
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2023 - The Polytope Permutation Puzzle <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I focused on using a universal ML approach to solve all puzzles. It is a transformer-based model as below. The model is used to predict the next move one step ahead. I solved all Wreath puzzles, all Globe puzzles except the 8_25, and the Cube puzzles from 3 to 6. I could not finish all the puzzles because of the time constraint, but I think it is possible to solve all puzzles.<br> Here is the <a href="https://www.kaggle.com/code/wuliaokaola/s2023-dev-cube6-2-0127" target="_blank">notebook of cube_6/6/6 (puzzle 256)</a>. The result is 739 moves.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2006644%2Fa5b17ae0eb1354a599fc5336a6a00e1b%2Fs2023.png?generation=1706796789124762&amp;alt=media"></p>
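<p>The write-up above does not include the model code, so the following is a minimal, hypothetical sketch of what a state-to-next-move transformer of this kind could look like in PyTorch; the embedding sizes, mean pooling and move-vocabulary size are illustrative guesses, not the author's exact architecture.</p>
<pre><code>import torch
import torch.nn as nn

class NextMovePredictor(nn.Module):
    def __init__(self, n_colors=6, state_len=216, n_moves=36, d_model=128, n_layers=4, n_heads=8):
        # state_len=216: 6 faces x 6 x 6 stickers for cube_6/6/6; n_moves=36: f/r/d slice turns and their inverses
        super().__init__()
        self.color_emb = nn.Embedding(n_colors, d_model)
        self.pos_emb = nn.Embedding(state_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_moves)

    def forward(self, state_tokens):
        # state_tokens: (batch, state_len) integer colour ids of the current puzzle state
        pos = torch.arange(state_tokens.size(1), device=state_tokens.device)
        x = self.color_emb(state_tokens) + self.pos_emb(pos)
        x = self.encoder(x)
        return self.head(x.mean(dim=1))   # logits over the move vocabulary

model = NextMovePredictor()
logits = model(torch.randint(0, 6, (2, 216)))
print(logits.shape)   # torch.Size([2, 36])
</code></pre>
<p>At inference time one would repeatedly pick a move from the predicted distribution, apply it to the state, and stop once the puzzle is solved or a step budget is exhausted.</p>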
Santa 2023 - The Polytope Permutation Puzzle
16th Place Solution for Santa 2023 - The Polytope Permutation Puzzle Competition
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2023 - The Polytope Permutation Puzzle <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Context</h1> <ul> <li>Business context: <a href="https://www.kaggle.com/competitions/santa-2023/overview" target="_blank">https://www.kaggle.com/competitions/santa-2023/overview</a></li> <li>Data context: <a href="https://www.kaggle.com/competitions/santa-2023/data" target="_blank">https://www.kaggle.com/competitions/santa-2023/data</a></li> </ul> <h1>Overview of the approach</h1> <p>We used bidirectional breadth-first search to optimally solve the small puzzles. The backbone of the larger puzzles were solved with commutators and conjugates, but with several special features (detailed below) that made a big difference.</p> <h1>Details of the submission</h1> <h2>Solving small puzzles optimally</h2> <ul> <li>We solved the <code>cube_2/2/2</code>, <code>wreath_6/6</code> and <code>wreath_7/7</code> puzzles optimally using simple breadth-first-search (BFS).</li> <li>To extend to <code>wreath_12/12</code>, we implemented bi-directional BFS (BBFS). As well as enabling us to reach the optimal solution for <code>wreath_12/12</code> where we couldn't before (because we ran out of RAM), BBFS was vastly faster than vanilla BFS in finding the solutions and became a standard part of our toolbox for solving parts of other puzzle types. </li> <li>To extend to <code>wreath_21/21</code>, we made optimizations to compress the state representation and ran on a box with <em>lots</em> of RAM.</li> </ul> <h2>Medium globes</h2> <p>Even the smallest globes weren't solvable by BBFS. What we did here was to solve the "first" N cells using BBFS and then iteratively expand. We did this by masking out the other cells - setting them all to have the same symbol as other (<code>.</code>) - which means that many unique states collapse down to a single pseudo-state and the puzzle is therefore much smaller. Once we'd solved the first N cells, we'd extend to N+M cells. We called this Iterated BBFS (IBBFS). It no longer gives optimal solutions, but it gives pretty good ones. We then tweaked it a little further so that N and M were set adaptively - i.e. if BBFS found the solution in very few moves, we'd unwind and make it solve a bigger chunk at once. If BBFS was taking too long to solve the next M cells, we'd abort and reduce M. We called this Adaptive IBBFS (AIBBFS). AIBBFS was our best-performing method for globes 1/8, 2/6, 3/4, 6/4 and 6/8.</p> <h2>Large globes</h2> <p>Solving the larger globes proceeded in two phases. The first phase tried "random" moves to solve as many cells as possible. The second phase used commutators and conjugates (in Rubik's cube parlance) to solve the remainder without upsetting what was already solved.</p> <h3>Phase 1</h3> <p>Phase 1 simply used BFS, up to a fixed depth, to find the set of moves that solved the maximum number of cells. Then we did <em>just the first of</em> that set of moves and repeated the process. Taking a single step before doing the next iteration of BFS (rather than doing the full set of moves found) improved the efficiency of this phase and markedly improved the total number of cells solvable by this phase. 
Given that this phase had an efficiency in the region of 1 move/cell and phase 2 is more like 6 moves/cell, getting more cells solved in this phase was a big win.</p> <h3>Phase 2</h3> <p>We used BBFS to find a large set of short two-pair commutators - i.e. a small number of moves that swapped cells <code>{A,B}</code> and <code>{C,D}</code> but left everything else where it was. These commutators were short (8 or 10 moves) but pretty limited in the pairs they could swap. Then we extended them using conjugates - i.e. perform any set of moves as a preamble to bring cells of interest into positions <code>A, B, C &amp; D</code> that were covered by a commutator, then use the commutator, then perform the inverse of the preamble.</p> <p>We built a large (10s of millions of entries) database of pairs that could be swapped in this way. Then, at each step, we searched for the database entry that gave us the best bang-for-buck improvement to the current state (i.e. number of additional cells solved / number of moves taken to solve) and applied that.</p> <p>(Also worth mentioning something that worked nearly as well and ran much faster. There are some commutators that swap two north-south pairs and other commutators that swap a north-north pair and a south-south pair. For much of the competition we first got all the pieces into the correct hemisphere and then did the within-hemisphere swaps.)</p> <p>In one of the other write-ups, it sounds like there are also 3-cell commutators available for the globe puzzles. I had wondered about adding these too, but I was out of time / focusing on helping my teammates solve the largest cubes.</p> <h2>Cubes</h2> <p>It was primarily my teammates who worked on the cubes, so I have less detail here. But the outline is a lot like the larger globes…</p> <ol> <li>Find a prefix that solves as many cells as possible.</li> <li>Use an orbit solver that uses commutators to solve each of the 24-cell orbits independently.</li> <li>For puzzles with wildcards, solve the knapsack problem.</li> <li>Assemble the solutions for each orbit using a travelling salesman solver.</li> </ol> <p>The last couple of phases are different to anything discussed so far.</p> <h3>Phase 3 - Knapsack solving</h3> <p>Given that phase 2 independently solves each orbit and given that some solutions have wildcards, it may be possible to leave some orbits unsolved. But which orbits should we do that for? This is the traditional <a href="https://en.wikipedia.org/wiki/Knapsack_problem" target="_blank">knapsack problem</a>. The value of omitting an orbit is the number of moves taken to solve that orbit. The cost (or "weight") of omitting an orbit is the number of cells that will be left in an incorrect state if we don't solve the orbit.</p> <p>This mostly applied to puzzle #277 which had 176 wildcards (8% of the state). Since it had limited applicability to other puzzles and we had limited time, we just used the straightforward approach of greedily omitting orbits based on their <code>value/weight</code> (subject to the "weight" not exceeding the number of wildcards available). There are ways of solving the knapsack problem that get better solutions, but this is often a reasonable starting point.</p> <h3>Phase 4 - Travelling salesman</h3> <p>Phase 2 solves the orbits independently. Because of that, it's possible that there are moves that cancel out at the end of one orbit solution and the beginning of the next. If we re-order the orbits, we can seek to maximize the total number of moves that cancel out.
We noticed that this can be framed as an instance of the (asymmetric) travelling salesman problem. First compute, for each ordered pair of orbits <code>[A, B]</code>, the number of moves that can be cancelled ('C') when solving <code>B</code> immediately after <code>A</code>. The (directed) distance between <code>A</code> and <code>B</code> is just <code>-C</code>. Pass to a <a href="http://akira.ruc.dk/~keld/research/LKH-3/" target="_blank">travelling salesman solver</a> (which I used for <a href="https://www.kaggle.com/competitions/santa-2022" target="_blank">Santa 2022</a>) to minimize the distance and therefore maximize the amount of cancellation possible.</p> <p>We should also have used this method for the larger globes but, again, we were out of time.</p> <h1>Sources</h1> <p>TODO</p> <h1>Appendix A: The soft stuff</h1> <ul> <li><strong>Thanks</strong> to Kaggle and to Ryan in particular for setting this competition and dealing so graciously &amp; efficiently with the little wrinkles.</li> <li>Why <strong>Always Day Zero</strong>? This year I invited a couple of my colleagues to join a team with me. The company that we work for has a culture that it's "always day one" (which you can read more about <a href="https://aws.amazon.com/executive-insights/content/how-amazon-defines-and-operationalizes-a-day-1-culture/" target="_blank">here</a> if you aren't familiar with the term). I've previously competed in internal capture-the-flag security competitions where obviously "day zero" refers to an exploit that hasn't been patched yet. So, as a little joke, I competed in those competitions under the team name "Always Day Zero" and I've stuck with it ever since.</li> <li>I'm really pleased that so many of you benefitted from my <strong>progress updates</strong> and were able to use them to focus your attentions on the puzzles where you were furthest away. Thanks for the encouragements and interesting discussions in the chat. 
Personally, I find it really adds to the experience.</li> </ul> <h1>Appendix B: Final scores by puzzle type</h1> <table> <thead> <tr> <th>cube size</th> <th>total moves</th> <th>wreath size</th> <th>total moves</th> <th>globe size</th> <th>total moves</th> </tr> </thead> <tbody> <tr> <td>2</td> <td>315 (†)</td> <td>6</td> <td>150 (†)</td> <td>1/8</td> <td>1,104</td> </tr> <tr> <td>3</td> <td>2,821</td> <td>7</td> <td>128 (†)</td> <td>1/16</td> <td>1,434</td> </tr> <tr> <td>4</td> <td>4,953</td> <td>12</td> <td>173 (†)</td> <td>2/6</td> <td>181</td> </tr> <tr> <td>5</td> <td>5,336</td> <td>21</td> <td>176 (†)</td> <td>3/4</td> <td>680</td> </tr> <tr> <td>6</td> <td>3,825</td> <td>33</td> <td>383</td> <td>6/4</td> <td>444</td> </tr> <tr> <td>7</td> <td>2,185</td> <td>100</td> <td>642</td> <td>6/8</td> <td>2,129</td> </tr> <tr> <td>8</td> <td>2,930</td> <td></td> <td></td> <td>6/10</td> <td>2,673</td> </tr> <tr> <td>9</td> <td>3,810</td> <td></td> <td></td> <td>3/33</td> <td>11,853</td> </tr> <tr> <td>10</td> <td>4,829</td> <td></td> <td></td> <td>8/25</td> <td>4,038</td> </tr> <tr> <td>19</td> <td>14,637</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>33</td> <td>41,078</td> <td></td> <td></td> <td></td> <td></td> </tr> </tbody> </table> <p>† We know we have an optimal solution.</p> <h1>Appendix C: Score progression</h1> <p>Here's a graph of our score progression over the competition.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2217262%2Fc6e9b03d54d9ad6a4d2f13d09e0b7583%2Fhistory.png?generation=1706777568273724&amp;alt=media"></p> <h1>Appendix D: Final submission</h1> <p>See the attachment for our final submission.</p>
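<p>Because bi-directional BFS (BBFS) is the workhorse mentioned throughout, here is a compact sketch of the idea, assuming the move dictionary already contains every move together with its inverse under a leading "-" (as in the competition's move naming). It is an illustration of the meet-in-the-middle principle rather than the memory-optimized solver described above.</p>
<pre><code>from collections import deque

def invert_path(path):
    # a path found from the goal side is undone by reversing it and inverting each move name
    flip = lambda m: m[1:] if m.startswith("-") else "-" + m
    return [flip(m) for m in reversed(path)]

def bidirectional_bfs(start, goal, moves):
    apply = lambda perm, s: tuple(s[i] for i in perm)
    start, goal = tuple(start), tuple(goal)
    if start == goal:
        return []
    fwd, bwd = {start: []}, {goal: []}              # state -> list of move names leading there
    q_fwd, q_bwd = deque([start]), deque([goal])
    while q_fwd and q_bwd:
        if len(q_bwd) >= len(q_fwd):                # always expand the smaller frontier
            frontier, other, queue, forward = fwd, bwd, q_fwd, True
        else:
            frontier, other, queue, forward = bwd, fwd, q_bwd, False
        for _ in range(len(queue)):                 # expand one full level
            state = queue.popleft()
            for name, perm in moves.items():
                nxt = apply(perm, state)
                if nxt in frontier:
                    continue
                frontier[nxt] = frontier[state] + [name]
                if nxt in other:                    # the two searches have met
                    if forward:
                        return frontier[nxt] + invert_path(other[nxt])
                    return other[nxt] + invert_path(frontier[nxt])
                queue.append(nxt)
    return None                                     # no solution found within reachable states
</code></pre>
<p>Each side only has to reach roughly half of the solution depth, which is what makes BBFS so much cheaper in memory and time than one-directional BFS.</p>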
Santa 2023 - The Polytope Permutation Puzzle
[85th place] Group of Order 5
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2023 - The Polytope Permutation Puzzle <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Firstly, many thanks to the organisers. This was a wonderfully designed competition, with its 398 separate puzzles ensuring that there was minimal risk of a fully optimal score being quickly reached and therefore retaining jeopardy throughout. Worth waiting for, if you remember those uncertain "is Santa 23 happening" days before Christmas.</p> <p>I need to thank our utterly amazing team <a href="https://www.kaggle.com/annafabris" target="_blank">@annafabris</a>, <a href="https://www.kaggle.com/victorfernandezalbor" target="_blank">@victorfernandezalbor</a>, <a href="https://www.kaggle.com/shitovvladimir" target="_blank">@shitovvladimir</a> and <a href="https://www.kaggle.com/alexandervc" target="_blank">@alexandervc</a>, who performed awesomely and worked tirelessly throughout. The standard in this competition was really high, and we were all amazed by the scores at the top of the leaderboard, with the two leaders jostling for position lap-by-lap, and regularly overtaking one another before losing the place again. The scores went significantly lower than where I would have anticipated finding the optimum – and 53770 seems like a score from a different planet.</p> <p>Our team brought together complementary skills in mathematics, coding and science, and were always committed to trying a variety of approaches. Alexander and Victor worked very hard on the ML aspects, but since Alexander has discussed that <a href="https://www.kaggle.com/competitions/santa-2023/discussion/472594" target="_blank">in detail</a> I will refrain from duplication. The ML approach gave us a number of good solves of globe_2/6 and globe_3/4 puzzles, though unfortunately we weren’t able to get it working well enough to crack the larger puzzles.</p> <p>Vladimir produced sterling work on some of the more challenging cubes, particularly the N0, N1 … variety, and also applied his group theory expertise to build us a souped-up version of his public optimization notebook. Anna displayed amazing skills in improving existing solutions by various means, becoming our hill climbing wrangler. Anna’s expertise here became very important, as cutting down the length of the (reversed) scrambling pathways in the sample solution was the best option left to us for most of the bigger puzzles.</p> <p>I concentrated mostly on wreaths, developing a protocol of building a multi-part trajectory from a number of A-star runs. For this, I used the “number of wildcards” parameter as a “reaction coordinate”, with each successive A-star run along the pathway aiming to “solve” the puzzle up to a decreasing number of (fake) wildcards. This was generally a useful approach for puzzles in approximately the 40-200 size range. I got a globe_1/8 proof of concept before running out of time. 
I also spent a lot of effort ensuring that all our best solutions, plus the top public ones, made it into one ensemble, and also maintaining the ongoing scorecard.</p> <p>Although we finished down in 86th place, our submission scoring 683920 had 15 single-puzzle wins against 1st place, 36 versus 2nd place, 12 compared to 4th place and 17 over 17th place, as per <a href="https://www.kaggle.com/solverworld" target="_blank">@solverworld</a>’s <a href="https://www.kaggle.com/code/solverworld/compare-your-submission-to-the-best" target="_blank">notebook</a>. This includes one puzzle, cube_3/3/3 #104, where we beat all of them.</p> <p>As a computational chemist, I see a useful analogy with statistical thermodynamics. The underlying permutations are like microstates with the observable “which colour is where” states behaving like macrostates; there are typically many permutations corresponding to the same observed state. This is a limitation of those approaches that find sets of moves corresponding to identical permutations; identifying these is of some use, but doesn’t address the degeneracy arising from the many-states-per-permutation property of most puzzles.</p> <p>Finally, if there were a leaderboard for most submissions, we’d be in third place with a mighty 246 of the things. Just as well that overfitting isn’t a thing in this competition. For anyone interested, our best submission file and scorecard are in <a href="https://www.kaggle.com/datasets/jbomitchell/santa-2023-public-files" target="_blank">this dataset</a>, both indicated by the number 683920, our score.</p>
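<p>To make the "number of wildcards as a reaction coordinate" idea concrete, here is a small illustrative sketch of one leg of such a multi-part trajectory: an A-star run whose goal test tolerates up to k mismatched cells. The heuristic and bookkeeping are simplifications for the example, not the exact implementation behind the wreath solves described above.</p>
<pre><code>import heapq

def mismatches(state, target):
    return sum(a != b for a, b in zip(state, target))

def astar_to_tolerance(start, target, moves, k, max_nodes=200000):
    # moves: name -> permutation array, with new_state[i] = old_state[perm[i]]
    start, target = tuple(start), tuple(target)
    touch = max(sum(i != p for i, p in enumerate(perm)) for perm in moves.values())
    h = lambda s: max(mismatches(s, target) - k, 0) / touch   # one move fixes at most `touch` cells
    open_heap = [(h(start), 0, start, [])]
    seen = {start}
    while open_heap and max_nodes:
        max_nodes -= 1
        f, g, state, path = heapq.heappop(open_heap)
        if k >= mismatches(state, target):
            return path, state            # close enough for this leg of the trajectory
        for name, perm in moves.items():
            nxt = tuple(state[i] for i in perm)
            if nxt in seen:
                continue
            seen.add(nxt)
            heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt, path + [name]))
    return None, start

# a full trajectory chains such runs with a decreasing tolerance, e.g. k = 16, 8, 4, 0,
# restarting each A-star from the state reached by the previous leg
</code></pre>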
Santa 2023 - The Polytope Permutation Puzzle
Bronze medal: using RCube to solve large Santa puzzles
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2023 - The Polytope Permutation Puzzle <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Some time ago I've shared a <a href="https://www.kaggle.com/competitions/santa-2023/discussion/469167" target="_blank">video</a> of solving 65K cube with RCube package. The package is designed to solve huge cubes of arbitrary sizes and its main strength is to optimally solve centers, as the number of corners and edges becomes negligibly small when the dimensions grow. Unfortunately, in Santa the largest cube we had was only 33x33x33, so it was not obvious if it is worth to use the solver designed for thousands layers. </p> <p>However, after some research I found that the package is still pretty efficient for cube sizes similar to the ones in this competition:</p> <br> <br><p></p> <p>Additionally, I did a few benchmarks and realized that RCube could give a powerful boost in optimizing cubes downto 19 layers on each side. </p> <p>The main issue, however, was inconsistency between the competition's definitions and the ones used in the package. It was interesting to see that a few fellow kagglers tried to follow the same direction in public notebooks, however, were puzzled by RCube's inability to even load a predefined cube state from a file. </p> <p>To make it work I had to do a few things:</p> <ul> <li>write a convertor to translate Santa cube states to RCube, as the face and tile numbering conventions were different;</li> <li>design a way to read in text state files, by default RCube can only deserialize its own objects;</li> <li>replace optimized RCube solving procedures with the physical moves, as the package tries to avoid moving pieces, instead, changing the coordinate system;</li> <li>develop some logging capabilities to dump moves in Santa definitions.</li> </ul> <h1>Results</h1> <p>After all the above developments I was able to get the following improvements:</p> <table> <thead> <tr> <th>Puzzle ID</th> <th>Puzzle type</th> <th>Sample submission</th> <th>RCube</th> </tr> </thead> <tbody> <tr> <td>277</td> <td>cube_19/19/19</td> <td>24,419</td> <td>9,423</td> </tr> <tr> <td>278</td> <td>cube_19/19/19</td> <td>21,455</td> <td>11,107</td> </tr> <tr> <td>279</td> <td>cube_19/19/19</td> <td>32,249</td> <td>11,065</td> </tr> <tr> <td>280</td> <td>cube_19/19/19</td> <td>24,194</td> <td>11,176</td> </tr> <tr> <td>281</td> <td>cube_33/33/33</td> <td>123,431</td> <td>25,747</td> </tr> <tr> <td>282</td> <td>cube_33/33/33</td> <td>139,629</td> <td>25,621</td> </tr> </tbody> </table> <p><br><br> The puzzle 282 is the ABAB one and luckily had the odd number of layers, so after converting it to a conventional form I simply reverted the original solution to get the conventional initial state, then solved it with RCube too. 
One thing to note is that the package moves are not optimized for corners and edges, so I got an additional boost after using publicly available optimizers on RCube solutions.</p> <p>These results may be pretty far from gold, but when combined with the publicly available top solutions they kept us in the silver zone for a long time, dropping to bronze by the end of the competition.</p> <h1>Credits</h1> <p>The work presented in this post was greatly inspired by the following notebooks:</p> <ul> <li><a href="https://www.kaggle.com/code/jazivxt/using-rcube-solver" target="_blank">Using RCube Solver</a> by <a href="https://www.kaggle.com/jazivxt" target="_blank">@jazivxt</a>;</li> <li><a href="https://www.kaggle.com/code/nikitababich/hint-on-decoding-some-cubes-50k-decrease" target="_blank">Hint on decoding some cubes (&gt;50k decrease)</a> by <a href="https://www.kaggle.com/nikitababich" target="_blank">@nikitababich</a>;</li> <li><a href="https://www.kaggle.com/code/shitovvladimir/optimize-any-solution-with-group-theory-approach" target="_blank">Optimize any solution with group theory approach</a> by <a href="https://www.kaggle.com/shitovvladimir" target="_blank">@shitovvladimir</a>.</li> </ul> <p>And, of course, thanks to Kaggle for designing and hosting this awesome competition!</p>
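<p>For anyone who wants to reproduce the Santa side of such a conversion, here is a minimal sketch of parsing a puzzle, applying renamed solver moves and logging them in the submission format. It assumes the competition CSVs are read as in most public notebooks (states are semicolon-separated colour strings, allowed_moves is a dict of permutation arrays, and solutions are dot-joined move names with a leading "-" for inverses); the RCube-specific face and tile renumbering is puzzle-dependent and omitted here.</p>
<pre><code>import pandas as pd
from ast import literal_eval

info = pd.read_csv("puzzle_info.csv", index_col="puzzle_type")
puzzles = pd.read_csv("puzzles.csv", index_col="id")

def load_moves(puzzle_type):
    moves = dict(literal_eval(info.loc[puzzle_type, "allowed_moves"]))
    for name, perm in list(moves.items()):          # add inverse moves under a leading '-'
        inv = [0] * len(perm)
        for i, j in enumerate(perm):
            inv[j] = i
        moves["-" + name] = inv
    return moves

def apply_sequence(state_str, move_names, moves):
    state = state_str.split(";")
    for name in move_names:
        state = [state[i] for i in moves[name]]     # new_state[i] = old_state[perm[i]]
    return ";".join(state)

puzzle = puzzles.loc[281]                           # cube_33/33/33
moves = load_moves(puzzle["puzzle_type"])
log = ["r0", "-f0", "d0"]                           # whatever the external solver produced, renamed to Santa moves
print(apply_sequence(puzzle["initial_state"], log, moves))
print(".".join(log))                                # submission format
</code></pre>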
SenNet + HOA - Hacking the Human Vasculature in 3D
41st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Firstly, we would like to express our gratitude to Kaggle and the organizers for hosting this exceptional competition. Through participating in this contest, we have gained a deeper understanding of the challenges and methodologies involved in medical image recognition.</p> <h2>Introduction</h2> <p>We submitted separate solutions within our team.</p> <ul> <li>I submitted an ensemble model of se_resnext101_32x4d and Vision Transformer (mit_b2), which achieved a score of <strong>0.834</strong> on the public leaderboard. The private leaderboard score was <strong>0.586</strong>.</li> <li><a href="https://www.kaggle.com/ryosukesaito" target="_blank">@ryosukesaito</a> submitted an ensemble model of EfficientNet and SE-ResNeXt which achieved a score of <strong>0.857</strong> on the public leaderboard. The private leaderboard score was <strong>0.519</strong>.</li> <li>The high public leaderboard score achieved by <a href="https://www.kaggle.com/ryosukesaito" target="_blank">@ryosukesaito</a>’s submission might have been a contributing factor to our ability to submit my somewhat ambitious notebook, possibly leading to our winning a silver medal.</li> </ul> <h2>My (@jooott) Solution</h2> <h3>Overview</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2640938%2F580d1b944e38bb1924f6ec8480ac2b65%2F5.PNG?generation=1708327636242492&amp;alt=media"></p> <h3>Key points</h3> <p>I struggled significantly with stabilizing the training process.</p> <ul> <li>To address this, I used Accumulate Grad Batches to effectively increase the batch size to 128, which stabilized the training.</li> <li>A major factor in the significant improvement in score was the application of stronger data augmentation. 
The data augmentation strategy was inspired by <a href="https://www.kaggle.com/competitions/vesuvius-challenge-ink-detection/discussion/417496" target="_blank">the 1st place solution of the Vesuvius Challenge - Ink Detection</a>.</li> <li>I also think that scaling up the training images from 512px to 1024px contributed to the increase in score.</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2640938%2Fe3e49e0014af22209252039314919b6d%2F10.PNG?generation=1708327710646310&amp;alt=media"></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2640938%2F01cf2045817c40092ded7e721c4ff52e%2F6.PNG?generation=1708327729992246&amp;alt=media"></p> <pre><code>train_transform = A.Compose( [ A.RandomScale( scale_limit=(1.0, 1.20), =cv2.INTER_CUBIC, =0.1, ), A.RandomResizedCrop( image_size, image_size, scale=(0.8, 1.0), =1 ), A.RandomBrightnessContrast(=0.75), A.ShiftScaleRotate(=0.75), A.OneOf([ A.GaussNoise(var_limit=[10, 50]), A.GaussianBlur(), A.MotionBlur(), ], =0.4), A.CoarseDropout( =1, =int(image_size * 0.1), =int(image_size * 0.1), =0, =0.5), A.CLAHE(=0.2), A.GridDistortion(=5, =0.3, =0.05), ToTensorV2(=), ] ) </code></pre> <h2>Muku's (@ryosukesaito) solution</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2640938%2Fa10c8d4428a2dd305f8f94b3bddbb6c7%2FUntitled%20(4).png?generation=1708327848313831&amp;alt=media"></p> <h3>key points</h3> <ul> <li><p>In my architecture, Detection/Segmentation of kidney region is performed before predicting blood vessel area.</p> <ul> <li>Detection contributed to inference speedup (especially in the yz/zx direction), since it is possible to skip vessel segmentation in frames where kidney is not detected, and to reduce image size by cropping.</li> <li>Segmentation masks were used to reduce FP outside the kidney.</li> <li>For both annotations, I used LangSAM <a href="https://github.com/luca-medeiros/lang-segment-anything" target="_blank">(luca-medeiros/lang-segment-anything: SAM with text prompt</a>). This allowed me to prepare annotation data with a few manual adjustments.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2640938%2F3f044b330cb47140058deac70a7db57c%2FUntitled%20(5).png?generation=1708327892920221&amp;alt=media"></li> <li>I use YOLOv8n for Detection and EfficientNet-B0 for Segmentation.</li></ul></li> <li><p>Various pre/post processing improved LB/PB scores slightly, but steadily.</p> <ul> <li><p>In the yz/zx axis image, blood vessels at the edge may be cut off. Since the inference accuracy was poor in this area, I improved the inference accuracy by pseudo-closing the vessels with mirror-padding before inference.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2640938%2Fb30520c1fb82b034bf913908cf65487d%2FUntitled%20(6).png?generation=1708327958129459&amp;alt=media"></p></li> <li><p>After binarization of the results, defects may occur in the vascular prediction region as shown below. 
For this reason, morphological closing and fillPoly processing were added as post-processing steps (a minimal sketch of this post-processing follows at the end of this write-up).<br> These contributed to a slight score improvement in CV/LB/PB.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2640938%2F84de68a4219ff31a9f3623c4e83fc8f2%2FUntitled%20(7).png?generation=1708328014528398&amp;alt=media"></p></li></ul></li> <li><p>In my experiments, ideas that contribute to generalization ability (strong augmentation, pseudo labeling, etc…) could not be adopted as final submissions, because they resulted in a decrease in CV/LB…<br> In hindsight, I regret putting too much weight on the unstable CV/LB, as the sample size in this competition was not large enough to rely on.</p></li> </ul>
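<p>As mentioned above, here is a small self-contained sketch of the closing + fillPoly post-processing; the kernel size and the use of external contours are illustrative choices rather than the exact parameters of the pipeline.</p>
<pre><code>import cv2
import numpy as np

def close_and_fill(mask, kernel_size=3):
    # binary vessel mask slice: morphological closing, then fill each external contour
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    closed = cv2.morphologyEx(mask.astype(np.uint8), cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    filled = closed.copy()
    cv2.fillPoly(filled, contours, 1)
    return filled

demo = np.zeros((64, 64), np.uint8)
cv2.circle(demo, (32, 32), 10, 1, thickness=3)      # a hollow vessel cross-section
print(demo.sum(), close_and_fill(demo).sum())       # the defect inside the ring is filled in
</code></pre>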
LLM - Detect AI Generated Text
[7th Place Solution in Efficiency Prize]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I want to express my gratitude to Kaggle and THE LEARNING AGENCY LAB for organizing this thrilling competition. The insightful posts in the discussion forum have greatly benefited me as well. I'm also happy to share my solution.<br><br> <br></p> <p>Business context: https://www.kaggle.com/competitions/llm-detect-ai-generated-text<br><br> Data context: https://www.kaggle.com/competitions/llm-detect-ai-generated-text/data<br></p> <h2>Overview of the Approach</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F17316913%2F56e1595dcaa2c42f5908e280e7e2594c%2F_20240223230955.png?generation=1708792266125478&amp;alt=media"><br> Fig. 1 The pipeline of our solution<br><br> <br><br> <br></p> <p>Our solution includes six parts based on <a href="https://www.kaggle.com/code/batprem/llm-daigt-excluded-prompts?scriptVersionId=158926419" target="_blank">public work</a><br></p> <p><strong>Data argument.</strong> The mainstream approach in competition is to seek or generate diverse datasets of student writing and large language model-generated data for data augmentation. Many Kagglers have generously shared their own datasets. Our approach also involves searching for external open-source data, and we have utilized the following external datasets:<br><br> <a href="https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset" target="_blank">https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset</a> <br><br> <a href="https://www.kaggle.com/datasets/alejopaullier/argugpt" target="_blank">https://www.kaggle.com/datasets/alejopaullier/argugpt</a> <br><br> <a href="https://www.kaggle.com/datasets/kagglemini/train-00000-of-00001-f9daec1515e5c4b9" target="_blank">https://www.kaggle.com/datasets/kagglemini/train-00000-of-00001-f9daec1515e5c4b9</a> <br><br> <a href="https://www.kaggle.com/datasets/pbwic036/commonlit-data" target="_blank">https://www.kaggle.com/datasets/pbwic036/commonlit-data</a> <br><br> <a href="https://www.kaggle.com/datasets/wcqyfly/argu-train" target="_blank">https://www.kaggle.com/datasets/wcqyfly/argu-train</a> <br><br> <br></p> <p><strong>Removing duplicate text and data filtering.</strong> This part is the same as the <a href="https://www.kaggle.com/code/batprem/llm-daigt-excluded-prompts?scriptVersionId=158926419" target="_blank">public work</a>.<br><br> <br></p> <p><strong>Tokenization using Byte-pair Encoding.</strong> This part is the same as the <a href="https://www.kaggle.com/code/batprem/llm-daigt-excluded-prompts?scriptVersionId=158926419" target="_blank">public work</a>.<br><br> <br></p> <p><strong>Generate TF-IDF vectors for model training.</strong> We adjusting parameters such as min_df, max_df, and max_features to effectively reduce the feature dimensionality and minimize the impact of noisy data.<br><br> <br></p> <p><strong>Train the MultinomialNB and stochastic gradient descent.</strong> We adjusted the parameter alpha which means the additive smoothing for MultinomialNB and the train steps for SGD. 
<br><br> <br></p> <p><strong>Ensemble MNB and SGD.</strong> We have attempted to combine Multinomial Naive Bayes (MNB) and Stochastic Gradient Descent (SGD) using different weights. <br></p> <h2>Run-time optimization</h2> <p><strong>Combining the dataset <a href="https://www.kaggle.com/wcqyfly/notebook95c85fa3c6" target="_blank">in another notebook</a>.</strong><br><br> <br></p> <p><strong>Reduced feature size as mentioned above.</strong> We adjusted parameters such as min_df, max_df, and max_features.<br><br> <br></p> <p><strong>Remove time-consuming models.</strong> We found that adding lightgbm and catboost not only took more time, but also decreased the LB score. We decided to remove these two models. However, this choice turned out to be wrong and made the PB score very low. Fortunately, it gave us the opportunity to obtain the Efficiency Prize.</p> <h2>Details of the submission</h2> <table> <thead> <tr> <th>Version</th> <th>Public Score</th> <th>Private Score</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>0.962507</td> <td>0.861747</td> </tr> <tr> <td>2</td> <td>0.962465</td> <td>0.853448</td> </tr> <tr> <td>3</td> <td>0.967997</td> <td>0.891007</td> </tr> <tr> <td>4</td> <td>0.967267</td> <td>0.898650</td> </tr> <tr> <td>5</td> <td>0.967885</td> <td>0.890983</td> </tr> </tbody> </table> <h2>Which did not work</h2> <p>We adjusted the n-gram range for TF-IDF; it did not help on either the public or the private score.<br></p> <h2>Our code</h2> <p><strong>Our code can be accessed on Kaggle or GitHub. The links are as follows:</strong><br></p> <p><strong>1.kaggle:</strong> <a href="https://www.kaggle.com/code/wcqyfly/fork-of-fork-of-fork-of-llm-daigt-analyse-e-db6333" target="_blank">https://www.kaggle.com/code/wcqyfly/fork-of-fork-of-fork-of-llm-daigt-analyse-e-db6333</a> <br><br> <strong>note:</strong> Because the visible test set contains fewer than 3 rows, running the notebook directly will cause an error; after submission, when the hidden test set is substituted in, the code runs correctly and produces results.<br></p> <p><strong>2.github:</strong> <a href="https://github.com/wcqy001028/LLM-Detect-AI-Generated-Text" target="_blank">https://github.com/wcqy001028/LLM-Detect-AI-Generated-Text</a><br> <strong>note:</strong> If test_essays.csv contains fewer than 5 rows, min_df is set to 1 and the model is not trained (used only for debugging). Conversely, when test_essays.csv contains more than 5 rows, min_df is set to 2 and the model is trained and generates prediction results.
<br></p> <h2>Reference</h2> <ol> <li><a href="https://www.kaggle.com/code/batprem/llm-daigt-excluded-prompts?scriptVersionId=158926419" target="_blank">https://www.kaggle.com/code/batprem/llm-daigt-excluded-prompts?scriptVersionId=158926419</a></li> <li><a href="https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset" target="_blank">https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset</a></li> <li><a href="https://www.kaggle.com/datasets/alejopaullier/argugpt" target="_blank">https://www.kaggle.com/datasets/alejopaullier/argugpt</a></li> <li><a href="https://www.kaggle.com/datasets/kagglemini/train-00000-of-00001-f9daec1515e5c4b9" target="_blank">https://www.kaggle.com/datasets/kagglemini/train-00000-of-00001-f9daec1515e5c4b9</a></li> <li><a href="https://www.kaggle.com/datasets/pbwic036/commonlit-data" target="_blank">https://www.kaggle.com/datasets/pbwic036/commonlit-data</a></li> <li><a href="https://www.kaggle.com/datasets/wcqyfly/argu-train" target="_blank">https://www.kaggle.com/datasets/wcqyfly/argu-train</a></li> <li><a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/468908" target="_blank">https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/468908</a></li> <li><a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/455711" target="_blank">https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/455711</a></li> </ol>
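<p>To make the pipeline shape concrete, below is a minimal sketch of the TF-IDF + MultinomialNB/SGD ensemble. The toy texts, hyperparameters and blend weights are placeholders only; the submitted notebook follows the referenced public work, tokenizes with Byte-Pair Encoding, and sets min_df according to the size of test_essays.csv as described in the notes above.</p>
<pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import SGDClassifier

train_texts = ["cars should be limited in cities because of pollution and traffic",
               "the development of driverless cars is an exciting technological advance"]
labels = [0, 1]                        # 0 = human-written, 1 = LLM-generated
test_texts = ["limiting car usage would reduce pollution in crowded cities"]

vectorizer = TfidfVectorizer(ngram_range=(3, 5), min_df=1, sublinear_tf=True)
X_train = vectorizer.fit_transform(train_texts)
X_test = vectorizer.transform(test_texts)

mnb = MultinomialNB(alpha=0.02)                             # alpha = additive smoothing
sgd = SGDClassifier(loss="modified_huber", max_iter=8000, tol=1e-4)
mnb.fit(X_train, labels)
sgd.fit(X_train, labels)

# weighted blend of the two classifiers' probability of the "generated" class
preds = 0.3 * mnb.predict_proba(X_test)[:, 1] + 0.7 * sgd.predict_proba(X_test)[:, 1]
print(preds)
</code></pre>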
SenNet + HOA - Hacking the Human Vasculature in 3D
[2nd Public/90th Private] Solution for the SenNet + HOA - Hacking the Human Vasculature in 3D
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>After being heartbroken for 48 hours, I finally recovered. I decided to share my method, which might help solve some of friends' doubts.</p> <p>First of all, I want to thank my good brother, who spent more than 1,000 rmb calling a sexy beauty to give me a massage and relax, which made me realize that there are still such wonderful things in this world besides competition.</p> <p>At the same time, I would like to thank <a href="https://www.kaggle.com/yoyobar" target="_blank">@yoyobar</a> for providing the training and inference notebooks. All my experiments are based on these two notebooks.</p> <p>I also want to thank <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> , <a href="https://www.kaggle.com/lihaoweicvch" target="_blank">@lihaoweicvch</a> and others who provided such detailed experimental details early in the competition, which reduced many difficulties I had in the initial stage of the experiments</p> <h2>Training Strategy:</h2> <ul> <li>High probability strong data augmentation - mixup pre-training for over 300 epochs, using kidney1 and kidney3 both dense and sparse datasets.</li> <li>Low probability data augmentation - mixup - EMA fine-tuning on kidney1 and kidney3 in dense datasets, with kidney2 dataset for validation.</li> <li>Validation metric uses average dice score from thresholds in (0.1, 0.2 … 0.9) range. </li> <li>Loss function is simple dice loss.</li> <li>Training resolution is 1024x1024.</li> <li>Silde-window inference with 5 TTA + 50um resolution inference for public test, 5 TTA + 50um and 60um resolution inference for private test, using F.interpolate(…, scale=0.8)</li> <li>Threshold selection: 1. Based on fixed value, 2. Based on percentile, 3. 
Based on Otsu adaptive thresholding.</li> </ul> <h2>Model Choices:</h2> <table> <thead> <tr> <th>Model</th> <th>Public Score</th> <th>Private Score</th> <th>Thresholding Strategy</th> <th>Notes</th> </tr> </thead> <tbody> <tr> <td>se-resnext50-unet</td> <td>0.891</td> <td>0.546</td> <td>Percentile</td> <td>Local receptive field</td> </tr> <tr> <td>se-resnext50-unet</td> <td>0.363</td> <td>0.553</td> <td>Otsu</td> <td>-</td> </tr> <tr> <td>gcnet50-unet</td> <td>0.881</td> <td>0.458</td> <td>Percentile</td> <td>Global receptive field</td> </tr> <tr> <td>gcnet50-unet</td> <td>0.857</td> <td>0.435</td> <td>Otsu</td> <td>-</td> </tr> <tr> <td>2.5d-hrnet-w32-unet</td> <td>0.868</td> <td>0.546</td> <td>Percentile</td> <td>Excellent detail recovery ability</td> </tr> <tr> <td>effiecientnet-b3-unet</td> <td>0.878</td> <td>0.517</td> <td>Percentile</td> <td>Excellent inference speed</td> </tr> <tr> <td>se-resnext50-unet+gcnet50-unet</td> <td>0.857</td> <td>0.576</td> <td>0.16</td> <td>-</td> </tr> <tr> <td>se-resnext50-unet+gcnet50-unet</td> <td>0.895</td> <td>0.542</td> <td>Percentile</td> <td>-</td> </tr> <tr> <td>se-resnext50-unet+gcnet50-unet</td> <td>0.883</td> <td>0.518</td> <td>Otsu</td> <td>-</td> </tr> <tr> <td>se-resnext50-unet+gcnet50-unet+effiecientnet-b3-unet</td> <td>0.886</td> <td>0.544</td> <td>Percentile</td> <td>-</td> </tr> <tr> <td>se-resnext50-unet+gcnet50-unet+effiecientnet-b3-unet-2scale</td> <td>-</td> <td>0.589</td> <td>0.1</td> <td>-</td> </tr> <tr> <td>se-resnext50-unet+gcnet50-unet+effiecientnet-b3-unet-2scale</td> <td>-</td> <td>0.452</td> <td>Percentile</td> <td>-</td> </tr> </tbody> </table> <p>All the experiments, the results in private are not very ideal, perhaps due to the thresholds, or it could be due to other reasons<br> Additionally, since I cannot confirm whether private is the result of downsampling an entire kidney or a subset of a kidney, I did not spend too much time on it in the 2.5d method.<br> At the same time, in the 2-scale inference, some weights were fine-tuned at a 60um resolution, but there was actually no significant improvement.</p> <h2>Reasons for Failure:</h2> <ul> <li><p><strong>Threshold selection:</strong> In my models, lower thresholds achieved better private scores, which contradicted my early idea that "higher thresholds could better reduce false positives and improve the private results". Additionally, the threshold selection method based on proportion carries too much risk. Choosing a fixed threshold is a more stable approach.</p></li> <li><p><strong>Validation metric:</strong> Since kidney2 was sparsely annotated, I chose average dice as the validation metric. And I found that at different thresholds, a higher dice with smaller dice variance usually did not result in a poor public score, but this is not a positive correlation, which may have introduced a potential risk of overfitting.</p></li> <li><p><strong>Data augmentation:</strong> I gradually added many data augmentations, assuming that if the validation metric did not decrease after adding augmentations, the model's generalization ability would be improved to some extent. The final result told me that one or more augmentation methods, such as mixup, might have affected the private score.</p></li> </ul> <p>Nevertheless, this competition has ended. 
Congratulations to all the winners who have achieved such great results, and also to all the friends who have given their best in this competition but did not achieve good scores whose algorithm skills must have improved significantly.</p> <p>Here are some notebooks I used for inference (ugly code, no time to clean it up for now due to the Chinese New Year is coming). The related code and weights have been set to public. If anyone is interested, feel free to refer to them.<br> <a href="url" target="_blank">https://www.kaggle.com/code/tanxxx/inference-se-tugc</a><br> <a href="url" target="_blank">https://www.kaggle.com/code/tanxxx/2d-tu-gcresnext50ts-inference</a><br> <a href="url" target="_blank">https://www.kaggle.com/code/tanxxx/se-resnext-inference-1024</a><br> <a href="url" target="_blank">https://www.kaggle.com/code/tanxxx/tu-hrnet-w30-and-effb3-inference</a></p>
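<p>The three thresholding strategies compared in the table can be summarised in a few lines; the percentile value below is only illustrative, and, as the write-up argues, the fixed threshold turned out to be the more stable choice.</p>
<pre><code>import numpy as np
from skimage.filters import threshold_otsu

def fixed_threshold(probs, t=0.16):
    return probs > t

def percentile_threshold(probs, keep_fraction=0.002):
    # keep roughly the top keep_fraction of voxels as vessel
    t = np.quantile(probs, 1.0 - keep_fraction)
    return probs > t

def otsu_threshold(probs):
    return probs > threshold_otsu(probs.astype(np.float32))

probs = np.random.rand(8, 64, 64).astype(np.float32)       # stand-in for sigmoid predictions
print(fixed_threshold(probs).mean(), percentile_threshold(probs).mean(), otsu_threshold(probs).mean())
</code></pre>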
SenNet + HOA - Hacking the Human Vasculature in 3D
5th place solution - 3D interpolation is all you need (updated with code)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I want to express my deepest gratitude to the organizers of this competition. The reason I mostly compete in CV competitions is because I love it. Especially in medical competitions such as this one. So I'm really happy we managed to work with such a great technology (resolution is insane). </p> <h3>Validation</h3> <p>Initially I thought it's gonna be a challenge validation-wise, since we simply don't have enough data for reliable validation. To make the validation without leakage and similar to test, I decided to train my pipelines in 2 ways: </p> <ul> <li>Take kidney_1 as base for trainings, kidney_3 for validation</li> <li>Take kidney_3 as base for trainings, kidney_1 for validation</li> </ul> <p>Since test is densely annotated, I wanted to compute metrics only on densely annotated kidneys, which eliminated kidney_2 from the discussion. </p> <h3>Data</h3> <p>I always believe that data is the key. So I tried my hardest to utilize the additional datasets provided by organizers at <a href="https://human-organ-atlas.esrf.eu" target="_blank">Human Organ Atlas</a>. </p> <p>In the end, I decided to use to following with the help of pseudo-labeling:</p> <ul> <li>LADAF-2020-31 kidney</li> <li>LADAF-2020-27 spleen</li> </ul> <p>In other words, after examining the pseudo annotations on spleen, I realized that they are quite good and should serve as a good regularization method. </p> <p>Additionally, I tried to use heart + brain + lung. However, my models make semi-accurate predictions for lung, but horrible for heart + brain. So in the end I decided to stick with kidney + spleen. </p> <h3>Pseudo annotations</h3> <p>I would say there are 2 important points</p> <p>The first one is don't pseudo annotate everything right away. In order to create full pseudo annotations, I run a 4-step process:</p> <ul> <li>Train on kidney_1. Pseudo-annotate kidney_2</li> <li>Train on kidney_1 + kidney_2. Pseudo-annotate the 2020-31 kidney.</li> <li>Train on kidney_1 + kidney_2 + 2020-31-kidney. Pseudo-annotate 2020-27 spleen.</li> <li>Train on kidney_1 + kidney_2 + 2020-31-kidney + 2020-27 spleen. </li> </ul> <p>The second point is that don't use hard labels. In other words, don't apply thresholding to the predictions. Simply use soft labels (predictions are sigmoided to be in the range of [0,1]) for training. </p> <h3>Loss</h3> <p>My baseline go-to loss in semantic segmentation is <code>CE + Dice + Focal</code>. This worked quite well in this competition. However, since we have a surface metric, I wanted to weight the boundaries of masks more heavily. </p> <ul> <li>What didn't work: losses I found in open-source repositories (like Hausdorff Distance loss).</li> <li>What worked really well in terms of Surface Dice, FP and FN on validation: CE with x2 weights for boundaries. </li> </ul> <p>So in the end I decided to use <code>CE_boundaries + Dice + Focal</code> for most of my models, and <code>CE_boundaries + Twersky + Focal</code> for a single model.</p> <p>Twersky was focusing more on FN rather than FP, but more on that in the next section. 
</p> <pre><code> (torch.nn.modules.loss._Loss): (): ().__init__() self.bound = EdgeEmphasisLoss(alpha=bound_alpha) self.dice = smp.losses.DiceLoss(mode=) self.focal = smp.losses.FocalLoss(mode=) self.bound_weight = bound_weight self.dice_weight = dice_weight self.focal_weight = focal_weight (): ( self.bound_weight * self.bound(preds, gt, boundaries) + self.dice_weight * self.dice(preds, gt) + self.focal_weight * self.focal(preds, gt) ) (nn.Module): (): (EdgeEmphasisLoss, self).__init__() self.alpha = alpha (): bce_loss = F.binary_cross_entropy_with_logits(inputs, targets, reduction=) weighted_loss = bce_loss * ( + self.alpha * boundaries) weighted_loss.mean() </code></pre> <h3>Preprocessing</h3> <p>After analyzing initial models and its errors, I realized that my biggest issue is FN, not FP. In other words, my models simply don't see some masks, mostly the small ones. </p> <p>So I decided to increase the resolution of my trainings with crops from 512x512 to 1024x1024. However, after a couple of hours of training it hit me: that doesn't make much sense. By going from 512x512 to 1024x1024 I don't really increase resolution (each pixels holds the same real-world size), just the context, and 512x512 seemed like a big-enough context already. </p> <p>Instead, I decided to do the following: </p> <pre><code> (nn.Module): (): ().__init__() self.upscale_factor = upscale_factor self.model = Unet( encoder_weights=encoder_weights, encoder_name=encoder_name, decoder_use_batchnorm=decoder_use_batchnorm, in_channels=in_channels, classes=classes, ) (): x = torch.nn.functional.interpolate( x, (x.shape[-] * self.upscale_factor, x.shape[-] * self.upscale_factor), mode= ) x = self.model(x) x = torch.nn.functional.interpolate( x, (x.shape[-] // self.upscale_factor, x.shape[-] // self.upscale_factor), mode= ) x </code></pre> <p>This approached worked really well and I could clearly see improvements both on CV, and LB.</p> <h3>Models</h3> <p>I used only U-Net models from SMP with different backbones. Tried a lot of things, but for final ensembles decided to settle on the following:</p> <ul> <li>effnet_v2_s</li> <li>effnet_v2_m</li> <li>maxvit_base</li> <li>dpn68 </li> </ul> <p>Maxvit was trained on 512x512 crops, effent and dpn - on 512x512 with x2 interpolation. Crops were used from xy, xz, and yz axes. During inference, I use the same crops resolution with overlaps of crops_size / 2 (so that's 256). In other words, sliding window approach.</p> <p>Augmentation were medium-level in terms of intensity. </p> <pre><code> A.Compose( [ A.ShiftScaleRotate( p=, shift_limit_x=(-, ), shift_limit_y=(-, ), scale_limit=(-, ), rotate_limit=(-, ), border_mode=cv2.BORDER_CONSTANT, ), A.RandomBrightnessContrast( brightness_limit=(-, ), contrast_limit=(-, ), p=, ), A.HorizontalFlip(), A.VerticalFlip(), A.OneOf( [ A.GridDistortion(border_mode=cv2.BORDER_CONSTANT, distort_limit=), A.ElasticTransform(border_mode=cv2.BORDER_CONSTANT), ], p=, ), AT.ToTensorV2(), ], ) </code></pre> <h3>Post processing</h3> <p>I tried to use cc3d to remove small objects, it made weak models better, but no difference for ensemble.</p> <h3>Private resolution</h3> <p>Now, this part is really tricky. My huge thanks to the organizers for announcing the test resolutions. It sincerely warms my heart to see organizers interact with participants that much here on the forum. Really, thank you. </p> <p>One approach is not to do anything. You train your model on 50um/voxel, inference on 63um/voxel. 
Considering I use conv-based backbones (except for maxvit) that have some level of scale-invariance + have scale augs in validation, this might work.</p> <p>The second approach is to do rescaling. I believe the correct approach for rescaling is the following: </p> <pre><code> test_kidney == : private_res = public_res = scale = private_res / public_res d_original, h_original, w_original = test_kidney_image.shape test_kidney_image = torch.tensor(test_kidney_image).view(, , d_original, h_original, w_original) test_kidney_image = test_kidney_image.to(dtype=torch.float32) test_kidney_image = torch.nn.functional.interpolate(test_kidney_image, ( (d_original*scale), (h_original*scale), (w_original*scale), ), mode=).squeeze().numpy() </code></pre> <p>…</p> <pre><code>d_preds, h_preds, w_preds = preds_ensemble.shape preds_ensemble = preds_ensemble.view(, , d_preds, h_preds, w_preds) preds_ensemble = preds_ensemble.to(dtype=torch.float32) preds_ensemble = torch.nn.functional.interpolate(preds_ensemble, ( d_original, h_original, w_original, ), mode=).squeeze() </code></pre> <p>So we do 3D resize instead of 2D one: re-scale image from 63um (private) to 50um (public + CV), compute predictions, and re-scale them back to 63um. Simply going for 2D would work as well, but theoretically you end up with different spatial and temporal resolutions in that case. </p> <p>This trick helped. To give a single point (I don't have much else): the same ensemble scores 0.634 on private without interpolation, and 0.670 - with interpolation. </p> <p>To be honest, I didn't think it would make that much difference. I tried the following experiment locally: </p> <ul> <li>Download kidney in 25um resolution. Compute predictions in 25um, interpolate them to 50um, compute metrics. This approach brought my 0.92 surface dice to 0.895. Which is quite good, considering we're talking about x2 interpolation in all 3 directions (that's 8 times less volume) and the fact that it's harder to detect small objects in smaller resolution.</li> <li>Download kidney in 25um resolution. Interpolate image to 50um, compute predictions, compute metrics. This approach essentially provided the same metrics as in the case of simply using 50um from organizers. </li> </ul> <p>So even though I didn't really think interpolation is that important, it also didn't hurt (I was afraid of interpolation artifacts), so I used it for both final subs. </p> <h3>Final subs</h3> <p>Both subs have an ensemble of 3 models, each inferenced on all 3 axes without TTA (TTA took too much time, and didn't really help on CV). </p> <ul> <li>First sub. CV: 0.84 (kidney_1), Public: 0.768. Private: 0.566<br> <code>Maxvit_ce_dice_focal</code> + <code>effnet_v2_s_ce_dice_focal</code> + <code>effnet_v2_m_ce_dice_focal</code> trained on kidney_3, validated on kidney_1. This approach didn't work that well on CV, and also on Public and Private. </li> <li>Second sub. CV: 0.923 (kidney_3), Public: 0.855. Private: 0.691<br> <code>Maxvit_ce_dice_focal</code> + <code>effnet_v2_s_ce_bounds_dice_focal</code> + <code>dpn_68_ce_bounds_twersky_focal</code>.</li> </ul> <p>Code:</p> <ul> <li>Inference notebook <a href="https://www.kaggle.com/code/ivanpan/final-submission/notebook" target="_blank">link</a></li> <li>Training code <a href="https://github.com/ivanpanshin/segment-vasculature-5th-place" target="_blank">link</a></li> </ul>
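<p>Restating the 3D interpolation trick compactly: resample the 63um/voxel private volume to the 50um/voxel training resolution, run the models, then resample the probabilities back onto the original grid before thresholding. The sketch below is illustrative; predict_volume is a placeholder for the sliding-window ensemble and the volume shape is a toy value.</p>
<pre><code>import torch
import torch.nn.functional as F

def rescale_3d(volume, size):
    v = volume.reshape(1, 1, *volume.shape).float()
    v = F.interpolate(v, size=size, mode="trilinear", align_corners=False)
    return v.squeeze(0).squeeze(0)

private_res, public_res = 63.0, 50.0
scale = private_res / public_res                     # more voxels are needed at 50um/voxel

image = torch.rand(32, 128, 128)                     # stand-in for the loaded private CT volume
size_50um = tuple(round(s * scale) for s in image.shape)
image_50um = rescale_3d(image, size_50um)

probs_50um = torch.sigmoid(image_50um)               # placeholder for predict_volume(image_50um)
probs_63um = rescale_3d(probs_50um, tuple(image.shape))
print(image.shape, image_50um.shape, probs_63um.shape)
</code></pre>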
SenNet + HOA - Hacking the Human Vasculature in 3D
93rd Place Solution for the SenNet + HOA - Hacking the Human Vasculature in 3D
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I started late in the beginning of January. This is only my second kaggle competition and I am happy with the enormous learning and the outcome of placing in the top 10% with bronze. Thank you to the fellow teams for sharing your knowledge and to the organizers for a well-run competition.</p> <h1>Context</h1> <p>Business context: <a href="https://www.kaggle.com/competitions/blood-vessel-segmentation/overview" target="_blank">https://www.kaggle.com/competitions/blood-vessel-segmentation/overview</a><br> Data context: <a href="https://www.kaggle.com/competitions/blood-vessel-segmentation/data" target="_blank">https://www.kaggle.com/competitions/blood-vessel-segmentation/data</a></p> <h1>Overview of the approach</h1> <p>My final model was an ensemble of two UNet 2D models trained with 1024x1024 and 512x512 sizes with equal weighting in inference. I used slices from X,Y and Z projections for both training and inference, a simplification inspired by the 2.5D Unet paper. </p> <p>For training, I used segmentation_models_pytorch smp.Unet architecture with resnext50_32x4d backbone and started with imagenet weights. Trained with kidney_1_dense and kidney_3_dense and validated with kidney_2. For preprocessing, used histogram equalization followed by minmax normalization. Adam optimizer, CosineAnnealingLR scheduler and smp.losses.DiceLoss were used for training. Following augmentations were used in training.</p> <pre><code>aug_prob = train_aug_list = [ A.Rotate(limit=, p=aug_prob), A.RandomScale(scale_limit=(,),interpolation=cv2.INTER_CUBIC,p=aug_prob), A.PadIfNeeded(min_height=img_size[], min_width=img_size[], p=), A.RandomCrop(img_size[], img_size[], p=), A.RandomBrightnessContrast(p=aug_prob), A.GaussianBlur(p=aug_prob), A.MotionBlur(p=aug_prob), A.GridDistortion(num_steps=, distort_limit=, p=aug_prob), ToTensorV2(transpose_mask=), ] </code></pre> <p>For inference, I padded the image to 3072x3072 and used a 3x3 grid of 1024x1024 size tiles to run the model on. 4 rotations used on each tile for inference and averaged for Test Time Augmentation aka TTA. Used sigmoid activation layer in Unet. Also, ran inference on X, Y and Z projections and created three prediction volumes. On each projection’s prediction volume, applied sigmoid threshold of 0.0001 to get binary mask volumes, then transposed and added binary masks from the three projections, then used a majority voting to get final predictions.</p> <h1>Details of the submission</h1> <h2>What was special about the submission</h2> <p>-I did no resizing in training or inference to reduce errors and artifacts from downsizing and aspect ratio changes. In training, I used a random crop. In inference, I used padding and tiling.<br> -Fixed sigmoid threshold independent of dataset, as opposed to using top Nth percentile for thresholding produced more stable result on private LB. 
I was bumped up by 616 in ranking.<br> -Ensemble of 1024+512 gave higher score than each model applied separately.<br> -4x rotation Test Time Augmentation in inference boosted the score by 0.007 in public LB.<br> -During local validation and spot checking, my models were achieving very high 2D dice scores on the middle slices in the volumes. Most of the FP and FN errors were on the edge slices in the tiny vessels (1 or 2 pixel errors).<br> -While I thought histogram equalization was a secret sauce (since it improved public LB score over mean/std/clip normalization), it turns out my mean/std/clip normalization model produced much better private LB score.<br> -There were many high scoring inference notebooks publicly shared in this competition. While I studied them to understand what other teams are doing and did in fact get many great ideas, I chose not to use large sections of code directly since there were many questionable choices and I didn’t understand why those notebooks were producing the high public LB scores. This approach kept my solution unique and generalized enough to move up in the private LB.</p> <h2>What was tried and didn’t work</h2> <p>-The striding on 3 or 5 consecutive slices to create a multi-channel image for training and inference did not work well for me, both score wise and CPU/GPU/Memory resource wise.<br> -Tried median blur for preprocessing which did not work well.<br> -Went from efficientnetb0 to resnext50_32x4d. Maybe somewhere in between would have been better.<br> -It would have been better to stick with mean/std/clip norm as opposed to histogram equalization for preprocessing<br> -Tried inference with Z projection only initially, after 0.04 improvement in LB score with XYZ projections and majority voting, and decided to use it going forward.<br> -There were suggestions in public high-scoring notebooks to reduce augmentation probability to 0.05, I tried low augmentation and although that produced higher validation dice scores during training, almost always produced lower score in public LB. So, I decided to increase to 0.10 probability for augmentation, now that I read the solution writeups from top scoring teams, I realize an even higher augmentation would have been better<br> -I had tried training with 90% of slices of all three kidneys dense 1, 2 and 3 and validating with 10% of the slices. Since I read in the discussion board that there is label shift in kidney 2, I switched to using only kidney1 and kidney 3 for training. The results got slightly better. Every incremental improvement counted.<br> -Since I realized most of segmentation errors were in the first 100 slices or so with the tiny vessels, I tried to train a separate model with first 200 slices of all three kidneys. Model didn’t work at all on both early slices and middle slices, probably because there was not enough data to train.<br> -Tried morphological opening for post processing the predicted mask. I expected that it will reduce some false positives without affecting true positives that much. But nope. It made both false positives and false negatives significantly worse. Then, tried removing all one pixel blobs for post processing. It was a disaster with the score. Also, tried majority voting across three slices to retain positives, it was a disaster. After that, I decided that no post processing is best. 
Any improvement had to come from better model, not predicted mask cleanup.<br> -Tried 6x and 8x TTA with flips and rotations, but score was worse than 4x TTA.</p> <h2>What I didn’t try which I would consider next time</h2> <ol> <li><p>A loss function that considers 2D boundary loss since the “surface dice score” used for test evaluation uses only the 3D surface boundaries. 5th place solution used a clever loss function of CE_boundaries + Dice + Focal. 4th place solution used a BoundaryDOULoss (<a href="https://arxiv.org/pdf/2308.00220.pdf" target="_blank">https://arxiv.org/pdf/2308.00220.pdf</a>)</p></li> <li><p>Take into account that the native scan resolution of private test data is different (63um/voxel) from train (50 and 50.16um) and public test data (50.28um/voxel). This can be accomplished with 2D or 3D resize during inference or equivalent scaling augmentations in training. Top 5 solution write ups talk about this.</p></li> <li><p>Somehow incorporate the sparse annotations in training. I completely ignored the kidney_3_sparse and kidney_2 (also sparse and had shifted labels) in training which limited my training data to kidney_1_dense and the small number of slices from kidney_3_dense. Pseudo-annotating the sparse data is an approach I saw that many of the top scoring solutions used.</p></li> <li><p>Experiment more with different model architectures and ensembles? I was bogged down with the basics and getting the foundations working. Also, with the Kaggle 30 hours per week GPU limit, didn’t have time and resources for this.</p></li> <li><p>Increase the augmentation probabilities in intensity and scaling. Many of the top solutions have done this.</p></li> <li><p>2.5D approach with 5 consecutive slices concatenated as a 5-channel image. 6th place solution has successfully used this. I was running into CPU/GPU/Mem resource issues, but those could have been potentially resolved with more time and effort. </p></li> </ol> <h2>Preprocessing examples</h2> <p>Histogram equalization followed by Minmax normalization:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F18199244%2Fe119a342eda6f290c058e94c97728e78%2FNorm-HistogramEqualization.png?generation=1707343051563429&amp;alt=media"></p> <p>Mean/Std normalization with clipping followed by Minmax normalization:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F18199244%2F9e72cb4f6f49eee6aae6213df2833e90%2FNorm-MeanStdClip.png?generation=1707343078883718&amp;alt=media"></p> <h2>Padding and tiling kidney slice for inference</h2> <p>This is showing 2048x2048 padding and 512x512 tiles which I later changed to 3072x3072 padding and 1024x1024 tiles.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F18199244%2F4acb3f75490749342dab0c8e3f7f91de%2FTiledKidneySlice.png?generation=1707342738234654&amp;alt=media"></p> <h2>Sample segmentation results on slices for illustration</h2> <p>Slice 1000 from kidney_1_dense Label Vs Prediction. Prediction is color coded as Green for True Positives, Red for false positives and Blue for false negatives. Also all blobs are dilated 5x5 to observe the tiny blobs visually. Most of the false positives and false negatives are 1 pixel area blobs. However, there are also some 1 pixel area true positive blobs in this slice and many more of those in early slices for example 0100. 
So, cannot blindly remove 1 pixel blobs.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F18199244%2F3d16023f25a7f35a225c15c2d88cc107%2FSegmentedKidney1_1000.png?generation=1707342563232471&amp;alt=media"></p> <p>Slice 0100 from kidney_1_dense Label Vs Prediction. 119 pixels in label, 133 pixels found, 93 pixels true positives, 40 pixels false positives, 26 pixels false negatives. Many of the blob sizes are tiny 1 to 4 pixels area.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F18199244%2F632bff25ba557b49017796618251f092%2FSegmentedKidney1_0100.png?generation=1707342598764919&amp;alt=media"></p> <h1>Updates After the competition ended</h1> <p>After the competition ended, some new opportunities opened up for learning. First, I read the top solution write-ups looking for inspiration, especially low-hanging fruit ideas for adding to my existing implementation. Second, the private and public scores are now visible, enabling us to know how we do on the two test sets. Third, the 5 per day submission limit is lifted, so can do a lot more experiments more quickly. Given these, I was able to improve my private score to 0.682 which would have been 6th place (however not genuinely since I would never have chosen this submission due to the public score being so low). Well, anyway, here it is, my best improved score.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F18199244%2F574ddf3fa25f91bab4dad90f81efd542%2FSenNetHOA-BestPrivateScore.png?generation=1707777659990418&amp;alt=media"></p> <p>Here are the ideas I added to my implementation to achieve the above score.</p> <ol> <li>Changed histogram equalization preprocessing to mean/std/clip normalization (I learned from my own prev submissions scoring higher in private score)</li> <li>Changed DiceLoss to custom loss that is a combination of boundary loss, dice loss and focal loss (shared by 5th place solution)</li> <li>Changed the unet architecture to include an upscale layer (shared by 5th place solution)</li> <li>Changed to heavy augmentations in scale and intensity (strategy used by several top solutions)</li> <li>Used -0.45, 0.05 for scale limit to downsize more than upsize, to account for the private test set resolution. 
(shared by 3rd place solution)</li> </ol> <h1>Sources</h1> <p>Following kaggle sources were hugely helpful, many thanks to the contributors.</p> <p><a href="https://www.kaggle.com/competitions/blood-vessel-segmentation/discussion/456118" target="_blank">https://www.kaggle.com/competitions/blood-vessel-segmentation/discussion/456118</a><br> <a href="https://www.kaggle.com/competitions/blood-vessel-segmentation/discussion/464768" target="_blank">https://www.kaggle.com/competitions/blood-vessel-segmentation/discussion/464768</a><br> <a href="https://www.kaggle.com/competitions/blood-vessel-segmentation/discussion/468525" target="_blank">https://www.kaggle.com/competitions/blood-vessel-segmentation/discussion/468525</a><br> <a href="https://www.kaggle.com/code/junkoda/fast-surface-dice-computation" target="_blank">https://www.kaggle.com/code/junkoda/fast-surface-dice-computation</a><br> <a href="https://www.kaggle.com/code/hengck23/lb0-808-resnet50-2d-unet-xy-zy-zx-cc3d" target="_blank">https://www.kaggle.com/code/hengck23/lb0-808-resnet50-2d-unet-xy-zy-zx-cc3d</a><br> <a href="https://www.kaggle.com/code/yoyobar/2-5d-cutting-model-baseline-inference" target="_blank">https://www.kaggle.com/code/yoyobar/2-5d-cutting-model-baseline-inference</a><br> <a href="https://www.kaggle.com/code/yoyobar/2-5d-cutting-model-baseline-training" target="_blank">https://www.kaggle.com/code/yoyobar/2-5d-cutting-model-baseline-training</a><br> <a href="https://www.kaggle.com/code/misakimatsutomo/inference-1024-should-have-a-percentile-of-0-00149" target="_blank">https://www.kaggle.com/code/misakimatsutomo/inference-1024-should-have-a-percentile-of-0-00149</a></p> <p>Other sources I used for background and inspiration:<br> <a href="https://doi.org/10.48550/arXiv.2311.13319" target="_blank">https://doi.org/10.48550/arXiv.2311.13319</a><br> <a href="https://arxiv.org/abs/1902.00347" target="_blank">https://arxiv.org/abs/1902.00347</a><br> <a href="https://arxiv.org/abs/2010.0616" target="_blank">https://arxiv.org/abs/2010.0616</a></p>
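<p>To make the X/Y/Z projection voting described in the overview above concrete, here is a minimal sketch of thresholding each projection's prediction volume and combining them by majority vote. This is an illustration under assumed array names and shapes, not the exact code used in the submission.</p> <pre><code>import numpy as np

# pred_x, pred_y, pred_z: sigmoid prediction volumes from the three projections,
# already transposed back into a common (Z, H, W) orientation (illustrative names).
def majority_vote(pred_x, pred_y, pred_z, threshold=0.0001):
    bin_x = (pred_x &gt; threshold).astype(np.uint8)
    bin_y = (pred_y &gt; threshold).astype(np.uint8)
    bin_z = (pred_z &gt; threshold).astype(np.uint8)
    votes = bin_x + bin_y + bin_z          # each voxel now holds 0..3 votes
    return (votes &gt;= 2).astype(np.uint8)   # keep voxels predicted by at least 2 of the 3 views
</code></pre>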
LLM - Detect AI Generated Text
[Solution of 4th Place in Efficiency LB]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: LLM - Detect AI Generated Text <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Firstly, I would like to thank Kaggle and THE LEARNING AGENCY LAB for hosting such an exciting competition. During this competition, I have been following the top solutions in the discussion area. These ideas are very cool, and I have learned a lot from them. I am deeply appreciative of the participants who generously shared their insights and observations.</p> <p>My solution was simple, yet to my surprise, it secured the 4th place in the private Efficiency LB. From the perspective of the Public Efficiency LB, this achievement seemed almost impossible…</p> <p>I would like to extend my thanks to <a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a> for providing <a href="https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset" target="_blank">DAIGT-V2</a>. I also acknowledge <a href="https://www.kaggle.com/mustafakeser4" target="_blank">@mustafakeser4</a> for sharing a high-scoring <a href="https://www.kaggle.com/code/mustafakeser4/inference-detectai-distilroberta-0-927" target="_blank">Bert model</a> and the source of the training data. Although I did not utilize the Bert model in my final submission, I incorporated the data as external training data. The results clearly indicate that this external data was the key factor behind this surprise.</p> <h4><strong>Solution Summary:</strong></h4> <p>I used multiple classifiers to classify the TFIDF features. Our code comes from</p> <p><a href="https://www.kaggle.com/code/batprem/llm-daigt-cv-0-9983-lb-0-960" target="_blank">https://www.kaggle.com/code/batprem/llm-daigt-cv-0-9983-lb-0-960</a>,</p> <p>thanks to <a href="https://www.kaggle.com/batprem" target="_blank">@batprem</a> for sharing.</p> <p>At the beginning of my participation in the competition, I observed that the leaders in the Public LB often also topped the Efficiency LB. This led me to hypothesize that utilizing TFIDF alone could simultaneously yield high scores and efficiency. Throughout the remainder of the competition, my focus was on enhancing the score as much as possible without compromising the algorithm’s efficiency.</p> <p>Here is a summary of my solution:</p> <ol> <li>I found that Cat was the primary cause of the algorithm’s slowdown. Consequently, I removed Cat while retaining lgb. This action reduced the algorithm’s runtime from approximately 2 hours to about 15 minutes.</li> <li>I added ComplementNB and LinearSVC as base classifiers, which could slightly improve the public LB score.</li> <li>I utilized DAIGT-V2 and the previously mentioned external training data. I ran the same feature extraction and classification algorithms on the two datasets separately and then ensemble them in a weight of 6:4. This resulted in a public score of 963 and a private score of 916. 
Besides, 75:25 yielded a public score of 964 and a private score of 906.</li> <li>Taking inspiration from <a href="https://www.kaggle.com/competitions/llm-detect-ai-generated-text/discussion/468150" target="_blank">the tricks for AUC metric</a>, I applied min-max normalization to each prediction result prior to integration, slightly improving both the public and private scores.</li> </ol> <p>I have also tried to mix DAIGT-V2 and the external data for training instead of blending. This method produced a public score of 955 and a private score of 934. Due to the low public score, I did not select this submission as the final result. In my experiments of using different weights in the ensemble, I found that the more the weight is biased towards the external dataset, the higher the private score, but the lower the public score, so I only tried 6:4 and did not continue to try larger weights. I believe that if only external data is used for training, or if a more significant weight is set, it is possible to achieve 1st place in the Efficiency LB.</p> <h4>Code</h4> <p><a href="https://www.kaggle.com/superfei/solution-of-4th-place-in-efficiency-lb" target="_blank">https://www.kaggle.com/superfei/solution-of-4th-place-in-efficiency-lb</a></p>
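<p>As an illustration of point 4 above, here is a minimal sketch of min-max normalizing each pipeline's predictions before the weighted blend. The variable names follow the description above but are otherwise assumptions, not the author's exact code; the 6:4 weighting is the one reported in point 3.</p> <pre><code>import numpy as np

def min_max(p):
    # Rescale scores to [0, 1]; AUC depends only on ranking, so this is rank-preserving
    # and puts the two prediction vectors on a comparable scale before blending.
    p = np.asarray(p, dtype=np.float64)
    return (p - p.min()) / (p.max() - p.min() + 1e-12)

# preds_daigt_v2, preds_external: test-set probabilities from the two separately trained pipelines
blended = 0.6 * min_max(preds_daigt_v2) + 0.4 * min_max(preds_external)
</code></pre>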
SenNet + HOA - Hacking the Human Vasculature in 3D
[3rd Place solution] Refine from Sparse to Dense
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First and foremost, I would like to extend my gratitude to the organizers and the official Kaggle team for orchestrating such an outstanding competition. I joined the contest at a very late stage. Despite having some experience with segmentation competitions, I must express my appreciation to <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> , <a href="https://www.kaggle.com/yoyobar" target="_blank">@yoyobar</a> , and <a href="https://www.kaggle.com/junkoda" target="_blank">@junkoda</a> (implementation of metric), as well as the other community participants for their open-source contributions and discussions, which allowed me to quickly get up to speed with this contest.</p> <p>My approach was strikingly straightforward, relying solely on <strong>2D models</strong> and only utilizing <strong>smp</strong> (segmentation models pytorch) and <strong>timm</strong> (pytorch image models) in the whole training and inference pipeline. </p> <h2>Global key points</h2> <ol> <li><strong>Refining labels from sparse to dense.</strong></li> <li><strong>Emulating the magnification factor of the test set.</strong></li> <li><strong>Maintaining an appropriate resolution.</strong></li> </ol> <h2>1. From Sparse to Dense</h2> <p>Given that half of the training set has dense labels (kidney 1, kidney 3 dense), and the other half was sparse, utilizing dense labels to refine sparse ones was a crucial step. The overall process entailed:</p> <ol> <li>Training UNet(maxViT512) and UNet(EfficientNetv2s) using kidney 1 and kidney 3 dense.</li> <li>Generating supplemental labels for kidney 3 sparse using the trained UNet maxViT512 and UNet EfficientNetv2s models.</li> <li>Resuming the training of UNet maxViT512 and UNet EfficientNetv2s for a few epochs with kidney 1, kidney 3 (dense, sparse plus supplemental labels).</li> <li>Repeating the step2 on kidney 2.</li> <li>Training three UNet models (with EfficientNetv2s, SeResNext101, MaxViT512) and one UNet++ using all real labels from all kidney plus pseudo labels.</li> </ol> <p>Note: As the organizers disclosed the proportion of annotations within kidney 3 and kidney 2, I endeavored to select thresholds based on pixel quantity as close as possible to the official proportion when choosing threshold values for pseudo label.</p> <h2>2. Emulating the Magnification of the Private Test Set</h2> <p>A pivotal reason for my decision to participate in this competition was the disclosure of the magnification factors for the training and test sets by the hosts. The training set had a magnification of 50um/voxel, the public test set was the same at 50um/voxel, while the private test set was at 63um/voxel. A larger magnification factor implies a lower resolution. For instance, a 600um object would occupy 12 pixels in both the training and public sets, but only 10 pixels in the validation set. Hence, during training, <strong>I set the scaling center to 0.8</strong>, rather than 1, with a scaling range of 0.55 to 1.05, to simulate the private test set.</p> <pre><code>.ShiftScaleRotate(shift_limit=., =(-., .), =, =, =.), </code></pre> <h2>3. 
Maintaining an Appropriate Resolution</h2> <p>In this competition, training and inference along the x-axis, y-axis, and z-axis separately was a very important trick. However, this introduced a significant risk. The entire test set contained 1500 slices, with the public test set accounting for 67% and the private test set for 33%. This means that the private test set comprised only about <strong>500 slices</strong>. Inferring along the z-axis with a higher resolution (e.g., 1024) was feasible. But if inferring along the y-axis or x-axis, it would mean that one of the edges would only be 500 pixels long. At that point, if the model and code were configured for a larger resolution (say 1024), there would be a substantial risk of a huge shake down.</p> <p>My models primarily operated at a resolution of 512, with one model switching to higher resolution weights for larger resolution slices when the slice have appropriate resolution.</p> <table> <thead> <tr> <th>Model</th> <th>Backbone</th> <th>Resolution</th> <th>public</th> <th>private</th> </tr> </thead> <tbody> <tr> <td>UNet</td> <td>MaxViT-Large 512</td> <td>512</td> <td>0.846</td> <td>0.727(submission1)</td> </tr> <tr> <td>UNet</td> <td>SeResNext</td> <td>512</td> <td>0.819</td> <td>0.753</td> </tr> <tr> <td>UNet</td> <td>Efficiennet_v2_s</td> <td>448, 832</td> <td>0.799</td> <td>0.703</td> </tr> <tr> <td>UNet++</td> <td>Efficiennet_v2_l</td> <td>512</td> <td>0.817</td> <td>0.692</td> </tr> <tr> <td>ensemble</td> <td>-</td> <td>-</td> <td>0.846</td> <td>0.727(submission2)</td> </tr> </tbody> </table> <h2>4. Train on all data if convergence is Stable</h2> <p>During the early stages of the competition, whether validating on kidney 2 or kidney 3, I observed that if I trained for 20 epochs, after the initial few epochs, the dice coefficient (not surface dice) variation on the validation set was very minimal, with the MaxVit512 large model exhibiting the least fluctuation. Considering that we only had three kidneys, I decided to train on all kidneys directly after completing the pseudo labeling process, given the stability in convergence.</p> <h2>5. Minimizing the Impact of Threshold Values</h2> <p>I am grateful for the method provided by <a href="https://www.kaggle.com/junkoda" target="_blank">@junkoda</a> for calculating metrics. My most stable single model was able to maintain very minor fluctuations in the surface dice score (less than 1) within a threshold range of 0.2. After model fusion, the stable threshold range could be potentially in 0.3~ 0.4. A stable threshold is extremely crucial in segmentation competitions. In this competition, as my final model lacked a validation set, I had to utilize thresholds searched with earlier trained models that included a validation set and apply them to the final version of the model. Fortunately, the models trained on the full dataset appeared to possess threshold values very close to those from the earlier models trained with k1+k2 (sparse), and validate on k3. At the same time, the fluctuation of threshold values across kidney 3 dense, public, and private was very small.</p> <h2>6. Heavy augmentation on intensity.</h2> <p>As mentioned by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> , difference kidneys has large variance on intensity. So I used a heavy intensity augmentation.</p> <pre><code>.RandomBrightnessContrast(p=.), .RandomGamma(p=.), </code></pre> <h2>7. 
Quick ablation</h2> <table> <thead> <tr> <th>Model</th> <th>Backbone</th> <th>points mentioned above</th> <th>public</th> <th>private</th> </tr> </thead> <tbody> <tr> <td>UNet</td> <td>MaxViT-Large 512</td> <td>3, 5, 6</td> <td>0.818</td> <td>0.586</td> </tr> <tr> <td>UNet</td> <td>MaxViT-Large 512</td> <td>3, 4, 5, 6</td> <td>0.857</td> <td>0.633</td> </tr> <tr> <td>UNet</td> <td>MaxViT-Large 512</td> <td>2, 3, 4, 5, 6</td> <td>0.849</td> <td>0.652</td> </tr> <tr> <td>UNet</td> <td>MaxViT-Large 512</td> <td>1, 2, 3, 4, 5, 6</td> <td>0.846</td> <td>0.727</td> </tr> </tbody> </table> <h2>8. Final Submission</h2> <p>My final submissions were a single model of MaxViT and a ensemble of the four models. Surprisingly, both submissions scored same at 0.727. I did not use any form of weighting and MaxViT only constituted a quarter of the ensemble submission, but their scores were totally the same on private LB. Even more astonishing was that the single-model score of SeResNext on private LB turned out to be the highest. Its cv was nothing extraordinary, its convergence was not more stable than MaxViT's, and its public leaderboard score was not high, so I had no reason to choose it.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7285387%2F8fa373c75008e7cf3064b7f4c4089175%2F1.png?generation=1707272539926386&amp;alt=media" alt="1"></p> <p>Finally, I would like to extend my gratitude once again to the organizers, Kaggle, and all the participants again!</p> <hr> <p><strong>Inference(Submission)</strong> Code is published:</p> <ol> <li><a href="https://www.kaggle.com/forcewithme/sennet-final-submission2" target="_blank">MaxVit512 scored 0.727</a></li> <li><a href="https://www.kaggle.com/code/forcewithme/sennet-top3-final-submission?scriptVersionId=162311084" target="_blank">Ensemble submission scored 0.727</a></li> </ol> <p><strong>Training</strong> Code is published in the <a href="https://www.kaggle.com/datasets/forcewithme/sennettop3-training-code/data" target="_blank">kaggle dataset</a>. </p>
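<p>To make the pseudo-labeling note under key point 1 concrete, here is a minimal sketch of picking a binarization threshold whose positive-pixel fraction is closest to the annotation proportion disclosed by the organizers. The function and variable names are illustrative assumptions, not the author's code.</p> <pre><code>import numpy as np

def pick_threshold(prob_volume, target_fraction, candidates=np.arange(0.05, 0.96, 0.05)):
    # Fraction of voxels labeled positive at each candidate threshold.
    fractions = [float((prob_volume &gt; t).mean()) for t in candidates]
    # Keep the threshold that best matches the officially disclosed proportion.
    gaps = [abs(f - target_fraction) for f in fractions]
    return float(candidates[int(np.argmin(gaps))])
</code></pre>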
SenNet + HOA - Hacking the Human Vasculature in 3D
474th Ranked Solution for the SenNet + HOA - Hacking the Human Vasculature in 3D Challenge
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SenNet + HOA - Hacking the Human Vasculature in 3D <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Thank You</h1> <p>I first of all would like to thanks all the solution writers, competition participants, competition organizers and community members for hosting another interesting challenge useful for the mankind and important to build the analytical skills necessary for a quantitative competition.</p> <h1>Overview</h1> <p>The primary goal as prescribed by the researchers of this competition is to segment blood vessels. The objective is to create a model trained on 3D Hierarchical Phase-Contrast Tomography (HiP-CT) data from human kidneys to help complete a picture of vasculature throughout a body. Currently, human expert annotators manually trace the vascular structures — which is a slow process. Even with expert annotators, each new dataset takes 6+ months to complete. Machine learning approaches using this manual data do not generalize well to new datasets because of the variability of both human anatomy and to changes in the image quality as HiP-CT technology continues to improve and change.</p> <h2>The Data</h2> <p>The dataset is split into three main sub-categories :- train, test and sample submission. We break down the training data to get more accurate results as images and labels, and obtain the exact pixel breakdown, and similarly divide the testing data into images and labels too. The sample submission file is meant to provide a basic illustration ahead of the main submission file.</p> <h2>Submission Format</h2> <p>For each id in the test set, you must predict rle, a run-length encoded instance segmentation mask, where id represents {dataset}_{slice} for an image with path test/{dataset}/images/{slice}.tif. Represent the RLE for an empty mask as 1 0.</p> <p>The submission should contain a header and have the following format:<br> '<br> id,rle<br> kidney_5_0,1 1 100 10<br> kidney_5_1,1 1 100 10<br> kidney_6_0,1 0<br> kidney_6_1,1 0<br> '</p> <h2>The Model - Weighted Ensemble</h2> <p>The main model used to generate accurate predictions is a UNet built on a ResNet50 backbone for faster computation and greater accuracy. The first part is to create a library importing code base where we define all the necessary library imports. </p> <p>We move to the next important part which is defining the main model to create a solid baseline for further action. We define several parameters the first being the model paths and then moving to defining the model name, image size, input size, tile size and most importantly the threshold value needed for further calculations. 
</p> <p>'<br> model_path_i = 9 <br> model_path_i9 = 11</p> <p>class CFG:<br> model_name = 'Unet'<br> backbone = 'se_resnext50_32x4d'<br> in_chans = 1<br> image_size = 1024<br> input_size = 1024<br> tile_size = image_size<br> stride = tile_size // 4<br> drop_egde_pixel = 0<br> target_size = 1<br> chopping_percentile = 2e-4</p> <pre><code> = = = = [, , ] = </code></pre> <p>'</p> <p>We further add code to define various model paths in addition to our model to define our data distribution pipeline :-</p> <p>'<br> model_path = [<br> "/kaggle/input/2-5d-cutting-model-baseline-training/se_resnext50_32x4d_19_loss0.12_score0.79_val_loss0.25_val_score0.79.pt",<br> "/kaggle/input/training-6-512/se_resnext50_32x4d_19_loss0.09_score0.83_val_loss0.28_val_score0.83.pt",<br> "/kaggle/input/training-6-512/se_resnext50_32x4d_19_loss0.05_score0.90_val_loss0.25_val_score0.86.pt",<br> "/kaggle/input/training-6-512/se_resnext50_32x4d_19_loss0.05_score0.89_val_loss0.24_val_score0.86_midd.pt",<br> "/kaggle/input/training-6-512/se_resnext50_32x4d_24_loss0.05_score0.90_val_loss0.23_val_score0.88_midd.pt",<br> "/kaggle/input/training-6-512/se_resnext50_32x4d_24_loss0.04_score0.91_val_loss0.23_val_score0.88_midd.pt", # 25 025 rot 512 center<br> "/kaggle/input/blood-vessel-model-1024/se_resnext50_32x4d_24_loss0.10_score0.90_val_loss0.16_val_score0.85_midd_1024.pt",<br> "/kaggle/input/blood-vessel-model-1024/se_resnext50_32x4d_24_loss0.10_score0.90_val_loss0.12_val_score0.88_midd_1024.pt", # lr = 8e-5<br> "/kaggle/input/blood-vessel-model-1024/se_resnext50_32x4d_24_loss0.91_score0.09_val_loss0.91_val_score0.09_midd_1024.pt", # 60e-5 <br> "/kaggle/input/sn-hoa-8e-5-27-rot0-5/se_resnext50_32x4d_26_loss0.10_score0.90_val_loss0.12_val_score0.88_midd_1024.pt", # 8e-5-27-rot0-5<br> "/kaggle/input/sn-hoa-8e-5-27-rot0-5/se_resnext50_32x4d_30_loss0.10_score0.90_val_loss0.13_val_score0.88_midd_1024.pt",<br> "/kaggle/input/sennet-kidney-1-and-3/model_real_23.pt" # 31 8e 05<br> ]<br> '</p> <p>The next part is where we define the code to make changes to our original model, i.e., make it a custom model for use in present and future.</p> <p>The code for the same is below :-</p> <p>'<br> class CustomModel(nn.Module):<br> def <strong>init</strong>(self, CFG, weight=None):<br> super().<strong>init</strong>()</p> <pre><code> . .model smp.( encoder_name.backbone, encoder_weightsweight, in_channels.in_chans, classes.target_size, activation, ) .batch .batch def forward_(, image): output .model(image) output[:, ] def forward(, x: tc.): x x.to(tc.float32) x norm_with_clip(x.reshape(, x.shape[:])).reshape(x.shape) .input_size .image_size: x nn.functional.interpolate(x, size(.input_size, .input_size), mode'bilinear', align_corners) shape x.shape x [tc.rot90(x, ki, dims(, )) i range()] x tc.cat(x, dim) with autocast(): with tc.no_grad(): x [.forward_(x[i .batch:(i ) .batch]) i range(x.shape[] x tc.cat(x, dim) x x.sigmoid() x x.reshape(, shape[], shape[:]) x [tc.rot90(x[i], ki, dims(, )) i range()] x tc.stack(x, dim).mean() .input_size .image_size: x nn.functional.interpolate(x[], size(.image_size, .image_size), mode'bilinear', align_corners)[] x </code></pre> <p>def build_model(weight=None):<br> load_dotenv()</p> <pre><code> = CustomModel(CFG, weight) .cuda() </code></pre> <p>'<br> The key features in the code defined here are the forward functions which define the type of interpolation, linearity and also the reshaping and rotation criterions. 
Furthermore, we define the rotation vectors for our image and labelling, from 1024 to 1024*1024 to define various parameters in consideration. </p> <p>We further add code to increase our functionalities in our code as follows :-</p> <p>'<br> def rle_encode(mask):<br> pixel = mask.flatten()<br> pixel = np.concatenate([[0], pixel, [0]])<br> run = np.where(pixel[1:] != pixel[:-1])[0] + 1<br> run[1::2] -= run[::2]<br> rle = ' '.join(str(r) for r in run)<br> if rle == '':<br> rle = '1 0'<br> return rle</p> <p>def min_max_normalization(x: tc.Tensor) -&gt; tc.Tensor:<br> """input.shape=(batch,f1,…)"""<br> shape = x.shape<br> if x.ndim &gt; 2:<br> x = x.reshape(x.shape[0], -1)</p> <pre><code>min_ = x.(=-, keepdim=True)[] max_ = x.(=-, keepdim=True)[] min_.mean() == and max_.mean() == : x.() x = (x - min_) / (max_ - min_ + ) x.() </code></pre> <p>def norm_with_clip(x: tc.Tensor, smooth=3e-5):<br> dim = list(range(1, x.ndim))<br> mean = x.mean(dim=dim, keepdim=True)<br> std = x.std(dim=dim, keepdim=True)<br> x = (x - mean) / (std + smooth)<br> x[x &gt; 5] = (x[x &gt; 5] - 5) * 1e-3 + 5<br> x[x &lt; -3] = (x[x &lt; -3] + 3) * 1e-3 - 3<br> return x</p> <p>def add_edge(x: tc.Tensor, edge: int):<br> mean_ = int(x.to(tc.float32).mean())<br> x = tc.cat([x, tc.ones([x.shape[0], edge, x.shape[2]], dtype=x.dtype, device=x.device) * mean_], dim=1)<br> x = tc.cat([x, tc.ones([x.shape[0], x.shape[1], edge], dtype=x.dtype, device=x.device) * mean_], dim=2)<br> x = tc.cat([tc.ones([x.shape[0], edge, x.shape[2]], dtype=x.dtype, device=x.device) * mean_, x], dim=1)<br> x = tc.cat([tc.ones([x.shape[0], x.shape[1], edge], dtype=x.dtype, device=x.device) * mean_, x], dim=2)<br> return x<br> '</p> <p>Functionalities such as the RLE code, min-max normalization, normalize w/ clip and add-edge define the additional features that this code provides for our ease in operating with the image pixel data. 
</p> <p>Further, we define a data pipeline to add data into our system with a pipeline feature to ensure no data leakage and high efficiency in data processing.</p> <p>We finally define the output format before defining the submission code to consolidate our work, with the code for output as follows :-</p> <p>'<br> def get_output(debug=False):<br> outputs = []</p> <pre><code>if debug: paths = [] else: paths = () outputs = [[], []] for path in paths: x = (path, ) labels = tc.(x, dtype=tc.uint8) mark = (x, path).() for axis in [, , ]: debug_count = if axis == : x_ = x labels_ = labels elif axis == : x_ = x.(, , ) labels_ = labels.(, , ) elif axis == : x_ = x.(, , ) labels_ = labels.(, , ) if x.shape[] == and axis != : break dataset = (x_, path) dataloader = (dataset, batch_size=, shuffle=False, num_workers=) shape = dataset.x.shape[-:] x1_list = np.(, shape[] + CFG.tile_size - CFG.tile_size + , CFG.stride) y1_list = np.(, shape[] + CFG.tile_size - CFG.tile_size + , CFG.stride) for img, index in (dataloader): img = img.() img = (img[], CFG.tile_size // )[None] mask_pred = tc.(img[:, ], dtype=tc.float32, device=img.device) mask_count = tc.(img[:, ], dtype=tc.float32, device=img.device) indexs = [] chip = [] for y1 in y1_list: for x1 in x1_list: x2 = x1 + CFG.tile_size y2 = y1 + CFG.tile_size indexs.([x1 + CFG.drop_egde_pixel, x2 - CFG.drop_egde_pixel, y1 + CFG.drop_egde_pixel, y2 - CFG.drop_egde_pixel]) chip.(img[..., x1:x2, y1:y2]) y_preds = model.(tc.(chip)).(device=) if axis == CFG.axis_second_model: y_preds = ( * y_preds + * model9.(tc.(chip)).(device=)) if CFG.drop_egde_pixel: y_preds = y_preds[..., CFG.drop_egde_pixel:-CFG.drop_egde_pixel, CFG.drop_egde_pixel:-CFG.drop_egde_pixel] for i, (x1, x2, y1, y2) in (indexs): mask_pred[..., x1:x2, y1:y2] += y_preds[i] mask_count[..., x1:x2, y1:y2] += mask_pred /= mask_count mask_pred = mask_pred[..., CFG.tile_size // :-CFG.tile_size // , CFG.tile_size // :-CFG.tile_size // ] labels_[index] += (mask_pred[] * * CFG.axis_w[axis]).(tc.uint8).() if debug: debug_count += plt.() plt.(img[, CFG.in_chans // ].().().()) plt.() plt.(mask_pred[].().().()) plt.() if debug_count &gt; : break outputs[].(labels) outputs[].(mark) return outputs </code></pre> <p>'</p> <p>We finally define the submission data code as follows :-</p> <p>'<br> for index in range(len(ids)):<br> id = ids[index]<br> i = 0</p> <pre><code> x output: index &gt;= (x): index -= (x) += : break mask_pred = (output &gt; TH)() mask_pred2 = (mask_pred, , image_size=) mask_pred = mask_pred2() not is_submit: plt() plt(mask_pred) plt() debug_count += debug_count &gt; : break rle = (mask_pred) submission_df( pd(data={ : id, : rle, }, index=) ) </code></pre> <p>submission_df = pd.concat(submission_df)<br> submission_df.to_csv('submission.csv', index=False)<br> '</p> <h1>Future Scope</h1> <p>The future of this model is to add more features into the ResNet50 baseline model to enhance the efficiency of the system, and also certain vector rotations can lead to loss in data accuracy after a certain number of operations, so we need to define better methods for optimizing the number of rotations. Also, the model used here is too simple and needs to be made more efficient to enable faster computation and reduced operations.</p>
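<p>As a compact illustration of the tiling logic inside get_output above, here is a minimal sketch of sliding-window inference where overlapping tile predictions are accumulated and divided by a per-pixel count. Tile size, stride and tensor shapes are illustrative assumptions, not the exact configuration used above.</p> <pre><code>import torch

def tiled_inference(model, img, tile=1024, stride=256):
    # img: (1, C, H, W) padded tensor; returns averaged per-pixel probabilities.
    _, _, H, W = img.shape
    pred = torch.zeros((1, 1, H, W), device=img.device)
    count = torch.zeros_like(pred)
    for y in range(0, H - tile + 1, stride):
        for x in range(0, W - tile + 1, stride):
            with torch.no_grad():
                p = model(img[..., y:y + tile, x:x + tile]).sigmoid()
            pred[..., y:y + tile, x:x + tile] += p
            count[..., y:y + tile, x:x + tile] += 1
    return pred / count.clamp(min=1)
</code></pre>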
Santa 2023 - The Polytope Permutation Puzzle
243rd Ranked Solution for the Santa 2023 - The Polytope Permutation Puzzle Challenge
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Santa 2023 - The Polytope Permutation Puzzle <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Thank You</h1> <p>I thank all the competition participants, peer solution writers, community members and the supporting team of the platform for hosting another great competition for learning and implementation. </p> <h1>Overview</h1> <p>The objective of this competition is that you will solve cube-like puzzles in the fewest moves, but instead of the usual cubes, the puzzles come in a variety of geometric shapes. </p> <h2>The Data</h2> <p>1) puzzle_info.csv<br> puzzle_type - This file identifies the type of puzzle; puzzles of the same type have a common set of moves.<br> allowed_moves - It describes the moves allowed in solutions of this puzzle type. Each move represents a permutation given in array form. You are also allowed to use the inverses of these moves in your puzzle solutions.</p> <p>2) puzzle.csv<br> id - The file is a unique identifier for each puzzle.<br> puzzle_type - The file corresponds to that corresponding to puzzle_info.csv.<br> solution_state - The solution_state file provides an arrangement of "colors" describing the solved state of the puzzle, with a semicolon ; delimiter.<br> initial_state - It provides an arrangement of colors describing the initial state of the puzzle, with a semicolon ; delimiter. A solution to a puzzle must transform the initial state to the solved state through a sequential application of the puzzle's allowed_moves.<br> num_wildcards - The file is designated to denote the number of "mistakes" allowed in the final state of a solution.</p> <p>3) sample_submission.csv - A submission file in the correct format.<br> id - The file is corresponding to puzzles.csv.<br> moves - It provides an initial &amp; unoptimized solution.</p> <h1>The NxNxN Translator Solution Model</h1> <p>The main objective is to solve the problem using the most optimal number of moves. 
We use the following reference for understanding this technique :- <a href="https://www.kaggle.com/code/seanbearden/solve-all-nxnxn-cubes-w-traditional-solution-state" target="_blank">https://www.kaggle.com/code/seanbearden/solve-all-nxnxn-cubes-w-traditional-solution-state</a>.</p> <p>We walk through all the steps of writing the solution, from initial setup to producing the submission file, for a detailed understanding.</p> <h2>First Part - Cloning from Git</h2> <p>The first part is where we develop a potential solution by cloning an existing version from Git -</p> <p>'<br> %%bash<br> git clone <a href="https://github.com/dwalton76/rubiks-cube-NxNxN-solver.git" target="_blank">https://github.com/dwalton76/rubiks-cube-NxNxN-solver.git</a><br> cd rubiks-cube-NxNxN-solver<br> make init<br> '</p> <p>'<br> %%bash<br> cd ..<br> git clone <a href="https://github.com/dwalton76/kociemba.git" target="_blank">https://github.com/dwalton76/kociemba.git</a><br> cd kociemba/kociemba/ckociemba/<br> make<br> sudo make install<br> '</p> <h2>Second Part - Completing the Essentials</h2> <p>The next part is to complete the essentials before devising the solution, starting with the library imports -</p> <p>'<br> import gc</p> <p>import os</p> <p>os.chdir('rubiks-cube-NxNxN-solver')<br> print("Current Working Directory: ", os.getcwd())</p> <p>%%writefile 'script.py'</p> <p>import time<br> import math<br> from tqdm import tqdm<br> import pandas as pd<br> import numpy as np<br> import matplotlib.pyplot as plt<br> import os, subprocess<br> import json<br> '</p> <p>We then load the essential files needed by our main code -</p> <p>'<br> p = '/kaggle/input/santa-2023/'<br> puzzles = pd.read_csv(p + 'puzzles.csv', index_col='id')<br> puzzle_info = pd.read_csv(p + 'puzzle_info.csv', index_col='puzzle_type')<br> submission = pd.read_csv('/kaggle/input/so23-collective-benchmarks/submission.csv', index_col='id')<br> '</p> <h2>Third Part - The Main Solution Program</h2> <p>Now, we move to devising the core algorithm to develop the program solution for the NxNxN system.</p> <p>We define a state2ubl function that converts the puzzle's semicolon-delimited state into the facelet string format used by the NxNxN solver. The code for the same is below :-</p> <p>'<br> def state2ubl(state):<br> state_split = state.split(';')<br> dim = int(np.sqrt(len(state_split) // 6))<br> dim_2 = dim**2<br> s = ''.join([U_dict[f] for f in state_split])<br> return s[:dim_2] + s[2*dim_2:3*dim_2] + s[dim_2:2*dim_2] + s[5*dim_2:] + s[4*dim_2:5*dim_2] + s[3*dim_2:4*dim_2]<br> '</p> <p>Next, we define a move translation function to construct the mapping from standard cube notation to the competition's allowed moves. 
The code tor the same is below :-</p> <p>'<br> def move_translation(dim):<br> M = {}<br> M["U"] = f'-d{dim-1}'<br> M["R"] = "r0"<br> M["B"] = f"-f{dim-1}"<br> M["F"] = "f0"<br> M["L"] = f"-r{dim-1}"<br> M["D"] = "d0"</p> <pre><code> dim &gt; : M = f M = f M = f M = f M = f M = f dim &gt;= : M = f M = f M = f M = f M = f M = f width_max = dim (, width_max + ): M = f + M M = M + f M = f + M M = M + f M = f + M M = M + f m (M): M = M + + M M: M = M(,) : M = (()]) return M </code></pre> <p>allowed_moves = {}</p> <p>for puzzle_type, row in puzzle_info.iterrows():<br> allowed_moves[puzzle_type] = eval(row['allowed_moves'])</p> <p>def move_state(state, move, moves_pool):<br> if '-' in move:<br> move = move[1:]<br> rev = True<br> else:<br> rev = False<br> move = moves_pool[move]<br> if rev:<br> new_state = state[:]<br> for i in range(len(move)):<br> new_state[move[i]] = state[i]<br> state = new_state<br> else:<br> state = [state[idx] for idx in move]<br> return state<br> '</p> <p>The code goes in continuation as we define the manipulations needed to be made in the vector matrix and also we define the output solution state for generating output files. The code for the same is below :-</p> <p>'<br> outputs = {}<br> for id, row in puzzles.iterrows():<br> if row['puzzle_type'][:4] == 'cube':<br> dim = int(row['puzzle_type'].split('/')[-1])<br> if id not in range(125, 150):<br> continue<br> moves = eval(puzzle_info.loc[row['puzzle_type'], 'allowed_moves'])<br> for move in list(moves):<br> moves['-'+move] = np.argsort(moves[move]).tolist()<br> M = move_translation(dim)<br> init_state = row['initial_state']<br> sol_state = row['solution_state']</p> <pre><code> sol_state[:*dim**-] != .join([]*dim**): .join(((sol_state.split()))) == : (, , ) sb_solution = .join([fc*dim fc ]) sol_state_transl = .join(sol_state.split()) init_state_transl = .join(init_state.split()) changes = [] i ((sb_solution)): sb_solution[i]!=sol_state_transl[i]: changes.append(sb_solution[i]+sol_state_transl[i]) (, (changes)/) (, row[]) : (, , ) sol_state[:*dim**-] == .join([]*dim**): state = state2ubl(init_state) (, , (submission.loc[, ].split())) output = subprocess.check_output([ + state], shell=, stderr=subprocess.STDOUT).decode().split() outputs[] = output output[-][:] == : sol = output[-].split()[] : n (, ): output[-n]: sol = output[-n].split()[].split()[] mmoves = .join([M[m] m sol.split()]) new_state = init_state move mmoves.split(): new_state = .join((np.asarray(new_state.split())[np.array(moves[move])])) I = [.join([ i (dim)]) j [, , ]] manipulations = [] + I + [i1 + + i2 i1 I i2 I]+ [i1 + + i2+ + i3 i1 I i2 I i3 I]+ [i1 + + i2+ + i3 + + i4 i1 I i2 I i3 I i4 I] init_moves manipulations: temp_state = new_state (init_moves) &gt; : move init_moves.split(): temp_state = .join((np.asarray(temp_state.split())[np.array(moves[move])])) temp_state == sol_state: (init_moves) &gt; : mmoves += + init_moves (, (mmoves.split())) state = row[].split() move_name mmoves.split(): state = [state[i] i moves[move_name]] : row[].split() == state mmoves_length = (mmoves.split()) best_moves_length = (submission.loc[, ].split()) mmoves_length &lt; best_moves_length: submission.loc[, ] = mmoves () submission.reset_index().to_csv(, index=) AssertionError: () </code></pre> <p>'</p> <p>Further due to the solution, we define an SCO-optimizer for solving the problem and we define the code for the same below :-</p> <p>'<br> def SCOptimizer(pidr, depthr, verbose=False, verbose2=False):<br> for x in tqdm(pidr): <br> og_benchmark = 
len(paths['moves'][x].split('.'))<br> if verbose: print('Starting *** Pid', x, "Moves", og_benchmark)<br> for depth in depthr: <br> pid, pt, moves, allowed, initial_state, solution, wild = paths[['id','puzzle_type','moves','allowed_moves','initial_state','solution_state','num_wildcards']].values[x]<br> moves = moves.split('.')<br> benchmark = len(moves)<br> state = initial_state.split(';')<br> allowed = {k: Permutation(v) for k, v in eval(allowed).items()}<br> best_moves = []<br> solution, num_wrong = validState(moves, state, allowed, solution.split(';'))<br> ogsolution = solution<br> best_end_state = solution<br> best_total_moves = []<br> best_end_moves = []<br> print("Pid",pid, "Depth", depth, 'Wilds', wild, 'Moves', len(moves))<br> best_for_depth = 0<br> moves_to_states = []<br> for i in range(len(moves)): <br> moves_to_states.append(';'.join(oState(moves[i], state, allowed)))<br> state = moves_to_states[-1].split(';')<br> mc = [k for k in allowed] + ['-'+k for k in allowed] <br> while len(moves) &gt; depth:<br> solution = best_end_state<br> best_round_end_moves = []<br> dstates = getDepth(best_end_state, allowed, depth)<br> # added -- Start: check from initial stage here<br> if initial_state in dstates:<br> best_moves = []<br> moves = []<br> best_end_state = initial_state<br> else: #added -- END<br> for i in range(len(moves)):<br> state = moves_to_states[i]<br> if state in dstates:<br> best_moves = moves[:i+1]<br> moves = moves[:i+1]<br> best_end_state = state<br> break</p> <pre><code> state = best_end_state() while (state) != solution: m mc: temp_state = (m, state, allowed) s2 = (temp_state) s2 == dstates: state = temp_state best_end_moves(m) break best_total_moves = best_end_moves + best_total_moves best_for_depth &gt; (best_moves + best_total_moves) - benchmark: best_for_depth = (best_moves + best_total_moves) - benchmark verbose2: (,pid, , depth, , (best_moves + best_total_moves) - benchmark, , (moves)) change_moves = best_moves + best_total_moves (change_moves)&gt; and (change_moves) &lt; benchmark: paths = (best_moves + best_total_moves) state = initial_state() best_end_moves = verbose: (,pid, , depth, , (paths()) - og_benchmark) gc() </code></pre> <h1>-----------</h1> <p>pids = [336,337]<br> SCOptimizer(pids, range(1,2), False, True) <br> score = paths.moves.map(lambda x: len(x.split('.'))).sum()<br> print(score, score - baseline)</p> <h1>-----------</h1> <p>'</p> <h1>Future Scope</h1> <p>The solution prescribed provides a vector matrix manipulation approach to solve the NxNxN translation method to provide a method to optimize a 1x1x1 primary base approach. We can add a reinforcement learning agent to make the method for adaptable to the changes in the data, or can add gradient optimization algorithms to build better optimizations for edge cases. Further changes can be alteration to the NxNxN technique.</p>
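<p>To ground the permutation mechanics these solvers rely on, here is a minimal sketch of applying an allowed move (a permutation in array form) to a puzzle state and of building its inverse, mirroring the conventions used in the code above. The toy state and move are illustrative assumptions.</p> <pre><code>import numpy as np

def apply_move(state, perm):
    # perm[i] is the index of the facelet that moves into position i.
    return [state[i] for i in perm]

def inverse(perm):
    # Inverse permutation, corresponding to the '-move' variants allowed by the rules.
    return np.argsort(perm).tolist()

state = ['A', 'B', 'C', 'D']
move = [1, 2, 3, 0]  # toy example move
assert apply_move(apply_move(state, move), inverse(move)) == state
</code></pre>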