I have set up a label server and started training. One question I had, as someone relatively new to deep-learning-based segmentation model development: do the input images need to be co-registered to some sort of standard, or should the model take care of that? I am using the segmentation.py file.
This is hard to answer without knowing how different the images are from each other. In my experience, the augmentation pipeline in MONAI Label is quite robust to rotational and translational differences between samples. If your dataset is substantially variable, though, training may take somewhat longer.
Thanks for your input! These are all thin-cut temporal bone CTs, so I don't think there is that much variability, but there certainly is some.
Can you please expand on the type of dataset you're using? What do you mean by co-register? Do you have more than one modality per case?
Let us know,
Hi @diazandr3s ,
No, it is just a thin-cut temporal bone CT scan. Given variability in patient positioning and such, I thought it might improve accuracy to have them registered to a common atlas image, but in my training so far using deepedit.py it seems unnecessary, per muratmaga's thoughts.
One question I do have: is it possible to train the model server-side only, i.e., without issuing a request through the Slicer GUI? I have access to a Linux server with a much larger graphics card, but because of firewall restrictions I can't access it from my usual machine and would have to run everything there, as well as move the images there.
Thanks for the update.
One question I do have: is it possible to train the model server-side only, i.e., without issuing a request through the Slicer GUI?
Yes, you can run MONAI Label without starting the server. Here I explained how this can be done for batch inference: Can TotalSegmentator segment maxillofacial CT as well? - #9 by diazandr3s
But you can do the same for any task. For training, just change this argument to train: https://github.com/Project-MONAI/MONAILabel/blob/main/sample-apps/radiology/main.py#L297
It just works as a standard Python script.
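To make the "standard Python script" point concrete, here is a minimal sketch of a main.py-style entry point that dispatches on a mode argument. This is illustrative only: the `--mode` flag name and the mode values are assumptions for this sketch, not the exact interface of the linked MONAILabel main.py (check the file around the linked line for the real argument names).

```python
import argparse

# Sketch of an entry point that runs training or inference headlessly,
# without a label server. The flag and mode names are hypothetical.
def run(mode: str) -> str:
    if mode == "train":
        return "training started"   # here the app's train task would run
    elif mode == "infer":
        return "inference started"  # here the app's infer task would run
    raise ValueError(f"unknown mode: {mode}")

def main(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--mode", default="infer", choices=["train", "infer"])
    args = parser.parse_args(argv)
    return run(args.mode)

if __name__ == "__main__":
    print(main())
```

With a layout like this, server-side training is just `python main.py --mode train` on the Linux box, no Slicer request involved.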
Hope this helps,
Yes, thank you very much! I have it working (for now). I really appreciate all the past discussions as well as your help with this question.
I am wondering now, is there away to customize/set certain samples to be train vs validation? or get some insight as to which samples are validation/train in each run?
By default, MONAI Label splits the dataset into 80% for training and 20% for validation.
Here is the method that does this split: MONAILabel/monailabel/tasks/train/basic_train.py at main · Project-MONAI/MONAILabel · GitHub
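For intuition, the default behavior amounts to something like the sketch below: a seeded random shuffle followed by an 80/20 partition. This is not the exact MONAILabel implementation from basic_train.py, just a minimal stand-in showing the idea.

```python
import random

def split_80_20(datalist, seed=0):
    # Sketch of a default-style 80/20 random split (not the exact
    # MONAILabel code): shuffle reproducibly, hold out 20% for validation.
    items = list(datalist)
    random.Random(seed).shuffle(items)
    n_val = max(1, int(len(items) * 0.2))
    return items[n_val:], items[:n_val]  # (train, validation)
```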
You could override this method and specify a custom validation set instead, as is done here: MONAILabel/sample-apps/deepedit_multilabel/lib/train.py at testsMICCAI · Project-MONAI/MONAILabel · GitHub
If using the Segmentation mode, this method should be overridden in the lib/trainers/segmentation.py file.
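An override to pin specific cases to validation might look like the sketch below. The method name and signature are taken loosely from the linked basic_train.py and may differ in your MONAILabel version; the case identifiers are hypothetical, and the sketch assumes the datalist is a list of dicts with an "image" path per sample.

```python
# Sketch: deterministic split that pins chosen cases to validation,
# instead of the default random 80/20 partition. Names are assumptions.
VAL_CASES = {"case_07", "case_12"}  # hypothetical validation case ids

def partition_datalist(datalist, val_cases=VAL_CASES):
    """Split a list of {'image': ..., 'label': ...} dicts by case id."""
    train, val = [], []
    for item in datalist:
        bucket = val if any(c in item["image"] for c in val_cases) else train
        bucket.append(item)
    return train, val
```

A deterministic split like this also answers the second half of the question: since you choose the validation cases yourself, you always know which samples were held out in each run.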
I hope this helps,