Hi Murat! Good point, the existing documentation so far assumes a pretrained model. In my case, though, this was not a problem: I used MONAILabel for inner-ear structure segmentation in microCT (128^3 cubic volumes). I started with zero annotated volumes, but I nonetheless used the pre-trained left-atrium DeepEdit model (different anatomy AND modality), because 1) I wanted a quick start without much fiddling with the DeepEdit app code, and 2) I thought it might be beneficial that the early encoder layers already have some “clue” about basic 3D patch geometry. I'm not sure whether point 2) actually helped (I didn't try training from scratch for comparison), but to my delight, DeepEdit’s UNet model quickly started snapping to the anatomy. After only 2-3 manual annotations, I was already getting surprisingly good segmentation guesses from the model. In your case, you already have 20 pre-annotated volumes. You can indicate this in the datastore.json
file. Upon the first start of DeepEdit, instead of annotating right away, you could manually trigger an initial training run on those 20 volumes. The resulting model might already be quite “valuable”. Let me know if I can help.
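For orientation, here is a rough sketch of what a datastore.json entry with a pre-existing label can look like. This is only illustrative — the exact schema varies between MONAILabel versions, and the image/label names here (`volume_001.nii.gz`, the `"final"` label tag) are placeholders, so compare against a datastore.json that your own MONAILabel server has generated:

```json
{
  "objects": {
    "volume_001.nii.gz": {
      "image": { "info": {} },
      "labels": {
        "final": { "info": {} }
      }
    }
  }
}
```

In practice you may not need to hand-edit this file at all: with the default local datastore, placing your label volumes in the studies folder (labels typically go in a `labels/final/` subfolder with filenames matching the images) and restarting the server should let MONAILabel pick them up and rebuild the datastore index itself — but verify this against the docs for your version.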