Thanks Andras. So far I’ve been using vmtkcenterlines via PypePad, so I’m excited to try this.
Hey, great question, and I agree that would be ideal.
I’m building a fully automated, clinician-oriented tool: the clinician inputs raw CT images and we output useful metrics for the medical conditions we’re interested in. The segmentation is fully automated, and it would be ideal to automate this portion of the work too. My vmtk pipeline works great with just this one stage of manual intervention, so we’ll see how well the fully automated version compares.
Hi Andras, I’m pleased to have my workflow now working as a .py script. At the centreline extraction stage I’m using “vmtkcenterlines”, but I don’t see any kind of “Auto-detect” prompt or button appearing. Can you please confirm whether I’m using the wrong command or otherwise missing something? Thanks
It is in the Extract Centerline module:
Thanks so much for such a quick reply! I think perhaps I misunderstood the capabilities: I was hoping to run a script from the command line, completely avoiding the need to use 3DSlicer or other programs. Is this not possible?
I have my script working (without needing to enter 3D Slicer), but there are a few points which require manual intervention that I’m currently trying to remove. Is it possible to use this kind of auto-detect functionality via pypes, or must that be done in 3D Slicer? Thanks
You can have a look at the source code of this module; it is a pure Python script. You can either copy-paste just the parts that you need into your current script or use the code as is.
If you want to use the entire module as is, then you need to use Slicer’s virtual Python environment, but you need to use some Python environment anyway, so why not use Slicer’s? You can pip install any Python packages into it the same way as into any other environment. The Python environment is available without showing any GUI, so you can run your Python script with
Slicer --no-main-window --python-script path/to/myscript.py
the same way you would run
python path/to/myscript.py
in another Python environment.
There are Slicer docker images that you can use (you may need to add a step to install SlicerVMTK, but that’s probably all), so it all becomes a single command on the command line. You can also run Slicer as a web service, so that you can use any of its features remotely, from a web or mobile application via a REST API. If you are developing a desktop application, then you could probably save an enormous amount of software development, maintenance, and support time by building your application on Slicer (replacing the top-level GUI with your custom UI layer - this is how most companies develop their Slicer-based medical imaging products).
Also note that automated processing workflows can never guarantee a 100% success rate. In medical applications, the typical target success rate for automated processing is 95%, because there is always variability between patients, in imaging, etc. In some cases you may be able to achieve 99%, but you cannot afford to let errors slip through for even a few percent of your patients. Therefore, you always need a GUI (for approval/quality assurance) and a way to manually correct the cases where automation does not provide perfect results.
Thanks again Andras, this is extremely useful and valuable advice.
My background is not in coding, so it’ll take me a little time to digest all of this information and put it into practice. I have a very simple script in PypePad which works great for my vmtk operations; I’m hoping I can get this working without too much trouble. Either way, I appreciate you being on hand to help!
Thanks also for the warning about automated workflows. I’m definitely taking that advice on board, and we already plan to include a GUI with the option for manual intervention if necessary.
If you have not invested a lot of time into developing your application yet, then it is the perfect time to decide about switching to a proper medical imaging application platform, which can drive your entire clinical workflow: getting the input images (via DICOM files or networking), specifying inputs, running processing, showing results, allowing users to make manual adjustments, and generating structured reports and saving/sending them via DICOM or other formats. Slicer provides 99% of all these features, so you can focus on developing your custom processing workflow and underlying custom algorithms (if needed), and on designing a convenient custom GUI.
If you write a bit more about what your overall goal is, then we can make recommendations about how to achieve it. Of course we all like Slicer, so we will be somewhat biased towards recommending it in general, but we are also friends with developers of other web and desktop medical imaging platforms (OHIF, MITK, MeVis, etc.), so if you tell us your specific requirements we may be able to guide you in the right direction.
Thank you Andras!
I have already developed the segmentation algorithm workflow, so now it is just the centreline and geometry analysis I need to tackle. The project I am working on is just a pilot study, so it doesn’t need to be perfect or flashy, but of course it would be good to build a solid foundation using the best tools available. I appreciate your comment about using Slicer or similar as the platform; I’d certainly like to explore its capabilities if it’s quick to try. I could likely wrap up what I’ve got and incorporate it, though again I am not a programming whizz!
The workflow is as follows:
- Import images
- Use the segmentation script I’ve already fine-tuned and want to continue using. It exports a segmented .nii.gz.
- Then the bit I’ve been working on recently: using vmtk to fully automatically extract a .dat file of the vessel diameter. More details on that further down.
- Finally, create a report with visuals stating the maximum diameter of the segmented anatomy. Again, I’ve already got a basic version of this, but it would be great to use some of what Slicer may have available.
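As one possible sketch of the report step: once vmtkcenterlinegeometry has written the .dat file, the maximum diameter is just twice the largest inscribed-sphere radius along the centerline. The snippet below is illustrative pure Python; it assumes the .dat is a whitespace-delimited table whose header row includes a MaximumInscribedSphereRadius column, and the function name is made up:

```python
import io

def max_diameter_from_dat(stream, radius_column="MaximumInscribedSphereRadius"):
    # Assumes a whitespace-delimited table with a header row naming the columns.
    header = stream.readline().split()
    col = header.index(radius_column)
    radii = [float(line.split()[col]) for line in stream if line.strip()]
    return 2.0 * max(radii)  # diameter = 2 * max inscribed-sphere radius

# Tiny made-up example table standing in for the real .dat file
sample = io.StringIO(
    "X Y Z MaximumInscribedSphereRadius\n"
    "0 0 0 1.2\n"
    "1 0 0 1.5\n"
    "2 0 0 0.9\n"
)
print(max_diameter_from_dat(sample))  # 3.0
```

In practice you would pass `open('Sx_3clgeodat.dat')` instead of the in-memory sample, after checking which columns your vmtk version actually writes.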
So I’d potentially be interested in using the Slicer platform, but for now I simply want to get the vmtk part working automatically so that I have at least a basic version of the whole automated pipeline. There are just two manual intervention steps I’d like to solve in the first instance: one of them (thresholding) should be solvable since I can select the same settings each time, whereas the centreline seed points were the more difficult step and the reason for my initial post.
My vmtk script is quite simple and could be further streamlined but here are the steps I’m carrying out:
#Threshold & create surface mesh:
vmtklevelsetsegmentation -ifile "data_file" --pipe vmtkmarchingcubes -i @.o -ofile Sx_1thresh.vtp
#Compute centrelines combined with viewer:
vmtksurfacereader -ifile Sx_1thresh.vtp --pipe vmtkcenterlines --pipe vmtkrenderer --pipe vmtksurfaceviewer -opacity 0.25 --pipe vmtksurfaceviewer -i @vmtkcenterlines.o -array MaximumInscribedSphereRadius -ofile Sx_2centrelines.vtp
#Compute centreline geometry:
vmtkcenterlinegeometry -ifile Sx_2centrelines.vtp -smoothing 1 -ofile Sx_3clgeo.vtp
vmtkcenterlinegeometry -ifile Sx_2centrelines.vtp -smoothing 1 -ofile Sx_3clgeodat.dat
This week I want to try your suggestions from earlier in the thread, but again I must state that I’m aiming to at least offer the option of a fully automated script with no user intervention necessary.
Thanks in advance, I very much appreciate your help!!
You can find the automatic endpoint detection implementation in the onAutoDetectEndPoints method. It has some code to get/put VTK data objects from/to MRML nodes, but you can ignore those parts and just work with the VTK data objects.
I’m starting to pick up this piece of work again and have been running some tests within Slicer’s Python environment. I can load volumes and models, carry out thresholds, etc., all with the built-in Slicer functions, but I am now stuck on how to access and use the vmtk functions mentioned in this thread from within the Slicer environment.
For example I can load my model with:
But I can’t for the life of me figure out the equivalent command to use, for example, the Extract Centerline and auto-detect endpoints functions discussed earlier in this thread. It is certainly my lack of coding experience letting me down. Any help you can provide would be very much appreciated. Thanks in advance
You can use the automatic test of the Extract Centerline module as an example of how to use the module from a Python script:
Thanks as always for such a prompt response, Andras.
I am still a little lost though, my background is not in coding!
When I enter
logic = ExtractCenterlineLogic()
I get:
NameError: name 'ExtractCenterlineLogic' is not defined
I have used the module finder and see it states:
Internal name: ExtractCenterline
Type: Python Scripted Loadable
But I still can’t for the life of me figure out how to access this within the Slicer Python environment. I have tried many combinations of “slicer.” prefixes, such as “slicer.ScriptedLoadableModule.”, to track it down, but still have no luck. Could you possibly provide some idiot-proof guidance?
In the test that I linked above, ExtractCenterlineLogic was in the same file, so there was no need to import it. However, if you want to use this class from another Python file, then you need to import it, for example by adding this line:
from ExtractCenterline import ExtractCenterlineLogic
Thanks Andras. For now I’m trying to use it from the Slicer Python interactor, and that line worked. It now works as expected with:
logic = ExtractCenterlineLogic()
But, once I enter:
logic.run(inputVolume, outputVolume, threshold, True)
I now get:
AttributeError: 'ExtractCenterlineLogic' object has no attribute 'run'
Apologies, I feel very feeble and don’t like having to ask for so much help, but I’m otherwise completely lost and unable to make progress! My end goal is simply to feed in an already level-set-thresholded .vtp model and use the auto-detect endpoints function to compute the centreline.
Hi Andras, do you have any ideas for how to solve this current issue? I am hoping that what I want to do is quite simple but I’m struggling with the programmatic implementation.
As before, my end goal is to feed in an already level-set-thresholded .vtp model, use the auto-detect endpoints function to find the endpoints, and then compute the centreline.
I am able to feed in the .vtp model, but I’m struggling with the programming of the auto-detect endpoints function.
Any assistance you’re able to provide would be very much appreciated.
It looks like the tests have not been updated since an earlier version of the module; the error message is correct that there is no run() method in ExtractCenterlineLogic. There is an extractCenterline() method which takes as inputs surfacePolyData, endPointsMarkupsNode, and a curveSamplingDistance. Here is what I think you need:
segmentationName = 'MySegmentationName' # replace with the name of your segmentation
segmentName = 'MySegmentName' # replace with the name of the segment you want to find the centerline of
segmentationNode = slicer.util.getNode(segmentationName)
segmentID = segmentationNode.GetSegmentation().GetSegmentIdBySegmentName(segmentName)
import ExtractCenterline
extractLogic = ExtractCenterline.ExtractCenterlineLogic()
# Preprocess the surface
inputSurfacePolyData = extractLogic.polyDataFromNode(segmentationNode, segmentID)
targetNumberOfPoints = 5000.0
decimationAggressiveness = 4 # I had to lower this to 3.5 in at least one case to get it to work, 4 is the default in the module
subdivideInputSurface = False
preprocessedPolyData = extractLogic.preprocess(inputSurfacePolyData, targetNumberOfPoints, decimationAggressiveness, subdivideInputSurface)
# Auto-detect the endpoints
endPointsMarkupsNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLMarkupsFiducialNode", "Centerline endpoints")
networkPolyData = extractLogic.extractNetwork(preprocessedPolyData, endPointsMarkupsNode)
startPointPosition = None # let the logic pick a starting point
endpointPositions = extractLogic.getEndPoints(networkPolyData, startPointPosition)
endPointsMarkupsNode.RemoveAllControlPoints()
for position in endpointPositions:
    endPointsMarkupsNode.AddControlPoint(vtk.vtkVector3d(position))
# Extract the centerline
centerlineCurveNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLMarkupsCurveNode", "Centerline curve")
centerlinePolyData, voronoiDiagramPolyData = extractLogic.extractCenterline(preprocessedPolyData, endPointsMarkupsNode)
centerlinePropertiesTableNode = None
extractLogic.createCurveTreeFromCenterline(centerlinePolyData, centerlineCurveNode, centerlinePropertiesTableNode)
After you run this, the centerline should be in centerlineCurveNode, and the auto-detected endpoints should be in endPointsMarkupsNode. I don’t know if the preprocessing step is strictly necessary (it seems like you may know more about the inner workings of VMTK than I do), but it seems recommended. To get this code snippet, I pulled the relevant lines out of a few functions in the ExtractCenterlineLogic class (SlicerExtension-VMTK/ExtractCenterline.py at 3787ea4a300da28ec5f0824f0715f2713b631155 · vmtk/SlicerExtension-VMTK · GitHub). I have not tried running the auto-detect endpoints this way, but I have done centerline extraction with supplied endpoints using these methods in one of my own Python modules.
Mike, thank you so much!
I’m having a few hiccups getting this to work completely, but it is a very useful starting point and I’m much further along than I was before, so thank you. I’m working to iron it all out now and hope to have my first automated centerlines produced soon.
Another approach I have used is with a surface mesh that is open at each place where I would like a centerline seed. vmtk can detect the centroid of each open region and use it as the seed location.
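Conceptually, the seed chosen for each open profile is just the centroid of that profile’s boundary points; in vmtk this corresponds to running vmtkcenterlines with -seedselector openprofiles. A minimal pure-Python sketch of the centroid idea, using made-up boundary loops:

```python
def profile_centroid(points):
    # Centroid of a list of (x, y, z) boundary points.
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

# Two illustrative open-profile boundary loops (made-up coordinates)
loops = [
    [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (2.0, 2.0, 0.0), (0.0, 2.0, 0.0)],
    [(5.0, 5.0, 5.0), (7.0, 5.0, 5.0), (6.0, 7.0, 5.0)],
]
seeds = [profile_centroid(loop) for loop in loops]
print(seeds[0])  # (1.0, 1.0, 0.0)
```

In a real pipeline, vmtk extracts the boundary loops from the open mesh itself; this sketch only illustrates the centroid computation behind the seed placement.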
This discussion was quite helpful for me.
But I get stuck at the part where the Slicer environment is used.
I already have a quite sophisticated virtual environment for a project, and I also need the automatic centerline extraction.
Is there a way to get “slicer” into an already existing venv, or to just use parts of the extension?
Or is it possible to pass the pype the point coordinates manually?