@fedorov I have taken a look at the different parts of your script and the manual, and have only now understood the way mpReview works. We are actually working in the field of maxillo-cranial surgery, with head MRIs from which only the skull has to be extracted, and we didn't really bother with, nor know about, multi-parametric reviewing. So I was at first surprised to see that much variation in the number of NIfTI files in the Reconstructions folders. But after verification we do indeed have variations in the X-ray exposure for some MRIs.
Does it make a difference in the process of extracting the bones to have MRIs with different exposure? Or would it make segmentation or metallic artifact reduction easier?
For the testing on our files with mpReviewpreprocessor2: when running dicomsort, since all the DICOM files are anonymized, the resulting file paths tend to be fairly unreliable, because the sorting relies on metadata that is empty or encrypted. But mpReviewpreprocessor2 does convert the files into the chosen output directory, in the Reconstructions folder, and produces readable NIfTI files.
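To illustrate what I mean, here is a minimal pydicom sketch of my own (not part of mpReview or dicomsort; the file path is a placeholder) that prints the kinds of tags sorting tools typically build their folder names from:

```python
# Minimal check of the metadata that dicomsort-style tools build paths from.
# Requires pydicom; the file path is a placeholder.
import pydicom

ds = pydicom.dcmread("/path/to/anonymized/slice.dcm", stop_before_pixels=True)
for keyword in ("PatientName", "PatientID", "StudyDescription",
                "SeriesDescription", "StudyDate"):
    print(keyword, "->", repr(ds.get(keyword, "")))
```

On our anonymized exports most of these come back empty or scrambled, which is why the sorted folder names end up unreliable.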
Here are the paths organized by dicomsort and the Reconstructions folder created by mpReviewpreprocessor2:
While walking through the files I saw that some of the head MRIs were indeed multi-parametric, with a separate NIfTI file for each parameter.
The way your script works is interesting because it allows us to process DICOMs in bulk, and so it can be a first step towards automating the extraction of the skulls and metallic artifacts, letting us eliminate human input in our data and keep consistency.
But I also tried to use dcm2niix to build something more problem-specific, as it is possible to batch-convert whole DICOM directories with the dcm2niibatch batch_config.yaml command and to choose the file names in the YAML file.
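For reference, the batch_config.yaml layout that dcm2niibatch expects looks, as far as I can tell from the dcm2niix documentation, roughly like this (paths and file names are placeholders):

```yaml
Options:
  isGz: false
  isFlipY: false
  isVerbose: false
  isCreateBIDS: false
  isOnlySingleFile: false
Files:
  - in_dir: /path/to/first/dicom/folder
    out_dir: /path/to/first/output/folder
    filename: first_series_dcm2niix
  - in_dir: /path/to/second/dicom/folder
    out_dir: /path/to/second/output/folder
    filename: second_series_dcm2niix
```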
I tried to write a function that creates the batch_config.yaml file. It takes as inputs the directory containing the DICOM files and the empty directory where the NIfTI files should be created, and it names each NIfTI file after the directory containing its DICOM images. The function also creates the new directories that will hold the NIfTI files and assembles the batch_config.yaml file.
After some tests, the function returns a batch_config.yaml file and creates the empty directories for the NIfTI files, but the YAML file itself still has some issues: the Options and Files dictionaries are inverted, and the indentation is wrong.
Empty directories, and non-conforming YAML file:
Here is the code involved in the creation of the YAML file:
#!/usr/bin/python3

import argparse, sys, shutil, os, logging
import ruamel.yaml as ry
from ast import literal_eval
from pathlib import Path

def batch_creator(inputDir, outputDir):
    # options have to be (for now) manually changed
    optionStr = "{'Options': {'isGz': False, 'isFlipY': False, 'isVerbose': False, 'isCreateBIDS': False, 'isOnlySingleFile': False},"
    # dirIn contains all the DICOM files, dirOut is empty, the yaml file is created in the dirOut directory (for testing purposes)
    dirIn = inputDir
    dirOut = outputDir
    yamlDir = outputDir  # for testing purposes the yaml file is also in the output directory
    listDicomDir = os.listdir(dirIn)
    n = len(listDicomDir)
    # FileArray holds the per-series dictionaries that have to be further converted into yaml
    FileArray = []
    # while iterating through the whole directory containing the DICOM files, we create for each DICOM series a corresponding folder in dirOut
    for i in range(0, n):
        tempName = listDicomDir[i]
        outPathName = dirOut + "/" + tempName + "_dcm2niix"
        os.mkdir(Path(outPathName))
        FileArray.extend([{'in_dir': dirIn + "/" + tempName, 'out_dir': outPathName, 'filename': tempName + "_dcm2niix"}])
    AlmostYaml = optionStr + " 'Files': " + str(FileArray) + "}"  # here we join the Options and Files parts of the yaml file
    dict_batchFile = literal_eval(AlmostYaml)  # this converts the string into a python dictionary
    save_path = Path(yamlDir)  # then we create the empty yaml file in the Out directory
    batch_config = open(os.path.join(save_path, "batch_config.yaml"), "w+")
    ry.dump(dict_batchFile, batch_config, default_flow_style=False)  # and dump our finished yaml
    batch_config.close()
I will try to find the mistakes that I made in the function and in the YAML syntax, and update it here later. The conversion should then be relatively easy.
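In the meantime, here is a minimal sketch of the direction I want to take: building the configuration directly as a Python dictionary (instead of going through a string and literal_eval) and dumping it with ruamel.yaml's round-trip API, which as far as I understand keeps the insertion order of the keys, so Options stays before Files and the indentation is handled for me. The paths and the exact API usage are assumptions on my part and still have to be tested against dcm2niibatch:

```python
#!/usr/bin/python3
# Sketch only: build the batch configuration as a plain Python dictionary and
# let ruamel.yaml write it in block style, keeping 'Options' before 'Files'.
import os
import ruamel.yaml


def batch_creator(inputDir, outputDir):
    config = {
        'Options': {
            'isGz': False,
            'isFlipY': False,
            'isVerbose': False,
            'isCreateBIDS': False,
            'isOnlySingleFile': False,
        },
        'Files': [],
    }
    # one output folder and one Files entry per DICOM sub-directory
    for name in sorted(os.listdir(inputDir)):
        outPath = os.path.join(outputDir, name + "_dcm2niix")
        os.makedirs(outPath, exist_ok=True)
        config['Files'].append({
            'in_dir': os.path.join(inputDir, name),
            'out_dir': outPath,
            'filename': name + "_dcm2niix",
        })
    # the round-trip dumper keeps the insertion order of the dict,
    # so 'Options' is written before 'Files'
    yaml = ruamel.yaml.YAML()
    yaml.default_flow_style = False
    with open(os.path.join(outputDir, "batch_config.yaml"), "w") as f:
        yaml.dump(config, f)
```

This would remove the string/literal_eval step altogether, so there is no indentation left to get wrong by hand.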
In the end, the database should be (it's utopian, I know) consistent enough to train the convolutional neural network based metal artifact reduction algorithm presented here, or to run statistical analyses on the maxillofacial framework.