Scripting parallel iterative deconvolution with many time points on a large cluster using ImageJ


I have an interesting ImageJ scripting problem that I wanted to share. An imaging scientist gave me a data set with 258 time points, each a 13-slice Z-stack, for 3,354 tif images in total. He has a macro that he made with ImageJ's macro-recording functionality, and it works on his Windows machine but takes forever. I have access to a very large academic computing cluster where I could conceivably request as many nodes as there are time points. The input files are the 3,354 tif images, named like 'img_000000000_ZeissEpiGreen_000.tif' (the nine-digit number counts the time points 1-258 and the three-digit number gives the Z-stack order 1-13), plus a point-spread-function image made with sub-resolution beads. Here is the macro, "iterative_parallel_deconvolution.ijm"; I changed the paths to match the cluster.
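For reference, the naming scheme can be reproduced with printf-style zero padding (a minimal sketch; the channel tag 'ZeissEpiGreen' is taken from the example filename above, and the indices here are arbitrary):

```shell
# Rebuild a filename from its time point t and z-slice z (both 0-based),
# matching the pattern img_<9-digit t>_ZeissEpiGreen_<3-digit z>.tif
t=41
z=7
fname=$(printf 'img_%09d_ZeissEpiGreen_%03d.tif' "$t" "$z")
echo "$fname"   # img_000000041_ZeissEpiGreen_007.tif
```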

//******* SET THESE VARIABLES FIRST!  ********
path = "/tmp/images/";
seqFilename = "img_000000000_ZeissEpiGreen_000.tif";
PSFpath = "/tmp/runfiles/20xLWDZeissEpiPSFsinglebeadnoDICprismCROPPED64x64.tif";
numTimepoints = 258;
numZslices = 13;
xyScaling = 0.289; //microns/pixel
zScaling = 10; //microns/z-slice
timeInterval = 300; //seconds
//********************************************

getDateAndTime(year1, month1, dayOfWeek1, dayOfMonth1, hour1, minute1, second1, msec); //to print start and end times
print("Started " + month1 + "/" + dayOfMonth1 + "/" + year1 + " " + hour1 + ":" + minute1 + ":" + second1);

//number of images in sequence
fileList = getFileList(path);
numImages = fileList.length;

//filename and path for saving each timepoint z-stack
pathMinusLastSlash = substring(path, 0, lengthOf(path) - 1); //"/tmp/images"
baseFilenameIndex = lastIndexOf(pathMinusLastSlash, "/"); //separator is "/" on the cluster, not "\\" as on Windows
baseFilename = substring(pathMinusLastSlash, baseFilenameIndex + 1, lengthOf(pathMinusLastSlash)); //"images"
saveDir = substring(path, 0, baseFilenameIndex + 1); //"/tmp/"

//loop to save each timepoint z-stack and deconvolve it
for(t = 0; t < numTimepoints; t++){
        time = IJ.pad(t, 9);
        run("Image Sequence...", "open=[" + path + seqFilename + "] number=" + numImages + " starting=1 increment=1 scale=100 file=[" + time + "] sort");
        run("Properties...", "channels=1 slices=" + numZslices + " frames=1 unit=um pixel_width=" + xyScaling + " pixel_height=" + xyScaling + " voxel_depth=" + zScaling + " frame=[0 sec] origin=0,0");
        filename = baseFilename + "-t" + time + ".tif";
        saveAs("tiff", saveDir + filename);
        close();

        // WPL deconvolution -----------------
        pathToBlurredImage = saveDir + filename;
        pathToPsf = PSFpath;
        pathToDeblurredImage = saveDir + "decon-WPL_" + filename;
        boundary = "REFLEXIVE"; //available options: REFLEXIVE, PERIODIC, ZERO
        resizing = "AUTO"; // available options: AUTO, MINIMAL, NEXT_POWER_OF_TWO
        output = "SAME_AS_SOURCE"; // available options: SAME_AS_SOURCE, BYTE, SHORT, FLOAT
        precision = "SINGLE"; //available options: SINGLE, DOUBLE
        threshold = "-1"; //if -1, then disabled
        maxIters = "5";
        nOfThreads = "32";
        showIter = "false";
        gamma = "0";
        filterXY = "1.0";
        filterZ = "1.0";
        normalize = "false";
        logMean = "false";
        antiRing = "true";
        changeThreshPercent = "0.01";
        db = "false";
        detectDivergence = "true";
        call("edu.emory.mathcs.restoretools.iterative.ParallelIterativeDeconvolution3D.deconvolveWPL", pathToBlurredImage, pathToPsf, pathToDeblurredImage, boundary, resizing, output, precision, threshold, maxIters, nOfThreads, showIter, gamma, filterXY, filterZ, normalize, logMean, antiRing, changeThreshPercent, db, detectDivergence);
}

//save deconvolved timepoints in one TIFF
run("Image Sequence...", "open=["+ saveDir + "decon-WPL_" + baseFilename + "-t000000000.tif] number=999 starting=1 increment=1 scale=100 file=decon-WPL_" + baseFilename + "-t sort");
run("Stack to Hyperstack...", "order=xyczt(default) channels=1 slices=" + numZslices + " frames=" + numTimepoints + " display=Grayscale");
run("Properties...", "channels=1 slices=" + numZslices + " frames=" + numTimepoints + " unit=um pixel_width=" + xyScaling + " pixel_height=" + xyScaling + " voxel_depth=" + zScaling + " frame=[" + timeInterval + " sec] origin=0,0");
saveAs("tiff", saveDir + "decon-WPL_" + baseFilename + ".tif");
close();

getDateAndTime(year2, month2, dayOfWeek2, dayOfMonth2, hour2, minute2, second2, msec);
print("Ended " + month2 + "/" + dayOfMonth2 + "/" + year2 + " " + hour2 + ":" + minute2 + ":" + second2);

The website for the ImageJ plugin Parallel Iterative Deconvolution is here: https://sites.google.com/site/piotrwendykier/software/deconvolution/paralleliterativedeconvolution

Here is the PBS script I used to submit the job to the cluster, with this command: 'qsub -l walltime=24:00:00,nodes=1:ppn=32 -q largemem ./PID3.pbs'. I could have requested up to 40 ppn, but the program states that the number of threads must be a power of 2.
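The power-of-two constraint is easy to check before submitting: n (> 0) is a power of two exactly when n & (n - 1) == 0. A quick sketch:

```shell
# A thread count n (> 0) is a power of two iff n & (n - 1) == 0
is_pow2() { [ "$1" -gt 0 ] && [ $(( $1 & ($1 - 1) )) -eq 0 ]; }

for n in 32 40 64; do
  if is_pow2 "$n"; then echo "$n: ok"; else echo "$n: not a power of 2"; fi
done
```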

#PBS -S /bin/bash
#PBS -V
#PBS -N PID_Test
#PBS -k n
#PBS -r n
#PBS -m abe

Xvfb :566 &
export DISPLAY=:566.0 &&

cd /tmp &&

mkdir -p /tmp/runfiles /tmp/images &&

cp /home/rcf-proj/met1/pid1/runfiles/* /tmp/runfiles/ &&
cp /home/rcf-proj/met1/pid1/images/*.tif /tmp/images/ &&

java -Xms512G -Xmx512G -Dplugins.dir=/home/rcf-proj/met1/software/fiji/Fiji.app/plugins/ -jar /home/rcf-proj/met1/software/imagej/ij.jar -batch /tmp/runfiles/iterative_parallel_deconvolution.ijm &&

tar czf /tmp/PIDTest.tar.gz /tmp/images &&

cp /tmp/PIDTest.tar.gz /home/rcf-proj/met1/output/ &&

rm -rf /tmp/images &&
rm -rf /tmp/runfiles &&

exit

We have to use Xvfb to keep ImageJ from trying to open windows on a real display; the display number is arbitrary. The program ran for six hours but produced no output images. Is it because I needed to have an open image?

I would like to redesign this macro so that I can split off each time point and send it to its own node for processing. If you have any ideas on how you would go about this, we would be really grateful for your feedback. The only caveat is that we have to use the Parallel Iterative Deconvolution plugin with ImageJ.
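The split I have in mind could look something like this (a sketch, assuming Torque-style array jobs; the script name and resource line are placeholders): submit an array job with one task per time point, e.g. 'qsub -t 1-258 -l walltime=2:00:00,nodes=1:ppn=32 ./decon_one_timepoint.pbs', and have each task derive its time index from PBS_ARRAYID:

```shell
# Each array task handles one time point; PBS_ARRAYID is set by Torque/PBS
# at run time (defaulted here so the sketch runs stand-alone).
PBS_ARRAYID=${PBS_ARRAYID:-42}
t=$(( PBS_ARRAYID - 1 ))          # the macro counts time points from 0
tindex=$(printf '%09d' "$t")      # nine-digit index used in the filenames
echo "deconvolving time point $tindex"
```

The macro would then need to take the time index as an argument instead of looping over all 258 time points itself.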

Thanks!

1 Answer

Answered by he1ix

Regarding the use of Xvfb: if you were using Fiji's ImageJ launcher (most likely ImageJ-linux64 in your case), you could use the --headless option, which takes care of all the GUI calls embedded in ImageJ and has been tested by many people running ImageJ in cluster environments.
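For example, the launch could then look like this (paths assumed from your PBS script; adjust as needed), replacing the Xvfb workaround and the bare ij.jar invocation:

```shell
# Headless launch with Fiji's launcher (paths assumed from the question)
/home/rcf-proj/met1/software/fiji/Fiji.app/ImageJ-linux64 --headless \
    -macro /tmp/runfiles/iterative_parallel_deconvolution.ijm
```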

This way you'd also benefit from seeing all the output produced by e.g. IJ.log() calls in a macro; I'm not sure that is the case the way you're currently calling ImageJ.

You might also consider putting a setBatchMode(true) at the start of your macro, though I'm not quite sure whether this makes any difference when running in --headless mode. See e.g. the example BatchModeTest.txt for details.

As you're intending to run this on a cluster, it is probably also worth checking out the Fiji Archipelago page on the wiki, which gives a lot of details and hints on how to achieve this.

Cheers ~Niko