.nii.gz file to properly oriented grayscale image


I have been doing some deep learning, usually with ordinary image datasets such as JPEG or PNG. Recently, I came across a dataset where the test images are provided as PNG/JPEG, but the ground truth comes in a .nii.gz file. If I open the .nii.gz file, it contains a single .nii file. Being a beginner with this type of file, I was still able to plot the test image and the ground truth in Colab, but there are some issues with the result.

The code I used:

# Reading the image and the ground truth
import cv2
import matplotlib.pyplot as plt
import nibabel as nib

imagePath = testImageDir + '/141549_83.png'
testImage = cv2.imread(imagePath)
GTPath = testGTDir + '/141549_83.nii.gz'
niiObject = nib.load(GTPath)
print(niiObject)
testGT_extradim = niiObject.get_data()  # array of shape (n, m, channel)
# By default imshow only accepts (n, m), (n, m, 3) or (n, m, 4) arrays and
# this one is (n, m, 1), so resize it to drop the extra dimension
testGT_rot = cv2.resize(testGT_extradim, (512, 512))

# Displaying figures
plt.figure(figsize=(12, 8), dpi=100)
plt.subplot(121)
plt.imshow(testImage)
plt.subplot(122)
plt.imshow(testGT_rot)

Output of print(niiObject):

<class 'nibabel.nifti1.Nifti1Image'>
data shape (512, 512, 1)
affine: 
[[ 1.  0. -0. -0.]
 [ 0.  1. -0. -0.]
 [ 0.  0.  1.  0.]
 [ 0.  0.  0.  1.]]
metadata:
<class 'nibabel.nifti1.Nifti1Header'> object, endian='<'
sizeof_hdr      : 348
data_type       : b''
db_name         : b''
extents         : 0
session_error   : 0
regular         : b'r'
dim_info        : 0
dim             : [  3 512 512   1   1   1   1   1]
intent_p1       : 0.0
intent_p2       : 0.0
intent_p3       : 0.0
intent_code     : none
datatype        : uint16
bitpix          : 16
slice_start     : 0
pixdim          : [1. 1. 1. 1. 0. 0. 0. 0.]
vox_offset      : 0.0
scl_slope       : nan
scl_inter       : nan
slice_end       : 0
slice_code      : unknown
xyzt_units      : 2
cal_max         : 0.0
cal_min         : 0.0
slice_duration  : 0.0
toffset         : 0.0
glmax           : 0
glmin           : 0
descrip         : b''
aux_file        : b''
qform_code      : aligned
sform_code      : scanner
quatern_b       : 0.0
quatern_c       : 0.0
quatern_d       : 0.0
qoffset_x       : -0.0
qoffset_y       : -0.0
qoffset_z       : 0.0
srow_x          : [ 1.  0. -0. -0.]
srow_y          : [ 0.  1. -0. -0.]
srow_z          : [0. 0. 1. 0.]
intent_name     : b''
magic           : b'n+1'
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:7: DeprecationWarning: get_data() is deprecated in favor of get_fdata(), which has a more predictable return type. To obtain get_data() behavior going forward, use numpy.asanyarray(img.dataobj).

* deprecated from version: 3.0
* Will raise <class 'nibabel.deprecator.ExpiredDeprecationError'> as of version: 5.0
  import sys
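(For what it's worth, switching to get_fdata() as the warning suggests, and using np.squeeze() to drop the trailing singleton dimension instead of cv2.resize, should look something like this, if I understand correctly; GTPath is the path from the snippet above:)

    import numpy as np
    import nibabel as nib

    # get_fdata() replaces the deprecated get_data() call and always returns
    # a floating-point array
    niiObject = nib.load(GTPath)
    testGT_extradim = niiObject.get_fdata()   # shape (512, 512, 1)

    # Drop the trailing singleton dimension without touching the pixel values
    testGT_2d = np.squeeze(testGT_extradim)   # shape (512, 512)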

I have gone through this answer, but the object there has an img attribute which my object above seems to be missing (the objects are similar but not identical).

Image output:

Colab Output

Now, before this I had tried opening the .nii file in the ImageJ application, and it looked something like this:

ImageJ Output

You can see that the ImageJ image differs from the Colab ground truth image in both orientation and color. The ImageJ one correctly marks the nuclei positions in the tissue image shown on the left of the Colab output (as below).

Correct test image vs ground truth:

Correct GT

Correct ground truth vs Incorrect ground truth:

Incorrect GT

So, the question is divided into two sub-questions (as indicated by the title):

i) How do I read the .nii image so that it is in the same orientation as in ImageJ?

EDIT: I seem to have corrected the orientation problem. The image appears to have been mirrored along the line y = x, so I reversed the mirroring with this code:

   # Rotating 90 degrees clockwise and then flipping horizontally is the same
   # as a transpose, which undoes the mirroring along y = x
   testGT = cv2.rotate(testGT_rot, cv2.ROTATE_90_CLOCKWISE)
   testGT = cv2.flip(testGT, 1)

However, I am still open to better methods and to a deeper understanding of why this happened with nibabel, and whether it could have been solved using some nibabel object parameter that I am not aware of.
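For what it's worth, my current guess is that the mirroring comes from NIfTI arrays being indexed (x, y, z) while imshow treats the first axis as rows, so the in-plane axes end up swapped. If that is right, the fix could also be written with nibabel's own orientation helpers plus a transpose (a rough sketch, with GTPath as above):

    import numpy as np
    import nibabel as nib

    niiObject = nib.load(GTPath)

    # aff2axcodes reports the axis labels implied by the affine, e.g. ('R', 'A', 'S');
    # as_closest_canonical reorders the voxel axes to that RAS+ convention if needed
    # (with the identity affine shown above it is a no-op)
    print(nib.aff2axcodes(niiObject.affine))
    canonical = nib.as_closest_canonical(niiObject)

    # The remaining difference is only row/column vs x/y indexing, which a
    # transpose (i.e. mirroring along y = x) undoes
    testGT = np.squeeze(canonical.get_fdata()).T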

ii) Neither the ImageJ image nor the nibabel-loaded one is a proper grayscale image: some prominent nuclei are barely visible because their intensities are very low (the maximum intensity in the ImageJ image was 40, for the brightest nuclei). How do I convert it to grayscale on a 0-255 scale so that the low-intensity nuclei are scaled up and become more visible?
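Something like min-max stretching is what I have in mind, assuming the mask just holds small intensity values (maximum around 40) that need to be spread over the full 0-255 range; testGT here is the reoriented array from the edit above:

    import numpy as np
    import cv2
    import matplotlib.pyplot as plt

    # Stretch whatever range the mask uses (e.g. 0-40) to 0-255 so that the
    # faint nuclei become visible; NORM_MINMAX maps min -> 0 and max -> 255
    testGT_gray = cv2.normalize(testGT, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Plain-NumPy equivalent:
    # testGT_gray = (255 * (testGT - testGT.min()) / max(np.ptp(testGT), 1)).astype(np.uint8)

    plt.imshow(testGT_gray, cmap='gray', vmin=0, vmax=255)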


There are 0 answers