How to volume render a transparent space with mlab.pipeline.volume() and mlab.pipeline.scalar_field()?


Can anybody help me create a video showing a volume-rendered scalar field that shrinks?

I am simulating a material dynamics problem. At every simulation step I get a vector, which is later reshaped into a 3D scalar-field array; its maximum value is 1.0 and its minimum is anything lower. In my problem, the coordinates that return the minimum value are ignored in the next simulation step. So I want to visualize this process as an animation: a volume-rendered space that gradually shrinks.

Below is a simplified version of my code. I pass a mlab.pipeline.scalar_field object generated from a data array to mlab.pipeline.volume(). I thought that if I set 0.0 at the ignored coordinates in the data array, the region corresponding to those coordinates would become transparent, and I could make the video I wanted. But as the for loop proceeds, the whole space just gets darker from a red-ish color and finally becomes black (link to the resulting video on my Google Drive).

So, I am wondering:

  1. How can I plot the ignored area as a transparent space? (An unverified idea I have for this is sketched after the code below.)
  2. Why is the whole space rendered red or black? This actually happens with my original code as well. I expected something more colorful, because the data array stores values between 0.0 and 1.0 (see the sketch of my current guess right after this list).
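
Regarding question 2, my current guess is that the color/opacity mapping is built from the initial all-ones array (whose data range is degenerate), so the later values in [0.0, 1.0] all end up at the dark end of the colormap when I only swap the scalars. Below is a tiny, unverified sketch of what I mean, assuming mlab.pipeline.volume() accepts explicit vmin/vmax to pin the mapping range (the full simplified code follows after it):

# Unverified idea: pin the mapping range at creation time so that later updates
# in [0.0, 1.0] are not mapped against the all-ones initial data.
volume = mlab.pipeline.volume(
    scalar_field,
    vmin=0.0,   # expected minimum of the simulated scalars
    vmax=1.0,   # data is normalized so the maximum is 1.0 every step
    figure=fig,
)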
from mayavi import mlab
from moviepy import editor
from tqdm import tqdm
import os
import numpy as np


mlab.options.offscreen = True


# Make a logging directory
log_root = os.getcwd()
log_dir = os.path.join(log_root, "test") 
os.makedirs(log_dir, exist_ok=True)
# Initial rendering of the mass distribution
fig = mlab.figure(bgcolor=(0, 0, 0), size=(960, 960))
resolution = 3
num_coord_ax = 2**resolution
init_scalars = np.ones((num_coord_ax, num_coord_ax, num_coord_ax))
scalar_field = mlab.pipeline.scalar_field(init_scalars)
volume = mlab.pipeline.volume(scalar_field, figure=fig)
mlab.colorbar(orientation="vertical")
# Boolean index (bidx) vector which manages the active voxels in the volume-rendered space
active_bidx = init_scalars.astype(int).reshape(-1)


# Pseudo simulation loop
#for i in tqdm(range(128), desc="Simulating..."):
for i in range(512):
    # Sample vector as a mimic of the problem of my interest
    new_data = np.random.rand(active_bidx.sum())
    new_data /= new_data.max()  # ~ (num_active_vxs, )
    # Truncate the minimum values
    bidx = new_data == new_data.min()
    new_data[bidx] = 0.0
    # Embed the truncated data back into a full-size scalar array
    new_src = np.zeros_like(init_scalars)  # ~ (num_coord_ax, num_coord_ax, num_coord_ax)
    new_src.reshape(-1)[active_bidx.astype(bool)] = new_data
    # Update the boolean vector 
    active_bidx[active_bidx.astype(bool)] &= ~bidx
    print(f"{active_bidx.sum()=}")  # check whether the evaluated area is shrinking;
                                    # the printed value actually decreases, which
                                    # implies the area is shrinking

    # Volume render the new data
    volume.mlab_source.scalars = new_src
    mlab.savefig(os.path.join(log_dir, f"{i:04}.png"), figure=fig)


# Set misc variables for composing a video from the .pngs
input_ext = ".png"
output_name = "test.mp4"
fps = 60
codec = "libx264"
# Get a list of image paths
imgs = sorted(os.listdir(log_dir))
images = [img for img in imgs if img.endswith(input_ext)]
image_paths = [os.path.join(log_dir, img) for img in images]
# Generate a video and save it locally
video_name = os.path.join(log_dir, output_name)
clip = editor.ImageSequenceClip(image_paths, fps=fps)
clip.write_videofile(video_name, codec=codec, fps=fps)
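
For question 1, one idea I have not verified yet is to give the volume an explicit opacity transfer function so that values at (or near) 0.0 are rendered fully transparent. This is only a sketch based on the tvtk.util.ctf helpers (PiecewiseFunction) and the private _otf / _volume_property attributes shown in the Mayavi docs; I am not sure it is the intended way to do this inside an update loop:

from tvtk.util.ctf import PiecewiseFunction

# Unverified sketch: map 0.0 (the ignored voxels) to zero opacity, let the opacity
# ramp up towards the maximum value, then hand the new function to the volume.
otf = PiecewiseFunction()
otf.add_point(0.0, 0.0)   # ignored voxels -> fully transparent
otf.add_point(0.1, 0.0)   # keep a small transparent band around zero
otf.add_point(1.0, 0.2)   # active voxels -> mildly opaque
volume._otf = otf
volume._volume_property.set_scalar_opacity(otf)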

Thanks in advance!
