I am using a simulated Kinect depth camera to receive depth images from the URDF in my Gazebo world. I have written a Python filter that keeps only a part of the depth image, as shown in the image, and now I want to visualize this filtered depth image as a point cloud in RViz.
Since I am new to ROS, it would be great if I could get some examples.
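For context, the kind of region filter I mean is roughly like this (a minimal numpy sketch, not my actual code — the rectangular ROI bounds and the use of NaN for masked-out pixels are just illustrative assumptions):

```python
import numpy as np

def filter_depth_roi(depth, y0, y1, x0, x1):
    """Keep only a rectangular region of a depth image.

    Pixels outside the ROI are set to NaN, which downstream
    depth-to-cloud conversion treats as invalid points.
    """
    out = np.full_like(depth, np.nan)
    out[y0:y1, x0:x1] = depth[y0:y1, x0:x1]
    return out

# Example: a fake 4x4 depth image, keeping only the central 2x2 block
depth = np.ones((4, 4), dtype=np.float32)
filtered = filter_depth_roi(depth, 1, 3, 1, 3)
print(np.count_nonzero(~np.isnan(filtered)))  # 4 valid pixels remain
```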
Have you ever tried http://wiki.ros.org/depth_image_proc? You can also find examples here: https://github.com/ros-perception/image_pipeline
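If you prefer to stay in Python rather than use the depth_image_proc C++ nodelets, the core of the conversion is just pinhole back-projection. A sketch under stated assumptions: the intrinsics fx, fy, cx, cy are placeholder values — in a real node you would read them from the camera_info topic; the ROS publishing step is described in the comments rather than shown:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an N x 3 array of XYZ
    points using the pinhole camera model. NaN depths are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.dstack((x, y, z)).reshape(-1, 3)
    return points[~np.isnan(points[:, 2])]  # keep only valid depths

# Example with a 2x2 depth image where one pixel was filtered out (NaN)
depth = np.array([[1.0, 1.0], [1.0, np.nan]], dtype=np.float32)
pts = depth_to_points(depth, fx=525.0, fy=525.0, cx=0.5, cy=0.5)
print(pts.shape)  # (3, 3): three valid points, one dropped
```

In the ROS node you would then pack `pts` into a sensor_msgs/PointCloud2 with `sensor_msgs.point_cloud2.create_cloud_xyz32(header, pts)`, set `header.frame_id` to the camera's optical frame, and publish it; RViz can display that topic directly with a PointCloud2 display.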