I visualized the KITTI odometry dataset with its ground-truth poses and Velodyne point cloud data. http://www.cvlibs.net/datasets/kitti/eval_odometry.php
The height of the camera position seems different after the loop in the sequence. I am not sure whether this is normal or whether I misunderstood something. When you use this data as ground truth, do you modify anything, or do you use it as is?
This is the code that loads the poses .txt file:
#include <cassert>
#include <cstdio>
#include <fstream>
#include <iostream>
#include <string>

#include <Eigen/Core>
#include <sophus/se3.hpp>

std::ifstream infile;
infile.open(poses_path, std::ios::in);
assert(infile.good());

std::string line;
while (std::getline(infile, line)) {
    // Each line of the poses file holds the upper 3x4 part of a 4x4
    // camera-to-world transform, stored row-major:
    // r00 r01 r02 tx r10 r11 r12 ty r20 r21 r22 tz
    double p00, p01, p02, p03;
    double p10, p11, p12, p13;
    double p20, p21, p22, p23;
    if (std::sscanf(line.c_str(),
                    "%lf %lf %lf %lf %lf %lf %lf %lf %lf %lf %lf %lf",
                    &p00, &p01, &p02, &p03,
                    &p10, &p11, &p12, &p13,
                    &p20, &p21, &p22, &p23) != 12) {
        std::cerr << "Failed to parse a ground-truth pose line!\n";
        infile.close();
        return;
    }

    Eigen::Matrix<double, 3, 3> R;
    R << p00, p01, p02,
         p10, p11, p12,
         p20, p21, p22;
    Eigen::Matrix<double, 3, 1> t;
    t << p03, p13, p23;

    Sophus::SE3d cam2World(R, t);
    vCam2World.push_back(cam2World);
}
infile.close();
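To check whether the height jump is already present in the poses themselves rather than introduced by the visualization, one could dump the translation part of each loaded pose. This is a minimal sketch, assuming the vCam2World vector from above (with the usual camera convention, y points down, so the height corresponds to -t.y()):

// Print the camera position of every ground-truth pose so the height
// can be inspected directly, e.g. before and after the loop closure.
for (std::size_t i = 0; i < vCam2World.size(); ++i) {
    const Eigen::Vector3d t = vCam2World[i].translation();
    std::cout << i << " " << t.x() << " " << t.y() << " " << t.z() << "\n";
}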
This is an image of the 3D point cloud viewed horizontally from close up. As you can see, the camera position after the loop is at a different height. I also converted the Velodyne data to depth values in the camera coordinate frame and visualized them; the point cloud of the car is slightly shifted as well.
This is a bird's-eye view and it seems good.
This is a color image overlaid with depth values from Velodyne. It seems good too.
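For anyone wanting to reproduce such an overlay, here is a minimal sketch (not my exact code) of projecting a single Velodyne point into the left color image, assuming Tr (Velodyne to camera 0) and P2 (projection into the image of camera 2) have already been parsed from the sequence's calib.txt:

// Project one Velodyne point into the left color image and keep its depth
// in the camera frame.
Eigen::Matrix<double, 3, 4> Tr;   // Velodyne -> camera 0, assumed parsed from calib.txt
Eigen::Matrix<double, 3, 4> P2;   // camera 0 -> image of camera 2, assumed parsed from calib.txt
Eigen::Vector4d X_velo(5.0, 1.0, -1.5, 1.0);   // hypothetical Velodyne point, homogeneous

Eigen::Vector4d X_cam = Eigen::Vector4d::Ones();
X_cam.head<3>() = Tr * X_velo;                 // Velodyne frame -> camera frame
if (X_cam.z() > 0.0) {                         // only points in front of the camera
    Eigen::Vector3d x = P2 * X_cam;            // camera frame -> image plane (homogeneous)
    double u = x.x() / x.z();                  // pixel column
    double v = x.y() / x.z();                  // pixel row
    double depth = X_cam.z();                  // depth value to overlay at (u, v)
}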
The main difference is that the ground-truth poses are given in the left RGB camera frame. You might need to calibrate your poses into that frame.
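In other words, the Velodyne points would have to be brought into the camera frame before the ground-truth pose is applied; otherwise the cloud ends up offset, which could explain a height difference like the one shown. A minimal sketch, assuming Tr (the 3x4 Velodyne-to-camera transform from the sequence's calib.txt) has been parsed and vCam2World holds the poses loaded above:

// Transform a Velodyne point into world coordinates via the ground-truth
// (camera-frame) pose of one frame.
Eigen::Matrix<double, 3, 4> Tr;                  // assumed parsed from calib.txt
std::size_t frameIdx = 0;                        // hypothetical frame index
Eigen::Vector4d X_velo(5.0, 1.0, -1.5, 1.0);     // hypothetical Velodyne point, homogeneous
Eigen::Vector3d X_cam = Tr * X_velo;             // Velodyne frame -> camera frame
Eigen::Vector3d X_world = vCam2World[frameIdx] * X_cam;   // camera frame -> world frame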