I have a point cloud from an RGB-D sensor (e.g. a kinect or an xtion), and I am trying to use it for creating a textured mesh.

I am importing the point cloud, which has color associated to the vertices. At the same time, I also import a raster of the associated RGB image. I cannot make the color in the vertices align properly with the color in the imported raster. Note that the point cloud and the image (raster) are taken with the camera standing still, so they should align (perfectly), I think.

My guess is that I am not writing the VCGCamera file, which gives the information about the camera's pose and intrinsics, correctly. Here is my last try at the VCGCamera xml file to be loaded in meshlab:

And here are both overlayed, where you can see the misalignment (notice the aruco marker on the wall). The alignment is good but never perfect, and it should be correct, right?

I've tested this using both version 1.3.2 () on Ubuntu and 2016.12 () on Mac OS. I've also used two different camera systems, an ASUS Xtion PRO Live (equivalent to the Kinect 1) and a Kinect 2.

First of all, thank you very much for your assistance. As said, and following your advice, we investigated further the potential error in calibration between the point cloud and the RGB image. Although our initial idea was that the depth image was calibrated wrt the RGB image, we developed a method for checking whether that premise holds true.

Our first approach was to replicate the calibration and visualisation procedure of the Kinect 2 driver that we are using. It looked pretty well aligned to our eyes, but we were still not satisfied by this test. As such, we developed a new testing procedure, to better validate this alignment.

In this test, we aligned the camera with the table, so that the camera is looking longitudinally at the table. The idea is to see the coordinates of a point on the border of the table. Since we aligned the camera with the table, a point on the border should have an X coordinate around 0.026 meters, from our manual measurements. The coordinates are close to what was expected: notice how the x coordinate is around 2 cm (note that the distance between the rgb and ir cameras is around 0.052 meters, so we should see a 5 cm error if the point cloud was (wrongly) registered in the depth frame).

Summary: from these tests, it seems the point cloud is registered in the rgb optical frame, as was our initial conviction. Given this, we are kind of lost on what might be happening when we import rasters in Meshlab and see a clear misalignment when comparing to the point cloud. As I said in my initial post, we suspect that this misalignment is causing some errors in aligning the texture with the mesh, using the provided tool in Meshlab.

What might be causing this? The camera intrinsic characteristics that we are giving to meshlab through the VCGCamera file? The way Meshlab internally deals with those characteristics? Can you (or someone else) point us to, or give us, a dataset where we can see the image rasters perfectly aligning with the point cloud? Maybe, if we have this dataset, we can somehow retrace our problem and figure out what's wrong.

Once again, thank you very much for your assistance! Thank you for making yourself available to help us.

We took longer than expected because we wanted to run the test with the latest version of Meshlab, and it took us a bit to install and re-set up the system. During our tests with this new version of meshlab (2016.12, built on 24 January) we ran into another problem, related to importing camera poses into meshlab.
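For readers hitting the same issue: a MeshLab raster camera is described by a VCGCamera element roughly like the one below. The attribute values here are illustrative, not taken from the post; FocalMm and PixelSizeMm together determine the focal length in pixels, CenterPx is the principal point, and RotationMatrix / TranslationVector give the extrinsic pose.

```xml
<VCGCamera TranslationVector="0 0 0 1"
           LensDistortion="0 0"
           ViewportPx="1920 1080"
           PixelSizeMm="0.0369161 0.0369161"
           CenterPx="960 540"
           FocalMm="40"
           RotationMatrix="1 0 0 0  0 1 0 0  0 0 1 0  0 0 0 1"/>
```

A common source of misalignment is a CenterPx or PixelSizeMm that does not match the calibration used to register the point cloud, so it is worth double-checking those against the driver's intrinsics.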
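One way to sanity-check the intrinsics independently of MeshLab is to project a known 3D point into the image with the pinhole model that the VCGCamera parameters describe, and compare with where the raster actually shows that point. A minimal sketch (our own helper, not MeshLab code; it assumes z-forward camera coordinates, meters, and square pixels):

```python
def project(point_cam, focal_mm, pixel_size_mm, center_px):
    """Project a 3D point in camera coordinates (z forward, meters)
    to pixel coordinates using pinhole intrinsics."""
    f_px = focal_mm / pixel_size_mm  # focal length converted to pixels
    x, y, z = point_cam
    u = f_px * x / z + center_px[0]
    v = f_px * y / z + center_px[1]
    return u, v

# Illustrative values: a point 2.6 cm off-axis at 1 m depth,
# with a ~1083 px focal length and the principal point at image center.
u, v = project((0.026, 0.0, 1.0), focal_mm=40.0,
               pixel_size_mm=0.0369161, center_px=(960.0, 540.0))
```

If the projected pixel and the raster disagree by a consistent offset, the extrinsic pose is suspect; if the offset grows toward the image borders, the intrinsics (or missing distortion) are the more likely culprit.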
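The table test's reasoning can be written down as a tiny check: given the measured X of the border point, decide which expectation it is closer to, the RGB-frame value (0.026 m) or a value shifted by the RGB-to-IR baseline (0.052 m). This is just the post's arithmetic; the helper name and the assumed shift direction are ours:

```python
RGB_IR_BASELINE_M = 0.052  # measured RGB-to-IR camera distance (from the post)
EXPECTED_X_RGB_M = 0.026   # expected X of the table-border point in the RGB frame

def likely_frame(measured_x):
    """Return which optical frame the cloud is more likely registered in,
    assuming a depth-frame registration would shift X by the baseline."""
    err_rgb = abs(measured_x - EXPECTED_X_RGB_M)
    err_depth = abs(measured_x - (EXPECTED_X_RGB_M - RGB_IR_BASELINE_M))
    return "rgb" if err_rgb <= err_depth else "depth"
```

With the measured x of about 2 cm, the error against the RGB-frame expectation (~0.6 cm) is much smaller than against the depth-frame one, matching the post's conclusion that the cloud is registered in the rgb optical frame.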