I am having trouble reproducing your depth pointcloud results. #18
Comments
Hello. @Peize-Liu I was able to output my expected point cloud using “quadcam_7inch_n3_2023_1_14/eight_noyaw_1-sync.bag” and “quadcam_ri_sep/quadwalk_around_2-sync.bag”. The incorrect point cloud output in my previous post was caused by the following items.
I think that the difference in image sequence between D2SLAM and OmniNxt affected this part of the process. Also, I continue to look forward to receiving the new calibration files and OmniNxt datasets from you. Best regards.
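In case it helps others debugging the same symptom, here is a minimal sketch of what remapping the per-frame camera order might look like. The mapping {2, 3, 0, 1} and the function name are assumptions for illustration only; the actual D2SLAM-vs-OmniNxt correspondence has to be checked against the calibration file.

```cpp
#include <array>
#include <opencv2/core.hpp>

// Hypothetical sketch: reorder the four fisheye frames so the image
// sequence matches what the depth pipeline expects. kOrder is an
// assumed mapping for illustration, not the verified one.
std::array<cv::Mat, 4> remapCameraOrder(const std::array<cv::Mat, 4>& in) {
    constexpr std::array<int, 4> kOrder = {2, 3, 0, 1};  // assumed mapping
    std::array<cv::Mat, 4> out;
    for (std::size_t i = 0; i < in.size(); ++i) {
        out[i] = in[kOrder[i]];
    }
    return out;
}
```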
@akira-akp Did you manage to resolve the upside-down issue?
@akira-akp @Peize-Liu May I know how you use the OAK drivers to load the camera datasets? The datasets publish their output on the arducam/compressed rostopic, but the quadcam depth node uses the output from the OAK drivers, and I am not sure how we are supposed to use the OAK drivers and the OAK drivers Docker image.
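For what it's worth, a minimal sketch of reading the bag's arducam/compressed topic directly, bypassing the OAK driver. Only the topic name comes from this thread; the node name and the rest are illustrative assumptions, not the project's actual pipeline.

```cpp
#include <opencv2/opencv.hpp>
#include <ros/ros.h>
#include <sensor_msgs/CompressedImage.h>

// Decode each compressed frame from the bag into a cv::Mat.
void imageCallback(const sensor_msgs::CompressedImageConstPtr& msg) {
    cv::Mat frame = cv::imdecode(msg->data, cv::IMREAD_COLOR);
    if (frame.empty()) {
        ROS_WARN("Failed to decode compressed image");
        return;
    }
    // The concatenated quadcam image could be split / rotated here.
}

int main(int argc, char** argv) {
    ros::init(argc, argv, "arducam_reader");  // hypothetical node name
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe("arducam/compressed", 1, imageCallback);
    ros::spin();
    return 0;
}
```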
I simply used cv::rotate().
Sorry, I am having that problem too.
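For reference, a minimal sketch of the cv::rotate() fix mentioned above, assuming the frames only need a 180-degree flip (the helper name is mine):

```cpp
#include <opencv2/core.hpp>

// Rotate an upside-down camera frame by 180 degrees.
cv::Mat fixOrientation(const cv::Mat& raw) {
    cv::Mat corrected;
    cv::rotate(raw, corrected, cv::ROTATE_180);
    return corrected;
}
```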
Thanks for your reply! Are the results shown above from the quadcam depth estimation? My quadcam depth estimation is giving me point clouds that are messy and wrong. There seem to be some issues with the transform or planar translation. I am not sure if I am running the depth estimation wrongly. The D2VINS odometry is fine, though.
Hello, @Peize-Liu. @akira-akp Have you received the data?
Thank you for your repository!
I have a problem when I run the code.
I think that by using this repository and this dataset, I can get a depth point cloud and SLAM trajectories like in this video (https://www.youtube.com/watch?v=IOuJ7Y6dpeY).
However, when I run it, I get the following point cloud results, which do not match what I expected.
Is this the correct combination of rosbag recording data and calibration data I should be using?
If it is incorrect, I would like to know the correct combination.
Additionally, I would like to know if there are any changes in the source code that need to be made to reproduce the data.
Also, is it correct that the arducam images in the rosbag recording data are upside down?
branch: pr_fix_main (https://github.com/HKUST-Aerial-Robotics/D2SLAM.git)
model file: hitnet_1x240x320_model_float16_quant_opt.onnx
engine file: hitnet_1x240x320_model_float16_quant_opt.trt
dataset download link: https://hkustconnect-my.sharepoint.com/:f:/g/personal/pliuan_connect_ust_hk/EhrnbPJoptRGvYhsa3P6KUIBscF9NArlttBNUqCaTvzsyw?e=QChj0X
using calibration file: quad_cam_calib-camchain-imucam-7-inch-n3.yaml
rosbag data: quadcam_7inch_n3_2023_1_14/eight_noyaw_1-sync.bag
rosbag data: quadcam_ri_sep/quadwalk_around_2-sync.bag