Using the cow_and_lady dataset does not get the same results as in the paper #2
@yinloonga Thank you for using our work and pointing out this issue. I have already incorporated the calibration file and the results look much better. However, I happened to stumble upon one more trick the voxblox repo uses, i.e., interpolation of the known poses to match the RGBD timestamps. (The results shown in the paper make use of these interpolated poses obtained from the voxblox implementation.) I am working on implementing the interpolation code in ROS and will open a PR as soon as possible, and I will let you know on this issue.
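Since the interpolation code is not shown in the thread, here is a minimal sketch of what interpolating poses to the pointcloud timestamps could look like (the `interpolate_pose` helper and its timestamps are illustrative assumptions; voxblox's own implementation differs):

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_pose(t_query, t0, t1, T0, T1):
    """Interpolate a 4x4 homogeneous pose between timestamps t0 and t1.

    Translation is linearly interpolated; rotation is spherically
    interpolated (slerp), which is the usual way to match pose
    timestamps to sensor-data timestamps.
    """
    alpha = (t_query - t0) / (t1 - t0)
    T = np.eye(4)
    # Translation: simple linear interpolation
    T[:3, 3] = (1 - alpha) * T0[:3, 3] + alpha * T1[:3, 3]
    # Rotation: slerp between the two rotation blocks
    rots = Rotation.from_matrix([T0[:3, :3], T1[:3, :3]])
    T[:3, :3] = Slerp([t0, t1], rots)(t_query).as_matrix()
    return T
```

Each incoming cloud's timestamp would then be matched against the two nearest pose timestamps before integration.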
Sorry to bother you. I checked the code, and I found that the calibration just multiplies by the static transform matrix into the world frame, but does nothing else, like calibrating the points themselves. How does this step make the results look better? It seems to just translate the frame, which I don't think will have an effect on the results (of course the interpolation will have an effect).
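For what it's worth, applying a static extrinsic is not the same as a rigid shift of the whole map: since T_G_C = T_G_V · T_V_C and T_G_V differs per scan, each cloud is displaced differently in the world frame. A small numpy sketch (the poses and the 0.1 m offset are made-up values for illustration, not the dataset's calibration):

```python
import numpy as np

def make_pose(yaw, t):
    """Build a 4x4 homogeneous pose from a z-axis yaw (radians) and a translation."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = t
    return T

# Hypothetical static extrinsic: sensor mounted 0.1 m from the body origin
T_V_C = make_pose(0.0, [0.1, 0.0, 0.0])

# Two body poses with different orientations
T_G_V_a = make_pose(0.0, [0.0, 0.0, 0.0])
T_G_V_b = make_pose(np.pi / 2, [1.0, 0.0, 0.0])

p_cam = np.array([0.0, 0.0, 0.0, 1.0])  # a point at the camera origin

# World-frame displacement the extrinsic introduces, per pose:
off_a = (T_G_V_a @ T_V_C @ p_cam - T_G_V_a @ p_cam)[:3]
off_b = (T_G_V_b @ T_V_C @ p_cam - T_G_V_b @ p_cam)[:3]
# off_a and off_b point in different directions, so the calibration
# does NOT reduce to one global translation of the reconstructed map.
```

Because the displacement rotates with each body pose, omitting the extrinsic smears surfaces that were observed from different orientations.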
Also, I ran the default voxblox repo directly on the dataset, and the result does not look as bad as the VDBFusion paper suggests. To be fair, I didn't use the RGB data to render the mesh. The top one is the result I got with the default voxblox repo, which even uses a larger voxel size (0.05) than the default VDB setting (0.02). Just curious: which configs did the paper use for the voxblox and VDBFusion runs? Is there anything I missed that could explain this confusion? @yinloonga if possible, could you please reopen the issue, since this is quite a similar question?
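As a side note on comparing those two settings: the number of voxels needed to cover the same volume scales with the inverse cube of the voxel edge, so 0.05 m vs 0.02 m is a substantial difference in resolution. A quick back-of-the-envelope check:

```python
# Voxel-count ratio between a 0.05 m grid (voxblox default above) and a
# 0.02 m grid (vdbfusion default) over the same mapped volume:
# count scales with (edge_coarse / edge_fine) ** 3.
ratio = (0.05 / 0.02) ** 3
print(ratio)  # 15.625 -> ~15.6x more voxels at 0.02 m
```

So the two runs are not directly comparable in memory or surface detail.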
Hello @Kin-Zhang, as stated in the paper: In summary, this is a well-known issue with the C++ Voxblox API. If you make use of the extremely overengineered ROS wrappers + tons of dependencies, at some point you will arrive at the nice results. But we are comparing the plain TSDF library implementations, in isolation; thus, that's what you get. To run the experiments we wrapped the C++ API in a thin Python wrapper, just to make sure we use the same dataloaders, etc., that we use for vdbfusion. The wrapper code is available at: https://github.com/PRBonn/voxblox_pybind. There we also acknowledge the fact that there are, indeed, some weird errors when using the voxblox library outside the ROS ecosystem. More info at: ethz-asl/voxblox#373. NOTE: I know the paper is quite long, but if you give it a read it might (or at least I hope it will) answer most of the questions we already asked ourselves ;)
@nachovizzo Thanks! I see. And there is one more question, mentioned above:
> Sorry to bother you. I checked the code, and I found that the calibration just multiplies by the static transform matrix into the world frame, but does nothing else, like calibrating the points themselves. How does this step make the results look better? It seems to just translate the frame, which I don't think will have an effect on the results (of course the interpolation will have an effect).
Besides, after I pulled this vdbfusion_ros repo and ran it in Docker, the result is even... worse... How can I run the cow_and_lady bag with only the C++ API, without a bag or ROS? I don't know if it is a ROS problem that causes vdbfusion_ros to perform so badly... Sorry again. Here is the screenshot, after running vdbfusion_ros on an i7-9750H CPU with 16 GB RAM (the same device I used for the default voxblox run), with the default config exactly as in this repo:
@Kin-Zhang I'm sorry you are struggling with this so much... but the problem resides in the cow dataset. VDBFusion is a mapping-with-known-poses system, which basically means that you must have at your disposal the extrinsics for each pointcloud you want to process. That said, for the cow and lady dataset, there is no such thing. Since the sensor data is not synchronized, they do interpolation, etc. To carry out our experiments, we basically extracted the pointcloud+pose information that was arriving at the mapping system and used this "input" information for all the baselines in the paper and for vdbfusion itself. To do so, I did something horrible :) I hacked the Voxblox code (since I find it extremely bloated) to extract the main thing: I just store the incoming poses in a poses.txt file and dump each pointcloud to disk. I must admit that this process was not really clear, though. I also don't see the advantage of keeping on using this particular dataset, with all the aforementioned problems. If you need to run the experiments just to reproduce the results of the paper, you can use the same approach as I did. For this, you have 2 options.

**Option 1 (patch)**

Patch the voxblox library with this horrible patch:

diff --git a/voxblox/CMakeLists.txt b/voxblox/CMakeLists.txt
index 3a0c1ef..4809b7e 100644
--- a/voxblox/CMakeLists.txt
+++ b/voxblox/CMakeLists.txt
@@ -4,6 +4,11 @@ project(voxblox)
find_package(catkin_simple REQUIRED )
catkin_simple(ALL_DEPS_REQUIRED)
+find_package(PCL REQUIRED)
+include_directories(${PCL_INCLUDE_DIRS})
+link_directories(${PCL_LIBRARY_DIRS})
+add_definitions(${PCL_DEFINITIONS})
+
set(CMAKE_MACOSX_RPATH 0)
add_definitions(-std=c++11 -Wall -Wextra)
@@ -57,7 +62,7 @@ cs_add_library(${PROJECT_NAME}_proto
cs_add_library(${PROJECT_NAME}
${${PROJECT_NAME}_SRCS}
)
-target_link_libraries(${PROJECT_NAME} ${PROJECT_NAME}_proto)
+target_link_libraries(${PROJECT_NAME} ${PROJECT_NAME}_proto ${PCL_LIBRARIES})
############
# BINARIES #
diff --git a/voxblox/include/voxblox/integrator/tsdf_integrator.h b/voxblox/include/voxblox/integrator/tsdf_integrator.h
index 9dcf817..cb5d356 100644
--- a/voxblox/include/voxblox/integrator/tsdf_integrator.h
+++ b/voxblox/include/voxblox/integrator/tsdf_integrator.h
@@ -338,6 +338,9 @@ class FastTsdfIntegrator : public TsdfIntegratorBase {
/// Used in terminating the integration early if it exceeds a time limit.
std::chrono::time_point<std::chrono::steady_clock> integration_start_time_;
+
+ // vector of poses
+ std::vector<Eigen::Matrix4f> poses_;
};
} // namespace voxblox
diff --git a/voxblox/src/integrator/tsdf_integrator.cc b/voxblox/src/integrator/tsdf_integrator.cc
index 40bf722..974f697 100644
--- a/voxblox/src/integrator/tsdf_integrator.cc
+++ b/voxblox/src/integrator/tsdf_integrator.cc
@@ -1,8 +1,42 @@
#include "voxblox/integrator/tsdf_integrator.h"
+#include <pcl/io/pcd_io.h>
+#include <pcl/io/ply_io.h>
+#include <pcl/point_types.h>
+
#include <iostream>
#include <list>
+#include <string>
+#include <sstream>
+#include <vector>
+#include <Eigen/Core>
+
+namespace {
+std::string zfill(const uint32_t& id) {
+ std::stringstream sequence_id;
+ sequence_id << std::setw(6) << std::setfill('0') << id;
+ return sequence_id.str();
+}
+
+void WritePosesFile(const std::string& path_to_file,
+ const std::vector<Eigen::Matrix4f>& result_poses) {
+ std::ofstream out_result(path_to_file);
+ for (uint32_t i = 0; i < result_poses.size(); ++i) {
+ const Eigen::Matrix4f& pose = result_poses[i];
+ for (uint32_t r = 0; r < 3; ++r) {
+ for (uint32_t c = 0; c < 4; ++c) {
+ out_result << ((r == 0 && c == 0) ? "" : " ") << pose(r, c);
+ }
+ }
+ out_result << std::endl;
+ }
+
+ out_result.close();
+}
+
+}
+
namespace voxblox {
TsdfIntegratorBase::Ptr TsdfIntegratorFactory::create(
@@ -559,6 +593,20 @@ void FastTsdfIntegrator::integratePointCloud(const Transformation& T_G_C,
timing::Timer integrate_timer("integrate/fast");
CHECK_EQ(points_C.size(), colors.size());
+ // Store the pointcloud that is being fed to the mapping system
+ static uint32_t idx{0};
+ pcl::PointCloud<pcl::PointXYZ> cloud;
+ cloud.points.reserve(points_C.size());
+ for(const auto& point: points_C) {
+ cloud.points.emplace_back(pcl::PointXYZ(point.x(), point.y(), point.z()));
+ }
+ pcl::io::savePLYFileBinary(zfill(idx++) + ".ply", cloud);
+
+ // Store the pose that the mapping system receives
+ poses_.push_back(T_G_C.getTransformationMatrix());
+ WritePosesFile("poses.txt", poses_);
+ return;
+
integration_start_time_ = std::chrono::steady_clock::now();
static int64_t reset_counter = 0;
Then run the voxblox pipeline as usual, so the patched code dumps the pointclouds and poses.

**Option 2 (use dumped data)**

I just uploaded the data to this location, so you can just download it and run the vdbfusion Python API:

```shell
pip install -U vdbfusion
git clone https://github.com/PRBonn/vdbfusion
cd vdbfusion
cd examples/python
./cow_pipeline.py $DATASETS/voxblox/ply --visualize
```

**Results (obtained 2 secs ago)**

**Extra notes**

With this "static" version of the dataset you can also test other mapping pipelines, like octomap, without having to rely on crazy stuff happening under the hood in the voxblox library ;)

PS: If you manage to integrate
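The poses.txt written by the patch stores, per line, the top 3×4 block of each 4×4 homogeneous pose as 12 row-major floats (see `WritePosesFile` above). A minimal sketch of reading that format back (the `load_poses` helper is an assumption, not part of vdbfusion):

```python
import numpy as np

def load_poses(path):
    """Read poses in the WritePosesFile format: 12 space-separated
    row-major floats per line, i.e. the top 3x4 block of a 4x4 pose."""
    poses = []
    with open(path) as f:
        for line in f:
            values = np.array(line.split(), dtype=float)
            T = np.eye(4)                 # bottom row stays [0, 0, 0, 1]
            T[:3, :4] = values.reshape(3, 4)
            poses.append(T)
    return poses
```

Each pose then pairs with the similarly indexed `000000.ply`, `000001.ply`, ... cloud dumped by the patch.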
You are right!! THANKS!!! Finally, I got the right result with vdbfusion. Thanks again. I will check those details to see whether I can fix it in vdbfusion_ros first, and later for
Hi, finally, I wrote another ROS implementation of vdbfusion, copying the two main cpp files and using the voxblox By the way, how do you automatically remove the not-so-good points [purple rectangle]? In your screenshot there, I found they are automatically removed.
@Kin-Zhang Wow! That's impressive. To remove the points, just control the NOTE: I would like to add a link to your repo in case someone has a similar setup to yours, but there is a license conflict. Your repo is licensed under the GPL, and ours under MIT. Additionally, the license headers are not part of the original implementation, which basically implies you are licensing the original vdbfusion code under your authorship with a GPL license. Nothing to worry about now, but something I'd definitely be more careful about.
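The parameter name is cut off above, so as a generic illustration only (the `min_weight` name and all values here are assumptions, not vdbfusion's confirmed API): TSDF pipelines typically drop voxels whose integration weight is below a threshold before meshing, which removes the sparsely observed, noisy points:

```python
import numpy as np

# Toy TSDF voxel data: signed distances and integration weights.
# Voxels observed only a few times (low weight) tend to be noisy,
# so meshing pipelines usually discard them below a weight threshold.
sdf = np.array([0.02, -0.01, 0.03, -0.04])
weights = np.array([25.0, 0.5, 12.0, 1.5])

min_weight = 5.0  # hypothetical threshold; tune per dataset
keep = weights >= min_weight
filtered_sdf = sdf[keep]
# Only the well-observed voxels survive and get meshed.
```

Raising the threshold trades completeness (fewer thin, briefly seen surfaces) for a cleaner mesh.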
@Kin-Zhang I just added a license header to all source files; I think that by just copy-pasting those into your code, all potential future legal troubles should be solved (but I'm not an expert in this field). Additionally, the HDDA does not make much sense in our mapping context, since it's highly effective ONCE the VDB topology has already been created (that would be after the mapping pipeline has finished). But for building topologies on the go, as we both do now, it is not the right usage (unless I'm missing something).
Thanks!
Ha! I didn't notice the license before, since it is added automatically on gitee (I need to choose one to make the repo public; let me check the license again later). I will make the changes and copy your header into the original files. Actually, I made some changes, and I will comment on them.
Oh! That's the point. Thanks for letting me know; I also missed that point. I'm happy to learn from your comments. Thanks again!
Done. At commit: Kin-Zhang/vdbfusion_mapping@56da6a9 |
Great, would you like to add the link to the README through a PR? This way you will appear in the contributors list. ;) If not, I can also do it; it's just 1 line in the readme :)
Done with the new pull request. And THANKS for guiding me through all of this. I learned a lot! @nachovizzo
Hi, thank you very much for your great work. I encountered some problems when using the vdbfusion framework to test the cow_and_lady dataset. A file with the .vdb suffix is obtained after running the code. I then used the vdb_viewer command, obtained after compiling the OpenVDB library, to view the resulting model. Obviously, vdbfusion gets good results (objects are clearly recognizable) compared to the actual reference model. However, compared to the results shown in the paper, there is a big difference.

Supplement: I did not modify any parameters of the code after downloading it, but just used the default configuration. The above result was obtained after the code had been running for a period of time.

What puzzles me is that, for the point cloud pose information obtained from the cow_and_lady dataset, no extrinsic parameters are used in vdbfusion; there is a coordinate transformation T_V_C in the calibration file, and my understanding is that the camera coordinate system is used as the base reference coordinate system here. I hope you have time to reply; thank you very much.