Inquiry regarding Data Collection Workflow and TF/Pointcloud Alignment #1

@KongJei

Description

Hi,

First of all, thank you for your hard work on this project and for sharing the code.

I am currently trying to set up a data collection environment similar to the one you used to generate the benchmark data. My goal is to move the character within the Unity simulator while simultaneously recording images, camera poses, and depth pointclouds in the global frame.

However, I have encountered the following challenges:

TF & Pointcloud Alignment: I utilized camera_data['projection_matrix'] and camera_data['world_to_camera_matrix'] to publish the camera pose and pointcloud. Despite accounting for the Unity (LHS) to ROS (RHS) coordinate conversion, the visualization in Rviz shows that the TF and the global frame pointcloud are not correctly aligned.
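For reference, this is roughly how I am converting the pose (a sketch only; the change-of-basis matrix `C` encodes my assumption of Unity's x-right / y-up / z-forward axes vs. ROS's x-forward / y-left / z-up convention, and it does not yet account for Unity's `worldToCameraMatrix` following the OpenGL camera convention with forward along -Z, which I suspect may be part of the misalignment):

```python
import numpy as np

# Assumed change of basis from Unity (x right, y up, z forward, LHS)
# to ROS (x forward, y left, z up, RHS): p_ros = C @ p_unity.
C = np.array([[ 0.0, 0.0, 1.0],
              [-1.0, 0.0, 0.0],
              [ 0.0, 1.0, 0.0]])

def unity_cam_to_ros_pose(world_to_camera):
    """Turn Unity's 4x4 world_to_camera matrix into a camera-to-world
    pose expressed in ROS axes. A sketch of my current approach."""
    cam_to_world = np.linalg.inv(np.asarray(world_to_camera, dtype=float))
    R_u, t_u = cam_to_world[:3, :3], cam_to_world[:3, 3]
    # Conjugate the rotation and remap the translation into ROS axes.
    T = np.eye(4)
    T[:3, :3] = C @ R_u @ C.T
    T[:3, 3] = C @ t_u
    return T
```

I publish the resulting matrix as the camera TF and apply the same `C` to the depth points before transforming them into the global frame, so any inconsistency between the two paths should show up as exactly the kind of offset I see in Rviz.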

Control Interface: I noticed that ROS services are mentioned, but I am struggling with the practical implementation of controlling the character through them.

If you collected the benchmark data using ROS services, could you please share the specific workflow or scripts you used? I would like to know if you used a teleoperation system, a waypoint-based approach, or a specific service structure to command the character while recording the synchronized camera and pose data.
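For context, the kind of workflow I have in mind is a simple waypoint-driven loop like the sketch below; `move_to`, `capture`, and `Frame` are placeholders I made up to illustrate the question, not functions from this repository:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One synchronized sample: RGB image, camera pose, depth cloud."""
    image: object
    pose: object
    cloud: object

def collect_along_waypoints(waypoints, move_to, capture):
    """Step the character through `waypoints`, grabbing one synchronized
    Frame at each stop. `move_to` and `capture` stand in for whatever
    ROS service calls the simulator actually exposes."""
    frames = []
    for wp in waypoints:
        move_to(wp)               # hypothetical: command the character to the waypoint
        frames.append(capture())  # hypothetical: snapshot image + pose + pointcloud
    return frames
```

Whether you used something like this, a teleoperation interface, or a different service structure entirely is exactly what I would love to learn.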
