
Depth measurement offset (possibly due to hardcoded xtable/ztable) #144

Open
@kohrt

Description


While calibrating the Kinect2 to our robots, I saw that the depth values do not match the robot's model.

I investigated this and wrote a program that compares the measured distances to distances calculated from a chessboard I used for the calibration. From each image I took the region of the board and compared each pixel's (valid) depth measurement to the distance calculated by ray intersection with the board plane. Before doing this, I undistorted the images.
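The comparison described above can be sketched as follows. This is a minimal illustration, not the actual code from the calibration tool: the function name, the pinhole intrinsics parameters, and the plane representation (unit normal plus a point on the board, both in camera coordinates) are my assumptions.

```python
import numpy as np

def expected_depth(u, v, fx, fy, cx, cy, plane_normal, plane_point):
    """Depth (z along the optical axis) at which the ray through
    undistorted pixel (u, v) intersects the chessboard plane.
    Hypothetical helper; intrinsics and plane are assumed known."""
    # Back-project the pixel to a ray in camera coordinates (z = 1)
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    # Solve (t * ray - plane_point) . plane_normal = 0 for t
    t = np.dot(plane_normal, plane_point) / np.dot(plane_normal, ray)
    # The z component of the intersection point is the expected depth
    return t * ray[2]
```

The per-pixel error is then simply the measured depth minus `expected_depth(...)`; averaging that error over the board region of many images is what yields the constant offset.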

It seems like the Kinect2 has a static offset of around 24 mm: the depth measurement is always 24 mm further away than it should be. Here are some plots of the results. The offset does not appear to be related to image coordinates or distance. The error grows towards the image corners, but there the distortion is much stronger and the values get noisier.

[Plots: plot.png, plot_x.png, plot_y.png, plot_xy.png]

So far I have only used the OpenCL depth packet processor, so I can't tell right now whether this is related to that specific implementation, to all implementations, or to the device itself. Tomorrow I will try out the OpenGL and CPU depth packet processors as well. Has anyone experienced similar behavior?

@christiankerl: Do you think it could be a problem of the packet processor implementation?

I added the code for the depth calibration to the calibration tool of my ROS Kinect2 bridge. If the depth values are decreased by the calculated offset, the resulting depth cloud perfectly matches the model of our robots.
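The correction itself is trivial once the offset is known. A minimal sketch, assuming depth frames in millimeters where 0 marks invalid pixels (the constant name and function are hypothetical, not from the bridge's actual code):

```python
import numpy as np

DEPTH_OFFSET_MM = 24.0  # static offset measured as described above (assumption)

def correct_depth(depth_mm):
    """Subtract the constant offset from all valid (non-zero) measurements,
    leaving invalid pixels at 0."""
    depth = np.asarray(depth_mm, dtype=np.float64)
    return np.where(depth > 0, depth - DEPTH_OFFSET_MM, 0.0)
```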
