Description
While calibrating the Kinect2 to our robots, I noticed that the depth values do not match the robot model.
I investigated this and wrote a program that compares the measured distances to distances calculated from the chessboard I used for calibration. For each image I took the region of the board and compared each pixel's (valid) depth measurement to the distance obtained by intersecting the pixel's ray with the board plane. The images were undistorted beforehand.
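The per-pixel comparison can be sketched roughly like this. This is not the actual code from my calibration tool, just a minimal illustration assuming pinhole intrinsics (fx, fy, cx, cy) for the already-undistorted depth image and a board plane given by a normal `n` and a point `p0` in camera coordinates (all names and numbers here are made up for the example):

```python
import numpy as np

def expected_depth(u, v, fx, fy, cx, cy, n, p0):
    """Depth (z-coordinate) at which the ray through pixel (u, v) hits the board plane.

    The plane is given by a normal n and a point p0 on it, both in camera
    coordinates. Assumes the image is undistorted, so the pinhole model applies.
    """
    # Ray direction through the pixel, left with z = 1 so the ray parameter t
    # directly equals the z-coordinate (depth) of the intersection point.
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    t = np.dot(n, p0) / np.dot(n, ray)
    return t  # same units as p0

# Hypothetical values just to show the comparison:
fx = fy = 365.0; cx = 256.0; cy = 212.0   # ballpark Kinect2 depth intrinsics
n = np.array([0.0, 0.0, -1.0])            # board facing the camera
p0 = np.array([0.0, 0.0, 1.0])            # board 1 m in front of it

calc = expected_depth(260.0, 215.0, fx, fy, cx, cy, n, p0)
measured = 1.024                          # what the sensor reports for that pixel
print(measured - calc)                    # per-pixel depth error (~0.024 here)
```

Collecting this difference over all valid board pixels of all images is what produced the plots below.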
It seems like the Kinect2 has a static offset of around 24 mm: the depth measurement is always 24 mm further away than it should be. Here are some plots of the results. The offset does not seem to be related to image coordinates or distance. The error grows towards the image corners, but there the distortion is much stronger and the values are noisier.
I only used the OpenCL depth packet processor, so I can't tell right now whether it is related to that specific implementation, to all implementations, or to the device itself. Tomorrow I will try the OpenGL and CPU depth packet processors as well. Has anyone experienced similar behavior so far?
@christiankerl: Do you think it could be a problem of the packet processor implementation?
I added the code for the depth calibration to the calibration tool of my ROS Kinect2 bridge. If the depth values are decreased by the calculated offset, the resulting depth cloud perfectly matches the model of our robots.
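For illustration, the correction itself is just a constant subtraction. A minimal sketch (not the actual bridge code; the constant name and the depth-in-millimeters layout are my assumptions), where invalid pixels (depth 0) are left untouched so they stay marked invalid:

```python
import numpy as np

DEPTH_OFFSET_MM = 24.0  # static offset estimated from the board comparison

def correct_depth(depth_mm):
    """Subtract the constant offset from all valid depth measurements.

    Pixels with depth == 0 are invalid and are not changed.
    """
    corrected = depth_mm.astype(np.float32, copy=True)
    valid = corrected > 0
    corrected[valid] -= DEPTH_OFFSET_MM
    return corrected

depth = np.array([[0.0, 1024.0],
                  [2024.0, 524.0]], dtype=np.float32)
print(correct_depth(depth))  # zeros stay zero, valid pixels move 24 mm closer
```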