This repository has been archived by the owner on Jan 23, 2024. It is now read-only.
Hi,
1> I am trying to draw the camera's trajectory on the frame taken by the camera at its initial position. When the odometry starts, I store the initial pose and compute its transformation matrix (T1). As the camera moves, I compute the transformation matrix (T2) for each pose, and then T1 * T2.inverse() to express the T2 pose with respect to T1. I then push the x, y, z of the resulting translation through the camera intrinsics to get the image point of pose T2 in the frame taken at T1. I multiplied fx and cx by my image width, and fy and cy by my image height. But the pixel values I get look random and are not accurate; when I plot them on the initial frame, they all accumulate in the top-right corner. Is there anything missing from my procedure?
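For reference, the procedure above can be sketched roughly as follows. This is a minimal sketch, assuming 4x4 homogeneous camera-to-world poses and intrinsics already in pixel units (if your fx, fy, cx, cy come normalized to [0, 1], scale them by image width/height first; if your poses are world-to-camera, the relative transform is T2 * T1.inverse() instead). The sample values are hypothetical:

```python
import numpy as np

def project_pose_into_first_frame(T1, T2, K):
    """Project the camera centre at pose T2 into the image taken at pose T1.

    T1, T2: 4x4 camera-to-world transforms.
    K: 3x3 intrinsic matrix with fx, fy, cx, cy already in pixels.
    """
    # Pose of camera 2 expressed in camera 1's frame.
    T_rel = np.linalg.inv(T1) @ T2
    x, y, z = T_rel[:3, 3]           # camera-2 centre in camera-1 coordinates
    if z <= 0:
        return None                  # behind the initial camera, not visible
    p = K @ np.array([x, y, z])
    return p[:2] / p[2]              # perspective divide -> (u, v) in pixels

# Hypothetical sample: camera 2 is 1 m forward and 0.1 m right of camera 1.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
T1 = np.eye(4)
T2 = np.eye(4)
T2[:3, 3] = [0.1, 0.0, 1.0]
print(project_pose_into_first_frame(T1, T2, K))  # -> [370. 240.]
```

Note that the perspective divide by z is essential; pushing raw x, y, z through the intrinsics without it would scatter points toward an image corner.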
2> I am a little confused about pose_world. Are the values in metres, or in some other unit?
My project is training a Terrain Estimator.
A rover or an astronaut carries a monocular camera. At every step, the camera can see part of the region in front of it, but not the terrain directly below it or immediately ahead. So I have to recover the terrain exactly under the foot or wheel from previous frames taken by the camera. We can annotate that terrain patch with readings from the accelerometer (or, for now, from the astronaut via push buttons), and each patch together with its reading is given to a regression model. For every world point of the camera trajectory, we can approximate the world points of the astronaut's feet or the rover's wheels. We then check whether a particular world point is visible in the previous frames and, if so, collect the terrain patches from all the frames in which that world point is visible.
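The collection step described above could be sketched like this (a sketch under the assumption of a pinhole model, 4x4 world-to-camera poses per frame, and a hypothetical fixed patch size; none of these names come from the project itself):

```python
import numpy as np

def collect_patches(world_point, frames, K, patch_size=32):
    """Gather image patches around a world point from every frame that sees it.

    world_point: 3-vector in world coordinates.
    frames: list of (image, T_wc) pairs, where image is an HxW(xC) array and
            T_wc is the 4x4 world-to-camera transform for that frame.
    K: 3x3 intrinsic matrix in pixel units.
    """
    patches = []
    half = patch_size // 2
    pw = np.append(world_point, 1.0)        # homogeneous world point
    for image, T_wc in frames:
        pc = (T_wc @ pw)[:3]                # point in camera coordinates
        if pc[2] <= 0:
            continue                        # behind the camera, skip
        u, v = (K @ pc)[:2] / pc[2]         # project with perspective divide
        u, v = int(round(u)), int(round(v))
        h, w = image.shape[:2]
        # Only keep the patch if it lies fully inside the image.
        if half <= u < w - half and half <= v < h - half:
            patches.append(image[v - half:v + half, u - half:u + half])
    return patches
```

Each collected patch, paired with its accelerometer (or push-button) label, would then form one training sample for the regression model.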
Thank you.
yvtheja changed the title from "Transformation matrix" to "Transformation matrix and units of pose_world" on Jun 23, 2016.
My guess is that one of your transformations is wrong somewhere. I think the only way to find the bug is to go step by step through your transformations and verify each step with sample data.
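One way to do that step-by-step check (a sketch, assuming camera-to-world poses; the sample values are made up): pick a known pose, compute the relative transform, and confirm it round-trips back to the original.

```python
import numpy as np

# Sample data: camera 1 at the origin; camera 2 moved 1 m along +z and
# rotated 90 degrees about the y-axis (hypothetical values for checking).
c, s = 0.0, 1.0  # cos(90 deg), sin(90 deg)
T1 = np.eye(4)
T2 = np.array([[ c, 0, s, 0],
               [ 0, 1, 0, 0],
               [-s, 0, c, 1],
               [ 0, 0, 0, 1.0]])

# For camera-to-world poses, camera 2 relative to camera 1 is inv(T1) @ T2.
T_rel = np.linalg.inv(T1) @ T2

# Verify: composing T1 with the relative transform must recover T2 exactly.
assert np.allclose(T1 @ T_rel, T2)
print(T_rel[:3, 3])  # translation of camera 2 in camera 1's frame -> [0. 0. 1.]
```

If a step like this fails for simple hand-constructed poses, the convention (camera-to-world vs. world-to-camera, or the order of the product) is the likely culprit.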