Hi authors,
I noticed that your paper performs E+I fusion for depth estimation. However, the calibration files in MVSEC confuse me quite a bit.
My pipeline is: start from cam0 (the left event camera), back-project pixels into the world frame (using the inverse of cam0's projection matrix), transform into the cam2 frame (T_10 @ T_21), then project into the VI-Sensor image plane (using cam2's projection matrix).
I believe the pipeline is almost correct, since my result shows only small pixel shifts.
Would it be possible for you to provide a script for the alignment? I would really appreciate it.
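For reference, here is a minimal sketch of the reprojection I described above, assuming pinhole intrinsics `K0`/`K2`, a per-pixel depth map in the cam0 frame, and a single 4x4 extrinsic `T_20` standing in for the composed `T_21 @ T_10` (all of these names are my own placeholders, not the ones in the MVSEC calibration files):

```python
import numpy as np

def reproject_cam0_to_cam2(depth0, K0, K2, T_20):
    """Warp cam0 pixel coordinates into cam2 using depth.

    depth0 : (H, W) depth map in the cam0 frame (meters)
    K0, K2 : (3, 3) pinhole intrinsic matrices (distortion assumed removed)
    T_20   : (4, 4) rigid transform cam0 -> cam2 (my stand-in for T_21 @ T_10)
    """
    H, W = depth0.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N

    # Back-project: X_cam0 = depth * K0^-1 * [u, v, 1]^T
    rays = np.linalg.inv(K0) @ pix                      # 3 x N rays
    X0 = rays * depth0.reshape(1, -1)                   # 3 x N points in cam0

    # Transform into the cam2 frame with the homogeneous extrinsic
    X0_h = np.vstack([X0, np.ones((1, X0.shape[1]))])   # 4 x N
    X2 = (T_20 @ X0_h)[:3]                              # 3 x N points in cam2

    # Project with cam2 intrinsics and dehomogenize
    p2 = K2 @ X2
    p2 = p2[:2] / p2[2:3]
    return p2.T.reshape(H, W, 2)                        # per-pixel (u, v) in cam2
```

If this matches what your alignment script does (modulo distortion handling and the exact transform composition), that would already confirm the source of the small pixel shifts I'm seeing.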