Image-Based Ego-Motion Estimation Using On-Vehicle Omnidirectional Camera
Outline of SfM
Matsuhisa, Ryota, et al. "Image-based ego-motion estimation using on-vehicle omnidirectional camera." International Journal of Intelligent Transportation Systems Research 8.2 (2010): 106-117.
-
Feature point detection and tracking
The feature points are detected and tracked from the omnidirectional image sequences. First, feature points are detected in the first frame of the image sequence by the Harris operator. Then they are tracked by calculating their optical flow with the Lucas-Kanade method. -
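The two steps above can be sketched in plain NumPy (a minimal illustration, not the paper's implementation; the 3x3 summing window, the tracking window size, and the synthetic test images are assumptions):

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k*trace(M)^2 of the
    3x3-summed structure tensor of the image gradients."""
    Iy, Ix = np.gradient(img.astype(float))
    def box3(a):  # sum over a 3x3 neighborhood (zero-padded)
        p = np.pad(a, 1)
        h, w = a.shape
        return sum(p[i:i+h, j:j+w] for i in range(3) for j in range(3))
    Ixx, Iyy, Ixy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    return Ixx * Iyy - Ixy ** 2 - k * (Ixx + Iyy) ** 2

def lucas_kanade(img0, img1, pt, win=7):
    """One Lucas-Kanade step: least-squares flow (dx, dy) of a feature
    point between two frames, from the brightness-constancy equations."""
    y, x = pt
    Iy, Ix = np.gradient(img0)
    It = img1 - img0
    sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Detection: the strongest Harris response lies at a corner of a bright square.
img = np.zeros((40, 40)); img[10:30, 10:30] = 1.0
iy, ix = np.unravel_index(harris_response(img).argmax(), img.shape)

# Tracking: a Gaussian blob shifted by 0.5 px in x gives flow close to (0.5, 0).
Y, X = np.mgrid[0:41, 0:41]
blob0 = np.exp(-((X - 20.0) ** 2 + (Y - 20) ** 2) / 50)
blob1 = np.exp(-((X - 20.5) ** 2 + (Y - 20) ** 2) / 50)
dx, dy = lucas_kanade(blob0, blob1, (20, 20))
```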
Image conversion method
Divide the omnidirectional images into several partial images and convert them into the perspective projection model.
The system automatically chooses the viewing direction that contains as many feature points as possible, with the viewing angle determined beforehand.
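A minimal sketch of this direction selection, assuming features are summarized by their azimuth angles on the omnidirectional image and the viewing angle `fov` is fixed beforehand (the grid of 360 candidate directions is an illustrative choice):

```python
import numpy as np

def best_direction(azimuths, fov, candidates=360):
    """Return the viewing direction whose fixed field of view `fov`
    contains the largest number of feature azimuths (all in radians)."""
    dirs = np.linspace(-np.pi, np.pi, candidates, endpoint=False)
    # wrapped angular distance from each candidate direction to each feature
    diff = np.abs(((azimuths[None, :] - dirs[:, None]) + np.pi)
                  % (2 * np.pi) - np.pi)
    counts = (diff <= fov / 2).sum(axis=1)
    return dirs[counts.argmax()]

az = np.array([0.9, 1.0, 1.1, -2.0, 2.5])  # feature azimuths (radians)
d = best_direction(az, fov=0.6)            # direction covering the cluster
```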
-
Estimation by factorization
Factorization is a method of estimating the egomotion and 3D shape from multiple view images.
By applying singular value decomposition to the tracks of the feature point coordinates, factorization can compute the shape and the camera motion simultaneously. A measurement matrix is built from the coordinate values of the feature points obtained by tracking. The egomotion and 3D shape are extracted from this matrix by carrying out the singular value decomposition together with an added constraint condition. -
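The SVD step can be illustrated with the classical orthographic (Tomasi-Kanade) variant of factorization. Note this is an assumed simplification: the slides describe a perspective factorization, and the added constraint (metric upgrade) step is omitted here, so motion and shape are recovered only up to an affine ambiguity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: P 3-D points observed by F orthographic cameras.
P, F = 30, 6
X = rng.standard_normal((3, P))          # 3-D shape (3 x P)
W = np.zeros((2 * F, P))                 # measurement matrix of tracked 2-D points
for f in range(F):
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random rotation
    t = rng.standard_normal((2, 1))      # 2-D translation per frame
    W[2 * f:2 * f + 2] = Q[:2] @ X + t   # orthographic projection: top 2 rows

# Step 1: subtract the per-row mean (removes translation, centers the shape).
Wc = W - W.mean(axis=1, keepdims=True)

# Step 2: SVD and rank-3 truncation -> motion and shape (affine ambiguity left).
U, s, Vt = np.linalg.svd(Wc, full_matrices=False)
M = U[:, :3] * np.sqrt(s[:3])            # 2F x 3 camera (motion) matrix
S = np.sqrt(s[:3])[:, None] * Vt[:3]     # 3 x P shape matrix

residual = np.linalg.norm(Wc - M @ S)    # ~0 for noise-free data
```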
Optimization by bundle adjustment
As mentioned above, estimation in perspective factorization is performed by iterative calculation. Therefore, it may converge to a local minimum different from the global minimum. Bundle adjustment is performed to minimize the errors. The error to be minimized is the sum of the distances between the 2D coordinates of the feature points used as input to the factorization and their reprojections computed from the camera egomotion and the 3D shape estimated by factorization.
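A toy illustration of the reprojection error being minimized: Gauss-Newton refinement of a single camera translation against known 3-D points. This is an assumed simplification, not the paper's procedure; a full bundle adjustment would also optimize the rotations and the 3-D shape, and the pinhole model and all numbers here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))  # known 3-D points
t_true = np.array([0.1, -0.2, 0.3])                    # translation to recover

def project(X, t):
    Y = X + t                        # camera-frame coordinates
    return Y[:, :2] / Y[:, 2:3]      # pinhole projection (x/z, y/z)

obs = project(X, t_true)             # observed 2-D feature points

t = np.zeros(3)                      # initial guess
for _ in range(10):
    Y = X + t
    r = (obs - Y[:, :2] / Y[:, 2:3]).ravel()   # reprojection residuals
    # Analytic Jacobian of the projection w.r.t. t, stacked per point.
    J = np.zeros((2 * len(X), 3))
    J[0::2, 0] = 1 / Y[:, 2]
    J[0::2, 2] = -Y[:, 0] / Y[:, 2] ** 2
    J[1::2, 1] = 1 / Y[:, 2]
    J[1::2, 2] = -Y[:, 1] / Y[:, 2] ** 2
    t += np.linalg.solve(J.T @ J, J.T @ r)     # Gauss-Newton update

print(t)  # close to t_true after a few iterations
```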
Bundle adjustment
A Simplified Solution to Motion Estimation Using an Omnidirectional Camera and a 2-D LRF Sensor
Outline of motion estimation
Hoang, Van-Dung, and Kang-Hyun Jo. "A Simplified Solution to Motion Estimation Using an Omnidirectional Camera and a 2-D LRF Sensor." IEEE Transactions on Industrial Informatics 12.3 (2016): 1064-1073.
-
Motion Constraint
The epipolar constraint is described as follows:
x'T E x = 0
where x and x' are corresponding normalized image points in two sequential frames, and the essential matrix E is defined by E = [t]× R. The skew-symmetric matrix [t]× is constructed from the translation vector t = [tx, ty, tz]T. The rotation matrix R = RY RZ RX, where RY, RZ, and RX are the pitch, yaw, and roll rotation matrices, respectively. The rotation and translation parameters of the camera at sequential positions are computed by solving the equation above.
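The constraint can be checked numerically. This is a sketch with assumed example angles and translation; the two camera frames are related by P2 = R P1 + t, and the image points are normalized (calibrated) coordinates:

```python
import numpy as np

def skew(t):
    """Skew-symmetric [t]x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0.0]])

def rot(axis, a):
    """Elementary rotation matrix about the x, y, or z axis by angle a."""
    c, s = np.cos(a), np.sin(a)
    if axis == 'x':
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == 'y':
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# R = RY RZ RX (pitch, yaw, roll); roll set to zero as in the motion model.
R = rot('y', 0.05) @ rot('z', 0.1) @ rot('x', 0.0)
t = np.array([0.3, 0.02, 1.0])
E = skew(t) @ R                                # essential matrix E = [t]x R

rng = np.random.default_rng(2)
P1 = rng.uniform([-1, -1, 4], [1, 1, 8], (5, 3))  # points in camera-1 frame
P2 = P1 @ R.T + t                                  # same points in camera-2 frame
x1 = P1 / P1[:, 2:3]                               # normalized image points
x2 = P2 / P2[:, 2:3]
residual = np.abs(np.einsum('ni,ij,nj->n', x2, E, x1)).max()  # ~0
```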
-
Motion Model Analysis
Outline of Motion Model Analysis
The proposed method employs an omnidirectional camera (calibrated with respect to the road plane) and an LRF to estimate the 3-D motion with pseudo-5 degrees of freedom (DOF). The special geometrical constraint reduces the number of unknown transformation parameters. It is assumed that the vehicle motion consists only of the translation [tx, ty, tz]T and the rotation angles of pitch β and yaw α. The roll angle changes very little and is therefore ignored.
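A hypothetical sketch of this reduced parameterization, building the rigid transform from m = [tx, ty, tz, β, α] with the roll matrix fixed to the identity:

```python
import numpy as np

def motion_matrix(m):
    """4x4 homogeneous transform from the pseudo-5-DOF parameter vector
    m = [tx, ty, tz, pitch_beta, yaw_alpha]; roll is ignored (RX = I)."""
    tx, ty, tz, beta, alpha = m
    cb, sb = np.cos(beta), np.sin(beta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    RY = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])   # pitch
    RZ = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])   # yaw
    T = np.eye(4)
    T[:3, :3] = RY @ RZ          # R = RY RZ RX with RX = I
    T[:3, 3] = [tx, ty, tz]
    return T

T = motion_matrix([0.3, 0.0, 1.2, 0.05, 0.1])  # example motion
```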
Pseudo code of Motion estimation
Parametric ego-motion estimation for vehicle surround analysis using an omnidirectional camera
Outline of egomotion estimation and compensation algorithm
Gandhi, Tarak, and Mohan Trivedi. "Parametric ego-motion estimation for vehicle surround analysis using an omnidirectional camera." Machine Vision and Applications 16.2 (2005): 85-95.
Large-Scale Direct SLAM for Omnidirectional Cameras
Caruso, David, Jakob Engel, and Daniel Cremers. "Large-scale direct SLAM for omnidirectional cameras." Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on. IEEE, 2015.
Demo
-
SLAM
Simultaneous Localization and Mapping (SLAM) is an approach that reconstructs the whole environment and localizes the camera simultaneously. -
LSD-SLAM for omnidirectional cameras
This paper extends LSD-SLAM to omnidirectional cameras, describing an omnidirectional, large-scale direct SLAM system based on LSD-SLAM.