HKUST VINS (Part 1): The IMU in Nonlinear Optimization


Author: 一一臣 | Published 2017-06-27 19:51

    The newly open-sourced VINS system can fairly be called a gift to the SLAM community, and it has caused quite a stir among practitioners. On two consecutive days Prof. Shen gave public lectures on 将门创投 and 泡泡机器人, and the response was enthusiastic; probably more than a thousand people listened live to these talks packed with substance. Judging from the traffic on Prof. Shen's lab website, visits increased considerably in the days around the lectures, with IP addresses from all over the world (except Africa and Latin America), concentrated mainly in China, Europe, the United States, Japan, and Korea. That should give you an idea of how far VINS's influence reaches and how strong the enthusiasm for SLAM is.
    Enough small talk. Because I am very interested in SLAM myself, I downloaded VINS to study as soon as it was released. By now I have gone through it three or four times; plenty of details are still not fully clear to me, but I have a reasonable understanding of the overall structure and flow of the code. Anyone who has read the code knows that, compared with ORB-SLAM, VINS has very few comments, and visual-inertial fusion is hard to begin with. So starting with this post I want to write down my notes, and the points I am still puzzled by, from studying VINS (the iOS version), in the hope that this helps both me and others. My planned order is back end first, then front end. The back end is the most innovative part of VINS; it revolves around the error terms handed to Ceres Solver: the IMU residual, the reprojection residual, the loop-closure residual, and the marginalization residual. After that I will look at how the front end processes images, how initialization is done, how keyframes are selected and stored, and so on.

    This post covers:

    1. Acquiring the image data
    2. Acquiring the IMU data
    3. Synchronizing the IMU and the images
    4. IMU pre-integration
    5. Computing the IMU residual and its Jacobian in the optimization cost function

    1. Acquiring the image data

    At startup, the viewDidLoad function immediately creates CvVideoCamera, the class that captures images:

    #if VINS_FRAMEWORK
        self.videoCamera = [[CvVideoCamera alloc] init];
        ui_main = true;
    #else
        self.videoCamera = [[CvVideoCamera alloc]
                            initWithParentView:imageView];
    #endif
        
        self.videoCamera.delegate = self;
        self.videoCamera.defaultAVCaptureDevicePosition =
        AVCaptureDevicePositionBack;
        
        self.videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
        self.videoCamera.defaultAVCaptureSessionPreset =
        AVCaptureSessionPreset640x480;
    #ifdef DATA_EXPORT
        self.videoCamera.defaultFPS = 1;
    #else
        self.videoCamera.defaultFPS = 30;
    #endif
        
        isCapturing = NO;
        
        [CameraUtils setExposureOffset: -1.0f];
        //start capturing images
        [videoCamera start];
    

    Calling start begins the image capture, and each captured frame is handled by the CvVideoCameraDelegate method processImage; please do not confuse it with the processImage function of the VINS class in VINS.cpp. Since today is about the IMU, front-end questions such as how features are tracked with optical flow are left for later posts. For now, this is the only snippet you need to know:

    img_msg->point_clouds = featuretracker.image_msg;
    //img_msg callback
    m_buf.lock();
    img_msg_buf.push(img_msg);
    is_calculate = true;
    //NSLog(@"Img timestamp %lf",img_msg_buf.front()->header);
    m_buf.unlock();
    con.notify_one();
    

    That is how the image data buffer queue gets filled.
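
    The same pattern (lock the shared buffer, push the message, unlock, then wake the processing thread) is used for both the image queue and the IMU queue below. Here is a minimal, self-contained sketch of that producer/consumer scheme, with simplified message types standing in for the real shared_ptr-based ones (the names m_buf, con, img_msg_buf and imu_msg_buf mirror the globals used in the project):

    #include <condition_variable>
    #include <mutex>
    #include <queue>

    std::mutex m_buf;                    // guards both message buffers
    std::condition_variable con;         // wakes up the processing (mainLoop) thread
    std::queue<double> img_msg_buf;      // placeholder payloads just for this sketch
    std::queue<double> imu_msg_buf;

    // producer side: the camera / IMU callbacks push into the buffers
    void push_img(double img_msg)
    {
        m_buf.lock();
        img_msg_buf.push(img_msg);
        m_buf.unlock();
        con.notify_one();                // signal that new data is available
    }

    // consumer side: the processing thread blocks until both buffers have data
    void wait_for_data()
    {
        std::unique_lock<std::mutex> lk(m_buf);
        con.wait(lk, [] { return !img_msg_buf.empty() && !imu_msg_buf.empty(); });
        // ... pair up IMU and image messages while still holding the lock ...
    }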

    2. Acquiring the IMU data

    In the viewcontroller.mm file the author starts three additional threads: mainLoop, saveData, and the loop-closure thread. mainLoop is the part responsible for fusing the IMU data with the image data, and it is what I focus on. Before the mainLoop thread starts, the imuStartUpdate function creates a CMMotionManager singleton that is in charge of acquiring the IMU data.
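
    The body of imuStartUpdate is not reproduced in this post, so below is only a rough sketch of what setting up such a CMMotionManager looks like. The update intervals are my assumptions, not values taken from the project; only the CMMotionManager calls themselves come from Apple's Core Motion API (note that the accelerometer handler shown later reads motionManager.deviceMotion, so device-motion updates must be running as well):

    motionManager = [[CMMotionManager alloc] init];
    if (motionManager.accelerometerAvailable && motionManager.gyroAvailable)
    {
        // assumed rates; the real project configures its own intervals
        motionManager.accelerometerUpdateInterval = 0.01;
        motionManager.gyroUpdateInterval = 0.01;
        motionManager.deviceMotionUpdateInterval = 0.01;
        // deviceMotion supplies the attitude read in the accelerometer handler
        [motionManager startDeviceMotionUpdates];
        // the accelerometer and gyroscope handlers shown below are then registered
        // with startAccelerometerUpdatesToQueue:withHandler: and
        // startGyroUpdatesToQueue:withHandler:
    }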

    Apple's documentation introduces CMMotionManager as follows:
    A CMMotionManager object is the gateway to the motion services provided by iOS. These services provide an app with accelerometer data, rotation-rate data, magnetometer data, and other device-motion data such as attitude. These types of data originate with a device’s accelerometers and (on some models) its magnetometer and gyroscope.
    Note: Methods, properties, and data types for processing magnetometer data were introduced in iOS 5.0.
    After creating an instance of CMMotionManager, an app can use it to receive four types of motion:
    raw accelerometer data, raw gyroscope data, raw magnetometer data, and processed device-motion data (which includes accelerometer, rotation-rate, and attitude measurements). The processed device-motion data provided by Core Motion’s sensor fusion algorithms gives the device’s attitude, rotation rate, calibrated magnetic fields, the direction of gravity, and the acceleration the user is imparting to the device.
    Important: An app should create only a single instance of the [CMMotionManager] class. Multiple instances of this class can affect the rate at which data is received from the accelerometer and gyroscope.
    An app can take one of two approaches when receiving motion data, by handling it at specified update intervals or periodically sampling the motion data. With both of these approaches, the app should call the appropriate stop method ([stopAccelerometerUpdates], [stopGyroUpdates], [stopMagnetometerUpdates], and [stopDeviceMotionUpdates]) when it has finished processing accelerometer, rotation-rate, magnetometer, or device-motion data.

    With this description of CMMotionManager in mind, the code is easy to follow.
    Acquiring the accelerometer data:

    [motionManager startAccelerometerUpdatesToQueue:[NSOperationQueue currentQueue]
                                            withHandler:^(CMAccelerometerData *latestAcc, NSError *error)
         {
             double header = motionManager.deviceMotion.timestamp;
             rotation_imu << motionManager.deviceMotion.attitude.yaw * 180.0 / M_PI,  //yaw
             motionManager.deviceMotion.attitude.roll * 180.0 / M_PI,  //pitch for vins
             motionManager.deviceMotion.attitude.pitch * 180.0 / M_PI;  //roll for vins
             if(imu_prepare<10)
             {
                 imu_prepare++;
             }
             shared_ptr<IMU_MSG> acc_msg(new IMU_MSG());
             acc_msg->header = latestAcc.timestamp;
             acc_msg->acc << -latestAcc.acceleration.x * GRAVITY,
             -latestAcc.acceleration.y * GRAVITY,
             -latestAcc.acceleration.z * GRAVITY;
             cur_acc = acc_msg;
         }];
    
    

    The part of the code above inside the braces is the handler block, which is triggered every time the accelerometer delivers a new sample. Note the variable that records when ten accelerometer samples have been received; it is there for the gyroscope handler below. cur_acc always holds the latest accelerometer sample.
    The code for acquiring the gyroscope data is:

    [motionManager startGyroUpdatesToQueue:[NSOperationQueue currentQueue] withHandler:^(CMGyroData *latestGyro, NSError *error)
         {
             //The time stamp is the amount of time in seconds since the device booted.
             NSTimeInterval header = latestGyro.timestamp;
             if(header<=0)
                 return;
             if(imu_prepare < 10)
                 return;
             
             IMU_MSG gyro_msg;
             gyro_msg.header = header;
             gyro_msg.gyr << latestGyro.rotationRate.x,
             latestGyro.rotationRate.y,
             latestGyro.rotationRate.z;
             
             if(gyro_buf.size() == 0)
             {
                 gyro_buf.push_back(gyro_msg);
                 gyro_buf.push_back(gyro_msg);
                 return;
             }
             else
             {
                 gyro_buf[0] = gyro_buf[1];
                 gyro_buf[1] = gyro_msg;
             }
             //interpolation
             shared_ptr<IMU_MSG> imu_msg(new IMU_MSG());
             
             if(cur_acc->header >= gyro_buf[0].header && cur_acc->header < gyro_buf[1].header)
             {
                 imu_msg->header = cur_acc->header;
                 imu_msg->acc = cur_acc->acc;
                 imu_msg->gyr = gyro_buf[0].gyr + (cur_acc->header - gyro_buf[0].header)*(gyro_buf[1].gyr - gyro_buf[0].gyr)/(gyro_buf[1].header - gyro_buf[0].header);
                 //printf("imu gyro update %lf %lf %lf\n", gyro_buf[0].header, imu_msg->header, gyro_buf[1].header);
                 //printf("imu inte update %lf %lf %lf %lf\n", imu_msg->header, gyro_buf[0].gyr.x(), imu_msg->gyr.x(), gyro_buf[1].gyr.x());
             }
             else
             {
                 printf("imu error %lf %lf %lf\n", gyro_buf[0].header, cur_acc->header, gyro_buf[1].header);
                 return;
             }
             
             //To keep the focus on how the IMU data is acquired, the save data, start_record and predict status branches have been omitted here
    //for save data
             if(start_playback)
             {
                ...
             }
             
             if(start_record)
             {
               ...
             }
             
             m_time.lock();
             lateast_imu_time = imu_msg->header;
             m_time.unlock();
    
    #if VINS_FRAMEWORK
             //predict status
             if(!CAMERA_MODE && ENABLE_IMU_PRIDICT)
             {
                ...
             }
    #endif
             m_buf.lock();
             imu_msg_buf.push(imu_msg);
             //NSLog(@"IMU_buf timestamp %lf, acc_x = %lf",imu_msg_buf.front()->header,imu_msg_buf.front()->acc.x());
             m_buf.unlock();
             con.notify_one();
         }];
    
    

    As the code shows, the accelerometer and the gyroscope are not perfectly synchronized either; their timestamps have to be compared and the readings interpolated. The processed IMU samples are then pushed into imu_msg_buf, the IMU data buffer queue.
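
    The interpolation above is plain linear interpolation of the gyroscope reading at the accelerometer timestamp: with $t_a$ the accelerometer timestamp and $t_0 \le t_a < t_1$ the two buffered gyroscope timestamps,

    \[
      \boldsymbol{\omega}(t_a) = \boldsymbol{\omega}_0 + \frac{t_a - t_0}{t_1 - t_0}\,(\boldsymbol{\omega}_1 - \boldsymbol{\omega}_0),
    \]

    which is exactly what the gyr assignment in the handler computes.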

    3. Synchronizing the IMU and the images

    In the process function run by mainLoop, the author wraps the data synchronization in the getMeasurements function, which obtains correct pairings by comparing the IMU timestamps with the image timestamps:

    std::vector<std::pair<std::vector<ImuConstPtr>, ImgConstPtr>>
    getMeasurements()
    {
        std::vector<std::pair<std::vector<ImuConstPtr>, ImgConstPtr>> measurements;
        while (true)
        {
            if (imu_msg_buf.empty() || img_msg_buf.empty())
                return measurements;
            
            if (!(imu_msg_buf.back()->header > img_msg_buf.front()->header))
            {
                NSLog(@"wait for imu, only should happen at the beginning");
                return measurements;
            }
            
            if (!(imu_msg_buf.front()->header < img_msg_buf.front()->header))
            {
                NSLog(@"throw img, only should happen at the beginning");
                img_msg_buf.pop();
                continue;
            }
            ImgConstPtr img_msg = img_msg_buf.front();
            img_msg_buf.pop();
            
            std::vector<ImuConstPtr> IMUs;
            while (imu_msg_buf.front()->header <= img_msg->header)
            {
                IMUs.emplace_back(imu_msg_buf.front());
                imu_msg_buf.pop();
            }
            //NSLog(@"IMU_buf = %d",IMUs.size());
            measurements.emplace_back(IMUs, img_msg);
        }
        return measurements;
    }
    

    4. IMU pre-integration

    Now we come to the most important part. Once the synchronized data are available:

     std::vector<std::pair<std::vector<ImuConstPtr>, ImgConstPtr>> measurements;
        std::unique_lock<std::mutex> lk(m_buf);
        con.wait(lk, [&]
                 {
                     return (measurements = getMeasurements()).size() != 0;
                 });
        lk.unlock();
        waiting_lists = measurements.size();   
    

    process then extracts the imu_msg batches and the img_msg from measurements one pair at a time. Always keep in mind that one img_msg corresponds to a whole batch of imu_msg.

    for(auto &measurement : measurements)
    

    Processing one batch of imu_msg:

            for(auto &imu_msg : measurement.first)
            {
                //VINS processes the IMU sample
                send_imu(imu_msg);
            }
    

    Each time a batch of imu_msg is obtained, the send_imu call (via VINS::processIMU) creates exactly one pre_integration object for the current frame to carry out the pre-integration, and appends it to the pre_integrations array (in the end the array has the same size as WINDOW_SIZE).

    void send_imu(const ImuConstPtr &imu_msg)
    {
        NSTimeInterval t = imu_msg->header;
        if (current_time < 0)
            current_time = t;
        double dt = (t - current_time);
        current_time = t;
        
        double ba[]{0.0, 0.0, 0.0};
        double bg[]{0.0, 0.0, 0.0};
        
        double dx = imu_msg->acc.x() - ba[0];
        double dy = imu_msg->acc.y() - ba[1];
        double dz = imu_msg->acc.z() - ba[2];
        
        double rx = imu_msg->gyr.x() - bg[0];
        double ry = imu_msg->gyr.y() - bg[1];
        double rz = imu_msg->gyr.z() - bg[2];
        //NSLog(@"IMU %f, dt: %f, acc: %f %f %f, gyr: %f %f %f", t, dt, dx, dy, dz, rx, ry, rz);
        
        vins.processIMU(dt, Vector3d(dx, dy, dz), Vector3d(rx, ry, rz));
    }
    
    
    void VINS::processIMU(double dt, const Vector3d &linear_acceleration, const Vector3d &angular_velocity)
    {
        if (!first_imu)
        {
            first_imu = true;
            acc_0 = linear_acceleration;
            gyr_0 = angular_velocity;
        }
    //note the role played by the frame_count parameter
        if (!pre_integrations[frame_count])
        {
            pre_integrations[frame_count] = new IntegrationBase{acc_0, gyr_0, Bas[frame_count], Bgs[frame_count]};
        }
        
        if (frame_count != 0)
        {
            pre_integrations[frame_count]->push_back(dt, linear_acceleration, angular_velocity);
            
            if(solver_flag != NON_LINEAR) //comments because of recovering
            tmp_pre_integration->push_back(dt, linear_acceleration, angular_velocity);
            
            dt_buf[frame_count].push_back(dt);
            linear_acceleration_buf[frame_count].push_back(linear_acceleration);
            angular_velocity_buf[frame_count].push_back(angular_velocity);
            
            //midpoint integration
            {
                Vector3d g{0,0,GRAVITY};
                int j = frame_count;
                Vector3d un_acc_0 = Rs[j] * (acc_0 - Bas[j]) - g;
                Vector3d un_gyr = 0.5 * (gyr_0 + angular_velocity) - Bgs[j];
                Rs[j] *= Utility::deltaQ(un_gyr * dt).toRotationMatrix();
                Vector3d un_acc_1 = Rs[j] * (linear_acceleration - Bas[j]) - g;
                Vector3d un_acc = 0.5 * (un_acc_0 + un_acc_1);
                Ps[j] += dt * Vs[j] + 0.5 * dt * dt * un_acc;
                Vs[j] += dt * un_acc;
            }
            
        }
        acc_0 = linear_acceleration;
        gyr_0 = angular_velocity;
    }
    

    Every time an IMU sample is pushed into pre_integration, the integration is carried out:

     void push_back(double dt, const Eigen::Vector3d &acc, const Eigen::Vector3d &gyr)
        {
            dt_buf.push_back(dt);
            acc_buf.push_back(acc);
            gyr_buf.push_back(gyr);
            propagate(dt, acc, gyr);
        }
    
    void propagate(double _dt, const Eigen::Vector3d &_acc_1, const Eigen::Vector3d &_gyr_1)
        {
            dt = _dt;
            acc_1 = _acc_1;
            gyr_1 = _gyr_1;
            Vector3d result_delta_p;
            Quaterniond result_delta_q;
            Vector3d result_delta_v;
            Vector3d result_linearized_ba;
            Vector3d result_linearized_bg;
            
            midPointIntegration(_dt, acc_0, gyr_0, _acc_1, _gyr_1, delta_p, delta_q, delta_v,
                                linearized_ba, linearized_bg,
                                result_delta_p, result_delta_q, result_delta_v,
                                result_linearized_ba, result_linearized_bg, 1);
            
            //checkJacobian(_dt, acc_0, gyr_0, acc_1, gyr_1, delta_p, delta_q, delta_v,
            //                    linearized_ba, linearized_bg);
            delta_p = result_delta_p;
            delta_q = result_delta_q;
            delta_v = result_delta_v;
            linearized_ba = result_linearized_ba;
            linearized_bg = result_linearized_bg;
            delta_q.normalize();
            sum_dt += dt;
            acc_0 = acc_1;
            gyr_0 = gyr_1;
            
        }
    
    

    You might think propagate is where the real integration happens, but it is not; the core is in midPointIntegration. The integration formulas inside it are easy to follow, but note that the Jacobian maintained there is not the Jacobian of the IMU residual, although it is essential for computing that Jacobian later:

    //the Jacobian is updated on every step, but note that it is not the Jacobian of the IMU residual
        void midPointIntegration(double _dt,
                                 const Eigen::Vector3d &_acc_0, const Eigen::Vector3d &_gyr_0,
                                 const Eigen::Vector3d &_acc_1, const Eigen::Vector3d &_gyr_1,
                                 const Eigen::Vector3d &delta_p, const Eigen::Quaterniond &delta_q, const Eigen::Vector3d &delta_v,
                                 const Eigen::Vector3d &linearized_ba, const Eigen::Vector3d &linearized_bg,
                                 Eigen::Vector3d &result_delta_p, Eigen::Quaterniond &result_delta_q, Eigen::Vector3d &result_delta_v,
                                 Eigen::Vector3d &result_linearized_ba, Eigen::Vector3d &result_linearized_bg, bool update_jacobian)
        {
            //ROS_INFO("midpoint integration");
            Vector3d un_acc_0 = delta_q * (_acc_0 - linearized_ba);
            Vector3d un_gyr = 0.5 * (_gyr_0 + _gyr_1) - linearized_bg;
            result_delta_q = delta_q * Quaterniond(1, un_gyr(0) * _dt / 2, un_gyr(1) * _dt / 2, un_gyr(2) * _dt / 2);
            Vector3d un_acc_1 = result_delta_q * (_acc_1 - linearized_ba);
            Vector3d un_acc = 0.5 * (un_acc_0 + un_acc_1);
            result_delta_p = delta_p + delta_v * _dt + 0.5 * un_acc * _dt * _dt;
            result_delta_v = delta_v + un_acc * _dt;
            result_linearized_ba = linearized_ba;
            result_linearized_bg = linearized_bg;
            
            if(update_jacobian)
            {
                //how does this Jacobian differ from the IMU_factor Jacobian?
            
                Vector3d w_x = 0.5 * (_gyr_0 + _gyr_1) - linearized_bg;
                Vector3d a_0_x = _acc_0 - linearized_ba;
                Vector3d a_1_x = _acc_1 - linearized_ba;
                Matrix3d R_w_x, R_a_0_x, R_a_1_x;
                
                R_w_x<< 0, -w_x(2), w_x(1),
                        w_x(2), 0, -w_x(0),
                        -w_x(1), w_x(0), 0;
                
                R_a_0_x<< 0, -a_0_x(2), a_0_x(1),
                          a_0_x(2), 0, -a_0_x(0),
                          -a_0_x(1), a_0_x(0), 0;
                
                R_a_1_x<< 0, -a_1_x(2), a_1_x(1),
                          a_1_x(2), 0, -a_1_x(0),
                          -a_1_x(1), a_1_x(0), 0;
                
                MatrixXd F = MatrixXd::Zero(15, 15);
                //the role of F is not clear to me
                F.block<3, 3>(0, 0) = Matrix3d::Identity();
                F.block<3, 3>(0, 3) = -0.25 * delta_q.toRotationMatrix() * R_a_0_x * _dt * _dt +
                -0.25 * result_delta_q.toRotationMatrix() * R_a_1_x * (Matrix3d::Identity() - R_w_x * _dt) * _dt * _dt;
                F.block<3, 3>(0, 6) = MatrixXd::Identity(3,3) * _dt;
                F.block<3, 3>(0, 9) = -0.25 * (delta_q.toRotationMatrix() + result_delta_q.toRotationMatrix()) * _dt * _dt;
                F.block<3, 3>(0, 12) = -0.25 * result_delta_q.toRotationMatrix() * R_a_1_x * _dt * _dt * -_dt;
                F.block<3, 3>(3, 3) = Matrix3d::Identity() - R_w_x * _dt;
                F.block<3, 3>(3, 12) = -1.0 * MatrixXd::Identity(3,3) * _dt;
                F.block<3, 3>(6, 3) = -0.5 * delta_q.toRotationMatrix() * R_a_0_x * _dt +
                -0.5 * result_delta_q.toRotationMatrix() * R_a_1_x * (Matrix3d::Identity() - R_w_x * _dt) * _dt;
                F.block<3, 3>(6, 6) = Matrix3d::Identity();
                F.block<3, 3>(6, 9) = -0.5 * (delta_q.toRotationMatrix() + result_delta_q.toRotationMatrix()) * _dt;
                F.block<3, 3>(6, 12) = -0.5 * result_delta_q.toRotationMatrix() * R_a_1_x * _dt * -_dt;
                F.block<3, 3>(9, 9) = Matrix3d::Identity();
                F.block<3, 3>(12, 12) = Matrix3d::Identity();
            
                
                //cout<<"A"<<endl<<A<<endl;
                MatrixXd V = MatrixXd::Zero(15,18);
                V.block<3, 3>(0, 0) =  0.25 * delta_q.toRotationMatrix() * _dt * _dt;
                V.block<3, 3>(0, 3) =  0.25 * -result_delta_q.toRotationMatrix() * R_a_1_x  * _dt * _dt * 0.5 * _dt;
                V.block<3, 3>(0, 6) =  0.25 * result_delta_q.toRotationMatrix() * _dt * _dt;
                V.block<3, 3>(0, 9) =  V.block<3, 3>(0, 3);
                V.block<3, 3>(3, 3) =  0.5 * MatrixXd::Identity(3,3) * _dt;
                V.block<3, 3>(3, 9) =  0.5 * MatrixXd::Identity(3,3) * _dt;
                V.block<3, 3>(6, 0) =  0.5 * delta_q.toRotationMatrix() * _dt;
                V.block<3, 3>(6, 3) =  0.5 * -result_delta_q.toRotationMatrix() * R_a_1_x  * _dt * 0.5 * _dt;
                V.block<3, 3>(6, 6) =  0.5 * result_delta_q.toRotationMatrix() * _dt;
                V.block<3, 3>(6, 9) =  V.block<3, 3>(6, 3);
                V.block<3, 3>(9, 12) = MatrixXd::Identity(3,3) * _dt;
                V.block<3, 3>(12, 15) = MatrixXd::Identity(3,3) * _dt;
                
                //step_jacobian = F;
                //step_V = V;
                jacobian = F * jacobian;
                covariance = F * covariance * F.transpose() + V * noise * V.transpose();
            }
            
        }
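
    As a reading aid (this is my own summary, not the author's code comments), the first half of midPointIntegration is the midpoint rule applied to the pre-integrated increments $\alpha$ (position), $\beta$ (velocity) and $\gamma$ (rotation) between two image frames:

    \[
    \hat{\boldsymbol{\omega}} = \tfrac{1}{2}(\boldsymbol{\omega}_k + \boldsymbol{\omega}_{k+1}) - \mathbf{b}_g, \qquad
    \gamma_{k+1} = \gamma_k \otimes \begin{bmatrix} 1 \\ \tfrac{1}{2}\hat{\boldsymbol{\omega}}\,\delta t \end{bmatrix},
    \]
    \[
    \hat{\mathbf{a}} = \tfrac{1}{2}\bigl(\gamma_k(\mathbf{a}_k - \mathbf{b}_a) + \gamma_{k+1}(\mathbf{a}_{k+1} - \mathbf{b}_a)\bigr), \qquad
    \alpha_{k+1} = \alpha_k + \beta_k\,\delta t + \tfrac{1}{2}\hat{\mathbf{a}}\,\delta t^2, \qquad
    \beta_{k+1} = \beta_k + \hat{\mathbf{a}}\,\delta t .
    \]

    In the update_jacobian branch, F is (to first order) the one-step transition matrix of the 15-dimensional error state $(\delta\alpha, \delta\theta, \delta\beta, \delta\mathbf{b}_a, \delta\mathbf{b}_g)$, and V maps the 18-dimensional IMU noise vector into it, so the last two lines are the standard error-state recursions

    \[
    J_{k+1} = F\,J_k, \qquad P_{k+1} = F\,P_k\,F^{\top} + V\,Q\,V^{\top},
    \]

    where jacobian ($J$) accumulates the sensitivity of the pre-integrated quantities to the linearization point, in particular to the biases, which is what evaluate uses later for the first-order bias correction, and covariance ($P$) is the covariance of the pre-integrated measurement from which sqrt_info is built in the IMU factor.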
    
    

    5. Computing the IMU residual in the optimization cost function

    After a batch of imu_msg has been processed, the process function immediately goes on to the image data. Once the number of frames reaches the window size, solve_ceres adds the IMU error terms to the optimization. Before that there are some marginalization operations, which also affect the pre_integrations array; I skip them here.

      //IMU factor
        for (int i = 0; i < WINDOW_SIZE; i++)
        {
            int j = i + 1;
            IMUFactor* imu_factor = new IMUFactor(pre_integrations[j]);
            problem.AddResidualBlock(imu_factor, NULL, para_Pose[i], para_SpeedBias[i], para_Pose[j], para_SpeedBias[j]);
        }
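
    For reference, IMUFactor is a Ceres cost function with a 15-dimensional residual and four parameter blocks (pose i, speed/bias i, pose j, speed/bias j), which is why the Jacobian code further down indexes jacobians[0] through jacobians[3]. A sketch of its interface, based on my reading of the code, so treat the exact declaration as an assumption:

    #include <ceres/sized_cost_function.h>
    // IntegrationBase is the project's pre-integration class shown above;
    // the Evaluate body (residual + Jacobians) is the code discussed below.
    class IMUFactor : public ceres::SizedCostFunction<15, 7, 9, 7, 9>
    {
    public:
        IMUFactor(IntegrationBase *_pre_integration) : pre_integration(_pre_integration) {}

        virtual bool Evaluate(double const *const *parameters,
                              double *residuals, double **jacobians) const;

        IntegrationBase *pre_integration;   // one pre-integration instance per factor
    };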
    

    Notice that one IMUFactor error term corresponds to one pre_integration instance, and that the IMU error we compute is between two frames. The IMU residual is computed as follows; there are some formulas here that I do not fully understand, so if someone can point me in the right direction that would be great.

     residual = pre_integration->evaluate(Pi, Qi, Vi, Bai, Bgi,
                                                 Pj, Qj, Vj, Baj, Bgj);
     Eigen::Matrix<double, 15, 15> sqrt_info = Eigen::LLT<Eigen::Matrix<double, 15, 15>>(pre_integration->covariance.inverse()).matrixL().transpose();
            //sqrt_info.setIdentity();
            residual = sqrt_info * residual;
            
            
    
    Eigen::Matrix<double, 15, 1> evaluate(const Eigen::Vector3d &Pi, const Eigen::Quaterniond &Qi, const Eigen::Vector3d &Vi, const Eigen::Vector3d &Bai, const Eigen::Vector3d &Bgi,
                                              const Eigen::Vector3d &Pj, const Eigen::Quaterniond &Qj, const Eigen::Vector3d &Vj, const Eigen::Vector3d &Baj, const Eigen::Vector3d &Bgj)
        {
            Eigen::Matrix<double, 15, 1> residuals;
            
            Eigen::Matrix3d dp_dba = jacobian.block<3, 3>(O_P, O_BA);
            Eigen::Matrix3d dp_dbg = jacobian.block<3, 3>(O_P, O_BG);
            
            Eigen::Matrix3d dq_dbg = jacobian.block<3, 3>(O_R, O_BG);
            
            Eigen::Matrix3d dv_dba = jacobian.block<3, 3>(O_V, O_BA);
            Eigen::Matrix3d dv_dbg = jacobian.block<3, 3>(O_V, O_BG);
            
            Eigen::Vector3d dba = Bai - linearized_ba;
            Eigen::Vector3d dbg = Bgi - linearized_bg;
            
            Eigen::Quaterniond corrected_delta_q = delta_q * Utility::deltaQ(dq_dbg * dbg);
            Eigen::Vector3d corrected_delta_v = delta_v + dv_dba * dba + dv_dbg * dbg;
            Eigen::Vector3d corrected_delta_p = delta_p + dp_dba * dba + dp_dbg * dbg;
            
            //not sure why the residual is computed this way
            
            Vector3d G{0,0,GRAVITY};
            residuals.block<3, 1>(O_P, 0) = Qi.inverse() * (0.5 * G * sum_dt * sum_dt + Pj - Pi - Vi * sum_dt) - corrected_delta_p;
            residuals.block<3, 1>(O_R, 0) = 2 * (corrected_delta_q.inverse() * (Qi.inverse() * Qj)).vec();
            residuals.block<3, 1>(O_V, 0) = Qi.inverse() * (G * sum_dt + Vj - Vi) - corrected_delta_v;
            residuals.block<3, 1>(O_BA, 0) = Baj - Bai;
            residuals.block<3, 1>(O_BG, 0) = Bgj - Bgi;
            return residuals;
        }
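
    Written out as equations (my transcription of the code above), the residual compares what the states of frames i and j imply for the interval against the pre-integrated measurements, after correcting the latter to first order for the change in bias since the linearization point, using the Jacobian blocks maintained in midPointIntegration:

    \[
    \delta\mathbf{b}_a = \mathbf{b}_{a_i} - \mathbf{b}_a^{\text{lin}}, \qquad
    \delta\mathbf{b}_g = \mathbf{b}_{g_i} - \mathbf{b}_g^{\text{lin}},
    \]
    \[
    \hat{\alpha}' = \hat{\alpha} + J^{\alpha}_{b_a}\delta\mathbf{b}_a + J^{\alpha}_{b_g}\delta\mathbf{b}_g, \qquad
    \hat{\beta}' = \hat{\beta} + J^{\beta}_{b_a}\delta\mathbf{b}_a + J^{\beta}_{b_g}\delta\mathbf{b}_g, \qquad
    \hat{\gamma}' = \hat{\gamma} \otimes \begin{bmatrix} 1 \\ \tfrac{1}{2} J^{\gamma}_{b_g}\delta\mathbf{b}_g \end{bmatrix},
    \]
    \[
    \mathbf{r}_p = R_i^{\top}\bigl(\tfrac{1}{2}\mathbf{g}\,\Delta t^2 + \mathbf{p}_j - \mathbf{p}_i - \mathbf{v}_i\,\Delta t\bigr) - \hat{\alpha}', \qquad
    \mathbf{r}_{\theta} = 2\,\bigl[\hat{\gamma}'^{-1} \otimes \mathbf{q}_i^{-1} \otimes \mathbf{q}_j\bigr]_{xyz}, \qquad
    \mathbf{r}_v = R_i^{\top}\bigl(\mathbf{g}\,\Delta t + \mathbf{v}_j - \mathbf{v}_i\bigr) - \hat{\beta}',
    \]
    \[
    \mathbf{r}_{b_a} = \mathbf{b}_{a_j} - \mathbf{b}_{a_i}, \qquad
    \mathbf{r}_{b_g} = \mathbf{b}_{g_j} - \mathbf{b}_{g_i}.
    \]

    sqrt_info is the upper-triangular square root of the information matrix (the inverse of the pre-integration covariance, factored with LLT), so after multiplying by it the squared residual becomes a Mahalanobis distance.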
    

    Next comes the computation of the Jacobian matrix of the IMU residual. This is where some of the partial results in the pre_integration instance's Jacobian are used. Every entry of the jacobians array is the derivative of the IMU residual with respect to the states of the two image frames, except that the pose and the speed/bias are split into separate blocks:

                if (jacobians[0])
                {
                    Eigen::Map<Eigen::Matrix<double, 15, 7, Eigen::RowMajor>> jacobian_pose_i(jacobians[0]);
                    jacobian_pose_i.setZero();
                    
                    jacobian_pose_i.block<3, 3>(O_P, O_P) = -Qi.inverse().toRotationMatrix();
                    Vector3d G{0,0,GRAVITY};
                    jacobian_pose_i.block<3, 3>(O_P, O_R) = Utility::skewSymmetric(Qi.inverse() * (0.5 * G * sum_dt * sum_dt + Pj - Pi - Vi * sum_dt));
                    
    #if 0
                    jacobian_pose_i.block<3, 3>(O_R, O_R) = -(Qj.inverse() * Qi).toRotationMatrix();
    #else
                    Eigen::Quaterniond corrected_delta_q = pre_integration->delta_q * Utility::deltaQ(dq_dbg * (Bgi - pre_integration->linearized_bg));
                    jacobian_pose_i.block<3, 3>(O_R, O_R) = -(Utility::Qleft(Qj.inverse() * Qi) * Utility::Qright(corrected_delta_q)).bottomRightCorner<3, 3>();
    #endif
                    jacobian_pose_i.block<3, 3>(O_V, O_R) = Utility::skewSymmetric(Qi.inverse() * (G * sum_dt + Vj - Vi));
                    
                    jacobian_pose_i = sqrt_info * jacobian_pose_i;
                    
                    if (jacobian_pose_i.maxCoeff() > 1e8 || jacobian_pose_i.minCoeff() < -1e8)
                    {
                        std::cout << sqrt_info << std::endl;
                        assert(false);
                    }
                }
                if (jacobians[1])
                {
                    Eigen::Map<Eigen::Matrix<double, 15, 9, Eigen::RowMajor>> jacobian_speedbias_i(jacobians[1]);
                    jacobian_speedbias_i.setZero();
                    jacobian_speedbias_i.block<3, 3>(O_P, O_V - O_V) = -Qi.inverse().toRotationMatrix() * sum_dt;
                    jacobian_speedbias_i.block<3, 3>(O_P, O_BA - O_V) = -dp_dba;
                    jacobian_speedbias_i.block<3, 3>(O_P, O_BG - O_V) = -dp_dbg;
                    
    #if 0
                    jacobian_speedbias_i.block<3, 3>(O_R, O_BG - O_V) = -dq_dbg;
    #else
                    Eigen::Quaterniond corrected_delta_q = pre_integration->delta_q * Utility::deltaQ(dq_dbg * (Bgi - pre_integration->linearized_bg));
                    jacobian_speedbias_i.block<3, 3>(O_R, O_BG - O_V) = -Utility::Qleft(Qj.inverse() * Qi * corrected_delta_q).bottomRightCorner<3, 3>() * dq_dbg;
    #endif
                    
                    jacobian_speedbias_i.block<3, 3>(O_V, O_V - O_V) = -Qi.inverse().toRotationMatrix();
                    jacobian_speedbias_i.block<3, 3>(O_V, O_BA - O_V) = -dv_dba;
                    jacobian_speedbias_i.block<3, 3>(O_V, O_BG - O_V) = -dv_dbg;
                    
                    jacobian_speedbias_i.block<3, 3>(O_BA, O_BA - O_V) = -Eigen::Matrix3d::Identity();
                    
                    jacobian_speedbias_i.block<3, 3>(O_BG, O_BG - O_V) = -Eigen::Matrix3d::Identity();
                    
                    jacobian_speedbias_i = sqrt_info * jacobian_speedbias_i;
                    
                    assert(fabs(jacobian_speedbias_i.maxCoeff()) < 1e8);
                    assert(fabs(jacobian_speedbias_i.minCoeff()) < 1e8);
                }
                if (jacobians[2])
                {
                    Eigen::Map<Eigen::Matrix<double, 15, 7, Eigen::RowMajor>> jacobian_pose_j(jacobians[2]);
                    jacobian_pose_j.setZero();
                    
                    jacobian_pose_j.block<3, 3>(O_P, O_P) = Qi.inverse().toRotationMatrix();
                    
    #if 0
                    jacobian_pose_j.block<3, 3>(O_R, O_R) = Eigen::Matrix3d::Identity();
    #else
                    Eigen::Quaterniond corrected_delta_q = pre_integration->delta_q * Utility::deltaQ(dq_dbg * (Bgi - pre_integration->linearized_bg));
                    jacobian_pose_j.block<3, 3>(O_R, O_R) = Utility::Qleft(corrected_delta_q.inverse() * Qi.inverse() * Qj).bottomRightCorner<3, 3>();
    #endif
                    
                    jacobian_pose_j = sqrt_info * jacobian_pose_j;
                    
                    assert(fabs(jacobian_pose_j.maxCoeff()) < 1e8);
                    assert(fabs(jacobian_pose_j.minCoeff()) < 1e8);
                }
                if (jacobians[3])
                {
                    Eigen::Map<Eigen::Matrix<double, 15, 9, Eigen::RowMajor>> jacobian_speedbias_j(jacobians[3]);
                    jacobian_speedbias_j.setZero();
                    
                    jacobian_speedbias_j.block<3, 3>(O_V, O_V - O_V) = Qi.inverse().toRotationMatrix();
                    
                    jacobian_speedbias_j.block<3, 3>(O_BA, O_BA - O_V) = Eigen::Matrix3d::Identity();
                    
                    jacobian_speedbias_j.block<3, 3>(O_BG, O_BG - O_V) = Eigen::Matrix3d::Identity();
                    
                    jacobian_speedbias_j = sqrt_info * jacobian_speedbias_j;
                    
                    assert(fabs(jacobian_speedbias_j.maxCoeff()) < 1e8);
                    assert(fabs(jacobian_speedbias_j.minCoeff()) < 1e8);
                }
            }
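
    To tie this back to the residual above, here are a few of the blocks spelled out (again my transcription of the code; note that every block is finally left-multiplied by sqrt_info):

    \[
    \frac{\partial \mathbf{r}_p}{\partial \mathbf{p}_i} = -R_i^{\top}, \qquad
    \frac{\partial \mathbf{r}_p}{\partial \boldsymbol{\theta}_i} = \bigl[R_i^{\top}(\tfrac{1}{2}\mathbf{g}\,\Delta t^2 + \mathbf{p}_j - \mathbf{p}_i - \mathbf{v}_i\,\Delta t)\bigr]_{\times}, \qquad
    \frac{\partial \mathbf{r}_p}{\partial \mathbf{v}_i} = -R_i^{\top}\,\Delta t, \qquad
    \frac{\partial \mathbf{r}_p}{\partial \mathbf{b}_{a_i}} = -J^{\alpha}_{b_a},
    \]
    \[
    \frac{\partial \mathbf{r}_p}{\partial \mathbf{p}_j} = R_i^{\top}, \qquad
    \frac{\partial \mathbf{r}_v}{\partial \mathbf{v}_j} = R_i^{\top}, \qquad
    \frac{\partial \mathbf{r}_{b_a}}{\partial \mathbf{b}_{a_i}} = -I, \qquad
    \frac{\partial \mathbf{r}_{b_a}}{\partial \mathbf{b}_{a_j}} = I.
    \]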
    

    That completes the walk-through of the IMU part of the optimization back end; next time I will go through the marginalization computation.
