When collecting point clouds with a lidar, the sensor is moving while it scans: every point in a sweep is captured from a slightly different lidar position and attitude. Without correction, the frame comes out distorted; for example, a scan of a planar structure may not look planar at all. Each frame therefore needs motion-distortion correction so that it reflects the true geometry.
By sensor type, de-distortion splits into 2D lidar and 3D lidar de-distortion; by method, it splits into IMU-assisted de-distortion versus removing the motion distortion with the frame-to-frame registration transform under a constant-velocity assumption.
To summarize: a 2D lidar is usually corrected with only the orientation from the IMU, i.e. with a rotation alone. A 3D lidar is usually corrected with IMU assistance; without IMU data, a constant-velocity motion model is used instead. The constant-velocity model appears in A-LOAM, while the IMU-assisted correction model appears in the LOAM code.
This post is therefore split into two parts:
I. 2D lidar de-distortion
    1) Let's look at the 2d_lidar_undistortion-master code.
        1. It likewise uses IMU-assisted de-distortion, but only the rotation is applied; the translation is not corrected. Perhaps this is acceptable because a 2D lidar typically moves relatively slowly; it is not entirely clear. The code itself is quite conventional: find the IMU interval that contains the laser point's timestamp, take the orientation quaternions at the start and end of that interval, and interpolate between them (slerp) to obtain the rotation of the point's frame relative to the start point.
bool getLaserPose(int point_index, ros::Time &_timestamp, Eigen::Quaternionf &quat_out)
{
    static int index_front = 0;
    static int index_back = 0;
    static ros::Time timestamp_front;
    static ros::Time timestamp_back;
    static Eigen::Quaternionf quat_front, quat_back;
    static bool index_updated_flag = false;
    static bool predict_orientation_flag = false;
    if (point_index == 0)
    {
        predict_orientation_flag = false;
        int i = 0;
        while (_timestamp < imuCircularBuffer_[i].header.stamp)
        {
            i++;
        }
        index_front = i - 1;
        index_back = i;
        index_updated_flag = true;
    }
    else
    {
        while (predict_orientation_flag == false
               && _timestamp > imuCircularBuffer_[index_front].header.stamp
               && _timestamp > imuCircularBuffer_[index_back].header.stamp)
        {
            index_front--;
            index_back--;
            if (index_front < 0)
            {
                // use prediction
                predict_orientation_flag = true;
                index_front++;
                index_back++;
            }
            index_updated_flag = true;
        }
    }
    if (index_updated_flag == true)
    {
        // cout << "index updated: " << index_front << " " << index_back << endl;
        timestamp_front = imuCircularBuffer_[index_front].header.stamp;
        timestamp_back = imuCircularBuffer_[index_back].header.stamp;
        quat_front = Eigen::Quaternionf(imuCircularBuffer_[index_front].orientation.w,
                                        imuCircularBuffer_[index_front].orientation.x,
                                        imuCircularBuffer_[index_front].orientation.y,
                                        imuCircularBuffer_[index_front].orientation.z);
        quat_back = Eigen::Quaternionf(imuCircularBuffer_[index_back].orientation.w,
                                       imuCircularBuffer_[index_back].orientation.x,
                                       imuCircularBuffer_[index_back].orientation.y,
                                       imuCircularBuffer_[index_back].orientation.z);
        index_updated_flag = false;
    }
    float alpha = (float)(_timestamp.toNSec() - timestamp_back.toNSec())
                  / (timestamp_front.toNSec() - timestamp_back.toNSec());
    if (alpha < 0)
    {
        return false;
    }
    // spherical linear interpolation (slerp)
    quat_out = quat_back.slerp(alpha, quat_front);
    return true;
}
        2. The correction method is likewise fairly simple.
The interpolated rotation transforms the laser point into the world frame; then the inverse of the start point's rotation is applied to that world point, bringing it into the start point's current frame. In other words, the start point's frame is taken to have identity rotation and zero translation (0, 0, 0). This completes the de-distortion of the point.
// if the pose of the current scan point was obtained successfully
if (getLaserPose(i, point_timestamp, point_quat) == true)
{
    Eigen::Quaternionf delta_quat = current_quat.inverse() * point_quat;
    Eigen::Vector3f point_out = delta_quat * point_in;
    point_xyzi.x = point_out(0);
    point_xyzi.y = point_out(1);
    point_xyzi.z = 0.0;
    point_xyzi.intensity = current_point.intensity;
    pointcloud_pcl.push_back(point_xyzi);
}
A 2D lidar correction essentially does not use the translation component at all.
II. 3D lidar de-distortion
1) De-distortion with an IMU
    1. FAST-LIO's point cloud de-distortion
The core lines:
M3D R_i(R_imu * Exp(angvel_avr, dt)); — the IMU attitude at the moment this point was sampled.
V3D P_i(it_pcl->x, it_pcl->y, it_pcl->z); — the point's coordinates in the lidar frame.
V3D T_ei(pos_imu + vel_imu * dt + 0.5 * acc_imu * dt * dt - imu_state.pos); — the IMU position at the point's timestamp (predicted from velocity and acceleration), relative to the scan-end position imu_state.pos.
V3D P_compensate = imu_state.offset_R_L_I.conjugate() * (imu_state.rot.conjugate() * (R_i * (imu_state.offset_R_L_I * P_i + imu_state.offset_T_L_I) + T_ei) - imu_state.offset_T_L_I);// not accurate!
Breaking it down:
imu_state.offset_R_L_I and imu_state.offset_T_L_I are the rotation and translation from the lidar frame to the IMU frame.
1. Transform the point from the lidar frame into the current IMU frame:
imu_state.offset_R_L_I * P_i + imu_state.offset_T_L_I
2. Transform the point from the current IMU frame into the IMU world frame:
(R_i * (imu_state.offset_R_L_I * P_i + imu_state.offset_T_L_I) + T_ei)
3. Transform the point into the scan-end IMU frame (imu_state holds the state at the end of the scan, as the code comment "Transform to the 'end' frame" notes):
(imu_state.rot.conjugate() * (R_i * (imu_state.offset_R_L_I * P_i + imu_state.offset_T_L_I) + T_ei)
4. Transform the point from the scan-end IMU frame into the corresponding lidar frame:
imu_state.offset_R_L_I.conjugate() * (imu_state.rot.conjugate() * (R_i * (imu_state.offset_R_L_I * P_i + imu_state.offset_T_L_I) + T_ei) - imu_state.offset_T_L_I);
This completes the de-distortion of the laser points.
The full code is as follows:
if (lidar_type != MARSIM)
{
    auto it_pcl = pcl_out.points.end() - 1;
    for (auto it_kp = IMUpose.end() - 1; it_kp != IMUpose.begin(); it_kp--)
    {
        auto head = it_kp - 1;
        auto tail = it_kp;
        R_imu << MAT_FROM_ARRAY(head->rot);
        // cout << "head imu acc: " << acc_imu.transpose() << endl;
        vel_imu << VEC_FROM_ARRAY(head->vel);
        pos_imu << VEC_FROM_ARRAY(head->pos);
        acc_imu << VEC_FROM_ARRAY(tail->acc);
        angvel_avr << VEC_FROM_ARRAY(tail->gyr);
        for (; it_pcl->curvature / double(1000) > head->offset_time; it_pcl--)
        {
            dt = it_pcl->curvature / double(1000) - head->offset_time;
            /* Transform to the 'end' frame, using only the rotation
             * Note: Compensation direction is INVERSE of Frame's moving direction
             * So if we want to compensate a point at timestamp-i to the frame-e
             * P_compensate = R_imu_e ^ T * (R_i * P_i + T_ei) where T_ei is represented in global frame */
            M3D R_i(R_imu * Exp(angvel_avr, dt));
            V3D P_i(it_pcl->x, it_pcl->y, it_pcl->z);
            V3D T_ei(pos_imu + vel_imu * dt + 0.5 * acc_imu * dt * dt - imu_state.pos);
            V3D P_compensate = imu_state.offset_R_L_I.conjugate() * (imu_state.rot.conjugate() * (R_i * (imu_state.offset_R_L_I * P_i + imu_state.offset_T_L_I) + T_ei) - imu_state.offset_T_L_I); // not accurate!

            // save Undistorted points and their rotation
            it_pcl->x = P_compensate(0);
            it_pcl->y = P_compensate(1);
            it_pcl->z = P_compensate(2);

            if (it_pcl == pcl_out.points.begin()) break;
        }
    }
}
2) Distortion correction in LOAM
1. First, remove the gravity component from the IMU's linear acceleration.
void imuHandler(const sensor_msgs::Imu::ConstPtr& imuIn)
{
    double roll, pitch, yaw;
    tf::Quaternion orientation;
    // convert Quaternion msg to Quaternion
    tf::quaternionMsgToTF(imuIn->orientation, orientation);
    // This will get the roll pitch and yaw from the matrix about fixed axes X, Y, Z respectively.
    // That's R = Rz(yaw)*Ry(pitch)*Rx(roll). Here roll pitch yaw is in the global frame.
    tf::Matrix3x3(orientation).getRPY(roll, pitch, yaw);
    // Subtract the effect of gravity to get the true acceleration along each axis, and swap
    // the axes into a right-handed frame with z forward and x to the left. After the swap,
    // RPY corresponds to fixed axes ZXY (RPY---ZXY). Now R = Ry(yaw)*Rx(pitch)*Rz(roll).
    float accX = imuIn->linear_acceleration.y - sin(roll) * cos(pitch) * 9.81;
    float accY = imuIn->linear_acceleration.z - cos(roll) * cos(pitch) * 9.81;
    float accZ = imuIn->linear_acceleration.x + sin(pitch) * 9.81;
    // circular shift, forming a ring buffer
    imuPointerLast = (imuPointerLast + 1) % imuQueLength;
    imuTime[imuPointerLast] = imuIn->header.stamp.toSec();
    imuRoll[imuPointerLast] = roll;
    imuPitch[imuPointerLast] = pitch;
    imuYaw[imuPointerLast] = yaw;
    imuAccX[imuPointerLast] = accX;
    imuAccY[imuPointerLast] = accY;
    imuAccZ[imuPointerLast] = accZ;
    AccumulateIMUShift();
}
2. Integrate the IMU data.
The first point of the current frame is assumed to start from zero velocity, so shiftX, shiftY, and shiftZ do not correspond exactly to the real-world motion.
// integrate velocity and displacement
void AccumulateIMUShift()
{
    float roll = imuRoll[imuPointerLast];
    float pitch = imuPitch[imuPointerLast];
    float yaw = imuYaw[imuPointerLast];
    float accX = imuAccX[imuPointerLast];
    float accY = imuAccY[imuPointerLast];
    float accZ = imuAccZ[imuPointerLast];

    // Rotate the current acceleration by (roll, pitch, yaw) about the swapped ZXY
    // fixed axes (the original XYZ) to express it in the world frame (right-hand rule).
    // rotate about z (roll)
    float x1 = cos(roll) * accX - sin(roll) * accY;
    float y1 = sin(roll) * accX + cos(roll) * accY;
    float z1 = accZ;
    // rotate about x (pitch)
    float x2 = x1;
    float y2 = cos(pitch) * y1 - sin(pitch) * z1;
    float z2 = sin(pitch) * y1 + cos(pitch) * z1;
    // rotate about y (yaw)
    accX = cos(yaw) * x2 + sin(yaw) * z2;
    accY = y2;
    accZ = -sin(yaw) * x2 + cos(yaw) * z2;

    // previous imu sample
    int imuPointerBack = (imuPointerLast + imuQueLength - 1) % imuQueLength;
    // time since the previous sample, i.e. the imu measurement period
    double timeDiff = imuTime[imuPointerLast] - imuTime[imuPointerBack];
    // the imu rate must be at least higher than the lidar's for the data to be
    // usable and for the later correction to make sense
    if (timeDiff < scanPeriod) {
        // (implicitly assumes the motion starts from rest)
        // integrate displacement and velocity at each imu sample, treating the
        // motion between two samples as uniformly accelerated straight-line motion
        /* A question: imuShiftX/Y/Z and imuVeloX/Y/Z all start at 0, so does the
           device have to start collecting from rest? Otherwise the integration
           yields not the true velocity, only one relative to an assumed zero
           initial state. A zero initial position just fixes the origin, but how
           can a zero initial velocity be justified? */
        imuShiftX[imuPointerLast] = imuShiftX[imuPointerBack] + imuVeloX[imuPointerBack] * timeDiff + accX * timeDiff * timeDiff / 2;
        imuShiftY[imuPointerLast] = imuShiftY[imuPointerBack] + imuVeloY[imuPointerBack] * timeDiff + accY * timeDiff * timeDiff / 2;
        imuShiftZ[imuPointerLast] = imuShiftZ[imuPointerBack] + imuVeloZ[imuPointerBack] * timeDiff + accZ * timeDiff * timeDiff / 2;
        imuVeloX[imuPointerLast] = imuVeloX[imuPointerBack] + accX * timeDiff;
        imuVeloY[imuPointerLast] = imuVeloY[imuPointerBack] + accY * timeDiff;
        imuVeloZ[imuPointerLast] = imuVeloZ[imuPointerBack] + accZ * timeDiff;
    }
}
3. Then, naturally, comes the interpolation.
// point time = cloud start time + within-scan time
if (imuPointerLast >= 0) { // if IMU data was received, use it to correct the distortion
    float pointTime = relTime * scanPeriod; // the point's time within the scan period
    // search for the first IMU sample whose timestamp exceeds this point's: imuPointerFront
    while (imuPointerFront != imuPointerLast) {
        if (timeScanCur + pointTime < imuTime[imuPointerFront]) {
            break;
        }
        imuPointerFront = (imuPointerFront + 1) % imuQueLength;
    }
    if (timeScanCur + pointTime > imuTime[imuPointerFront]) {
        // not found: imuPointerFront == imuPointerLast, so fall back to the newest
        // IMU velocity, displacement, and Euler angles for this point
        imuRollCur = imuRoll[imuPointerFront];
        imuPitchCur = imuPitch[imuPointerFront];
        imuYawCur = imuYaw[imuPointerFront];
        imuVeloXCur = imuVeloX[imuPointerFront];
        imuVeloYCur = imuVeloY[imuPointerFront];
        imuVeloZCur = imuVeloZ[imuPointerFront];
        imuShiftXCur = imuShiftX[imuPointerFront];
        imuShiftYCur = imuShiftY[imuPointerFront];
        imuShiftZCur = imuShiftZ[imuPointerFront];
    } else {
        // found: the point lies between imuPointerBack and imuPointerFront, so
        // linearly interpolate its velocity, displacement, and Euler angles
        int imuPointerBack = (imuPointerFront + imuQueLength - 1) % imuQueLength;
        // interpolation weights proportional to time distance, i.e. linear interpolation
        float ratioFront = (timeScanCur + pointTime - imuTime[imuPointerBack]) / (imuTime[imuPointerFront] - imuTime[imuPointerBack]);
        float ratioBack = (imuTime[imuPointerFront] - timeScanCur - pointTime) / (imuTime[imuPointerFront] - imuTime[imuPointerBack]);
        imuRollCur = imuRoll[imuPointerFront] * ratioFront + imuRoll[imuPointerBack] * ratioBack;
        imuPitchCur = imuPitch[imuPointerFront] * ratioFront + imuPitch[imuPointerBack] * ratioBack;
        if (imuYaw[imuPointerFront] - imuYaw[imuPointerBack] > M_PI) {
            imuYawCur = imuYaw[imuPointerFront] * ratioFront + (imuYaw[imuPointerBack] + 2 * M_PI) * ratioBack;
        } else if (imuYaw[imuPointerFront] - imuYaw[imuPointerBack] < -M_PI) {
            imuYawCur = imuYaw[imuPointerFront] * ratioFront + (imuYaw[imuPointerBack] - 2 * M_PI) * ratioBack;
        } else {
            imuYawCur = imuYaw[imuPointerFront] * ratioFront + imuYaw[imuPointerBack] * ratioBack;
        }
        // in essence: imuVeloXCur = imuVeloX[imuPointerBack] + (imuVeloX[imuPointerFront] - imuVeloX[imuPointerBack]) * ratioFront
        imuVeloXCur = imuVeloX[imuPointerFront] * ratioFront + imuVeloX[imuPointerBack] * ratioBack;
        imuVeloYCur = imuVeloY[imuPointerFront] * ratioFront + imuVeloY[imuPointerBack] * ratioBack;
        imuVeloZCur = imuVeloZ[imuPointerFront] * ratioFront + imuVeloZ[imuPointerBack] * ratioBack;
        imuShiftXCur = imuShiftX[imuPointerFront] * ratioFront + imuShiftX[imuPointerBack] * ratioBack;
        imuShiftYCur = imuShiftY[imuPointerFront] * ratioFront + imuShiftY[imuPointerBack] * ratioBack;
        imuShiftZCur = imuShiftZ[imuPointerFront] * ratioFront + imuShiftZ[imuPointerBack] * ratioBack;
    }
    if (i == 0) {
        // first point: remember the velocity, displacement, and Euler angles
        // at the start of the cloud
        imuRollStart = imuRollCur;
        imuPitchStart = imuPitchCur;
        imuYawStart = imuYawCur;
        imuVeloXStart = imuVeloXCur;
        imuVeloYStart = imuVeloYCur;
        imuVeloZStart = imuVeloZCur;
        imuShiftXStart = imuShiftXCur;
        imuShiftYStart = imuShiftYCur;
        imuShiftZStart = imuShiftZCur;
    } else {
        // for every later point, compute the displacement and velocity distortion
        // relative to the first point caused by non-uniform (accelerated) motion,
        // and re-compensate the point's position accordingly
        ShiftToStartIMU(pointTime);
        VeloToStartIMU();
        TransformToStartIMU(&point);
    }
}
laserCloudScans[scanID].push_back(point); // push the compensated point into the container for its ring
}
The main functions are:
ShiftToStartIMU(pointTime);
VeloToStartIMU();
TransformToStartIMU(&point);
        ShiftToStartIMU(pointTime) removes the displacement produced by the constant-velocity component of the motion, keeping only the displacement caused by acceleration, and transforms that acceleration-induced displacement into the start point's frame.
VeloToStartIMU() likewise transforms the velocity into the frame of the first point.
TransformToStartIMU(&point) transforms the current point from the current IMU frame into the IMU world frame, then into the IMU start-point frame, and finally adds the acceleration-induced displacement (expressed in the start frame) computed by ShiftToStartIMU.
This completes the de-distortion of the point cloud.
3) Distortion correction in A-LOAM
1. First, what do these two parameters mean? Ceres optimizes para_q and para_t, which underlie the variables q_last_curr and t_last_curr:
Eigen::Map<Eigen::Quaterniond> q_last_curr(para_q);
Eigen::Map<Eigen::Vector3d> t_last_curr(para_t);
2. Now let's see how the distortion is removed.
        De-distortion is done mainly in TransformToStart, while TransformToEnd transforms points from the start frame into the end frame. The transform between start and end is exactly q_last_curr and t_last_curr; in other words, q_last_curr and t_last_curr take a point in the current frame into the start (previous) frame's coordinate system.
        The distortion is removed by interpolating q_last_curr and t_last_curr. The interpolation factor comes from each point's relative time, and that time is computed from the horizontal angle difference.
        The correction assumes the motion from start to end happens at constant velocity.
Point 1: interpolating the translation
Since t_last_curr is, by construction, the position of the last point's frame origin (by analogy with the IMU) expressed in the start frame — that is, by the time the last point is captured the lidar has moved to t_last_curr in the start frame — the interpolated translation is simply s * t_last_curr.
Point 2: interpolating the rotation
Likewise, under the constant-velocity assumption from start to end, the angular velocity is constant too, so a quaternion slerp directly yields the frame of the current point i.
// undistort lidar point
void TransformToStart(PointType const *const pi, PointType *const po) // de-distortion
{
    // interpolation ratio
    double s;
    if (DISTORTION) // false by default: the KITTI lidar data is already motion-compensated
        s = (pi->intensity - int(pi->intensity)) / SCAN_PERIOD;
    else
        s = 1.0;
    // s = 1;
    // Motion compensation: bring every point of the frame, each with its own timestamp,
    // to a common timestamp. This needs the pose at each point's timestamp, obtained
    // here under the constant-velocity model. slerp() performs spherical linear
    // interpolation; s selects where between the two quaternions to interpolate.
    Eigen::Quaterniond q_point_last = Eigen::Quaterniond::Identity().slerp(s, q_last_curr);
    Eigen::Vector3d t_point_last = s * t_last_curr;
    Eigen::Vector3d point(pi->x, pi->y, pi->z);
    Eigen::Vector3d un_point = q_point_last * point + t_point_last;

    po->x = un_point.x();
    po->y = un_point.y();
    po->z = un_point.z();
    po->intensity = pi->intensity;
}

// transform all lidar points to the start of the next frame
void TransformToEnd(PointType const *const pi, PointType *const po)
{
    // undistort the point first
    pcl::PointXYZI un_point_tmp;
    TransformToStart(pi, &un_point_tmp);

    Eigen::Vector3d un_point(un_point_tmp.x, un_point_tmp.y, un_point_tmp.z);
    Eigen::Vector3d point_end = q_last_curr.inverse() * (un_point - t_last_curr);

    po->x = point_end.x();
    po->y = point_end.y();
    po->z = point_end.z();
    // remove distortion time info
    po->intensity = int(pi->intensity);
}
The constant-velocity distortion correction analyzed in step 2 also shows up inside the Ceres optimization:
the same constant-velocity interpolation yields the corrected frame of the current point.
q_last_curr = q_identity.slerp(T(s), q_last_curr);
Eigen::Matrix<T, 3, 1> t_last_curr{T(s) * t[0], T(s) * t[1], T(s) * t[2]};
The full code:
struct LidarEdgeFactor
{
    LidarEdgeFactor(Eigen::Vector3d curr_point_, Eigen::Vector3d last_point_a_,
                    Eigen::Vector3d last_point_b_, double s_)
        : curr_point(curr_point_), last_point_a(last_point_a_),
          last_point_b(last_point_b_), s(s_) {}

    template <typename T>
    bool operator()(const T *q, const T *t, T *residual) const
    {
        Eigen::Matrix<T, 3, 1> cp{T(curr_point.x()), T(curr_point.y()), T(curr_point.z())};
        Eigen::Matrix<T, 3, 1> lpa{T(last_point_a.x()), T(last_point_a.y()), T(last_point_a.z())};
        Eigen::Matrix<T, 3, 1> lpb{T(last_point_b.x()), T(last_point_b.y()), T(last_point_b.z())};

        // Eigen::Quaternion<T> q_last_curr{q[3], T(s) * q[0], T(s) * q[1], T(s) * q[2]};
        Eigen::Quaternion<T> q_last_curr{q[3], q[0], q[1], q[2]};
        Eigen::Quaternion<T> q_identity{T(1), T(0), T(0), T(0)};
        q_last_curr = q_identity.slerp(T(s), q_last_curr);
        Eigen::Matrix<T, 3, 1> t_last_curr{T(s) * t[0], T(s) * t[1], T(s) * t[2]};

        Eigen::Matrix<T, 3, 1> lp;
        lp = q_last_curr * cp + t_last_curr;

        Eigen::Matrix<T, 3, 1> nu = (lp - lpa).cross(lp - lpb); // cross product
        Eigen::Matrix<T, 3, 1> de = lpa - lpb;

        residual[0] = nu.x() / de.norm();
        residual[1] = nu.y() / de.norm();
        residual[2] = nu.z() / de.norm();

        return true;
    }

    Eigen::Vector3d curr_point, last_point_a, last_point_b;
    double s;
};