Part 1: Recording the bag
I am using a ZED2i camera.
rosbag record -O 32 /zed2i/zed_node/imu/data /zed2i/zed_node/imu/data_raw /zed2i/zed_node/left/image_rect_color /zed2i/zed_node/right/image_rect_color /zed2i/zed_node/left_raw/image_raw_color /zed2i/zed_node/right_raw/image_raw_color
I recorded stereo images both with the camera's own undistortion applied and in their raw form. If you are also using a ZED camera, I recommend this blog post to understand what each topic means:
雙目立體視覺(3)- ZED2 & ROS Melodic 發布RGB圖像及深度信息_zed2官方文檔-CSDN博客
My setup: Ubuntu 18.04, CUDA 11.4, ZED SDK 3.8.
Image resolution is set to 1280x720 at 10 Hz, IMU at 300 Hz, and I recorded for about 1 minute. The official docs say throttling the images down to 4 Hz is best, but in earlier calibrations I found it also works without changing this. If you do want to throttle, you can use the following command.
For example:
rosrun topic_tools throttle messages /zed2i/zed_node/right_raw/image_raw_color 4.0 /right/image_raw
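After recording, it is worth sanity-checking the bag with `rosbag info`. The message counts it prints can be compared against what the topic rates predict; below is a minimal sketch of that arithmetic (the 60 s duration and the rates are the ones used in this post, adjust to your own recording):

```python
# Estimate expected per-topic message counts for a recorded bag,
# to compare against the "messages" column printed by `rosbag info`.

def expected_messages(duration_s, rate_hz):
    """Approximate number of messages for one topic."""
    return int(duration_s * rate_hz)

duration = 60.0  # I recorded about 1 minute

# topic -> publish rate in Hz (values used in this post)
rates = {
    "/zed2i/zed_node/left/image_rect_color": 10.0,
    "/zed2i/zed_node/right/image_rect_color": 10.0,
    "/zed2i/zed_node/imu/data": 300.0,
}

for topic, hz in rates.items():
    print(topic, "~", expected_messages(duration, hz), "msgs")
```

If the actual counts are far below these estimates, the recording likely dropped frames.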
Below are my configuration files, with a few caveats I worked out by trial and error; I hope they help you avoid the same pitfalls:
1. zed2i.yaml
# params/zed2i.yaml
# Parameters for Stereolabs ZED2 camera
---
general:
    camera_model: 'zed2i'

depth:
    min_depth: 0.7 # Min: 0.2, Max: 3.0 - Default 0.7 - Note: reducing this value will require more computational power and GPU memory
    max_depth: 10.0 # Max: 40.0

pos_tracking:
    imu_fusion: true # enable/disable IMU fusion. When set to false, only the optical odometry will be used.

sensors:
    sensors_timestamp_sync: false # Synchronize Sensors messages timestamp with latest received frame. Do NOT enable this: it drastically lowers the IMU rate
    max_pub_rate: 200. # max frequency of publishing of sensors data. MAX: 400. - MIN: grab rate. This is the IMU rate; set to 400 the actual value hovers around 360
    publish_imu_tf: true # publish `IMU -> <cam_name>_left_camera_frame` TF

object_detection:
    od_enabled: false # True to enable Object Detection [not available for ZED]
    model: 0 # '0': MULTI_CLASS_BOX - '1': MULTI_CLASS_BOX_ACCURATE - '2': HUMAN_BODY_FAST - '3': HUMAN_BODY_ACCURATE - '4': MULTI_CLASS_BOX_MEDIUM - '5': HUMAN_BODY_MEDIUM - '6': PERSON_HEAD_BOX
    confidence_threshold: 50 # Minimum value of the detection confidence of an object [0,100]
    max_range: 15. # Maximum detection range
    object_tracking_enabled: true # Enable/disable the tracking of the detected objects
    body_fitting: false # Enable/disable body fitting for 'HUMAN_BODY_X' models
    mc_people: true # Enable/disable the detection of persons for 'MULTI_CLASS_BOX_X' models
    mc_vehicle: true # Enable/disable the detection of vehicles for 'MULTI_CLASS_BOX_X' models
    mc_bag: true # Enable/disable the detection of bags for 'MULTI_CLASS_BOX_X' models
    mc_animal: true # Enable/disable the detection of animals for 'MULTI_CLASS_BOX_X' models
    mc_electronics: true # Enable/disable the detection of electronic devices for 'MULTI_CLASS_BOX_X' models
    mc_fruit_vegetable: true # Enable/disable the detection of fruits and vegetables for 'MULTI_CLASS_BOX_X' models
    mc_sport: true # Enable/disable the detection of sport-related objects for 'MULTI_CLASS_BOX_X' models
2. common.yaml
# params/common.yaml
# Common parameters to Stereolabs ZED and ZED mini cameras
---
# Dynamic parameters cannot have a namespace
brightness: 4 # Dynamic
contrast: 4 # Dynamic
hue: 0 # Dynamic
saturation: 4 # Dynamic
sharpness: 4 # Dynamic
gamma: 8 # Dynamic - Requires SDK >=v3.1
auto_exposure_gain: true # Dynamic
gain: 100 # Dynamic - works only if `auto_exposure_gain` is false
exposure: 100 # Dynamic - works only if `auto_exposure_gain` is false
auto_whitebalance: true # Dynamic
whitebalance_temperature: 42 # Dynamic - works only if `auto_whitebalance` is false
depth_confidence: 30 # Dynamic
depth_texture_conf: 100 # Dynamic
pub_frame_rate: 10.0 # Dynamic - frequency of publishing of video and depth data
point_cloud_freq: 10.0 # Dynamic - frequency of the pointcloud publishing (equal or less to `grab_frame_rate` value)

general:
    camera_name: zed # A name for the camera (can be different from camera model and node name and can be overwritten by the launch file)
    zed_id: 0
    serial_number: 0
    resolution: 2 # '0': HD2K, '1': HD1080, '2': HD720, '3': VGA
    grab_frame_rate: 10 # Frequency of frame grabbing for internal SDK operations
    gpu_id: -1
    base_frame: 'base_link' # must be equal to the frame_id used in the URDF file
    verbose: false # Enable info message by the ZED SDK
    svo_compression: 2 # `0`: LOSSLESS, `1`: AVCHD, `2`: HEVC
    self_calib: true # enable/disable self calibration at starting
    camera_flip: false

video:
    img_downsample_factor: 1 # Resample factor for images [0.01,1.0] The SDK works with native image sizes, but publishes rescaled images. Scale factor: e.g. 0.5 halves both image width and height
    extrinsic_in_camera_frame: true # if `false` extrinsic parameter in `camera_info` will use ROS native frame (X FORWARD, Z UP) instead of the camera frame (Z FORWARD, Y DOWN) [`true` use old behavior as for version < v3.1]

depth:
    quality: 3 # '0': NONE, '1': PERFORMANCE, '2': QUALITY, '3': ULTRA, '4': NEURAL
    # Depth map quality level. '0' NONE: no depth map is generated. '1' PERFORMANCE: favors frame rate, for applications needing higher FPS. '2' QUALITY: favors accuracy, for applications needing more precise depth, at the cost of frame rate. '3' ULTRA: highest quality depth, requires strong compute. '4' NEURAL: neural-network-enhanced depth, aiming for more accurate and detailed depth information.
    sensing_mode: 0 # '0': STANDARD, '1': FILL (do not use FILL for robotic applications)
    depth_stabilization: 1 # `0`: disabled, `1`: enabled
    openni_depth_mode: false # 'false': 32bit float meters, 'true': 16bit uchar millimeters
    depth_downsample_factor: 1 # Resample factor for depth data matrices [0.01,1.0] The SDK works with native data sizes, but publishes rescaled matrices (depth map, point cloud, ...)

pos_tracking:
    pos_tracking_enabled: false # True to enable positional tracking from start
    publish_tf: true # publish `odom -> base_link` TF
    publish_map_tf: true # publish `map -> odom` TF
    map_frame: 'map' # main frame
    odometry_frame: 'odom' # odometry frame
    area_memory_db_path: 'zed_area_memory.area' # file loaded when the node starts to restore the "known visual features" map
    save_area_memory_db_on_exit: false # save the "known visual features" map when the node is correctly closed to the path indicated by `area_memory_db_path`
    area_memory: true # Enable to detect loop closure
    floor_alignment: false # Enable to automatically calculate camera/floor offset
    initial_base_pose: [0.0,0.0,0.0, 0.0,0.0,0.0] # Initial position of the `base_frame` -> [X, Y, Z, R, P, Y]
    init_odom_with_first_valid_pose: true # Enable to initialize the odometry with the first valid pose
    path_pub_rate: 2.0 # Camera trajectory publishing frequency
    path_max_count: -1 # use '-1' for unlimited path size
    two_d_mode: false # Force navigation on a plane. If true the Z value will be fixed to "fixed_z_value", roll and pitch to zero
    fixed_z_value: 0.00 # Value to be used for Z coordinate if `two_d_mode` is true

mapping:
    mapping_enabled: false # True to enable mapping and fused point cloud publication
    resolution: 0.05 # maps resolution in meters [0.01f, 0.2f]
    max_mapping_range: -1 # maximum depth range while mapping in meters (-1 for automatic calculation) [2.0, 20.0]
    fused_pointcloud_freq: 1.0 # frequency of the publishing of the fused colored point cloud
    clicked_point_topic: '/clicked_point' # Topic published by Rviz when a point of the cloud is clicked. Used for plane detection
Part 2: Using Kalibr
1. Stereo camera calibration
First, there are plenty of installation tutorials and no obvious pitfalls, so you can search for one yourself, or simply build the version I have already adjusted.
Error: ImportError: No module named igraph
Fix: sudo apt-get install python2.7-igraph
Calibrate the stereo pair first:
rosrun kalibr kalibr_calibrate_cameras --bag '/home/lb/bag/3/32.bag' --topics /zed2i/zed_node/left/image_rect_color /zed2i/zed_node/right/image_rect_color --models pinhole-radtan pinhole-radtan --target '/home/lb/calib/kalibr_workspace/april_6x6.yaml'
Meaning of the command (GPT-generated):
rosrun: a ROS command used to run a specific node from a ROS package.
kalibr: the name of the ROS package being run.
kalibr_calibrate_cameras: the node/executable inside the kalibr package; it performs the camera calibration.
--bag '/home/lb/bag/3/32.bag': the ROS bag file to calibrate from. A ROS bag is a convenient way to store ROS message data.
--topics /zed2i/zed_node/left/image_rect_color /zed2i/zed_node/right/image_rect_color: the topics in the bag to use for calibration; here, the left and right camera images.
--models pinhole-radtan pinhole-radtan: the camera models to calibrate, a pinhole model with radial-tangential distortion for each of the two cameras.
--target '/home/lb/calib/kalibr_workspace/april_6x6.yaml': the YAML file describing the calibration target, a pattern of known geometry used to calibrate the cameras.
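For intuition, the pinhole-radtan model named above maps a 3D point to pixels by normalizing by depth, applying radial (k1, k2) and tangential (p1, p2) distortion, then scaling by the focal lengths and principal point. A minimal sketch of the model (the intrinsic numbers below are made up for illustration, not my calibration result):

```python
def project_pinhole_radtan(P, fu, fv, cu, cv, k1, k2, p1, p2):
    """Project a 3D point (X, Y, Z), Z > 0, to pixel coordinates using
    the pinhole model with radial-tangential distortion
    (the 'pinhole-radtan' model name passed to Kalibr)."""
    X, Y, Z = P
    x, y = X / Z, Y / Z                 # normalized image coordinates
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return fu * xd + cu, fv * yd + cv

# Hypothetical intrinsics, roughly plausible for 1280x720 (NOT my result)
fu, fv, cu, cv = 520.0, 520.0, 640.0, 360.0
k1, k2, p1, p2 = -0.04, 0.01, 0.0005, -0.0003

u, v = project_pinhole_radtan((0.1, -0.2, 2.0), fu, fv, cu, cv, k1, k2, p1, p2)
print(u, v)
```

Kalibr estimates exactly these parameters per camera (intrinsics [fu, fv, cu, cv] and distortion_coeffs [k1, k2, p1, p2]) and writes them to the output camchain YAML.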
After waiting patiently,
result:
a reprojection error around 1 pixel, which is usable.
2. IMU calibration
Use imu_utils for the calibration. It depends on the ceres-solver optimization library, so install Ceres first.
Also, imu_utils depends on code_utils, so build code_utils first, then download and build imu_utils.
Record for about 2 hours while keeping the camera completely still.
Result:
Create a new imu.yaml and fill in:
rostopic: /zed2i/zed_node/imu/data
update_rate: 300.0 # Hz
accelerometer_noise_density: 1.8370034577350359e-02
accelerometer_random_walk: 3.1364398099665192e-04
gyroscope_noise_density: 2.2804362845695965e-03
gyroscope_random_walk: 4.7394104123124287e-05
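These are continuous-time densities (m/s²/√Hz, m/s³/√Hz, rad/s/√Hz, rad/s²/√Hz), which is what Kalibr expects. If a downstream filter instead wants discrete per-sample standard deviations at the 300 Hz IMU rate, the usual conversion is sigma_d = density / sqrt(dt) for white noise and sigma_bd = random_walk * sqrt(dt) for the bias random walk. A quick sketch using the numbers above:

```python
import math

rate = 300.0       # update_rate from imu.yaml, in Hz
dt = 1.0 / rate    # sampling interval

# Continuous-time values from imu_utils (same numbers as imu.yaml above)
accel_noise_density = 1.8370034577350359e-02   # m/s^2/sqrt(Hz)
accel_random_walk   = 3.1364398099665192e-04   # m/s^3/sqrt(Hz)
gyro_noise_density  = 2.2804362845695965e-03   # rad/s/sqrt(Hz)
gyro_random_walk    = 4.7394104123124287e-05   # rad/s^2/sqrt(Hz)

def discrete_white_noise_sigma(density, dt):
    # sigma_d = sigma_c / sqrt(dt)
    return density / math.sqrt(dt)

def discrete_random_walk_sigma(rw, dt):
    # sigma_bd = sigma_b * sqrt(dt)
    return rw * math.sqrt(dt)

print("accel sigma_d :", discrete_white_noise_sigma(accel_noise_density, dt))
print("gyro  sigma_d :", discrete_white_noise_sigma(gyro_noise_density, dt))
print("accel sigma_bd:", discrete_random_walk_sigma(accel_random_walk, dt))
print("gyro  sigma_bd:", discrete_random_walk_sigma(gyro_random_walk, dt))
```

Note that many people inflate the imu_utils values somewhat before feeding them to a VIO system, since the lab-bench noise tends to be optimistic in the field.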
3. Joint camera-IMU calibration
rosrun kalibr kalibr_calibrate_imu_camera --bag '/home/lb/bag/3/32.bag' --target '/home/lb/calib/kalibr_workspace/april_6x6.yaml' --cam '/home/lb/bag/3/32-camchain.yaml' --imu '/home/lb/calib/kalibr_workspace/imu.yaml'
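kalibr_calibrate_imu_camera reports the camera-IMU extrinsic as T_cam_imu (IMU frame to camera frame) in its result camchain. Some VIO frameworks want the inverse, T_imu_cam. For a rigid transform [R | t] the inverse is [Rᵀ | -Rᵀt], which needs no matrix library; the example transform below is made up for illustration, not my calibration output:

```python
def invert_rigid_transform(T):
    """Invert a 4x4 homogeneous rigid transform given as nested lists.
    The inverse of [R | t] is [R^T | -R^T t]."""
    R = [row[:3] for row in T[:3]]
    t = [row[3] for row in T[:3]]
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]            # R^T
    t_inv = [-sum(Rt[i][k] * t[k] for k in range(3)) for i in range(3)]
    return [Rt[0] + [t_inv[0]],
            Rt[1] + [t_inv[1]],
            Rt[2] + [t_inv[2]],
            [0.0, 0.0, 0.0, 1.0]]

# Hypothetical T_cam_imu: identity rotation, small lever arm (meters)
T_cam_imu = [[1.0, 0.0, 0.0,  0.02],
             [0.0, 1.0, 0.0, -0.005],
             [0.0, 0.0, 1.0,  0.01],
             [0.0, 0.0, 0.0,  1.0]]

T_imu_cam = invert_rigid_transform(T_cam_imu)
print(T_imu_cam)
```

Check which direction your VIO framework's config expects before pasting Kalibr's matrix in; getting it backwards is a classic source of diverging trajectories.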