Capturing TUM-format color and depth images from a Kinect v2 under ROS

Preparation:
1. Install iai_kinect2 on Ubuntu 16.04.
2. Run roslaunch kinect2_bridge kinect2_bridge.launch.
3. Run rosrun save_rgbd_from_kinect2 save_rgbd_from_kinect2 to start saving images.

The code that saves the Kinect v2 images is listed below; the complete project can be downloaded from my GitHub: https://github.com/Serena2018/save_color_depth_from_kinect2_with_ros/tree/master/save_rgbd_from_kinect2

The problem: I captured rgb and depth images with the first version of this tool and used the associate.py script provided with the TUM dataset (listed at the bottom of this article) to pair each color image with a depth image. (It was during this work that I realized the depth and color images produced by a depth camera are not captured in exact one-to-one correspondence in time — if you doubt it, look at the TUM dataset.)

After all this I assumed my dataset was flawless — until today, when I ran it through ORB-SLAM's RGB-D interface and hit a big problem: tracking was not continuous and kept jumping backward. For example, a person in the scene would walk past, and a moment later appear to step back to where they had been.

To track the problem down, I started with the raw data. Playing back the images showed that both streams change smoothly; there is no jumping back and forth in either the color or the depth images themselves.

Next I checked whether associate.py was at fault. I ran the script on the TUM dataset itself, generated the corresponding association.txt, and tested that data through ORB-SLAM's RGB-D interface: no jumping occurred, so the script was not the problem.

The remaining difference between my dataset and the TUM dataset was the timestamps: the fractional part of my saved timestamps was not always 6 digits long. When tv_usec has fewer than six digits, its leading zeros were simply dropped, leaving a fractional part shorter than 6 digits. Since every other variable I could think of was the same, I reasoned that if I could also guarantee a 6-digit fractional part, the problem might go away. So I changed the original code

os_rgb << time_val.tv_sec << "." <<time_val.tv_usec;
os_dep << time_val.tv_sec << "."<<time_val.tv_usec;

to

os_rgb << time_val.tv_sec << "." << setiosflags(ios::fixed) << setprecision(6)
       << std::setfill('0') << std::setw(6) << time_val.tv_usec;
os_dep << time_val.tv_sec << "." << setiosflags(ios::fixed) << setprecision(6)
       << std::setfill('0') << std::setw(6) << time_val.tv_usec;

With this change, the fractional part of every image timestamp is guaranteed to be 6 digits. (Strictly speaking, only setfill('0') and setw(6) do the padding here; ios::fixed and setprecision(6) affect floating-point output, and tv_usec is an integer.)

I regenerated association.txt and ran ORB-SLAM2's RGB-D interface again: the jumping problem was gone. It seemed almost unbelievable, but that is the fact. How can it be explained? Without zero-padding, a tv_usec value such as 4215 µs is written as ".4215" and later parsed as 0.4215 s instead of 0.004215 s, so the affected timestamps are inflated by up to two orders of magnitude. Associating and ordering frames by these corrupted stamps shuffles their true order, which is exactly the back-and-forth jumping seen during tracking.
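A minimal Python sketch (not part of the original tool; the sec/usec values are made up for illustration) makes the failure mode concrete: formatting tv_usec without zero-padding inflates the parsed fractional part.

```python
# Mimic the two C++ formatting variants and compare the parsed timestamps.
def stamp_unpadded(sec, usec):
    # mimics: os << tv_sec << "." << tv_usec;   (leading zeros of usec are lost)
    return "%d.%d" % (sec, usec)

def stamp_padded(sec, usec):
    # mimics: os << tv_sec << "." << setfill('0') << setw(6) << tv_usec;
    return "%d.%06d" % (sec, usec)

sec, usec = 1494650000, 4215              # 4215 us should read as 0.004215 s
print(stamp_unpadded(sec, usec))          # 1494650000.4215   -> parsed 100x too large
print(stamp_padded(sec, usec))            # 1494650000.004215 -> correct
```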

/*
 * Function: capture the color and depth images published by iai_kinect2
 * and store them as files.
 * Field separator: comma ','
 * Timestamps are in seconds (s), accurate to 6 decimal places (us).
 * maker: crp
 * 2017-5-13
 */
#include <iostream>
#include <sstream>
#include <stdio.h>
#include <stdlib.h>
#include <string>
#include <vector>
#include <fstream>
#include <iomanip>    // setfill, setw, setprecision
#include <sys/time.h> // gettimeofday

#include <ros/ros.h>
#include <ros/spinner.h>
#include <sensor_msgs/CameraInfo.h>
#include <sensor_msgs/Image.h>
#include <std_msgs/String.h>

#include <cv_bridge/cv_bridge.h>         // converts sensor_msgs/Image to cv::Mat
#include <sensor_msgs/image_encodings.h> // encoding helpers for sensor_msgs/Image

#include <image_transport/image_transport.h>
#include <image_transport/subscriber_filter.h>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

Mat rgb, depth;
char successed_flag1 = 0, successed_flag2 = 0;

string topic1_name = "/kinect2/qhd/image_color"; // topic name
string topic2_name = "/kinect2/qhd/image_depth_rect";

string filename_rgbdata = "/home/yunlei/recordData/RGBD/rgbdata.txt";
string filename_depthdata = "/home/yunlei/recordData/RGBD/depthdata.txt";
string save_imagedata = "/home/yunlei/recordData/RGBD/";

void dispDepth(const cv::Mat &in, cv::Mat &out, const float maxValue);
void callback_function_color(const sensor_msgs::Image::ConstPtr image_data);
void callback_function_depth(const sensor_msgs::Image::ConstPtr image_data);

int main(int argc, char **argv) {
  string out_result;
  // namedWindow("image color", CV_WINDOW_AUTOSIZE);
  // namedWindow("image depth", CV_WINDOW_AUTOSIZE);
  ros::init(argc, argv, "kinect2_listen");
  if (!ros::ok())
    return 0;
  ros::NodeHandle n;
  ros::Subscriber sub1 = n.subscribe(topic1_name, 30, callback_function_color);
  ros::Subscriber sub2 = n.subscribe(topic2_name, 30, callback_function_depth);
  ros::AsyncSpinner spinner(3); // use 3 threads
  spinner.start();

  string rgb_str, dep_str;
  struct timeval time_val;
  struct timezone tz;
  double time_stamp;

  ofstream fout_rgb(filename_rgbdata.c_str());
  if (!fout_rgb) {
    cerr << filename_rgbdata << " file not exist" << endl;
  }
  ofstream fout_depth(filename_depthdata.c_str());
  if (!fout_depth) {
    cerr << filename_depthdata << " file not exist" << endl;
  }

  while (ros::ok()) {
    if (successed_flag1) {
      gettimeofday(&time_val, &tz); // us
      // time_stamp = time_val.tv_sec + time_val.tv_usec / 1000000.0;
      ostringstream os_rgb;
      // os_rgb.setf(std::ios::fixed);
      // os_rgb.precision(6);
      os_rgb << time_val.tv_sec << "." << setiosflags(ios::fixed)
             << setprecision(6) << std::setfill('0') << setw(6)
             << time_val.tv_usec;
      rgb_str = save_imagedata + "rgb/" + os_rgb.str() + ".png";
      imwrite(rgb_str, rgb);
      fout_rgb << os_rgb.str() << ",rgb/" << os_rgb.str() << ".png\n";
      successed_flag1 = 0;
      // imshow("image color", rgb);
      cout << "rgb -- time:  " << time_val.tv_sec << "."
           << setiosflags(ios::fixed) << setprecision(6) << std::setfill('0')
           << setw(6) << time_val.tv_usec << endl;
      // waitKey(1);
    }
    if (successed_flag2) {
      gettimeofday(&time_val, &tz); // us
      ostringstream os_dep;
      // os_dep.setf(std::ios::fixed);
      // os_dep.precision(6);
      os_dep << time_val.tv_sec << "." << setiosflags(ios::fixed)
             << setprecision(6) << std::setfill('0') << setw(6)
             << time_val.tv_usec;
      dep_str = save_imagedata + "depth/" + os_dep.str() + ".png"; // output path
      imwrite(dep_str, depth);
      fout_depth << os_dep.str() << ",depth/" << os_dep.str() << ".png\n";
      successed_flag2 = 0;
      // imshow("image depth", depth);
      cout << "depth -- time:" << time_val.tv_sec << "."
           << setiosflags(ios::fixed) << setprecision(6) << std::setfill('0')
           << setw(6) << time_val.tv_usec << endl;
    }
  }
  ros::waitForShutdown();
  ros::shutdown();
  return 0;
}

void callback_function_color(const sensor_msgs::Image::ConstPtr image_data) {
  cv_bridge::CvImageConstPtr pCvImage; // CvImage pointer instance
  // extract the image from the ROS message into a cv image
  pCvImage = cv_bridge::toCvShare(image_data, image_data->encoding);
  pCvImage->image.copyTo(rgb);
  successed_flag1 = 1;
}

void callback_function_depth(const sensor_msgs::Image::ConstPtr image_data) {
  Mat temp;
  cv_bridge::CvImageConstPtr pCvImage; // CvImage pointer instance
  // extract the image from the ROS message into a cv image
  pCvImage = cv_bridge::toCvShare(image_data, image_data->encoding);
  pCvImage->image.copyTo(depth);
  // dispDepth(temp, depth, 12000.0f);
  successed_flag2 = 1;
  // imshow("Mat depth", depth / 256);
  // cv::waitKey(1);
}

void dispDepth(const cv::Mat &in, cv::Mat &out, const float maxValue) {
  cv::Mat tmp = cv::Mat(in.rows, in.cols, CV_8U);
  const uint32_t maxInt = 255;
#pragma omp parallel for
  for (int r = 0; r < in.rows; ++r) {
    const uint16_t *itI = in.ptr<uint16_t>(r);
    uint8_t *itO = tmp.ptr<uint8_t>(r);
    for (int c = 0; c < in.cols; ++c, ++itI, ++itO) {
      *itO = (uint8_t)std::min((*itI * maxInt / maxValue), 255.0f);
    }
  }
  cv::applyColorMap(tmp, out, COLORMAP_JET);
}
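A quick way to sanity-check a recording produced by this node is to verify the two properties the bug violated: every stamp in rgbdata.txt / depthdata.txt has a 6-digit fractional part, and the stamps increase monotonically. This is a hypothetical helper, not part of the original project; the file contents below are made-up examples of its comma-separated "stamp,relative_path" lines.

```python
# Sketch: validate timestamps from a recorded rgbdata.txt / depthdata.txt.
def check_stamps(lines):
    stamps = []
    for line in lines:
        stamp = line.split(",")[0]
        sec, frac = stamp.split(".")
        # the fix above guarantees exactly 6 fractional digits
        assert len(frac) == 6, "fractional part must have 6 digits: " + stamp
        stamps.append(float(stamp))
    # corrupted (unpadded) stamps typically break monotonicity
    assert stamps == sorted(stamps), "timestamps must be monotonic"
    return stamps

good = ["1494650000.004215,rgb/1494650000.004215.png",
        "1494650000.037001,rgb/1494650000.037001.png"]
print(check_stamps(good))  # [1494650000.004215, 1494650000.037001]
```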


The associate.py script:

#!/usr/bin/python
# Software License Agreement (BSD License)
#
# Copyright (c) 2013, Juergen Sturm, TUM
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above
#    copyright notice, this list of conditions and the following
#    disclaimer in the documentation and/or other materials provided
#    with the distribution.
#  * Neither the name of TUM nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
#
# Requirements:
# sudo apt-get install python-argparse

"""
The Kinect provides the color and depth images in an un-synchronized way. This means that the set of time stamps from the color images do not intersect with those of the depth images. Therefore, we need some way of associating color images to depth images.

For this purpose, you can use the ''associate.py'' script. It reads the time stamps from the rgb.txt file and the depth.txt file, and joins them by finding the best matches.
"""

import argparse
import sys
import os
import numpy

def read_file_list(filename):
    """
    Reads a trajectory from a text file.

    File format:
    The file format is "stamp d1 d2 d3 ...", where stamp denotes the time stamp (to be matched)
    and "d1 d2 d3.." is arbitary data (e.g., a 3D position and 3D orientation) associated to this timestamp.

    Input:
    filename -- File name

    Output:
    dict -- dictionary of (stamp,data) tuples
    """
    file = open(filename)
    data = file.read()
    lines = data.replace(","," ").replace("\t"," ").split("\n")
    list = [[v.strip() for v in line.split(" ") if v.strip()!=""] for line in lines if len(line)>0 and line[0]!="#"]
    list = [(float(l[0]),l[1:]) for l in list if len(l)>1]
    return dict(list)

def associate(first_list, second_list, offset, max_difference):
    """
    Associate two dictionaries of (stamp,data). As the time stamps never match exactly,
    we aim to find the closest match for every input tuple.

    Input:
    first_list -- first dictionary of (stamp,data) tuples
    second_list -- second dictionary of (stamp,data) tuples
    offset -- time offset between both dictionaries (e.g., to model the delay between the sensors)
    max_difference -- search radius for candidate generation

    Output:
    matches -- list of matched tuples ((stamp1,data1),(stamp2,data2))
    """
    first_keys = first_list.keys()
    second_keys = second_list.keys()
    potential_matches = [(abs(a - (b + offset)), a, b)
                         for a in first_keys
                         for b in second_keys
                         if abs(a - (b + offset)) < max_difference]
    potential_matches.sort()
    matches = []
    for diff, a, b in potential_matches:
        if a in first_keys and b in second_keys:
            first_keys.remove(a)
            second_keys.remove(b)
            matches.append((a, b))
    matches.sort()
    return matches

if __name__ == '__main__':
    # parse command line
    parser = argparse.ArgumentParser(description='''
    This script takes two data files with timestamps and associates them
    ''')
    parser.add_argument('first_file', help='first text file (format: timestamp data)')
    parser.add_argument('second_file', help='second text file (format: timestamp data)')
    parser.add_argument('--first_only', help='only output associated lines from first file', action='store_true')
    parser.add_argument('--offset', help='time offset added to the timestamps of the second file (default: 0.0)', default=0.0)
    parser.add_argument('--max_difference', help='maximally allowed time difference for matching entries (default: 0.02)', default=0.015)
    args = parser.parse_args()

    first_list = read_file_list(args.first_file)
    second_list = read_file_list(args.second_file)
    matches = associate(first_list, second_list, float(args.offset), float(args.max_difference))

    if args.first_only:
        for a,b in matches:
            print("%f %s"%(a," ".join(first_list[a])))
    else:
        for a,b in matches:
            print("%f %s %f %s"%(a," ".join(first_list[a]),b-float(args.offset)," ".join(second_list[b])))
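To see what the greedy nearest-timestamp matching above actually does, here is a minimal, self-contained re-implementation (Python 3, using sets instead of Python 2 list keys) exercised on made-up example stamps; the dictionaries stand in for rgb.txt and depth.txt contents.

```python
# Greedy closest-pair association, as in associate.py: sort all candidate
# pairs by time difference and take each stamp at most once.
def associate(first, second, offset=0.0, max_difference=0.02):
    first_keys = set(first)
    second_keys = set(second)
    candidates = sorted(
        (abs(a - (b + offset)), a, b)
        for a in first_keys for b in second_keys
        if abs(a - (b + offset)) < max_difference
    )
    matches = []
    for _, a, b in candidates:  # closest pairs claim their stamps first
        if a in first_keys and b in second_keys:
            first_keys.remove(a)
            second_keys.remove(b)
            matches.append((a, b))
    return sorted(matches)

rgb   = {1.000000: ["rgb/1.000000.png"], 1.033333: ["rgb/1.033333.png"]}
depth = {1.004215: ["depth/1.004215.png"], 1.038000: ["depth/1.038000.png"]}
print(associate(rgb, depth))  # [(1.0, 1.004215), (1.033333, 1.038)]
```

Each rgb stamp is paired with the depth stamp nearest in time, and pairs farther apart than max_difference are never matched, which is why correct timestamps matter so much for the resulting association.txt.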
