In BundleFusion's dataset format, the input that exists before the .sens file is generated consists of color images, depth images, and a pose file, and the poses in that file actually vary from frame to frame. So my hunch is that ground-truth poses can be written into this pose file, and the reconstruction will then be computed from the poses passed in. To test this, I started from the TUM dataset, which provides color images, depth images, a file association.txt that pairs them by timestamp, and a groundtruth file. However, groundtruth.txt contains far more entries than association.txt (the former is sampled much more densely), so I split the task into the following steps:
1. Match each timestamp in association.txt against groundtruth.txt. The timestamps are never exactly equal, so take the groundtruth entry whose timestamp is closest.
2. Take the color and depth images of the matched frames from their original directories, rename them to the image naming format BundleFusion expects, and copy them into a separate directory.
3. Save the matched pose data stored in groundtruth.txt, then convert the quaternion-represented rotations into rotation matrices.
4. Combine each translation vector and rotation matrix into an SE(3) pose matrix and write it to a file.
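Steps 3 and 4 hinge on one detail worth checking up front: TUM's groundtruth.txt stores each pose as tx ty tz qx qy qz qw, and SciPy's `Rotation.from_quat` likewise defaults to scalar-last (x, y, z, w) ordering, so the four quaternion columns can be passed through unchanged. A minimal sketch, with made-up values:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# TUM groundtruth rows are: timestamp tx ty tz qx qy qz qw.
# SciPy's Rotation.from_quat also defaults to scalar-last (x, y, z, w),
# so the quaternion fields need no reordering.
qx, qy, qz, qw = 0.0, 0.0, 0.7071068, 0.7071068  # 90 degrees about z
rot = R.from_quat([qx, qy, qz, qw]).as_matrix()   # 3x3 rotation matrix
t = np.array([[1.0], [2.0], [3.0]])               # made-up translation
# stack [R | t] on top of the constant homogeneous row [0 0 0 1]
pose = np.vstack((np.hstack((rot, t)), np.array([[0.0, 0.0, 0.0, 1.0]])))
print(np.round(pose, 3))
```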
Because I call the read_file_list() and associate() functions from the associate.py script provided on the TUM dataset website directly, associate.py needs to be copied from the TUM dataset website and placed inside the project.
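For intuition, the nearest-timestamp matching that associate() performs can be sketched roughly as below. This is a simplified greedy version for illustration, not the official implementation (which also supports a fixed time offset between the two clocks):

```python
def match_nearest(stamps_a, stamps_b, max_diff=0.02):
    """For each timestamp in stamps_a, find the closest unused timestamp
    in stamps_b; keep the pair only if they differ by at most max_diff
    seconds (greedy one-to-one matching)."""
    matches = []
    used = set()
    for a in stamps_a:
        best = min((b for b in stamps_b if b not in used),
                   key=lambda b: abs(a - b), default=None)
        if best is not None and abs(a - best) <= max_diff:
            matches.append((a, best))
            used.add(best)
    return matches

print(match_nearest([1.00, 1.05], [0.99, 1.02, 1.049]))
```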
"""
step 1: read the two files, association.txt and groundtruth.txt.
step 2: read the first column (the timestamp) of each file, then associate the first file with the second one to find the matches.
step 3: for all matches, get all the corresponding rotations, represented as quaternions, and transform them into rotation matrices.
step 4: combine each rotation matrix generated above, the translation vector and the constant row [0, 0, 0, 1] into a 4x4 pose matrix.
"""
import numpy as np
from scipy.spatial.transform import Rotation as R
import associate
import os
import shutil


def read_files(files_path):
    print(files_path)
    association = files_path + "association.txt"
    groundtruth = files_path + "groundtruth.txt"
    rgb_path = files_path + "rgb/"
    depth_path = files_path + "depth/"
    bf_data_path = files_path + "bf_data/"
    os.makedirs(bf_data_path, exist_ok=True)
    print(rgb_path)
    print(depth_path)

    # read_file_list() returns {timestamp: [data columns]} dictionaries
    assoc_list = associate.read_file_list(association)
    ground_list = associate.read_file_list(groundtruth)
    print("length of assoc_list", len(assoc_list))
    print("length of ground_list", len(ground_list))

    # pair each association.txt timestamp with the nearest groundtruth.txt
    # timestamp, at most 0.02 s apart
    matches = associate.associate(assoc_list, ground_list, 0.0, 0.02)
    final_rgbs = [match[0] for match in matches]

    # collect the rgb/depth file names of the matched frames; each
    # association.txt row is: rgb_ts rgb/xxx.png depth_ts depth/xxx.png
    rgbs = []
    depths = []
    for stamp, data in assoc_list.items():
        if stamp in final_rgbs:
            rgbs.append(data[0].split("/")[1])
            depths.append(data[2].split("/")[1])

    rgb_images = os.listdir(rgb_path)
    depth_images = os.listdir(depth_path)

    # copy and rename to BundleFusion's frame-XXXXXX.color.png /
    # frame-XXXXXX.depth.png convention
    rgb_id = 0
    for rgb_name in rgbs:
        if rgb_name in rgb_images:
            shutil.copyfile(rgb_path + rgb_name,
                            bf_data_path + "frame-" + str(rgb_id).zfill(6) + ".color.png")
            rgb_id += 1
    depth_id = 0
    for depth_name in depths:
        if depth_name in depth_images:
            shutil.copyfile(depth_path + depth_name,
                            bf_data_path + "frame-" + str(depth_id).zfill(6) + ".depth.png")
            depth_id += 1
    print("length of matches", len(matches))

    # look up the groundtruth pose of every match by its timestamp
    ground_dict = dict(ground_list)
    groundtruth_list = [ground_dict[match[1]] for match in matches]
    print("length of groundtruth", len(groundtruth_list))

    quaternion2rotation(groundtruth_list, bf_data_path)


def quaternion2rotation(groundtruth_list, bf_data_path):
    # constant last row of a homogeneous 4x4 pose matrix
    row_vec = np.array([0, 0, 0, 1], dtype=np.float32)[np.newaxis, :]
    frame_id = 0
    for pose in groundtruth_list:
        # groundtruth.txt columns after the timestamp: tx ty tz qx qy qz qw;
        # read_file_list() leaves them as strings, so cast to float here
        translation = np.array([pose[0], pose[1], pose[2]], dtype=np.float32)[:, np.newaxis]
        quaternion = np.array([pose[3], pose[4], pose[5], pose[6]], dtype=np.float64)
        rotation = R.from_quat(quaternion)  # scipy expects (x, y, z, w)
        m34 = np.concatenate((rotation.as_matrix(), translation), axis=1)
        m44 = np.concatenate((m34, row_vec), axis=0)
        # write the 4x4 pose into frame-XXXXXX.pose.txt
        with open(bf_data_path + "frame-" + str(frame_id).zfill(6) + ".pose.txt", 'w') as fp:
            for row in m44:
                fp.write(' '.join(str(i) for i in row) + '\n')
        frame_id += 1


if __name__ == '__main__':
    read_files("/home/yunlei/Datasets/TUM/rgbd_dataset_freiburg1_teddy/")
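After running the script, it is easy to confirm that bf_data/ ended up with one color image, one depth image and one pose file per matched frame. `count_frames` below is a hypothetical helper for that check, not part of the script above:

```python
import os

def count_frames(bf_data_path):
    """Count the frame-XXXXXX.color.png / .depth.png / .pose.txt files in
    a BundleFusion input directory; the three counts should be equal."""
    names = os.listdir(bf_data_path)
    colors = sum(n.endswith(".color.png") for n in names)
    depths = sum(n.endswith(".depth.png") for n in names)
    poses = sum(n.endswith(".pose.txt") for n in names)
    return colors, depths, poses
```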