Automating the Retrieval and Processing of Compressed Archives with Python

  • Results
  • Usage

Results

[Image: final output files]
[Image: generated image files]
[Image: directory of the lcm files extracted from the pulled archives and their parsed output]

Usage

Run ./run.sh
The script's main steps are as follows (a hedged Python sketch of steps 1-5 appears after this list):
1. Pull the data updated at the specified time from the remote server
2. Copy the zip files in that data into a designated folder
3. After extraction, delete all files except the lcm files
4. Create an out folder and run the lcm-parsing executable, saving all parsed files there
5. Create an analysis folder and run the analysis script on the contents of out, saving the five generated images and three documents into analysis
6. The five generated images are:
GNSS status, GNSS status explanation, centerline difference comparison, GNSS QC, and lane line data
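
Since run.sh itself is not included in the post, here is a minimal Python sketch of steps 1-5. Everything in it is a placeholder assumption, not the author's actual configuration: the remote source REMOTE, the local folder layout, and the tool names parse_lcm and analysis.py are illustrative only.

# Hypothetical sketch of the run.sh pipeline (steps 1-5); REMOTE, the folder
# layout, and the tool names parse_lcm / analysis.py are assumptions.
import subprocess
import zipfile
from pathlib import Path

REMOTE = "user@server:/data/updates/"   # assumed remote data source
WORK = Path("work")                     # local folder receiving the zips
OUT = WORK / "out"                      # parsed lcm output (step 4)
ANALYSIS = WORK / "analysis"            # analysis results (step 5)


def pull_from_remote():
    """Step 1: pull recently updated data from the remote server.

    rsync only transfers changed files; filtering by update time would
    normally happen on the remote side (e.g. find -newermt) and is omitted."""
    WORK.mkdir(exist_ok=True)
    subprocess.run(["rsync", "-av", REMOTE, str(WORK)], check=True)


def unzip_and_keep_lcm():
    """Steps 2-3: extract every zip, then drop everything except .lcm files."""
    for zip_path in WORK.glob("*.zip"):
        with zipfile.ZipFile(zip_path) as zf:
            zf.extractall(WORK)
    for f in WORK.rglob("*"):
        if f.is_file() and f.suffix not in (".lcm", ".zip"):
            f.unlink()


def parse_and_analyse():
    """Steps 4-5: run the lcm parser into out/, then the analysis script."""
    OUT.mkdir(exist_ok=True)
    ANALYSIS.mkdir(exist_ok=True)
    for lcm_file in WORK.glob("*.lcm"):
        subprocess.run(["./parse_lcm", str(lcm_file), "-o", str(OUT)],
                       check=True)
    subprocess.run(["python3", "analysis.py", str(OUT) + "/"], check=True)


if __name__ == "__main__":
    pull_from_remote()
    unzip_and_keep_lcm()
    parse_and_analyse()

The analysis script invoked in the last step is the one listed below.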

#!/usr/bin/python
# -*- coding: utf-8 -*-
import matplotlib.pyplot as plt
import sys
import numpy as np
import re
import math
import scipy.spatial as spt  # SciPy spatial algorithms; k-d tree nearest-point lookup
import csv  # for saving data to file
import pandas as pd
import cv2
import os
import time
import statistics
# the original also had `import datetime`, which the class import below shadowed
from datetime import datetime, timedelta
from collections import Counter

def write_conclution(data_path, status, last_time_list):
    value_counts = Counter(status)
    # proportion of each status value over the whole run
    total_count = len(status)
    value_ratios = {value: count / total_count
                    for value, count in value_counts.items()}
    # mode of the jump durations
    mode = statistics.mode(last_time_list)
    minutes = mode // 60000
    seconds = (mode // 1000) % 60
    milliseconds = mode % 1000
    mode = "{:02d}'{:02d}''{:03d}'''".format(minutes, seconds, milliseconds)
    # minimum jump duration
    minimum = min(last_time_list)
    minutes = minimum // 60000
    seconds = (minimum // 1000) % 60
    milliseconds = minimum % 1000
    minimum = "{:02d}'{:02d}''{:03d}'''".format(minutes, seconds, milliseconds)
    # mean jump duration
    mean_time = int(statistics.mean(last_time_list))
    minutes = mean_time // 60000
    seconds = (mean_time // 1000) % 60
    milliseconds = mean_time % 1000
    mean_time = "{:02d}'{:02d}''{:03d}'''".format(minutes, seconds, milliseconds)
    # maximum jump duration, formatted for the report
    maxmum = max(last_time_list)
    minutes = maxmum // 60000
    seconds = (maxmum // 1000) % 60
    milliseconds = maxmum % 1000
    maxmum = "{:02d}'{:02d}''{:03d}'''".format(minutes, seconds, milliseconds)
    # report template; the blanks (______, %, ranges) are left for the analyst
    # to fill in by hand, as in the original
    with open(data_path + "conclution.csv", 'w') as loc_f:
        loc_f.write(
            "1. Overview\n"
            "1.1 Route summary\n"
            "\n"
            "This route is ______, totalling ____ km;\n"
            "Positioning status is largely normal over the whole route; most of the time it is 4 (this status accounts for __% of the trip).\n"
            "Status changes: the status tends to jump to 1 when the vehicle passes under an overpass; passing a short overhead billboard spanning the road produces a 4->5->4 change.\n"
            "When passing under several bridges within a short time (generally no more than 5 s between leaving one bridge or overhead object and entering the next), the status jumps repeatedly 4->5(2)->1->0->1->5(2)->4, lasting no more than 6 seconds.\n"
            "In a tunnel (a closed environment except for its entrance and exit) the status stays at 0.\n"
            "\n"
            "1.2 Anomaly description\n"
            "\n"
            "localization_conclution\n"
            "\n"
            "1.3 Summary table of anomaly points\n"
            "\n"
            "The attachment is the table of positioning-status changes over the whole route.\n"
            "\n"
            "Meaning of the gnss status values:\n"
            "Status 4 is the highest-precision solution, achieved through ground-based augmentation;\n"
            "Status 5 or 2 is a slightly lower-precision solution, also via ground-based augmentation, usually with fewer usable satellites or a larger differential age, mostly in unfavourable environments such as tree-lined roads or urban canyons;\n"
            "Status 1 is plain satellite positioning without ground-based augmentation; with normal differential data transmission it appears when very few satellites are tracked (fewer than 10) or the differential age is very large (more than 30 s);\n"
            "Status 0 means no fix: too few satellites to solve for a position.\n"
            "\n"
            "\n"
            "1.4 Positioning summary for this segment:\n"
            "\n")
    with open(data_path + "conclution.csv", 'a') as loc_f:
        for value, ratio in value_ratios.items():
            loc_f.write("\tShare of the trip with positioning status "
                        + f"{value}" + ": " + f"{ratio:.2%}" + "\n")
    with open(data_path + "conclution.csv", 'a') as loc_f:
        loc_f.write(
            "\t 1.4.2 Status 0 or 1 is usually caused by passing bridges, billboards, gantries, tunnels, or following large vehicles\n"
            "\t 1.4.3 Longest status-jump duration: " + str(maxmum) + "\n"
            "\t 1.4.4 Shortest status-jump duration: " + str(minimum) + "\n"
            "\t 1.4.5 Mean status-jump duration: " + str(mean_time) + "\t"
            + "mode: " + str(mode) + "\n"
            "On this route the gnss status jumps are short; the INS positioning error stays within (    ,    ) m, and positioning accuracy is good.\n"
            "\n"
            "\n"
            "1.5 Comparison of perception lane lines against the lane lines output by the linked program combined with the SiWei HD map:\n"
            "\n"
            "\t 1.5.1 Share where the lane lines basically fit the map-derived lane lines:     \n"
            "\t 1.5.2 Share where the lane lines fit the map-derived lane lines well:     \n"
            "\t 1.5.3 Share where the lane lines fit the map-derived lane lines poorly:     \n"
            "\t 1.5.4 Share where the lane lines do not fit the map-derived lane lines:     \n"
            "\t Perception on this route is stable; NOA can be used safely\n"
            "\t Perception on this route is fairly stable; NOA can be used in clear weather with clearly visible lane lines\n"
            "\t Perception stability on this route is poor; use NOA with discretion\n"
            "\n"
            "\n"
            "2. Analysis of positioning anomaly points:\n"
            "\t 2.1 Anomaly explanations\n"
            "\n"
            "\t 2.2 Anomaly images\n")


def get_all_change_data(gnss_status):
    # collect (value, index) pairs at each status transition
    # (debug helper; returns nothing and is not called elsewhere)
    pairs = []
    for i in range(len(gnss_status)):
        if i == 0 or gnss_status[i] != gnss_status[i-1]:
            pairs.append((gnss_status[i], i))
    index_ranges = []
    start_index = None
    pair0_sets = []
    pair1_sets = []
    for i in range(len(pairs)):
        if pairs[i][0] == 4:
            if start_index is not None:
                index_ranges.append((start_index, i))
                start_index = None
        else:
            if start_index is None:
                start_index = i
            if len(pair0_sets) < start_index + 1:
                pair0_sets.append(set())
                pair1_sets.append(set())
            pair0_sets[start_index].add(pairs[i][0])
            pair1_sets[start_index].add(pairs[i][1])
    # handle the last interval
    if start_index is not None:
        index_ranges.append((start_index, len(pairs)))


def get_all_file_name(data_path):
    img_list = os.listdir(data_path)
    img_list = sorted(img_list)
    return img_list


def check_number(lst, num):
    # for each sublist, record whether it contains num
    result = []
    for sub_lst in lst:
        if num in sub_lst:
            result.append(True)
        else:
            result.append(False)
    return result


def update_index_ranges(lst, result_check, index_ranges):
    # shift each range start to the last occurrence of 4 in its sublist
    result = []
    for sub_lst, check in zip(lst, result_check):
        if check:
            reversed_lst = sub_lst[::-1]
            last_index = len(sub_lst) - reversed_lst.index(4) - 1
            result.append(last_index)
        else:
            result.append(0)
    update_index_range = []
    for idx, sub_lst, res in zip(index_ranges, lst, result):
        if len(sub_lst) > res:
            change_pair = idx
            first_element = change_pair[0]
            result_change = first_element + res
            change_pair = (result_change, change_pair[1])
            idx = change_pair
        update_index_range.append(idx)
    return result, update_index_range


def get_last_time(start_utc_time, end_time):
    # timestamps are in microseconds
    dt1 = datetime.fromtimestamp(start_utc_time / 1000000)
    dt2 = datetime.fromtimestamp(end_time / 1000000)
    diff = dt2 - dt1
    # date digits are concatenated without separators, as in the original
    start_time = f"{dt1.year}{dt1.month}{dt1.day}{dt1.hour:02d}:{dt1.minute:02d}:{dt1.second:02d}"
    end_time = f"{dt2.year}{dt2.month}{dt2.day}{dt2.hour:02d}:{dt2.minute:02d}:{dt2.second:02d}"
    # duration in milliseconds; the original multiplied by 10000, which does
    # not match the ms-based formatting below and looks like a typo
    last_time = int(diff.total_seconds() * 1000)
    last_time_origin = last_time
    minutes = last_time // 60000
    seconds = (last_time // 1000) % 60
    milliseconds = last_time % 1000
    last_time = "{:02d}'{:02d}''{:03d}'''".format(minutes, seconds, milliseconds)
    return last_time, start_time, end_time, last_time_origin


def localization_error_time_filter(status, vp31_lon, vp31_lat, log_time,
                                   data_path, file_name):
    file_save_path = data_path + file_name
    status_4_x, status_4_y = [], []
    status_5_x, status_5_y = [], []
    status_1_x, status_1_y = [], []
    status_0_x, status_0_y = [], []
    status_e_x, status_e_y = [], []
    res_no4_time, res_5_time, res_1_time, res_0_time, res_e_time = [], [], [], [], []
    # 20230916: analyse gnss status, saving start/end times and durations
    with open(file_save_path + "analysis_localization_origin.csv", 'w') as loc_f:
        loc_f.write("error_start_time,error_end_time,error_status,last_time(ms)\n")
    with open(file_save_path + "analysis_localization_new.csv", 'w') as loc_f:
        loc_f.write("No.,start_time,end_time,status_change,"
                    "duration(min'sec''ms'''),explanation,legend\n")
    # 20230920: analyse gnss status changes
    for i in range(0, len(status)):
        if status[i] == 4:
            status_4_x.append(vp31_lon[i])
            status_4_y.append(vp31_lat[i])
        elif status[i] == 5:
            status_5_x.append(vp31_lon[i])
            status_5_y.append(vp31_lat[i])
        elif status[i] == 1:
            status_1_x.append(vp31_lon[i])
            status_1_y.append(vp31_lat[i])
        elif status[i] == 0:
            status_0_x.append(vp31_lon[i])
            status_0_y.append(vp31_lat[i])
        else:
            status_e_x.append(vp31_lon[i])
            status_e_y.append(vp31_lat[i])
    # split the status list each time the value returns to 4
    split_indices = [i for i in range(1, len(status))
                     if status[i] == 4 and status[i-1] != 4]
    sublists = []
    start = 0
    for index in split_indices:
        sublists.append(status[start:index])
        start = index
    sublists.append(status[start:])
    index_ranges = [(start, start + len(sublist) - 1)
                    for start, sublist in zip(
                        [0] + [index + 1 for index in split_indices], sublists)]
    last_pair = index_ranges[-1]
    second_element = last_pair[1]
    result = second_element - 1
    updated_pair = (last_pair[0], result)
    index_ranges[-1] = updated_pair
    # drop adjacent duplicates inside each sublist
    unique_sublists = [[x for x, y in zip(sublist, sublist[1:]) if x != y]
                       + [sublist[-1]] for sublist in sublists]
    last_time_list = []
    num = 4
    result_check = check_number(sublists, num)
    index_status_change = 1
    four_result, updated_index_ranges = update_index_ranges(
        sublists, result_check, index_ranges)
    with open(file_save_path + "analysis_localization_new.csv", 'a') as loc_f:
        for idx_range, sublist in zip(updated_index_ranges, unique_sublists):
            if idx_range[1] < len(log_time) and idx_range[0] < len(log_time):
                last_time, start_time, end_time, last_time_origin = get_last_time(
                    log_time[idx_range[0]], log_time[idx_range[1]])
                loc_f.write(str(index_status_change) + "," + str(start_time)
                            + "," + str(end_time) + ","
                            + " ".join(map(str, sublist)) + ","
                            + str(last_time) + "\n")
                last_time_list.append(last_time_origin)
            elif idx_range[0] >= len(log_time):
                continue
            else:
                last_time, start_time, end_time, last_time_origin = get_last_time(
                    log_time[idx_range[0]], log_time[len(log_time) - 1])
                loc_f.write(str(index_status_change) + "," + str(start_time)
                            + "," + str(end_time) + ","
                            + " ".join(map(str, sublist)) + ","
                            + str(last_time) + "\n")
                last_time_list.append(last_time_origin)
            index_status_change += 1
    write_conclution(file_save_path, status, last_time_list)
    # 20230916: write per-status index ranges to the origin csv; the original
    # repeated this block once per status value, folded here into one loop
    # with identical behaviour (statuses outside {0, 1, 4, 5} were logged as 2)
    for indices, label in [
            ([i for i, x in enumerate(status) if x == 0], '0'),
            ([i for i, x in enumerate(status) if x == 1], '1'),
            ([i for i, x in enumerate(status) if x == 4], '4'),
            ([i for i, x in enumerate(status) if x == 5], '5'),
            ([i for i, x in enumerate(status) if x not in [0, 1, 4, 5]], '2')]:
        if not indices:
            continue
        ranges = []
        start = indices[0]
        for i in range(1, len(indices)):
            if indices[i] != indices[i-1] + 1:
                ranges.append((start, indices[i-1]))
                start = indices[i]
        ranges.append((start, indices[-1]))
        for r in ranges:
            last_time, start_time, end_time, last_time_origin = get_last_time(
                log_time[r[0]], log_time[r[1]])
            with open(file_save_path + "analysis_localization_origin.csv", 'a') as loc_f:
                loc_f.write(start_time + ',' + end_time + ',' + label + ','
                            + str(last_time) + '\n')
    return res_no4_time, res_5_time, res_1_time, res_0_time, res_e_time


def show_status(status, vp31_lon, vp31_lat):
    # scatter the trajectory, coloured by gnss status
    status_4_x, status_4_y, status_4_idx = [], [], []
    status_5_x, status_5_y, status_5_idx = [], [], []
    status_1_x, status_1_y, status_1_idx = [], [], []
    status_0_x, status_0_y, status_0_idx = [], [], []
    status_e_x, status_e_y, status_e_idx = [], [], []
    for i in range(0, len(status)):
        if status[i] == 4:
            status_4_x.append(vp31_lon[i])
            status_4_y.append(vp31_lat[i])
            status_4_idx.append(i)
        elif status[i] == 5:
            status_5_x.append(vp31_lon[i])
            status_5_y.append(vp31_lat[i])
            status_5_idx.append(i)
        elif status[i] == 1:
            status_1_x.append(vp31_lon[i])
            status_1_y.append(vp31_lat[i])
            status_1_idx.append(i)
        elif status[i] == 0:
            status_0_x.append(vp31_lon[i])
            status_0_y.append(vp31_lat[i])
            status_0_idx.append(i)
        else:
            status_e_x.append(vp31_lon[i])
            status_e_y.append(vp31_lat[i])
            status_e_idx.append(i)
    plt.scatter(status_4_x, status_4_y, marker='o',
                color='green', label='status = 4')
    plt.scatter(status_5_x, status_5_y, marker='o',
                color='blue', label='status = 5')
    plt.scatter(status_1_x, status_1_y, marker='o',
                color='red', label='status = 1')
    plt.scatter(status_0_x, status_0_y, marker='o',
                color='black', label='status = 0')
    plt.scatter(status_e_x, status_e_y, marker='o',
                color='yellow', label='status else')


def GPS_data_filter(x, y, z, x1, y1, z1):
    # keep samples where the second source has a non-zero lon/lat fix
    x_res, y_res, z_res = [], [], []
    x1_res, y1_res, z1_res = [], [], []
    for i in range(0, len(x)):
        if x1[i] != 0 and y1[i] != 0:
            x_res.append(x[i])
            y_res.append(y[i])
            z_res.append(z[i])
            x1_res.append(x1[i])
            y1_res.append(y1[i])
            z1_res.append(z1[i])
    return x_res, y_res, z_res, x1_res, y1_res, z1_res


def cma_data_filter(diff):
    # clamp differences above 5 m to -1 (treated as invalid)
    diff_res = []
    for i in range(0, len(diff)):
        if diff[i] <= 5:
            diff_res.append(diff[i])
        else:
            diff_res.append(-1)
    return diff_res


def line_filter(l):
    l_res = []
    for i in range(0, len(l)):
        if l[i] >= 0:
            l_res.append(l[i])
        else:
            l_res.append(-1)
    return l_res


def cma_filter(s):
    # the original named this parameter `str`, shadowing the builtin
    if s == 'nan':
        return '-1'
    else:
        return s


def show_counter(lst, data_path):
    counter = Counter(lst)
    total_count = sum(counter.values())
    # share of each QC score
    percentage = {num: count / total_count for num, count in counter.items()}
    with open(data_path + "conclution.csv", 'a') as loc_f:
        loc_f.write("3. Localization QC results:\n")
        for num, count in counter.items():
            loc_f.write("value {} occurs {} times, share: {:.2%}".format(
                num, count, percentage[num]) + "\n")
    numbers = list(counter.keys())
    counts = list(counter.values())
    percentages = [percentage[num] for num in numbers]
    plt.figure()
    plt.bar(numbers, counts)
    plt.title('Number Counts')
    plt.xlabel('Numbers')
    plt.ylabel('Counts')
    plt.savefig(data_path + "localization_bar" + '.png', dpi=512)


def localization_QC(gnss_lon, gnss_lat, ins_lon,
                    ins_lat, gnss_status, ins_status, data_path):
    # score each sample from the gnss/ins status pair and the gnss-ins offset
    cnter_status = 1
    cnter_lonlat = 1
    res_cnter = []
    for gnss_lon, gnss_lat, ins_lon, ins_lat, gnss_status, ins_status in zip(
            gnss_lon, gnss_lat, ins_lon, ins_lat, gnss_status, ins_status):
        if gnss_status == 4 and ins_status == 1:
            cnter_status *= 1
        elif gnss_status == 5 and ins_status == 1:
            cnter_status *= 1.5
        elif gnss_status == 5 and ins_status == 0:
            cnter_status *= 2
        elif gnss_status == 4 and ins_status == 0:
            cnter_status *= 2.5
        else:
            cnter_status *= 0
        if abs(gnss_lon - ins_lon) < 2e-6 and abs(gnss_lat - ins_lat) < 2e-6:
            cnter_lonlat *= 1
        elif abs(gnss_lon - ins_lon) < 4e-6 and abs(gnss_lat - ins_lat) < 4e-6:
            cnter_lonlat *= 2
        else:
            cnter_lonlat *= -1
        res_cnter.append(cnter_status * cnter_lonlat)
        cnter_status = 1
        cnter_lonlat = 1
    show_counter(res_cnter, data_path)


def Ablines_QC(line1_length, line2_length, diff1, diff2, diff3, data_path):
    # score lane-line lengths and differences; not called from the main flow.
    # A data_path parameter was added here because the original called
    # show_counter with no path, which does not match its signature.
    cnter_ab1, cnter_ab2 = 1, 1
    cnter_vp1, cnter_vp2, cnter_vp3 = 1, 1, 1
    res_cnt_ab1, res_cnt_ab2 = [], []
    res_cnt_vp = []
    for line1_length, line2_length, diff1, diff2, diff3 in zip(
            line1_length, line2_length, diff1, diff2, diff3):
        if line1_length >= 100:
            cnter_ab1 *= 1
        elif line1_length >= 50:
            cnter_ab1 *= 1.5
        elif line1_length >= 20:
            cnter_ab1 *= 2
        else:
            cnter_ab1 *= -1
        if line2_length >= 100:
            cnter_ab2 *= 1
        elif line2_length >= 50:
            cnter_ab2 *= 1.5
        elif line2_length >= 20:
            cnter_ab2 *= 2
        else:
            cnter_ab2 *= 0
        res_cnt_ab1.append(cnter_ab1)
        res_cnt_ab2.append(cnter_ab2)
        if diff1 < 0.1:
            cnter_vp1 *= 1
        elif diff1 < 0.3:
            cnter_vp1 *= 1.5
        elif diff1 < 0.5:
            cnter_vp1 *= 2
        elif diff1 < 0.7:
            cnter_vp1 *= 2.5
        elif diff1 < 0.9:
            cnter_vp1 *= 3
        else:
            cnter_vp1 *= 0
        if diff2 < 0.1:
            cnter_vp2 *= 1
        elif diff2 < 0.3:
            cnter_vp2 *= 1.5
        elif diff2 < 0.5:
            cnter_vp2 *= 2
        elif diff2 < 0.7:
            cnter_vp2 *= 2.5
        elif diff2 < 0.9:
            cnter_vp2 *= 3
        else:
            cnter_vp2 *= -1
        if diff3 < 0.1:
            cnter_vp3 *= 1
        elif diff3 < 0.3:
            cnter_vp3 *= 1.5
        elif diff3 < 0.5:
            cnter_vp3 *= 2
        elif diff3 < 0.7:
            cnter_vp3 *= 2.5
        elif diff3 < 0.9:
            cnter_vp3 *= 3
        else:
            cnter_vp3 *= 0.000001
        cnter_vp = cnter_vp3 * cnter_vp1 * cnter_vp2
        res_cnt_vp.append(cnter_vp)
        cnter_ab1, cnter_ab2 = 1, 1
        cnter_vp1, cnter_vp2, cnter_vp3 = 1, 1, 1
    show_counter(res_cnt_ab1, data_path)
    show_counter(res_cnt_ab2, data_path)
    show_counter(res_cnt_vp, data_path)
    return res_cnt_ab1, res_cnt_ab2, res_cnt_vp


def save_string_after_backslash(string):
    # keep only the part after the last '/'
    index = string.rfind('/')
    if index != -1:
        result = string[index+1:]
    else:
        result = string
    return result


def Prepare_data(file_name1, file_name2, data_path, analysis_data_path):
    file_name = save_string_after_backslash(file_name1)
    print("filename: ", file_name)
    file_name1 = str(data_path + file_name1)
    file_name2 = str(data_path + file_name2)
    data_line1 = [l.split('\n')[0].split(',')
                  for l in open(file_name1, "r").readlines()[1:]]
    data_line2 = [l.split('\n')[0].split(',')
                  for l in open(file_name2, "r").readlines()[1:]]
    gnss_timestamps = [int(l[0]) for l in data_line1]
    vp31_lon_ins = [float(l[2]) for l in data_line2]
    vp31_lat_ins = [float(l[3]) for l in data_line2]
    vp31_lon_gnss = [float(l[4]) for l in data_line2]
    vp31_lat_gnss = [float(l[5]) for l in data_line2]
    vp31_gnss_status = [int(l[8]) for l in data_line2]
    vp31_ins_status = [int(l[9]) for l in data_line2]
    (vp31_lon_gnss_f, vp31_lat_gnss_f, vp31_gnss_status_f,
     vp31_lon_ins_f, vp31_lat_ins_f, vp31_ins_status_f) = GPS_data_filter(
        vp31_lon_gnss, vp31_lat_gnss, vp31_gnss_status,
        vp31_lon_ins, vp31_lat_ins, vp31_ins_status)
    log_time = [int(l[12]) for l in data_line2]
    diff1 = [float(cma_filter(l[0])) for l in data_line1]
    diff1 = cma_data_filter(diff1)
    diff2 = cma_data_filter([float(cma_filter(l[1])) for l in data_line1])
    diff3 = cma_data_filter([float(cma_filter(l[2])) for l in data_line1])
    l2 = line_filter([float(l[4]) for l in data_line1])
    l3 = line_filter([float(l[5]) for l in data_line1])
    idx = np.array([i for i in range(0, len(diff1))])
    idx_ins = np.array([i for i in range(0, len(vp31_ins_status))])
    idx_gnss = np.array([i for i in range(0, len(vp31_gnss_status))])
    idx3 = np.array([i for i in range(0, len(l2))])
    file_name = file_name[:-7]
    print("update name : ", file_name)
    data_path = analysis_data_path + 'analysis/'
    print("analysis data path is:", data_path)
    # record all status/duration information
    localization_error_time_filter(vp31_gnss_status, vp31_lon_ins,
                                   vp31_lat_ins, log_time, data_path, file_name)
    # GNSS status, raw and filtered
    plt.figure(1)
    plt.subplot(1, 2, 1)
    show_status(vp31_gnss_status, vp31_lon_ins, vp31_lat_ins)
    plt.legend(loc="upper right")
    plt.title("gnss status origin")
    plt.xlabel("lon")
    plt.ylabel("lat")
    plt.subplot(1, 2, 2)
    show_status(vp31_gnss_status_f, vp31_lon_ins_f, vp31_lat_ins_f)
    plt.legend(loc="upper right")
    plt.title("gnss status filter")
    plt.xlabel("lon")
    plt.ylabel("lat")
    plt.savefig(os.path.join(data_path, file_name + "gnss_status" + '.png'),
                dpi=1024)
    plt.clf()
    plt.figure(2)
    plt.plot(idx, diff1, ls='-', lw=2, label='center_line_diff', color='black')
    # the original plotted diff1 twice; diff2 is presumably what was meant here
    plt.plot(idx, diff2, ls='-', lw=2, label='left_line_diff', color='blue')
    plt.plot(idx, diff3, ls='-', lw=2, label='right_line_diff', color='red')
    plt.legend()
    plt.title("line_diff")
    plt.xlabel("idx")
    plt.ylabel("diff(m)")
    plt.savefig(os.path.join(data_path, file_name + "diff" + '.png'), dpi=512)
    plt.clf()
    plt.figure(3)
    plt.plot(idx_ins, vp31_ins_status, ls='-', lw=2, label='ins_status',
             color='red')
    plt.plot(idx_gnss, vp31_gnss_status, ls='-', lw=2, label='gnss_status',
             color='blue')
    plt.legend()
    plt.title("ins_status")
    plt.xlabel("idx")
    plt.ylabel("ins_status")
    plt.savefig(os.path.join(data_path, file_name + "gnss_status_explian" + '.png'),
                dpi=512)
    plt.clf()
    plt.figure(4)
    plt.plot(idx3, l2, ls='-', lw=2, label='line2_length', color='blue')
    plt.plot(idx3, l3, ls='-', lw=2, label='line3_length', color='black')
    plt.savefig(os.path.join(data_path, file_name + '.png'), dpi=512)
    data_save_path = data_path + file_name
    localization_QC(vp31_lon_gnss, vp31_lat_gnss, vp31_lon_ins,
                    vp31_lat_ins, vp31_gnss_status, vp31_ins_status,
                    data_save_path)


if __name__ == "__main__":
    # the original tested len(sys.argv) < 1, which can never be true
    if len(sys.argv) < 2:
        print(" please add file name!! ")
        sys.exit(1)
    data_path = sys.argv[1]
    analysis_data_path = data_path[:-4]
    print("analysis file save path is : ", analysis_data_path, '\n')
    # consume the sorted directory listing in pairs: (file0, file1), (file2, file3), ...
    file_name_list = get_all_file_name(data_path)
    for i in range(1, len(file_name_list), 2):
        Prepare_data(file_name_list[i-1], file_name_list[i],
                     data_path, analysis_data_path)
        time.sleep(1)
    # Alternative single-pair invocation kept from the original:
    # file_name1, file_name2 = sys.argv[1], sys.argv[2]
    # data_path = "path"
    # Prepare_data(file_name1, file_name2, data_path)
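
The __main__ block takes a single argument, the directory holding the parsed CSV files, and consumes the sorted directory listing in pairs (the first file with the second, the third with the fourth, and so on). A hypothetical invocation, assuming the listing above is saved as analysis.py and the parsed pairs live under out/:

python analysis.py /path/to/out/

Note that analysis_data_path = data_path[:-4] simply strips a trailing "out/" from the argument, so the analysis/ folder ends up next to out/; the argument must therefore end in "out/" for the results to land in the intended place.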
