Version Summary and Tool Overview
| Component | Version | Notes |
|---|---|---|
| RKNN Lite | 2.1.0 | On-device inference library |
| RKNN Runtime | 2.1.0 | Runtime library (967d001cc8) |
| RKNN Driver | 0.9.8 | NPU driver |
| Model version | 6 | RKNN model format version |
| Toolchain version | 2.1.0+708089d1 | Model conversion toolchain |
| Python | 3.10 | Programming language |
| OpenCV | 4.x | Image processing library |
| Target platform | rk3588 | Rockchip RK3588 SoC |
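On many RK3588 images the NPU driver version from the table can be read back on the board via debugfs; treat the exact path as an assumption, since it varies by kernel build:

cat /sys/kernel/debug/rknpu/version   # usually requires root; path may differ per image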
The core libraries are rkllm-toolkit (Rockchip's model toolkit), torch (the deep learning framework), and transformers (model loading); together they support model conversion and inference.

Two similarly named tools must be kept apart:

- rkllm-toolkit: geared toward converting and deploying large language models (LLMs), such as chat models like DeepSeek.
- rknn-toolkit2: dedicated to converting and optimizing computer vision models (such as YOLOv5), adapting them to the RK3588's NPU acceleration.

YOLOv5 is a vision model, so it must be converted with rknn-toolkit2.
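For context, rknn-toolkit2 is normally installed in the x86 conversion environment from the wheel shipped inside the SDK zip. The exact wheel filename below is an assumption pieced together from the version table above:

pip install rknn_toolkit2-2.1.0+708089d1-cp310-cp310-linux_x86_64.whl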
Part 1: Model Conversion
The plan: convert the model to RKNN format in the virtual machine first, copy it to the board, confirm it works there, and only then deploy it into the ROS2 workspace.
- Example files: bus.jpg (test image), dataset.txt (quantization dataset list; a sample is shown below), model_config.yml (model configuration), test.py (conversion / inference test script), yolov5s_relu.onnx (the YOLOv5 model in ONNX format), and so on. These are the key materials for converting and testing the YOLOv5 model with rknn-toolkit2.
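For reference, dataset.txt is simply a list of calibration image paths for quantization, one per line; a minimal version pointing at the bundled sample image looks like this:

./bus.jpg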
1. Unpack rknn-toolkit2-2.1.0.zip

unzip rknn-toolkit2-2.1.0.zip
cd rknn-toolkit2-2.1.0/examples/onnx/yolov5  # Rockchip's YOLO conversion example directory
2. Prepare the YOLOv5 model (from yolov5-6.0.zip)
Unpack yolov5-6.0.zip, locate yolov5s.pt (or another variant), and convert it to ONNX with YOLOv5's own export script:

cd yolov5-6.0
python export.py --weights yolov5s.pt --include onnx  # generates yolov5s.onnx
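Optionally, sanity-check the exported file before conversion. This is just a sketch and assumes the onnx pip package is installed:

# Hypothetical check that the exported graph is well formed
import onnx

model = onnx.load('yolov5s.onnx')
onnx.checker.check_model(model)              # raises if the graph is malformed
print([o.name for o in model.graph.output])  # inspect the output heads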
3. Convert to RKNN format with rknn-toolkit2
The conversion script needs one change: its default target platform is rk3566, which does not match the rk3588 board.
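The essential edit is a single line at the config stage (taken from the full script below):

rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')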
The conversion script with the adjusted parameters (targeting RK3588):
import os
import urllib
import traceback
import time
import sys
import numpy as np
import cv2
from rknn.api import RKNN

# Model from https://github.com/airockchip/rknn_model_zoo
ONNX_MODEL = 'yolov5s_relu.onnx'
RKNN_MODEL = 'yolov5s_relu_rk3588.rknn'  # explicitly mark the RK3588 platform
IMG_PATH = './bus.jpg'
DATASET = './dataset.txt'

QUANTIZE_ON = True

OBJ_THRESH = 0.25
NMS_THRESH = 0.45
IMG_SIZE = 640

CLASSES = ("person", "bicycle", "car", "motorbike ", "aeroplane ", "bus ", "train", "truck ", "boat", "traffic light","fire hydrant", "stop sign ", "parking meter", "bench", "bird", "cat", "dog ", "horse ", "sheep", "cow", "elephant","bear", "zebra ", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite","baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife ","spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza ", "donut", "cake", "chair", "sofa","pottedplant", "bed", "diningtable", "toilet ", "tvmonitor", "laptop ", "mouse ", "remote ", "keyboard ", "cell phone", "microwave ","oven ", "toaster", "sink", "refrigerator ", "book", "clock", "vase", "scissors ", "teddy bear ", "hair drier", "toothbrush ")


def xywh2xyxy(x):
    y = np.copy(x)
    y[:, 0] = x[:, 0] - x[:, 2] / 2
    y[:, 1] = x[:, 1] - x[:, 3] / 2
    y[:, 2] = x[:, 0] + x[:, 2] / 2
    y[:, 3] = x[:, 1] + x[:, 3] / 2
    return y


def process(input, mask, anchors):
    anchors = [anchors[i] for i in mask]
    grid_h, grid_w = map(int, input.shape[0:2])
    box_confidence = input[..., 4]
    box_confidence = np.expand_dims(box_confidence, axis=-1)
    box_class_probs = input[..., 5:]
    box_xy = input[..., :2]*2 - 0.5
    col = np.tile(np.arange(0, grid_w), grid_w).reshape(-1, grid_w)
    row = np.tile(np.arange(0, grid_h).reshape(-1, 1), grid_h)
    col = col.reshape(grid_h, grid_w, 1, 1).repeat(3, axis=-2)
    row = row.reshape(grid_h, grid_w, 1, 1).repeat(3, axis=-2)
    grid = np.concatenate((col, row), axis=-1)
    box_xy += grid
    box_xy *= int(IMG_SIZE/grid_h)
    box_wh = pow(input[..., 2:4]*2, 2)
    box_wh = box_wh * anchors
    box = np.concatenate((box_xy, box_wh), axis=-1)
    return box, box_confidence, box_class_probs


def filter_boxes(boxes, box_confidences, box_class_probs):
    boxes = boxes.reshape(-1, 4)
    box_confidences = box_confidences.reshape(-1)
    box_class_probs = box_class_probs.reshape(-1, box_class_probs.shape[-1])
    _box_pos = np.where(box_confidences >= OBJ_THRESH)
    boxes = boxes[_box_pos]
    box_confidences = box_confidences[_box_pos]
    box_class_probs = box_class_probs[_box_pos]
    class_max_score = np.max(box_class_probs, axis=-1)
    classes = np.argmax(box_class_probs, axis=-1)
    _class_pos = np.where(class_max_score >= OBJ_THRESH)
    boxes = boxes[_class_pos]
    classes = classes[_class_pos]
    scores = (class_max_score * box_confidences)[_class_pos]
    return boxes, classes, scores


def nms_boxes(boxes, scores):
    x = boxes[:, 0]
    y = boxes[:, 1]
    w = boxes[:, 2] - boxes[:, 0]
    h = boxes[:, 3] - boxes[:, 1]
    areas = w * h
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(x[i], x[order[1:]])
        yy1 = np.maximum(y[i], y[order[1:]])
        xx2 = np.minimum(x[i] + w[i], x[order[1:]] + w[order[1:]])
        yy2 = np.minimum(y[i] + h[i], y[order[1:]] + h[order[1:]])
        w1 = np.maximum(0.0, xx2 - xx1 + 0.00001)
        h1 = np.maximum(0.0, yy2 - yy1 + 0.00001)
        inter = w1 * h1
        ovr = inter / (areas[i] + areas[order[1:]] - inter)
        inds = np.where(ovr <= NMS_THRESH)[0]
        order = order[inds + 1]
    keep = np.array(keep)
    return keep


def yolov5_post_process(input_data):
    masks = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
    anchors = [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45],
               [59, 119], [116, 90], [156, 198], [373, 326]]
    boxes, classes, scores = [], [], []
    for input, mask in zip(input_data, masks):
        b, c, s = process(input, mask, anchors)
        b, c, s = filter_boxes(b, c, s)
        boxes.append(b)
        classes.append(c)
        scores.append(s)
    boxes = np.concatenate(boxes)
    boxes = xywh2xyxy(boxes)
    classes = np.concatenate(classes)
    scores = np.concatenate(scores)
    nboxes, nclasses, nscores = [], [], []
    for c in set(classes):
        inds = np.where(classes == c)
        b = boxes[inds]
        c = classes[inds]
        s = scores[inds]
        keep = nms_boxes(b, s)
        nboxes.append(b[keep])
        nclasses.append(c[keep])
        nscores.append(s[keep])
    if not nclasses and not nscores:
        return None, None, None
    boxes = np.concatenate(nboxes)
    classes = np.concatenate(nclasses)
    scores = np.concatenate(nscores)
    return boxes, classes, scores


def draw(image, boxes, scores, classes):
    print("{:^12} {:^12} {}".format('class', 'score', 'xmin, ymin, xmax, ymax'))
    print('-' * 50)
    for box, score, cl in zip(boxes, scores, classes):
        top, left, right, bottom = box
        top = int(top)
        left = int(left)
        right = int(right)
        bottom = int(bottom)
        cv2.rectangle(image, (top, left), (right, bottom), (255, 0, 0), 2)
        cv2.putText(image, '{0} {1:.2f}'.format(CLASSES[cl], score),
                    (top, left - 6), cv2.FONT_HERSHEY_SIMPLEX,
                    0.6, (0, 0, 255), 2)
        print("{:^12} {:^12.3f} [{:>4}, {:>4}, {:>4}, {:>4}]".format(CLASSES[cl], score, top, left, right, bottom))


def letterbox(im, new_shape=(640, 640), color=(0, 0, 0)):
    shape = im.shape[:2]
    if isinstance(new_shape, int):
        new_shape = (new_shape, new_shape)
    r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
    ratio = r, r
    new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
    dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1]
    dw /= 2
    dh /= 2
    if shape[::-1] != new_unpad:
        im = cv2.resize(im, new_unpad, interpolation=cv2.INTER_LINEAR)
    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
    im = cv2.copyMakeBorder(im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)
    return im, ratio, (dw, dh)


if __name__ == '__main__':
    rknn = RKNN(verbose=True)

    # Key change: specify target_platform='rk3588' at the config stage
    print('--> Config model')
    rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')
    print('done')

    print('--> Loading model')
    ret = rknn.load_onnx(model=ONNX_MODEL)
    if ret != 0:
        print('Load model failed!')
        exit(ret)
    print('done')

    print('--> Building model')
    # target_platform removed from the build stage (avoids version incompatibility)
    ret = rknn.build(do_quantization=QUANTIZE_ON, dataset=DATASET)
    if ret != 0:
        print('Build model failed!')
        exit(ret)
    print('done')

    print('--> Export rknn model')
    ret = rknn.export_rknn(RKNN_MODEL)
    if ret != 0:
        print('Export rknn model failed!')
        exit(ret)
    print('done')

    print('--> Init runtime environment')
    ret = rknn.init_runtime()
    if ret != 0:
        print('Init runtime environment failed!')
        exit(ret)
    print('done')

    img = cv2.imread(IMG_PATH)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))

    print('--> Running model')
    img2 = np.expand_dims(img, 0)
    outputs = rknn.inference(inputs=[img2], data_format=['nhwc'])
    np.save('./onnx_yolov5_0.npy', outputs[0])
    np.save('./onnx_yolov5_1.npy', outputs[1])
    np.save('./onnx_yolov5_2.npy', outputs[2])
    print('done')

    input0_data = outputs[0]
    input1_data = outputs[1]
    input2_data = outputs[2]
    input0_data = input0_data.reshape([3, -1] + list(input0_data.shape[-2:]))
    input1_data = input1_data.reshape([3, -1] + list(input1_data.shape[-2:]))
    input2_data = input2_data.reshape([3, -1] + list(input2_data.shape[-2:]))

    input_data = list()
    input_data.append(np.transpose(input0_data, (2, 3, 0, 1)))
    input_data.append(np.transpose(input1_data, (2, 3, 0, 1)))
    input_data.append(np.transpose(input2_data, (2, 3, 0, 1)))

    boxes, classes, scores = yolov5_post_process(input_data)

    img_1 = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
    if boxes is not None:
        draw(img_1, boxes, scores, classes)
    cv2.imwrite('result.jpg', img_1)
    print('Save results to result.jpg!')

    rknn.release()
Save the script and run it; the converted .rknn model file is generated in the current directory:

python3 test.py
After conversion the script automatically runs inference on bus.jpg and saves the result as result.jpg:

# inspect the detection result image
ls result.jpg  # or open it in an image viewer
Verifying the result
- The image should show bounding boxes and class labels (e.g. person, bus).
- If inference fails, check:
  - whether the ONNX model is correct (e.g. whether it contains relu layers, which suit the NPU);
  - the error messages in the terminal (quantization accuracy loss, unsupported operators, and so on).

The artifact of this step is yolov5s_relu_rk3588.rknn.
Part 2: Porting to the Board
Copy the converted yolov5s_relu_rk3588.rknn to the board (compare md5 checksums on both sides; the file can be corrupted in transit).
(The tools and test images needed later were copied into a directory on the board in advance.)
That directory contains, in order: the detection script, the test image used to exercise the model, the rknn-toolkit files, and the converted model.
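A transfer flow along these lines works; the board user and address below are placeholders:

# On the VM: record the digest, compress, and copy over
md5sum yolov5s_relu_rk3588.rknn
tar czf model.tar.gz yolov5s_relu_rk3588.rknn
scp model.tar.gz <user>@<board-ip>:~/
# On the board: unpack and confirm the digest matches the VM's
tar xzf model.tar.gz
md5sum yolov5s_relu_rk3588.rknn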
The detection script:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import cv2
import numpy as np
from rknnlite.api import RKNNLite
import os

# Global parameters
OBJ_THRESH = 0.25
NMS_THRESH = 0.45
IMG_SIZE = 640
CLASSES = ("person", "bicycle", "car", "motorbike ", "aeroplane ", "bus ", "train", "truck ", "boat", "traffic light","fire hydrant", "stop sign ", "parking meter", "bench", "bird", "cat", "dog ", "horse ", "sheep", "cow", "elephant","bear", "zebra ", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite","baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife ","spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza ", "donut", "cake", "chair", "sofa","pottedplant", "bed", "diningtable", "toilet ", "tvmonitor", "laptop", "mouse", "remote ", "keyboard ", "cell phone", "microwave ","oven ", "toaster", "sink", "refrigerator ", "book", "clock", "vase", "scissors ", "teddy bear ", "hair drier", "toothbrush ")


def load_rknn_model(model_path):
    """Load the RKNN model and initialize the runtime."""
    rknn = RKNNLite()
    # Load the model
    if rknn.load_rknn(model_path) != 0:
        print('Failed to load model')
        return None
    # Use the most compatible initialization method
    if rknn.init_runtime() != 0:
        print('Failed to initialize runtime')
        return None
    print('RKNN Lite initialized successfully')
    return rknn


def preprocess(image):
    """Image preprocessing (identical to the conversion-time pipeline)."""
    # BGR -> RGB
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    # Resize directly to 640x640
    image = cv2.resize(image, (IMG_SIZE, IMG_SIZE))
    # Add batch dimension (NHWC)
    return np.expand_dims(image, axis=0)


def xywh2xyxy(x):
    """Convert center coordinates to corner coordinates."""
    y = np.copy(x)
    y[:, 0] = x[:, 0] - x[:, 2] / 2  # x1
    y[:, 1] = x[:, 1] - x[:, 3] / 2  # y1
    y[:, 2] = x[:, 0] + x[:, 2] / 2  # x2
    y[:, 3] = x[:, 1] + x[:, 3] / 2  # y2
    return y


def process(input, mask, anchors):
    """Process a single output layer."""
    anchors = [anchors[i] for i in mask]
    grid_h, grid_w = map(int, input.shape[0:2])
    # Extract objectness and class probabilities
    box_confidence = np.expand_dims(input[..., 4], axis=-1)
    box_class_probs = input[..., 5:]
    # Box centers
    box_xy = input[..., :2] * 2 - 0.5
    # Build the grid
    col = np.tile(np.arange(grid_w), grid_h).reshape(grid_h, grid_w)
    row = np.tile(np.arange(grid_h).reshape(-1, 1), grid_w)
    grid = np.stack([col, row], axis=-1).reshape(grid_h, grid_w, 1, 2)
    # Shift box positions onto the grid
    box_xy += grid
    box_xy *= int(IMG_SIZE / grid_h)
    # Box sizes
    box_wh = pow(input[..., 2:4] * 2, 2) * anchors
    return np.concatenate((box_xy, box_wh), axis=-1), box_confidence, box_class_probs


def filter_boxes(boxes, confidences, class_probs):
    """Filter out low-confidence boxes."""
    boxes = boxes.reshape(-1, 4)
    confidences = confidences.reshape(-1)
    class_probs = class_probs.reshape(-1, class_probs.shape[-1])
    # First pass: objectness threshold
    keep = confidences >= OBJ_THRESH
    boxes = boxes[keep]
    confidences = confidences[keep]
    class_probs = class_probs[keep]
    # Second pass: class confidence threshold
    class_max = np.max(class_probs, axis=-1)
    classes = np.argmax(class_probs, axis=-1)
    keep = class_max >= OBJ_THRESH
    return boxes[keep], classes[keep], (class_max * confidences)[keep]


def nms_boxes(boxes, scores):
    """Non-maximum suppression."""
    x1, y1 = boxes[:, 0], boxes[:, 1]
    x2, y2 = boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Compute IoU against the remaining boxes
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        w = np.maximum(0.0, xx2 - xx1)
        h = np.maximum(0.0, yy2 - yy1)
        inter = w * h
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Keep only boxes with low overlap
        inds = np.where(iou <= NMS_THRESH)[0]
        order = order[inds + 1]
    return np.array(keep)


def yolov5_post_process(inputs):
    """Main YOLOv5 post-processing routine."""
    masks = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
    anchors = [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45],
               [59, 119], [116, 90], [156, 198], [373, 326]]
    all_boxes, all_classes, all_scores = [], [], []
    # Process the three output layers
    for input, mask in zip(inputs, masks):
        boxes, confs, probs = process(input, mask, anchors)
        boxes, classes, scores = filter_boxes(boxes, confs, probs)
        all_boxes.append(boxes)
        all_classes.append(classes)
        all_scores.append(scores)
    # Merge all detections
    boxes = np.concatenate(all_boxes, axis=0)
    classes = np.concatenate(all_classes, axis=0)
    scores = np.concatenate(all_scores, axis=0)
    # No detections
    if len(boxes) == 0:
        return None, None, None
    # Convert coordinate format
    boxes = xywh2xyxy(boxes)
    # Per-class NMS
    final_boxes, final_classes, final_scores = [], [], []
    for cls in set(classes):
        idx = classes == cls
        cls_boxes = boxes[idx]
        cls_scores = scores[idx]
        keep = nms_boxes(cls_boxes, cls_scores)
        final_boxes.append(cls_boxes[keep])
        final_classes.append(classes[idx][keep])
        final_scores.append(cls_scores[keep])
    return (np.concatenate(final_boxes),
            np.concatenate(final_classes),
            np.concatenate(final_scores))


def prepare_outputs(outputs):
    """Reshape/transpose outputs exactly as in the PC-side script."""
    out0 = outputs[0].reshape([3, -1] + list(outputs[0].shape[-2:]))
    out1 = outputs[1].reshape([3, -1] + list(outputs[1].shape[-2:]))
    out2 = outputs[2].reshape([3, -1] + list(outputs[2].shape[-2:]))
    return [np.transpose(out0, (2, 3, 0, 1)),
            np.transpose(out1, (2, 3, 0, 1)),
            np.transpose(out2, (2, 3, 0, 1))]


def draw_results(image, boxes, classes, scores):
    """Draw detection results."""
    for box, cls, score in zip(boxes, classes, scores):
        x1, y1, x2, y2 = map(int, box)
        label = f"{CLASSES[cls]} {score:.2f}"
        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(image, label, (x1, y1 - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return image


if __name__ == "__main__":
    # Configuration
    MODEL_PATH = "yolov5s_relu_rk3588.rknn"
    IMAGE_PATH = "person2.jpg"
    OUTPUT_PATH = "detection_result.jpg"

    # Load the model
    print("=" * 50)
    print("RKNN YOLOv5 object detection")
    print(f"Model: {MODEL_PATH}")
    print(f"Image: {IMAGE_PATH}")
    print("=" * 50)
    rknn = load_rknn_model(MODEL_PATH)
    if not rknn:
        exit(1)

    # Read the image
    image = cv2.imread(IMAGE_PATH)
    if image is None:
        print(f"Error: cannot read image {IMAGE_PATH}")
        rknn.release()
        exit(1)
    orig_h, orig_w = image.shape[:2]
    print(f"Image size: {orig_w}x{orig_h}")

    # Preprocess
    input_data = preprocess(image)

    # Inference
    outputs = rknn.inference(inputs=[input_data], data_format=["nhwc"])

    # Reshape outputs
    processed_outs = prepare_outputs(outputs)

    # Post-process
    boxes, classes, scores = yolov5_post_process(processed_outs)

    # Handle results
    if boxes is not None:
        # Map coordinates back to the original image
        scale_x, scale_y = orig_w / IMG_SIZE, orig_h / IMG_SIZE
        boxes[:, [0, 2]] *= scale_x
        boxes[:, [1, 3]] *= scale_y
        # Draw the boxes
        result_img = draw_results(image.copy(), boxes, classes, scores)
        cv2.imwrite(OUTPUT_PATH, result_img)
        # Print the results
        print(f"Detected {len(boxes)} objects:")
        for i, (box, cls, score) in enumerate(zip(boxes, classes, scores)):
            print(f"{i+1}. {CLASSES[cls]}: {score:.2f} @ [{int(box[0])},{int(box[1])} {int(box[2])},{int(box[3])}]")
        print(f"Results saved to: {OUTPUT_PATH}")
    else:
        print("No objects detected")

    # Release resources
    rknn.release()
    print("Detection complete")
When run, the script loads the model, detects objects in the test image in the current directory, and writes detection_result.jpg with the boxes drawn in.
Open the image to check whether detection is accurate. I initially got garbage boxes. First I noticed that the md5 on the board differed from the one computed right after conversion in the VM; but even after improving the transfer (compressing before scp), the garbage boxes persisted. Only on further inspection did I find the real problem was the board-side script itself: the processing environment had changed (previously a VM), so the script needed matching adjustments.
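One way to localize that kind of divergence is to compare the board's raw NPU outputs with the .npy files that test.py saved during conversion on the PC. A sketch, where board_out stands for outputs[0] from the board-side inference on the same bus.jpg:

import numpy as np

pc_out = np.load('onnx_yolov5_0.npy')   # saved by test.py during conversion
# board_out = outputs[0]                # from the rknnlite inference on the board
print(pc_out.shape, pc_out.dtype)
# Quantized results will not match bit-exactly; look for gross disagreement:
# print(np.abs(pc_out - board_out).max())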
Part 3: Deploying to the ROS Workspace
Next, deploy the model in the ROS2 Humble workspace.
1. Create a package in your workspace

cd ~/Astra_ws/src
ros2 pkg create --build-type ament_python yolov5_rockchip \
    --dependencies rclpy cv_bridge sensor_msgs image_transport rknn_api
Package directory layout

yolov5_rockchip/
├── package.xml
├── setup.py
├── setup.cfg
└── src/
    └── yolov5_rockchip/
        ├── __init__.py
        └── yolov5_node.py   # main node file
2. Deploy the model file
Create a directory for the model:

mkdir -p ~/Astra_ws/src/yolov5_rockchip/models
cp ~/models/yolov5s_relu_rk3588.rknn ~/Astra_ws/src/yolov5_rockchip/models/

This puts the previously converted RKNN model into the models directory created inside the package.
Edit package.xml to add dependencies
Make sure package.xml contains the following:
<exec_depend>rclpy</exec_depend>
<exec_depend>cv_bridge</exec_depend>
<exec_depend>sensor_msgs</exec_depend>
<exec_depend>image_transport</exec_depend>
<exec_depend>rknn_api</exec_depend>
3. Write the YOLOv5 node code
Create the main node file:

touch ~/Astra_ws/src/yolov5_rockchip/src/yolov5_rockchip/yolov5_node.py
chmod +x ~/Astra_ws/src/yolov5_rockchip/src/yolov5_rockchip/yolov5_node.py

yolov5_node.py:
#!/home/elf/miniconda3/envs/rknn-env/bin/python3
# -*- coding: utf-8 -*-
# NOTE: point the shebang above at your own Python interpreter
import os
import sys

# Force rknn_toolkit_lite2 onto the Python path
rknn_path = '/home/elf/miniconda3/envs/rknn-env/lib/python3.10/site-packages'  # replace with your site-packages path
if rknn_path not in sys.path:
    sys.path.append(rknn_path)

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import cv2
import numpy as np

# Global parameters (declared here, assigned from ROS parameters in the node)
OBJ_THRESH = None
NMS_THRESH = None
IMG_SIZE = 640
CLASSES = ("person", "bicycle", "car", "motorbike ", "aeroplane ", "bus ", "train", "truck ", "boat", "traffic light","fire hydrant", "stop sign ", "parking meter", "bench", "bird", "cat", "dog ", "horse ", "sheep", "cow", "elephant","bear", "zebra ", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite","baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife ","spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza ", "donut", "cake", "chair", "sofa","pottedplant", "bed", "diningtable", "toilet ", "tvmonitor", "laptop", "mouse", "remote ", "keyboard ", "cell phone", "microwave ","oven ", "toaster", "sink", "refrigerator ", "book", "clock", "vase", "scissors ", "teddy bear ", "hair drier", "toothbrush ")


class YOLOv5Node(Node):
    def __init__(self):
        super().__init__('yolov5_rockchip_node')
        # Declare configurable parameters
        self.declare_parameter('model_path', '/home/elf/Astra_ws/src/yolov5_rockchip/models/yolov5s_relu_rk3588.rknn')
        self.declare_parameter('input_topic', '/camera/color/image_raw')
        self.declare_parameter('output_topic', '/yolov5/detections')
        self.declare_parameter('obj_thresh', 0.25)  # default value lives here
        self.declare_parameter('nms_thresh', 0.45)  # default value lives here
        # Declare global before assigning (without referencing module-level defaults)
        global OBJ_THRESH, NMS_THRESH
        OBJ_THRESH = self.get_parameter('obj_thresh').value
        NMS_THRESH = self.get_parameter('nms_thresh').value
        # Initialize the RKNN model
        self.rknn = self.load_rknn_model()
        if not self.rknn:
            self.get_logger().fatal('Model loading failed, exiting node')
            exit(1)
        # Image conversion bridge
        self.bridge = CvBridge()
        # Subscriber and publisher
        self.subscription = self.create_subscription(
            Image,
            self.get_parameter('input_topic').value,
            self.image_callback,
            10)
        self.subscription  # prevent an unused-variable warning
        self.publisher = self.create_publisher(
            Image,
            self.get_parameter('output_topic').value,
            10)
        self.get_logger().info(f'YOLOv5 Rockchip node started, detection threshold: {OBJ_THRESH}')

    def load_rknn_model(self):
        """Load the RKNN model and initialize the runtime."""
        try:
            # Import RKNNLite from rknn_toolkit_lite2
            from rknn_toolkit_lite2.rknnlite.api import RKNNLite
            rknn = RKNNLite()
            model_path = self.get_parameter('model_path').value
            # Load the model
            if rknn.load_rknn(model_path) != 0:
                self.get_logger().error(f'Failed to load model: {model_path}')
                return None
            # Initialize the model runtime
            if rknn.init_runtime() != 0:
                self.get_logger().error('Failed to initialize model runtime')
                return None
            self.get_logger().info('RKNN Lite initialized successfully')
            return rknn
        except ImportError as e:
            self.get_logger().error(f'Module import failed: {e}; check the rknn_toolkit_lite2 installation')
            return None

    def preprocess(self, image):
        """Image preprocessing (identical to the conversion-time pipeline)."""
        # BGR -> RGB
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        # Resize to 640x640
        image = cv2.resize(image, (IMG_SIZE, IMG_SIZE))
        # Add batch dimension (NHWC)
        return np.expand_dims(image, axis=0)

    def xywh2xyxy(self, x):
        """Convert center coordinates to corner coordinates."""
        y = np.copy(x)
        y[:, 0] = x[:, 0] - x[:, 2] / 2  # x1
        y[:, 1] = x[:, 1] - x[:, 3] / 2  # y1
        y[:, 2] = x[:, 0] + x[:, 2] / 2  # x2
        y[:, 3] = x[:, 1] + x[:, 3] / 2  # y2
        return y

    def process(self, input, mask, anchors):
        """Process a single output layer."""
        anchors = [anchors[i] for i in mask]
        grid_h, grid_w = map(int, input.shape[0:2])
        # Extract objectness and class probabilities
        box_confidence = np.expand_dims(input[..., 4], axis=-1)
        box_class_probs = input[..., 5:]
        # Box centers
        box_xy = input[..., :2] * 2 - 0.5
        # Build the grid
        col = np.tile(np.arange(grid_w), grid_h).reshape(grid_h, grid_w)
        row = np.tile(np.arange(grid_h).reshape(-1, 1), grid_w)
        grid = np.stack([col, row], axis=-1).reshape(grid_h, grid_w, 1, 2)
        # Shift box positions onto the grid
        box_xy += grid
        box_xy *= int(IMG_SIZE / grid_h)
        # Box sizes
        box_wh = pow(input[..., 2:4] * 2, 2) * anchors
        return np.concatenate((box_xy, box_wh), axis=-1), box_confidence, box_class_probs

    def filter_boxes(self, boxes, confidences, class_probs):
        """Filter out low-confidence boxes."""
        boxes = boxes.reshape(-1, 4)
        confidences = confidences.reshape(-1)
        class_probs = class_probs.reshape(-1, class_probs.shape[-1])
        # First pass: objectness threshold
        keep = confidences >= OBJ_THRESH
        boxes = boxes[keep]
        confidences = confidences[keep]
        class_probs = class_probs[keep]
        # Second pass: class confidence threshold
        class_max = np.max(class_probs, axis=-1)
        classes = np.argmax(class_probs, axis=-1)
        keep = class_max >= OBJ_THRESH
        return boxes[keep], classes[keep], (class_max * confidences)[keep]

    def nms_boxes(self, boxes, scores):
        """Non-maximum suppression."""
        x1, y1 = boxes[:, 0], boxes[:, 1]
        x2, y2 = boxes[:, 2], boxes[:, 3]
        areas = (x2 - x1) * (y2 - y1)
        order = scores.argsort()[::-1]
        keep = []
        while order.size > 0:
            i = order[0]
            keep.append(i)
            # Compute IoU against the remaining boxes
            xx1 = np.maximum(x1[i], x1[order[1:]])
            yy1 = np.maximum(y1[i], y1[order[1:]])
            xx2 = np.minimum(x2[i], x2[order[1:]])
            yy2 = np.minimum(y2[i], y2[order[1:]])
            w = np.maximum(0.0, xx2 - xx1)
            h = np.maximum(0.0, yy2 - yy1)
            inter = w * h
            iou = inter / (areas[i] + areas[order[1:]] - inter)
            # Keep only boxes with low overlap
            inds = np.where(iou <= NMS_THRESH)[0]
            order = order[inds + 1]
        return np.array(keep)

    def yolov5_post_process(self, inputs):
        """Main YOLOv5 post-processing routine."""
        masks = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
        anchors = [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45],
                   [59, 119], [116, 90], [156, 198], [373, 326]]
        all_boxes, all_classes, all_scores = [], [], []
        # Process the three output layers
        for input, mask in zip(inputs, masks):
            boxes, confs, probs = self.process(input, mask, anchors)
            boxes, classes, scores = self.filter_boxes(boxes, confs, probs)
            all_boxes.append(boxes)
            all_classes.append(classes)
            all_scores.append(scores)
        # Merge all detections
        boxes = np.concatenate(all_boxes, axis=0)
        classes = np.concatenate(all_classes, axis=0)
        scores = np.concatenate(all_scores, axis=0)
        # No detections
        if len(boxes) == 0:
            return None, None, None
        # Convert coordinate format
        boxes = self.xywh2xyxy(boxes)
        # Per-class NMS
        final_boxes, final_classes, final_scores = [], [], []
        for cls in set(classes):
            idx = classes == cls
            cls_boxes = boxes[idx]
            cls_scores = scores[idx]
            keep = self.nms_boxes(cls_boxes, cls_scores)
            final_boxes.append(cls_boxes[keep])
            final_classes.append(classes[idx][keep])
            final_scores.append(cls_scores[keep])
        return (np.concatenate(final_boxes),
                np.concatenate(final_classes),
                np.concatenate(final_scores))

    def prepare_outputs(self, outputs):
        """Reshape/transpose outputs exactly as in the PC-side script."""
        out0 = outputs[0].reshape([3, -1] + list(outputs[0].shape[-2:]))
        out1 = outputs[1].reshape([3, -1] + list(outputs[1].shape[-2:]))
        out2 = outputs[2].reshape([3, -1] + list(outputs[2].shape[-2:]))
        return [np.transpose(out0, (2, 3, 0, 1)),
                np.transpose(out1, (2, 3, 0, 1)),
                np.transpose(out2, (2, 3, 0, 1))]

    def draw_results(self, image, boxes, classes, scores):
        """Draw detection results."""
        orig_h, orig_w = image.shape[:2]
        if boxes is not None:
            # Map coordinates back to the original image
            scale_x, scale_y = orig_w / IMG_SIZE, orig_h / IMG_SIZE
            boxes[:, [0, 2]] *= scale_x
            boxes[:, [1, 3]] *= scale_y
            # Draw the boxes and labels
            for box, cls, score in zip(boxes, classes, scores):
                x1, y1, x2, y2 = map(int, box)
                label = f"{CLASSES[cls]} {score:.2f}"
                cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)
                cv2.putText(image, label, (x1, y1 - 10),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
        return image

    def image_callback(self, msg):
        """Image callback: runs on every camera frame."""
        try:
            # Convert the ROS image message to OpenCV format
            cv_image = self.bridge.imgmsg_to_cv2(msg, 'bgr8')
        except Exception as e:
            self.get_logger().error(f'Image conversion error: {e}')
            return
        # Preprocess the image
        input_data = self.preprocess(cv_image)
        # Model inference
        try:
            outputs = self.rknn.inference(inputs=[input_data], data_format=["nhwc"])
        except Exception as e:
            self.get_logger().error(f'Model inference error: {e}')
            return
        # Post-process the outputs
        try:
            processed_outs = self.prepare_outputs(outputs)
            boxes, classes, scores = self.yolov5_post_process(processed_outs)
        except Exception as e:
            self.get_logger().error(f'Post-processing error: {e}')
            return
        # Draw the detections
        result_image = self.draw_results(cv_image.copy(), boxes, classes, scores)
        # Publish the result image
        try:
            result_msg = self.bridge.cv2_to_imgmsg(result_image, 'bgr8')
            self.publisher.publish(result_msg)
        except Exception as e:
            self.get_logger().error(f'Result publishing error: {e}')


def main(args=None):
    rclpy.init(args=args)
    node = YOLOv5Node()
    try:
        rclpy.spin(node)
    except KeyboardInterrupt:
        node.get_logger().info('Node interrupted by user')
    finally:
        # Release resources
        if hasattr(node, 'rknn') and node.rknn:
            node.rknn.release()
        node.destroy_node()
        rclpy.shutdown()


if __name__ == '__main__':
    main()
Configure setup.py
Make sure setup.py includes the node entry point:

entry_points={
    'console_scripts': [
        'yolov5_node = yolov5_rockchip.yolov5_node:main',
    ],
},
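Because the node module lives under src/ in this layout, setup.py also has to map the package root there. A minimal sketch under that assumption (fields other than entry_points and package_dir follow the ros2 pkg create defaults and are assumptions here):

from setuptools import setup

package_name = 'yolov5_rockchip'

setup(
    name=package_name,
    version='0.0.1',
    packages=[package_name],
    package_dir={'': 'src'},  # node code lives under src/ in this layout
    data_files=[
        ('share/ament_index/resource_index/packages', ['resource/' + package_name]),
        ('share/' + package_name, ['package.xml']),
    ],
    install_requires=['setuptools'],
    entry_points={
        'console_scripts': [
            'yolov5_node = yolov5_rockchip.yolov5_node:main',
        ],
    },
)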
Build the package
cd ~/Astra_ws
colcon build --packages-select yolov5_rockchip
source install/setup.bash
Start the YOLOv5 detection node
One thing I forgot to mention: start the Astra Pro camera first, because the node's input (its subscription) is the image topic the camera publishes after startup. That is what the input_topic parameter on the command below (default /camera/color/image_raw) refers to.

ros2 run yolov5_rockchip yolov5_node --ros-args -p input_topic:=/camera/color/image_raw
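To confirm the camera is actually publishing before starting the node:

ros2 topic hz /camera/color/image_raw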
Verifying the deployment
List the topics:

ros2 topic list | grep detections
# you should see /yolov5/detections
In the screenshot below, the left pane shows the YOLO node running and the right pane shows the topic list from a second terminal (I ran plain ros2 topic list without the grep pipe; with it, only /yolov5/detections would show). The /yolov5/detections entry at the bottom is the topic published by our YOLO detection node.
The next screenshot shows the topics after starting the camera but before running the YOLO node; comparing the two, the difference is exactly the /yolov5/detections topic published by our node.
Visualize the result with rviz2:

ros2 run rviz2 rviz2

After rviz2 starts, click Add in the lower left, choose By topic, select the /yolov5/detections topic, and pick Image to display the camera feed with the detection boxes drawn in.
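As a lighter-weight alternative to rviz2, rqt_image_view can display the same topic directly:

ros2 run rqt_image_view rqt_image_view /yolov5/detections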