Training mmdetection (1): a custom VOC-format dataset
- Preparation
- 1. The VOC dataset
- 2. Modifying the configuration code for training (important!)
- 2.1 Dataset-related modifications
- 2.2 Building a custom config file
- 3. Training, evaluation, and testing
- Summary
Preparation
A VOC-format dataset and the mmdetection code repository.
1. The VOC dataset
The dataset needs the following three folders (you do not have to follow the exact VOCdevkit/VOC2007 layout; below I show how to adapt the paths):
- Annotations: stores the XML label files
- ImageSets/Main: stores train.txt, test.txt, and val.txt
- JPEGImages: stores the original images
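The split files in ImageSets/Main simply list image IDs (file names without extension), one per line. A minimal stdlib-only sketch to generate them from the image folder, assuming the folder names above and an 80/10/10 split (both are assumptions, adjust to your dataset):

```python
import os
import random

def make_splits(img_dir, out_dir, ratios=(0.8, 0.1, 0.1), seed=0):
    """Write train.txt / val.txt / test.txt listing image IDs (no extension)."""
    ids = sorted(os.path.splitext(f)[0] for f in os.listdir(img_dir)
                 if f.lower().endswith(('.jpg', '.jpeg', '.png')))
    random.Random(seed).shuffle(ids)
    n_train = int(len(ids) * ratios[0])
    n_val = int(len(ids) * ratios[1])
    splits = {
        'train': ids[:n_train],
        'val': ids[n_train:n_train + n_val],
        'test': ids[n_train + n_val:],
    }
    os.makedirs(out_dir, exist_ok=True)
    for name, id_list in splits.items():
        with open(os.path.join(out_dir, name + '.txt'), 'w') as f:
            f.write('\n'.join(id_list) + '\n')
    return {k: len(v) for k, v in splits.items()}

# usage (hypothetical paths):
# make_splits('voc_style/JPEGImages', 'voc_style/ImageSets/Main')
```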
2. Modifying the configuration code for training (important!)
Note: all of the modifications below are made in config files you create yourself; in principle, do not modify the original code. Editing the original code is tedious, easy to mix up, and can break the overall structure of the codebase, so everything below goes into your own config file. Adapt it to your needs (this is my preferred way of configuring).
2.1 Dataset-related modifications
(1) In configs/base/datasets/voc0712.py
Change the paths to match your VOC dataset (in this file, only change the paths and nothing else):
```python
# dataset settings
dataset_type = 'VOCDataset'
# data_root = 'data/VOCdevkit/'
data_root = '/home/ubuntu/data/Official-SSDD-OPEN/BBox_SSDD/'

# Example to use different file client
# Method 1: simply set the data root and let the file I/O module
# automatically infer from prefix (not support LMDB and Memcache yet)
# data_root = 's3://openmmlab/datasets/detection/segmentation/VOCdevkit/'
# Method 2: Use `backend_args`, `file_client_args` in versions before 3.0.0rc6
# backend_args = dict(
#     backend='petrel',
#     path_mapping=dict({
#         './data/': 's3://openmmlab/datasets/segmentation/',
#         'data/': 's3://openmmlab/datasets/segmentation/'
#     }))
backend_args = None

# data augmentation pipelines
train_pipeline = [
    dict(type='LoadImageFromFile', backend_args=backend_args),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='Resize', scale=(1000, 600), keep_ratio=True),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PackDetInputs')
]
test_pipeline = [
    dict(type='LoadImageFromFile', backend_args=backend_args),
    dict(type='Resize', scale=(1000, 600), keep_ratio=True),
    # avoid bboxes being resized
    dict(type='LoadAnnotations', with_bbox=True),
    dict(
        type='PackDetInputs',
        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
                   'scale_factor'))
]

# data loaders
train_dataloader = dict(
    batch_size=2,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    batch_sampler=dict(type='AspectRatioBatchSampler'),
    dataset=dict(
        type='RepeatDataset',
        times=3,
        dataset=dict(
            type='ConcatDataset',
            # VOCDataset will add different `dataset_type` in dataset.metainfo,
            # which will get error if using ConcatDataset. Adding
            # `ignore_keys` can avoid this error.
            ignore_keys=['dataset_type'],
            datasets=[
                dict(
                    type=dataset_type,
                    data_root=data_root,
                    # ann_file='VOC2007/ImageSets/Main/trainval.txt',
                    ann_file='voc_style/ImageSets/Main/train.txt',
                    data_prefix=dict(sub_data_root='voc_style/'),
                    filter_cfg=dict(
                        filter_empty_gt=True, min_size=32, bbox_min_size=32),
                    pipeline=train_pipeline,
                    backend_args=backend_args),
                # dict(
                #     type=dataset_type,
                #     data_root=data_root,
                #     ann_file='VOC2012/ImageSets/Main/trainval.txt',
                #     data_prefix=dict(sub_data_root='VOC2012/'),
                #     filter_cfg=dict(
                #         filter_empty_gt=True, min_size=32, bbox_min_size=32),
                #     pipeline=train_pipeline,
                #     backend_args=backend_args)
            ])))
val_dataloader = dict(
    batch_size=1,
    num_workers=2,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        ann_file='voc_style/ImageSets/Main/test.txt',
        data_prefix=dict(sub_data_root='voc_style/'),
        test_mode=True,
        pipeline=test_pipeline,
        backend_args=backend_args))
test_dataloader = val_dataloader

# Pascal VOC2007 uses `11points` as default evaluate mode, while PASCAL
# VOC2012 defaults to use 'area'.
val_evaluator = dict(type='VOCMetric', metric='mAP', eval_mode='11points')
test_evaluator = val_evaluator
```
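Before launching training, it can save time to verify that every ID listed in a split file actually has a matching XML label and image. A small stdlib-only check, assuming the voc_style folder layout and .jpg images used above (adjust both to your dataset):

```python
import os

def check_split(data_root, split_file, sub='voc_style'):
    """Return the IDs from split_file that are missing an XML or a JPEG."""
    with open(os.path.join(data_root, split_file)) as f:
        ids = [line.strip() for line in f if line.strip()]
    missing = []
    for img_id in ids:
        xml = os.path.join(data_root, sub, 'Annotations', img_id + '.xml')
        jpg = os.path.join(data_root, sub, 'JPEGImages', img_id + '.jpg')
        if not (os.path.isfile(xml) and os.path.isfile(jpg)):
            missing.append(img_id)
    return missing

# usage (hypothetical path):
# print(check_split('/home/ubuntu/data/Official-SSDD-OPEN/BBox_SSDD/',
#                   'voc_style/ImageSets/Main/train.txt'))
```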
(2) Modify mmdet/datasets/voc.py
Change the class names and box colors to those of your own dataset, and be sure to comment out the VOC2007/VOC2012 version check so that your own dataset path can be used later.
```python
# Copyright (c) OpenMMLab. All rights reserved.
from mmdet.registry import DATASETS
from .xml_style import XMLDataset


@DATASETS.register_module()
class VOCDataset(XMLDataset):
    """Dataset for PASCAL VOC."""

    # standard VOC class metainfo
    # METAINFO = {
    #     'classes':
    #     ('aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car',
    #      'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike',
    #      'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor'),
    #     # palette is a list of color tuples, which is used for visualization.
    #     'palette': [(106, 0, 228), (119, 11, 32), (165, 42, 42), (0, 0, 192),
    #                 (197, 226, 255), (0, 60, 100), (0, 0, 142), (255, 77, 255),
    #                 (153, 69, 1), (120, 166, 157), (0, 182, 199),
    #                 (0, 226, 252), (182, 182, 255), (0, 0, 230), (220, 20, 60),
    #                 (163, 255, 0), (0, 82, 0), (3, 95, 161), (0, 80, 100),
    #                 (183, 130, 88)]
    # }

    # modified class metainfo for the custom dataset
    METAINFO = {
        'classes': ('ship', ),
        # palette is a list of color tuples, which is used for visualization.
        'palette': [(106, 0, 228)]
    }

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # if 'VOC2007' in self.sub_data_root:
        #     self._metainfo['dataset_type'] = 'VOC2007'
        # elif 'VOC2012' in self.sub_data_root:
        #     self._metainfo['dataset_type'] = 'VOC2012'
        # else:
        #     self._metainfo['dataset_type'] = None
```
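To double-check that METAINFO['classes'] matches what is actually labeled in your dataset, a small stdlib-only script can scan the Annotations folder and count every class name that appears in the XML files (the path in the usage line is an assumption, point it at your own folder):

```python
import os
import xml.etree.ElementTree as ET
from collections import Counter

def collect_voc_classes(ann_dir):
    """Count every <name> tag found in the VOC-style XML files in ann_dir."""
    counts = Counter()
    for fname in os.listdir(ann_dir):
        if not fname.endswith('.xml'):
            continue
        root = ET.parse(os.path.join(ann_dir, fname)).getroot()
        for obj in root.findall('object'):
            counts[obj.find('name').text] += 1
    return counts

# usage (hypothetical path):
# print(collect_voc_classes('voc_style/Annotations'))
```

If this prints a class that is missing from METAINFO (or a typo like 'Ship' vs 'ship'), training will fail or silently drop those boxes.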
(3) Change the number of output classes in the model config file (optional)
configs/base/models/faster-rcnn_r50_fpn.py
2.2 Building a custom config file
(1) Create a file named myconfig.py in the repository root.
(2) Copy the content below into it.
The new config file consists of three parts:
1. Import the corresponding base config file (_base_)
2. Model settings: be sure to change num_classes to the number of classes in your dataset
3. Dataset settings: copy directly from configs/base/datasets/voc0712.py
```python
# The new config inherits the base config and makes the necessary changes
# _base_ = './configs/faster_rcnn/mask-rcnn_r50-caffe_fpn_ms-poly-1x_coco.py'
_base_ = './configs/faster_rcnn/faster-rcnn_r50_fpn_1x_voc.py'

########------ model settings --------#########
# change num_classes in the head to match the number of classes in the dataset
model = dict(roi_head=dict(bbox_head=dict(num_classes=1)))

########------ dataset settings --------#########
backend_args = None
# dataset settings
dataset_type = 'VOCDataset'
# data_root = 'data/VOCdevkit/'
data_root = '/home/ubuntu/data/Official-SSDD-OPEN/BBox_SSDD/'

# data augmentation pipelines
train_pipeline = [
    dict(type='LoadImageFromFile', backend_args=backend_args),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='Resize', scale=(1000, 600), keep_ratio=True),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PackDetInputs')
]
test_pipeline = [
    dict(type='LoadImageFromFile', backend_args=backend_args),
    dict(type='Resize', scale=(1000, 600), keep_ratio=True),
    # avoid bboxes being resized
    dict(type='LoadAnnotations', with_bbox=True),
    dict(
        type='PackDetInputs',
        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
                   'scale_factor'))
]

# data loaders
train_dataloader = dict(
    batch_size=2,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    batch_sampler=dict(type='AspectRatioBatchSampler'),
    dataset=dict(
        type='RepeatDataset',
        times=3,
        dataset=dict(
            type='ConcatDataset',
            # VOCDataset will add different `dataset_type` in dataset.metainfo,
            # which will get error if using ConcatDataset. Adding
            # `ignore_keys` can avoid this error.
            ignore_keys=['dataset_type'],
            datasets=[
                dict(
                    type=dataset_type,
                    data_root=data_root,
                    # ann_file='VOC2007/ImageSets/Main/trainval.txt',
                    ann_file='voc_style/ImageSets/Main/train.txt',
                    data_prefix=dict(sub_data_root='voc_style/'),
                    filter_cfg=dict(
                        filter_empty_gt=True, min_size=32, bbox_min_size=32),
                    pipeline=train_pipeline,
                    backend_args=backend_args),
                # dict(
                #     type=dataset_type,
                #     data_root=data_root,
                #     ann_file='VOC2012/ImageSets/Main/trainval.txt',
                #     data_prefix=dict(sub_data_root='VOC2012/'),
                #     filter_cfg=dict(
                #         filter_empty_gt=True, min_size=32, bbox_min_size=32),
                #     pipeline=train_pipeline,
                #     backend_args=backend_args)
            ])))
val_dataloader = dict(
    batch_size=1,
    num_workers=2,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        ann_file='voc_style/ImageSets/Main/test.txt',
        data_prefix=dict(sub_data_root='voc_style/'),
        test_mode=True,
        pipeline=test_pipeline,
        backend_args=backend_args))
test_dataloader = val_dataloader

# Initializing from pretrained weights can improve model performance
# load_from = 'https://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_3x_coco/mask_rcnn_r50_caffe_fpn_mstrain-poly_3x_coco_bbox_mAP-0.408__segm_mAP-0.37_20200504_163245-42aa3d00.pth'
```
3. Training, evaluation, and testing
Training command:
```bash
python tools/train.py myconfig_voc.py
```
Testing command (the original post left a placeholder here; mmdetection's standard test entry point is tools/test.py, and the checkpoint path below is just an example, use your own):
```bash
python tools/test.py myconfig_voc.py work_dirs/myconfig_voc/epoch_12.pth
```
Summary
Problems encountered during training
A record of one issue: when training on the SSDD dataset, COCO-format training worked fine, but VOC-format training produced a very low mAP that never improved. Training the VOC format with mmdetection 2.x worked without problems, and the fix turned out to be deleting bbox_min_size=32 in configs/base/datasets/voc0712.py. Original post: https://blog.csdn.net/Pliter/article/details/134389961
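A minimal sketch of why bbox_min_size matters for SSDD: ships in SAR images are often smaller than 32 pixels, so filter_cfg=dict(bbox_min_size=32) silently drops most ground-truth boxes during training. The function below mimics the effect of that filter (it is an illustration, not mmdetection's exact code, and the box sizes are hypothetical):

```python
def kept_after_filter(bboxes, bbox_min_size=32):
    """Keep only boxes whose width and height are both >= bbox_min_size,
    mimicking the effect of `filter_cfg=dict(bbox_min_size=32)`."""
    kept = []
    for x1, y1, x2, y2 in bboxes:
        if (x2 - x1) >= bbox_min_size and (y2 - y1) >= bbox_min_size:
            kept.append((x1, y1, x2, y2))
    return kept

# typical small-ship boxes in (x1, y1, x2, y2) format (hypothetical sizes)
boxes = [(10, 10, 30, 25), (50, 50, 90, 95), (5, 5, 20, 40)]
print(kept_after_filter(boxes))     # only the 40x45 box survives
print(kept_after_filter(boxes, 0))  # with the filter removed, all survive
```

With the default threshold, two of the three boxes above never reach the loss, which matches the symptom of an mAP that stays near zero.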