In the previous post《faster-rcnn系列學習之準備數據》we introduced imdb and roidb. Here we walk through the whole data-preparation pipeline of the RPN stage and the Fast R-CNN stage.
Since the two stages overlap considerably in their data preparation, we cover them together.
We start from train_rpn and train_fast_rcnn in parallel; both functions live in train_faster_rcnn_alt_opt.py.
```python
def train_rpn(queue=None, imdb_name=None, init_model=None, solver=None,
              max_iters=None, cfg=None):
    """Train a Region Proposal Network in a separate training process."""

    # Not using any proposals, just ground-truth boxes
    cfg.TRAIN.HAS_RPN = True
    cfg.TRAIN.BBOX_REG = False  # applies only to Fast R-CNN bbox regression
    cfg.TRAIN.PROPOSAL_METHOD = 'gt'
    cfg.TRAIN.IMS_PER_BATCH = 1
    print 'Init model: {}'.format(init_model)
    print('Using config:')
    pprint.pprint(cfg)

    import caffe
    _init_caffe(cfg)

    roidb, imdb = get_roidb(imdb_name)
    print 'roidb len: {}'.format(len(roidb))
    output_dir = get_output_dir(imdb)
    print 'Output will be saved to `{:s}`'.format(output_dir)

    model_paths = train_net(solver, roidb, output_dir,
                            pretrained_model=init_model,
                            max_iters=max_iters)
    # Cleanup all but the final model
    for i in model_paths[:-1]:
        os.remove(i)
    rpn_model_path = model_paths[-1]
    # Send final model path through the multiprocessing queue
    queue.put({'model_path': rpn_model_path})


def train_fast_rcnn(queue=None, imdb_name=None, init_model=None, solver=None,
                    max_iters=None, cfg=None, rpn_file=None):
    """Train a Fast R-CNN using proposals generated by an RPN."""

    cfg.TRAIN.HAS_RPN = False           # not generating proposals on-the-fly
    cfg.TRAIN.PROPOSAL_METHOD = 'rpn'   # use pre-computed RPN proposals instead
    cfg.TRAIN.IMS_PER_BATCH = 2
    print 'Init model: {}'.format(init_model)
    print 'RPN proposals: {}'.format(rpn_file)
    print('Using config:')
    pprint.pprint(cfg)

    import caffe
    _init_caffe(cfg)

    roidb, imdb = get_roidb(imdb_name, rpn_file=rpn_file)
    output_dir = get_output_dir(imdb)
    print 'Output will be saved to `{:s}`'.format(output_dir)
    # Train Fast R-CNN
    model_paths = train_net(solver, roidb, output_dir,
                            pretrained_model=init_model,
                            max_iters=max_iters)
    # Cleanup all but the final model
    for i in model_paths[:-1]:
        os.remove(i)
    fast_rcnn_model_path = model_paths[-1]
    # Send Fast R-CNN model path over the multiprocessing queue
    queue.put({'model_path': fast_rcnn_model_path})
```
The two functions are almost identical. Both sub-networks start training from the same pre-trained VGG-16 model, so their initial inputs are naturally similar. The difference lies in the configuration. For the RPN:

```python
cfg.TRAIN.HAS_RPN = True
cfg.TRAIN.PROPOSAL_METHOD = 'gt'   # use gt_roidb
cfg.TRAIN.IMS_PER_BATCH = 1
```

For Fast R-CNN:

```python
cfg.TRAIN.HAS_RPN = False
cfg.TRAIN.PROPOSAL_METHOD = 'rpn'  # use rpn_roidb
cfg.TRAIN.IMS_PER_BATCH = 2
```
Let us start from `roidb, imdb = get_roidb(imdb_name)`.
```python
def get_roidb(imdb_name, rpn_file=None):
    imdb = get_imdb(imdb_name)  # obtain the image database via the factory
    print 'Loaded dataset `{:s}` for training'.format(imdb.name)
    imdb.set_proposal_method(cfg.TRAIN.PROPOSAL_METHOD)
    print 'Set proposal method: {:s}'.format(cfg.TRAIN.PROPOSAL_METHOD)
    if rpn_file is not None:
        imdb.config['rpn_file'] = rpn_file
    roidb = get_training_roidb(imdb)  # build the training roidb
    return roidb, imdb
```
Let us look at this line first:
imdb.set_proposal_method(cfg.TRAIN.PROPOSAL_METHOD)
```python
def set_proposal_method(self, method):
    # eval() evaluates the given string as a Python expression,
    # here yielding the bound method self.<method>_roidb
    method = eval('self.' + method + '_roidb')
    self.roidb_handler = method
```
For the RPN this evaluates `eval('self.gt_roidb')`; for Fast R-CNN it evaluates `eval('self.rpn_roidb')`. `eval` is a Python built-in that evaluates a string as an expression. Both resulting methods are defined in pascal_voc.py; we look at each in turn.
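As a minimal illustration of this string-based dispatch (`MiniImdb` is a hypothetical stand-in for the pascal_voc class), `eval('self.gt_roidb')` simply evaluates to the bound method object, which is then stored as the handler:

```python
class MiniImdb:
    """Hypothetical stand-in for pascal_voc, showing only the dispatch."""
    def gt_roidb(self):
        return ['gt entries']

    def rpn_roidb(self):
        return ['rpn entries']

    def set_proposal_method(self, method):
        # eval('self.gt_roidb') evaluates to the bound method object
        self.roidb_handler = eval('self.' + method + '_roidb')

imdb = MiniImdb()
imdb.set_proposal_method('gt')
print(imdb.roidb_handler())  # ['gt entries']
```

Note that `getattr(self, method + '_roidb')` achieves the same thing without `eval` and is the more common idiom.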
RPN:
```python
def gt_roidb(self):
    """Return the database of ground-truth regions of interest.

    This function loads/saves from/to a cache file to speed up future calls.
    """
    cache_file = os.path.join(self.cache_path, self.name + '_gt_roidb.pkl')
    if os.path.exists(cache_file):
        with open(cache_file, 'rb') as fid:
            roidb = cPickle.load(fid)
        print '{} gt roidb loaded from {}'.format(self.name, cache_file)
        return roidb

    gt_roidb = [self._load_pascal_annotation(index)
                for index in self.image_index]
    with open(cache_file, 'wb') as fid:
        cPickle.dump(gt_roidb, fid, cPickle.HIGHEST_PROTOCOL)
    print 'wrote gt roidb to {}'.format(cache_file)

    return gt_roidb
```
This is one of the core functions of the pascal_voc class; it returns the roidb data structure.
It first looks in the cache directory for a file with the '.pkl' extension: the roidb serialized with cPickle. If the file exists, its contents are loaded directly for efficiency (so when you switch datasets, delete the cache files first, or stale data will cause errors). Otherwise the private method `_load_pascal_annotation` is called to build the roidb, which is then written to the cache file and returned.
```python
def _load_pascal_annotation(self, index):
    """Load image and bounding boxes info from XML file in the PASCAL VOC
    format.
    """
    filename = os.path.join(self._data_path, 'Annotations', index + '.xml')
    tree = ET.parse(filename)
    objs = tree.findall('object')
    if not self.config['use_diff']:
        # Exclude the samples labeled as difficult
        non_diff_objs = [
            obj for obj in objs if int(obj.find('difficult').text) == 0]
        # if len(non_diff_objs) != len(objs):
        #     print 'Removed {} difficult objects'.format(
        #         len(objs) - len(non_diff_objs))
        objs = non_diff_objs
    num_objs = len(objs)

    boxes = np.zeros((num_objs, 4), dtype=np.uint16)
    gt_classes = np.zeros((num_objs), dtype=np.int32)
    overlaps = np.zeros((num_objs, self.num_classes), dtype=np.float32)
    # "Seg" area for pascal is just the box area
    seg_areas = np.zeros((num_objs), dtype=np.float32)

    # Load object bounding boxes into a data frame.
    for ix, obj in enumerate(objs):
        bbox = obj.find('bndbox')
        # Make pixel indexes 0-based
        x1 = float(bbox.find('xmin').text) - 1
        y1 = float(bbox.find('ymin').text) - 1
        x2 = float(bbox.find('xmax').text) - 1
        y2 = float(bbox.find('ymax').text) - 1
        cls = self._class_to_ind[obj.find('name').text.lower().strip()]
        boxes[ix, :] = [x1, y1, x2, y2]
        gt_classes[ix] = cls
        overlaps[ix, cls] = 1.0
        seg_areas[ix] = (x2 - x1 + 1) * (y2 - y1 + 1)

    overlaps = scipy.sparse.csr_matrix(overlaps)

    return {'boxes': boxes,
            'gt_classes': gt_classes,
            'gt_overlaps': overlaps,
            'flipped': False,
            'seg_areas': seg_areas}
```
Given an image index, this function locates the corresponding XML annotation under the Annotations folder, loads all the bounding-box objects, and (by default) discards objects marked as "difficult".

That completes the XML parsing. The fields of the resulting roidb entry are:

- boxes: a 2-D array with one row per box, each row storing (xmin, ymin, xmax, ymax)
- gt_classes: the class index of each box (the class list is declared in the constructor)
- gt_overlaps: a 2-D array with one row per box and 21 columns (one per class), holding 0.0 or 1.0. For a ground-truth box the "candidate" is the ground-truth box itself, so the overlap with its own class is 1.0 and with every other class 0.0. The array is then converted to a sparse matrix.
- seg_areas: the area of each box
- flipped: False means the image has not yet been flipped (train.py later appends flipped copies, and this flag distinguishes them)
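A minimal sketch of one such entry, for a hypothetical image with two objects (the class indices 8 and 12 are made up; pascal_voc.py additionally wraps `overlaps` in a `scipy.sparse.csr_matrix`):

```python
import numpy as np

num_classes = 21
# two hypothetical ground-truth objects of classes 8 and 12
boxes = np.array([[10, 20, 110, 220],
                  [50, 60, 150, 260]], dtype=np.uint16)
gt_classes = np.array([8, 12], dtype=np.int32)
overlaps = np.zeros((2, num_classes), dtype=np.float32)
overlaps[np.arange(2), gt_classes] = 1.0   # a gt box fully overlaps its own class
seg_areas = ((boxes[:, 2].astype(float) - boxes[:, 0] + 1) *
             (boxes[:, 3].astype(float) - boxes[:, 1] + 1)).astype(np.float32)
entry = {
    'boxes': boxes,
    'gt_classes': gt_classes,
    'gt_overlaps': overlaps,   # the real code stores this as a csr_matrix
    'flipped': False,
    'seg_areas': seg_areas,
}
print(entry['seg_areas'])  # [20301. 20301.]
```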
Fast R-CNN:
```python
def rpn_roidb(self):
    if int(self._year) == 2007 or self._image_set != 'test':
        gt_roidb = self.gt_roidb()
        rpn_roidb = self._load_rpn_roidb(gt_roidb)
        roidb = imdb.merge_roidbs(gt_roidb, rpn_roidb)
    else:
        roidb = self._load_rpn_roidb(None)

    return roidb

def _load_rpn_roidb(self, gt_roidb):
    filename = self.config['rpn_file']
    print 'loading {}'.format(filename)
    assert os.path.exists(filename), \
        'rpn data not found at: {}'.format(filename)
    with open(filename, 'rb') as f:
        box_list = cPickle.load(f)
    return self.create_roidb_from_box_list(box_list, gt_roidb)
```
After the RPN has produced proposals, this function combines the proposal RoIs with the ground truth into rpn_roidb, and finally merges gt_roidb with rpn_roidb via merge_roidbs before returning the result.
Loading rpn_roidb therefore first requires the RPN's proposals, which are then processed. Let us look at create_roidb_from_box_list:
```python
def create_roidb_from_box_list(self, box_list, gt_roidb):
    # box_list has one element per image: the array of boxes in that image
    assert len(box_list) == self.num_images, \
            'Number of boxes must match number of ground-truth images'
    roidb = []
    for i in xrange(self.num_images):
        boxes = box_list[i]
        num_boxes = boxes.shape[0]  # one row per box, columns are coordinates
        overlaps = np.zeros((num_boxes, self.num_classes), dtype=np.float32)

        if gt_roidb is not None and gt_roidb[i]['boxes'].size > 0:
            gt_boxes = gt_roidb[i]['boxes']
            gt_classes = gt_roidb[i]['gt_classes']
            gt_overlaps = bbox_overlaps(boxes.astype(np.float),
                                        gt_boxes.astype(np.float))
            argmaxes = gt_overlaps.argmax(axis=1)
            maxes = gt_overlaps.max(axis=1)
            I = np.where(maxes > 0)[0]
            overlaps[I, gt_classes[argmaxes[I]]] = maxes[I]

        overlaps = scipy.sparse.csr_matrix(overlaps)
        roidb.append({
            'boxes': boxes,
            'gt_classes': np.zeros((num_boxes,), dtype=np.int32),  # all zero
            'gt_overlaps': overlaps,
            'flipped': False,
            'seg_areas': np.zeros((num_boxes,), dtype=np.float32),
        })
    return roidb
```
box_list has one element per image: the list of boxes proposed for that image. gt_roidb holds the ground-truth boxes obtained during the RPN training stage.
bbox_overlaps computes the IoU of every proposal box against every ground-truth box, just as in anchor_target_layer.py:

overlap = (intersection area) / (proposal box area + gt box area - intersection area)

For each proposal, we take the ground-truth box with the largest overlap, look up its class, and write that overlap value into the column for that class index.
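A pure-NumPy sketch of this IoU computation (the real `bbox_overlaps` is a compiled Cython routine in utils/cython_bbox; the explicit loops below are only for illustration):

```python
import numpy as np

def bbox_overlaps_py(boxes, gt_boxes):
    """Return the (N, K) IoU matrix for N proposals vs K ground-truth boxes."""
    N, K = boxes.shape[0], gt_boxes.shape[0]
    overlaps = np.zeros((N, K), dtype=np.float64)
    for k in range(K):
        gt_area = ((gt_boxes[k, 2] - gt_boxes[k, 0] + 1) *
                   (gt_boxes[k, 3] - gt_boxes[k, 1] + 1))
        for n in range(N):
            # intersection width/height (boxes are inclusive pixel ranges)
            iw = (min(boxes[n, 2], gt_boxes[k, 2]) -
                  max(boxes[n, 0], gt_boxes[k, 0]) + 1)
            ih = (min(boxes[n, 3], gt_boxes[k, 3]) -
                  max(boxes[n, 1], gt_boxes[k, 1]) + 1)
            if iw > 0 and ih > 0:
                box_area = ((boxes[n, 2] - boxes[n, 0] + 1) *
                            (boxes[n, 3] - boxes[n, 1] + 1))
                inter = iw * ih
                overlaps[n, k] = inter / (box_area + gt_area - inter)
    return overlaps

# made-up example: one exact match and one half-width proposal
props = np.array([[0, 0, 9, 9],
                  [0, 0, 4, 9]], dtype=float)
gts = np.array([[0, 0, 9, 9]], dtype=float)
ious = bbox_overlaps_py(props, gts)
print(ious[0, 0], ious[1, 0])  # 1.0 0.5
```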
The roidb that Fast R-CNN builds here has the same structure as the RPN's. Its gt_overlaps looks like this (rows are box indices, columns are the classes 0-20):

| box | 0 (background) | 1 | 2 | ... | 20 |
| --- | --- | --- | --- | --- | --- |
| 1 | 0 | 0.8 | 0 | ... | 0 |
| 2 | 0 | 0 | 0.6 | ... | 0 |
| 3 | 0 | 0 | 0 | ... | 0 |
| ... | | | | | |
| n | 0 | 0 | 0 | ... | 0.8 |

(Row 3, being all zeros, is a background box.)
Rows are box indices, columns are classes. Note that for each box we record its overlap with a class, not with a particular ground-truth box. An image might contain two cats, and a proposal might have the same, maximal overlap with both of their ground-truth boxes. There is no need to remember which cat it overlaps most; knowing it overlaps the cat class most is enough, because the overlap is only compared against thresholds to discard boxes with little foreground. A similar situation arises later during box regression; more on that below.
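The class-wise filling can be reproduced in isolation; here `gt_overlaps` is a hypothetical 3-proposal by 2-gt IoU matrix, with the two gt boxes belonging to made-up classes 8 and 12:

```python
import numpy as np

num_classes = 21
gt_classes = np.array([8, 12])
# hypothetical IoU matrix: 3 proposals x 2 gt boxes
gt_overlaps = np.array([[0.8, 0.1],
                        [0.2, 0.6],
                        [0.0, 0.0]])
overlaps = np.zeros((3, num_classes), dtype=np.float32)
argmaxes = gt_overlaps.argmax(axis=1)   # best-matching gt for each proposal
maxes = gt_overlaps.max(axis=1)         # its IoU value
I = np.where(maxes > 0)[0]              # proposals touching at least one gt
overlaps[I, gt_classes[argmaxes[I]]] = maxes[I]
print(overlaps[0, 8], overlaps[1, 12], overlaps[2].sum())  # 0.8 0.6 0.0
```

The third proposal overlaps nothing, so its row stays all zeros, exactly like row 3 in the table above.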
```python
# Merge two roidbs (typically gt_roidb and rpn_roidb) image by image.
@staticmethod
def merge_roidbs(a, b):
    assert len(a) == len(b)
    for i in xrange(len(a)):
        a[i]['boxes'] = np.vstack((a[i]['boxes'], b[i]['boxes']))  # stack rows
        a[i]['gt_classes'] = np.hstack((a[i]['gt_classes'],
                                        b[i]['gt_classes']))       # concatenate
        a[i]['gt_overlaps'] = scipy.sparse.vstack([a[i]['gt_overlaps'],
                                                   b[i]['gt_overlaps']])
        a[i]['seg_areas'] = np.hstack((a[i]['seg_areas'],
                                       b[i]['seg_areas']))
    return a
```
So in the training stage Fast R-CNN's roidb contains both ground-truth boxes and proposal boxes, while at test time it contains only the proposal boxes. All proposals extracted in the RPN stage take part here.
get_roidb has one final step, get_training_roidb (see http://blog.csdn.net/xiamentingtao/article/details/78449751), which adds a few more attributes to each roidb entry, yielding the structure summarized in the table further below.
Next we look at train_net, defined in train.py. It wraps a Caffe Solver, defines snapshotting, and computes the regression targets for the boxes. Let us analyze it step by step.
```python
def train_net(solver_prototxt, roidb, output_dir,
              pretrained_model=None, max_iters=40000):
    """Train a Fast R-CNN network."""  # also used for the RPN
    roidb = filter_roidb(roidb)
    sw = SolverWrapper(solver_prototxt, roidb, output_dir,
                       pretrained_model=pretrained_model)

    print 'Solving...'
    model_paths = sw.train_model(max_iters)
    print 'done solving'
    return model_paths
```
First, entries without usable boxes are filtered out.
```python
def filter_roidb(roidb):
    """Remove roidb entries that have no usable RoIs."""

    def is_valid(entry):  # one entry corresponds to one image
        # Valid images have:
        #   (1) At least one foreground RoI OR
        #   (2) At least one background RoI
        overlaps = entry['max_overlaps']
        # find boxes with sufficient overlap
        fg_inds = np.where(overlaps >= cfg.TRAIN.FG_THRESH)[0]
        # Select background RoIs as those within [BG_THRESH_LO, BG_THRESH_HI)
        bg_inds = np.where((overlaps < cfg.TRAIN.BG_THRESH_HI) &
                           (overlaps >= cfg.TRAIN.BG_THRESH_LO))[0]
        # image is only valid if such boxes exist
        valid = len(fg_inds) > 0 or len(bg_inds) > 0
        return valid

    num = len(roidb)
    filtered_roidb = [entry for entry in roidb if is_valid(entry)]
    num_after = len(filtered_roidb)
    print 'Filtered {} roidb entries: {} -> {}'.format(num - num_after,
                                                       num, num_after)
    return filtered_roidb
```
This function defines an inner predicate is_valid that decides whether each roidb entry (image) is usable: it must contain at least one foreground box or one background box.
For the RPN the roidb consists entirely of ground truth, and every ground-truth box has overlap 1 with its own class, so nothing is filtered. When the roidb contains proposals, a box whose overlap falls within [BG_THRESH_LO, BG_THRESH_HI) counts as background, and one above FG_THRESH as foreground; an image needs at least one of either, or it is dropped. In the Fast R-CNN training stage the roidb also includes the ground-truth boxes, so again nothing is filtered; at test time this is not guaranteed. This is sensible: an entry that contains ground truth can obviously be used for training. (In training, then, this function appears to be a no-op.)
After filtering, filtered_roidb is returned.
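A pure-Python sketch of the is_valid predicate, using py-faster-rcnn's default thresholds (FG_THRESH = 0.5, BG_THRESH_HI = 0.5, BG_THRESH_LO = 0.1):

```python
FG_THRESH, BG_THRESH_HI, BG_THRESH_LO = 0.5, 0.5, 0.1

def is_valid(max_overlaps):
    """max_overlaps: one max-overlap value per box in the image."""
    fg = [o for o in max_overlaps if o >= FG_THRESH]
    bg = [o for o in max_overlaps if BG_THRESH_LO <= o < BG_THRESH_HI]
    return len(fg) > 0 or len(bg) > 0

assert is_valid([1.0, 0.3])   # a gt box (overlap 1.0) counts as foreground
assert is_valid([0.2])        # a usable background RoI
assert not is_valid([0.05])   # neither fg nor bg: the entry is filtered out
```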
Next, train_net constructs a SolverWrapper object sw; the object's initialization includes computing the regression targets.
```python
class SolverWrapper(object):
    """A simple wrapper around Caffe's solver.

    This wrapper gives us control over the snapshotting process, which we
    use to unnormalize the learned bounding-box regression weights.
    """

    def __init__(self, solver_prototxt, roidb, output_dir,
                 pretrained_model=None):
        """Initialize the SolverWrapper."""
        self.output_dir = output_dir

        if (cfg.TRAIN.HAS_RPN and cfg.TRAIN.BBOX_REG and
            cfg.TRAIN.BBOX_NORMALIZE_TARGETS):
            # RPN can only use precomputed normalization because there are no
            # fixed statistics to compute a priori
            assert cfg.TRAIN.BBOX_NORMALIZE_TARGETS_PRECOMPUTED

        # Compute the box regression targets, and get back the per-class
        # target means and standard deviations
        if cfg.TRAIN.BBOX_REG:
            print 'Computing bounding-box regression targets...'
            self.bbox_means, self.bbox_stds = \
                    rdl_roidb.add_bbox_regression_targets(roidb)
            print 'done'

        self.solver = caffe.SGDSolver(solver_prototxt)

        # Load the pre-trained weights
        if pretrained_model is not None:
            print ('Loading pretrained model '
                   'weights from {:s}').format(pretrained_model)
            self.solver.net.copy_from(pretrained_model)

        self.solver_param = caffe_pb2.SolverParameter()
        with open(solver_prototxt, 'rt') as f:
            pb2.text_format.Merge(f.read(), self.solver_param)

        # All the data preparation so far leads up to this line: hand the
        # roidb to the network's first layer. From here on we are into the
        # training process proper.
        self.solver.net.layers[0].set_roidb(roidb)
```
For the RPN, cfg.TRAIN.BBOX_REG is False, so no per-class target means and standard deviations are computed. This is as expected: at the start of RPN training there are only images and ground-truth boxes, no proposals, so there are no regression targets to compute yet; they are only needed in the Fast R-CNN stage. The operations below therefore apply to the roidb built from the RPN's proposals. config.py sets:
```python
__C.TRAIN.BBOX_NORMALIZE_TARGETS = True
# Deprecated (inside weights)
__C.TRAIN.BBOX_INSIDE_WEIGHTS = (1.0, 1.0, 1.0, 1.0)
# Normalize the targets using "precomputed" (or made up) means and stdevs
# (BBOX_NORMALIZE_TARGETS must also be True)
__C.TRAIN.BBOX_NORMALIZE_TARGETS_PRECOMPUTED = False
__C.TRAIN.BBOX_NORMALIZE_MEANS = (0.0, 0.0, 0.0, 0.0)
__C.TRAIN.BBOX_NORMALIZE_STDS = (0.1, 0.1, 0.2, 0.2)
```
So normalization is enabled in general, but because cfg.TRAIN.BBOX_REG is False during RPN training, no normalization happens in the RPN stage; it is only performed in the Fast R-CNN stage.
```python
def add_bbox_regression_targets(roidb):
    """Add information needed to train bounding-box regressors."""
    assert len(roidb) > 0
    assert 'max_classes' in roidb[0], 'Did you call prepare_roidb first?'

    num_images = len(roidb)  # one entry per image
    # Infer number of classes from the number of columns in gt_overlaps
    num_classes = roidb[0]['gt_overlaps'].shape[1]
    for im_i in xrange(num_images):
        rois = roidb[im_i]['boxes']
        max_overlaps = roidb[im_i]['max_overlaps']
        max_classes = roidb[im_i]['max_classes']
        roidb[im_i]['bbox_targets'] = \
                _compute_targets(rois, max_overlaps, max_classes)

    if cfg.TRAIN.BBOX_NORMALIZE_TARGETS_PRECOMPUTED:
        # Use fixed / precomputed "means" and "stds" instead of empirical values
        means = np.tile(
                np.array(cfg.TRAIN.BBOX_NORMALIZE_MEANS), (num_classes, 1))
        stds = np.tile(
                np.array(cfg.TRAIN.BBOX_NORMALIZE_STDS), (num_classes, 1))
    else:
        # Compute values needed for means and stds
        # var(x) = E(x^2) - E(x)^2
        class_counts = np.zeros((num_classes, 1)) + cfg.EPS
        sums = np.zeros((num_classes, 4))
        squared_sums = np.zeros((num_classes, 4))
        for im_i in xrange(num_images):
            targets = roidb[im_i]['bbox_targets']
            for cls in xrange(1, num_classes):
                cls_inds = np.where(targets[:, 0] == cls)[0]
                if cls_inds.size > 0:
                    class_counts[cls] += cls_inds.size
                    sums[cls, :] += targets[cls_inds, 1:].sum(axis=0)
                    squared_sums[cls, :] += \
                            (targets[cls_inds, 1:] ** 2).sum(axis=0)

        means = sums / class_counts
        stds = np.sqrt(squared_sums / class_counts - means ** 2)

    print 'bbox target means:'
    print means
    print means[1:, :].mean(axis=0)  # ignore bg class
    print 'bbox target stdevs:'
    print stds
    print stds[1:, :].mean(axis=0)  # ignore bg class

    # Normalize targets
    if cfg.TRAIN.BBOX_NORMALIZE_TARGETS:
        print "Normalizing targets"
        for im_i in xrange(num_images):
            targets = roidb[im_i]['bbox_targets']
            for cls in xrange(1, num_classes):
                cls_inds = np.where(targets[:, 0] == cls)[0]
                roidb[im_i]['bbox_targets'][cls_inds, 1:] -= means[cls, :]
                roidb[im_i]['bbox_targets'][cls_inds, 1:] /= stds[cls, :]
    else:
        print "NOT normalizing targets"

    # These values will be needed for making predictions
    # (the predicts will need to be unnormalized and uncentered).
    # Flattened from 21*4 to 1*(21*4) = 1*84
    return means.ravel(), stds.ravel()
```
This function first computes bbox_targets via _compute_targets, adding a new key, bbox_targets, to each roidb entry. It then uses the bbox_targets of all boxes extracted in the RPN stage to compute the per-class target means and standard deviations, of shape num_classes*4. Since many boxes contribute, most classes get meaningful statistics. Note that some of these boxes may be background, but the background class is deliberately skipped, so the first row of means and stds stays (0, 0, 0, 0).
The computed means and stds are then used to normalize bbox_targets. Finally both are returned flattened: the shape goes from 21*4 to 1*(21*4) = 1*84.
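The running-sum statistics (`var(x) = E(x^2) - E(x)^2`) can be checked against NumPy's built-in population statistics; the targets below are made-up values for a single class:

```python
import numpy as np

# hypothetical regression targets for one class: rows of (tx, ty, tw, th)
targets = np.array([[0.1, 0.2, 0.0, -0.1],
                    [0.3, 0.0, 0.2,  0.1],
                    [0.2, 0.1, 0.1,  0.0]])
count = targets.shape[0]
sums = targets.sum(axis=0)
squared_sums = (targets ** 2).sum(axis=0)
means = sums / count
stds = np.sqrt(squared_sums / count - means ** 2)  # var(x) = E(x^2) - E(x)^2
# same result as NumPy's built-in (population) mean and std
assert np.allclose(means, targets.mean(axis=0))
assert np.allclose(stds, targets.std(axis=0))
```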
Now let us look at _compute_targets. It computes the regression information for all the boxes of one image, which feeds directly into the regression loss later.
```python
def _compute_targets(rois, overlaps, labels):
    """Compute bounding-box regression targets for an image."""
    # rois holds only the boxes of the current image.
    # Indices of ground-truth ROIs
    gt_inds = np.where(overlaps == 1)[0]
    if len(gt_inds) == 0:
        # Bail if the image has no ground-truth ROIs
        return np.zeros((rois.shape[0], 5), dtype=np.float32)
    # Indices of examples for which we try to make predictions:
    # only ROIs whose overlap with a gt box exceeds BBOX_THRESH are used
    # as training samples for bbox regression
    ex_inds = np.where(overlaps >= cfg.TRAIN.BBOX_THRESH)[0]

    # Get IoU overlap between each ex ROI and gt ROI
    ex_gt_overlaps = bbox_overlaps(
        np.ascontiguousarray(rois[ex_inds, :], dtype=np.float),
        np.ascontiguousarray(rois[gt_inds, :], dtype=np.float))

    # Find which gt ROI each ex ROI has max overlap with:
    # this will be the ex ROI's gt target.
    # Rows are ex rois, columns are gt rois, entries are IoU values.
    gt_assignment = ex_gt_overlaps.argmax(axis=1)  # row-wise argmax
    gt_rois = rois[gt_inds[gt_assignment], :]  # the gt roi assigned to each ex roi
    ex_rois = rois[ex_inds, :]

    targets = np.zeros((rois.shape[0], 5), dtype=np.float32)
    targets[ex_inds, 0] = labels[ex_inds]  # first element is the class label
    targets[ex_inds, 1:] = bbox_transform(ex_rois, gt_rois)  # then the 4 offsets
    return targets
```
As prepared earlier, the Fast R-CNN roidb contains both the proposals extracted in the RPN stage and the ground-truth boxes, and the ground-truth entries have overlap exactly 1, so all ground-truth boxes are easy to locate.
How is each box matched to a ground-truth box? The IoU of every box against every ground-truth box is simply recomputed; the ground truth with the largest overlap is selected and the offsets are computed from it. The output is a 2-D array with one row per box and 5 columns: the class, followed by the four offsets.
All ground-truth boxes also take part in this overlap computation; each naturally overlaps itself most, so their regression targets come out as exactly 0. The non-zero rows of targets are therefore exactly the proposals' regression targets.
[Note] Only foreground proposals whose maximum overlap exceeds the threshold (0.5) get targets computed, but the returned targets array has shape n*5, where n counts all boxes, proposals and ground truth alike; rows that were not computed stay 0.
The offset computation is described in the PDF by 王斌_ICT linked from http://caffecn.cn/?/question/160
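For reference, `bbox_transform` (defined in fast_rcnn/bbox_transform.py) computes the offsets (tx, ty, tw, th) from box centers and sizes; the sketch below reproduces that logic and confirms the point above: a gt box matched with itself regresses to all zeros.

```python
import numpy as np

def bbox_transform(ex_rois, gt_rois):
    """Per row: tx, ty are center offsets scaled by the ex box size,
    tw, th are log size ratios."""
    ex_w = ex_rois[:, 2] - ex_rois[:, 0] + 1.0
    ex_h = ex_rois[:, 3] - ex_rois[:, 1] + 1.0
    ex_cx = ex_rois[:, 0] + 0.5 * ex_w
    ex_cy = ex_rois[:, 1] + 0.5 * ex_h

    gt_w = gt_rois[:, 2] - gt_rois[:, 0] + 1.0
    gt_h = gt_rois[:, 3] - gt_rois[:, 1] + 1.0
    gt_cx = gt_rois[:, 0] + 0.5 * gt_w
    gt_cy = gt_rois[:, 1] + 0.5 * gt_h

    tx = (gt_cx - ex_cx) / ex_w
    ty = (gt_cy - ex_cy) / ex_h
    tw = np.log(gt_w / ex_w)
    th = np.log(gt_h / ex_h)
    return np.vstack((tx, ty, tw, th)).transpose()

# a gt box matched with itself regresses to all zeros
box = np.array([[10.0, 10.0, 59.0, 59.0]])
print(bbox_transform(box, box))  # [[0. 0. 0. 0.]]
```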
The Fast R-CNN roidb obtained this way contains the following keys:

| key in roidb[img_index] | value |
| --- | --- |
| boxes | box coordinates, a box_num*4 np array of (x1, y1, x2, y2) |
| gt_overlaps | each box's overlap score for every class, a box_num*class_num matrix |
| gt_classes | the true class of every box, a list of length box_num |
| flipped | whether the image is flipped |
| image | the path of the image, a string |
| width | image width |
| height | image height |
| max_overlaps | the maximum score of each box over all classes, length box_num |
| max_classes | the class attaining each box's maximum score, length box_num |
| bbox_targets | each box's class plus the 4 offsets to the closest gt box, 5 columns: (c, tx, ty, tw, th) |
Next the pre-trained ImageNet weights are loaded, and the prepared roidb is handed to the first layer of the network: the set_roidb method in layer.py assigns the roidb to the network's first layer (RoIDataLayer) and shuffles its order.
This step is essential: the roidb held by the imdb (pascal_voc) instance must reach the layer before the network can propagate forward.
In RoIDataLayer's forward method, the layer's _roidb is copied into RoIDataLayer's top blobs.
Finally, let us look at the last piece of data preparation: the RoIDataLayer class in layer.py. Keep in mind that the roidb fed into this layer covers all images; watch how it forms minibatches.
For the RPN, the first layer of the network is:
```
layer {
  name: 'input-data'
  type: 'Python'
  top: 'data'
  top: 'im_info'
  top: 'gt_boxes'
  python_param {
    module: 'roi_data_layer.layer'
    layer: 'RoIDataLayer'
    param_str: "'num_classes': 21"
  }
}
```
For Fast R-CNN, the first layer is:
```
layer {
  name: 'data'
  type: 'Python'
  top: 'data'
  top: 'rois'
  top: 'labels'
  top: 'bbox_targets'
  top: 'bbox_inside_weights'
  top: 'bbox_outside_weights'
  python_param {
    module: 'roi_data_layer.layer'
    layer: 'RoIDataLayer'
    param_str: "'num_classes': 21"
  }
}
```
First, set_roidb:
```python
def set_roidb(self, roidb):
    """Set the roidb to be used by this layer during training."""
    self._roidb = roidb
    self._shuffle_roidb_inds()
    if cfg.TRAIN.USE_PREFETCH:
        self._blob_queue = Queue(10)
        self._prefetch_process = BlobFetcher(self._blob_queue,
                                             self._roidb,
                                             self._num_classes)
        self._prefetch_process.start()
        # Terminate the child process when the parent exits
        def cleanup():
            print 'Terminating BlobFetcher'
            self._prefetch_process.terminate()
            self._prefetch_process.join()
        import atexit
        atexit.register(cleanup)
```
This stores the roidb, groups images with similar aspect ratios together (really just two cases, landscape or portrait), which helps computation speed, and then randomly shuffles the roidb. USE_PREFETCH is False in config.py, so the prefetching branch can be ignored for now.
The important method is setup, which fixes the batch layout:
```python
def setup(self, bottom, top):
    """Setup the RoIDataLayer."""
    # parse the layer parameter string, which must be valid YAML
    layer_params = yaml.load(self.param_str_)
    self._num_classes = layer_params['num_classes']
    self._name_to_top_map = {}

    # data blob: holds a batch of N images, each with 3 channels;
    # for the RPN this is 1*3*600*1000, for Fast R-CNN 2*3*600*1000
    idx = 0
    top[idx].reshape(cfg.TRAIN.IMS_PER_BATCH, 3,
                     max(cfg.TRAIN.SCALES), cfg.TRAIN.MAX_SIZE)
    self._name_to_top_map['data'] = idx
    idx += 1

    if cfg.TRAIN.HAS_RPN:  # tops for the RPN
        top[idx].reshape(1, 3)
        self._name_to_top_map['im_info'] = idx  # each row is 1*3: (h, w, scale)
        idx += 1

        top[idx].reshape(1, 4)
        self._name_to_top_map['gt_boxes'] = idx  # each gt_boxes row is 1*4
        idx += 1
    else:  # not using RPN: tops for Fast R-CNN
        # rois blob: holds R regions of interest, each is a 5-tuple
        # (n, x1, y1, x2, y2) specifying an image batch index n and a
        # rectangle (x1, y1, x2, y2)
        top[idx].reshape(1, 5)
        self._name_to_top_map['rois'] = idx
        idx += 1

        # labels blob: R categorical labels in [0, ..., K] for K foreground
        # classes plus background (0)
        top[idx].reshape(1)
        self._name_to_top_map['labels'] = idx
        idx += 1

        if cfg.TRAIN.BBOX_REG:  # True by default for Fast R-CNN
            # bbox_targets blob: R bounding-box regression targets with 4
            # targets per class; each row is 1*(num_classes*4) = 1*84
            top[idx].reshape(1, self._num_classes * 4)
            self._name_to_top_map['bbox_targets'] = idx
            idx += 1

            # bbox_inside_weights blob: At most 4 targets per roi are active;
            # this binary vector specifies the subset of active targets
            top[idx].reshape(1, self._num_classes * 4)
            self._name_to_top_map['bbox_inside_weights'] = idx
            idx += 1

            # bbox_outside_weights blob: also 1*(num_classes*4) = 1*84 per row
            top[idx].reshape(1, self._num_classes * 4)
            self._name_to_top_map['bbox_outside_weights'] = idx
            idx += 1

    print 'RoiDataLayer: name_to_top:', self._name_to_top_map
    assert len(top) == len(self._name_to_top_map)
```
Here the batch size and the shape of each top are fixed: the RPN processes one image per iteration, Fast R-CNN two (check cfg.TRAIN.IMS_PER_BATCH).
Next is forward, where the data transfer actually begins. Notice that the roidb never stores pixel values; the images themselves are only read from this point on.
```python
def forward(self, bottom, top):
    """Get blobs and copy them into this layer's top blob vector."""
    blobs = self._get_next_minibatch()

    for blob_name, blob in blobs.iteritems():
        top_ind = self._name_to_top_map[blob_name]
        # Reshape net's input blobs
        top[top_ind].reshape(*(blob.shape))
        # Copy data into net's input blobs
        top[top_ind].data[...] = blob.astype(np.float32, copy=False)
```
It first fetches one batch of blobs via `blobs = self._get_next_minibatch()`.
```python
def _get_next_minibatch(self):
    """Return the blobs to be used for the next minibatch.

    If cfg.TRAIN.USE_PREFETCH is True, then blobs will be computed in a
    separate process and made available through self._blob_queue.
    """
    if cfg.TRAIN.USE_PREFETCH:
        return self._blob_queue.get()
    else:
        db_inds = self._get_next_minibatch_inds()  # indices of one batch of images
        minibatch_db = [self._roidb[i] for i in db_inds]
        return get_minibatch(minibatch_db, self._num_classes)
```
Let us look at _get_next_minibatch_inds:
```python
def _get_next_minibatch_inds(self):
    """Return the roidb indices for the next minibatch."""
    if self._cur + cfg.TRAIN.IMS_PER_BATCH >= len(self._roidb):
        self._shuffle_roidb_inds()

    db_inds = self._perm[self._cur:self._cur + cfg.TRAIN.IMS_PER_BATCH]
    self._cur += cfg.TRAIN.IMS_PER_BATCH
    return db_inds
```
self._cur is reset to 0 whenever the roidb order is shuffled; it marks the current image index. This function reads and returns the image indices of one batch, and advances self._cur to the start of the next batch.
Once all images have been consumed over repeated iterations, the whole image order is reshuffled and indices are drawn afresh. One roidb entry corresponds to one image, which is why the two terms are sometimes used interchangeably.
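The index-cycling logic can be sketched as a small class (`MiniSampler` is a hypothetical name; the real code keeps `_perm` and `_cur` on the layer itself):

```python
import numpy as np

class MiniSampler:
    """Cycle through a shuffled roidb, IMS_PER_BATCH indices at a time."""
    def __init__(self, num_images, ims_per_batch):
        self.n, self.batch = num_images, ims_per_batch
        self._shuffle()

    def _shuffle(self):
        self._perm = np.random.permutation(np.arange(self.n))
        self._cur = 0

    def next_inds(self):
        if self._cur + self.batch >= self.n:  # epoch exhausted: reshuffle
            self._shuffle()
        inds = self._perm[self._cur:self._cur + self.batch]
        self._cur += self.batch
        return inds

s = MiniSampler(num_images=5, ims_per_batch=2)
seen = [s.next_inds() for _ in range(10)]
print([len(b) for b in seen])  # every minibatch holds 2 image indices
```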
With the indices of one batch ready, the pixel data can finally be read; see get_minibatch, which we cover in《Faster RCNN minibatch.py解讀》.
In summary, for one training minibatch all the tops store the information of a batch of RoIs (that is, __C.TRAIN.BATCH_SIZE = 128 of them), with a foreground-to-background box ratio of 1:3. Qualified foreground boxes get their regression targets computed; apart from the ground-truth boxes themselves, the selected foreground boxes all carry targets, while background boxes have none (all zeros). The shapes of the blob's keys are as laid out in setup.
The entries that actually take part in the loss are reflected in bbox_targets, bbox_inside_weights and bbox_outside_weights (i.e. the non-zero entries).
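The expansion of each RoI's 5-element target row into the 84-wide bbox_targets / bbox_inside_weights blobs happens in minibatch.py; the sketch below reproduces that scatter step (`expand_bbox_targets` is a hypothetical name for what the repo calls `_get_bbox_regression_labels`):

```python
import numpy as np

def expand_bbox_targets(bbox_target_data, num_classes):
    """Scatter each RoI's 4 targets into a (num_classes*4)-wide row
    at the offset of its class; background rows (class 0) stay zero."""
    clss = bbox_target_data[:, 0].astype(int)
    bbox_targets = np.zeros((clss.size, 4 * num_classes), dtype=np.float32)
    bbox_inside_weights = np.zeros(bbox_targets.shape, dtype=np.float32)
    for ind in np.where(clss > 0)[0]:
        start = 4 * clss[ind]
        bbox_targets[ind, start:start + 4] = bbox_target_data[ind, 1:]
        bbox_inside_weights[ind, start:start + 4] = (1.0, 1.0, 1.0, 1.0)
    return bbox_targets, bbox_inside_weights

# one hypothetical foreground RoI of class 2 and one background RoI
data = np.array([[2, 0.1, 0.2, 0.3, 0.4],
                 [0, 0.0, 0.0, 0.0, 0.0]])
t, w = expand_bbox_targets(data, num_classes=21)
print(t.shape, t[0, 8:12], w[1].sum())  # (2, 84) [0.1 0.2 0.3 0.4] 0.0
```

This is why only the 4 columns belonging to a foreground RoI's class end up non-zero, with the inside weights marking exactly those active entries.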
What remains is the data copy. With that, data preparation is complete: the first, data-layer input of the network has been fed.
One small open question:
1. In setup, gt_boxes is shaped 1*4, yet in the blobs returned by _get_next_minibatch, gt_boxes is 1*5: (x1, y1, x2, y2, c). This looks inconsistent; the likely resolution is that forward reshapes every top to the actual blob shape anyway, so the dimensions set in setup are only placeholders.