minibatch.py computes minibatch blobs for training a Fast R-CNN network. Unlike the roidb, a minibatch does not store the complete original images; it stores the 4-D blob obtained by transforming the images, together with the proposals sampled from those images and the corresponding labels.
minibatch.py is used in two places in the full Faster R-CNN training: once as the data input at the start of the RPN, and once as the data input of Fast R-CNN. See the very top of stage1_rpn_train.pt and stage1_fast_rcnn_train.pt respectively:
stage1_rpn_train.pt:

```
layer {
  name: 'input-data'
  type: 'Python'
  top: 'data'
  top: 'im_info'
  top: 'gt_boxes'
  python_param {
    module: 'roi_data_layer.layer'
    layer: 'RoIDataLayer'
    param_str: "'num_classes': 21"
  }
}
```
stage1_fast_rcnn_train.pt:

```
name: "VGG_CNN_M_1024"
layer {
  name: 'data'
  type: 'Python'
  top: 'data'
  top: 'rois'
  top: 'labels'
  top: 'bbox_targets'
  top: 'bbox_inside_weights'
  top: 'bbox_outside_weights'
  python_param {
    module: 'roi_data_layer.layer'
    layer: 'RoIDataLayer'
    param_str: "'num_classes': 21"
  }
}
```
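For reference, RoIDataLayer turns that `param_str` string into a dict during layer setup (the actual layer parses it with yaml; the sketch below gets an equivalent result with `ast.literal_eval` on a matching dict literal):

```python
import ast

def parse_param_str(param_str):
    # The real RoIDataLayer parses param_str with yaml; this sketch wraps
    # the string in braces and evaluates it as a Python dict literal.
    return ast.literal_eval('{' + param_str + '}')

params = parse_param_str("'num_classes': 21")
print(params['num_classes'])  # 21
```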
As shown above, both use the same data layer, roi_data_layer.layer. In layer.py, look at the forward pass:

```python
def forward(self, bottom, top):
    """Get blobs and copy them into this layer's top blob vector."""
    blobs = self._get_next_minibatch()

    for blob_name, blob in blobs.iteritems():
        top_ind = self._name_to_top_map[blob_name]
        # Reshape net's input blobs
        top[top_ind].reshape(*(blob.shape))
        # Copy data into net's input blobs
        top[top_ind].data[...] = blob.astype(np.float32, copy=False)

def _get_next_minibatch(self):
    """Return the blobs to be used for the next minibatch.

    If cfg.TRAIN.USE_PREFETCH is True, then blobs will be computed in a
    separate process and made available through self._blob_queue.
    """
    if cfg.TRAIN.USE_PREFETCH:
        return self._blob_queue.get()
    else:
        db_inds = self._get_next_minibatch_inds()
        minibatch_db = [self._roidb[i] for i in db_inds]
        return get_minibatch(minibatch_db, self._num_classes)
```
Here we find get_minibatch, which lives in minibatch.py.

When reading this file, it is best to start from get_minibatch. Let's begin.

Input to get_minibatch: roidb is a list in which each element is a dict, and each dict holds the information of one image. Its main entries (all of which are read by the code below) are 'image', 'flipped', 'boxes', 'gt_classes', 'max_classes', 'max_overlaps' and 'bbox_targets'.
num_classes is 21 for pascal_voc.
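Concretely, a single roidb entry might look like the following sketch (all values here are made up; the keys mirror the ones read by get_minibatch and _sample_rois below):

```python
import numpy as np

# Hypothetical roidb entry for one image with two boxes.
roidb_entry = {
    'image': '/path/to/000001.jpg',    # path of the image on disk
    'flipped': False,                  # horizontal-flip flag
    'boxes': np.array([[10, 10, 100, 100],
                       [50, 60, 200, 220]], dtype=np.uint16),  # (x1, y1, x2, y2)
    'gt_classes': np.array([12, 0], dtype=np.int32),           # 0 = not a GT box
    'max_classes': np.array([12, 12]),                         # class of best-overlap GT
    'max_overlaps': np.array([1.0, 0.4], dtype=np.float32),    # IoU with best GT
    'bbox_targets': np.zeros((2, 5), dtype=np.float32),        # (cls, tx, ty, tw, th)
}
print(roidb_entry['boxes'].shape)  # (2, 4)
```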
```python
def get_minibatch(roidb, num_classes):
    """Given a roidb, construct a minibatch sampled from it."""
    # The roidb passed in may hold a single image or several images
    num_images = len(roidb)
    # Sample random scales to use for each image in this batch
    random_scale_inds = npr.randint(0, high=len(cfg.TRAIN.SCALES),
                                    size=num_images)
    assert(cfg.TRAIN.BATCH_SIZE % num_images == 0), \
        'num_images ({}) must divide BATCH_SIZE ({})'. \
        format(num_images, cfg.TRAIN.BATCH_SIZE)
    # In Fast R-CNN this is 128 / 2 = 64 RoIs per image
    rois_per_image = cfg.TRAIN.BATCH_SIZE / num_images
    # FG_FRACTION is 0.25 = 1/4
    fg_rois_per_image = np.round(cfg.TRAIN.FG_FRACTION * rois_per_image)

    # Get the input image blob, formatted for caffe:
    # preprocess the images in the roidb (resize, recording the resize scale),
    # then use im_list_to_blob to pack them into the N x C x H x W 4-D
    # structure that caffe expects
    im_blob, im_scales = _get_image_blob(roidb, random_scale_inds)

    blobs = {'data': im_blob}

    if cfg.TRAIN.HAS_RPN:  # used by the RPN
        assert len(im_scales) == 1, "Single batch only"
        assert len(roidb) == 1, "Single batch only"
        # gt boxes: (x1, y1, x2, y2, cls)
        gt_inds = np.where(roidb[0]['gt_classes'] != 0)[0]
        gt_boxes = np.empty((len(gt_inds), 5), dtype=np.float32)
        gt_boxes[:, 0:4] = roidb[0]['boxes'][gt_inds, :] * im_scales[0]
        gt_boxes[:, 4] = roidb[0]['gt_classes'][gt_inds]
        blobs['gt_boxes'] = gt_boxes
        # im_info: an arbitrary P x Q image is first reshaped to a fixed
        # M x N before entering Faster R-CNN; im_info = [M, N, scale_factor]
        # records everything about that resize
        blobs['im_info'] = np.array(
            [[im_blob.shape[2], im_blob.shape[3], im_scales[0]]],
            dtype=np.float32)
    else:  # not using RPN; used by Fast R-CNN
        # Now, build the region of interest and label blobs
        rois_blob = np.zeros((0, 5), dtype=np.float32)
        labels_blob = np.zeros((0), dtype=np.float32)
        bbox_targets_blob = np.zeros((0, 4 * num_classes), dtype=np.float32)
        bbox_inside_blob = np.zeros(bbox_targets_blob.shape, dtype=np.float32)
        # all_overlaps = []
        for im_i in xrange(num_images):
            # For each image in the roidb, draw a random sample of RoIs to
            # build the foreground and background examples; returns the RoI
            # (proposal) coordinates, their classes, the bbox regression
            # targets and the bbox inside weights
            labels, overlaps, im_rois, bbox_targets, bbox_inside_weights \
                = _sample_rois(roidb[im_i], fg_rois_per_image, rois_per_image,
                               num_classes)

            # Add to RoIs blob
            # the im_rois returned by _sample_rois are not yet scaled,
            # so scale them here first
            rois = _project_im_rois(im_rois, im_scales[im_i])
            batch_ind = im_i * np.ones((rois.shape[0], 1))
            # prepend the image index, giving 5 columns (index, x1, y1, x2, y2)
            rois_blob_this_image = np.hstack((batch_ind, rois))
            # stack all the boxes vertically, like this:
            #   n  x1 y1 x2 y2
            #   0  .. .. .. ..
            #   0  .. .. .. ..
            #   :  :  :  :  :
            #   1  .. .. .. ..
            #   1  .. .. .. ..
            rois_blob = np.vstack((rois_blob, rois_blob_this_image))

            # Add to labels, bbox targets, and bbox loss blobs
            labels_blob = np.hstack((labels_blob, labels))  # 1-D row vector
            bbox_targets_blob = np.vstack((bbox_targets_blob, bbox_targets))
            bbox_inside_blob = np.vstack((bbox_inside_blob,
                                          bbox_inside_weights))
            # bbox_targets_blob is stacked vertically as N x 4K; only the
            # columns of the matching class are non-zero:
            #   tx1 ty1 tw1 th1 tx2 ty2 tw2 th2 tx3 ty3 tw3 th3
            #   0   0   0   0   0   0   0   0   0   0   0   0
            #   0   0   0   0   0.2 0.3 1.0 0.5 0   0   0   0
            #   0   0   0   0   0   0   0   0   0   0   0   0
            #   0   0   0   0   0   0   0   0   0.5 0.5 1.0 1.0
            #   0   0   0   0   0   0   0   0   0   0   0   0
            # bbox_inside_blob has the same shape as bbox_targets_blob,
            # with the non-zero entries above replaced by 1
            # all_overlaps = np.hstack((all_overlaps, overlaps))

        # For debug visualizations
        # _vis_minibatch(im_blob, rois_blob, labels_blob, all_overlaps)

        blobs['rois'] = rois_blob
        blobs['labels'] = labels_blob

        if cfg.TRAIN.BBOX_REG:
            blobs['bbox_targets'] = bbox_targets_blob
            blobs['bbox_inside_weights'] = bbox_inside_blob
            blobs['bbox_outside_weights'] = \
                np.array(bbox_inside_blob > 0).astype(np.float32)
            # bbox_outside_weights thus comes out identical to
            # bbox_inside_blob here

    return blobs
```
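The stacking of rois_blob described in the comments can be reproduced in isolation (two hypothetical images with made-up box coordinates):

```python
import numpy as np

# Sketch of the rois_blob layout built in the loop above: per image,
# prepend the batch index to each (x1, y1, x2, y2) row, then stack.
rois_blob = np.zeros((0, 5), dtype=np.float32)
per_image_rois = [
    np.array([[10, 10, 50, 50], [20, 30, 60, 80]], dtype=np.float32),  # image 0
    np.array([[ 5, 15, 40, 45]], dtype=np.float32),                    # image 1
]
for im_i, rois in enumerate(per_image_rois):
    batch_ind = im_i * np.ones((rois.shape[0], 1), dtype=np.float32)
    rois_blob = np.vstack((rois_blob, np.hstack((batch_ind, rois))))

print(rois_blob[:, 0])  # [0. 0. 1.] -- the first column is the image index
```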
_sample_rois is called from get_minibatch, and the argument passed in is the roidb of a single image. Its main job is to draw a random sample of RoIs to build the foreground and background examples. This matters: since the generated proposals are mostly background, the foreground-to-background ratio is chosen as 1:3, so each image contributes 1/4 * 64 = 16 foreground boxes and 3/4 * 64 = 48 background boxes.

One more thing worth noting about the random sampling: a foreground box may be a ground-truth box itself. Such a box still takes part in classification, but contributes nothing to the regression, since its regression targets are all 0. Would it be more appropriate to change

```python
fg_inds = np.where(overlaps >= cfg.TRAIN.FG_THRESH)[0]
```

to

```python
fg_inds = np.where((overlaps >= cfg.TRAIN.FG_THRESH) & (overlaps < 1.0))[0]
```

so that everything selected is an RPN proposal?
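The difference between the two selections can be checked with a small numpy experiment (FG_THRESH is 0.5 by default; note that numpy masks combine with element-wise `&`):

```python
import numpy as np

# overlaps for four hypothetical RoIs; the first is a ground-truth box
# (IoU exactly 1.0 with itself).
FG_THRESH = 0.5
overlaps = np.array([1.0, 0.7, 0.5, 0.3])

fg_inds = np.where(overlaps >= FG_THRESH)[0]                        # keeps the GT box
fg_inds_no_gt = np.where((overlaps >= FG_THRESH) & (overlaps < 1.0))[0]

print(fg_inds)        # [0 1 2]
print(fg_inds_no_gt)  # [1 2]
```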
```python
def _sample_rois(roidb, fg_rois_per_image, rois_per_image, num_classes):
    """Generate a random sample of RoIs comprising foreground and background
    examples."""
    # label = class RoI has max overlap with
    labels = roidb['max_classes']
    overlaps = roidb['max_overlaps']
    rois = roidb['boxes']

    # Select foreground RoIs as those with >= FG_THRESH overlap
    fg_inds = np.where(overlaps >= cfg.TRAIN.FG_THRESH)[0]
    # Guard against the case when an image has fewer than fg_rois_per_image
    # foreground RoIs
    fg_rois_per_this_image = np.minimum(fg_rois_per_image, fg_inds.size)
    # Sample foreground regions without replacement
    if fg_inds.size > 0:
        fg_inds = npr.choice(fg_inds, size=fg_rois_per_this_image,
                             replace=False)

    # Select background RoIs as those within [BG_THRESH_LO, BG_THRESH_HI)
    bg_inds = np.where((overlaps < cfg.TRAIN.BG_THRESH_HI) &
                       (overlaps >= cfg.TRAIN.BG_THRESH_LO))[0]
    # Compute number of background RoIs to take from this image (guarding
    # against there being fewer than desired)
    bg_rois_per_this_image = rois_per_image - fg_rois_per_this_image
    bg_rois_per_this_image = np.minimum(bg_rois_per_this_image,
                                        bg_inds.size)
    # Sample background regions without replacement
    if bg_inds.size > 0:
        bg_inds = npr.choice(bg_inds, size=bg_rois_per_this_image,
                             replace=False)

    # The indices that we're selecting (both fg and bg)
    keep_inds = np.append(fg_inds, bg_inds)
    # Select sampled values from various arrays:
    labels = labels[keep_inds]
    # Clamp labels for the background RoIs to 0
    labels[fg_rois_per_this_image:] = 0
    overlaps = overlaps[keep_inds]
    rois = rois[keep_inds]

    # _get_bbox_regression_labels expands the compact targets into
    # bbox_targets and bbox_inside_weights, both N x 4K ndarrays, where N is
    # the size of keep_inds, i.e. the number of samples in the minibatch
    bbox_targets, bbox_inside_weights = _get_bbox_regression_labels(
        roidb['bbox_targets'][keep_inds, :], num_classes)

    return labels, overlaps, rois, bbox_targets, bbox_inside_weights
```
_get_bbox_regression_labels(bbox_target_data, num_classes) extracts the 4-coordinate encoding of each regression target from bbox_target_data as bbox_targets, and generates bbox_inside_weights alongside it; both are N x 4K ndarrays, where N is the size of keep_inds, i.e. the number of samples in the minibatch.
bbox_target_data is N x 5; each row is (c, tx, ty, tw, th).
```python
def _get_bbox_regression_labels(bbox_target_data, num_classes):
    """Bounding-box regression targets are stored in a compact form in the
    roidb.

    This function expands those targets into the 4-of-4*K representation
    used by the network (i.e. only one class has non-zero targets). The loss
    weights are similarly expanded.

    Returns:
        bbox_target_data (ndarray): N x 4K blob of regression targets
        bbox_inside_weights (ndarray): N x 4K blob of loss weights
    """
    clss = bbox_target_data[:, 0]
    bbox_targets = np.zeros((clss.size, 4 * num_classes), dtype=np.float32)
    bbox_inside_weights = np.zeros(bbox_targets.shape, dtype=np.float32)
    inds = np.where(clss > 0)[0]  # pick the foreground boxes
    for ind in inds:
        cls = int(clss[ind])  # cast: clss is float32, but we index with it
        start = 4 * cls
        end = start + 4
        bbox_targets[ind, start:end] = bbox_target_data[ind, 1:]
        bbox_inside_weights[ind, start:end] = cfg.TRAIN.BBOX_INSIDE_WEIGHTS
    return bbox_targets, bbox_inside_weights
```
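To make the 4-of-4K expansion concrete, here is a self-contained numpy rerun of the loop above with num_classes = 3 and two sampled RoIs (class indices and target values are made up; the 1.0 stands in for cfg.TRAIN.BBOX_INSIDE_WEIGHTS, whose default is (1.0, 1.0, 1.0, 1.0)):

```python
import numpy as np

num_classes = 3
bbox_target_data = np.array([
    [0, 0.0, 0.0, 0.0, 0.0],   # background RoI: class 0, no targets
    [2, 0.2, 0.3, 1.0, 0.5],   # foreground RoI of class 2
], dtype=np.float32)

bbox_targets = np.zeros((2, 4 * num_classes), dtype=np.float32)
bbox_inside_weights = np.zeros_like(bbox_targets)
for ind in np.where(bbox_target_data[:, 0] > 0)[0]:
    cls = int(bbox_target_data[ind, 0])
    start, end = 4 * cls, 4 * cls + 4
    bbox_targets[ind, start:end] = bbox_target_data[ind, 1:]
    bbox_inside_weights[ind, start:end] = 1.0

# Only the 4 columns belonging to class 2 (columns 8..11) are non-zero
print(bbox_targets[1])
```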
_get_image_blob resizes the images of the roidb accordingly and returns a uniform blob, i.e. the N x C x H x W (here 2 x 3 x 600 x 1000) 4-D structure:

```python
def _get_image_blob(roidb, scale_inds):
    """Builds an input blob from the images in the roidb at the specified
    scales."""
    num_images = len(roidb)
    processed_ims = []
    im_scales = []
    for i in xrange(num_images):
        im = cv2.imread(roidb[i]['image'])  # shape: h * w * c
        if roidb[i]['flipped']:
            im = im[:, ::-1, :]  # horizontal flip
        target_size = cfg.TRAIN.SCALES[scale_inds[i]]
        im, im_scale = prep_im_for_blob(im, cfg.PIXEL_MEANS, target_size,
                                        cfg.TRAIN.MAX_SIZE)
        im_scales.append(im_scale)
        processed_ims.append(im)

    # Create a blob to hold the input images
    blob = im_list_to_blob(processed_ims)

    return blob, im_scales
```
In the code above, im_list_to_blob converts a list of images into a standard 4-D array, zero-padding them so that all images end up the same size.
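A minimal sketch of that padding behaviour (a simplified stand-in, not the actual utils.blob implementation): pad every image with zeros up to the largest height and width in the list, then stack and move channels first.

```python
import numpy as np

def im_list_to_blob_sketch(ims):
    # Largest height and width over all images in the list
    max_shape = np.array([im.shape for im in ims]).max(axis=0)
    blob = np.zeros((len(ims), max_shape[0], max_shape[1], 3),
                    dtype=np.float32)
    for i, im in enumerate(ims):
        # Copy each image into the top-left corner; the rest stays zero
        blob[i, :im.shape[0], :im.shape[1], :] = im
    # N x H x W x C  ->  N x C x H x W
    return blob.transpose(0, 3, 1, 2)

ims = [np.ones((600, 800, 3)), np.ones((500, 1000, 3))]
print(im_list_to_blob_sketch(ims).shape)  # (2, 3, 600, 1000)
```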
prep_im_for_blob resizes an image so that its shortest side becomes target_size while its longest side does not exceed cfg.TRAIN.MAX_SIZE, and returns the resize scale:
```python
def prep_im_for_blob(im, pixel_means, target_size, max_size):
    """Mean subtract and scale an image for use in a blob."""
    im = im.astype(np.float32, copy=False)
    im -= pixel_means
    im_shape = im.shape
    im_size_min = np.min(im_shape[0:2])
    im_size_max = np.max(im_shape[0:2])
    im_scale = float(target_size) / float(im_size_min)
    # Prevent the biggest axis from being more than MAX_SIZE
    if np.round(im_scale * im_size_max) > max_size:
        im_scale = float(max_size) / float(im_size_max)
    im = cv2.resize(im, None, None, fx=im_scale, fy=im_scale,
                    interpolation=cv2.INTER_LINEAR)
    return im, im_scale
```
So to map coordinates from the original image into the standard roidb data format, it suffices to multiply by im_scale. Conversely, to go back to the original image, just divide by im_scale.
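A quick round-trip check of this scaling, using a hypothetical 375 x 500 image and target_size = 600 (the default TRAIN.SCALES value):

```python
import numpy as np

orig_h, orig_w = 375, 500
im_scale = 600.0 / min(orig_h, orig_w)   # shortest side -> 600, so scale = 1.6

box = np.array([10.0, 20.0, 110.0, 220.0])   # (x1, y1, x2, y2) in the original image
box_in_blob = box * im_scale                 # forward projection (as in _project_im_rois)
box_back = box_in_blob / im_scale            # inverse projection

print(np.allclose(box_back, box))  # True
```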
References
- http://blog.csdn.net/iamzhangzhuping/article/details/51393032
- faster-rcnn: get_minibatch based on roidb (data preparation)
- faster rcnn source-code walkthrough (part 6): minibatch