Training
The discussion below follows the 3000fps MATLAB implementation.
Overall goal
- For each of the 68 landmarks we need to train 5 random trees, each 4 levels deep; together these trees form the so-called random forest.
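As a quick size check (my own back-of-the-envelope numbers, assuming the root counts as depth 1, which matches max_numnodes = 2^params.max_depth - 1 in the code below):

% Back-of-the-envelope check (illustrative only).
num_landmarks = 68; num_trees = 5; max_depth = 4;
nodes_per_tree  = 2^max_depth - 1;              % 15 nodes in a full depth-4 tree
leaves_per_tree = 2^(max_depth - 1);            % 8 leaves per tree
total_trees  = num_landmarks * num_trees;       % 340 trees per stage
total_leaves = total_trees * leaves_per_tree;   % up to 2720 leaf nodes per stage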
Starting the training
1. Allocating the samples
In fact, for each landmark, training the random forest means drawing a subset of the available samples and features and growing a number of trees from them.
We have N samples (here N = 1622), each an image with its shape, and an essentially unlimited pool of pixel-difference features. In standard bagging, each tree would draw a bootstrap sample (sampling with replacement) from the N samples plus a random subset of M features, and be trained on those. For simplicity, this implementation instead cuts the N samples into 5 roughly equal chunks, one per tree, allowing neighbouring chunks to overlap; the same chunks are then reused as the shared material for all 68 landmarks (a worked numeric example follows the code below).
Diagram:
Code:
dbsize = length(Tr_Data);
% rf = cell(1, params.max_numtrees);
overlap_ratio = params.bagging_overlap;   % overlap ratio between neighbouring chunks
Q = floor(double(dbsize)/((1-params.bagging_overlap)*(params.max_numtrees))); % number of samples assigned to each tree
Data = cell(1, params.max_numtrees);      % sample data prepared for training each tree
for t = 1:params.max_numtrees
    % calculate the number of samples for each random tree
    % train t-th random tree
    is = max(floor((t-1)*Q - (t-1)*Q*overlap_ratio + 1), 1);
    ie = min(is + Q, dbsize);
    Data{t} = Tr_Data(is:ie);
end
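To make the partition concrete, here is a small worked example. The numbers are illustrative only: N = 1622 comes from above, while bagging_overlap = 0.4 and max_numtrees = 5 are assumed purely for the sake of the arithmetic.

% Worked example (illustrative only; assumes bagging_overlap = 0.4, 5 trees).
dbsize = 1622; overlap_ratio = 0.4; max_numtrees = 5;
Q = floor(dbsize/((1-overlap_ratio)*max_numtrees));   % Q = 540 samples per tree
for t = 1:max_numtrees
    is = max(floor((t-1)*Q - (t-1)*Q*overlap_ratio + 1), 1);
    ie = min(is + Q, dbsize);
    fprintf('tree %d: samples %4d..%4d\n', t, is, ie);
end
% tree 1:    1.. 541, tree 2:  325.. 865, tree 3:  649..1189,
% tree 4:  973..1513, tree 5: 1297..1622  (adjacent chunks overlap by ~40%)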
2. Training the random forest
Code:
% divide local region into grid
params.radius = ([0:1/30:1]');
params.angles = 2*pi*[0:1/36:1]';

rfs = cell(length(params.meanshape), params.max_numtrees);  % the forest is 68 x 5
%parfor i = 1:length(params.meanshape)
for i = 1:length(params.meanshape)
    rf = cell(1, params.max_numtrees);
    disp(strcat(num2str(i), 'th landmark is processing...'));
    for t = 1:params.max_numtrees
        % disp(strcat('training', {''}, num2str(t), '-th tree for', {''}, num2str(lmarkID), '-th landmark'));
        % calculate the number of samples for each random tree
        % train t-th random tree
        is = max(floor((t-1)*Q - (t-1)*Q*overlap_ratio + 1), 1);   % first sample index for this tree
        ie = min(is + Q, dbsize);

        max_numnodes = 2^params.max_depth - 1;          % maximum node count = size of a full binary tree
        rf{t}.ind_samples = cell(max_numnodes, 1);      % sample indices held by each node
        rf{t}.issplit    = zeros(max_numnodes, 1);      % whether the node has been processed
        rf{t}.pnode      = zeros(max_numnodes, 1);      % parent node index
        rf{t}.depth      = zeros(max_numnodes, 1);      % depth of the node
        rf{t}.cnodes     = zeros(max_numnodes, 2);      % indices of the left/right children
        rf{t}.isleafnode = zeros(max_numnodes, 1);      % whether the node is currently a leaf
        rf{t}.feat       = zeros(max_numnodes, 4);      % the two points sampled around the landmark, stored as (a1, a2, r1, r2)
        rf{t}.thresh     = zeros(max_numnodes, 1);      % split threshold of the node

        rf{t}.ind_samples{1} = 1:(ie - is + 1)*(params.augnumber); % samples of the t-th tree, also the samples held by the root node
        rf{t}.issplit(1)  = 0;
        rf{t}.pnode(1)    = 0;
        rf{t}.depth(1)    = 1;
        rf{t}.cnodes(1, 1:2) = [0 0];
        rf{t}.isleafnode(1)  = 1;
        rf{t}.feat(1, :)  = zeros(1, 4);
        rf{t}.thresh(1)   = 0;

        num_nodes = 1;       % current number of nodes
        num_leafnodes = 1;   % current number of leaf nodes
        stop = 0;
        while(~stop)         % grow the tree until no node can be split any more
            num_nodes_iter = num_nodes;
            num_split = 0;   % number of nodes split in this pass
            for n = 1:num_nodes_iter
                if ~rf{t}.issplit(n)   % skip nodes that have already been processed
                    if rf{t}.depth(n) == params.max_depth % || length(rf{t}.ind_samples{n}) < 20
                        if rf{t}.depth(n) == 1   % (this branch looks redundant and could probably be removed)
                            rf{t}.depth(n) = 1;
                        end
                        rf{t}.issplit(n) = 1;
                    else
                        % separate the samples into left and right path
                        [thresh, feat, lcind, rcind, isvalid] = splitnode(i, rf{t}.ind_samples{n}, Data{t}, params, stage);
                        %{
                        if ~isvalid
                            rf{t}.feat(n, :) = [0 0 0 0];
                            rf{t}.thresh(n) = 0;
                            rf{t}.issplit(n) = 1;
                            rf{t}.cnodes(n, :) = [0 0];
                            rf{t}.isleafnode(n) = 1;
                            continue;
                        end
                        %}
                        % set the threshold and feature for current node
                        rf{t}.feat(n, :) = feat;
                        rf{t}.thresh(n) = thresh;
                        rf{t}.issplit(n) = 1;
                        rf{t}.cnodes(n, :) = [num_nodes+1 num_nodes+2];   % indices of the left/right children
                        rf{t}.isleafnode(n) = 0;

                        % add left and right child nodes into the random tree
                        rf{t}.ind_samples{num_nodes+1} = lcind;
                        rf{t}.issplit(num_nodes+1) = 0;
                        rf{t}.pnode(num_nodes+1) = n;
                        rf{t}.depth(num_nodes+1) = rf{t}.depth(n) + 1;
                        rf{t}.cnodes(num_nodes+1, :) = [0 0];
                        rf{t}.isleafnode(num_nodes+1) = 1;

                        rf{t}.ind_samples{num_nodes+2} = rcind;
                        rf{t}.issplit(num_nodes+2) = 0;
                        rf{t}.pnode(num_nodes+2) = n;
                        rf{t}.depth(num_nodes+2) = rf{t}.depth(n) + 1;
                        rf{t}.cnodes(num_nodes+2, :) = [0 0];
                        rf{t}.isleafnode(num_nodes+2) = 1;

                        num_split = num_split + 1;   % number of splits in this pass (one level's worth of nodes)
                        num_leafnodes = num_leafnodes + 1;
                        num_nodes = num_nodes + 2;
                    end
                end
            end
            if num_split == 0
                stop = 1;
            else
                rf{t}.num_leafnodes = num_leafnodes;
                rf{t}.num_nodes = num_nodes;
                rf{t}.id_leafnodes = find(rf{t}.isleafnode == 1);
            end
        end
    end
    % disp(strcat(num2str(i), 'th landmark is over'));
    rfs(i, :) = rf;
end
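One bookkeeping detail worth spelling out (my own illustrative sketch, not part of the original code): the while loop visits nodes in index order and always hands the next two free slots to the children, so a tree in which every node splits is stored in level order, with the root at index 1 and, for max_depth = 4, the leaves at indices 8..15.

% Toy re-enactment of the bookkeeping above for a tree that splits every node
% (illustrative only): children always take the next two free slots.
max_depth = 4;
max_numnodes = 2^max_depth - 1;
depth = zeros(max_numnodes, 1); depth(1) = 1;
cnodes = zeros(max_numnodes, 2);
num_nodes = 1;
for n = 1:2^(max_depth-1) - 1                 % the 7 internal nodes
    cnodes(n, :) = [num_nodes+1, num_nodes+2];
    depth(num_nodes+1) = depth(n) + 1;
    depth(num_nodes+2) = depth(n) + 1;
    num_nodes = num_nodes + 2;
end
disp(find(depth == max_depth)');              % leaves: 8 9 10 11 12 13 14 15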
3. Splitting a node
Flowchart:
Code:
function [thresh, feat, lcind, rcind, isvalid] = splitnode(lmarkID, ind_samples, Tr_Data, params, stage)

if isempty(ind_samples)
    thresh = 0;
    feat = [0 0 0 0];
    rcind = [];
    lcind = [];
    isvalid = 1;
    return;
end

% generate params.max_rand candidate features
% anglepairs = samplerandfeat(params.max_numfeat);
% radiuspairs = [rand([params.max_numfeat, 1]) rand([params.max_numfeat, 1])];
[radiuspairs, anglepairs] = getproposals(params.max_numfeats(stage), params.radius, params.angles);

angles_cos = cos(anglepairs);
angles_sin = sin(anglepairs);

% extract pixel difference features from pairs
pdfeats = zeros(params.max_numfeats(stage), length(ind_samples)); % every sample gets this stage's pixel-difference features, e.g. 1000 x 541
shapes_residual = zeros(length(ind_samples), 2);

for i = 1:length(ind_samples)
    s = floor((ind_samples(i)-1)/(params.augnumber)) + 1;  % index of the shared image
    k = mod(ind_samples(i)-1, (params.augnumber)) + 1;     % index of the augmented shape; the boxes are not shared, each augmented shape of the same image uses its own box, and the remainder plus 1 runs over 1..params.augnumber

    % calculate the relative location under the coordinate of meanshape
    % x1 = angles_cos(:, 1) .* radiuspairs(:, 1)
    pixel_a_x_imgcoord = (angles_cos(:, 1)).*radiuspairs(:, 1)*params.max_raio_radius(stage)*Tr_Data{s}.intermediate_bboxes{stage}(k, 3);
    pixel_a_y_imgcoord = (angles_sin(:, 1)).*radiuspairs(:, 1)*params.max_raio_radius(stage)*Tr_Data{s}.intermediate_bboxes{stage}(k, 4);
    pixel_b_x_imgcoord = (angles_cos(:, 2)).*radiuspairs(:, 2)*params.max_raio_radius(stage)*Tr_Data{s}.intermediate_bboxes{stage}(k, 3);
    pixel_b_y_imgcoord = (angles_sin(:, 2)).*radiuspairs(:, 2)*params.max_raio_radius(stage)*Tr_Data{s}.intermediate_bboxes{stage}(k, 4);

    % no transformation
    %{
    pixel_a_x_lmcoord = pixel_a_x_imgcoord;
    pixel_a_y_lmcoord = pixel_a_y_imgcoord;
    pixel_b_x_lmcoord = pixel_b_x_imgcoord;
    pixel_b_y_lmcoord = pixel_b_y_imgcoord;
    %}

    % transform the pixels from image coordinate (meanshape) to coordinate of current shape
    % (the results below are still centered coordinates)
    [pixel_a_x_lmcoord, pixel_a_y_lmcoord] = transformPointsForward(Tr_Data{s}.meanshape2tf{k}, pixel_a_x_imgcoord', pixel_a_y_imgcoord');
    pixel_a_x_lmcoord = pixel_a_x_lmcoord';
    pixel_a_y_lmcoord = pixel_a_y_lmcoord';

    [pixel_b_x_lmcoord, pixel_b_y_lmcoord] = transformPointsForward(Tr_Data{s}.meanshape2tf{k}, pixel_b_x_imgcoord', pixel_b_y_imgcoord');
    pixel_b_x_lmcoord = pixel_b_x_lmcoord';
    pixel_b_y_lmcoord = pixel_b_y_lmcoord';

    % convert to absolute image coordinates
    pixel_a_x = int32(bsxfun(@plus, pixel_a_x_lmcoord, Tr_Data{s}.intermediate_shapes{stage}(lmarkID, 1, k)));
    pixel_a_y = int32(bsxfun(@plus, pixel_a_y_lmcoord, Tr_Data{s}.intermediate_shapes{stage}(lmarkID, 2, k)));
    pixel_b_x = int32(bsxfun(@plus, pixel_b_x_lmcoord, Tr_Data{s}.intermediate_shapes{stage}(lmarkID, 1, k)));
    pixel_b_y = int32(bsxfun(@plus, pixel_b_y_lmcoord, Tr_Data{s}.intermediate_shapes{stage}(lmarkID, 2, k)));

    width  = (Tr_Data{s}.width);
    height = (Tr_Data{s}.height);

    pixel_a_x = max(1, min(pixel_a_x, width));   % clamp pixel_a_x to [1, width]
    pixel_a_y = max(1, min(pixel_a_y, height));
    pixel_b_x = max(1, min(pixel_b_x, width));
    pixel_b_y = max(1, min(pixel_b_y, height));

    % two ways to read a pixel: img_gray(i, j), or linear indexing img_gray(k), where k counts down the columns
    pdfeats(:, i) = double(Tr_Data{s}.img_gray(pixel_a_y + (pixel_a_x-1)*height)) - double(Tr_Data{s}.img_gray(pixel_b_y + (pixel_b_x-1)*height));
    %./ double(Tr_Data{s}.img_gray(pixel_a_y + (pixel_a_x-1)*height)) + double(Tr_Data{s}.img_gray(pixel_b_y + (pixel_b_x-1)*height));

    % drawshapes(Tr_Data{s}.img_gray, [pixel_a_x pixel_a_y pixel_b_x pixel_b_y]);
    % hold off;

    shapes_residual(i, :) = Tr_Data{s}.shapes_residual(lmarkID, :, k);
end

E_x_2 = mean(shapes_residual(:, 1).^2);
E_x   = mean(shapes_residual(:, 1));
E_y_2 = mean(shapes_residual(:, 2).^2);
E_y   = mean(shapes_residual(:, 2));

% total variance, using the classic identity Var(x) = E[x^2] - (E[x])^2
var_overall = length(ind_samples)*((E_x_2 - E_x^2) + (E_y_2 - E_y^2));
% var_overall = length(ind_samples)*(var(shapes_residual(:, 1)) + var(shapes_residual(:, 2)));

% max_step = min(length(ind_samples), params.max_numthreshs);
% step = floor(length(ind_samples)/max_step);
max_step = 1;

var_reductions = zeros(params.max_numfeats(stage), max_step);
thresholds = zeros(params.max_numfeats(stage), max_step);

[pdfeats_sorted] = sort(pdfeats, 2);  % sort each feature's values so a threshold can be drawn at a random rank
% shapes_residual = shapes_residual(ind, :);

for i = 1:params.max_numfeats(stage)  % brute-force search for the best feature
    % for t = 1:max_step
    t = 1;
    ind = ceil(length(ind_samples)*(0.5 + 0.9*(rand(1) - 0.5)));
    threshold = pdfeats_sorted(i, ind);  % pdfeats_sorted(i, t*step); %
    thresholds(i, t) = threshold;
    ind_lc = (pdfeats(i, :) < threshold);   % logical index of the left child
    ind_rc = (pdfeats(i, :) >= threshold);  % logical index of the right child

    % figure, hold on, plot(shapes_residual(ind_lc, 1), shapes_residual(ind_lc, 2), 'r.')
    % plot(shapes_residual(ind_rc, 1), shapes_residual(ind_rc, 2), 'g.')
    % close;

    % compute the statistics of the left child from the residuals routed to it
    E_x_2_lc = mean(shapes_residual(ind_lc, 1).^2);
    E_x_lc   = mean(shapes_residual(ind_lc, 1));
    E_y_2_lc = mean(shapes_residual(ind_lc, 2).^2);
    E_y_lc   = mean(shapes_residual(ind_lc, 2));
    var_lc = (E_x_2_lc + E_y_2_lc) - (E_x_lc^2 + E_y_lc^2);

    % the right child's statistics follow from the totals
    E_x_2_rc = (E_x_2*length(ind_samples) - E_x_2_lc*sum(ind_lc))/sum(ind_rc);
    E_x_rc   = (E_x*length(ind_samples)   - E_x_lc*sum(ind_lc))/sum(ind_rc);
    E_y_2_rc = (E_y_2*length(ind_samples) - E_y_2_lc*sum(ind_lc))/sum(ind_rc);
    E_y_rc   = (E_y*length(ind_samples)   - E_y_lc*sum(ind_lc))/sum(ind_rc);
    var_rc = (E_x_2_rc + E_y_2_rc) - (E_x_rc^2 + E_y_rc^2);

    var_reduce = var_overall - sum(ind_lc)*var_lc - sum(ind_rc)*var_rc;
    % var_reduce = var_overall - sum(ind_lc)*(var(shapes_residual(ind_lc, 1)) + var(shapes_residual(ind_lc, 2))) - sum(ind_rc)*(var(shapes_residual(ind_rc, 1)) + var(shapes_residual(ind_rc, 2)));
    var_reductions(i, t) = var_reduce;
    % end
    % plot(var_reductions(i, :));
end

[~, ind_colmax] = max(var_reductions);  % index of the feature with the largest variance reduction
ind_max = 1;

%{
if var_max <= 0
    isvalid = 0;
else
    isvalid = 1;
end
%}
isvalid = 1;

thresh = thresholds(ind_colmax(ind_max), ind_max);   % the chosen threshold
feat = [anglepairs(ind_colmax(ind_max), :) radiuspairs(ind_colmax(ind_max), :)];
lcind = ind_samples(find(pdfeats(ind_colmax(ind_max), :) < thresh));
rcind = ind_samples(find(pdfeats(ind_colmax(ind_max), :) >= thresh));

end
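In formulas, the split score computed above is the variance reduction of the shape residuals (my own restatement of the code, with $n$ = length(ind_samples) and $n_L$, $n_R$ the left/right sample counts, and variances taken via $\mathrm{Var}(x) = E[x^2] - (E[x])^2$):

$$ \mathrm{Var} = \bigl(E[x^2] - E[x]^2\bigr) + \bigl(E[y^2] - E[y]^2\bigr), \qquad \Delta = n\,\mathrm{Var} - n_L\,\mathrm{Var}_L - n_R\,\mathrm{Var}_R $$

The right child's moments are recovered from the totals rather than recomputed, for example

$$ E[x^2]_R = \frac{n\,E[x^2] - n_L\,E[x^2]_L}{n_R}, $$

and the candidate feature/threshold pair with the largest $\Delta$ is kept. Deriving the right-hand moments this way saves a second pass over the residuals for every candidate feature.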
Question: during training it is assumed that once a node can be split, it is always split into two children. Could the chosen threshold end up sending all of the remaining samples into a single child?
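A sketch of one way this can happen (my own reading of splitnode above, not something stated in the original): the threshold is a feature value drawn at a random rank between roughly 5% and 95% of the sorted samples, and the left child takes only strictly smaller values. If many samples tie at exactly that value, every sample satisfies pdfeats >= threshold and the left child is empty. Because the isvalid check is commented out, the empty child is still visited in later passes, but splitnode returns immediately on an empty sample set, so it simply keeps producing empty descendants down to max_depth.

% Illustrative only: with heavily tied feature values the strict '<' test can
% route every sample to one side.
pdfeats_row = [5 5 5 5 5 5 7 9];          % pixel-difference values of one candidate feature
sorted_row  = sort(pdfeats_row);
ind         = ceil(length(pdfeats_row)*(0.5 + 0.9*(0.3 - 0.5)));  % a rank the random draw could produce
threshold   = sorted_row(ind);             % = 5, the tied value
ind_lc = (pdfeats_row <  threshold);       % all false -> empty left child
ind_rc = (pdfeats_row >= threshold);
fprintf('left: %d samples, right: %d samples\n', sum(ind_lc), sum(ind_rc));  % left: 0, right: 8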
Explanation:
As shown in the figure, the outermost frame is the current coordinate system (system 1); inside it sit the centered, normalized coordinates of mean_shape (system 2); and innermost is a polar coordinate system centered on a single landmark (system 3). The code draws random (radius, angle) pairs in this polar system to propose candidate pixels.
Suppose a pixel drawn in coordinate system 3 has coordinates (x, y), and the landmark sits at (x0, y0) in coordinate system 2. Then the pixel's coordinates (x', y') in coordinate system 2 satisfy:

$$ (x', y') = (x, y) + (x_0, y_0) $$
From the similarity transform worked out in the earlier article 《face alignment by 3000 fps系列學習總結(二)》, we know that converting a point from the normalized, centered mean_shape coordinates into the centered coordinates of the current shape is exactly what the meanshape2tf transform does. That is, in the centered current-shape frame the pixel becomes:

$$ \frac{(x', y')}{cR} $$
Adding the shape's center back (undoing the centering) then gives:

$$ \frac{(x', y')}{cR} + \mathrm{mean}(\text{intermediate\_shape}) = \frac{(x, y) + (x_0, y_0)}{cR} + \mathrm{mean}(\text{intermediate\_shape}) = \frac{(x, y)}{cR} + \frac{(x_0, y_0)}{cR} + \mathrm{mean}(\text{intermediate\_shape}) = \frac{(x, y)}{cR} + \text{intermediate\_shape}(x_0, y_0) $$

where intermediate_shape(x0, y0) is the landmark's absolute position in the image (intermediate_shapes{stage}(lmarkID, :, k) in the code).
We also know that

$$ cR = \frac{c'R'}{\text{intermediate\_bbox}} $$

so the expression above becomes

$$ \frac{(x, y)\cdot\text{intermediate\_bbox}}{c'R'} + \text{intermediate\_shape}(x_0, y_0). $$
This last expression maps directly onto the steps in the code:
% calculate the relative location under the coordinate of meanshape
pixel_a_x_imgcoord = (angles_cos(:, 1)).*radiuspairs(:, 1)*params.max_raio_radius(stage)*Tr_Data{s}.intermediate_bboxes{stage}(k, 3);
pixel_a_y_imgcoord = (angles_sin(:, 1)).*radiuspairs(:, 1)*params.max_raio_radius(stage)*Tr_Data{s}.intermediate_bboxes{stage}(k, 4);
pixel_b_x_imgcoord = (angles_cos(:, 2)).*radiuspairs(:, 2)*params.max_raio_radius(stage)*Tr_Data{s}.intermediate_bboxes{stage}(k, 3);
pixel_b_y_imgcoord = (angles_sin(:, 2)).*radiuspairs(:, 2)*params.max_raio_radius(stage)*Tr_Data{s}.intermediate_bboxes{stage}(k, 4);

% transform the pixels from image coordinate (meanshape) to coordinate of current shape
% (still centered coordinates at this point)
[pixel_a_x_lmcoord, pixel_a_y_lmcoord] = transformPointsForward(Tr_Data{s}.meanshape2tf{k}, pixel_a_x_imgcoord', pixel_a_y_imgcoord');
pixel_a_x_lmcoord = pixel_a_x_lmcoord';
pixel_a_y_lmcoord = pixel_a_y_lmcoord';
[pixel_b_x_lmcoord, pixel_b_y_lmcoord] = transformPointsForward(Tr_Data{s}.meanshape2tf{k}, pixel_b_x_imgcoord', pixel_b_y_imgcoord');
pixel_b_x_lmcoord = pixel_b_x_lmcoord';
pixel_b_y_lmcoord = pixel_b_y_lmcoord';

% convert to absolute image coordinates
pixel_a_x = int32(bsxfun(@plus, pixel_a_x_lmcoord, Tr_Data{s}.intermediate_shapes{stage}(lmarkID, 1, k)));
pixel_a_y = int32(bsxfun(@plus, pixel_a_y_lmcoord, Tr_Data{s}.intermediate_shapes{stage}(lmarkID, 2, k)));
pixel_b_x = int32(bsxfun(@plus, pixel_b_x_lmcoord, Tr_Data{s}.intermediate_shapes{stage}(lmarkID, 1, k)));
pixel_b_y = int32(bsxfun(@plus, pixel_b_y_lmcoord, Tr_Data{s}.intermediate_shapes{stage}(lmarkID, 2, k)));
% (clamping to the image border and the pixel-difference feature itself follow as in the full listing in section 3)
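Term by term, the expression lines up with the code as follows (my own annotation, using the variable names from splitnode):
- (x, y) · intermediate_bbox is pixel_a_x_imgcoord / pixel_a_y_imgcoord: the random polar offset (r·cos a, r·sin a), scaled by params.max_raio_radius(stage) and by the box width/height from intermediate_bboxes{stage};
- the division by c'R' is carried out by transformPointsForward(Tr_Data{s}.meanshape2tf{k}, ...), which moves the offset into the centered coordinates of the current shape;
- adding intermediate_shape(x0, y0) is the bsxfun(@plus, ..., intermediate_shapes{stage}(lmarkID, :, k)) step, which yields the absolute pixel position that is finally read from img_gray.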
With that, the whole training procedure is accounted for.