I recently read a very interesting article: http://matthewearl.github.io/2015/07/28/switching-eds-with-python/. I wanted to reproduce it myself, but it turned out I'm not up to it yet: after a whole day I had finished less than half. With job hunting coming up I was tired of reading books, and had hoped writing some fun code would be relaxing. Oh well.
On to the main content. The original author used the dlib library for facial landmark detection. I tried installing it without success, so I dropped that. I considered using my own code, but my wrapper is too rough, so I switched to a third-party service instead; that also keeps the program much smaller, since trained model files are fairly large.
First, register a Face++ account and create an application to obtain an access key (API key and secret). (Screenshots of this step are omitted here.)
I used the Python interface; download the SDK from GitHub: https://github.com/FacePlusPlus/facepp-python-sdk/tree/v2.0
Edit the contents of apikey.cfg, and you should be able to run the hello.py example.
Take care not to pick the wrong server address. Also note that the official example leaves a placeholder like API_KEY = '<your api key>'; when substituting your own key, remember to delete the angle brackets too, or you will get an error.
Around line 350 of the SDK's facepp.py, add the entry '/detection/landmark' to the list of API paths; otherwise the face landmark detection interface cannot be called.
A small gripe here: the Face++ SDK is not well written. Many parts are under-documented, and the program sometimes fails because of network issues. I also could not find an interface for uploading my own images; the official site offers only a few sentences on this, which feels far too brief, and the SDK does not appear to contain a local-image upload interface either.
Enough talk; here is the result, followed by the code.
This is the face landmark detection result.
Because the two faces I first chose were already well aligned, I switched to a different image. The effect achieved here is to scale and rotate the face in the second image so that it aligns with the face in the first. The remaining steps would be masking out the face region and overlaying it.
The code follows. It only covers half of the original article (as far as described above); I am not familiar with the later parts, and with job hunting looming I am not in the mood to write more code...
I recommend registering your own account: each developer account allows three concurrent threads, and if I later change the key shown here, the program as published will fail to run.
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
# You need to register your App first, and enter your API key/secret here.
API_KEY = 'de12a860239680fb8aba7c8283efffd9'
API_SECRET = '61MzhMy_j_L8T1-JAzVjlBSsKqy2pUap'
# Import system libraries and define helper functions
import time
import os
import cv2
import numpy
import urllib
import re
from facepp import API
ALIGN_POINTS = list(range(0, 25))
OVERLAY_POINTS = list(range(0, 25))
RIGHT_EYE_POINTS = list(range(2, 6))
LEFT_EYE_POINTS = list(range(4, 8))
FEATHER_AMOUNT = 11
SCALE_FACTOR = 1
COLOUR_CORRECT_BLUR_FRAC = 0.6
def encode(obj):
    if type(obj) is unicode:
        return obj.encode('utf-8')
    if type(obj) is dict:
        return {encode(k): encode(v) for (k, v) in obj.iteritems()}
    if type(obj) is list:
        return [encode(i) for i in obj]
    return obj
def getPoints(text):
    # Flatten the API response to a string and pull out the x/y percentage
    # values with regular expressions (crude, but works for the '25p' result).
    a = encode(text)
    a = str(a)
    x = re.findall(r'\'x\':......', a)
    for i in range(len(x)):
        x[i] = re.findall(r'\d+\.\d\d', x[i])
    y = re.findall(r'\'y\':......', a)
    for i in range(len(y)):
        y[i] = re.findall(r'\d+\.\d\d', y[i])
    xy = zip(x, y)
    return xy
def drawPoints(img, xy):
    # Draw the landmark points on the image, to check that detection worked.
    Img = img
    tmp = numpy.array(img)
    h, w, c = tmp.shape
    for i, j in xy:
        # Coordinates are percentages of the image width/height.
        xp = float(i[0]) * w / 100.
        yp = float(j[0]) * h / 100.
        point = (int(xp), int(yp))
        cv2.circle(Img, point, 1, (0, 255, 0))
    return Img
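The percentage-to-pixel conversion that drawPoints performs can be isolated into a tiny helper (a Python 3 sketch; `percent_to_pixel` is my own name, not an SDK function):

```python
# Face++ returns each landmark as a percentage of the image width/height,
# so the pixel coordinate is recovered by scaling with the image size.
def percent_to_pixel(x_pct, y_pct, w, h):
    return (int(x_pct * w / 100.0), int(y_pct * h / 100.0))

# e.g. a point at 50% width, 25% height of a 640x480 picture:
pt = percent_to_pixel(50.0, 25.0, 640, 480)  # -> (320, 120)
```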
def get_landmarks(path, tmpPic):
    result = api.detection.detect(url=path, mode='oneface')
    ID = result['face'][0]['face_id']
    points = api.detection.landmark(face_id=ID, type='25p')
    xy = getPoints(points)
    print 'downloading the picture....'
    # Download the image each time in case its content has changed;
    # this can be skipped while debugging.
    urllib.urlretrieve(path, tmpPic)
    tmp = cv2.imread(tmpPic)
    # Visual check: draw the detected points and save them.
    Img = drawPoints(tmp, xy)
    cv2.imwrite('point.jpg', Img)
    tmp = numpy.array(tmp)
    h, w, c = tmp.shape
    points = numpy.empty([25, 2], dtype=numpy.int16)
    n = 0
    for i, j in xy:
        xp = float(i[0]) * w / 100.
        yp = float(j[0]) * h / 100.
        points[n][0] = int(xp)
        points[n][1] = int(yp)
        n += 1
    return numpy.matrix([[i[0], i[1]] for i in points])
def transformation_from_points(points1, points2):
    """
    Return an affine transformation [s * R | T] such that:
        sum ||s*R*p1,i + T - p2,i||^2
    is minimized.
    """
    # Solve the procrustes problem by subtracting centroids, scaling by the
    # standard deviation, and then using the SVD to calculate the rotation. See
    # the following for more details:
    # https://en.wikipedia.org/wiki/Orthogonal_Procrustes_problem
    points1 = points1.astype(numpy.float64)
    points2 = points2.astype(numpy.float64)
    c1 = numpy.mean(points1, axis=0)
    c2 = numpy.mean(points2, axis=0)
    points1 -= c1
    points2 -= c2
    s1 = numpy.std(points1)
    s2 = numpy.std(points2)
    points1 /= s1
    points2 /= s2
    U, S, Vt = numpy.linalg.svd(points1.T * points2)
    # The R we seek is in fact the transpose of the one given by U * Vt. This
    # is because the above formulation assumes the matrix goes on the right
    # (with row vectors) whereas our solution requires the matrix to be on the
    # left (with column vectors).
    R = (U * Vt).T
    return numpy.vstack([numpy.hstack(((s2 / s1) * R,
                                       c2.T - (s2 / s1) * R * c1.T)),
                         numpy.matrix([0., 0., 1.])])
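As a sanity check on the Procrustes solution, the same recipe can be rewritten with plain numpy arrays (Python 3 here, using `@` instead of `numpy.matrix`) and verified against a known similarity transform:

```python
import numpy as np

def similarity_transform(p1, p2):
    # Same steps as transformation_from_points: center, scale by the
    # standard deviation, SVD for the rotation, then assemble [s*R | T].
    p1 = p1.astype(np.float64)
    p2 = p2.astype(np.float64)
    c1, c2 = p1.mean(axis=0), p2.mean(axis=0)
    p1, p2 = p1 - c1, p2 - c2
    s1, s2 = p1.std(), p2.std()
    p1, p2 = p1 / s1, p2 / s2
    U, S, Vt = np.linalg.svd(p1.T @ p2)
    R = (U @ Vt).T
    M = np.hstack([(s2 / s1) * R, (c2 - (s2 / s1) * R @ c1).reshape(2, 1)])
    return np.vstack([M, [0.0, 0.0, 1.0]])

# Known transform: rotate 90 degrees, scale by 2, translate by (5, -3).
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = (2 * src @ R_true.T) + np.array([5.0, -3.0])

M = similarity_transform(src, dst)
recovered = (src @ M[:2, :2].T) + M[:2, 2]  # should reproduce dst
```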
def warp_im(im, M, dshape):
    output_im = numpy.zeros(dshape, dtype=im.dtype)
    cv2.warpAffine(im,
                   M[:2],
                   (dshape[1], dshape[0]),
                   dst=output_im,
                   borderMode=cv2.BORDER_TRANSPARENT,
                   flags=cv2.WARP_INVERSE_MAP)
    return output_im
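Note the flags argument: with cv2.WARP_INVERSE_MAP, warpAffine treats M[:2] as a map from output (im1-frame) coordinates back to source (im2-frame) coordinates, which is exactly the direction of the points1-to-points2 matrix M. The per-point arithmetic can be sketched in plain numpy (Python 3; the matrix values here are made up, not from real faces):

```python
import numpy as np

# A hypothetical 3x3 similarity matrix in [s*R | T] form.
M = np.array([[2.0, 0.0,  5.0],
              [0.0, 2.0, -3.0],
              [0.0, 0.0,  1.0]])

def apply_affine(M2x3, pt):
    # The arithmetic warpAffine performs per output pixel with
    # WARP_INVERSE_MAP: output coordinate -> source coordinate.
    x, y = pt
    return (M2x3[0, 0] * x + M2x3[0, 1] * y + M2x3[0, 2],
            M2x3[1, 0] * x + M2x3[1, 1] * y + M2x3[1, 2])

src_xy = apply_affine(M[:2], (10.0, 10.0))  # -> (25.0, 17.0)
```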
if __name__ == '__main__':
    api = API(API_KEY, API_SECRET)
    path1 = 'http://www.faceplusplus.com/static/img/demo/17.jpg'
    path2 = 'http://www.faceplusplus.com/static/img/demo/7.jpg'
    # path2 = 'http://cimg.163.com/auto/2004/8/28/200408281055448e023.jpg'
    tmp1 = './tmp1.jpg'
    tmp2 = './tmp2.jpg'
    landmarks1 = get_landmarks(path1, tmp1)
    landmarks2 = get_landmarks(path2, tmp2)
    im1 = cv2.imread(tmp1, cv2.IMREAD_COLOR)
    im2 = cv2.imread(tmp2, cv2.IMREAD_COLOR)
    M = transformation_from_points(landmarks1[ALIGN_POINTS],
                                   landmarks2[ALIGN_POINTS])
    warped_im2 = warp_im(im2, M, im1.shape)
    cv2.imwrite('wrap.jpg', warped_im2)
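For completeness, here is a rough stand-in for the unfinished overlay step. The original article builds a mask from the landmark hull with cv2.fillConvexPoly, feathers its edges with a Gaussian blur (this is what FEATHER_AMOUNT is for), and alpha-blends the warped face in. The Python 3 sketch below is pure numpy, uses a rectangle instead of the face hull, and approximates the Gaussian with repeated box blurs, so it only illustrates the blending idea:

```python
import numpy as np

def feather(mask, amount=11):
    # Approximate a Gaussian feather by box-blurring twice along each axis.
    k = np.ones(amount) / amount
    for _ in range(2):
        mask = np.apply_along_axis(np.convolve, 1, mask, k, mode='same')
        mask = np.apply_along_axis(np.convolve, 0, mask, k, mode='same')
    return mask

h, w = 60, 60
mask = np.zeros((h, w))
mask[20:40, 20:40] = 1.0       # stand-in for the fillConvexPoly face hull
mask = feather(mask, amount=11)

base = np.full((h, w), 100.0)   # stand-in grayscale "images"
warped = np.full((h, w), 200.0)
# Per-pixel alpha blend: soft edges avoid a visible seam.
blended = base * (1.0 - mask) + warped * mask
```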
Copyright notice: this article is the blogger's original work and may not be reposted without permission.