
Results
Overall results
The avatars below were created entirely by the machine. On close inspection, some of them are good enough to pass for hand-drawn artwork.
Side-by-side comparison
Each time, a set of random numbers with shape [1, 72] is generated; one of its values is then changed step by step to produce 20 such vectors, which are fed to the generator as input. The resulting strip of images lets us observe the striking effect a GAN can produce, as shown below. (Example: changing the shade of the hair colour.)
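For illustration, here is a sketch of how such a comparison strip can be produced (an assumption-laden sketch: it presumes a trained generator that, as in the training code later in this post, is run through an executor with a 'noise' input of length 72):

import numpy as np

G_DIMENSION = 72  # length of the latent vector fed to the generator

# One base latent vector, repeated 20 times; only dimension 0 (chosen arbitrarily)
# is swept from 0 to 1, while the other 71 values stay fixed.
base = np.random.uniform(0.0, 1.0, size=[1, G_DIMENSION]).astype('float32')
sweep = np.repeat(base, 20, axis=0)
sweep[:, 0] = np.linspace(0.0, 1.0, 20, dtype='float32')

# Feeding `sweep` to the generator, e.g.
#   exe.run(g_program_test, feed={'noise': sweep}, fetch_list=[g_img])
# yields 20 avatars that differ only in the attribute controlled by that
# dimension, such as the shade of the hair colour.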

Background
A brief introduction to GANs
Paper: https://arxiv.org/abs/1406.2661
A Generative Adversarial Network (GAN) consists of a generator network and a discriminator network. The generator takes a random sample from a latent space as input, and its output should imitate the real samples in the training set as closely as possible. The discriminator takes either a real sample or the generator's output as input and tries to tell the generated samples apart from the real ones, while the generator tries to fool it. The two networks compete and keep adjusting their parameters, with the ultimate goal that the discriminator can no longer tell whether its input is a real sample or the generator's output. GANs are commonly used to produce convincingly realistic images, and the approach has also been applied to generating video, 3D object models, and more. The figure below sketches the GAN training process.
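Formally, the generator G and discriminator D play the minimax game introduced in the original paper:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]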
An introduction to DCGAN
Paper: https://arxiv.org/abs/1511.06434
DCGAN combines deep convolutional networks with the GAN framework. Its basic principle is the same as a plain GAN; the generator and discriminator are simply replaced by convolutional networks (CNNs). To improve sample quality and convergence speed, the paper makes several structural changes: pooling layers are removed, batch normalization is added, the networks are fully convolutional, and the fully connected (FC) layers are dropped. Activation functions: the generator (G) uses Tanh in its last layer and ReLU everywhere else; the discriminator (D) uses LeakyReLU throughout.
Implementation
This project is a PaddlePaddle rewrite of the Chainer project "Chainerで顔イラストの自動生成".
Compared with the original project, the following changes were made:
- Set the Adam optimizer's beta1 parameter to 0.8 (see Adam: A Method for Stochastic Optimization) to further mitigate vanishing/exploding gradients.
- Set the momentum parameter of BatchNorm to 0.5, which sped up training after tuning.
- Changed the discriminator's (D) activation from elu to leaky_relu with alpha set to 0.2; elu showed no clear advantage over leaky_relu, so the computationally cheaper leaky_relu is used instead.
- Added Dropout layers to the discriminator (D) with dropout_prob set to 0.4, to reduce overfitting and vanishing/exploding gradients.
- Replaced the first fully connected layer of the generator (G) with a basic residual block, to speed up convergence and let the network learn richer features.
Development environment
PaddlePaddle 1.7.1, Python 3.7, scikit-image, etc., on the online AI Studio platform.
Dataset
The dataset was built by adapting crawler code found online and using OpenCV to crop the face regions, collecting roughly 60,000 images from the well-known anime image boards http://safebooru.donmai.us/ and http://konachan.net/. The dataset used by this project, [二次元人物頭像] (anime character avatars), has been uploaded and made public on AI Studio.
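The exact crawler and cropping script are not included in this post. As a rough sketch of the cropping step only, a common approach uses OpenCV's cascade classifier with the community anime-face cascade (lbpcascade_animeface.xml, an external file assumed to be downloaded separately; all paths here are hypothetical), applied to already-downloaded images:

import os
import cv2

CASCADE_FILE = 'lbpcascade_animeface.xml'   # assumed anime-face LBP cascade
SRC_DIR, DST_DIR = 'downloads/', 'data/faces/'

cascade = cv2.CascadeClassifier(CASCADE_FILE)
os.makedirs(DST_DIR, exist_ok=True)

for name in os.listdir(SRC_DIR):
    img = cv2.imread(os.path.join(SRC_DIR, name))
    if img is None:
        continue
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # detect anime faces and save each crop resized to 96 x 96
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(48, 48))
    for i, (x, y, w, h) in enumerate(faces):
        face = cv2.resize(img[y:y + h, x:x + w], (96, 96))
        cv2.imwrite(os.path.join(DST_DIR, '{}_{}.jpg'.format(os.path.splitext(name)[0], i)), face)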

Implementation steps (run in Jupyter on AI Studio):
1. Install the missing library and unzip the dataset
!pip install scikit-image
!unzip data/data17962/二次元人物頭像.zip -d data/
!mkdir ./work/Output
!mkdir ./work/Generate
2. Define the data preprocessing (DataReader)

import os
import cv2
import numpy as np
import paddle.dataset as dataset
from skimage import io, color, transform
import matplotlib.pyplot as plt
import math
import time
import paddle
import paddle.fluid as fluid
import six
img_dim = 96

'''Prepare the data and define the Reader()'''
PATH = 'data/faces/'
TEST = 'data/faces/'

class DataGenerater:
    def __init__(self):
        '''Initialisation'''
        self.datalist = os.listdir(PATH)
        self.testlist = os.listdir(TEST)

    def load(self, image):
        '''Read an image, resize it to 96x96, and convert it to CHW float32'''
        img = io.imread(image)
        img = transform.resize(img, (img_dim, img_dim))
        img = img.transpose()
        img = img.astype('float32')
        return img

    def create_train_reader(self):
        '''Define the reader for the training set'''
        def reader():
            for img in self.datalist:
                try:
                    i = self.load(PATH + img)
                    yield i.astype('float32')
                except Exception as e:
                    print(e)
        return reader

    def create_test_reader(self):
        '''Define the reader for the test set'''
        def reader():
            for img in self.testlist:
                try:
                    i = self.load(TEST + img)
                    yield i.astype('float32')
                except Exception as e:
                    print(e)
        return reader

def train(batch_sizes=32):
    reader = DataGenerater().create_train_reader()
    return reader

def test():
    reader = DataGenerater().create_test_reader()
    return reader
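As a quick sanity check (a sketch, assuming the archive has been unzipped to data/faces/ as above), one sample pulled from the reader should be a (3, 96, 96) float32 array with values in [0, 1]:

sample = next(train()())   # train() returns a reader function; calling it gives a generator
print(sample.shape, sample.dtype, sample.min(), sample.max())
# expected: (3, 96, 96) float32, values roughly between 0.0 and 1.0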
3. Define the network building blocks
These include a conv-pool group, a BatchNorm layer, a fully connected layer, a transposed-convolution (deconv) layer, and a conv + BatchNorm layer.

use_cudnn = True
use_gpu = True
n = 0

def bn(x, name=None, act=None, momentum=0.5):
    return fluid.layers.batch_norm(
        x,
        param_attr=name + '1',            # attribute of the weight parameter
        bias_attr=name + '2',             # attribute of the bias parameter
        moving_mean_name=name + '3',      # name of moving_mean
        moving_variance_name=name + '4',  # name of moving_variance
        name=name,
        act=act,
        momentum=momentum,
    )

### Conv-pool group
def conv(x, num_filters, name=None, act=None):
    return fluid.nets.simple_img_conv_pool(
        input=x,
        filter_size=5,
        num_filters=num_filters,
        pool_size=2,     # pooling window size
        pool_stride=2,   # pooling stride
        param_attr=name + 'w',
        bias_attr=name + 'b',
        use_cudnn=use_cudnn,
        act=act
    )

### Fully connected layer
def fc(x, num_filters, name=None, act=None):
    return fluid.layers.fc(
        input=x,
        size=num_filters,
        act=act,
        param_attr=name + 'w',
        bias_attr=name + 'b'
    )

### Transposed-convolution (deconv) layer
def deconv(x, num_filters, name=None, filter_size=5, stride=2, dilation=1, padding=2, output_size=None, act=None):
    return fluid.layers.conv2d_transpose(
        input=x,
        param_attr=name + 'w',
        bias_attr=name + 'b',
        num_filters=num_filters,    # number of filters
        output_size=output_size,    # output image size
        filter_size=filter_size,    # filter size
        stride=stride,              # stride
        dilation=dilation,          # dilation
        padding=padding,
        use_cudnn=use_cudnn,        # whether to use the cuDNN kernel
        act=act                     # activation function
    )

### Conv + BatchNorm layer
def conv_bn_layer(input,
                  ch_out,
                  filter_size,
                  stride,
                  padding,
                  act=None,
                  groups=64,
                  name=None):
    tmp = fluid.layers.conv2d(
        input=input,
        filter_size=filter_size,
        num_filters=ch_out,
        stride=stride,
        padding=padding,
        act=None,
        bias_attr=name + '_conv_b',
        param_attr=name + '_conv_w',
    )
    return fluid.layers.batch_norm(
        input=tmp,
        act=act,
        param_attr=name + '_bn_1',            # attribute of the weight parameter
        bias_attr=name + '_bn_2',             # attribute of the bias parameter
        moving_mean_name=name + '_bn_3',      # name of moving_mean
        moving_variance_name=name + '_bn_4',  # name of moving_variance
        name=name + '_bn_',
        momentum=0.5,
    )
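For reference, the shape comments used later in the generator follow the standard output-size formula of a transposed convolution (with dilation = 1):

H_{\mathrm{out}} = (H_{\mathrm{in}} - 1) \cdot \mathrm{stride} - 2 \cdot \mathrm{padding} + \mathrm{filter\_size}

e.g. (6 - 1) \cdot 2 - 2 \cdot 1 + 4 = 12.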
4. Define the basic residual block
def shortcut(input, ch_in, ch_out, stride, name):
    if ch_in != ch_out:
        return conv_bn_layer(input, ch_out, 1, stride, 0, None, name=name)
    else:
        return input

def basicblock(input, ch_in, ch_out, stride, name, act):
    tmp = conv_bn_layer(input, ch_out, 3, stride, 1, name=name + '_1_', act=act)
    tmp = conv_bn_layer(tmp, ch_out, 3, 1, 1, act=None, name=name + '_2_')
    short = shortcut(input, ch_in, ch_out, stride, name=name)
    return fluid.layers.elementwise_add(x=tmp, y=short, act='relu')

def layer_warp(block_func, input, ch_in, ch_out, count, stride, name, act='relu'):
    tmp = block_func(input, ch_in, ch_out, stride, name=name + '1', act=act)
    for i in range(1, count):
        tmp = block_func(tmp, ch_out, ch_out, 1, name=name + str(i + 1), act=act)
    return tmp
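In short, basicblock computes y = ReLU(F(x) + shortcut(x)), where F is two stacked conv_bn_layer calls and the shortcut is a 1x1 conv_bn_layer whenever ch_in differs from ch_out, so that the two branches can be added element-wise.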
5. The discriminator network
- The momentum of BatchNorm is set to 0.5.
- The activation of the discriminator (D) is changed from elu to leaky_relu, with alpha set to 0.2.
- Dropout layers are added to the discriminator (D), with dropout_prob set to 0.4.
### Discriminator
def D(x):
    # (96 + 2 * 1 - 4) / 2 + 1 = 48
    x = conv_bn_layer(x, 64, 4, 2, 1, act=None, name='conv_bn_1')
    x = fluid.layers.leaky_relu(x, alpha=0.2, name='leaky_relu_1')
    x = fluid.layers.dropout(x, 0.4, name='dropout1')
    # (48 + 2 * 1 - 4) / 2 + 1 = 24
    x = conv_bn_layer(x, 128, 4, 2, 1, act=None, name='conv_bn_2')
    x = fluid.layers.leaky_relu(x, alpha=0.2, name='leaky_relu_2')
    x = fluid.layers.dropout(x, 0.4, name='dropout2')
    # (24 + 2 * 1 - 4) / 2 + 1 = 12
    x = conv_bn_layer(x, 256, 4, 2, 1, act=None, name='conv_bn_3')
    x = fluid.layers.leaky_relu(x, alpha=0.2, name='leaky_relu_3')
    x = fluid.layers.dropout(x, 0.4, name='dropout3')
    # (12 + 2 * 1 - 4) / 2 + 1 = 6
    x = conv_bn_layer(x, 512, 4, 2, 1, act=None, name='conv_bn_4')
    x = fluid.layers.leaky_relu(x, alpha=0.2, name='leaky_relu_4')
    x = fluid.layers.dropout(x, 0.4, name='dropout4')
    x = fluid.layers.reshape(x, shape=[-1, 512 * 6 * 6])
    x = fc(x, 2, name='fc1')
    return x
6. The generator network
The momentum of BatchNorm is set to 0.5, and the first fully connected layer of the generator (G) is replaced by a basic residual block. The input tensor has shape [batch_size, 72], where each value is a float32 random number between 0 and 1. The output is a 96x96 three-channel RGB image.

### Generator
def G(x):
    # x = fc(x, 6 * 6 * 2, name='g_fc1', act='relu')
    # x = bn(x, name='g_bn_1', act='relu', momentum=0.5)
    x = fluid.layers.reshape(x, shape=[-1, 2, 6, 6])
    x = layer_warp(basicblock, x, 2, 256, 1, 1, name='g_res1', act='relu')
    # 2 * (6 - 1) - 2 * 1 + 4 = 12
    x = deconv(x, num_filters=256, filter_size=4, stride=2, padding=1, name='g_deconv_1')
    x = bn(x, name='g_bn_2', act='relu', momentum=0.5)
    # 2 * (12 - 1) - 2 * 1 + 4 = 24
    x = deconv(x, num_filters=128, filter_size=4, stride=2, padding=1, name='g_deconv_2')
    x = bn(x, name='g_bn_3', act='relu', momentum=0.5)
    # 2 * (24 - 1) - 2 * 1 + 4 = 48
    x = deconv(x, num_filters=64, filter_size=4, stride=2, padding=1, name='g_deconv_3')
    x = bn(x, name='g_bn_4', act='relu', momentum=0.5)
    # 2 * (48 - 1) - 2 * 1 + 4 = 96
    x = deconv(x, num_filters=3, filter_size=4, stride=2, padding=1, name='g_deconv_4', act='relu')
    return x
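Tracing the tensor shapes through G for a batch of N noise vectors: [N, 72] → reshape → [N, 2, 6, 6] → residual block → [N, 256, 6, 6] → four deconv stages → [N, 256, 12, 12] → [N, 128, 24, 24] → [N, 64, 48, 48] → [N, 3, 96, 96].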
The loss function is softmax_with_cross_entropy; for a 2-class logit vector x and an integer label it computes

\mathrm{loss} = -\log\left(\frac{e^{x_{\mathrm{label}}}}{\sum_{j} e^{x_{j}}}\right)

7. Train the network
The hyperparameters are:
- Learning rate: 2e-4
- Epochs: 90
- Mini-batch size: 100
- Length of each random noise vector: 72
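Note that the training code below calls a helper named loss() that is not shown in this post. A minimal sketch consistent with the softmax_with_cross_entropy choice above could look like this (the exact original definition is an assumption):

def loss(x, label):
    # Softmax cross-entropy between the 2-class logits and the real/fake label,
    # averaged over the batch (assumed reconstruction of the missing helper).
    return fluid.layers.mean(
        fluid.layers.softmax_with_cross_entropy(logits=x, label=label))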
import IPython.display as display
import warnings
warnings.filterwarnings('ignore')

img_dim = 96
LEARENING_RATE = 2e-4
SHOWNUM = 12
epoch = 90
output = "work/Output/"
batch_size = 100
G_DIMENSION = 72

d_program = fluid.Program()
dg_program = fluid.Program()

### Define the discriminator program
# program_guard(), used with the `with` statement, adds the operators and variables
# created in the block to the specified main program and startup program.
with fluid.program_guard(d_program):
    # input images are 96 x 96 RGB
    img = fluid.layers.data(name='img', shape=[None, 3, img_dim, img_dim], dtype='float32')
    # labels have shape [N, 1]
    label = fluid.layers.data(name='label', shape=[None, 1], dtype='int64')
    d_logit = D(img)
    d_loss = loss(x=d_logit, label=label)

### Define the generator program
with fluid.program_guard(dg_program):
    noise = fluid.layers.data(name='noise', shape=[None, G_DIMENSION], dtype='float32')
    # the noise is fed to G to produce generated images
    g_img = G(x=noise)
    g_program = dg_program.clone()
    g_program_test = dg_program.clone(for_test=True)
    # probability that a generated image is judged to be a real sample
    dg_logit = D(g_img)
    # loss of the generated images being classified as real
    dg_loss = loss(
        x=dg_logit,
        label=fluid.layers.fill_constant_batch_size_like(input=noise, dtype='int64', shape=[-1, 1], value=1)
    )

### Optimizer
opt = fluid.optimizer.Adam(learning_rate=LEARENING_RATE, beta1=0.5)
opt.minimize(loss=d_loss)
parameters = [p.name for p in g_program.global_block().all_parameters()]
opt.minimize(loss=dg_loss, parameter_list=parameters)

train_reader = paddle.batch(
    paddle.reader.shuffle(
        reader=train(), buf_size=50000
    ),
    batch_size=batch_size
)
test_reader = paddle.batch(
    paddle.reader.shuffle(
        reader=test(), buf_size=10000
    ),
    batch_size=10
)

### Executor
if use_gpu:
    exe = fluid.Executor(fluid.CUDAPlace(0))
else:
    exe = fluid.Executor(fluid.CPUPlace())
start_program = fluid.default_startup_program()
exe.run(start_program)

# Load a previously trained model (optional)
# fluid.io.load_persistables(exe, 'work/Model/D/', d_program)
# fluid.io.load_persistables(exe, 'work/Model/G/', dg_program)

### Training loop
t_time = 0
losses = [[], []]
# number of generator updates per discriminator update
NUM_TRAIN_TIME_OF_DG = 2
# fixed noise used for visualising the generated samples
const_n = np.random.uniform(
    low=0.0, high=1.0,
    size=[batch_size, G_DIMENSION]).astype('float32')
test_const_n = np.random.uniform(
    low=0.0, high=1.0,
    size=[100, G_DIMENSION]).astype('float32')

now = 0
for pass_id in range(epoch):
    fluid.io.save_persistables(exe, 'work/Model/G', dg_program)
    fluid.io.save_persistables(exe, 'work/Model/D', d_program)
    for batch_id, data in enumerate(train_reader()):
        if len(data) != batch_size:
            continue
        # noise for this training step
        noise_data = np.random.uniform(
            low=0.0, high=1.0,
            size=[batch_size, G_DIMENSION]).astype('float32')
        # real images
        real_image = np.array(data)
        # real labels
        real_labels = np.ones(shape=[batch_size, 1], dtype='int64')
        # fake labels
        fake_labels = np.zeros(shape=[batch_size, 1], dtype='int64')
        s_time = time.time()
        # fake images produced by the generator
        generated_image = exe.run(g_program,
                                  feed={'noise': noise_data},
                                  fetch_list=[g_img])[0]

        ### Train the discriminator
        # loss of D classifying fake images as fake
        d_loss_1 = exe.run(d_program,
                           feed={'img': generated_image, 'label': fake_labels},
                           fetch_list=[d_loss])[0][0]
        # loss of D classifying real images as real
        d_loss_2 = exe.run(d_program,
                           feed={'img': real_image, 'label': real_labels},
                           fetch_list=[d_loss])[0][0]
        d_loss_n = d_loss_1 + d_loss_2
        losses[0].append(d_loss_n)

        ### Train the generator
        for _ in six.moves.xrange(NUM_TRAIN_TIME_OF_DG):
            # uniform() samples from the uniform distribution [low, high)
            noise_data = np.random.uniform(
                low=0.0, high=1.0,
                size=[batch_size, G_DIMENSION]).astype('float32')
            dg_loss_n = exe.run(dg_program,
                                feed={'noise': noise_data},
                                fetch_list=[dg_loss])[0][0]
        losses[1].append(dg_loss_n)
        t_time += (time.time() - s_time)

        if batch_id % 500 == 0:
            if not os.path.exists(output):
                os.makedirs(output)
            # samples generated from the fixed noise at this checkpoint
            generated_image = exe.run(g_program_test, feed={'noise': test_const_n}, fetch_list=[g_img])[0]
            imgs = []
            plt.figure(figsize=(15, 15))
            try:
                for i in range(100):
                    image = generated_image[i].transpose()
                    plt.subplot(10, 10, i + 1)
                    plt.imshow(image)
                    plt.axis('off')
                    plt.xticks([])
                    plt.yticks([])
                    plt.subplots_adjust(wspace=0.1, hspace=0.1)
                msg = 'Epoch ID={0} Batch ID={1} \n D-Loss={2} G-Loss={3}'.format(pass_id + 92, batch_id, d_loss_n, dg_loss_n)
                plt.suptitle(msg, fontsize=20)
                plt.draw()
                plt.savefig('{}/{:04d}_{:04d}.png'.format(output, pass_id + 92, batch_id), bbox_inches='tight')
                plt.pause(0.01)
                display.clear_output(wait=True)
            except IOError:
                print(IOError)

plt.close()
plt.figure(figsize=(15, 6))
x = np.arange(len(losses[0]))
plt.title('Loss')
plt.xlabel('Number of Batch')
plt.plot(x, np.array(losses[0]), 'r-', label='D Loss')
plt.plot(x, np.array(losses[1]), 'b-', label='G Loss')
plt.legend()
plt.savefig('work/Train Process')
plt.show()
The resulting loss curves are shown below:
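The work/Generate directory created at the beginning is meant for the final samples; the original generation cell is not included in this post. A minimal sketch (assuming the training cell above has been run in the same session, so that exe, g_program_test, g_img and G_DIMENSION are available, and that parameters have been saved to work/Model/G) could look like this:

# Load the saved generator parameters and write individual avatars to work/Generate
fluid.io.load_persistables(exe, 'work/Model/G', dg_program)

noise = np.random.uniform(low=0.0, high=1.0, size=[100, G_DIMENSION]).astype('float32')
images = exe.run(g_program_test, feed={'noise': noise}, fetch_list=[g_img])[0]

for i, img in enumerate(images):
    # each sample is CHW float32; transpose to HWC and clip to [0, 1] before saving
    plt.imsave('work/Generate/{:03d}.png'.format(i), np.clip(img.transpose(), 0.0, 1.0))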
Project summary
This post gave a brief overview of how DCGAN works and, building on the improvements and tuning applied to the original project, walked step by step through the generator, the discriminator, and the training process. A side-by-side comparison shows how a single input element influences the generated image: varying one value evenly across 20 sets of random numbers and feeding them to the generator yields a comparison strip that reveals the GAN's remarkably smooth transitions. Looked at closely, some of the anime avatars produced by DCGAN are genuinely convincing, which gives a good feel for the "magic" of GANs. The main shortcoming is the fairly low resolution of the generated images (96x96); in future projects I will improve the network so that the generated avatars have higher resolution and richer detail.
Personal AI Studio homepage: https://aistudio.baidu.com/aistudio/personalcenter/thirdview/56447
If you run into problems, you can join the official PaddlePaddle QQ group: 703252161.
For more details about PaddlePaddle, please refer to the following resources.
PaddlePaddle GAN models:
GitHub: https://github.com/PaddlePaddle/models/tree/release/1.8/PaddleCV/gan
Gitee: https://gitee.com/paddlepaddle/models/tree/develop/PaddleCV/gan
Official website: https://www.paddlepaddle.org.cn
PaddlePaddle framework:
GitHub: https://github.com/PaddlePaddle/Paddle
Gitee: https://gitee.com/paddlepaddle/Paddle
