Training a CNN Model for Flower Classification

1. Training the CNN Model

Model dimension analysis: every convolutional layer uses zero padding ('SAME'), so height and width are unchanged through the convolutions and only the depth grows. None of the pooling layers use padding ('VALID'), so height and width shrink after each pooling layer while the depth stays the same. Dataset: http://download.tensorflow.org/example_images/flower_photos.tgz

Tensor shape progression: 100×100×3 -> 100×100×32 -> 50×50×32 -> 50×50×64 -> 25×25×64 -> 25×25×128 -> 12×12×128 -> 12×12×128 -> 6×6×128
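
This progression is easy to verify: a stride-1 'SAME' convolution preserves height and width, while a 2×2 'VALID' max pool with stride 2 maps a side of n to floor(n/2). A small sketch that mirrors the shape arithmetic above (it is not the model definition itself):

# Check the shape progression: 'SAME' conv keeps H/W; 'VALID' 2x2/2 pool halves it, rounding down.
def same_conv(size):
    return size  # stride-1 'SAME' convolution preserves spatial size

def valid_pool(size):
    return size // 2  # 2x2 'VALID' max pool with stride 2: floor(size / 2)

size = 100
for layer in ['conv1', 'pool1', 'conv2', 'pool2', 'conv3', 'pool3', 'conv4', 'pool4']:
    size = same_conv(size) if layer.startswith('conv') else valid_pool(size)
    print(layer, size)  # pool3 gives 25 // 2 = 12, pool4 gives 12 // 2 = 6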

The CNN training code is as follows:

from skimage import io, transform
import glob
import os
import tensorflow as tf
import numpy as np
import time
# Dataset directory
path = 'E:/data/datasets/flower_photos/'
# Where to save the trained model
model_path = 'E:/data/model/flower/model.ckpt'
# Resize all images to 100*100
w = 100
h = 100
c = 3
# Read the images: each subdirectory is one class, and its index becomes the label
def read_img(path):
    cate = [path + x for x in os.listdir(path) if os.path.isdir(path + x)]
    imgs = []
    labels = []
    for idx, folder in enumerate(cate):
        for im in glob.glob(folder + '/*.jpg'):
            print('reading the image: %s' % im)
            img = io.imread(im)
            img = transform.resize(img, (w, h))
            imgs.append(img)
            labels.append(idx)
    return np.asarray(imgs, np.float32), np.asarray(labels, np.int32)
data, label = read_img(path)
# Shuffle the samples
num_example = data.shape[0]
arr = np.arange(num_example)
np.random.shuffle(arr)
data = data[arr]
label = label[arr]
# Split the data into training and validation sets
ratio = 0.8
s = int(num_example * ratio)
x_train = data[:s]
y_train = label[:s]
x_val = data[s:]
y_val = label[s:]
# -----------------Build the network----------------------
# Placeholders
x = tf.placeholder(tf.float32, shape=[None, w, h, c], name='x')
y_ = tf.placeholder(tf.int32, shape=[None, ], name='y_')
def inference(input_tensor, train, regularizer):
    with tf.variable_scope('layer1-conv1'):
        conv1_weights = tf.get_variable("weight", [5, 5, 3, 32], initializer=tf.truncated_normal_initializer(stddev=0.1))
        conv1_biases = tf.get_variable("bias", [32], initializer=tf.constant_initializer(0.0))
        conv1 = tf.nn.conv2d(input_tensor, conv1_weights, strides=[1, 1, 1, 1], padding='SAME')
        relu1 = tf.nn.relu(tf.nn.bias_add(conv1, conv1_biases))
    with tf.name_scope("layer2-pool1"):
        pool1 = tf.nn.max_pool(relu1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
    with tf.variable_scope("layer3-conv2"):
        conv2_weights = tf.get_variable("weight", [5, 5, 32, 64], initializer=tf.truncated_normal_initializer(stddev=0.1))
        conv2_biases = tf.get_variable("bias", [64], initializer=tf.constant_initializer(0.0))
        conv2 = tf.nn.conv2d(pool1, conv2_weights, strides=[1, 1, 1, 1], padding='SAME')
        relu2 = tf.nn.relu(tf.nn.bias_add(conv2, conv2_biases))
    with tf.name_scope("layer4-pool2"):
        pool2 = tf.nn.max_pool(relu2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
    with tf.variable_scope("layer5-conv3"):
        conv3_weights = tf.get_variable("weight", [3, 3, 64, 128], initializer=tf.truncated_normal_initializer(stddev=0.1))
        conv3_biases = tf.get_variable("bias", [128], initializer=tf.constant_initializer(0.0))
        conv3 = tf.nn.conv2d(pool2, conv3_weights, strides=[1, 1, 1, 1], padding='SAME')
        relu3 = tf.nn.relu(tf.nn.bias_add(conv3, conv3_biases))
    with tf.name_scope("layer6-pool3"):
        pool3 = tf.nn.max_pool(relu3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
    with tf.variable_scope("layer7-conv4"):
        conv4_weights = tf.get_variable("weight", [3, 3, 128, 128], initializer=tf.truncated_normal_initializer(stddev=0.1))
        conv4_biases = tf.get_variable("bias", [128], initializer=tf.constant_initializer(0.0))
        conv4 = tf.nn.conv2d(pool3, conv4_weights, strides=[1, 1, 1, 1], padding='SAME')
        relu4 = tf.nn.relu(tf.nn.bias_add(conv4, conv4_biases))
    with tf.name_scope("layer8-pool4"):
        pool4 = tf.nn.max_pool(relu4, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
        nodes = 6 * 6 * 128
        reshaped = tf.reshape(pool4, [-1, nodes])
    with tf.variable_scope('layer9-fc1'):
        fc1_weights = tf.get_variable("weight", [nodes, 1024],
                                      initializer=tf.truncated_normal_initializer(stddev=0.1))
        if regularizer is not None: tf.add_to_collection('losses', regularizer(fc1_weights))
        fc1_biases = tf.get_variable("bias", [1024], initializer=tf.constant_initializer(0.1))
        fc1 = tf.nn.relu(tf.matmul(reshaped, fc1_weights) + fc1_biases)
        if train: fc1 = tf.nn.dropout(fc1, 0.5)
    with tf.variable_scope('layer10-fc2'):
        fc2_weights = tf.get_variable("weight", [1024, 512],
                                      initializer=tf.truncated_normal_initializer(stddev=0.1))
        if regularizer is not None: tf.add_to_collection('losses', regularizer(fc2_weights))
        fc2_biases = tf.get_variable("bias", [512], initializer=tf.constant_initializer(0.1))
        fc2 = tf.nn.relu(tf.matmul(fc1, fc2_weights) + fc2_biases)
        if train: fc2 = tf.nn.dropout(fc2, 0.5)
    with tf.variable_scope('layer11-fc3'):
        fc3_weights = tf.get_variable("weight", [512, 5],
                                      initializer=tf.truncated_normal_initializer(stddev=0.1))
        if regularizer is not None: tf.add_to_collection('losses', regularizer(fc3_weights))
        fc3_biases = tf.get_variable("bias", [5], initializer=tf.constant_initializer(0.1))
        logit = tf.matmul(fc2, fc3_weights) + fc3_biases
    return logit
# ---------------------------End of network---------------------------
regularizer = tf.contrib.layers.l2_regularizer(0.0001)
# Note: train=False disables dropout here, since the same graph is reused for validation
logits = inference(x, False, regularizer)
# (Small trick) multiply logits by 1 under a fixed name, so the output tensor can be
# fetched by name ('logits_eval') when the model is restored later
b = tf.constant(value=1, dtype=tf.float32)
logits_eval = tf.multiply(logits, b, name='logits_eval')
# Mean cross-entropy plus the L2 terms collected in the 'losses' collection
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y_)) \
      + tf.add_n(tf.get_collection('losses'))
train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
correct_prediction = tf.equal(tf.cast(tf.argmax(logits, 1), tf.int32), y_)
acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Helper: yield the data one mini-batch at a time
def minibatches(inputs=None, targets=None, batch_size=None, shuffle=False):
    assert len(inputs) == len(targets)
    if shuffle:
        indices = np.arange(len(inputs))
        np.random.shuffle(indices)
    for start_idx in range(0, len(inputs) - batch_size + 1, batch_size):
        if shuffle:
            excerpt = indices[start_idx:start_idx + batch_size]
        else:
            excerpt = slice(start_idx, start_idx + batch_size)
        yield inputs[excerpt], targets[excerpt]
# Training and validation; n_epoch can be set larger
n_epoch = 10
batch_size = 64
saver = tf.train.Saver()
sess = tf.Session()
sess.run(tf.global_variables_initializer())
for epoch in range(n_epoch):
    start_time = time.time()
    # training
    train_loss, train_acc, n_batch = 0, 0, 0
    for x_train_a, y_train_a in minibatches(x_train, y_train, batch_size, shuffle=True):
        _, err, ac = sess.run([train_op, loss, acc], feed_dict={x: x_train_a, y_: y_train_a})
        train_loss += err; train_acc += ac; n_batch += 1
    print("   train loss: %f" % (train_loss / n_batch))
    print("   train acc: %f" % (train_acc / n_batch))
    # validation
    val_loss, val_acc, n_batch = 0, 0, 0
    for x_val_a, y_val_a in minibatches(x_val, y_val, batch_size, shuffle=False):
        err, ac = sess.run([loss, acc], feed_dict={x: x_val_a, y_: y_val_a})
        val_loss += err; val_acc += ac; n_batch += 1
    print("   validation loss: %f" % (val_loss / n_batch))
    print("   validation acc: %f" % (val_acc / n_batch))
saver.save(sess, model_path)
sess.close()
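
Before moving on to prediction, it can be worth confirming what was actually written to the checkpoint. A minimal sketch using TF 1.x's checkpoint reader, assuming the model_path from the script above:

import tensorflow as tf

# List the variables stored in the saved checkpoint and their shapes
reader = tf.train.NewCheckpointReader('E:/data/model/flower/model.ckpt')
for name, shape in sorted(reader.get_variable_to_shape_map().items()):
    print(name, shape)  # e.g. layer1-conv1/weight [5, 5, 3, 32]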

2. Using the Model for Prediction

The code for calling the model to predict flowers is as follows:

from skimage import io, transform
import tensorflow as tf
import numpy as np
path1 = "E:/data/datasets/flower_photos/daisy/5547758_eea9edfd54_n.jpg"
path2 = "E:/data/datasets/flower_photos/dandelion/7355522_b66e5d3078_m.jpg"
path3 = "E:/data/datasets/flower_photos/roses/394990940_7af082cf8d_n.jpg"
path4 = "E:/data/datasets/flower_photos/sunflowers/6953297_8576bf4ea3.jpg"
path5 = "E:/data/datasets/flower_photos/tulips/10791227_7168491604.jpg"
flower_dict = {0: 'daisy', 1: 'dandelion', 2: 'roses', 3: 'sunflowers', 4: 'tulips'}
w = 100
h = 100
c = 3
# Read one image and resize it to the shape the model expects
def read_one_image(path):
    img = io.imread(path)
    img = transform.resize(img, (w, h))
    return np.asarray(img)
with tf.Session() as sess:
    data = []
    data1 = read_one_image(path1)
    data2 = read_one_image(path2)
    data3 = read_one_image(path3)
    data4 = read_one_image(path4)
    data5 = read_one_image(path5)
    data.append(data1)
    data.append(data2)
    data.append(data3)
    data.append(data4)
    data.append(data5)
    # Restore the graph definition and the trained weights
    saver = tf.train.import_meta_graph('E:/data/model/flower/model.ckpt.meta')
    saver.restore(sess, tf.train.latest_checkpoint('E:/data/model/flower/'))
    graph = tf.get_default_graph()
    x = graph.get_tensor_by_name("x:0")
    feed_dict = {x: data}
    logits = graph.get_tensor_by_name("logits_eval:0")
    classification_result = sess.run(logits, feed_dict)
    # Print the prediction matrix
    print(classification_result)
    # Print the index of the maximum value in each row of the prediction matrix
    print(tf.argmax(classification_result, 1).eval())
    # Map each index to a flower class via the dictionary
    output = tf.argmax(classification_result, 1).eval()
    for i in range(len(output)):
        print("Flower", i + 1, "prediction: " + flower_dict[output[i]])
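
The logits fetched above are unnormalized scores; if per-class confidences are wanted, each row can be pushed through a softmax. A minimal NumPy sketch, reusing np and classification_result from the script above:

# Convert a matrix of logits to per-class probabilities, row by row (numerically stable softmax)
def softmax(logits):
    shifted = logits - logits.max(axis=1, keepdims=True)  # subtract row max for stability
    exp = np.exp(shifted)
    return exp / exp.sum(axis=1, keepdims=True)

probs = softmax(classification_result)
for i, row in enumerate(probs):
    print("Flower", i + 1, "confidence: %.2f%%" % (100 * row.max()))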

Output:

[[ 5.76620245  3.18228579 -3.89464641 -2.81310582  1.40294015]
 [-1.01490593  3.55570269 -2.76053429  2.93104005 -3.47138596]
 [-8.05292606 -7.26499033 11.70479774  0.59627819  2.15948296]
 [-5.12940931  2.18423128 -3.33257103  9.0591135   5.03963232]
 [-4.25288343 -0.95963973 -2.33347392  1.54485476  5.76069307]]
[0 1 2 3 4]
Flower 1 prediction: daisy
Flower 2 prediction: dandelion
Flower 3 prediction: roses
Flower 4 prediction: sunflowers
Flower 5 prediction: tulips

Comparing the predictions with the five image paths in the prediction code, all five are correct.

This model's accuracy on the flower dataset is around 70%, while transfer learning with the Inception-v3 model reaches about 95% on the same dataset. The main reasons are that the CNN here is fairly simple, and the flower dataset is inherently harder to classify than the MNIST handwritten-digit dataset; the same model achieves noticeably higher accuracy on MNIST.
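
For reference, here is a minimal transfer-learning sketch in the spirit of that Inception-v3 approach, using the Keras InceptionV3 application. This is an assumption on my part: the original post does not show its transfer-learning code, and this API requires a newer TensorFlow than the graph-mode code above (InceptionV3 also prefers its own [-1, 1] input scaling rather than the [0, 1] images used earlier):

import tensorflow as tf

# Reuse ImageNet features, train only a small classification head
base = tf.keras.applications.InceptionV3(weights='imagenet', include_top=False,
                                         input_shape=(100, 100, 3))
base.trainable = False  # freeze the pretrained convolutional base

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation='softmax'),  # 5 flower classes
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=10)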
