Running Keras on a GPU locally on a Mac
- I. Background
- II. Technical background
- III. Experimental verification
- Local machine configuration
- Installing PlaidML
- Installing plaidml-keras
- Configuring the default GPU
- Running the code on the CPU
- Step 1: import Keras and load the CIFAR-10 dataset (this may require an external download; see [Keras basics and common issues](https://editor.csdn.net/md/?articleId=140331142) if you run into trouble)
- Step 2: load the model; if the weights are not present locally, they are downloaded automatically (see [Keras basics and common issues](https://editor.csdn.net/md/?articleId=140331142) if you run into trouble)
- Step 3: compile the model
- Step 4: run a single prediction
- Step 5: run 10 predictions
- Running the code on the GPU
- Using the GPU metal_intel(r)_uhd_graphics_630.0
- Step 0: install the PlaidML backend before any further Keras operations
- Step 1: import Keras and load the CIFAR-10 dataset
- Step 2: load the model; if the weights are not present locally, they are downloaded automatically
- Step 3: compile the model
- Step 4: run a single prediction
- Step 5: run 10 predictions
- Using the GPU metal_amd_radeon_pro_5300m.0
- Step 0: install the PlaidML backend before any further Keras operations
- Step 1: import Keras and load the CIFAR-10 dataset
- Step 2: load the model; if the weights are not present locally, they are downloaded automatically
- Step 3: compile the model
- Step 4: run a single prediction
- Step 5: run 10 predictions
- IV. Evaluation and discussion
I. Background
In the previous article we saw that running a large model on the CPU is extremely CPU-intensive. This naturally raises the question: if the machine has a GPU, how can we exploit the GPU's advantage in vector computation so that machine-learning workloads run both fast and well?
II. Technical background
The GPUs with the best mainstream machine-learning support are NVIDIA's (the so-called "N cards"), but Macs usually ship with AMD GPUs. The two hardware families expose different instruction sets, so the software layers above them need different implementations.
For AMD GPUs, however, there is a technology called PlaidML that papers over the differences between GPU vendors.
PlaidML project page: https://github.com/plaidml/plaidml
PlaidML already supports tools such as Keras, ONNX, and nGraph, so you can build a model directly in Keras and a MacBook will happily use its GPU.
With PlaidML, deep-learning training works across NVIDIA, AMD, and Intel GPUs alike.
Reference: "Accelerating reinforcement-learning training on a Mac with PlaidML"
III. Experimental verification
In this experiment, a standard Keras workload is run for several rounds on the CPU and on the GPU, and the elapsed times are recorded and compared.
Local machine configuration
The software and hardware of the Mac used in this test are as follows.
Installing PlaidML
Installing the dependencies involves interactive prompts, so the PlaidML package is installed from the command line, while the code itself is run in Jupyter.
Using Jupyter from a virtualenv requires creating a dedicated kernel, so for this verification it is simplest to use the system Python environment directly; readers familiar with configuring Jupyter inside a virtualenv can of course work there instead.
Installing plaidml-keras
pip3 install plaidml-keras
With the latest plaidml-keras, 0.7.0, installed via pip3 install plaidml-keras, the author hit a bug during backend initialization; downgrading to 0.6.4 made it work. Oddly, a later reinstall of 0.7.0 also ran fine.
Configuring the default GPU
Run the following in a terminal:
plaidml-setup
The interactive session looks like this:
(venv) tsingj@tsingjdeMacBook-Pro-2 ~ # plaidml-setup

PlaidML Setup (0.6.4)

Thanks for using PlaidML!

Some Notes:
  * Bugs and other issues: https://github.com/plaidml/plaidml
  * Questions: https://stackoverflow.com/questions/tagged/plaidml
  * Say hello: https://groups.google.com/forum/#!forum/plaidml-dev
  * PlaidML is licensed under the Apache License 2.0

Default Config Devices:
   metal_intel(r)_uhd_graphics_630.0 : Intel(R) UHD Graphics 630 (Metal)
   metal_amd_radeon_pro_5300m.0 : AMD Radeon Pro 5300M (Metal)

Experimental Config Devices:
   llvm_cpu.0 : CPU (LLVM)
   metal_intel(r)_uhd_graphics_630.0 : Intel(R) UHD Graphics 630 (Metal)
   opencl_amd_radeon_pro_5300m_compute_engine.0 : AMD AMD Radeon Pro 5300M Compute Engine (OpenCL)
   opencl_cpu.0 : Intel CPU (OpenCL)
   opencl_intel_uhd_graphics_630.0 : Intel Inc. Intel(R) UHD Graphics 630 (OpenCL)
   metal_amd_radeon_pro_5300m.0 : AMD Radeon Pro 5300M (Metal)

Using experimental devices can cause poor performance, crashes, and other nastiness.

Enable experimental device support? (y,n)[n]:
This lists the devices PlaidML can use: you choose between the 2 devices supported by default and the full set of 6 experimental devices. The 2 default devices are exactly the two GPUs shown in the screenshot at the beginning. For stability, answer n here and press Enter.
Multiple devices detected (You can override by setting PLAIDML_DEVICE_IDS).
Please choose a default device:

   1 : metal_intel(r)_uhd_graphics_630.0
   2 : metal_amd_radeon_pro_5300m.0

Default device? (1,2)[1]:1

Selected device:
    metal_intel(r)_uhd_graphics_630.0
對于默認選擇的設置,設置一個默認設備,這里我們先將metal_intel?_uhd_graphics_630.0設置為默認設備,當然這個設備其實性能比較差,后續我們會再將metal_amd_radeon_pro_5300m.0設置為默認設備進行對比。
寫入 1 之后,回車。
Almost done. Multiplying some matrices...
Tile code:
  function (B[X,Z], C[Z,Y]) -> (A) { A[x,y : X,Y] = +(B[x,z] * C[z,y]); }
Whew. That worked.

Save settings to /Users/tsingj/.plaidml? (y,n)[y]:y
Success!
Press Enter to write the settings to the default configuration file; setup is complete.
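As the setup output above notes, the device choice can also be overridden per process via the PLAIDML_DEVICE_IDS environment variable instead of rerunning the interactive setup. A minimal sketch (the device name is one of the IDs listed by plaidml-setup; this is an assumption about usage, not taken from the article's own steps):

```python
import os

# Pin PlaidML to a specific device for this process only; the ID must
# match one of the names printed by `plaidml-setup`.
os.environ["PLAIDML_DEVICE_IDS"] = "metal_amd_radeon_pro_5300m.0"
# Tell Keras to use the PlaidML backend.
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"

print(os.environ["PLAIDML_DEVICE_IDS"])
```

Set these before importing Keras so the backend picks them up.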
Running the code on the CPU
This section runs a simple workload in Jupyter and times it.
Step 1: import Keras and load the CIFAR-10 dataset (this may require an external download; see the Keras basics and common issues article if you run into trouble)
#!/usr/bin/env python
import numpy as np
import os
import time
import keras
import keras.applications as kapp
from keras.datasets import cifar10
(x_train, y_train_cats), (x_test, y_test_cats) = cifar10.load_data()
batch_size = 8
x_train = x_train[:batch_size]
x_train = np.repeat(np.repeat(x_train, 7, axis=1), 7, axis=2)
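The two np.repeat calls above upscale CIFAR-10's 32×32 images by a factor of 7 along both height and width, producing the 224×224 inputs that VGG19 expects. A quick shape check with dummy data:

```python
import numpy as np

# CIFAR-10 images are 32x32x3; use a batch of 8 as in the code above.
x = np.zeros((8, 32, 32, 3), dtype=np.uint8)
# Repeat each pixel 7 times along height and width: 32 * 7 = 224.
x = np.repeat(np.repeat(x, 7, axis=1), 7, axis=2)
print(x.shape)  # (8, 224, 224, 3)
```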
Note that the default Keras backend here is TensorFlow, as the log output confirms:
2024-07-11 14:36:02.753107: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
Step 2: load the model; if the weights are not present locally, they are downloaded automatically (see the Keras basics and common issues article if you run into trouble)
model = kapp.VGG19()
Step 3: compile the model
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])
Step 4: run a single prediction
print("Running initial batch (compiling tile program)")
y = model.predict(x=x_train, batch_size=batch_size)
Running initial batch (compiling tile program)
1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 1s/step
Step 5: run 10 predictions
# Now start the clock and run 10 batches
print("Timing inference...")
start = time.time()
for i in range(10):
    y = model.predict(x=x_train, batch_size=batch_size)
    print("Ran in {} seconds".format(time.time() - start))
1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 891ms/step
Ran in 0.9295139312744141 seconds
1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 923ms/step
Ran in 1.8894760608673096 seconds
1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 893ms/step
Ran in 2.818492889404297 seconds
1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 932ms/step
Ran in 3.7831668853759766 seconds
1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 892ms/step
Ran in 4.71358585357666 seconds
1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 860ms/step
Ran in 5.609835863113403 seconds
1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 878ms/step
Ran in 6.5182459354400635 seconds
1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 871ms/step
Ran in 7.423128128051758 seconds
1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 896ms/step
Ran in 8.352543830871582 seconds
1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 902ms/step
Ran in 9.288795948028564 seconds
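Since the loop prints cumulative elapsed time, the CPU's per-batch latency follows from dividing the final figure by the number of batches:

```python
# Final cumulative time after 10 batches on the CPU (from the output above).
cpu_total_s = 9.288795948028564
batches = 10
per_batch_s = cpu_total_s / batches
print("{:.3f} s per batch".format(per_batch_s))  # 0.929 s per batch
```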
Running the code on the GPU
Using the GPU metal_intel(r)_uhd_graphics_630.0
Step 0: install the PlaidML backend before any further Keras operations
# Importing PlaidML. Make sure you follow this order
import plaidml.keras
plaidml.keras.install_backend()
import os
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"
Notes:
1. With plaidml 0.7.0, the plaidml.keras.install_backend() call raised an error.
2. This step routes Keras through PlaidML, setting the compute backend to PlaidML instead of TensorFlow.
Step 1: import Keras and load the CIFAR-10 dataset
#!/usr/bin/env python
import numpy as np
import os
import time
import keras
import keras.applications as kapp
from keras.datasets import cifar10
(x_train, y_train_cats), (x_test, y_test_cats) = cifar10.load_data()
batch_size = 8
x_train = x_train[:batch_size]
x_train = np.repeat(np.repeat(x_train, 7, axis=1), 7, axis=2)
Step 2: load the model; if the weights are not present locally, they are downloaded automatically
model = kapp.VGG19()
The first run prints the device in use:
INFO:plaidml:Opening device "metal_intel(r)_uhd_graphics_630.0"
Step 3: compile the model
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])
Step 4: run a single prediction
print("Running initial batch (compiling tile program)")
y = model.predict(x=x_train, batch_size=batch_size)
Running initial batch (compiling tile program)
由于輸出較快,內容打印只有一行。
Step 5: run 10 predictions
# Now start the clock and run 10 batches
print("Timing inference...")
start = time.time()
for i in range(10):
    y = model.predict(x=x_train, batch_size=batch_size)
    print("Ran in {} seconds".format(time.time() - start))
Ran in 4.241918087005615 seconds
Ran in 8.452141046524048 seconds
Ran in 12.665411949157715 seconds
Ran in 16.849968910217285 seconds
Ran in 21.025720834732056 seconds
Ran in 25.212764024734497 seconds
Ran in 29.405478954315186 seconds
Ran in 33.594977140426636 seconds
Ran in 37.7886438369751 seconds
Ran in 41.98136305809021 seconds
Using the GPU metal_amd_radeon_pro_5300m.0
Run plaidml-setup again, and at the device-selection step choose metal_amd_radeon_pro_5300m.0 instead of metal_intel(r)_uhd_graphics_630.0:
(venv) tsingj@tsingjdeMacBook-Pro-2 ~ # plaidml-setup

PlaidML Setup (0.6.4)

Thanks for using PlaidML!

Some Notes:
  * Bugs and other issues: https://github.com/plaidml/plaidml
  * Questions: https://stackoverflow.com/questions/tagged/plaidml
  * Say hello: https://groups.google.com/forum/#!forum/plaidml-dev
  * PlaidML is licensed under the Apache License 2.0

Default Config Devices:
   metal_intel(r)_uhd_graphics_630.0 : Intel(R) UHD Graphics 630 (Metal)
   metal_amd_radeon_pro_5300m.0 : AMD Radeon Pro 5300M (Metal)

Experimental Config Devices:
   llvm_cpu.0 : CPU (LLVM)
   metal_intel(r)_uhd_graphics_630.0 : Intel(R) UHD Graphics 630 (Metal)
   opencl_amd_radeon_pro_5300m_compute_engine.0 : AMD AMD Radeon Pro 5300M Compute Engine (OpenCL)
   opencl_cpu.0 : Intel CPU (OpenCL)
   opencl_intel_uhd_graphics_630.0 : Intel Inc. Intel(R) UHD Graphics 630 (OpenCL)
   metal_amd_radeon_pro_5300m.0 : AMD Radeon Pro 5300M (Metal)

Using experimental devices can cause poor performance, crashes, and other nastiness.

Enable experimental device support? (y,n)[n]:n

Multiple devices detected (You can override by setting PLAIDML_DEVICE_IDS).
Please choose a default device:

   1 : metal_intel(r)_uhd_graphics_630.0
   2 : metal_amd_radeon_pro_5300m.0

Default device? (1,2)[1]:2

Selected device:
    metal_amd_radeon_pro_5300m.0

Almost done. Multiplying some matrices...
Tile code:
  function (B[X,Z], C[Z,Y]) -> (A) { A[x,y : X,Y] = +(B[x,z] * C[z,y]); }
Whew. That worked.

Save settings to /Users/tsingj/.plaidml? (y,n)[y]:y
Success!
Step 0: install the PlaidML backend before any further Keras operations
# Importing PlaidML. Make sure you follow this order
import plaidml.keras
plaidml.keras.install_backend()
import os
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"
Notes:
1. With plaidml 0.7.0, the plaidml.keras.install_backend() call raised an error.
2. This step routes Keras through PlaidML, setting the compute backend to PlaidML instead of TensorFlow.
Step 1: import Keras and load the CIFAR-10 dataset
#!/usr/bin/env python
import numpy as np
import os
import time
import keras
import keras.applications as kapp
from keras.datasets import cifar10
(x_train, y_train_cats), (x_test, y_test_cats) = cifar10.load_data()
batch_size = 8
x_train = x_train[:batch_size]
x_train = np.repeat(np.repeat(x_train, 7, axis=1), 7, axis=2)
Step 2: load the model; if the weights are not present locally, they are downloaded automatically
model = kapp.VGG19()
INFO:plaidml:Opening device "metal_amd_radeon_pro_5300m.0"
Again, the device info is printed on the first run.
Step 3: compile the model
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])
Step 4: run a single prediction
print("Running initial batch (compiling tile program)")
y = model.predict(x=x_train, batch_size=batch_size)
Running initial batch (compiling tile program)
由于輸出較快,內容打印只有一行。
Step 5: run 10 predictions
# Now start the clock and run 10 batches
print("Timing inference...")
start = time.time()
for i in range(10):
    y = model.predict(x=x_train, batch_size=batch_size)
    print("Ran in {} seconds".format(time.time() - start))
The output:
Ran in 0.43606019020080566 seconds
Ran in 0.8583459854125977 seconds
Ran in 1.2787911891937256 seconds
Ran in 1.70143723487854 seconds
Ran in 2.1235032081604004 seconds
Ran in 2.5464580059051514 seconds
Ran in 2.9677979946136475 seconds
Ran in 3.390064001083374 seconds
Ran in 3.8117799758911133 seconds
Ran in 4.236911058425903 seconds
IV. Evaluation and discussion
The metal_intel(r)_uhd_graphics_630.0 GPU has 1536 MB of memory; although it is a GPU, its compute performance here falls well short of the machine's 6-core CPU (about 4.2 s versus 0.93 s per batch).
The metal_amd_radeon_pro_5300m.0 GPU has 4 GB of memory, and its performance is roughly double that of the CPU (about 0.42 s versus 0.93 s per batch).
This demonstrates the substantial advantage a capable GPU brings to machine-learning workloads.
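The three runs can be compared directly from their final cumulative times (10 batches each), which is where the speedup figures above come from:

```python
# Final cumulative times for 10 batches, taken from the runs above.
cpu_s   = 9.288795948028564   # 6-core CPU (TensorFlow backend)
intel_s = 41.98136305809021   # Intel UHD Graphics 630 (PlaidML / Metal)
amd_s   = 4.236911058425903   # AMD Radeon Pro 5300M (PlaidML / Metal)

# Speedup relative to the CPU run (>1 means faster than the CPU).
print("AMD vs CPU:   {:.2f}x".format(cpu_s / amd_s))    # 2.19x
print("Intel vs CPU: {:.2f}x".format(cpu_s / intel_s))  # 0.22x (slower)
```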