Recently, I was running pretraining experiments on a server with PyTorch's distributed framework. At first everything went smoothly, but after we increased the model's depth and width, training would fail after a few epochs with the following error:
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 41495 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 41497 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 41498 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 41500 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 41502 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 41504 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 41506 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 1 (pid: 41496) of binary: /home/user/anaconda3/envs/conda-envs/bin/python
Traceback (most recent call last):
  File "/home/user/anaconda3/envs/conda-envs/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/user/anaconda3/envs/conda-envs/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/user/anaconda3/envs/conda-envs/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in <module>
    main()
  File "/home/user/anaconda3/envs/conda-envs/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
    launch(args)
  File "/home/user/anaconda3/envs/conda-envs/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
    run(args)
  File "/home/user/anaconda3/envs/conda-envs/lib/python3.8/site-packages/torch/distributed/run.py", line 710, in run
    elastic_launch(
  File "/home/user/anaconda3/envs/conda-envs/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/user/anaconda3/envs/conda-envs/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 259, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
run_pretraining.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-08-30_09:05:52
  host      : ae83085e5bc2
  rank      : 1 (local_rank: 1)
  exitcode  : 1 (pid: 41496)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
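The summary above never shows the failing worker's own traceback; as its last line hints, one way to surface it is to wrap the training script's entry point with the record decorator from torch.distributed.elastic. A minimal sketch, assuming run_pretraining.py exposes a main() function (the function name is my assumption, not the original code):

# Hypothetical entry point of run_pretraining.py; only the @record usage is the point here.
from torch.distributed.elastic.multiprocessing.errors import record

@record  # writes the worker's exception into the elastic error report instead of <N/A>
def main():
    ...  # build the model and data, then run the training loop

if __name__ == "__main__":
    main()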
At first, I assumed the batch size was too large and the GPUs were running out of memory, but the problem kept occurring even after I reduced it. I then upgraded PyTorch to 2.0, and the error still appeared.
Later, while going through the training logs, I noticed that the gradient norm (grad_norm) fluctuated wildly during training, so I followed that lead and narrowed the cause down to the optimization side.
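For anyone who wants to add this kind of monitoring, below is a minimal, self-contained sketch that logs the per-step gradient norm; the tiny model, data, and hyperparameters are placeholders, not my actual pretraining setup. Note that clip_grad_norm_ returns the total norm computed before clipping, so it doubles as a monitor:

import torch

# Toy training loop that logs the gradient norm at every step.
model = torch.nn.Linear(16, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

for step in range(5):
    x = torch.randn(8, 16)
    y = torch.randn(8, 1)
    loss = torch.nn.functional.mse_loss(model(x), y)

    optimizer.zero_grad()
    loss.backward()
    # Returns the total gradient norm before clipping is applied.
    grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()

    print(f"step {step}: loss={loss.item():.4f} grad_norm={grad_norm.item():.4f}")

Sudden spikes in this value are usually the first visible sign that optimization is going off the rails.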
I then realized that my learning-rate setting used the linear scaling rule: my total batch size was 800, far larger than the reference 256, so during the actual run the initial learning rate was scaled from the 3e-4 I had configured up to roughly 1e-3. The learning rate was therefore too large, which caused training to collapse.
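To make the arithmetic concrete, here is the scaling rule written out; the variable names are mine, not the original config keys:

# Linear learning-rate scaling rule: lr grows proportionally with the global batch size.
base_lr = 3e-4      # learning rate set in the config
base_batch = 256    # reference batch size the rule scales from
total_batch = 800   # actual global batch size across all GPUs

scaled_lr = base_lr * total_batch / base_batch
print(scaled_lr)    # 9.375e-04, i.e. roughly 1e-3, much larger than intended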
Based on this conclusion, I set the initial learning rate to 2e-4, and the model returned to normal training.
The root cause behind this kind of error differs from case to case; I'm sharing mine here for reference only.