PyTorch documentation
PyTorch is an optimized tensor library for deep learning using GPUs and CPUs.
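As a minimal sketch of that claim (not taken from the documentation itself), the snippet below creates a tensor on the CPU and moves it to a GPU when one is available; the same tensor operations run on either device.

```python
import torch

# Create a tensor on the CPU.
x = torch.randn(3, 3)

# Move it to the GPU if CUDA is available; otherwise keep it on the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = x.to(device)

# Tensor operations run on whichever device holds the data.
y = x @ x.t() + 1.0
print(y.device)
```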
Features described in this documentation are classified by release status:
Stable: These features will be maintained long-term and there should generally be no major performance limitations or gaps in documentation. We also expect to maintain backwards compatibility (although breaking changes can happen and notice will be given one release ahead of time).
Beta: These features are tagged as Beta because the API may change based on user feedback, because the performance needs to improve, or because coverage across operators is not yet complete. For Beta features, we are committing to seeing the feature through to the Stable classification. We are not, however, committing to backwards compatibility.
Prototype: These features are typically not available as part of binary distributions like PyPI or Conda, except sometimes behind run-time flags, and are at an early stage for feedback and testing.
Community
- PyTorch Governance | Build + CI
- PyTorch Contribution Guide
- PyTorch Design Philosophy
- PyTorch Governance | Mechanics
- PyTorch Governance | Maintainers
Developer Notes
- Automatic Mixed Precision examples
- Autograd mechanics
- Broadcasting semantics
- CPU threading and TorchScript inference
- CUDA semantics
- PyTorch Custom Operators Landing Page
- Distributed Data Parallel
- Extending PyTorch