What You Need to Know to Learn Deep Learning
FAU Lecture Notes on Deep Learning
The Corona pandemic was a huge challenge for many of us and affected our lives in a variety of ways. I have been teaching a class on Deep Learning at Friedrich-Alexander-University Erlangen-Nuremberg, Germany, for several years now. This summer, our university decided to go completely “virtual”. Therefore, I started recording my lecture in short clips of 15 minutes each.

For a topic such as “Deep Learning”, you have to update the content of a lecture every semester. Therefore, I had not been able to provide lecture notes so far. However, with the video recordings and the help of automatic speech recognition, I was able to transcribe the entire lecture. This is why I decided to post a corresponding, manually corrected transcript for every video here on Medium. I was very glad that “Towards Data Science” published all of them in their esteemed publication. They even asked me to create a column “FAU Lecture Notes”. So, I’d like to take this opportunity to thank Towards Data Science for their great support of this project!

To streamline the creation of the blog posts, I created a small tool chain “autoblog” that I also make available free of charge. All content here is also released under CC BY 4.0 unless stated otherwise. So, you are also free to reuse this content.
In the following, I list the individual posts grouped by chapter, with a link to the respective videos. In case you prefer the videos, you can also watch the entire lecture as a playlist. Note that I upgraded my recording equipment twice this semester. You should see the video quality improve from Chapter 7 — Architectures and again from Chapter 9 — Visualization & Attention.
So, I hope you find these posts and videos useful. In case you like them, please leave a comment or recommend this project to your friends.
Chapter 1 — Introduction

In these videos, we introduce the topic of Deep Learning and show some highlights in terms of literature and applications.
Part 1: Motivation & High Profile Applications (Video)
Part 2: Highlights at FAU (Video)
Part 3: Limitations of Deep Learning and Future Directions (Video)
Part 4: A short course in Pattern Recognition (Video)
Part 5: Exercises & Outlook (Video)
Chapter 2 — Feedforward Networks

Here, we present the basics of pattern recognition and simple feedforward networks, including the concept of layer abstraction.
Part 1: Why do we need Deep Learning? (Video)
Part 2: How can Networks actually be trained? (Video)
Part 3: The Backpropagation Algorithm (Video)
Part 4: Layer Abstraction (Video)
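For a first hands-on impression of how such a network is trained, here is a tiny NumPy sketch (my own toy example, not code from the lecture) that fits a two-layer feedforward network to the XOR problem with hand-written backpropagation:

```python
import numpy as np

# Toy example: a two-layer feedforward network trained on XOR with
# hand-written backpropagation (sigmoid activations, squared-error loss).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: apply the chain rule layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient descent step.
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # predictions should approach [0, 1, 1, 0]
```

The videos of this chapter derive exactly these update rules in full generality.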
Chapter 3 — Loss & Optimization

Some background on loss functions and the relation of deep learning to classical methods such as the Support Vector Machine (SVM).
Part 1: Classification and Regression Losses (Video)
Part 2: Do SVMs beat Deep Learning? (Video)
Part 3: Optimization with ADAM and beyond… (Video)
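As a quick taste of the losses compared in these videos, the following sketch (my own illustrative numbers) evaluates the cross-entropy loss of a probabilistic classifier and the hinge loss that underlies the SVM for a single sample:

```python
import numpy as np

def cross_entropy(p_true_class):
    # Negative log-likelihood of the probability assigned to the true class.
    return -np.log(p_true_class)

def hinge(score_true, score_other):
    # SVM-style loss: penalize margins smaller than 1 between the score of
    # the true class and the score of a competing class.
    return max(0.0, 1.0 - (score_true - score_other))

print(round(cross_entropy(0.8), 3))   # 0.223: confident, nearly correct
print(hinge(2.0, 0.5))                # 0.0: margin larger than 1, no loss
print(round(hinge(0.5, 0.2), 3))      # 0.7: margin too small, loss incurred
```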
Chapter 4 — Activations, Convolution & Pooling

In this chapter, we discuss classical activation functions, their modern variants, the concept of convolutional layers, as well as pooling mechanisms.
Part 1: Classical Activations (Video)
Part 2: Modern Activations (Video)
Part 3: Convolutional Layers (Video)
Part 4: Pooling Mechanisms (Video)
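To make these operations concrete, here is a small sketch (my own minimal implementation, not the lecture's code) of the ReLU activation, a “valid” 2-D convolution, and 2×2 max pooling:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def conv2d_valid(image, kernel):
    # Slide the kernel over the image without padding ("valid" mode).
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2x2(x):
    # Keep the maximum of every non-overlapping 2x2 block.
    h, w = x.shape
    x = x[:h - h % 2, :w - w % 2]              # crop odd edges
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1.0, -1.0]])               # horizontal difference filter
features = relu(conv2d_valid(image, kernel))   # shape (4, 3)
print(max_pool2x2(features).shape)             # (2, 1)
```

Real frameworks implement the same operations far more efficiently, but the arithmetic is exactly this.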
Chapter 5 — Regularization

This chapter looks into the problem of overfitting and discusses several common methods to avoid it.
Part 1: The Bias-Variance Trade-off (Video)
Part 2: Classical Techniques (Video)
Part 3: Normalization & Dropout (Video)
Part 4: Initialization & Transfer Learning (Video)
Part 5: Multi-task Learning (Video)
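Dropout, one of the techniques covered here, is simple enough to sketch in a few lines. The following is my own minimal version of inverted dropout, not the lecture's code:

```python
import numpy as np

def dropout(activations, p_drop, rng, training=True):
    if not training:
        return activations                   # no-op at inference time
    mask = rng.random(activations.shape) >= p_drop
    # "Inverted" dropout: rescale the survivors so that the expected
    # activation is the same during training and inference.
    return activations * mask / (1.0 - p_drop)

rng = np.random.default_rng(0)
h = np.ones(10)                              # pretend hidden activations
print(dropout(h, 0.5, rng))                  # about half zeroed, rest 2.0
```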
Chapter 6 — Common Practices

This chapter is dedicated to common problems that you will face in practice ranging from hyperparameters to performance evaluation and significance testing.
Part 1: Optimizers & Learning Rates (Video)
Part 2: Hyperparameters and Ensembling (Video)
Part 3: Class Imbalance (Video)
Part 4: Performance Evaluation (Video)
Chapter 7 — Architectures

In this chapter, we present the most common and popular architectures in deep learning.
Part 1: From LeNet to GoogLeNet (Video)
Part 2: Deeper Architectures (Video)
Part 3: Residual Networks (Video)
Part 4: The Rise of the Residual Connections (Video)
Part 5: Learning Architectures (Video)
Chapter 8 — Recurrent Neural Networks

Recurrent neural networks allow the processing and generation of time-dependent data.
Part 1: The Elman Cell (Video)
Part 2: Backpropagation through Time (Video)
Part 3: A Tribute to Schmidhuber — LSTMs (Video)
Part 4: Gated Recurrent Units (Video)
Part 5: Sequence Generation (Video)
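The Elman cell from Part 1 can be sketched very compactly. In the toy example below, the weight names and sizes are my own assumptions for illustration:

```python
import numpy as np

# Elman cell: the new hidden state depends on the current input and the
# previous hidden state, h_t = tanh(x_t W_xh + h_{t-1} W_hh).
rng = np.random.default_rng(42)
input_dim, hidden_dim = 3, 4                 # sizes chosen only for illustration
W_xh = rng.normal(scale=0.5, size=(input_dim, hidden_dim))
W_hh = rng.normal(scale=0.5, size=(hidden_dim, hidden_dim))

def elman_step(x_t, h_prev):
    return np.tanh(x_t @ W_xh + h_prev @ W_hh)

h = np.zeros(hidden_dim)                     # initial hidden state
sequence = rng.normal(size=(5, input_dim))   # a sequence of 5 time steps
for x_t in sequence:
    h = elman_step(x_t, h)                   # the state is fed back each step
print(h.shape)                               # (4,)
```

Unrolling this loop over time is precisely what makes backpropagation through time (Part 2) possible.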
Chapter 9 — Visualization & Attention

Visualization methods are used to explore weaknesses of deep nets and to provide better ways of understanding them.
Part 1: Architecture & Training Visualization (Video)
Part 2: Confounders & Adversarial Attacks (Video)
Part 3: Direct Visualization Methods (Video)
Part 4: Gradient and Optimisation-based Methods (Video)
Part 5: Attention Mechanisms (Video)
Chapter 10 — Reinforcement Learning

Reinforcement learning allows training of agent systems that can act on their own and control games and processes.
Part 1: Sequential Decision Making (Video)
Part 2: Markov Decision Processes (Video)
Part 3: Policy Iteration (Video)
Part 4: Alternative Approaches (Video)
Part 5: Deep Q-Learning (Video)
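As a small illustration of the ideas leading up to deep Q-learning, here is a tabular Q-learning sketch on a toy corridor environment of my own making (not from the lecture):

```python
import numpy as np

# Toy corridor: states 0..4 in a row, actions 0 = left / 1 = right; the
# agent starts at 0 and receives reward 1 for reaching the rightmost state.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9
rng = np.random.default_rng(1)

for _ in range(300):                              # episodes
    s = 0
    while s != n_states - 1:
        a = int(rng.integers(n_actions))          # purely random exploration
        s_next = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

print(np.argmax(Q, axis=1)[:-1])   # greedy policy: "right" in every state
```

Deep Q-learning (Part 5) replaces the table Q with a neural network, but the update rule is the same.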
Chapter 11 — Unsupervised Learning

Unsupervised learning does not require annotated training data and can be used to generate new observations.
Part 1: Motivation & Restricted Boltzmann Machines (Video)
Part 2: Autoencoders (Video)
Part 3: Generative Adversarial Networks — The Basics (Video)
Part 4: Conditional & Cycle GANs (Video)
Part 5: Advanced GAN Methods (Video)
Chapter 12 — Segmentation & Object Detection

Segmentation and Detection are common problems in which deep learning is used.
Part 1: Segmentation Basics (Video)
Part 2: Skip Connections & More (Video)
Part 3: A Family of Regional CNNs (Video)
Part 4: Single Shot Detectors (Video)
Part 5: Instance Segmentation (Video)
Chapter 13 — Weakly and Self-supervised Learning

Weak supervision tries to minimize the required labeling effort, while self-supervision tries to get rid of labels completely.
Part 1: From Class to Pixels (Video)
Part 2: From 2-D to 3-D Annotations (Video)
Part 3: Self-Supervised Labels (Video)
Part 4: Contrastive Losses (Video)
Chapter 14 — Graph Deep Learning

Graph deep learning is used to process data available in graphs and meshes.
Part 1: Spectral Convolutions (Video)
Part 2: From Spectral to Spatial Domain (Video)
Chapter 15 — Known Operator Learning

Known operators allow the insertion of prior knowledge into deep networks, reducing the number of unknown parameters and improving the generalization properties of deep networks.
Part 1: Don’t re-invent the Wheel (Video)
Part 2: Boundaries on Learning (Video)
Part 3: CT Reconstruction Revisited (Video)
Part 4: Deep Design Patterns (Video)
Acknowledgements
Many thanks to Katharina Breininger, Weilin Fu, Tobias Würfl, Vincent Christlein, Florian Thamm, Felix Denzinger, Florin Ghesu, Yan Xia, Yixing Huang, Christopher Syben, Marc Aubreville, and all our student tutors for their support in this and the last semesters, for creating these slides and corresponding exercises, for teaching the class in person and virtually, and for the great teamwork over the last few years!
In case you are not a subscriber to Medium and have trouble accessing the material, we also host all of the blog posts on the Pattern Recognition Lab’s website.
If you liked this post, you can find more essays here, more educational material on Machine Learning here, or have a look at our Deep Learning Lecture. I would also appreciate a follow on YouTube, Twitter, Facebook, or LinkedIn in case you want to be informed about more essays, videos, and research in the future. This article is released under the Creative Commons 4.0 Attribution License and can be reprinted and modified if referenced. If you are interested in generating transcripts from video lectures, try AutoBlog.
Translated from: https://towardsdatascience.com/all-you-want-to-know-about-deep-learning-8d68dcffc258