Graph Deep Learning - Part 2

FAU Lecture Notes on Deep Learning

These are the lecture notes for FAU's YouTube Lecture "Deep Learning". This is a full transcript of the lecture video & matching slides. We hope you enjoy these as much as the videos. Of course, this transcript was created largely automatically using deep learning techniques, and only minor manual modifications were performed. Try it yourself! If you spot mistakes, please let us know!


Navigation

Previous Lecture / Watch this Video / Top Level / Next Lecture


Graph deep learning and physical simulation go well together. Image created using gifify. Source: YouTube.

Welcome back to deep learning. So today, we want to continue talking about graph convolutions. We will look into the second part, where we now see whether we have to stay in the spectral domain or whether we can also go back to the spatial domain. So, let's look at what I have for you.


Image under CC BY 4.0 from the Deep Learning Lecture.

Remember we had this polynomial to define a convolution in the spectral domain. We've seen that by computing the eigenvectors of the Laplacian matrix, we were able to find an appropriate Fourier transform that would then give us a spectral representation of the graph configuration. Then, we could do our convolution in the spectral domain and transform it back. Now, this was very expensive because we have to compute U. For U, we have to do the eigenvalue decomposition of this entire symmetric matrix. Also, we've seen that we can't use the tricks of the fast Fourier transform because they don't necessarily hold for our U.

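To make the cost concrete, here is a minimal NumPy sketch of this spectral route. The toy graph, the feature vector x, and the filter coefficients θ are illustrative assumptions, not values from the lecture:

```python
import numpy as np

# Toy undirected graph: adjacency matrix A (illustrative).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
n = A.shape[0]

# Symmetric normalized graph Laplacian: L = I - D^{-1/2} A D^{-1/2}
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
L = np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt

# Eigendecomposition L = U Lambda U^T -- the expensive O(n^3) step.
lam, U = np.linalg.eigh(L)

# Spectral filter as a polynomial in the eigenvalues: g_hat(Lambda).
theta = [0.5, -0.3]                       # illustrative coefficients
G_hat = np.diag(theta[0] * lam**0 + theta[1] * lam**1)

# Convolution: Fourier transform (U^T), filter, inverse transform (U).
x = np.array([1.0, 2.0, 3.0, 4.0])        # one scalar feature per node
y = U @ G_hat @ U.T @ x
print(y)
```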

So, how can we choose our k and θ in order to get rid of U? Well, if we choose k equal to 1, θ subscript 0 equal to 2θ, and θ subscript 1 equal to -θ, we get the following polynomial. So, we still have the configuration that we transform x into the Fourier space, multiply by our polynomial expressed as a matrix, and apply the inverse Fourier transform. Now, let's look into the configuration of G hat. G hat can actually be expressed as 2 times θ times Λ to the power of 0 minus θ times Λ to the power of 1. Remember Λ is a diagonal matrix. So, we take every element to the power of 0, which actually gives an identity matrix, and Λ to the power of 1 is just Λ. Then, we can express our complete matrix G hat in this way. Of course, we can then pull in our U from the left-hand side and the right-hand side, which gives us the following expression. Now, we use the property that θ is actually a scalar. So, we can pull it to the front. The Λ to the power of 0 cancels out because this is essentially an identity matrix. The Λ in the right-hand term still remains, but we can also pull out the θ. Well, the U U transpose just cancels out. So, this is again the identity matrix, and we can use our definition of the symmetric version of our graph Laplacian. You can see that we've just found it here in our equation. So, we can also replace it with this one. You see, now U is suddenly gone. So, we can pull out θ again, and all that remains is two times the identity matrix minus the symmetric version of the graph Laplacian. If we now plug in the definition of the symmetric version in terms of the original adjacency matrix and the degree matrix, one of the identity matrices cancels out and we finally get the identity plus D to the power of -0.5 times A times D to the power of -0.5. So, remember D is a diagonal matrix. We can easily invert the elements on the diagonal and we can also take the element-wise square root. So, this is perfectly fine. This way, U does not come up here at all. We can express our entire graph convolution in this very nice way using the graph Laplacian matrix.

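Written out, the derivation just described reads as follows, with L_sym = I - D^{-1/2} A D^{-1/2} denoting the symmetric graph Laplacian:

```latex
\begin{aligned}
\hat{G} &= \theta_0 \Lambda^0 + \theta_1 \Lambda^1
         = 2\theta I - \theta \Lambda
         = \theta \,(2I - \Lambda) \\
U \hat{G} U^{\mathsf{T}}
        &= \theta \,(2\, U U^{\mathsf{T}} - U \Lambda U^{\mathsf{T}})
         = \theta \,(2I - L_{\mathrm{sym}}) \\
        &= \theta \,\bigl(2I - (I - D^{-1/2} A D^{-1/2})\bigr)
         = \theta \,\bigl(I + D^{-1/2} A D^{-1/2}\bigr)
\end{aligned}
```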

Image under CC BY 4.0 from the Deep Learning Lecture.

Now let's analyze this term a little more. We see this identity on the left-hand side: we can convolve in the spectral domain, and we can construct G hat as a polynomial of Laplacian filters. Then, with the particular choice k equals 1, θ subscript 0 equals 2θ, and θ subscript 1 equals -θ, this term suddenly depends only on the scalar value θ. With all these tricks, we got rid of the Fourier transform U transpose. So, we suddenly can express graph convolutions in this simplified way.


Image under CC BY 4.0 from the Deep Learning Lecture.

Well, this is the basic graph convolutional operation, and you can actually find it in reference [1]. You can essentially do this with scalar values: you take your degree matrix and plug it in here, you take your adjacency matrix and plug it in here. Then, you can optimize with respect to θ in order to find the weights for your convolutions.

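A minimal sketch of this simplified operation, using the same illustrative toy graph as above; note that no eigendecomposition is needed anymore. (In reference [1], this expression is additionally stabilized by a renormalization trick, which is omitted here.)

```python
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))

theta = 0.7                           # learnable scalar weight (illustrative)
x = np.array([1.0, 2.0, 3.0, 4.0])    # one scalar feature per node

# g_theta * x = theta (I + D^{-1/2} A D^{-1/2}) x -- no U required.
y = theta * (x + D_inv_sqrt @ A @ D_inv_sqrt @ x)
print(y)
```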

Image under CC BY 4.0 from the Deep Learning Lecture.

Well, now the question is "Is it really necessary to motivate the graph convolution from the spectral domain?" and the answer is "No." So, we can also motivate it spatially.


Image under CC BY 4.0 from the Deep Learning Lecture.

Well, let's look at the following concept. For a mathematician, a graph is a manifold, but a discrete one. We can discretize the manifold and do a spectral convolution using the Laplacian matrix. So, this led us to spectral graph convolutions. But as a computer scientist, you can interpret a graph as a set of nodes (vertices) connected through edges. We then need to define how to aggregate the information at one vertex from its neighbors. If we do so, we get the spatial graph convolution.


Image under CC BY 4.0 from the Deep Learning Lecture.

Well, how is this done? One approach, shown in [2], is GraphSAGE. Here, we essentially define a vertex of interest, and we define how neighbors contribute to the vertex of interest. So technically, we implement this using a feature vector at node v in the k-th layer. This can be written as h^k_v. For the zeroth layer, this contains the input, which is just the original configuration of your graph. Then, we need to be able to aggregate in order to compute the next layer. This is done by a spatial aggregation function over the previous layer. Therefore, you use all of the neighbors, and typically you define this neighborhood such that every node connected to the node under consideration is included.


Image under CC BY 4.0 from the Deep Learning Lecture.

So, this brings us to the GraphSAGE algorithm. Here, you start with a graph and input features. Then, you do the following: you initialize h^0 simply with the input graph configuration. Then, you iterate over the layers and, within each layer, over the nodes. For every node, you run the aggregation function that somehow computes a summary over all of its neighbors. The result is a vector of a certain dimension. You then take the aggregated vector and the node's current vector, concatenate them, and multiply them with a weight matrix. This is then run through a non-linearity. Lastly, you normalize by the magnitude of your activations. This is iterated over all of the layers, and finally, you get the output z that is the result of your graph convolution.

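The following is a compact sketch of one GraphSAGE layer with a mean aggregator, following exactly the steps just described: aggregate the neighbors, concatenate, multiply with a weight matrix, apply a non-linearity, and normalize. The toy graph, dimensions, and random weights are illustrative assumptions, not values from [2]:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph as neighborhood lists: node index -> neighbors (illustrative).
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
h = rng.normal(size=(4, 8))     # h^k: current features, 4 nodes, 8 dims
W = rng.normal(size=(8, 16))    # weight matrix applied to concat(h_v, h_agg)

h_next = np.zeros_like(h)
for v in range(4):
    h_agg = h[neighbors[v]].mean(axis=0)       # aggregate over the neighborhood
    z = W @ np.concatenate([h[v], h_agg])      # concatenate and project
    z = np.maximum(z, 0.0)                     # non-linearity (ReLU)
    h_next[v] = z / (np.linalg.norm(z) + 1e-12)  # normalize by the magnitude
h = h_next                                     # features h^{k+1} for the next layer
```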

Image under CC BY 4.0 from the Deep Learning Lecture.

The concept of aggregators is key to this algorithm because every node may have a different number of neighbors. A very simple aggregator would simply compute the mean. Of course, you can also take the GCN aggregator, which brings us back to the spectral representation. This way, the connection between the spatial and spectral domains can be established. Furthermore, you can take a pooling aggregator, which uses, for example, maximum pooling, or you can use recurrent networks like LSTM aggregators.

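As a small illustration of how only the aggregation step changes between variants, here is a mean aggregator next to a max-pooling aggregator over the features of one node's neighbors (again just an assumed toy setup):

```python
import numpy as np

rng = np.random.default_rng(1)
h_neigh = rng.normal(size=(3, 8))    # features of the 3 neighbors of some node

# Mean aggregator: average the neighbor features.
h_agg_mean = h_neigh.mean(axis=0)

# Pooling aggregator: transform each neighbor, then take an element-wise max.
W_pool = rng.normal(size=(8, 8))     # illustrative pooling weights
h_agg_pool = np.maximum(h_neigh @ W_pool.T, 0.0).max(axis=0)
```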

Image under CC BY 4.0 from the Deep Learning Lecture.

You already see that there is a broad variety of aggregators. This is also the reason why there are so many different graph deep learning approaches. You can subdivide them into certain kinds: there are spectral ones, there are spatial ones, and there are recurrent ones. So, this is essentially the key to how you can tackle graph convolutional neural networks. So, what do we actually want to do? Well, you can take one of these algorithms and apply it to some mesh. Of course, this can also be done on very complex meshes, and I will put a couple of references below so that you can see what kinds of applications can be done. For example, you can use these methods to process information on coronary arteries.


Image under CC BY 4.0 from the Deep Learning Lecture.

Well, next time in deep learning, there are only a couple of topics left. One thing that I want to show you is how you can embed prior knowledge into deep networks. This is also a quite nice idea because it allows us to fuse much of what we know from theory and signal processing with our deep learning approaches. Of course, I also have a couple of references, and if you have some time, please read through them. They elaborate much more closely on the ideas that we presented here. There are also image references that I'll put into the description of this video. So, thank you very much for listening and see you in the next lecture. Bye-bye!


Many more important concepts had to be omitted here. Therefore, enjoy further reading on Graph Deep Learning below. Image created using gifify. Source: YouTube.

If you liked this post, you can find more essays here, more educational material on Machine Learning here, or have a look at our Deep Learning Lecture. I would also appreciate a follow on YouTube, Twitter, Facebook, or LinkedIn in case you want to be informed about more essays, videos, and research in the future. This article is released under the Creative Commons 4.0 Attribution License and can be reprinted and modified if referenced. If you are interested in generating transcripts from video lectures, try AutoBlog.


Thanks

Many thanks to Michael Bronstein for the great introduction at MISS 2018, and special thanks to Florian Thamm for preparing this set of slides.


Original article: https://towardsdatascience.com/graph-deep-learning-part-2-c6110d49e63c
