I. Paper Source
Paper PDF
Variational graph auto-encoders
Paper code
GitHub code
II. Paper Interpretation
Theory references:
Variational Graph Auto-Encoders (VGAE): theory reference and source-code analysis
VGAE (Variational Graph Auto-Encoders) paper explained in detail
Brief introduction: this paper transfers Variational Auto-Encoders (VAE) to the graph domain. The basic idea: encode the known graph with graph convolutions to learn a distribution over node embeddings, sample node embeddings from that distribution, and then decode (link prediction) to reconstruct the graph.
The brief introduction above is quoted from: 7. Variational Graph Auto-Encoders paper reading notes
For the finer details, I think this video explains things very well:
[Five major GNN families: VGAE] (Variational Graph Auto-Encoders): Variational Graph Auto-Encoders
The theory has already been covered well by the posts above, so here I only want to elaborate on two points:
1. Evaluation protocol:
The models are trained on an incomplete version of the dataset in which some of the citation links (edges) have been removed, while all node features are kept. The validation and test sets are formed from the previously removed edges plus an equal number of randomly sampled unconnected node pairs (non-edges). Models are compared on their ability to correctly classify edges and non-edges. The validation and test sets contain 5% and 10% of the citation links, respectively. The validation set is used for hyperparameter tuning. Two popular baselines are compared against: spectral clustering (SC) [5] and DeepWalk (DW) [6]. Both SC and DW provide node embeddings z, and scores for elements of the reconstructed adjacency matrix are computed with Eq. 4 (left side).
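For reference (my own restatement, not a quote from the paper), the dot-product decoder that the scoring is based on is

\hat{A} = \sigma(ZZ^{\top}), \qquad p(A_{ij} = 1 \mid z_i, z_j) = \sigma(z_i^{\top} z_j),

so the score of a candidate edge (i, j) is simply the sigmoid of the inner product of the two node embeddings.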
2. Entropy
References:
Cross-entropy, relative entropy (KL divergence), JS divergence, and the Wasserstein distance (earth mover's distance)
Derivation of the KL divergence between the normal distributions N(mu, sigma) and N(0, 1)
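For reference, the closed-form result of that derivation, which the KL term in train.py uses later (shown here for a single latent dimension):

\mathrm{KL}\big(\mathcal{N}(\mu,\sigma^{2})\,\big\|\,\mathcal{N}(0,1)\big) = -\tfrac{1}{2}\left(1 + \log\sigma^{2} - \mu^{2} - \sigma^{2}\right)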
III. Code Walkthrough
preprocessing.py
import numpy as np
import scipy.sparse as sp


def sparse_to_tuple(sparse_mx):
    if not sp.isspmatrix_coo(sparse_mx):  # check whether the matrix is already in COO format
        sparse_mx = sparse_mx.tocoo()     # convert CSR/CSC to COO
    coords = np.vstack((sparse_mx.row, sparse_mx.col)).transpose()
    # np.vstack stacks arrays vertically (row-wise); the arrays must have the same shape.
    # transpose() turns the result into an (nnz, 2) array of coordinates.
    values = sparse_mx.data
    shape = sparse_mx.shape
    return coords, values, shape


def preprocess_graph(adj):
    adj = sp.coo_matrix(adj)           # convert csr_matrix to coo_matrix
    adj_ = adj + sp.eye(adj.shape[0])  # S = A + I (note: adj_ is a csr_matrix)
    rowsum = np.array(adj_.sum(1))     # shape = (num_nodes, 1); for Cora that is (2708, 1); sum(1) sums each row
    degree_mat_inv_sqrt = sp.diags(np.power(rowsum, -0.5).flatten())  # D^{-0.5}
    # sp.diags builds a sparse diagonal matrix from the given values
    # np.power raises the array elements to the given power
    # flatten() returns the array collapsed into one dimension
    adj_normalized = adj_.dot(degree_mat_inv_sqrt).transpose().dot(degree_mat_inv_sqrt).tocoo()
    # adj_.dot(degree_mat_inv_sqrt) gives S D^{-0.5}
    # its transpose gives (D^{-0.5})^T S^T = D^{-0.5} S, since D and S are both symmetric
    # adj_normalized is therefore D^{-0.5} S D^{-0.5}
    return sparse_to_tuple(adj_normalized)


def mask_test_edges(adj):
    # Function to build test set with 10% positive links
    # NOTE: Splits are randomized and results might slightly deviate from reported numbers in the paper.
    # TODO: Clean up.

    # Remove diagonal elements
    adj = adj - sp.dia_matrix((adj.diagonal()[np.newaxis, :], [0]), shape=adj.shape)
    adj.eliminate_zeros()
    # Check that diag is zero:
    assert np.diag(adj.todense()).sum() == 0
    # assert raises an exception when the expression is false
    # todense() converts the sparse matrix into a dense matrix

    adj_triu = sp.triu(adj)  # upper-triangular non-zeros of the sparse matrix, returned as coo_matrix
    adj_tuple = sparse_to_tuple(adj_triu)
    edges = adj_tuple[0]
    # all edges excluding self-loops; since adj_tuple only contains the upper triangle, each edge <src, dst>
    # is stored once without the redundant <dst, src>; shape = (num_edges, 2), one (source, target) pair per row
    edges_all = sparse_to_tuple(adj)[0]
    # all edges of the original graph; shape = (num_edges, 2), one (source, target) pair per row
    num_test = int(np.floor(edges.shape[0] / 10.))  # size of the test split; np.floor rounds down
    num_val = int(np.floor(edges.shape[0] / 20.))   # size of the validation split

    all_edge_idx = list(range(edges.shape[0]))
    np.random.shuffle(all_edge_idx)                             # shuffle the edge indices
    val_edge_idx = all_edge_idx[:num_val]                       # validation indices
    test_edge_idx = all_edge_idx[num_val:(num_val + num_test)]  # test indices
    test_edges = edges[test_edge_idx]
    # edges holds all edges without self-loops; since the graph is undirected, only <src, dst> is stored,
    # not <dst, src>, to avoid wasting memory
    val_edges = edges[val_edge_idx]
    train_edges = np.delete(edges, np.hstack([test_edge_idx, val_edge_idx]), axis=0)
    # np.vstack stacks vertically; np.hstack concatenates horizontally
    # np.hstack([test_edge_idx, val_edge_idx]) joins the two lists into one 1-D array
    # np.delete with axis=0 removes the rows whose indices are given by the second argument

    def ismember(a, b, tol=5):
        rows_close = np.all(np.round(a - b[:, None], tol) == 0, axis=-1)
        # np.round rounds to the given number of decimals
        # b[:, None] changes b from shape (num_edges, 2) to (num_edges, 1, 2); a is a list of length 2,
        # so a - b[:, None] triggers numpy broadcasting
        # np.all with axis=-1 checks whether every element of the innermost rows equals True
        return np.any(rows_close)
        # np.any without an axis returns True if any element is True
        # rows_close has shape (num_edges, 1)
        # so ismember() checks whether the randomly generated edge <a, b> already exists; True if it does

    test_edges_false = []
    while len(test_edges_false) < len(test_edges):
        idx_i = np.random.randint(0, adj.shape[0])  # sample negative edges
        idx_j = np.random.randint(0, adj.shape[0])
        if idx_i == idx_j:
            continue
        if ismember([idx_i, idx_j], edges_all):
            continue
        if test_edges_false:
            if ismember([idx_j, idx_i], np.array(test_edges_false)):
                continue
            if ismember([idx_i, idx_j], np.array(test_edges_false)):
                continue
        test_edges_false.append([idx_i, idx_j])

    val_edges_false = []
    while len(val_edges_false) < len(val_edges):
        idx_i = np.random.randint(0, adj.shape[0])
        idx_j = np.random.randint(0, adj.shape[0])
        if idx_i == idx_j:
            continue
        if ismember([idx_i, idx_j], train_edges):
            continue
        if ismember([idx_j, idx_i], train_edges):
            continue
        if ismember([idx_i, idx_j], val_edges):
            continue
        if ismember([idx_j, idx_i], val_edges):
            continue
        if val_edges_false:
            if ismember([idx_j, idx_i], np.array(val_edges_false)):
                continue
            if ismember([idx_i, idx_j], np.array(val_edges_false)):
                continue
        val_edges_false.append([idx_i, idx_j])

    assert ~ismember(test_edges_false, edges_all)
    # assert raises an exception when the expression is false, so this requires that edges_all
    # does not contain any of test_edges_false
    assert ~ismember(val_edges_false, edges_all)
    assert ~ismember(val_edges, train_edges)
    assert ~ismember(test_edges, train_edges)
    assert ~ismember(val_edges, test_edges)

    data = np.ones(train_edges.shape[0])
    # Re-build the adjacency matrix used for training
    adj_train = sp.csr_matrix((data, (train_edges[:, 0], train_edges[:, 1])), shape=adj.shape)
    adj_train = adj_train + adj_train.T
    # NOTE: these edge lists only contain a single direction of each edge (adj_train is a matrix, not an edge list)

    return adj_train, train_edges, val_edges, val_edges_false, test_edges, test_edges_false
1. hstack() / vstack()
Reference: A detailed explanation of Numpy's stack(), hstack() and vstack()
np.vstack() stacks arrays vertically (as rows); np.hstack() concatenates them horizontally.
np.hstack([test_edge_idx, val_edge_idx]) therefore joins the two index lists into a single 1-D array. A small demo is shown below.
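A minimal demo (mine, not from the repo):

import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

print(np.hstack([a, b]))   # [1 2 3 4 5 6]  -> one longer 1-D array
print(np.vstack([a, b]))   # [[1 2 3]
                           #  [4 5 6]]      -> stacked as rows of a 2-D array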
2. preprocess_graph
Sparse-format conversions inside the function:
Reference: Using scipy's sparse matrices coo_matrix and csr_matrix; a small demo of the two formats follows.
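A minimal demo (my own) of the two formats and the kind of conversion preprocess_graph performs:

import numpy as np
import scipy.sparse as sp

dense = np.array([[0, 1, 0],
                  [2, 0, 3]])
csr = sp.csr_matrix(dense)   # compressed sparse row: efficient row slicing and arithmetic
coo = csr.tocoo()            # coordinate format: exposes .row, .col, .data directly

print(coo.row, coo.col, coo.data)   # [0 1 1] [1 0 2] [1 2 3]
print(coo.tocsr().toarray())        # back to CSR, then to a dense array again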
3. b[:, None]
Dimension expansion
I tested this myself:
import numpy as np

a = np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]])
b = a[:,None]
c = a[None,:]
d = a[:,:,None]
print(a.shape)
print(b.shape)
print(c.shape)
print(d.shape)
Output:
(4, 4)
(4, 1, 4)
(1, 4, 4)
(4, 4, 1)
You can see that wherever None is placed, a new axis of size 1 is inserted there.
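This same mechanism is what makes ismember work: subtracting a length-2 edge from b[:, None] broadcasts the comparison over every stored edge. A small demo (mine, not from the repo):

import numpy as np

a = np.array([1, 2])              # a candidate edge <src, dst>
b = np.array([[1, 2], [3, 4]])    # existing edges, shape (num_edges, 2)

diff = a - b[:, None]             # b[:, None] has shape (2, 1, 2); broadcasting gives shape (2, 1, 2)
print(diff.shape)                 # (2, 1, 2)
print(np.all(diff == 0, axis=-1)) # [[ True] [False]] -> the edge [1, 2] is already present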
model.py
import torch
import torch.nn as nn
import torch.nn.functional as F
import os
import numpy as np
import args


class VGAE(nn.Module):
    def __init__(self, adj):
        super(VGAE, self).__init__()
        self.base_gcn = GraphConvSparse(args.input_dim, args.hidden1_dim, adj)
        self.gcn_mean = GraphConvSparse(args.hidden1_dim, args.hidden2_dim, adj, activation=lambda x: x)
        # lambda defines an anonymous function: parameters to the left of the colon (comma-separated),
        # the expression (the return value) to the right
        self.gcn_logstddev = GraphConvSparse(args.hidden1_dim, args.hidden2_dim, adj, activation=lambda x: x)

    def encode(self, X):
        hidden = self.base_gcn(X)
        self.mean = self.gcn_mean(hidden)
        self.logstd = self.gcn_logstddev(hidden)
        gaussian_noise = torch.randn(X.size(0), args.hidden2_dim)
        sampled_z = gaussian_noise * torch.exp(self.logstd) + self.mean
        # torch.exp is used because the paper defines log(sigma) = GCN_sigma(X, A);
        # torch.exp(self.logstd) = exp(log(sigma)) = sigma. Likewise mu = GCN_mu(X, A).
        # Each node gets exactly one vector after the GCN, so log(sigma) and mu are simply the node's
        # outputs of GCN_sigma(X, A) and GCN_mu(X, A).
        # Sampling Z from N(mu, sigma^2) is equivalent to sampling xi from N(0, 1) and
        # computing Z = mu + xi * sigma (the reparameterization trick).
        return sampled_z

    def forward(self, X):
        Z = self.encode(X)
        A_pred = dot_product_decode(Z)
        return A_pred


class GraphConvSparse(nn.Module):
    def __init__(self, input_dim, output_dim, adj, activation=F.relu, **kwargs):
        super(GraphConvSparse, self).__init__(**kwargs)
        self.weight = glorot_init(input_dim, output_dim)
        self.adj = adj
        self.activation = activation

    def forward(self, inputs):
        x = inputs
        x = torch.mm(x, self.weight)  # torch.mm(a, b) is matrix multiplication of a and b
        x = torch.mm(self.adj, x)
        outputs = self.activation(x)
        return outputs


def dot_product_decode(Z):
    A_pred = torch.sigmoid(torch.matmul(Z, Z.t()))
    return A_pred


def glorot_init(input_dim, output_dim):
    init_range = np.sqrt(6.0 / (input_dim + output_dim))
    initial = torch.rand(input_dim, output_dim) * 2 * init_range - init_range
    return nn.Parameter(initial)


class GAE(nn.Module):
    def __init__(self, adj):
        super(GAE, self).__init__()
        self.base_gcn = GraphConvSparse(args.input_dim, args.hidden1_dim, adj)
        self.gcn_mean = GraphConvSparse(args.hidden1_dim, args.hidden2_dim, adj, activation=lambda x: x)

    def encode(self, X):
        hidden = self.base_gcn(X)
        z = self.mean = self.gcn_mean(hidden)
        return z

    def forward(self, X):
        Z = self.encode(X)
        A_pred = dot_product_decode(Z)
        return A_pred


# class GraphConv(nn.Module):
#     def __init__(self, input_dim, hidden_dim, output_dim):
#         super(VGAE, self).__init__()
#         self.base_gcn = GraphConvSparse(args.input_dim, args.hidden1_dim, adj)
#         self.gcn_mean = GraphConvSparse(args.hidden1_dim, args.hidden2_dim, adj, activation=lambda x: x)
#         self.gcn_logstddev = GraphConvSparse(args.hidden1_dim, args.hidden2_dim, adj, activation=lambda x: x)
#
#     def forward(self, X, A):
#         out = A*X*self.w0
#         out = F.relu(out)
#         out = A*X*self.w0
#         return out
1. lambda (anonymous functions)
Reference: Explanation and usage of Python's anonymous lambda functions
Note: an anonymous function needs no return statement; the value of the expression itself is the return value.
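A minimal demo: activation=lambda x: x in model.py is simply the identity function, which makes that GCN layer purely linear (no non-linearity).

identity = lambda x: x      # returns its input unchanged
print(identity(5))          # 5

square = lambda x: x * x    # parameters left of the colon, expression (return value) on the right
print(square(3))            # 9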
train.py
import torch
import torch.nn.functional as F
from torch.optim import Adam
from sklearn.metrics import roc_auc_score, average_precision_score
import scipy.sparse as sp
import numpy as np
import os
import time

from input_data import load_data
from preprocessing import *
import args
import model

# Train on CPU (hide GPU) due to memory constraints
os.environ['CUDA_VISIBLE_DEVICES'] = ""

adj, features = load_data(args.dataset)

# Store the original adjacency matrix (without diagonal entries) for later use
adj_orig = adj
adj_orig = adj_orig - sp.dia_matrix((adj_orig.diagonal()[np.newaxis, :], [0]), shape=adj_orig.shape)
# For scipy's dia_matrix see note 4 below: in the offsets array, 0 is the main diagonal,
# negative values are diagonals below it, positive values are diagonals above it.
# np.newaxis adds a dimension; [np.newaxis, :] inserts the new axis at that position,
# turning the 1-D data into a matrix.
# diagonal() extracts the matrix diagonal.
# So adj_orig.diagonal()[np.newaxis, :], [0] extracts the diagonal, adds a dimension to make it a matrix,
# and pairs it with offset 0 for the subtraction.
adj_orig.eliminate_zeros()
# eliminate_zeros() removes explicitly stored zero entries and returns the sparse representation

adj_train, train_edges, val_edges, val_edges_false, test_edges, test_edges_false = mask_test_edges(adj)
adj = adj_train  # adjacency matrix used for training, type csr_matrix

# Some preprocessing
adj_norm = preprocess_graph(adj)
# returns the coords, data and shape of D^{-0.5} S D^{-0.5}, where S = A + I

num_nodes = adj.shape[0]

features = sparse_to_tuple(features.tocoo())
# features was originally a lil_matrix; sparse_to_tuple returns its coords, data and shape
num_features = features[2][1]
features_nonzero = features[1].shape[0]

# Create Model
pos_weight = float(adj.shape[0] * adj.shape[0] - adj.sum()) / adj.sum()
# Every entry of adj is either 0 or 1. pos_weight is the ratio of negative edges (non-existing edges)
# to positive edges in the training adjacency matrix; it is used as the weight in the binary
# cross-entropy loss. When the two classes are imbalanced (e.g. far more negatives than positives),
# passing a weight that up-weights the rare class balances their contribution to the loss and
# usually improves results. Reference: https://www.zhihu.com/question/383567632
norm = adj.shape[0] * adj.shape[0] / float((adj.shape[0] * adj.shape[0] - adj.sum()) * 2)

adj_label = adj_train + sp.eye(adj_train.shape[0])  # adj_train is the training adjacency matrix (csr_matrix)
adj_label = sparse_to_tuple(adj_label)

adj_norm = torch.sparse.FloatTensor(torch.LongTensor(adj_norm[0].T),  # adj_norm holds the coords, data, shape of D^{-0.5} S D^{-0.5}
                                    torch.FloatTensor(adj_norm[1]),
                                    torch.Size(adj_norm[2]))
adj_label = torch.sparse.FloatTensor(torch.LongTensor(adj_label[0].T),
                                     torch.FloatTensor(adj_label[1]),
                                     torch.Size(adj_label[2]))
features = torch.sparse.FloatTensor(torch.LongTensor(features[0].T),
                                    torch.FloatTensor(features[1]),
                                    torch.Size(features[2]))
# for torch.sparse.FloatTensor see note 2 below

weight_mask = adj_label.to_dense().view(-1) == 1
# view(-1) lets PyTorch infer the size; with -1 as the only argument the tensor is flattened to 1-D
weight_tensor = torch.ones(weight_mask.size(0))
weight_tensor[weight_mask] = pos_weight
# sets the weight of positive edges for binary_cross_entropy; negative edges keep weight 1,
# positive edges get weight pos_weight

# init model and optimizer
model = getattr(model, args.model)(adj_norm)
# getattr() returns the named attribute of an object, here the model class selected by args.model
optimizer = Adam(model.parameters(), lr=args.learning_rate)


def get_scores(edges_pos, edges_neg, adj_rec):

    def sigmoid(x):
        return 1 / (1 + np.exp(-x))

    # Predict on test set of edges
    preds = []
    pos = []
    for e in edges_pos:
        # print(e)
        # print(adj_rec[e[0], e[1]])
        preds.append(sigmoid(adj_rec[e[0], e[1]].item()))
        # item() extracts the value of a single-element tensor while preserving its type and precision,
        # which is why it is commonly used when accumulating losses or accuracy values
        pos.append(adj_orig[e[0], e[1]])

    preds_neg = []
    neg = []
    for e in edges_neg:
        preds_neg.append(sigmoid(adj_rec[e[0], e[1]].data))
        neg.append(adj_orig[e[0], e[1]])

    preds_all = np.hstack([preds, preds_neg])
    labels_all = np.hstack([np.ones(len(preds)), np.zeros(len(preds_neg))])
    roc_score = roc_auc_score(labels_all, preds_all)
    ap_score = average_precision_score(labels_all, preds_all)
    return roc_score, ap_score


def get_acc(adj_rec, adj_label):
    labels_all = adj_label.to_dense().view(-1).long()  # long() casts to 64-bit integers
    preds_all = (adj_rec > 0.5).view(-1).long()
    accuracy = (preds_all == labels_all).sum().float() / labels_all.size(0)
    return accuracy


# train model
for epoch in range(args.num_epoch):
    t = time.time()

    A_pred = model(features)  # entries of A_pred are now probabilities, no longer strictly 0 or 1
    optimizer.zero_grad()
    loss = log_lik = norm * F.binary_cross_entropy(A_pred.view(-1), adj_label.to_dense().view(-1), weight=weight_tensor)
    # binary_cross_entropy reference: https://www.zhihu.com/question/383567632
    if args.model == 'VGAE':
        kl_divergence = 0.5 / A_pred.size(0) * (1 + 2 * model.logstd - model.mean ** 2 - torch.exp(model.logstd) ** 2).sum(1).mean()
        # the Gaussian KL term: the sum over nodes of 0.5 * (1 + log(sigma^2) - mu^2 - sigma^2),
        # where the number of nodes is A_pred.size(0)
        # 2*model.logstd is 2*log(sigma) = log(sigma^2); model.mean**2 is mu^2; torch.exp(model.logstd)**2 is sigma^2
        # sum(1) sums the elements of each row
        loss -= kl_divergence

    loss.backward()
    optimizer.step()

    train_acc = get_acc(A_pred, adj_label)

    val_roc, val_ap = get_scores(val_edges, val_edges_false, A_pred)
    print("Epoch:", '%04d' % (epoch + 1), "train_loss=", "{:.5f}".format(loss.item()),
          "train_acc=", "{:.5f}".format(train_acc), "val_roc=", "{:.5f}".format(val_roc),
          "val_ap=", "{:.5f}".format(val_ap),
          "time=", "{:.5f}".format(time.time() - t))


test_roc, test_ap = get_scores(test_edges, test_edges_false, A_pred)
print("End of training!", "test_roc=", "{:.5f}".format(test_roc),"test_ap=", "{:.5f}".format(test_ap))
1. pos_weight = float(adj.shape[0] * adj.shape[0] - adj.sum()) / adj.sum()
Analysis
Reference: How is the weight argument of pytorch's binary_cross_entropy loss set?
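A toy example (my own, not from the repo) showing how the weight argument rebalances positive and negative entries, mirroring how pos_weight and weight_tensor are used in train.py:

import torch
import torch.nn.functional as F

# Toy label vector: 1 positive edge out of 4 entries, so pos_weight = 3 / 1 = 3
labels = torch.tensor([1., 0., 0., 0.])
preds = torch.tensor([0.7, 0.2, 0.4, 0.1])

pos_weight = (labels.numel() - labels.sum()) / labels.sum()  # ratio of negatives to positives
weight = torch.ones_like(labels)
weight[labels == 1] = pos_weight                             # up-weight the rare positive class

loss = F.binary_cross_entropy(preds, labels, weight=weight)
print(loss)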
2. torch.sparse.FloatTensor
Details
Reference: TORCH.SPARSE (constructing torch sparse tensors)
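A quick toy example (mine, not from the repo) of how this constructor builds a sparse tensor; the indices are a 2 x nnz LongTensor (row indices on top, column indices below), which is why train.py passes adj_norm[0].T. Note that torch.sparse.FloatTensor is the legacy constructor; newer PyTorch code typically uses torch.sparse_coo_tensor.

import torch

indices = torch.LongTensor([[0, 1, 1],
                            [2, 0, 2]])
values = torch.FloatTensor([3., 4., 5.])
sparse = torch.sparse.FloatTensor(indices, values, torch.Size([2, 3]))
print(sparse.to_dense())
# tensor([[0., 0., 3.],
#         [4., 0., 5.]])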
3. np.newaxis
Function
Reference: What np.newaxis does
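A minimal demo of what np.newaxis does:

import numpy as np

d = np.array([1, 2, 3])          # shape (3,)
print(d[np.newaxis, :].shape)    # (1, 3) -> row vector
print(d[:, np.newaxis].shape)    # (3, 1) -> column vector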
4. Scipy's dia_matrix
Function
Reference: A detailed explanation of dia_matrix in Python's scipy
Here is some further explanation.
Example code:
from scipy.sparse import dia_matrix
import numpy as np
if __name__ == '__main__':
    data = np.array([[1,2,3,4],[4,2,3,8],[7,2,4,5]])
    offsets = np.array([0,-1,2])
    a = dia_matrix((data, offsets), shape=(4,4)).toarray()
    print(a)
The placement rule is: row k of data corresponds to offsets[k], and the entry data[k][j] is placed at matrix position (j - offsets[k], j), i.e. along the diagonal with that offset; entries that would fall outside the given shape are simply dropped.
With offset 0 as the main diagonal, -1 the diagonal just below it, and 2 the second diagonal above it, clipping to shape (4, 4) gives the final result:
[[1 0 4 0]
[4 2 0 5]
[0 2 3 0]
[0 0 3 4]]