Link Prediction Algorithms in MATLAB
Link prediction is an important task in complex-network analysis: given a network, estimate how likely two currently unconnected nodes are to form a link in the future.
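One standard way to state this formally (the notation below is introduced here for exposition, not taken from the program):

```latex
% G = (V, E) is the observed undirected graph, U the set of all node pairs.
% A link predictor assigns a score to every unconnected pair:
s : U \setminus E \to \mathbb{R}, \qquad
U = \{(x, y) : x, y \in V,\; x \neq y\},
```

and the highest-scoring non-edges are predicted to be the most likely future links.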
Program Overview
The MATLAB program implements the following link prediction algorithms:
- Similarity indices based on local information (Common Neighbors, Jaccard, Adamic-Adar, etc.)
- Path-based similarity indices (Katz index)
- Random-walk-based similarity indices (Rooted PageRank, SimRank)
- Matrix factorization methods
Code
```matlab
classdef LinkPrediction
    %LINKPREDICTION Implementation of link prediction algorithms
    %   Wraps several classic similarity-based predictors plus
    %   evaluation and comparison utilities.
    %   Save this class in its own file, LinkPrediction.m.

    properties
        A            % full adjacency matrix
        A_train      % adjacency matrix with the test edges removed
        train_mask   % 1 for pairs whose status is known during training
        test_mask    % 1 for held-out (test) edges
        method_names % display names of the available predictors
        method_funcs % corresponding method names on this class
    end

    methods
        function obj = LinkPrediction(adj_matrix, train_ratio)
            %LINKPREDICTION Constructor
            %   adj_matrix  - network adjacency matrix (symmetric, 0/1)
            %   train_ratio - fraction of edges kept for training (0-1)
            obj.A = adj_matrix;
            obj.method_names = {'CommonNeighbors', 'Jaccard', 'AdamicAdar', ...
                'PreferentialAttachment', 'Katz', 'RootedPageRank', ...
                'SimRank', 'MatrixFactorization'};
            obj.method_funcs = {'common_neighbors', 'jaccard', 'adamic_adar', ...
                'preferential_attachment', 'katz', 'rooted_pagerank', ...
                'simrank', 'matrix_factorization'};

            % Split the edges into a training set and a test set
            if nargin > 1
                obj = obj.split_dataset(train_ratio);
            else
                n = size(obj.A, 1);
                obj.train_mask = ones(n) - eye(n);
                obj.test_mask = zeros(n);
                obj.A_train = obj.A;
            end
        end

        function obj = split_dataset(obj, train_ratio)
            %SPLIT_DATASET Randomly hide a fraction of edges as the test set
            n = size(obj.A, 1);
            obj.train_mask = ones(n);
            obj.test_mask = zeros(n);

            % Indices of all edges (upper triangle only, to avoid duplicates)
            [rows, cols] = find(triu(obj.A, 1));
            num_edges = length(rows);
            num_train = round(num_edges * train_ratio);

            % Randomly choose which edges are held out
            idx = randperm(num_edges);
            test_idx = idx(num_train+1:end);

            % Build the mask matrices
            for i = 1:length(test_idx)
                r = rows(test_idx(i));
                c = cols(test_idx(i));
                obj.train_mask(r, c) = 0;
                obj.train_mask(c, r) = 0;
                obj.test_mask(r, c) = 1;
                obj.test_mask(c, r) = 1;
            end

            % Zero the diagonals
            obj.train_mask(1:n+1:end) = 0;
            obj.test_mask(1:n+1:end) = 0;

            % Adjacency seen during training: the test edges are removed so
            % the predictors cannot leak information from the held-out set
            obj.A_train = obj.A .* obj.train_mask;
        end

        function mask = candidate_mask(obj)
            %CANDIDATE_MASK Pairs to be scored: not connected in the
            %   training graph and not on the diagonal
            n = size(obj.A, 1);
            mask = (obj.A_train == 0) & ~logical(eye(n));
        end

        function scores = common_neighbors(obj)
            %COMMON_NEIGHBORS Number of shared neighbours
            scores = (obj.A_train * obj.A_train) .* obj.candidate_mask();
        end

        function scores = jaccard(obj)
            %JACCARD Jaccard similarity coefficient
            n = size(obj.A_train, 1);
            scores = zeros(n);
            mask = obj.candidate_mask();
            for i = 1:n
                for j = i+1:n
                    if ~mask(i, j)
                        continue;
                    end
                    neighbors_i = find(obj.A_train(i, :));
                    neighbors_j = find(obj.A_train(j, :));
                    % Do not shadow the built-ins union/intersect with
                    % variables of the same name
                    inter_sz = length(intersect(neighbors_i, neighbors_j));
                    union_sz = length(union(neighbors_i, neighbors_j));
                    if union_sz > 0
                        scores(i, j) = inter_sz / union_sz;
                    end
                    scores(j, i) = scores(i, j);
                end
            end
        end

        function scores = adamic_adar(obj)
            %ADAMIC_ADAR Adamic-Adar index
            n = size(obj.A_train, 1);
            scores = zeros(n);
            mask = obj.candidate_mask();
            degrees = sum(obj.A_train, 2);
            for i = 1:n
                for j = i+1:n
                    if ~mask(i, j)
                        continue;
                    end
                    common = find(obj.A_train(i, :) & obj.A_train(j, :));
                    score = 0;
                    for k = common
                        if degrees(k) > 1   % log(1) = 0 would cause division by zero
                            score = score + 1 / log(degrees(k));
                        end
                    end
                    scores(i, j) = score;
                    scores(j, i) = score;
                end
            end
        end

        function scores = preferential_attachment(obj)
            %PREFERENTIAL_ATTACHMENT Product of the node degrees
            degrees = sum(obj.A_train, 2);
            scores = (degrees * degrees') .* obj.candidate_mask();
        end

        function scores = katz(obj, beta)
            %KATZ Katz index
            %   beta - decay factor (default 0.01); must be smaller than
            %   1/lambda_max(A) for the series to converge
            if nargin < 2
                beta = 0.01;
            end
            n = size(obj.A_train, 1);
            I = eye(n);
            % Katz index: S = beta*A + beta^2*A^2 + beta^3*A^3 + ...
            % Closed form: S = (I - beta*A)^-1 - I
            scores = (I - beta * obj.A_train) \ I - I;
            scores = scores .* obj.candidate_mask();
        end

        function scores = rooted_pagerank(obj, alpha, max_iter, tol)
            %ROOTED_PAGERANK Rooted (personalised) PageRank
            %   alpha    - continuation probability (default 0.85)
            %   max_iter - maximum number of iterations (default 100)
            %   tol      - convergence tolerance (default 1e-6)
            if nargin < 2, alpha = 0.85; end
            if nargin < 3, max_iter = 100; end
            if nargin < 4, tol = 1e-6; end

            n = size(obj.A_train, 1);
            scores = zeros(n);

            % Column-stochastic transition matrix
            P = obj.A_train ./ sum(obj.A_train, 1);
            P(isnan(P)) = 0;   % isolated nodes have zero degree

            % Personalised PageRank rooted at each node in turn
            for i = 1:n
                restart = zeros(n, 1);
                restart(i) = 1;
                r = restart;
                for iter = 1:max_iter
                    % Teleport back to the ROOT vector, not to the
                    % current iterate
                    r_new = alpha * P * r + (1 - alpha) * restart;
                    if norm(r_new - r, 1) < tol
                        r = r_new;
                        break;
                    end
                    r = r_new;
                end
                scores(:, i) = r;
            end
            scores = scores .* obj.candidate_mask();
        end

        function scores = simrank(obj, C, max_iter, tol)
            %SIMRANK SimRank structural-context similarity
            %   C        - decay factor (default 0.8)
            %   max_iter - maximum number of iterations (default 10)
            %   tol      - convergence tolerance (default 1e-4)
            if nargin < 2, C = 0.8; end
            if nargin < 3, max_iter = 10; end
            if nargin < 4, tol = 1e-4; end

            n = size(obj.A_train, 1);
            S = eye(n);   % SimRank matrix, S(i,i) = 1

            % Precompute in-neighbours (for an undirected graph these
            % coincide with the neighbours)
            in_neighbors = cell(n, 1);
            for i = 1:n
                in_neighbors{i} = find(obj.A_train(:, i));
            end

            % Fixed-point iteration
            for iter = 1:max_iter
                S_old = S;
                for i = 1:n
                    for j = 1:n
                        if i == j
                            continue;   % diagonal stays 1
                        end
                        in_i = in_neighbors{i};
                        in_j = in_neighbors{j};
                        if isempty(in_i) || isempty(in_j)
                            S(i, j) = 0;
                            continue;
                        end
                        sum_sim = sum(sum(S_old(in_i, in_j)));
                        S(i, j) = C * sum_sim / (length(in_i) * length(in_j));
                    end
                end
                if norm(S - S_old, 'fro') < tol
                    break;
                end
            end
            scores = S .* obj.candidate_mask();
        end

        function scores = matrix_factorization(obj, dim, lambda, max_iter, learning_rate)
            %MATRIX_FACTORIZATION Latent-feature model fitted by SGD
            %   dim           - latent dimension (default 10)
            %   lambda        - regularisation weight (default 0.01)
            %   max_iter      - number of epochs (default 100)
            %   learning_rate - SGD step size (default 0.01)
            if nargin < 2, dim = 10; end
            if nargin < 3, lambda = 0.01; end
            if nargin < 4, max_iter = 100; end
            if nargin < 5, learning_rate = 0.01; end

            n = size(obj.A_train, 1);

            % Two factor matrices, initialised small
            U = randn(n, dim) * 0.01;
            V = randn(n, dim) * 0.01;

            % Observed TRAINING EDGES only (factorising the mask would
            % treat every non-test pair as a positive example)
            [rows, cols] = find(obj.A_train);
            vals = ones(length(rows), 1);

            % Stochastic gradient descent
            for iter = 1:max_iter
                total_error = 0;
                for idx = 1:length(rows)
                    i = rows(idx);
                    j = cols(idx);
                    prediction = U(i, :) * V(j, :)';
                    err = vals(idx) - prediction;   % 'error' is a built-in
                    total_error = total_error + err^2;

                    % Gradient step on both factors
                    U_i_old = U(i, :);
                    U(i, :) = U(i, :) + learning_rate * (err * V(j, :) - lambda * U(i, :));
                    V(j, :) = V(j, :) + learning_rate * (err * U_i_old - lambda * V(j, :));
                end
                % Add the regularisation term to the reported loss
                total_error = total_error + lambda * (norm(U, 'fro')^2 + norm(V, 'fro')^2);
                if mod(iter, 10) == 0
                    fprintf('Iteration %d, loss: %.4f\n', iter, total_error);
                end
            end

            scores = (U * V') .* obj.candidate_mask();
        end

        function [precision, recall, auc] = evaluate(obj, scores, top_k)
            %EVALUATE Compute Precision@K, Recall@K and AUC
            %   scores - predicted score matrix
            %   top_k  - cut-off for precision/recall (default 100)
            if nargin < 3
                top_k = 100;
            end

            % Positive samples: held-out test edges (upper triangle only)
            [test_rows, test_cols] = find(triu(obj.test_mask, 1));
            positive_pairs = [test_rows, test_cols];
            num_positives = size(positive_pairs, 1);

            % Negative samples: node pairs with no edge in the FULL graph
            negative_mask = triu(obj.A == 0, 1);
            [neg_rows, neg_cols] = find(negative_mask);
            negative_pairs = [neg_rows, neg_cols];

            % Sample as many negatives as positives
            idx = randperm(size(negative_pairs, 1), num_positives);
            negative_pairs = negative_pairs(idx, :);

            all_pairs = [positive_pairs; negative_pairs];
            labels = [ones(num_positives, 1); zeros(num_positives, 1)];

            pred_scores = zeros(size(all_pairs, 1), 1);
            for i = 1:size(all_pairs, 1)
                pred_scores(i) = scores(all_pairs(i, 1), all_pairs(i, 2));
            end

            % AUC (perfcurve requires the Statistics and Machine Learning Toolbox)
            [~, ~, ~, auc] = perfcurve(labels, pred_scores, 1);

            % Precision@K / Recall@K: rank ALL sampled pairs by score and
            % count how many of the top K are true test edges
            [~, sorted_idx] = sort(pred_scores, 'descend');
            k = min(top_k, length(sorted_idx));
            true_positives = sum(sorted_idx(1:k) <= num_positives);
            precision = true_positives / k;
            recall = true_positives / num_positives;
        end

        function results = compare_methods(obj, names, top_k)
            %COMPARE_METHODS Benchmark several predictors
            %   names - cell array of display names (default: all methods)
            %   top_k - cut-off for precision/recall (default 100)
            if nargin < 2
                names = obj.method_names;
            end
            if nargin < 3
                top_k = 100;
            end

            results = struct();
            for i = 1:length(names)
                name = names{i};
                % Map the display name to the method name on this class;
                % lower(name) alone would not match (e.g. 'commonneighbors'
                % vs 'common_neighbors')
                func = obj.method_funcs{strcmp(obj.method_names, name)};
                fprintf('Computing %s...\n', name);
                try
                    tic;
                    scores = obj.(func)();
                    time = toc;

                    [precision, recall, auc] = obj.evaluate(scores, top_k);

                    results.(name).scores = scores;
                    results.(name).precision = precision;
                    results.(name).recall = recall;
                    results.(name).auc = auc;
                    results.(name).time = time;

                    fprintf('%s: Precision@%d=%.4f, Recall@%d=%.4f, AUC=%.4f, time=%.2fs\n', ...
                        name, top_k, precision, top_k, recall, auc, time);
                catch ME
                    fprintf('Error while computing %s: %s\n', name, ME.message);
                    results.(name).error = ME.message;
                end
            end
        end

        function plot_results(~, results)
            %PLOT_RESULTS Bar charts comparing the benchmark results
            names = fieldnames(results);
            num_methods = length(names);
            precisions = zeros(num_methods, 1);
            recalls = zeros(num_methods, 1);
            aucs = zeros(num_methods, 1);
            times = zeros(num_methods, 1);
            for i = 1:num_methods
                if isfield(results.(names{i}), 'error')
                    continue;
                end
                precisions(i) = results.(names{i}).precision;
                recalls(i) = results.(names{i}).recall;
                aucs(i) = results.(names{i}).auc;
                times(i) = results.(names{i}).time;
            end

            figure('Position', [100, 100, 1200, 800]);

            subplot(2, 2, 1);
            bar(precisions);
            set(gca, 'XTickLabel', names, 'XTickLabelRotation', 45);
            title('Precision@K'); ylabel('Precision'); grid on;

            subplot(2, 2, 2);
            bar(recalls);
            set(gca, 'XTickLabel', names, 'XTickLabelRotation', 45);
            title('Recall@K'); ylabel('Recall'); grid on;

            subplot(2, 2, 3);
            bar(aucs);
            set(gca, 'XTickLabel', names, 'XTickLabelRotation', 45);
            title('AUC'); ylabel('AUC'); grid on;

            subplot(2, 2, 4);
            bar(times);
            set(gca, 'XTickLabel', names, 'XTickLabelRotation', 45);
            title('Runtime'); ylabel('Time (s)'); grid on;

            set(gcf, 'Color', 'w');
        end
    end
end
```

Example usage code:
```matlab
function example_usage()
%EXAMPLE_USAGE Demo script; save as example_usage.m with LinkPrediction.m on the path

% Generate a sample network (scale-free)
n = 100;   % number of nodes
A = create_scale_free_network(n);

% Build the predictor, keeping 80% of the edges for training
lp = LinkPrediction(A, 0.8);

% Benchmark every method
results = lp.compare_methods();

% Visualise the comparison
lp.plot_results(results);

% Use a single method directly
scores = lp.common_neighbors();
[precision, recall, auc] = lp.evaluate(scores);
fprintf('\nCommon Neighbors: Precision=%.4f, Recall=%.4f, AUC=%.4f\n', ...
    precision, recall, auc);
end

function A = create_scale_free_network(n)
%CREATE_SCALE_FREE_NETWORK Scale-free network (Barabási-Albert model)
%   n - number of nodes

% Start from a small complete graph
m0 = 5;   % size of the seed graph
A = zeros(n);
A(1:m0, 1:m0) = ones(m0) - eye(m0);

% Attach each new node preferentially by degree
for i = m0+1:n
    degrees = sum(A(1:i-1, 1:i-1), 2);
    total_degree = sum(degrees);
    if total_degree > 0
        prob = degrees / total_degree;
        % Weighted sampling with replacement (Statistics Toolbox);
        % duplicate targets are harmless since A(i,j) is simply set to 1
        targets = randsample(1:i-1, m0, true, prob);
    else
        targets = randperm(i-1, min(m0, i-1));
    end
    for j = targets
        A(i, j) = 1;
        A(j, i) = 1;
    end
end
end
```

To run the demo, save the two functions above as example_usage.m and call example_usage from the MATLAB command window.
Notes
This MATLAB link prediction program provides the following functionality:
1. Core class LinkPrediction
Implements multiple link prediction algorithms along with evaluation and comparison utilities.
2. Implemented algorithms
- Common Neighbors: counts the neighbours shared by the two nodes
- Jaccard Coefficient: number of common neighbours divided by the size of the combined neighbour set
- Adamic-Adar: weights each common neighbour by its degree, with lower-degree neighbours weighted more heavily
- Preferential Attachment: product of the two nodes' degrees
- Katz Index: sums over all paths between the nodes, with shorter paths weighted more heavily
- Rooted PageRank: random-walk-based similarity measure
- SimRank: similarity measure based on structural context
- Matrix Factorization: latent-feature method based on factorizing the adjacency matrix
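For reference, the local and path-based indices above have standard closed forms, matching the code: here Γ(x) denotes the neighbour set of node x, k_x its degree, and A the adjacency matrix.

```latex
\begin{aligned}
s^{\mathrm{CN}}(x,y)      &= |\Gamma(x) \cap \Gamma(y)| \\
s^{\mathrm{Jaccard}}(x,y) &= \frac{|\Gamma(x) \cap \Gamma(y)|}{|\Gamma(x) \cup \Gamma(y)|} \\
s^{\mathrm{AA}}(x,y)      &= \sum_{z \in \Gamma(x) \cap \Gamma(y)} \frac{1}{\log k_z} \\
s^{\mathrm{PA}}(x,y)      &= k_x \, k_y \\
s^{\mathrm{Katz}}(x,y)    &= \sum_{\ell=1}^{\infty} \beta^{\ell} \bigl(A^{\ell}\bigr)_{xy}
                           = \Bigl[(I - \beta A)^{-1} - I\Bigr]_{xy},
                           \qquad \beta < 1/\lambda_{\max}(A)
\end{aligned}
```

The convergence condition on β in the Katz index is why the code's default decay factor of 0.01 is deliberately small.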
3. Evaluation metrics
- Precision@K: fraction of the top K predictions that are correct
- Recall@K: fraction of all positive samples that are correctly predicted
- AUC: area under the ROC curve, measuring the overall ranking quality of the predictor
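In symbols, with E_test the held-out edge set (the AUC estimate is the usual sampled-comparison form used in link prediction):

```latex
\mathrm{Precision@}K = \frac{|\,\text{top-}K \text{ predictions} \cap E_{\mathrm{test}}\,|}{K},
\qquad
\mathrm{Recall@}K = \frac{|\,\text{top-}K \text{ predictions} \cap E_{\mathrm{test}}\,|}{|E_{\mathrm{test}}|},
\qquad
\mathrm{AUC} = \frac{n' + 0.5\,n''}{n},
```

where, over n comparisons of a random test edge's score against a random non-edge's score, n' is the number of times the test edge scores higher and n'' the number of ties.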
4. Visualization
Plots the four evaluation metrics side by side so the algorithms' performance can be compared at a glance.
Usage
The program ends with an example that:
- generates a scale-free network (Barabási-Albert model)
- creates a link prediction object and splits the edges into training and test sets
- benchmarks all algorithms
- visualizes the comparison
- runs the Common Neighbors algorithm on its own and evaluates it