Continuing from the previous post:
Signal Processing Study — Paper Close Reading and Code Reproduction of TFN — An Interpretable Neural Network with Time-Frequency Transform Embedded (Part 1) - CSDN blog
Next comes the important part: code reproduction! GitHub - ChenQian0618/TFN: this is the open code of paper entitled "TFN: An Interpretable Neural Network With Time Frequency Transform Embedded for Intelligent Fault Diagnosis".


一. Preparation
Because the samples used in my paper have length 2048 rather than the 1024 used in the TFN paper, a few places may need adjustment.
First check the code in TFN-main\Models\BackboneCNN.py — no changes needed.
Then check TFN-main\Models\TFconvlayer.py — likewise, no changes needed.
See the authors' complete code on GitHub for details.
| Module | Depends on input length? | Reason |
|---|---|---|
| `forward` in the `TFconv_*` classes | No | It takes input of shape [B, C, L] with no constraint on L (your 2048 is fine) |
| `weightforward()` | No | Depends only on `kernel_size` and `superparams`, not on the input signal length |
| `AdaptiveMaxPool1d` in the CNN | No | Automatically pools to a fixed output size, compatible with any input length |
| `T = torch.arange(...)` | Only via `kernel_size` | This builds the internal convolution kernel and is unrelated to the 2048-point input |
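The length-independence claimed in the table can be checked with a minimal sketch (not the TFN code itself): a `Conv1d` followed by `AdaptiveMaxPool1d` produces the same output shape for 1024- and 2048-point inputs, which is why switching sample lengths needs no model changes.

```python
import torch
import torch.nn as nn

# A stand-in stack mimicking the relevant structure: convolution layers
# accept any L in [B, C, L], and adaptive pooling fixes the output length.
conv = nn.Conv1d(in_channels=1, out_channels=8, kernel_size=15, padding=7)
pool = nn.AdaptiveMaxPool1d(output_size=4)  # always pools down to length 4

for L in (1024, 2048):
    x = torch.randn(2, 1, L)       # [B, C, L]
    out = pool(conv(x))
    print(L, tuple(out.shape))     # same (2, 8, 4) for both lengths
```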
二. Settings
Following the settings in the paper: cross-entropy classification loss, the Adam optimizer with momentum parameter 0.9, an initial learning rate of 0.001, and 50 training epochs. The experiment is repeated 10 times and the results are averaged.
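The "repeat 10 times and average" protocol can be sketched as below; `run_experiment` is a hypothetical stand-in for one full train-and-evaluate cycle returning its final test accuracy (%).

```python
import statistics

def run_experiment(seed: int) -> float:
    # Placeholder for a real training run with the given random seed;
    # here it just returns a deterministic dummy accuracy for illustration.
    return 95.0 + (seed % 3) * 0.1

# Run 10 independent repetitions and report mean and standard deviation.
accuracies = [run_experiment(seed) for seed in range(10)]
mean_acc = statistics.mean(accuracies)
std_acc = statistics.stdev(accuracies)
print(f"mean={mean_acc:.2f}%  std={std_acc:.2f}%")
```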
三. Code
1. Data splitting
The dataset used here is the University of Ottawa variable-speed bearing dataset, which is somewhat harder to classify than CWRU.
```python
import scipy.io as sio
import numpy as np
import random
import os

# Base path
base_path = 'D:/0A_Gotoyourdream/00_BOSS_WHQ/A_Code/A_Data/'

# Mat file for each class
file_mapping = {'H': 'H-B-1.mat', 'I': 'I-B-1.mat', 'B': 'B-B-1.mat',
                'O': 'O-B-2.mat', 'C': 'C-B-2.mat'}

# Number of samples to draw per class
sample_limit = {'H': 200, 'I': 200, 'B': 200, 'O': 200, 'C': 200}

# Final data
X_list = []
y_list = []

# Fixed parameters
fs = 200000
window_size = 2048
step_size = int(fs * 0.015)  # step of 0.015 s

# Class label encoding (kept consistent with earlier experiments)
label_mapping = {'H': 0, 'I': 1, 'O': 2, 'B': 3, 'C': 4}
inverse_mapping = {v: k for k, v in label_mapping.items()}
labels = [inverse_mapping[i] for i in range(len(inverse_mapping))]

# Replace the abbreviations with full names
label_fullnames = {'H': 'Health', 'I': 'F_Inner', 'O': 'F_Outer',
                   'B': 'F_Ball', 'C': 'F_Combined'}
labels = [label_fullnames[c] for c in labels]

# Output directory (optional)
output_dir = os.path.join(base_path, "ClassBD-Processed_Samples")
os.makedirs(output_dir, exist_ok=True)

# Process each class
for label_name, file_name in file_mapping.items():
    print(f"Processing class {label_name}...")
    mat_path = os.path.join(base_path, file_name)
    dataset = sio.loadmat(mat_path)

    # Extract the vibration signal and remove the DC component
    vib_data = np.array(dataset["Channel_1"].flatten().tolist()[:fs * 10])
    vib_data = vib_data - np.mean(vib_data)

    # Sliding-window segmentation
    vib_samples = []
    start = 0
    while start + window_size <= len(vib_data):
        sample = vib_data[start:start + window_size].astype(np.float32)  # reduce memory use
        vib_samples.append(sample)
        start += step_size
    vib_samples = np.array(vib_samples)
    print(f"Segmented into {vib_samples.shape[0]} samples")

    # Random subsampling
    if vib_samples.shape[0] < sample_limit[label_name]:
        raise ValueError(f"Class {label_name} has only {vib_samples.shape[0]} samples, "
                         f"cannot draw {sample_limit[label_name]}")
    selected_indices = random.sample(range(vib_samples.shape[0]), sample_limit[label_name])
    selected_X = vib_samples[selected_indices]
    selected_y = np.full(sample_limit[label_name], label_mapping[label_name], dtype=np.int64)

    # Save
    save_path_X = os.path.join(output_dir, f"X_{label_name}.mat")
    save_path_y = os.path.join(output_dir, f"y_{label_name}.mat")
    sio.savemat(save_path_X, {'X': selected_X})
    sio.savemat(save_path_y, {'y': selected_y})
    print(f"Saved class {label_name}: {save_path_X}, {save_path_y}")
```
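As a quick check of the segmentation arithmetic above: with a 10 s record at fs = 200000 Hz, a 2048-point window, and a 0.015 s step, the while loop yields `(len - window) // step + 1` windows per class, comfortably more than the 200 samples drawn.

```python
# Parameters as in the segmentation script above.
fs = 200000
n_points = fs * 10                 # 10 s of signal
window_size = 2048
step_size = int(fs * 0.015)        # 3000 points

# Number of windows produced by the sliding-window loop.
n_windows = (n_points - window_size) // step_size + 1
print(n_windows)                   # 666, well above the 200 needed per class
```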
2. Building the DataLoaders
```python
import os
import scipy.io as sio
import numpy as np
from sklearn.model_selection import train_test_split
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import TensorDataset, DataLoader

# ========== 1. Load the five classes ==========
base_path = "D:/0A_Gotoyourdream/00_BOSS_WHQ/A_Code/A_Data/ClassBD-Processed_Samples"

def load_data(label):
    X = sio.loadmat(os.path.join(base_path, f"X_{label}.mat"))["X"]
    y = sio.loadmat(os.path.join(base_path, f"y_{label}.mat"))["y"].flatten()
    return X.astype(np.float32), y.astype(np.int64)

X_H, y_H = load_data("H")
X_I, y_I = load_data("I")
X_B, y_B = load_data("B")
X_O, y_O = load_data("O")
X_C, y_C = load_data("C")

# ========== 2. Concatenate and reshape ==========
X_all = np.concatenate([X_H, X_I, X_B, X_O, X_C], axis=0)
y_all = np.concatenate([y_H, y_I, y_B, y_O, y_C], axis=0)
X_all = X_all[:, np.newaxis, :]  # (N, 1, 2048)

# ========== 3. Train/test split ==========
X_train, X_test, y_train, y_test = train_test_split(
    X_all, y_all, test_size=0.4, stratify=y_all, random_state=42)

# ========== 4. DataLoaders ==========
train_dataset = TensorDataset(torch.tensor(X_train), torch.tensor(y_train))
test_dataset = TensorDataset(torch.tensor(X_test), torch.tensor(y_test))
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=32, shuffle=False)
```
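Before training, it is worth confirming that each batch comes out as [32, 1, 2048] with the right dtypes. A self-contained sanity check with dummy arrays shaped like the real data (1000 samples, 5 classes, 2048 points) looks like this:

```python
import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

# Dummy stand-ins for X_all / y_all: 200 samples per class, 2048 points each.
X = np.random.randn(1000, 1, 2048).astype(np.float32)
y = np.repeat(np.arange(5), 200).astype(np.int64)

loader = DataLoader(TensorDataset(torch.tensor(X), torch.tensor(y)),
                    batch_size=32, shuffle=True)

# One batch should match what the model's forward pass expects.
xb, yb = next(iter(loader))
print(tuple(xb.shape), xb.dtype, yb.dtype)
```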
3. Model definition and settings
The parts that need attention are noted in the code comments.
```python
from Models.TFN import TFN_STTF  # can also be swapped for TFN_Chirplet or TFN_Morlet

model = TFN_STTF(in_channels=1, out_channels=5, kernel_size=15)  # out_channels = number of classes
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
```
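The training loop keeps a `scheduler.step()` call commented out. If you want learning-rate decay, one option is a `StepLR` schedule; the settings here (halve the LR every 20 epochs) are a hypothetical choice, not from the paper.

```python
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(2048, 5)  # stand-in for TFN_STTF to keep this sketch self-contained
optimizer = optim.Adam(model.parameters(), lr=0.001)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)

for epoch in range(50):
    # ... one epoch of training, then:
    scheduler.step()

# After 50 epochs the LR has been halved twice: 0.001 -> 0.0005 -> 0.00025
print(optimizer.param_groups[0]["lr"])
```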
4. Training and testing
```python
# Training loop
for epoch in range(1, 51):
    model.train()
    running_loss = 0.0
    correct = 0
    total = 0
    for inputs, labels in train_loader:
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        # The TFN model may return multiple outputs (output, _, _)
        outputs = model(inputs)
        if isinstance(outputs, tuple):
            outputs = outputs[0]
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        _, predicted = outputs.max(1)
        total += labels.size(0)
        correct += predicted.eq(labels).sum().item()
    # scheduler.step()
    train_acc = correct / total * 100

    # Evaluation on the test set
    model.eval()
    correct_test = 0
    total_test = 0
    with torch.no_grad():
        for inputs, labels in test_loader:
            inputs, labels = inputs.to(device), labels.to(device)
            outputs = model(inputs)
            if isinstance(outputs, tuple):
                outputs = outputs[0]
            _, predicted = outputs.max(1)
            total_test += labels.size(0)
            correct_test += predicted.eq(labels).sum().item()
    test_acc = correct_test / total_test * 100

    print(f"Epoch {epoch:03d}: Loss={running_loss:.4f}, Train Acc={train_acc:.2f}%, Test Acc={test_acc:.2f}%")
```
四. Results
The reproduction works!
Next up: using it for comparison experiments.
(Many thanks to the authors for releasing their code on GitHub!)