Deep Learning-Based Radio Modulation Recognition System
This project implements a deep-learning radio modulation recognition system that uses an LSTM (Long Short-Term Memory) network to automatically classify different types of radio signals. The system accurately identifies multiple modulation types, such as BPSK, QPSK, and QAM16, across a range of signal-to-noise ratio (SNR) conditions. Radio modulation recognition is a key technology in cognitive radio, spectrum monitoring, and signal intelligence. Traditional methods rely on expert-designed feature extraction, whereas deep learning can learn signal features automatically, improving both recognition accuracy and robustness.
Dataset: this project uses RadioML2016.10a, a standard dataset widely used in radio modulation recognition research.
Dataset characteristics: 11 modulation types and 20 SNR conditions (from -20 dB to 18 dB)
Signal format: each sample contains 128 time steps, with two channels (I/Q) per step
Data format: input shape is [num_samples, 128, 2]
Data split: the dataset is divided into training, validation, and test sets
In this project, the following modulation types are selected for recognition:
GFSK (2FSK), PAM4 (2ASK), BPSK, QPSK, QAM16, QAM64, CPFSK, 8PSK
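The raw dataset is distributed as a Python pickle keyed by (modulation, SNR) tuples. Below is a minimal loading sketch; the file name RML2016.10a_dict.pkl and the latin1 decoding are the dataset's usual conventions, noted here as assumptions rather than taken from this project's code:

import pickle

# Each value is an array of shape [1000, 2, 128]: 1000 examples per
# (modulation, SNR) pair, I/Q channels first, 128 time steps. The
# [samples, 128, 2] input shape quoted above is produced later, by the
# amplitude-phase conversion and transpose in preprocessing.
with open('RML2016.10a_dict.pkl', 'rb') as f:
    Xd = pickle.load(f, encoding='latin1')  # latin1: the pickle was written under Python 2

snrs, mods = map(lambda j: sorted(set(k[j] for k in Xd.keys())), [1, 0])
print(mods)                      # list of modulation names
print(snrs)                      # [-20, -18, ..., 16, 18]
print(Xd[(mods[0], 0)].shape)    # (1000, 2, 128)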
Model Architecture
This project uses a two-layer LSTM network, structured as follows:
Input layer: [128, 2] (128 time steps, 2 features per step: the I/Q channels)
LSTM layer 1: 128 units, return_sequences=True
LSTM layer 2: 128 units
Fully connected layer: number of neurons equals the number of modulation types
Softmax activation: outputs the probability of each modulation type
The strength of the LSTM model is its ability to capture temporal features in the signal, which matters for modulation recognition because different modulation schemes differ markedly in their time-domain characteristics.
Model definition code:
import os
from tensorflow.keras.layers import Input, LSTM, Dense  # import paths assume the tf.keras API
from tensorflow.keras.models import Model

def LSTMModel(weights=None, input_shape=[128, 2], classes=11):
    if weights is not None and not os.path.exists(weights):
        raise ValueError('Invalid weights path.')
    input_layer = Input(shape=input_shape, name='input')
    # Standard LSTM layers replace CuDNNLSTM; the tanh/sigmoid activations keep
    # the computation compatible with the cuDNN-accelerated kernel
    x = LSTM(128, return_sequences=True, activation='tanh', recurrent_activation='sigmoid')(input_layer)
    x = LSTM(128, activation='tanh', recurrent_activation='sigmoid')(x)
    output_layer = Dense(classes, activation='softmax', name='softmax')(x)
    model = Model(inputs=input_layer, outputs=output_layer)
    if weights:
        model.load_weights(weights)
    return model
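As a quick usage sketch, the model can be instantiated for the eight selected modulations and inspected; nothing here goes beyond standard Keras calls:

model = LSTMModel(weights=None, input_shape=[128, 2], classes=8)
model.summary()  # two 128-unit LSTM layers followed by an 8-way softmax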
5. Data Preprocessing
Data preprocessing is a key step for improving model performance:
Data loading: load the RadioML2016.10a dataset from its pickle file
Normalization: apply L2 normalization to each sample to improve the model's generalization
import numpy.linalg as la

def norm_pad_zeros(X_train, nsamples):
    # Scale the amplitude channel (channel 0) of each sample to unit L2 norm
    for i in range(X_train.shape[0]):
        X_train[i, :, 0] = X_train[i, :, 0] / la.norm(X_train[i, :, 0], 2)
    return X_train
Amplitude-phase conversion: convert the I/Q data to an amplitude and phase representation
import numpy as np

def to_amp_phase(X_train, X_val, X_test, nsamples):
    # Treat channel 0 as I and channel 1 as Q, forming complex samples
    X_train_cmplx = X_train[:, 0, :] + 1j * X_train[:, 1, :]
    X_train_amp = np.abs(X_train_cmplx)
    X_train_ang = np.arctan2(X_train[:, 1, :], X_train[:, 0, :]) / np.pi  # phase normalized to [-1, 1]
    # ... (the same transform is applied to X_val and X_test, and amplitude/phase
    # are stacked as the two channels of the output)
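To make the conversion concrete, a single sample s = I + jQ maps to (|s|, atan2(Q, I)/π); for example:

import numpy as np

i_val, q_val = 1.0, 1.0                 # I = 1, Q = 1
amp = np.abs(i_val + 1j * q_val)        # sqrt(2) ≈ 1.414
ang = np.arctan2(q_val, i_val) / np.pi  # 0.25, i.e. 45 degrees scaled into [-1, 1]
print(amp, ang)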
Data filtering: keep only the target modulation types
selected_mods = ['GFSK', 'PAM4', 'BPSK', 'QPSK', 'QAM16', 'QAM64', 'CPFSK', '8PSK']
train_selected = [i for i in range(len(Y_train)) if mods[np.argmax(Y_train[i])] in selected_mods]
X_train_selected = X_train[train_selected]
Label re-encoding: convert the labels to one-hot form
Y_train_selected_new = np.zeros((len(train_selected), len(selected_mods)))
for i, idx in enumerate(train_selected):
    mod = mods[np.argmax(Y_train[idx])]
    Y_train_selected_new[i, selected_mods_dict[mod]] = 1
6. Training
Loss function: categorical cross-entropy (categorical_crossentropy)
Optimizer: Adam
Batch size: 400
Epochs: up to 100, combined with early stopping
Callbacks:
ModelCheckpoint: saves the best model
ReduceLROnPlateau: lowers the learning rate when the validation loss plateaus
EarlyStopping: stops training early when the validation loss fails to improve for an extended period
Training code:
model = LSTMModel(weights=None, input_shape=[128, 2], classes=len(selected_mods))
model.compile(loss='categorical_crossentropy', metrics=['acc'], optimizer='adam')
filepath = 'weights/weights.h5'
history = model.fit(
    X_train_selected, Y_train_selected_new,
    batch_size=batch_size,
    epochs=nb_epoch,
    verbose=2,
    validation_data=(X_val_selected, Y_val_selected_new),
    callbacks=[
        ModelCheckpoint(filepath, monitor='val_loss', save_best_only=True),
        ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=5, min_lr=1e-6),
        EarlyStopping(monitor='val_loss', patience=50),
    ])
7. Performance Evaluation
The model is evaluated in several ways:
Confusion matrix: visualizes per-modulation accuracy and the kinds of errors made
confnorm, _, _ = mltools.calculate_confusion_matrix(Y_test_selected_new, test_Y_hat, display_classes)
mltools.plot_confusion_matrix(confnorm, labels=display_classes, save_filename='picture/lstm_total_confusion.png')
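mltools is a project-local helper module. For readers without it, a stand-in with the same (confnorm, cor, ncor) return convention can be sketched in plain NumPy; this is an assumption about its behavior, not the project's actual implementation:

import numpy as np

def calculate_confusion_matrix_sketch(y_true_onehot, y_pred_prob, classes):
    # Tally (true, predicted) pairs from one-hot labels and softmax outputs
    n = len(classes)
    conf = np.zeros((n, n))
    for t, p in zip(np.argmax(y_true_onehot, axis=1), np.argmax(y_pred_prob, axis=1)):
        conf[t, p] += 1
    cor = np.trace(conf)                               # correct predictions
    ncor = conf.sum() - cor                            # incorrect predictions
    confnorm = conf / conf.sum(axis=1, keepdims=True)  # row-normalized matrix
    return confnorm, cor, ncor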
Per-SNR performance: analyzes how the model behaves at each signal-to-noise ratio
for i, snr in enumerate(snrs):
    indices = [j for j, s in enumerate(test_SNRs_selected) if s == snr]
    test_X_i = X_test_selected[indices]
    test_Y_i = Y_test_selected_new[indices]
    test_Y_i_hat = model.predict(test_X_i)
    confnorm_i, cor, ncor = mltools.calculate_confusion_matrix(test_Y_i, test_Y_i_hat, display_classes)
    acc[snr] = cor / (cor + ncor)
# Build the dataset
# (This is the body of the project's dataset-loading routine: Xd, mods, snrs,
# and maxlen are defined earlier in that routine, hence the trailing return.)
X = []
lbl = []
train_idx = []
val_idx = []
np.random.seed(2016)
a = 0
# Iterate over all modulation types and SNRs
for mod in mods:
    for snr in snrs:
        X.append(Xd[(mod, snr)])
        for i in range(Xd[(mod, snr)].shape[0]):
            lbl.append((mod, snr))
        # Split train/validation indices: 600/200 out of each group of 1000
        train_idx += list(np.random.choice(range(a * 1000, (a + 1) * 1000), size=600, replace=False))
        val_idx += list(np.random.choice(list(set(range(a * 1000, (a + 1) * 1000)) - set(train_idx)), size=200, replace=False))
        a += 1
# Stack the data
X = np.vstack(X)
n_examples = X.shape[0]
# The remaining indices form the test set
test_idx = list(set(range(0, n_examples)) - set(train_idx) - set(val_idx))
np.random.shuffle(train_idx)
np.random.shuffle(val_idx)
np.random.shuffle(test_idx)
# Extract the data subsets
X_train = X[train_idx]
X_val = X[val_idx]
X_test = X[test_idx]

# Convert to one-hot encoding
def to_onehot(yy):
    yy1 = np.zeros([len(yy), len(mods)])
    yy1[np.arange(len(yy)), yy] = 1
    return yy1

# Generate the labels
Y_train = to_onehot(list(map(lambda x: mods.index(lbl[x][0]), train_idx)))
Y_val = to_onehot(list(map(lambda x: mods.index(lbl[x][0]), val_idx)))
Y_test = to_onehot(list(map(lambda x: mods.index(lbl[x][0]), test_idx)))
# Convert to the amplitude-phase representation
X_train, X_val, X_test = to_amp_phase(X_train, X_val, X_test, 128)
# Truncate to the maximum length
X_train = X_train[:, :maxlen, :]
X_val = X_val[:, :maxlen, :]
X_test = X_test[:, :maxlen, :]
# Normalize
X_train = norm_pad_zeros(X_train, maxlen)
X_val = norm_pad_zeros(X_val, maxlen)
X_test = norm_pad_zeros(X_test, maxlen)
return (mods, snrs, lbl), (X_train, Y_train), (X_val, Y_val), (X_test, Y_test), (train_idx, val_idx, test_idx)
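In the project's scripts this body is wrapped in a dataset-loading function; a sketch of how such a wrapper would be called (the name load_data and its signature are assumptions for illustration, not the project's confirmed API):

# Hypothetical wrapper call, assuming the body above is packaged as load_data():
(mods, snrs, lbl), (X_train, Y_train), (X_val, Y_val), (X_test, Y_test), \
    (train_idx, val_idx, test_idx) = load_data('RML2016.10a_dict.pkl')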
selected_mods = ['GFSK', 'PAM4', 'BPSK', 'QPSK', 'QAM16', 'QAM64', 'CPFSK', '8PSK']
selected_mods_dict = {mod: i for i, mod in enumerate(selected_mods)}
display_classes = selected_mods
Filter and re-encode the training set
train_selected = [i for i in range(len(Y_train)) if mods[np.argmax(Y_train[i])] in selected_mods]
X_train_selected = X_train[train_selected]
Y_train_selected_new = np.zeros((len(train_selected), len(selected_mods)))
for i, idx in enumerate(train_selected):
    mod = mods[np.argmax(Y_train[idx])]
    Y_train_selected_new[i, selected_mods_dict[mod]] = 1
Create the model
model = culstm.LSTMModel(weights=None, input_shape=[128, 2], classes=len(selected_mods))
model.compile(loss='categorical_crossentropy', metrics=['acc'], optimizer='adam')
Train the model
filepath = 'weights/weights.h5'
history = model.fit(
    X_train_selected, Y_train_selected_new,
    batch_size=batch_size,
    epochs=nb_epoch,
    verbose=2,
    validation_data=(X_val_selected, Y_val_selected_new),
    callbacks=[
        ModelCheckpoint(filepath, monitor='val_loss', save_best_only=True),
        ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=5, min_lr=1e-6),
        EarlyStopping(monitor='val_loss', patience=50),
    ])
Evaluate the model
score = model.evaluate(X_test_selected, Y_test_selected_new, verbose=1, batch_size=batch_size)
print("測試集性能:", score)
Prediction and plotting function
import csv
import pickle
import numpy as np
import matplotlib.pyplot as plt

def predict(model):
    # Load the best model weights
    model.load_weights(filepath)
    # Predict on the test set
    test_Y_hat = model.predict(X_test_selected, batch_size=batch_size)
    # Overall confusion matrix
    confnorm, _, _ = mltools.calculate_confusion_matrix(Y_test_selected_new, test_Y_hat, display_classes)
    mltools.plot_confusion_matrix(confnorm, labels=display_classes,
                                  save_filename='picture/lstm_total_confusion.png')
    # Per-SNR performance
    acc = {}
    acc_mod_snr = np.zeros((len(selected_mods), len(snrs)))
    # test_selected is assumed to be the list of test-set indices whose
    # modulation is in selected_mods, built analogously to train_selected
    test_SNRs_selected = [lbl[test_idx[i]][1] for i in test_selected]
    for i, snr in enumerate(snrs):
        # Select the samples at this SNR
        indices = [j for j, s in enumerate(test_SNRs_selected) if s == snr]
        test_X_i = X_test_selected[indices]
        test_Y_i = Y_test_selected_new[indices]
        # Predict
        test_Y_i_hat = model.predict(test_X_i)
        # Confusion matrix and accuracy at this SNR
        confnorm_i, cor, ncor = mltools.calculate_confusion_matrix(test_Y_i, test_Y_i_hat, display_classes)
        acc[snr] = cor / (cor + ncor)
        # Append the accuracy to a CSV file
        with open('acc111.csv', 'a', newline='') as f:
            csv.writer(f).writerow([acc[snr]])
        # Plot the per-SNR confusion matrix
        mltools.plot_confusion_matrix(
            confnorm_i, labels=display_classes,
            title="Confusion Matrix SNR={}".format(snr),
            save_filename="picture/Confusion(SNR={})(ACC={:.2f}).png".format(snr, 100.0 * acc[snr]))
        # Per-modulation accuracy at this SNR
        acc_mod_snr[:, i] = np.round(np.diag(confnorm_i) / np.sum(confnorm_i, axis=1), 3)
    # Accuracy curves for every modulation type
    plt.figure(figsize=(12, 8))
    for i in range(len(selected_mods)):
        plt.plot(snrs, acc_mod_snr[i], marker='o', label=display_classes[i])
        for x, y in zip(snrs, acc_mod_snr[i]):
            plt.text(x, y, '{:.2f}'.format(y), fontsize=8, ha='center', va='bottom')
    plt.xlabel("SNR (dB)")
    plt.ylabel("Accuracy")
    plt.title("Per-Modulation Classification Accuracy vs SNR (All Mods)")
    plt.legend(loc='best')
    plt.grid(True)
    plt.tight_layout()
    plt.savefig("picture/all_mods_acc.png", dpi=300)
    plt.close()
    # Save the result data
    with open('predictresult/acc_for_mod_on_lstm.dat', 'wb') as f:
        pickle.dump(acc_mod_snr, f)
    with open('predictresult/lstm.dat', 'wb') as f:
        pickle.dump(acc, f)
    # Overall accuracy curve
    plt.plot(snrs, [acc[snr] for snr in snrs])
    plt.xlabel("SNR")
    plt.ylabel("Overall Accuracy")
    plt.title("Overall Classification Accuracy on RadioML2016.10a")
    plt.grid()
    plt.tight_layout()
    plt.savefig('picture/each_acc.png')
Main contributions:
Automatic feature extraction via deep learning, avoiding the complex hand-crafted feature engineering of traditional methods
Recognition of multiple modulation types across SNR conditions, with an analysis of how difficult each type is to identify
A complete data-processing, training, and evaluation pipeline to support follow-up research and applications
This project illustrates the potential of deep learning in signal processing: it not only simplifies traditional feature engineering but also achieves better performance in challenging conditions. As deep learning continues to develop, we can expect more innovative applications to emerge in wireless communications.
https://pan.baidu.com/s/16FN0BR0LUkfpcxZizn43xw