🧠 Hybrid Deep Learning Models: Transformer and LSTM for Time-Series Regression, from Practice to Optimization
For time-series regression tasks with multi-feature inputs and multi-target outputs, hybrid models that combine a Transformer with an LSTM have become an effective solution. The Transformer excels at capturing long-range dependencies, while the LSTM is well suited to modelling sequential data; combining the two lets each play to its strengths and improves predictive performance.
📊 Data Generation and Preprocessing
First, we generate a time-series dataset with multiple features and apply the necessary preprocessing.
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

# Set the random seed for reproducibility
np.random.seed(42)

# Generate synthetic time-series data
n_samples = 1000
time_steps = 10
n_features = 5

X = np.random.rand(n_samples, time_steps, n_features)
y = np.random.rand(n_samples, 1)  # assume a single target variable

# Normalize the data to [0, 1]
scaler_X = MinMaxScaler()
scaler_y = MinMaxScaler()

X_scaled = X.reshape(-1, n_features)
X_scaled = scaler_X.fit_transform(X_scaled)
X_scaled = X_scaled.reshape(n_samples, time_steps, n_features)
y_scaled = scaler_y.fit_transform(y)

# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X_scaled, y_scaled, test_size=0.2, random_state=42
)
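The arrays above are random placeholders used to keep the example self-contained. With a real dataset you would typically slice overlapping windows out of a feature matrix instead; the following is a minimal sketch, assuming a hypothetical DataFrame `df` that holds the five feature columns plus a `target` column:

def make_windows(values, targets, time_steps=10):
    """Slice overlapping windows so each sample holds `time_steps` consecutive rows."""
    X_list, y_list = [], []
    for start in range(len(values) - time_steps):
        X_list.append(values[start:start + time_steps])
        y_list.append(targets[start + time_steps])  # predict the step right after the window
    return np.array(X_list), np.array(y_list)

# Hypothetical usage: feature_cols lists the five feature column names of df
# X_real, y_real = make_windows(df[feature_cols].to_numpy(), df['target'].to_numpy(), time_steps)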
🧩 Model Architecture Design
We design a hybrid architecture that combines Transformer and LSTM components.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_transformer_lstm_model(input_shape, lstm_units=64, transformer_units=64,
                                 num_heads=4, num_layers=2, dropout_rate=0.1):
    inputs = layers.Input(shape=input_shape)

    # LSTM layer
    x = layers.LSTM(lstm_units, return_sequences=True)(inputs)
    x = layers.Dropout(dropout_rate)(x)

    # Transformer encoder blocks (self-attention with residual connections)
    for _ in range(num_layers):
        attention = layers.MultiHeadAttention(num_heads=num_heads, key_dim=transformer_units)(x, x)
        x = layers.Add()([x, attention])
        x = layers.LayerNormalization()(x)
        x = layers.Dropout(dropout_rate)(x)

    # Output head
    x = layers.GlobalAveragePooling1D()(x)
    x = layers.Dense(64, activation='relu')(x)
    x = layers.Dropout(dropout_rate)(x)
    outputs = layers.Dense(1)(x)

    model = models.Model(inputs, outputs)
    return model

# Build the model
input_shape = (X_train.shape[1], X_train.shape[2])
model = build_transformer_lstm_model(input_shape)
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['mae'])
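Before committing to a full training run, it is worth printing the layer stack and pushing a tiny batch through the network to confirm the shapes line up (the LSTM emits `(batch, 10, 64)`, which is what the attention layers consume); a quick check:

# Inspect the architecture and parameter counts
model.summary()

# Optional shape check on a small batch
dummy_pred = model.predict(X_train[:4], verbose=0)
print(dummy_pred.shape)  # expected: (4, 1)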
🏋️‍♂️ Model Training and Evaluation
from tensorflow.keras.callbacks import EarlyStopping

# Define early stopping
early_stopping = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)

# Train the model
history = model.fit(
    X_train, y_train,
    epochs=50,
    batch_size=32,
    validation_data=(X_test, y_test),
    callbacks=[early_stopping]
)

# Evaluate the model
loss, mae = model.evaluate(X_test, y_test)
print(f"Test Loss: {loss}, Test MAE: {mae}")
🔧 Hyperparameter Tuning
We use Keras Tuner to tune the hyperparameters.
import keras_tuner as kt

def model_builder(hp):
    model = build_transformer_lstm_model(input_shape)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(
            learning_rate=hp.Float('learning_rate', min_value=1e-5, max_value=1e-2, sampling='log')
        ),
        loss='mean_squared_error',
        metrics=['mae']
    )
    return model

# Define the tuner
tuner = kt.Hyperband(
    model_builder,
    objective='val_loss',
    max_epochs=10,
    factor=3,
    directory='hyperband',
    project_name='transformer_lstm'
)

# Run the hyperparameter search
tuner.search(X_train, y_train, epochs=50, validation_data=(X_test, y_test), callbacks=[early_stopping])

# Retrieve the best hyperparameters
best_hps = tuner.get_best_hyperparameters()[0]
print(f"Best learning rate: {best_hps.get('learning_rate')}")
📈 Visualizing the Results
import matplotlib.pyplot as plt

# Plot loss and MAE over the training run
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.plot(history.history['loss'], label='Train Loss')
plt.plot(history.history['val_loss'], label='Val Loss')
plt.title('Loss Over Epochs')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(history.history['mae'], label='Train MAE')
plt.plot(history.history['val_mae'], label='Val MAE')
plt.title('MAE Over Epochs')
plt.legend()

plt.tight_layout()
plt.show()
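Beyond the learning curves, a scatter plot of predictions against ground truth gives a quick visual check of fit quality; the sketch below reuses the `y_true` and `y_pred` arrays computed in the evaluation step above:

# Scatter predicted vs. actual target values on the test set
plt.figure(figsize=(6, 6))
plt.scatter(y_true, y_pred, alpha=0.4)
plt.plot([y_true.min(), y_true.max()], [y_true.min(), y_true.max()], 'r--', label='Ideal fit')
plt.xlabel('Actual')
plt.ylabel('Predicted')
plt.title('Predicted vs. Actual (Test Set)')
plt.legend()
plt.show()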
📝 Summary
By combining Transformer and LSTM components in a hybrid model, we can better capture long-term dependencies and complex patterns in time-series data. The workflow in this chapter walks through the complete process from data generation and model design to training and evaluation, and adds early stopping and hyperparameter tuning to improve the model's performance and stability.