Table of Contents
Introduction
The dataset
Setup
Prepare the data
Define dataset metadata
Create tf.data.Dataset for training and evaluation
Create model inputs
Encode input features
Implement the Gated Linear Unit
Implement the Gated Residual Network
Implement the Variable Selection Network
Create the Gated Residual and Variable Selection Networks model
Compile, train, and evaluate the model
Author: 政安晨 (column: TensorFlow and Keras Machine Learning in Practice)
Goal of this article: predict income level using Gated Residual and Variable Selection Networks.
Introduction

This example demonstrates the use of Gated Residual Networks (GRN) and Variable Selection Networks (VSN), proposed by Bryan Lim et al. in Temporal Fusion Transformers (TFT) for Interpretable Multi-horizon Time Series Forecasting, for structured data classification. GRNs give the model the flexibility to apply non-linear processing only where needed. VSNs allow the model to softly remove any unnecessary noisy inputs which could negatively impact performance. Together, these techniques help improve the learning capacity of deep neural network models.

Note that this example implements only the GRN and VSN components described in the paper, not the whole TFT model, as GRN and VSN can be useful on their own for structured data learning tasks.

(To run the code you need to use TensorFlow 2.3 or higher.)
The dataset

This example uses the United States Census Income Dataset provided by the UC Irvine Machine Learning Repository. The task is binary classification to determine whether a person makes over 50K a year.

The dataset includes ~300K instances with 41 input features: 7 numerical features and 34 categorical features.

(Readers can browse the data themselves at the UCI Machine Learning Repository.)
Setup
import math
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
Prepare the data

First, we load the data from the UCI Machine Learning Repository into a Pandas DataFrame.
# Column names.
CSV_HEADER = [
    "age", "class_of_worker", "detailed_industry_recode", "detailed_occupation_recode",
    "education", "wage_per_hour", "enroll_in_edu_inst_last_wk", "marital_stat",
    "major_industry_code", "major_occupation_code", "race", "hispanic_origin",
    "sex", "member_of_a_labor_union", "reason_for_unemployment",
    "full_or_part_time_employment_stat", "capital_gains", "capital_losses",
    "dividends_from_stocks", "tax_filer_stat", "region_of_previous_residence",
    "state_of_previous_residence", "detailed_household_and_family_stat",
    "detailed_household_summary_in_household", "instance_weight",
    "migration_code-change_in_msa", "migration_code-change_in_reg",
    "migration_code-move_within_reg", "live_in_this_house_1_year_ago",
    "migration_prev_res_in_sunbelt", "num_persons_worked_for_employer",
    "family_members_under_18", "country_of_birth_father", "country_of_birth_mother",
    "country_of_birth_self", "citizenship", "own_business_or_self_employed",
    "fill_inc_questionnaire_for_veterans_admin", "veterans_benefits",
    "weeks_worked_in_year", "year", "income_level",
]

data_url = "https://archive.ics.uci.edu/ml/machine-learning-databases/census-income-mld/census-income.data.gz"
data = pd.read_csv(data_url, header=None, names=CSV_HEADER)

test_data_url = "https://archive.ics.uci.edu/ml/machine-learning-databases/census-income-mld/census-income.test.gz"
test_data = pd.read_csv(test_data_url, header=None, names=CSV_HEADER)

print(f"Data shape: {data.shape}")
print(f"Test data shape: {test_data.shape}")
Output:
Data shape: (199523, 42)
Test data shape: (99762, 42)
We convert the target column from string to integer.

data["income_level"] = data["income_level"].apply(
    lambda x: 0 if x == " - 50000." else 1
)
test_data["income_level"] = test_data["income_level"].apply(
    lambda x: 0 if x == " - 50000." else 1
)
Then, we split the dataset into train and validation sets.
random_selection = np.random.rand(len(data.index)) <= 0.85
train_data = data[random_selection]
valid_data = data[~random_selection]
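The boolean-mask split above can be checked in isolation with NumPy. This is a small sketch with a toy row count and a fixed seed (both are assumptions added here for reproducibility); the split fraction is random, so the train share only approximates 85%:

```python
import numpy as np

# Mirror the ~85/15 train/validation split above on a toy index range.
rng = np.random.default_rng(seed=42)  # fixed seed: an addition for reproducibility
n_rows = 1000
random_selection = rng.random(n_rows) <= 0.85

train_idx = np.where(random_selection)[0]
valid_idx = np.where(~random_selection)[0]

# Every row lands in exactly one of the two splits.
print(len(train_idx) + len(valid_idx))  # 1000
print(round(len(train_idx) / n_rows, 2))  # close to 0.85
```

Because np.random.rand is unseeded in the listing above, the exact train/validation sizes differ between runs; seeding, as in this sketch, makes the split reproducible.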
Finally, we store the train, validation, and test splits in separate local CSV files.

train_data_file = "train_data.csv"
valid_data_file = "valid_data.csv"
test_data_file = "test_data.csv"

train_data.to_csv(train_data_file, index=False, header=False)
valid_data.to_csv(valid_data_file, index=False, header=False)
test_data.to_csv(test_data_file, index=False, header=False)
Define dataset metadata

Here, we define the metadata of the dataset that will be useful for reading and parsing the data into input features, and encoding the input features with respect to their types.
# Target feature name.
TARGET_FEATURE_NAME = "income_level"
# Weight column name.
WEIGHT_COLUMN_NAME = "instance_weight"
# Numeric feature names.
NUMERIC_FEATURE_NAMES = [
    "age",
    "wage_per_hour",
    "capital_gains",
    "capital_losses",
    "dividends_from_stocks",
    "num_persons_worked_for_employer",
    "weeks_worked_in_year",
]
# Categorical feature names and their vocabulary lists.
# Note that the vocabulary values are converted to strings so that all
# categorical features are treated consistently as string inputs.
CATEGORICAL_FEATURES_WITH_VOCABULARY = {
    feature_name: sorted([str(value) for value in list(data[feature_name].unique())])
    for feature_name in CSV_HEADER
    if feature_name
    not in list(NUMERIC_FEATURE_NAMES + [WEIGHT_COLUMN_NAME, TARGET_FEATURE_NAME])
}
# All feature names.
FEATURE_NAMES = NUMERIC_FEATURE_NAMES + list(
    CATEGORICAL_FEATURES_WITH_VOCABULARY.keys()
)
# Feature default values.
COLUMN_DEFAULTS = [
    [0.0]
    if feature_name
    in NUMERIC_FEATURE_NAMES + [TARGET_FEATURE_NAME, WEIGHT_COLUMN_NAME]
    else ["NA"]
    for feature_name in CSV_HEADER
]
Create tf.data.Dataset for training and evaluation

We create an input function to read and parse the files, and to convert features and labels into a [tf.data.Dataset](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) for training and evaluation.
from tensorflow.keras.layers import StringLookup


def process(features, target):
    for feature_name in features:
        if feature_name in CATEGORICAL_FEATURES_WITH_VOCABULARY:
            # Cast categorical feature values to string.
            features[feature_name] = tf.cast(features[feature_name], tf.dtypes.string)
    # Get the instance weight.
    weight = features.pop(WEIGHT_COLUMN_NAME)
    return features, target, weight


def get_dataset_from_csv(csv_file_path, shuffle=False, batch_size=128):
    dataset = tf.data.experimental.make_csv_dataset(
        csv_file_path,
        batch_size=batch_size,
        column_names=CSV_HEADER,
        column_defaults=COLUMN_DEFAULTS,
        label_name=TARGET_FEATURE_NAME,
        num_epochs=1,
        header=False,
        shuffle=shuffle,
    ).map(process)
    return dataset
Create model inputs

def create_model_inputs():
    inputs = {}
    for feature_name in FEATURE_NAMES:
        if feature_name in NUMERIC_FEATURE_NAMES:
            inputs[feature_name] = layers.Input(
                name=feature_name, shape=(), dtype=tf.float32
            )
        else:
            inputs[feature_name] = layers.Input(
                name=feature_name, shape=(), dtype=tf.string
            )
    return inputs
Encode input features

For categorical features, we encode them using layers.Embedding, with encoding_size as the embedding dimension. For the numerical features, we apply a linear transformation using layers.Dense to project each feature into an encoding_size-dimensional vector. Thus, all the encoded features end up with the same dimensionality.

def encode_inputs(inputs, encoding_size):
    encoded_features = []
    for feature_name in inputs:
        if feature_name in CATEGORICAL_FEATURES_WITH_VOCABULARY:
            vocabulary = CATEGORICAL_FEATURES_WITH_VOCABULARY[feature_name]
            # Create a lookup to convert string values to integer indices.
            # Since we are not using a mask token nor expecting any out-of-vocabulary
            # (oov) token, we set mask_token to None and num_oov_indices to 0.
            index = StringLookup(
                vocabulary=vocabulary, mask_token=None, num_oov_indices=0
            )
            # Convert the string input values into integer indices.
            value_index = index(inputs[feature_name])
            # Create an embedding layer with the specified dimensions.
            embedding_encoder = layers.Embedding(
                input_dim=len(vocabulary), output_dim=encoding_size
            )
            # Convert the index values to embedding representations.
            encoded_feature = embedding_encoder(value_index)
        else:
            # Project the numeric feature to encoding_size using linear transformation.
            encoded_feature = tf.expand_dims(inputs[feature_name], -1)
            encoded_feature = layers.Dense(units=encoding_size)(encoded_feature)
        encoded_features.append(encoded_feature)
    return encoded_features
Implement the Gated Linear Unit

Gated Linear Units (GLUs) provide the flexibility to suppress input that is not relevant for a given task.

class GatedLinearUnit(layers.Layer):
    def __init__(self, units):
        super().__init__()
        self.linear = layers.Dense(units)
        self.sigmoid = layers.Dense(units, activation="sigmoid")

    def call(self, inputs):
        return self.linear(inputs) * self.sigmoid(inputs)
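The effect of the gate can be seen in plain NumPy, independent of Keras: the sigmoid branch produces values in (0, 1) that scale the linear branch element-wise, so units whose gate is near 0 are effectively muted. A minimal sketch with random toy weights (not the trained layer):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
units, in_dim = 4, 3
x = rng.normal(size=(2, in_dim))  # batch of 2

# Two independent dense projections, as in GatedLinearUnit above.
W_lin = rng.normal(size=(in_dim, units))
W_gate = rng.normal(size=(in_dim, units))

linear = x @ W_lin          # linear branch
gate = sigmoid(x @ W_gate)  # gate branch, values in (0, 1)
out = linear * gate         # element-wise suppression

print(out.shape)  # (2, 4)
# Wherever the gate is small, the corresponding output shrinks toward 0.
print(np.all(np.abs(out) <= np.abs(linear)))  # True
```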
Implement the Gated Residual Network

The Gated Residual Network (GRN) works as follows:

1. Applies a non-linear ELU transformation to the inputs.
2. Applies a linear transformation followed by dropout.
3. Applies a GLU and adds the original inputs to the output of the GLU to perform a skip (residual) connection.
4. Applies layer normalization and produces the output.

class GatedResidualNetwork(layers.Layer):
    def __init__(self, units, dropout_rate):
        super().__init__()
        self.units = units
        self.elu_dense = layers.Dense(units, activation="elu")
        self.linear_dense = layers.Dense(units)
        self.dropout = layers.Dropout(dropout_rate)
        self.gated_linear_unit = GatedLinearUnit(units)
        self.layer_norm = layers.LayerNormalization()
        self.project = layers.Dense(units)

    def call(self, inputs):
        x = self.elu_dense(inputs)
        x = self.linear_dense(x)
        x = self.dropout(x)
        if inputs.shape[-1] != self.units:
            # Project the inputs to the GRN width so the residual addition is valid.
            inputs = self.project(inputs)
        x = inputs + self.gated_linear_unit(x)
        x = self.layer_norm(x)
        return x
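The four steps can be traced in a NumPy-only forward pass (inference mode, so the dropout step is a no-op and is omitted). The weights here are random stand-ins, and the input width deliberately differs from units to exercise the residual projection branch; this is a sketch of the mechanics, not the Keras layer:

```python
import numpy as np

def elu(z):
    return np.where(z > 0, z, np.exp(z) - 1.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def layer_norm(z, eps=1e-6):
    mean = z.mean(axis=-1, keepdims=True)
    var = z.var(axis=-1, keepdims=True)
    return (z - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
in_dim, units = 3, 8
x_in = rng.normal(size=(2, in_dim))

W1 = rng.normal(size=(in_dim, units))  # ELU dense
W2 = rng.normal(size=(units, units))   # linear dense
Wg1 = rng.normal(size=(units, units))  # GLU linear branch
Wg2 = rng.normal(size=(units, units))  # GLU gate branch
Wp = rng.normal(size=(in_dim, units))  # residual projection

x = elu(x_in @ W1)                  # step 1: non-linear ELU transform
x = x @ W2                          # step 2: linear transform (dropout omitted at inference)
glu = (x @ Wg1) * sigmoid(x @ Wg2)  # step 3: GLU ...
residual = x_in @ Wp                # ... inputs projected to `units` for the skip connection
out = layer_norm(residual + glu)    # step 4: layer normalization

print(out.shape)  # (2, 8)
print(np.allclose(out.mean(axis=-1), 0.0))  # True: per-row mean is ~0 after layer norm
```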
Implement the Variable Selection Network

The Variable Selection Network (VSN) works as follows:

1. Applies a GRN to each feature individually.
2. Applies a GRN to the concatenation of all the features, followed by a softmax to produce feature weights.
3. Produces a weighted sum of the individual GRN outputs.

Note that the output of the VSN is [batch_size, encoding_size], regardless of the number of input features.

class VariableSelection(layers.Layer):
    def __init__(self, num_features, units, dropout_rate):
        super().__init__()
        self.grns = list()
        # Create a GRN for each feature independently.
        for idx in range(num_features):
            grn = GatedResidualNetwork(units, dropout_rate)
            self.grns.append(grn)
        # Create a GRN for the concatenation of all the features.
        self.grn_concat = GatedResidualNetwork(units, dropout_rate)
        self.softmax = layers.Dense(units=num_features, activation="softmax")

    def call(self, inputs):
        v = layers.concatenate(inputs)
        v = self.grn_concat(v)
        v = tf.expand_dims(self.softmax(v), axis=-1)

        x = []
        for idx, input in enumerate(inputs):
            x.append(self.grns[idx](input))
        x = tf.stack(x, axis=1)

        # Weighted sum over the feature axis: [batch_size, units].
        outputs = tf.squeeze(tf.matmul(v, x, transpose_a=True), axis=1)
        return outputs
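The final weight-and-sum step can be verified shape-wise in NumPy. `v` holds one softmax weight per feature ([batch, num_features, 1] after expand_dims), `x` stacks the per-feature GRN outputs ([batch, num_features, units]), and `matmul(v, x, transpose_a=True)` contracts the feature axis, giving [batch_size, units] no matter how many features go in. Random stand-ins replace the GRN outputs here:

```python
import numpy as np

batch_size, num_features, units = 8, 5, 16
rng = np.random.default_rng(0)

# Softmax feature weights, shape [batch, num_features, 1] after expand_dims.
logits = rng.normal(size=(batch_size, num_features))
weights = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
v = weights[..., np.newaxis]

# Per-feature GRN outputs stacked on axis 1: [batch, num_features, units].
x = rng.normal(size=(batch_size, num_features, units))

# transpose_a turns v into [batch, 1, num_features], so the batched matmul
# contracts the feature axis: [batch, 1, units] -> squeeze -> [batch, units].
outputs = np.squeeze(np.transpose(v, (0, 2, 1)) @ x, axis=1)
print(outputs.shape)  # (8, 16)

# Identical to an explicit weighted sum over the feature axis.
explicit = (v * x).sum(axis=1)
print(np.allclose(outputs, explicit))  # True
```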
Create the Gated Residual and Variable Selection Networks model

def create_model(encoding_size):
    inputs = create_model_inputs()
    feature_list = encode_inputs(inputs, encoding_size)
    num_features = len(feature_list)
    # dropout_rate is defined as a global in the training section below.
    features = VariableSelection(num_features, encoding_size, dropout_rate)(
        feature_list
    )
    outputs = layers.Dense(units=1, activation="sigmoid")(features)
    model = keras.Model(inputs=inputs, outputs=outputs)
    return model
Compile, train, and evaluate the model

learning_rate = 0.001
dropout_rate = 0.15
batch_size = 265
num_epochs = 20
encoding_size = 16

model = create_model(encoding_size)
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
    loss=keras.losses.BinaryCrossentropy(),
    metrics=[keras.metrics.BinaryAccuracy(name="accuracy")],
)

# Create an early stopping callback.
early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True
)

print("Start training the model...")
train_dataset = get_dataset_from_csv(
    train_data_file, shuffle=True, batch_size=batch_size
)
valid_dataset = get_dataset_from_csv(valid_data_file, batch_size=batch_size)
model.fit(
    train_dataset,
    epochs=num_epochs,
    validation_data=valid_dataset,
    callbacks=[early_stopping],
)
print("Model training finished.")

print("Evaluating model performance...")
test_dataset = get_dataset_from_csv(test_data_file, batch_size=batch_size)
_, accuracy = model.evaluate(test_dataset)
print(f"Test accuracy: {round(accuracy * 100, 2)}%")
Start training the model...
Epoch 1/20
640/640 [==============================] - 31s 29ms/step - loss: 253.8570 - accuracy: 0.9468 - val_loss: 229.4024 - val_accuracy: 0.9495
Epoch 2/20
640/640 [==============================] - 17s 25ms/step - loss: 229.9359 - accuracy: 0.9497 - val_loss: 223.4970 - val_accuracy: 0.9505
Epoch 3/20
640/640 [==============================] - 17s 25ms/step - loss: 225.5644 - accuracy: 0.9504 - val_loss: 222.0078 - val_accuracy: 0.9515
Epoch 4/20
640/640 [==============================] - 16s 25ms/step - loss: 222.2086 - accuracy: 0.9512 - val_loss: 218.2707 - val_accuracy: 0.9522
Epoch 5/20
640/640 [==============================] - 17s 25ms/step - loss: 218.0359 - accuracy: 0.9523 - val_loss: 217.3721 - val_accuracy: 0.9528
Epoch 6/20
640/640 [==============================] - 17s 26ms/step - loss: 214.8348 - accuracy: 0.9529 - val_loss: 210.3546 - val_accuracy: 0.9543
Epoch 7/20
640/640 [==============================] - 17s 26ms/step - loss: 213.0984 - accuracy: 0.9534 - val_loss: 210.2881 - val_accuracy: 0.9544
Epoch 8/20
640/640 [==============================] - 17s 26ms/step - loss: 211.6379 - accuracy: 0.9538 - val_loss: 209.3327 - val_accuracy: 0.9550
Epoch 9/20
640/640 [==============================] - 17s 26ms/step - loss: 210.7283 - accuracy: 0.9541 - val_loss: 209.5862 - val_accuracy: 0.9543
Epoch 10/20
640/640 [==============================] - 17s 26ms/step - loss: 209.9062 - accuracy: 0.9538 - val_loss: 210.1662 - val_accuracy: 0.9537
Epoch 11/20
640/640 [==============================] - 16s 25ms/step - loss: 209.6323 - accuracy: 0.9540 - val_loss: 207.9528 - val_accuracy: 0.9552
Epoch 12/20
640/640 [==============================] - 16s 25ms/step - loss: 208.7843 - accuracy: 0.9544 - val_loss: 207.5303 - val_accuracy: 0.9550
Epoch 13/20
640/640 [==============================] - 21s 32ms/step - loss: 207.9983 - accuracy: 0.9544 - val_loss: 206.8800 - val_accuracy: 0.9557
Epoch 14/20
640/640 [==============================] - 18s 28ms/step - loss: 207.2104 - accuracy: 0.9544 - val_loss: 216.0859 - val_accuracy: 0.9535
Epoch 15/20
640/640 [==============================] - 16s 25ms/step - loss: 207.2254 - accuracy: 0.9543 - val_loss: 206.7765 - val_accuracy: 0.9555
Epoch 16/20
640/640 [==============================] - 16s 25ms/step - loss: 206.6704 - accuracy: 0.9546 - val_loss: 206.7508 - val_accuracy: 0.9560
Epoch 17/20
640/640 [==============================] - 19s 30ms/step - loss: 206.1322 - accuracy: 0.9545 - val_loss: 205.9638 - val_accuracy: 0.9562
Epoch 18/20
640/640 [==============================] - 21s 31ms/step - loss: 205.4764 - accuracy: 0.9545 - val_loss: 206.0258 - val_accuracy: 0.9561
Epoch 19/20
640/640 [==============================] - 16s 25ms/step - loss: 204.3614 - accuracy: 0.9550 - val_loss: 207.1424 - val_accuracy: 0.9560
Epoch 20/20
640/640 [==============================] - 16s 25ms/step - loss: 203.9543 - accuracy: 0.9550 - val_loss: 206.4697 - val_accuracy: 0.9554
Model training finished.
Evaluating model performance...
377/377 [==============================] - 4s 11ms/step - loss: 204.5099 - accuracy: 0.9547
Test accuracy: 95.47%
You should achieve more than 95% accuracy on the test set.
To increase the learning capacity of the model, you can try increasing the encoding_size value, or stacking multiple GRN layers on top of the VSN layer. This may also require increasing the dropout_rate value to avoid overfitting.