政安晨: [Keras Machine Learning Example Walkthrough] (51): Structured Data Learning with Wide, Deep, and Cross Networks


Goal of this article: structured data classification with "Wide & Deep" and "Deep & Cross" networks.

Table of Contents

Introduction
The dataset
Setup
Preparing the data
Defining dataset metadata
Experiment setup
Creating model inputs
Encoding features
Experiment 1: a baseline model
Experiment 2: Wide & Deep model
Experiment 3: Deep & Cross model
Conclusion


Introduction

This example demonstrates how to do structured data classification using two modeling techniques:

Wide & Deep models
Deep & Cross models


Note that this example should be run with TensorFlow 2.5 or higher.

The dataset


This example uses the Covertype dataset from the UCI Machine Learning Repository. The task is to predict forest cover type from cartographic variables. The dataset includes 581,012 instances with 12 input features: 10 numerical features and 2 categorical features. Each instance is categorized into 1 of 7 classes.

Setup

import os

# Only the TensorFlow backend supports string inputs.
os.environ["KERAS_BACKEND"] = "tensorflow"

import math
import numpy as np
import pandas as pd
from tensorflow import data as tf_data
import keras
from keras import layers

Preparing the data


First, let's load the dataset from the UCI Machine Learning Repository into a Pandas DataFrame:

data_url = ("https://archive.ics.uci.edu/ml/machine-learning-databases/covtype/covtype.data.gz"
)
raw_data = pd.read_csv(data_url, header=None)
print(f"Dataset shape: {raw_data.shape}")
raw_data.head()
Dataset shape: (581012, 55)
       0    1    2    3    4     5    6    7    8     9  ...  45  46  47  48  49  50  51  52  53  54
0   2596   51    3  258    0   510  221  232  148  6279  ...   0   0   0   0   0   0   0   0   0   5
1   2590   56    2  212   -6   390  220  235  151  6225  ...   0   0   0   0   0   0   0   0   0   5
2   2804  139    9  268   65  3180  234  238  135  6121  ...   0   0   0   0   0   0   0   0   0   2
3   2785  155   18  242  118  3090  238  238  122  6211  ...   0   0   0   0   0   0   0   0   0   2
4   2595   45    2  153   -1   391  220  234  150  6172  ...   0   0   0   0   0   0   0   0   0   5

5 rows × 55 columns

The two categorical features in the dataset are binary-encoded, i.e. spread across multiple 0/1 columns. We will convert this dataset representation into the typical representation, where each categorical feature is represented as a single category value.
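To make the idea concrete before applying it to the full table, here is a small, self-contained sketch (with made-up rows, not the actual dataset) of how a block of one-hot columns collapses back into a single category label; the actual conversion below applies the same nonzero() trick row by row to the Soil_Type and Wilderness_Area blocks:

import pandas as pd

# Two made-up rows of a 4-column one-hot block (illustrative only).
one_hot_block = pd.DataFrame([[1, 0, 0, 0], [0, 0, 1, 0]])
area_values = [f"area_type_{idx + 1}" for idx in range(4)]

# nonzero() returns the positions of the 1s in each row; the first (and only)
# nonzero position is the index of the active category.
labels = one_hot_block.apply(
    lambda row: area_values[row.to_numpy().nonzero()[0][0]], axis=1
)
print(list(labels))  # ['area_type_1', 'area_type_3']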

soil_type_values = [f"soil_type_{idx+1}" for idx in range(40)]
wilderness_area_values = [f"area_type_{idx+1}" for idx in range(4)]

soil_type = raw_data.loc[:, 14:53].apply(
    lambda x: soil_type_values[0::1][x.to_numpy().nonzero()[0][0]], axis=1
)
wilderness_area = raw_data.loc[:, 10:13].apply(
    lambda x: wilderness_area_values[0::1][x.to_numpy().nonzero()[0][0]], axis=1
)

CSV_HEADER = [
    "Elevation",
    "Aspect",
    "Slope",
    "Horizontal_Distance_To_Hydrology",
    "Vertical_Distance_To_Hydrology",
    "Horizontal_Distance_To_Roadways",
    "Hillshade_9am",
    "Hillshade_Noon",
    "Hillshade_3pm",
    "Horizontal_Distance_To_Fire_Points",
    "Wilderness_Area",
    "Soil_Type",
    "Cover_Type",
]

data = pd.concat(
    [raw_data.loc[:, 0:9], wilderness_area, soil_type, raw_data.loc[:, 54]],
    axis=1,
    ignore_index=True,
)
data.columns = CSV_HEADER

# Convert the target label indices into a range from 0 to 6 (there are 7 labels in total).
data["Cover_Type"] = data["Cover_Type"] - 1

print(f"Dataset shape: {data.shape}")
data.head().T
Dataset shape: (581012, 13)
                                              0             1             2             3             4
Elevation                                  2596          2590          2804          2785          2595
Aspect                                       51            56           139           155            45
Slope                                         3             2             9            18             2
Horizontal_Distance_To_Hydrology            258           212           268           242           153
Vertical_Distance_To_Hydrology                0            -6            65           118            -1
Horizontal_Distance_To_Roadways             510           390          3180          3090           391
Hillshade_9am                               221           220           234           238           220
Hillshade_Noon                              232           235           238           238           234
Hillshade_3pm                               148           151           135           122           150
Horizontal_Distance_To_Fire_Points         6279          6225          6121          6211          6172
Wilderness_Area                     area_type_1   area_type_1   area_type_1   area_type_1   area_type_1
Soil_Type                          soil_type_29  soil_type_29  soil_type_12  soil_type_30  soil_type_29
Cover_Type                                    4             4             1             1             4

The shape of the DataFrame shows there are 13 columns per sample (12 for the features and 1 for the target label).
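Because the split in the next step is done per class, it can help to first look at the label distribution; the Covertype classes are far from balanced. A quick check like the following (the counts themselves are not reproduced here) makes that visible, assuming the data DataFrame built above:

# Label distribution of the 7 cover types (0-6 after the shift above).
print(data["Cover_Type"].value_counts())
print(data["Cover_Type"].value_counts(normalize=True).round(3))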

Let's split the data into training (85%) and test (15%) sets.

train_splits = []
test_splits = []

for _, group_data in data.groupby("Cover_Type"):
    random_selection = np.random.rand(len(group_data.index)) <= 0.85
    train_splits.append(group_data[random_selection])
    test_splits.append(group_data[~random_selection])

train_data = pd.concat(train_splits).sample(frac=1).reset_index(drop=True)
test_data = pd.concat(test_splits).sample(frac=1).reset_index(drop=True)

print(f"Train split size: {len(train_data.index)}")
print(f"Test split size: {len(test_data.index)}")
Train split size: 493323
Test split size: 87689

Then, let's store the training and test data in separate CSV files.

train_data_file = "train_data.csv"
test_data_file = "test_data.csv"

train_data.to_csv(train_data_file, index=False)
test_data.to_csv(test_data_file, index=False)

Defining dataset metadata


Here, we define the metadata of the dataset that will be useful for reading and parsing the data into input features, and for encoding the input features with respect to their types.

TARGET_FEATURE_NAME = "Cover_Type"

TARGET_FEATURE_LABELS = ["0", "1", "2", "3", "4", "5", "6"]

NUMERIC_FEATURE_NAMES = [
    "Aspect",
    "Elevation",
    "Hillshade_3pm",
    "Hillshade_9am",
    "Hillshade_Noon",
    "Horizontal_Distance_To_Fire_Points",
    "Horizontal_Distance_To_Hydrology",
    "Horizontal_Distance_To_Roadways",
    "Slope",
    "Vertical_Distance_To_Hydrology",
]

CATEGORICAL_FEATURES_WITH_VOCABULARY = {
    "Soil_Type": list(data["Soil_Type"].unique()),
    "Wilderness_Area": list(data["Wilderness_Area"].unique()),
}

CATEGORICAL_FEATURE_NAMES = list(CATEGORICAL_FEATURES_WITH_VOCABULARY.keys())

FEATURE_NAMES = NUMERIC_FEATURE_NAMES + CATEGORICAL_FEATURE_NAMES

COLUMN_DEFAULTS = [
    [0] if feature_name in NUMERIC_FEATURE_NAMES + [TARGET_FEATURE_NAME] else ["NA"]
    for feature_name in CSV_HEADER
]

NUM_CLASSES = len(TARGET_FEATURE_LABELS)

Experiment setup


Next, let's define an input function that reads and parses the file, then converts features and labels into a tf.data.Dataset for training or evaluation.

def get_dataset_from_csv(csv_file_path, batch_size, shuffle=False):
    dataset = tf_data.experimental.make_csv_dataset(
        csv_file_path,
        batch_size=batch_size,
        column_names=CSV_HEADER,
        column_defaults=COLUMN_DEFAULTS,
        label_name=TARGET_FEATURE_NAME,
        num_epochs=1,
        header=True,
        shuffle=shuffle,
    )
    return dataset.cache()
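As a quick sanity check (not part of the original walkthrough), you can pull a single batch from the returned dataset and inspect its structure: the features arrive as a dict of tensors keyed by column name, together with a label tensor. The batch size of 8 below is arbitrary and purely illustrative.

# Illustrative usage only; assumes train_data.csv was written as above.
sample_ds = get_dataset_from_csv(train_data_file, batch_size=8)
for features, labels in sample_ds.take(1):
    for name, values in features.items():
        print(name, values.dtype, values.shape)
    print("labels:", labels.shape)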

Here, we configure the parameters and implement the procedure for running a training and evaluation experiment with a given model.

learning_rate = 0.001
dropout_rate = 0.1
batch_size = 265
num_epochs = 50

hidden_units = [32, 32]


def run_experiment(model):
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
        loss=keras.losses.SparseCategoricalCrossentropy(),
        metrics=[keras.metrics.SparseCategoricalAccuracy()],
    )

    train_dataset = get_dataset_from_csv(train_data_file, batch_size, shuffle=True)
    test_dataset = get_dataset_from_csv(test_data_file, batch_size)

    print("Start training the model...")
    history = model.fit(train_dataset, epochs=num_epochs)
    print("Model training finished")

    _, accuracy = model.evaluate(test_dataset, verbose=0)
    print(f"Test accuracy: {round(accuracy * 100, 2)}%")

Creating model inputs


Now, define the inputs for the models as a dictionary, where the key is the feature name, and the value is a keras.layers.Input tensor with the corresponding feature shape and data type.

def create_model_inputs():
    inputs = {}
    for feature_name in FEATURE_NAMES:
        if feature_name in NUMERIC_FEATURE_NAMES:
            inputs[feature_name] = layers.Input(
                name=feature_name, shape=(), dtype="float32"
            )
        else:
            inputs[feature_name] = layers.Input(
                name=feature_name, shape=(), dtype="string"
            )
    return inputs

Encoding features


We create two representations of our input features: sparse and dense.
1. In the sparse representation, the categorical features are one-hot encoded using the CategoryEncoding layer. This representation can help the model memorize particular feature values in order to make certain predictions.
2. In the dense representation, the categorical features are encoded with low-dimensional embeddings using the Embedding layer. This representation helps the model generalize well to unseen feature combinations.

def encode_inputs(inputs, use_embedding=False):
    encoded_features = []
    for feature_name in inputs:
        if feature_name in CATEGORICAL_FEATURE_NAMES:
            vocabulary = CATEGORICAL_FEATURES_WITH_VOCABULARY[feature_name]
            # Create a lookup to convert string values to integer indices.
            # Since we are not using a mask token nor expecting any out-of-vocabulary
            # (oov) token, we set mask_token to None and num_oov_indices to 0.
            lookup = layers.StringLookup(
                vocabulary=vocabulary,
                mask_token=None,
                num_oov_indices=0,
                output_mode="int" if use_embedding else "binary",
            )
            if use_embedding:
                # Convert the string input values into integer indices.
                encoded_feature = lookup(inputs[feature_name])
                embedding_dims = int(math.sqrt(len(vocabulary)))
                # Create an embedding layer with the specified dimensions.
                embedding = layers.Embedding(
                    input_dim=len(vocabulary), output_dim=embedding_dims
                )
                # Convert the index values to embedding representations.
                encoded_feature = embedding(encoded_feature)
            else:
                # Convert the string input values into a one-hot encoding.
                encoded_feature = lookup(
                    keras.ops.expand_dims(inputs[feature_name], -1)
                )
        else:
            # Use the numerical features as-is.
            encoded_feature = keras.ops.expand_dims(inputs[feature_name], -1)
        encoded_features.append(encoded_feature)

    all_features = layers.concatenate(encoded_features)
    return all_features
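To get a feel for the size difference between the two representations, here is a rough back-of-the-envelope sketch for this dataset (10 numeric features, 40 soil types, 4 wilderness areas), using the same int(sqrt(vocabulary_size)) embedding rule as above. The exact layer shapes come from the code; this is only arithmetic.

import math

num_numeric = 10
soil_vocab, area_vocab = 40, 4

# Sparse path: every category becomes its own one-hot column.
sparse_width = num_numeric + soil_vocab + area_vocab  # 10 + 40 + 4 = 54

# Dense path: each categorical feature becomes a small embedding vector.
dense_width = (
    num_numeric + int(math.sqrt(soil_vocab)) + int(math.sqrt(area_vocab))
)  # 10 + 6 + 2 = 18

print(sparse_width, dense_width)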

Experiment 1: a baseline model


In the first experiment, let's create a multi-layer feed-forward network, where the categorical features are one-hot encoded.

def create_baseline_model():
    inputs = create_model_inputs()
    features = encode_inputs(inputs)

    for units in hidden_units:
        features = layers.Dense(units)(features)
        features = layers.BatchNormalization()(features)
        features = layers.ReLU()(features)
        features = layers.Dropout(dropout_rate)(features)

    outputs = layers.Dense(units=NUM_CLASSES, activation="softmax")(features)
    model = keras.Model(inputs=inputs, outputs=outputs)
    return model


baseline_model = create_baseline_model()
keras.utils.plot_model(baseline_model, show_shapes=True, rankdir="LR")

Let's run it:

run_experiment(baseline_model)
Start training the model...
Epoch 1/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 6s 3ms/step - loss: 1.0713 - sparse_categorical_accuracy: 0.5634
Epoch 2/50   179/1862 ━━━━━━━━━━━━━━━━━━━━ 1s 848us/step - loss: 0.7473 - sparse_categorical_accuracy: 0.6840
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py:153: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset.
  self.gen.throw(typ, value, traceback)
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 904us/step - loss: 0.7386 - sparse_categorical_accuracy: 0.6866
Epoch 3/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 909us/step - loss: 0.7135 - sparse_categorical_accuracy: 0.6958
Epoch 4/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 878us/step - loss: 0.6975 - sparse_categorical_accuracy: 0.7051
Epoch 5/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 941us/step - loss: 0.6876 - sparse_categorical_accuracy: 0.7089
Epoch 6/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 936us/step - loss: 0.6848 - sparse_categorical_accuracy: 0.7106
Epoch 7/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 934us/step - loss: 0.7165 - sparse_categorical_accuracy: 0.6969
Epoch 8/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 924us/step - loss: 0.6979 - sparse_categorical_accuracy: 0.7053
Epoch 9/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 967us/step - loss: 0.6913 - sparse_categorical_accuracy: 0.7088
Epoch 10/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 975us/step - loss: 0.6807 - sparse_categorical_accuracy: 0.7124
Epoch 11/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 987us/step - loss: 0.6829 - sparse_categorical_accuracy: 0.7110
Epoch 12/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 917us/step - loss: 0.6823 - sparse_categorical_accuracy: 0.7109
Epoch 13/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 879us/step - loss: 0.6658 - sparse_categorical_accuracy: 0.7175
Epoch 14/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 948us/step - loss: 0.6677 - sparse_categorical_accuracy: 0.7170
Epoch 15/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 866us/step - loss: 0.6695 - sparse_categorical_accuracy: 0.7130
Epoch 16/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 860us/step - loss: 0.6847 - sparse_categorical_accuracy: 0.7074
Epoch 17/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 853us/step - loss: 0.6660 - sparse_categorical_accuracy: 0.7174
Epoch 18/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 855us/step - loss: 0.6620 - sparse_categorical_accuracy: 0.7184
Epoch 19/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 900us/step - loss: 0.6642 - sparse_categorical_accuracy: 0.7163
Epoch 20/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 969us/step - loss: 0.6614 - sparse_categorical_accuracy: 0.7167
Epoch 21/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 988us/step - loss: 0.6560 - sparse_categorical_accuracy: 0.7199
Epoch 22/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 969us/step - loss: 0.6559 - sparse_categorical_accuracy: 0.7201
Epoch 23/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 868us/step - loss: 0.6514 - sparse_categorical_accuracy: 0.7217
Epoch 24/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 925us/step - loss: 0.6509 - sparse_categorical_accuracy: 0.7222
Epoch 25/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 879us/step - loss: 0.6464 - sparse_categorical_accuracy: 0.7233
Epoch 26/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 898us/step - loss: 0.6442 - sparse_categorical_accuracy: 0.7237
Epoch 27/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 842us/step - loss: 0.6476 - sparse_categorical_accuracy: 0.7210
Epoch 28/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 815us/step - loss: 0.6427 - sparse_categorical_accuracy: 0.7247
Epoch 29/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 837us/step - loss: 0.6414 - sparse_categorical_accuracy: 0.7244
Epoch 30/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 865us/step - loss: 0.6408 - sparse_categorical_accuracy: 0.7256
Epoch 31/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 845us/step - loss: 0.6378 - sparse_categorical_accuracy: 0.7269
Epoch 32/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 842us/step - loss: 0.6432 - sparse_categorical_accuracy: 0.7235
Epoch 33/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 905us/step - loss: 0.6482 - sparse_categorical_accuracy: 0.7226
Epoch 34/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.6586 - sparse_categorical_accuracy: 0.7191
Epoch 35/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 958us/step - loss: 0.6511 - sparse_categorical_accuracy: 0.7215
Epoch 36/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 910us/step - loss: 0.6571 - sparse_categorical_accuracy: 0.7217
Epoch 37/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 897us/step - loss: 0.6451 - sparse_categorical_accuracy: 0.7253
Epoch 38/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 846us/step - loss: 0.6455 - sparse_categorical_accuracy: 0.7254
Epoch 39/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 907us/step - loss: 0.6722 - sparse_categorical_accuracy: 0.7131
Epoch 40/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1000us/step - loss: 0.6393 - sparse_categorical_accuracy: 0.7282
Epoch 41/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 872us/step - loss: 0.6804 - sparse_categorical_accuracy: 0.7078
Epoch 42/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 884us/step - loss: 0.6657 - sparse_categorical_accuracy: 0.7135
Epoch 43/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 960us/step - loss: 0.6557 - sparse_categorical_accuracy: 0.7180
Epoch 44/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 870us/step - loss: 0.6671 - sparse_categorical_accuracy: 0.7115
Epoch 45/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 871us/step - loss: 0.6730 - sparse_categorical_accuracy: 0.7069
Epoch 46/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 875us/step - loss: 0.6669 - sparse_categorical_accuracy: 0.7105
Epoch 47/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 847us/step - loss: 0.6634 - sparse_categorical_accuracy: 0.7129
Epoch 48/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 846us/step - loss: 0.6625 - sparse_categorical_accuracy: 0.7137
Epoch 49/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 824us/step - loss: 0.6596 - sparse_categorical_accuracy: 0.7146
Epoch 50/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 833us/step - loss: 0.6714 - sparse_categorical_accuracy: 0.7106
Model training finished
Test accuracy: 69.5%

The baseline model achieves about 69.5% test accuracy in this run.

Experiment 2: Wide & Deep model

In the second experiment, we create a Wide & Deep model. The wide part of the model is a linear model, while the deep part is a multi-layer feed-forward network.

We use the sparse representation of the input features in the wide part of the model and the dense representation in the deep part.

Note that every input feature contributes to both parts of the model with different representations.

def create_wide_and_deep_model():
    inputs = create_model_inputs()
    wide = encode_inputs(inputs)
    wide = layers.BatchNormalization()(wide)

    deep = encode_inputs(inputs, use_embedding=True)
    for units in hidden_units:
        deep = layers.Dense(units)(deep)
        deep = layers.BatchNormalization()(deep)
        deep = layers.ReLU()(deep)
        deep = layers.Dropout(dropout_rate)(deep)

    merged = layers.concatenate([wide, deep])
    outputs = layers.Dense(units=NUM_CLASSES, activation="softmax")(merged)
    model = keras.Model(inputs=inputs, outputs=outputs)
    return model


wide_and_deep_model = create_wide_and_deep_model()
keras.utils.plot_model(wide_and_deep_model, show_shapes=True, rankdir="LR")

Let's run it:

run_experiment(wide_and_deep_model)
Start training the model...
Epoch 1/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 5s 2ms/step - loss: 0.8979 - sparse_categorical_accuracy: 0.6386
Epoch 2/50   128/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.6317 - sparse_categorical_accuracy: 0.7302
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py:153: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset.
  self.gen.throw(typ, value, traceback)
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.6290 - sparse_categorical_accuracy: 0.7295
Epoch 3/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.6130 - sparse_categorical_accuracy: 0.7350
Epoch 4/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.6029 - sparse_categorical_accuracy: 0.7397
Epoch 5/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 3s 1ms/step - loss: 0.6010 - sparse_categorical_accuracy: 0.7397
Epoch 6/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5924 - sparse_categorical_accuracy: 0.7445
Epoch 7/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5917 - sparse_categorical_accuracy: 0.7442
Epoch 8/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5945 - sparse_categorical_accuracy: 0.7438
Epoch 9/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5933 - sparse_categorical_accuracy: 0.7443
Epoch 10/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5862 - sparse_categorical_accuracy: 0.7481
Epoch 11/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5809 - sparse_categorical_accuracy: 0.7507
Epoch 12/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5777 - sparse_categorical_accuracy: 0.7519
Epoch 13/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5736 - sparse_categorical_accuracy: 0.7534
Epoch 14/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5716 - sparse_categorical_accuracy: 0.7545
Epoch 15/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5736 - sparse_categorical_accuracy: 0.7537
Epoch 16/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5712 - sparse_categorical_accuracy: 0.7559
Epoch 17/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5683 - sparse_categorical_accuracy: 0.7564
Epoch 18/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5666 - sparse_categorical_accuracy: 0.7569
Epoch 19/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5652 - sparse_categorical_accuracy: 0.7575
Epoch 20/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5634 - sparse_categorical_accuracy: 0.7583
Epoch 21/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5677 - sparse_categorical_accuracy: 0.7563
Epoch 22/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5651 - sparse_categorical_accuracy: 0.7578
Epoch 23/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5628 - sparse_categorical_accuracy: 0.7586
Epoch 24/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5619 - sparse_categorical_accuracy: 0.7593
Epoch 25/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5603 - sparse_categorical_accuracy: 0.7589
Epoch 26/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5644 - sparse_categorical_accuracy: 0.7585
Epoch 27/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5592 - sparse_categorical_accuracy: 0.7604
Epoch 28/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5571 - sparse_categorical_accuracy: 0.7616
Epoch 29/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5556 - sparse_categorical_accuracy: 0.7629
Epoch 30/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5538 - sparse_categorical_accuracy: 0.7640
Epoch 31/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5535 - sparse_categorical_accuracy: 0.7635
Epoch 32/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5521 - sparse_categorical_accuracy: 0.7645
Epoch 33/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5505 - sparse_categorical_accuracy: 0.7648
Epoch 34/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5494 - sparse_categorical_accuracy: 0.7657
Epoch 35/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5496 - sparse_categorical_accuracy: 0.7660
Epoch 36/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5488 - sparse_categorical_accuracy: 0.7673
Epoch 37/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5471 - sparse_categorical_accuracy: 0.7668
Epoch 38/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5474 - sparse_categorical_accuracy: 0.7673
Epoch 39/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5457 - sparse_categorical_accuracy: 0.7674
Epoch 40/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5452 - sparse_categorical_accuracy: 0.7689
Epoch 41/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5448 - sparse_categorical_accuracy: 0.7679
Epoch 42/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 3s 1ms/step - loss: 0.5442 - sparse_categorical_accuracy: 0.7692
Epoch 43/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5436 - sparse_categorical_accuracy: 0.7701
Epoch 44/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5419 - sparse_categorical_accuracy: 0.7706
Epoch 45/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5432 - sparse_categorical_accuracy: 0.7691
Epoch 46/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5406 - sparse_categorical_accuracy: 0.7708
Epoch 47/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5412 - sparse_categorical_accuracy: 0.7701
Epoch 48/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5400 - sparse_categorical_accuracy: 0.7701
Epoch 49/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5416 - sparse_categorical_accuracy: 0.7699
Epoch 50/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5403 - sparse_categorical_accuracy: 0.7701
Model training finished
Test accuracy: 79.04%

The Wide & Deep model achieves approximately 79% test accuracy.

Experiment 3: Deep & Cross model

In the third experiment, we create a Deep & Cross model. The deep part of this model is the same as the deep part created in the previous experiment. The key idea of the cross part is to apply explicit feature crossing in an efficient way, where the degree of the cross features grows with layer depth.
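Concretely, each cross layer computes cross = x0 * Dense(cross) + cross, where x0 is the encoded input and * is an element-wise product, so every additional layer raises the polynomial degree of the feature interactions by one. Below is a minimal numeric sketch of a single cross step with toy values and an arbitrary weight matrix (not tied to the model built afterwards); the create_deep_and_cross_model function that follows implements the same update with Keras layers.

import numpy as np

x0 = np.array([1.0, 2.0, 3.0])        # encoded input features
cross = x0.copy()                      # the cross branch starts at x0
W = np.array([[0.1, 0.0, 0.2],
              [0.0, 0.3, 0.0],
              [0.4, 0.0, 0.1]])        # stand-in for a learned Dense kernel (bias omitted)

# One cross step: element-wise product with x0, plus a residual connection.
cross = x0 * (W @ cross) + cross       # now contains second-order terms of the inputs
print(cross)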

def create_deep_and_cross_model():
    inputs = create_model_inputs()
    x0 = encode_inputs(inputs, use_embedding=True)

    cross = x0
    for _ in hidden_units:
        units = cross.shape[-1]
        x = layers.Dense(units)(cross)
        cross = x0 * x + cross
    cross = layers.BatchNormalization()(cross)

    deep = x0
    for units in hidden_units:
        deep = layers.Dense(units)(deep)
        deep = layers.BatchNormalization()(deep)
        deep = layers.ReLU()(deep)
        deep = layers.Dropout(dropout_rate)(deep)

    merged = layers.concatenate([cross, deep])
    outputs = layers.Dense(units=NUM_CLASSES, activation="softmax")(merged)
    model = keras.Model(inputs=inputs, outputs=outputs)
    return model


deep_and_cross_model = create_deep_and_cross_model()
keras.utils.plot_model(deep_and_cross_model, show_shapes=True, rankdir="LR")

Let's run it:

run_experiment(deep_and_cross_model)
Start training the model...
Epoch 1/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 5s 2ms/step - loss: 0.9221 - sparse_categorical_accuracy: 0.6235
Epoch 2/50   116/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.6388 - sparse_categorical_accuracy: 0.7257
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py:153: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset.
  self.gen.throw(typ, value, traceback)
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 3s 2ms/step - loss: 0.6271 - sparse_categorical_accuracy: 0.7316
Epoch 3/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 3s 1ms/step - loss: 0.6023 - sparse_categorical_accuracy: 0.7403
Epoch 4/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5896 - sparse_categorical_accuracy: 0.7453
Epoch 5/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5899 - sparse_categorical_accuracy: 0.7438
Epoch 6/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5960 - sparse_categorical_accuracy: 0.7421
Epoch 7/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5813 - sparse_categorical_accuracy: 0.7481
Epoch 8/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5748 - sparse_categorical_accuracy: 0.7500
Epoch 9/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5743 - sparse_categorical_accuracy: 0.7502
Epoch 10/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5739 - sparse_categorical_accuracy: 0.7506
Epoch 11/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5673 - sparse_categorical_accuracy: 0.7540
Epoch 12/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5649 - sparse_categorical_accuracy: 0.7561
Epoch 13/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 3s 1ms/step - loss: 0.5651 - sparse_categorical_accuracy: 0.7548
Epoch 14/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5618 - sparse_categorical_accuracy: 0.7563
Epoch 15/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5599 - sparse_categorical_accuracy: 0.7571
Epoch 16/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5568 - sparse_categorical_accuracy: 0.7585
Epoch 17/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5556 - sparse_categorical_accuracy: 0.7592
Epoch 18/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5544 - sparse_categorical_accuracy: 0.7595
Epoch 19/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5533 - sparse_categorical_accuracy: 0.7603
Epoch 20/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5532 - sparse_categorical_accuracy: 0.7597
Epoch 21/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5531 - sparse_categorical_accuracy: 0.7602
Epoch 22/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5516 - sparse_categorical_accuracy: 0.7608
Epoch 23/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 3s 1ms/step - loss: 0.5503 - sparse_categorical_accuracy: 0.7611
Epoch 24/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5492 - sparse_categorical_accuracy: 0.7619
Epoch 25/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5482 - sparse_categorical_accuracy: 0.7623
Epoch 26/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5464 - sparse_categorical_accuracy: 0.7635
Epoch 27/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5483 - sparse_categorical_accuracy: 0.7625
Epoch 28/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 3s 1ms/step - loss: 0.5654 - sparse_categorical_accuracy: 0.7555
Epoch 29/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5545 - sparse_categorical_accuracy: 0.7593
Epoch 30/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5512 - sparse_categorical_accuracy: 0.7603
Epoch 31/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5493 - sparse_categorical_accuracy: 0.7616
Epoch 32/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5485 - sparse_categorical_accuracy: 0.7627
Epoch 33/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5593 - sparse_categorical_accuracy: 0.7588
Epoch 34/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5536 - sparse_categorical_accuracy: 0.7608
Epoch 35/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5537 - sparse_categorical_accuracy: 0.7612
Epoch 36/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5518 - sparse_categorical_accuracy: 0.7621
Epoch 37/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5502 - sparse_categorical_accuracy: 0.7618
Epoch 38/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5537 - sparse_categorical_accuracy: 0.7597
Epoch 39/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5526 - sparse_categorical_accuracy: 0.7609
Epoch 40/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5508 - sparse_categorical_accuracy: 0.7608
Epoch 41/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5495 - sparse_categorical_accuracy: 0.7613
Epoch 42/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 3s 1ms/step - loss: 0.5478 - sparse_categorical_accuracy: 0.7625
Epoch 43/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5471 - sparse_categorical_accuracy: 0.7629
Epoch 44/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5462 - sparse_categorical_accuracy: 0.7640
Epoch 45/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5458 - sparse_categorical_accuracy: 0.7633
Epoch 46/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5466 - sparse_categorical_accuracy: 0.7635
Epoch 47/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5492 - sparse_categorical_accuracy: 0.7633
Epoch 48/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5474 - sparse_categorical_accuracy: 0.7639
Epoch 49/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5452 - sparse_categorical_accuracy: 0.7645
Epoch 50/50  1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5446 - sparse_categorical_accuracy: 0.7663
Model training finished
Test accuracy: 77.98%

The Deep & Cross model achieves about 78% test accuracy in this run.

Conclusion


You can use Keras preprocessing layers to easily handle categorical features with different encoding mechanisms, including one-hot encoding and feature embedding. In addition, different model architectures, such as wide, deep, and cross networks, have different advantages with respect to different dataset properties. You can explore using them independently or combining them to achieve the best result for your dataset.


