Experiment Objective
Understand and master the basic principles and methods of naive Bayes, understand maximum likelihood estimation, understand the concepts of prior and posterior probability distributions, and master how to train a naive Bayes classifier.
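As background for these concepts, Bayes' theorem relates the posterior class probability to the prior and the class-conditional likelihood, and the "naive" conditional-independence assumption factors that likelihood over the individual attributes; the classifier predicts the class with the largest posterior:

$$
P(Y=y_k \mid X=x) = \frac{P(Y=y_k)\,P(X=x \mid Y=y_k)}{P(X=x)}
\;\propto\; P(Y=y_k)\prod_{j=1}^{4} P(x_j \mid Y=y_k),
\qquad
\hat{y} = \arg\max_{k}\; P(Y=y_k)\prod_{j=1}^{4} P(x_j \mid Y=y_k).
$$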
Experiment Requirements
Given the dataset, implement the naive Bayes classification algorithm, compute the corresponding prior probabilities and conditional probabilities, estimate the means and variances of the Gaussian distributions, and report the model's accuracy on the test set.
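Concretely, for a Gaussian naive Bayes model the maximum likelihood estimates are the class frequencies for the priors and, for each class k and attribute j, the sample mean and the biased (divide-by-N_k) sample variance; this is exactly what np.mean and np.var with the default ddof=0 compute in the code below:

$$
\hat{P}(Y=y_k)=\frac{N_k}{N},\qquad
\hat{\mu}_{k,j}=\frac{1}{N_k}\sum_{i:\,y_i=y_k} x_{i,j},\qquad
\hat{\sigma}^2_{k,j}=\frac{1}{N_k}\sum_{i:\,y_i=y_k}\bigl(x_{i,j}-\hat{\mu}_{k,j}\bigr)^2,
$$

$$
P(x_j \mid Y=y_k)=\frac{1}{\sqrt{2\pi\,\hat{\sigma}^2_{k,j}}}
\exp\!\left(-\frac{(x_j-\hat{\mu}_{k,j})^2}{2\,\hat{\sigma}^2_{k,j}}\right).
$$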
Experiment Environment
Python, NumPy, SciPy
Experiment Code
import numpy as np
from scipy.stats import norm

# Load the training data: columns 1-4 are the features, column 5 is the class label
train_dataset_data = np.genfromtxt("experiment_07_training_set.csv", delimiter=",", skip_header=1, usecols=(1, 2, 3, 4))
rowOfTrainDataset = train_dataset_data.shape[0]
train_dataset_label = np.genfromtxt("experiment_07_training_set.csv", delimiter=",", skip_header=1, usecols=(5,), dtype="str")

# Load the test data
test_dataset_data = np.genfromtxt("experiment_07_testing_set.csv", delimiter=",", skip_header=1, usecols=(1, 2, 3, 4))
rowOfTestDataset = test_dataset_data.shape[0]
test_dataset_label = np.genfromtxt("experiment_07_testing_set.csv", delimiter=",", skip_header=1, usecols=(5,), dtype="str")

# Class list and attribute list
species = ["Iris-setosa", "Iris-versicolor", "Iris-virginica"]
xs = ["SepalLength", "SepalWidth", "PetalLength", "PetalWidth"]

# Estimate the prior probabilities as class frequencies in the training set
prior = np.zeros(3)
for i in range(3):
    prior[i] = np.sum(train_dataset_label == species[i]) / rowOfTrainDataset
print("Prior probabilities:")
for i in range(3):
    print(f"{species[i]} -> {prior[i]}")

# Estimate the class-conditional Gaussian parameters
condition = np.zeros((3, 4, 2))
for i in range(3):            # 3 classes
    for j in range(4):        # 4 attributes per class
        temp = train_dataset_data[train_dataset_label == species[i], j]
        condition[i, j, 0] = np.mean(temp)          # mean
        condition[i, j, 1] = np.sqrt(np.var(temp))  # standard deviation (MLE, ddof=0)
print("Gaussian parameter estimates:")
for i in range(3):
    print(f"P(X|Y={species[i]}) ->", end='')
    for j in range(4):
        print(f"|X{j + 1}={xs[j]}: mean->{condition[i, j, 0]:.4f} std->{condition[i, j, 1]:.4f}|", end=' ')
    print("")

# Classify the test set
pred = np.zeros(rowOfTestDataset, dtype=int)
for i in range(rowOfTestDataset):
    probability1 = 0  # largest score seen so far
    # Score each class and keep the one with the largest prior * likelihood product
    for j in range(3):
        p0 = norm.pdf(test_dataset_data[i, 0], loc=condition[j, 0, 0], scale=condition[j, 0, 1])
        p1 = norm.pdf(test_dataset_data[i, 1], loc=condition[j, 1, 0], scale=condition[j, 1, 1])
        p2 = norm.pdf(test_dataset_data[i, 2], loc=condition[j, 2, 0], scale=condition[j, 2, 1])
        p3 = norm.pdf(test_dataset_data[i, 3], loc=condition[j, 3, 0], scale=condition[j, 3, 1])
        probability2 = prior[j] * p0 * p1 * p2 * p3
        if probability2 > probability1:
            pred[i] = j
            probability1 = probability2

# Map predicted class indices back to class names
pred_species = np.array([species[p] for p in pred])
# Accuracy = fraction of test samples whose predicted class matches the true label
accuracy = np.sum(pred_species == test_dataset_label) / rowOfTestDataset
print(f"Model accuracy: {accuracy * 100:.2f}%")
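As a sanity check on the implementation above, the following sketch fits scikit-learn's GaussianNB on the same files. scikit-learn is an assumption here (it is not listed in the experiment environment), and its default var_smoothing adds a tiny constant to each variance, so its estimates and accuracy should match the hand-rolled version only up to that smoothing.

import numpy as np
from sklearn.naive_bayes import GaussianNB  # assumption: scikit-learn is installed

# Reload the same files; columns 1-4 are features, column 5 is the class label
X_train = np.genfromtxt("experiment_07_training_set.csv", delimiter=",", skip_header=1, usecols=(1, 2, 3, 4))
y_train = np.genfromtxt("experiment_07_training_set.csv", delimiter=",", skip_header=1, usecols=(5,), dtype="str")
X_test = np.genfromtxt("experiment_07_testing_set.csv", delimiter=",", skip_header=1, usecols=(1, 2, 3, 4))
y_test = np.genfromtxt("experiment_07_testing_set.csv", delimiter=",", skip_header=1, usecols=(5,), dtype="str")

clf = GaussianNB()  # MLE means/variances plus a small var_smoothing term
clf.fit(X_train, y_train)
print("Per-class feature means:\n", clf.theta_)  # comparable to condition[:, :, 0] above
print("sklearn test accuracy:", clf.score(X_test, y_test))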
Results and Analysis
Prior probabilities:
Class | Prior probability |
P(Y=setosa) | 0.4 |
P(Y=versicolor) | 0.4 |
P(Y=virginica) | 0.2 |
Gaussian distribution parameter estimates of P(X|Y):
Class | X1=SepalLength mean | X1 std | X2=SepalWidth mean | X2 std | X3=PetalLength mean | X3 std | X4=PetalWidth mean | X4 std |
P(X|Y=setosa) | 5.0375 | 0.3576 | 3.4400 | 0.3597 | 1.4625 | 0.1698 | 0.2325 | 0.0985 |
P(X|Y=versicolor) | 6.0150 | 0.5126 | 2.7875 | 0.3257 | 4.3200 | 0.4440 | 1.3500 | 0.2049 |
P(X|Y=virginica) | 6.5600 | 0.7130 | 2.9200 | 0.3763 | 5.6550 | 0.6241 | 2.0450 | 0.2673 |
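To illustrate how the table entries enter the decision rule, the sketch below scores one hypothetical sample (the feature values are made up for illustration, not taken from the actual test set) against the setosa parameters; repeating this for all three classes and taking the largest score gives the prediction.

import numpy as np
from scipy.stats import norm

# Hypothetical sample: (SepalLength, SepalWidth, PetalLength, PetalWidth) -- illustration only
x = np.array([5.0, 3.4, 1.5, 0.2])
# Per-attribute mean and standard deviation for Y=setosa, copied from the table above
mu = np.array([5.0375, 3.4400, 1.4625, 0.2325])
sigma = np.array([0.3576, 0.3597, 0.1698, 0.0985])
prior_setosa = 0.4  # P(Y=setosa) from the prior table

# Unnormalized posterior: prior times the product of the four Gaussian densities
score = prior_setosa * np.prod(norm.pdf(x, loc=mu, scale=sigma))
print(f"score(setosa) = {score:.4f}")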
Model accuracy:
92.00%