1. Background
I'm a complete math novice, and working through the derivations takes me a lot of time. Fortunately, with the experts' source code and some help from DS, I managed to piece together at least part of the derivation in the tutorial.
* Note: throughout the derivation, X is written as a collection of column vectors with shape (feature, sample_n), which is exactly the opposite of numpy's default layout; the code below uses numpy's (sample_n, feature) layout, so the formulas appear transposed relative to the tutorial.
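A tiny sanity check of what this convention swap means in code (the names X_tut and X_np are made up just for this sketch): the tutorial's X Xᵀ is exactly the code's Xᵀ X.

```python
import numpy as np
from sklearn.datasets import load_iris

X_np, _ = load_iris(return_X_y=True)   # numpy layout: (sample_n, feature) = (150, 4)
X_tut = X_np.T                         # tutorial layout: (feature, sample_n) = (4, 150)

# The tutorial's X @ X.T is the same (feature x feature) matrix as the code's X.T @ X
print(np.allclose(X_tut @ X_tut.T, X_np.T @ X_np))   # True
```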
2. PCA / KPCA
| PCA | KPCA (linear kernel) |
| --- | --- |
| See a tutorial for the detailed derivation (I can't work through every step myself; my math is too weak). Rough outline: 1. find the W that minimizes ‖X − XWWᵀ‖²; 2. using the trace identity ‖M‖² = trace(MᵀM), rewrite the objective as a trace problem; 3. this ends up as the eigen-problem XᵀXW = λW, to be maximized while still reducing dimension, so sort the eigenvalues λ from largest to smallest and keep the top n eigenvalues (and their eigenvectors, which form W) whose cumulative ratio reaches a chosen threshold (a sketch of this thresholding step follows the PCA code below). Output: the reduced data X_d = X @ W_d. The code is simple. | See a tutorial for the detailed derivation (same caveat). Rough outline: 1. cleverly define A = XW/√λ and K = XXᵀ; 2. derive KA = λA, hence W = XᵀA/√λ (the LLM's explanation of why A must be defined this way is kept in the comments of the KPCA code below). Output: the reduced data X_d = X @ W_d = A·√λ. The code is a bit more involved, and running it gives the same result as PCA (see the quick check right after this table). |
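A quick check of the last claim in the table, that a linear-kernel KPCA reproduces PCA, using sklearn's own estimators (this only verifies the equivalence; both sklearn classes center the data internally, unlike the hand-rolled code below):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA, KernelPCA

X, _ = load_iris(return_X_y=True)

Z_pca = PCA(n_components=2).fit_transform(X)
Z_kpca = KernelPCA(n_components=2, kernel='linear').fit_transform(X)

# Eigenvectors are only defined up to sign, so compare absolute values
print(np.allclose(np.abs(Z_pca), np.abs(Z_kpca)))  # expected: True
```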
PCA
import numpy as np
from sklearn.datasets import load_digits, load_iris
from sklearn.decomposition import KernelPCA

# globally round printed floats to 2 decimals
np.set_printoptions(precision=2, suppress=True)

X, _ = load_iris(return_X_y=True)
X = X[:5]
print(X)

#========================================================
#PCA
# 1. W = eigenvectors of X.T @ X corresponding to the largest eigenvalues
# 2. X_rec=X@W@W.T
#========================================================
eVals, eVecs=np.linalg.eig(X.T@X)
print(np.allclose(X.T@X, eVecs@np.diag(eVals)@eVecs.T))

# np.linalg.eig does not guarantee any eigenvalue ordering, so sort descending
idx = np.argsort(eVals)[::-1]
eVals, eVecs = eVals[idx], eVecs[:, idx]

print('val',eVals)
print('val',eVals[:2])
print('vec',eVecs)
print('vec',eVecs[:, :2])  # eigenvectors are the columns of eVecs

# W = the top-2 eigenvectors
W = eVecs[:, :2]
print("W",W)
X_rec=X@W@W.T
print(X_rec)
print(X)
print(np.linalg.norm(X - X_rec))
print(np.var(X - X_rec))
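The listing above hard-codes two components. The thresholding step described in the table (keep top eigenvalues until their cumulative ratio reaches a cutoff) could be sketched like this; the 0.95 cutoff is an arbitrary example value, not from the original post:

```python
import numpy as np
from sklearn.datasets import load_iris

X, _ = load_iris(return_X_y=True)

# eigh is the symmetric-matrix solver; it returns eigenvalues in ascending order
eVals, eVecs = np.linalg.eigh(X.T @ X)
eVals, eVecs = eVals[::-1], eVecs[:, ::-1]        # reorder to descending

ratio = np.cumsum(eVals) / np.sum(eVals)          # cumulative eigenvalue ratio
n = int(np.searchsorted(ratio, 0.95)) + 1         # smallest n whose ratio reaches 0.95
Wd = eVecs[:, :n]

print(ratio, "-> keep", n, "component(s)")
print("Xd = X @ Wd has shape", (X @ Wd).shape)    # reduced data Xd = X @ Wd
```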
KPCA
import numpy as np
from sklearn.datasets import load_digits, load_iris
from sklearn.decomposition import KernelPCA

# globally round printed floats to 2 decimals
np.set_printoptions(precision=2, suppress=True)

X, _ = load_iris(return_X_y=True)
# X=X[:5]
# XMean=np.mean(X)
# X=X-XMean
print(X)

#========================================================
# KPCA
# set: A= XW/sqrt(lambda)
# based on PCA's conclusion: XtXW = lambda W   // W is constrained: WtW = I (orthonormal columns)
# >>> W = XtXW/lambda = XtA/sqrt(lambda)       // check: WtW = AtXXtA/lambda = AtKA/lambda = At(lambda A)/lambda = AtA = I
# >>> also: XXtXW = lambda XW  >>>  KA*sqrt(lambda) = lambda A*sqrt(lambda)  >>>  KA = lambda A
#     // KA = lambda A holds for A = XW/c with any scalar c; the 1/sqrt(lambda) choice is what makes W orthonormal
#
# 1. W = XtA/sqrt(lambda) (A is eigVec of X@X.T)
# 2. X_rec=X@W@W.T
#========================================================
# create a callable kernel PCA object
# transformer = KernelPCA(n_components=2, kernel='linear')
# X_transformed = transformer.fit_transform(X)
eVals, eVecs=np.linalg.eig(X@X.T)
print(np.allclose(X@X.T, eVecs@np.diag(eVals)@eVecs.T))

# np.linalg.eig does not guarantee any eigenvalue ordering, so sort descending
# (np.linalg.eigh would also be a natural choice here, since X@X.T is symmetric)
idx = np.argsort(eVals)[::-1]
eVals, eVecs = eVals[idx], eVecs[:, idx]

print('val',eVals)
print('val',eVals[:2])
print('vec',eVecs)
print('vec',eVecs[:, :2])  # eigenvectors are the columns of eVecs

# W = XtA/sqrt(lambda), with A = the top-2 eigenvectors of K = X@X.T
W = X.T @ eVecs[:, :2] @ np.linalg.pinv(np.sqrt(np.diag(eVals[:2])))

# X_hat = XW = XXtA/sqrt(lambda) = KA/sqrt(lambda) = lambda*A/sqrt(lambda) = A*sqrt(lambda)
# This is why the sklearn source returns A*sqrt(lambda) directly as X_transformed:
#<code>
# no need to use the kernel to transform X, use shortcut expression
# X_transformed = self.eigenvectors_ * np.sqrt(self.eigenvalues_)
#</code>
# print(X_transformed.shape)
# print(X_transformed)
#
# W=X.T@transformer.eigenvectors_
# print(transformer.eigenvectors_.shape)
# print(transformer.eigenvectors_)

print("W",W)
X_rec=X@W@W.T
print(X_rec)
print(X)
print(np.linalg.norm(X - X_rec))
print(np.var(X - X_rec))
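To tie this back to the sklearn shortcut quoted in the comments, here is a cross-check of W's orthonormality and of the A·√λ projection against KernelPCA. sklearn's KernelPCA centers the kernel internally, so X is centered first here; that centering is also why the uncentered numbers printed above differ from sklearn's output.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import KernelPCA

X, _ = load_iris(return_X_y=True)
Xc = X - X.mean(axis=0)                    # center, so the linear kernel needs no extra centering

eVals, eVecs = np.linalg.eigh(Xc @ Xc.T)   # K = Xc Xc^T, eigenvalues in ascending order
A = eVecs[:, -2:][:, ::-1]                 # top-2 eigenvectors, largest eigenvalue first
lam = eVals[-2:][::-1]

W = Xc.T @ A / np.sqrt(lam)                # W = Xt A / sqrt(lambda)
print(np.allclose(W.T @ W, np.eye(2)))     # W has orthonormal columns, as the derivation claims

Z_manual = A * np.sqrt(lam)                # the A * sqrt(lambda) shortcut
Z_sklearn = KernelPCA(n_components=2, kernel='linear').fit_transform(Xc)

# eigenvectors are only defined up to sign, so compare absolute values
print(np.allclose(np.abs(Z_manual), np.abs(Z_sklearn)))   # expected: True
```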
References:
《Python機器學習》