by Moshe Binieli
An overview of Principal Component Analysis
This article will explain what Principal Component Analysis (PCA) is, why we need it, and how we use it. I will try to make it as simple as possible while avoiding difficult examples or jargon that can cause a headache.
A moment of honesty: to fully understand this article, a basic understanding of some linear algebra and statistics is essential. Take a few minutes to review the following topics, if you need to, in order to make PCA easy to understand:
- vectors
- eigenvectors
- eigenvalues
- variance
- covariance
So how can this algorithm help us? What are the uses of this algorithm?
- Identifies the most relevant directions of variance in the data.
- Helps capture the most “important” features.
- Makes computations on the dataset easier after the dimensionality reduction, since we have less data to deal with.
- Enables visualization of the data.
Short verbal explanation.
Let’s say we have 10 variables in our dataset, and let’s assume that 3 of those variables capture 90% of the variability in the dataset, while the other 7 capture only 10%.
Let’s say we want to visualize those 10 variables. Of course, we cannot do that; we can visualize at most 3 variables (maybe in the future we will be able to visualize more).
So we have a problem: we don’t know which of the variables capture the largest variability in our data. To solve this mystery, we’ll apply the PCA algorithm. The output will tell us which variables those are. Sounds cool, doesn’t it?
So what are the steps to make PCA work? How do we apply the magic?
- Take the dataset you want to apply the algorithm on.
- Calculate the covariance matrix.
- Calculate the eigenvectors and their eigenvalues.
- Sort the eigenvectors according to their eigenvalues in descending order.
- Choose the first K eigenvectors (where K is the dimension we’d like to end up with).
- Build the new reduced dataset (a compact code sketch of all these steps follows below).
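Before diving into the real example, here is a minimal sketch of the whole recipe in NumPy. The function name pca_reduce and its argument k are illustrative choices of ours, not something defined in the article:

import numpy as np

def pca_reduce(X, k):
    # step 2: covariance matrix of the variables (the columns of X)
    covariance = np.cov(X, rowvar=False)
    # step 3: eigenvalues and eigenvectors of the symmetric covariance matrix
    eigenvalues, eigenvectors = np.linalg.eigh(covariance)
    # step 4: sort the eigenvector columns by eigenvalue, in descending order
    order = np.argsort(eigenvalues)[::-1]
    eigenvectors = eigenvectors[:, order]
    # steps 5-6: keep the first k eigenvectors and project the data onto them
    # (like the article below, this projects the uncentered data)
    return X @ eigenvectors[:, :k]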
Time for an example with real data.
1) Load the dataset into a matrix:
Our main goal is to figure out which variables are the most important for us and keep only those.
For this example, we will use the program “Spyder” for running Python. We’ll also use a pretty cool dataset that is embedded inside “sklearn.datasets”, called “load_iris”. You can read more about this dataset on Wikipedia.
First of all, we will load the iris module and transform the dataset into a matrix. The dataset contains 4 variables with 150 examples. Hence, the dimensionality of our data matrix is (150, 4).
import numpy as np
import pandas as pd
from sklearn.datasets import load_iris

irisModule = load_iris()
dataset = np.array(irisModule.data)
There are more rows in this dataset than can be shown here: as we said, there are 150 rows in total, even though only 17 of them fit on screen.
The concept of PCA is to reduce the dimensionality of the matrix by finding the directions that capture most of the variability in our data matrix. Therefore, we’d like to find them.
2) Calculate the covariance matrix:
It’s time to calculate the covariance matrix of our dataset, but what does this even mean? Why do we need to calculate the covariance matrix? What will it look like?
Variance is the expectation of the squared deviation of a random variable from its mean. Informally, it measures how far a set of numbers is spread out from their mean. The mathematical definition is:

$\mathrm{Var}(X) = E\big[(X - \mu)^2\big]$
Covariance is a measure of the joint variability of two random variables. In other words, it describes how any 2 features vary with each other. Using the covariance is very common when looking for patterns in data. The mathematical definition is:

$\mathrm{Cov}(X, Y) = E\big[(X - \mu_X)(Y - \mu_Y)\big]$
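As a quick sanity check of both definitions, we can compute the sample variance and covariance of two made-up arrays with NumPy (the values here are toy numbers of our own, not from the iris data):

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([2.0, 4.0, 6.0, 8.0])
print(np.var(a, ddof=1))     # sample variance of a
print(np.cov(a, b)[0, 1])    # sample covariance of a and b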
From this definition, we can conclude that the covariance matrix will be symmetric (and positive semi-definite). This is important because it means that its eigenvalues will be real and non-negative, and its eigenvectors will be real, which makes things easier for us (we dare you to claim that working with complex numbers is easier than working with real ones!).
After calculating the covariance matrix, it will look like this (V on the diagonal, C everywhere else):

V C C C
C V C C
C C V C
C C C V
As you can see, the main diagonal is written as V (variance) and the rest is written as C (covariance). Why is that?
Because calculating the covariance of a variable with itself is just calculating its variance (if you’re not sure why, please take a few minutes to understand what variance and covariance are).
Let’s calculate the covariance matrix of the dataset in Python using the following code:
covarianceMatrix = pd.DataFrame(data=np.cov(dataset, rowvar=False),
                                columns=irisModule.feature_names,
                                index=irisModule.feature_names)
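As a small sanity check of our own (not in the original article), the diagonal of this matrix should match the per-column sample variances of the dataset:

# the diagonal entries are just the variances of the individual variables
print(np.allclose(np.diag(covarianceMatrix), dataset.var(axis=0, ddof=1)))   # expect: True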
We’re not interested in the main diagonal, because those entries are just the variances of the individual variables. Since we’re trying to find new patterns in the dataset, we’ll ignore the main diagonal.
Since the matrix is symmetric, covariance(a, b) = covariance(b, a), so we will look only at the values above the diagonal.
Something important to mention about covariance: if the covariance of variables a and b is positive, that means they vary in the same direction. If the covariance of a and b is negative, they vary in opposite directions.
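A tiny made-up example (ours, not the article’s) makes the sign rule concrete:

x = np.array([1.0, 2.0, 3.0, 4.0])
y_same = np.array([10.0, 20.0, 30.0, 40.0])        # rises together with x
y_opposite = np.array([40.0, 30.0, 20.0, 10.0])    # falls as x rises
print(np.cov(x, y_same)[0, 1])       # positive: same direction
print(np.cov(x, y_opposite)[0, 1])   # negative: opposite directions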
3) Calculate the eigenvalues and eigenvectors:
As I mentioned at the beginning, eigenvalues and eigenvectors are basic terms you must know in order to understand this step. Therefore, I won’t explain them, but will rather move on to computing them.
The eigenvector associated with the largest eigenvalue indicates the direction in which the data has the most variance. Hence, using the eigenvalues we will know which eigenvectors capture the most variability in our data.
eigenvalues, eigenvectors = np.linalg.eig(covarianceMatrix)
This gives us the vector of eigenvalues and the matrix of eigenvectors; the i-th entry of the eigenvalues vector is associated with the i-th column of the eigenvectors matrix.
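One caveat worth hedging on: np.linalg.eig does not guarantee that the eigenvalues come out sorted (for this dataset they happen to arrive in descending order). A defensive pattern of our own is to sort them explicitly:

# sort eigenvalues in descending order and reorder the eigenvector columns to match
order = np.argsort(eigenvalues)[::-1]
eigenvalues = eigenvalues[order]
eigenvectors = eigenvectors[:, order]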
The eigenvalues: [4.224, 0.242, 0.078, 0.023]
The eigenvectors come back as a (4, 4) matrix, with one eigenvector per column.
4) Choose the first K eigenvalues (K principal components/axes):
The eigenvalues tell us the amount of variability in the direction of their corresponding eigenvectors. Therefore, the eigenvector with the largest eigenvalue is the direction with the most variability. We call this eigenvector the first principal component (or axis). From this logic, the eigenvector with the second largest eigenvalue is called the second principal component, and so on.
We see the following eigenvalues: [4.224, 0.242, 0.078, 0.023].
Let’s translate those values into percentages and visualize them. We’ll take the percentage of the total variability that each eigenvalue covers.
totalSum = sum(eigenvalues)
variablesExplained = [(i / totalSum) for i in sorted(eigenvalues, reverse=True)]
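Printing these ratios as percentages makes the numbers in the next paragraph easy to verify (a small addition of ours):

for i, ratio in enumerate(variablesExplained, start=1):
    print(f"component {i}: {ratio:.1%}")   # roughly 92.5%, 5.3%, 1.7%, 0.5%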
As you can clearly see, the first eigenvalue accounts for 92.5% of the variability and the second for 5.3%, while the third and fourth don’t cover much of the total. Therefore, we can easily decide to keep only 2 components, the first one and the second one.
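# keep only the eigenvectors of the two largest eigenvalues, i.e. the first two columns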
featureVector = eigenvectors[:,:2]
Let’s remove the third and fourth variables from the dataset. It’s important to say that at this point we lose some information. It is impossible to reduce dimensions without losing some information (under the assumption of general position). The PCA algorithm tells us the right way to reduce dimensions while keeping the maximum amount of information about our data.
And what remains is the matrix of the two chosen eigenvectors, of size (4, 2).
5) Build the new reduced dataset:
We want to build a new reduced dataset from the K chosen principal components.
We’ll take the K chosen principal components (K = 2 here), which give us a matrix of size (4, 2), and we’ll take the original dataset, which is a matrix of size (150, 4).
We’ll perform the matrix multiplication in the following way:
- The first matrix we take is the matrix that contains the K principal components we’ve chosen, and we transpose it, giving a (2, 4) matrix.
- The second matrix we take is the original dataset matrix, and we transpose it too, giving a (4, 150) matrix.
- At this point, we perform matrix multiplication between those two matrices, giving a (2, 150) matrix.
- After we perform the matrix multiplication, we transpose the result matrix, ending up with a (150, 2) matrix.
featureVectorTranspose = np.transpose(featureVector)
datasetTranspose = np.transpose(dataset)
newDatasetTranspose = np.matmul(featureVectorTranspose, datasetTranspose)
newDataset = np.transpose(newDatasetTranspose)
After performing the matrix multiplication and transposing the result, these are the values we get for the new dataset, which contains only the K principal components we’ve chosen.
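As a side note, since transposing twice cancels out, the same projection can be written in a single line; this equivalent form is our simplification rather than the article’s code:

newDataset = np.matmul(dataset, featureVector)   # (150, 4) x (4, 2) -> (150, 2)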
Conclusion
As (we hope) you can now see, PCA is not that hard. We’ve managed to reduce the dimensionality of the dataset pretty easily using Python.
In our dataset, we did not cause a serious impact, because we removed only 2 variables out of 4. But let’s assume we have 200 variables in our dataset and we reduce from 200 variables to 3; that’s already far more meaningful.
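In day-to-day work, once you understand these steps, you would probably use a ready-made implementation. For example, scikit-learn (which we already imported the dataset from) wraps the whole procedure; note that it also centers the data before projecting, so its coordinates differ from our hand-rolled version by a constant shift:

from sklearn.decomposition import PCA

pca = PCA(n_components=2)
reducedDataset = pca.fit_transform(dataset)    # shape (150, 2)
print(pca.explained_variance_ratio_)           # roughly [0.925, 0.053]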
Hopefully, you’ve learned something new today. Feel free to contact Chen Shani or Moshe Binieli on LinkedIn with any questions.
Translated from: https://www.freecodecamp.org/news/an-overview-of-principal-component-analysis-6340e3bc4073/