Do you really understand Big O? If so, then this will refresh your understanding before an interview. If not, don’t worry — come and join us for some endeavors in computer science.
If you have taken some algorithm related courses, you’ve probably heard of the term Big O notation. If you haven’t, we will go over it here, and then get a deeper understanding of what it really is.
Big O notation is one of the most fundamental tools for computer scientists to analyze the cost of an algorithm. It is also good practice for software engineers to understand it in depth.
This article is written with the assumption that you have already tackled some code. Some of the in-depth material also requires high-school math fundamentals, and therefore can be a bit less comfortable for total beginners. But if you are ready, let’s get started!
In this article, we will have an in-depth discussion about Big O notation. We will start with an example algorithm to open up our understanding. Then, we will go into the mathematics a little bit to have a formal understanding. After that we will go over some common variations of Big O notation. In the end, we will discuss some of the limitations of Big O in a practical scenario. A table of contents can be found below.
Table of Contents
- What is Big O notation, and why does it matter
- Formal Definition of Big O notation
- Big O, Little O, Omega & Theta
- Complexity Comparison Between Typical Big Os
- Time & Space Complexity
- Best, Average, Worst, Expected Complexity
- Why Big O doesn’t matter
- In the end…
So let’s get started.
1. What is Big O Notation, and why does it matter
“Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. It is a member of a family of notations invented by Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation.”
— Wikipedia’s definition of Big O notation
In plain words, Big O notation describes the complexity of your code using algebraic terms.
To understand what Big O notation is, we can take a look at a typical example, O(n²), which is usually pronounced “Big O squared”. The letter “n” here represents the input size, and the function “g(n) = n²” inside the “O()” gives us an idea of how complex the algorithm is with respect to the input size.
A typical algorithm that has the complexity of O(n²) would be the selection sort algorithm. Selection sort is a sorting algorithm that iterates through the list to ensure every element at index i is the ith smallest/largest element of the list. The CODEPEN below gives a visual example of it.
The algorithm can be described by the following code. In order to make sure the ith element is the ith smallest element in the list, this algorithm first iterates through the list with a for loop. Then for every element it uses another for loop to find the smallest element in the remaining part of the list.
SelectionSort(List) {
  for(i from 0 to List.Length) {
    SmallestElement = List[i]
    for(j from i to List.Length) {
      if(SmallestElement > List[j]) {
        SmallestElement = List[j]
      }
    }
    Swap(List[i], SmallestElement)
  }
}
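The pseudocode above translates fairly directly into runnable Python (a sketch; the list is assumed to hold comparable elements):

```python
def selection_sort(lst):
    """Sort lst in place so lst[i] ends up as the i-th smallest element."""
    n = len(lst)
    for i in range(n):
        smallest = i
        # Scan the remaining part of the list for the smallest element.
        for j in range(i, n):
            if lst[j] < lst[smallest]:
                smallest = j
        # Swap it into position i.
        lst[i], lst[smallest] = lst[smallest], lst[i]
    return lst

print(selection_sort([5, 2, 4, 1, 3]))  # -> [1, 2, 3, 4, 5]
```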
In this scenario, we consider the variable List as the input, so the input size n is the number of elements inside List. Assume the if statement, and the value assignment bounded by the if statement, take constant time. Then we can find the big O notation for the SelectionSort function by analyzing how many times its statements are executed.
First the inner for loop runs the statements inside it n times. Then, after i is incremented, the inner for loop runs n−1 times… until it runs once, when both of the for loops reach their terminating conditions.
This actually ends up giving us an arithmetic sum. With some high-school math, and counting only the comparisons against elements after index i, we would find that the inner loop repeats 1 + 2 + … + (n−1) times, which equals n(n−1)/2 times. If we multiply this out, we will end up getting n²/2 − n/2.
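We can verify the n(n−1)/2 count empirically with a small Python sketch that mirrors the two nested loops and counts only comparisons between distinct pairs of elements:

```python
def count_comparisons(n):
    # Mirror the nested loops of SelectionSort and count how many
    # times the inner comparison runs for an input of size n.
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            count += 1
    return count

# Matches the closed form n(n-1)/2 from the text.
for n in (1, 5, 10, 100):
    assert count_comparisons(n) == n * (n - 1) // 2
```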
When we calculate big O notation, we only care about the dominant terms, and we do not care about the coefficients. Thus we take n² as our final big O. We write it as O(n²), which again is pronounced “Big O squared”.
Now you may be wondering, what is this “dominant term” all about? And why do we not care about the coefficients? Don’t worry, we will go over them one by one. It may be a little bit hard to understand at the beginning, but it will all make a lot more sense as you read through the next section.
2. Formal Definition of Big O notation
Once upon a time there was an Indian king who wanted to reward a wise man for his excellence. The wise man asked for nothing but some wheat that would fill up a chess board.
But here were his rules: on the first tile he wants 1 grain of wheat, then 2 on the second tile, then 4 on the next one… each tile on the chess board needed to be filled with double the number of grains as the previous one. The naïve king agreed without hesitation, thinking it would be a trivial demand to fulfill, until he actually went on and tried it…
So how many grains of wheat does the king owe the wise man? We know that a chess board has 8 squares by 8 squares, which totals 64 tiles, so the final tile alone should have 2⁶³ grains of wheat, and the whole board 2⁶⁴ − 1. If you do the calculation, you will end up getting about 1.8446744×10¹⁹, that is, 18 followed by 18 zeroes. Assuming that each grain of wheat weighs 0.01 grams, that gives us 184,467,440,737 tons of wheat. And 184 billion tons is quite a lot, isn’t it?
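The arithmetic is easy to reproduce in Python, where integers never overflow:

```python
last_tile = 2 ** 63               # grains on the 64th tile alone
total = 2 ** 64 - 1               # 1 + 2 + 4 + ... + 2^63, the whole board
tons = total * 0.01 / 1_000_000   # 0.01 g per grain, 1,000,000 g per ton

print(total)        # 18446744073709551615
print(round(tons))  # 184467440737
```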
The numbers grow quite fast under exponential growth, don’t they? The same logic goes for computer algorithms. If the required effort to accomplish a task grows exponentially with respect to the input size, it can end up becoming enormously large.
Now the square of 64 is 4096. If you add that number to 2⁶⁴, it will be lost outside the significant digits. This is why, when we look at the growth rate, we only care about the dominant terms. And since we want to analyze the growth with respect to the input size, the coefficients, which only multiply the number rather than growing with the input size, do not contain useful information.
Below is the formal definition of Big O:

“f(n) is O(g(n))” iff for some constants c and N₀, f(N) ≤ cg(N) for all N > N₀
The formal definition is useful when you need to perform a math proof. For example, the time complexity for selection sort can be defined by the function f(n) = n²/2 − n/2, as we have discussed in the previous section.
If we allow our function g(n) to be n², we can find a constant c = 1 and an N₀ = 0 such that, for all N > N₀, N² will always be greater than N²/2 − N/2. We can easily prove this by subtracting N²/2 from both functions; then we can easily see that N²/2 > −N/2 holds when N > 0. Therefore, we can come up with the conclusion that f(n) = O(n²); in other words, selection sort is “big O squared”.
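The inequality in the proof can also be spot-checked numerically; the sketch below uses the same c = 1 and N₀ = 0 as the text:

```python
def f(n):
    return n * n / 2 - n / 2   # selection sort's operation count

def g(n):
    return n * n               # the candidate bound

c, N0 = 1, 0
# f(N) <= c * g(N) must hold for every N > N0; check a sample range.
assert all(f(N) <= c * g(N) for N in range(N0 + 1, 10_000))
```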
You might have noticed a little trick here. That is, if you make g(n) grow super fast, way faster than anything, O(g(n)) will always be great enough. For example, for any polynomial function, you can always be right by saying that it is O(2ⁿ), because 2ⁿ will eventually outgrow any polynomial.
Mathematically, you are right, but generally when we talk about Big O, we want to know the tight bound of the function. You will understand this more as you read through the next section.
But before we go on, let’s test your understanding with the following question. The answer will be found in a later section, so it won’t be a throwaway.
Question: An image is represented by a 2D array of pixels. If you use a nested for loop to iterate through every pixel (that is, you have a for loop going through all the columns, then another for loop inside to go through all the rows), what is the time complexity of the algorithm when the image is considered as the input?
3. Big O, Little O, Omega & Theta
Big O: “f(n) is O(g(n))” iff for some constants c and N₀, f(N) ≤ cg(N) for all N > N₀
Omega: “f(n) is Ω(g(n))” iff for some constants c and N₀, f(N) ≥ cg(N) for all N > N₀
Theta: “f(n) is Θ(g(n))” iff f(n) is O(g(n)) and f(n) is Ω(g(n))
Little O: “f(n) is o(g(n))” iff f(n) is O(g(n)) and f(n) is not Θ(g(n))
—Formal Definition of Big O, Omega, Theta and Little O
In plain words:
Big O (O()) describes the upper bound of the complexity.
Omega (Ω()) describes the lower bound of the complexity.
Theta (Θ()) describes the exact bound of the complexity.
Little O (o()) describes the upper bound excluding the exact bound.
For example, the function g(n) = n² + 3n is O(n³), o(n⁴), Θ(n²) and Ω(n). But you would still be right if you say it is Ω(n²) or O(n²).
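The Θ(n²) claim can be spot-checked by squeezing g(n) = n² + 3n between two multiples of n² (here 1·n² and 2·n², which works once n ≥ 3, since 3n ≤ n² exactly when n ≥ 3):

```python
def g(n):
    return n * n + 3 * n

# Theta(n^2): g is squeezed between 1*n^2 and 2*n^2 for all n >= 3.
assert all(n * n <= g(n) <= 2 * n * n for n in range(3, 10_000))
```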
Generally, when we talk about Big O, what we actually mean is the Theta bound. Giving an upper bound that is way larger than the scope of the analysis is kind of meaningless. This would be similar to solving inequalities by putting ∞ on the larger side, which will almost always make you right.
But how do we determine which functions are more complex than others? We will learn that in detail in the next section.
4. Complexity Comparison Between Typical Big Os
When we are trying to figure out the Big O for a particular function g(n), we only care about the dominant term of the function. The dominant term is the term that grows the fastest.
For example, n² grows faster than n, so if we have something like g(n) = n² + 5n + 6, it will be O(n²). If you have taken some calculus before, this is very similar to the shortcut for finding limits of rational polynomials, where in the end you only care about the dominant terms of the numerator and denominator.
But which function grows faster than the others? There are actually quite a few rules.
1. O(1) has the least complexity
Often called “constant time”, if you can create an algorithm that solves the problem in O(1), you are probably at your best. In some scenarios the complexity may go below O(1); then we can analyze it by finding its O(1/g(n)) counterpart. For example, O(1/n) is more complex than O(1/n²).
2. O(log(n)) is more complex than O(1), but less complex than polynomials
As complexity is often related to divide and conquer algorithms, O(log(n)) is generally a good complexity you can reach for searching algorithms such as binary search. O(log(n)) is less complex than O(√n), because the square root function can be considered a polynomial, where the exponent is 0.5.
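Binary search is the classic O(log(n)) example: each step halves the remaining range. A minimal sketch in Python:

```python
def binary_search(sorted_lst, target):
    # Each iteration halves the search range, so at most
    # about log2(n) iterations are needed.
    lo, hi = 0, len(sorted_lst) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_lst[mid] == target:
            return mid
        if sorted_lst[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

print(binary_search([1, 3, 5, 7, 9], 7))  # -> 3
```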
3. Complexity of polynomials increases as the exponent increases
For example, O(n³) is more complex than O(n²). Due to its simplicity, we actually went over quite a few examples of polynomials in the previous sections.
4. Exponentials have greater complexity than polynomials as long as the coefficients are positive multiples of n
O(2ⁿ) is more complex than O(n⁹⁹), but O(2^(1/n)) is actually less complex than O(1). We generally take 2 as the base for exponentials and logarithms because things tend to be binary in computer science, but exponents can be changed by changing the coefficients. If not specified, the base for logarithms is assumed to be 2.
5. Factorials have greater complexity than exponentials
If you are interested in the reasoning, look up the Gamma function, it is an analytic continuation of a factorial. A short proof is that both factorials and exponentials have the same number of multiplications, but the numbers that get multiplied grow for factorials, while remaining constant for exponentials.
6. Multiplying terms
When multiplying, the complexity will be greater than the original, but no more than multiplying by something more complex. For example, O(n · log(n)) is more complex than O(n) but less complex than O(n²), because O(n²) = O(n · n) and n is more complex than log(n).
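Evaluating the three functions at a single large n makes the ordering O(n) < O(n·log(n)) < O(n²) concrete:

```python
import math

n = 1_000_000
linear = n                       # O(n)
linearithmic = n * math.log2(n)  # O(n * log(n))
quadratic = n * n                # O(n^2)

# The values respect the ordering of the complexity classes.
assert linear < linearithmic < quadratic
```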
To test your understanding, try ranking the following functions from the most complex to the least complex. The solutions with detailed explanations can be found in a later section as you read. Some of them are meant to be tricky and may require some deeper understanding of math. As you get to the solution, you will understand them more.
Question: Rank the following functions from the most complex to the least complex.
Solution to Section 2 Question:
It was actually meant to be a trick question to test your understanding. The question tries to make you answer O(n²) because there is a nested for loop. However, n is supposed to be the input size. Since the image array is the input, and every pixel is iterated through only once, the answer is actually O(n). The next section will go over more examples like this one.
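A quick sketch makes this concrete: the loop body runs exactly once per pixel, so the work is linear in the number of pixels (the image dimensions below are arbitrary):

```python
def count_pixel_visits(rows, cols):
    # Hypothetical image traversal: nested loops touching each pixel once.
    visits = 0
    for r in range(rows):
        for c in range(cols):
            visits += 1
    return visits

# The work equals the number of pixels, i.e. the input size n.
assert count_pixel_visits(480, 640) == 480 * 640
```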
5. Time & Space Complexity
So far, we have only been discussing the time complexity of the algorithms. That is, we only care about how much time it takes for the program to complete the task. What also matters is the space the program takes to complete the task. The space complexity is related to how much memory the program will use, and therefore is also an important factor to analyze.
Space complexity works similarly to time complexity. For example, selection sort has a space complexity of O(1), because it stores only one minimum value and its index for comparison; the maximum space used does not increase with the input size.
Some algorithms, such as bucket sort, have a space complexity of O(n), but are able to chop the time complexity down to O(n) (assuming the values fall within a bounded range). Bucket sort sorts the array by creating a counter for every possible element in the array, then incrementing a counter whenever its element is encountered. In the end, the sorted array will be the possible elements repeated by their counts.
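A minimal counting-style bucket sort might look like the sketch below; note that the O(n) time claim assumes the values fall in a known, bounded range (`max_value` is a parameter introduced here for illustration):

```python
def bucket_sort(lst, max_value):
    # One counter per possible value: O(max_value) extra space.
    counts = [0] * (max_value + 1)
    for x in lst:                 # one pass over the input: O(n)
        counts[x] += 1
    # Rebuild the sorted list from the counts.
    result = []
    for value, count in enumerate(counts):
        result.extend([value] * count)
    return result

print(bucket_sort([3, 1, 4, 1, 5], max_value=5))  # -> [1, 1, 3, 4, 5]
```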
6. Best, Average, Worst, Expected Complexity
The complexity can also be analyzed as best case, worst case, average case and expected case.
Let’s take insertion sort, for example. Insertion sort iterates through all the elements in the list. If an element is smaller than its previous element, it swaps the element backwards until it is larger than the previous element.
If the array is initially sorted, no swap will be made. The algorithm will just iterate through the array once, which results in a time complexity of O(n). Therefore, we would say that the best-case time complexity of insertion sort is O(n). A complexity of O(n) is also often called linear complexity.
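An instrumented insertion sort shows the gap between the best and the worst case: on an already-sorted input of length 100 it makes n − 1 = 99 comparisons, while on a reversed input it makes n(n − 1)/2 = 4950:

```python
def insertion_sort(lst):
    lst = list(lst)
    comparisons = 0
    for i in range(1, len(lst)):
        j = i
        # Shift lst[j] backwards while it is smaller than its predecessor.
        while j > 0:
            comparisons += 1
            if lst[j - 1] <= lst[j]:
                break
            lst[j - 1], lst[j] = lst[j], lst[j - 1]
            j -= 1
    return lst, comparisons

_, best = insertion_sort(list(range(100)))          # already sorted
_, worst = insertion_sort(list(range(99, -1, -1)))  # reverse sorted
print(best, worst)  # 99 4950
```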
Sometimes an algorithm just has bad luck. Quick sort, for example, will have to go through the list in O(n²) time if the elements are sorted in the opposite order, but on average it sorts the array in O(n · log(n)) time. Generally, when we evaluate the time complexity of an algorithm, we look at its worst-case performance. More on that, and on quick sort, will be discussed in the next section.
The average case complexity describes the expected performance of the algorithm; it sometimes involves calculating the probability of each scenario. Going into the details can get complicated, and it is therefore not discussed in this article. Below is a cheat sheet on the time and space complexity of typical algorithms.
Solution to Section 4 Question:
By inspecting the functions, we should be able to immediately rank the following polynomials from most complex to least complex using rule 3, where the square root of n is just n to the power of 0.5.
Then by applying rules 2 and 6, we will get the following. A base-3 log can be converted to base 2 with the log base conversion formula. It still grows a little bit slower than a base-2 log, and therefore gets ranked after it.
The rest may look a little bit tricky, but let’s try to unveil their true faces and see where we can put them.
First of all, 2 to the power of 2 to the power of n is greater than 2 to the power of n, and the +1 spices it up even more.
And then, since we know that 2 to the power of log₂(n) is equal to n, we can convert the following. The log with 0.001 as its exponent grows a little bit more than constants, but less than almost anything else.
The one with n to the power of log(log(n)) is actually a variation of a quasi-polynomial, which is greater than polynomial but less than exponential. Since log(n) grows slower than n, its complexity is a bit less. The one with the inverse log converges to a constant, as 1/log(n) converges to zero.
The factorials can be represented by multiplications, and thus can be converted to additions outside the logarithmic function. The “n choose 2” can be converted into a polynomial, with a quadratic term being the largest.
And finally, we can rank the functions from the most complex to the least complex.
Why Big O doesn’t matter
!!! — WARNING — !!!
Contents discussed here are generally not accepted by most programmers in the world. Discuss it at your own risk in an interview. People actually blogged about how they failed their Google interviews because they questioned the authority, like here.
!!! — WARNING — !!!
Since we have previously learned that the worst case time complexity is O(n²) for quick sort, but O(n · log(n)) for merge sort, merge sort should be faster — right? Well you probably have guessed that the answer is false. The algorithms are just wired up in a way that makes quick sort the “quick sort”.
To demonstrate, check out this trinket.io I made. It compares the time for quick sort and merge sort. I have only managed to test it on arrays with a length up to 10000, but as you can see so far, the time for merge sort grows faster than quick sort. Despite quick sort having a worst-case complexity of O(n²), the likelihood of that is really low. When it comes to the speed advantage quick sort has over merge sort, which is bounded by the O(n · log(n)) complexity, quick sort ends up with better performance on average.
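The trinket itself is not reproduced here, but a comparable experiment can be sketched with `timeit`; the exact timings depend on the machine and on these particular (non-in-place) implementations, so only the trend is meaningful:

```python
import random
import timeit

def quick_sort(lst):
    if len(lst) <= 1:
        return lst
    pivot = lst[len(lst) // 2]  # middle pivot sidesteps the sorted-input worst case
    return (quick_sort([x for x in lst if x < pivot])
            + [x for x in lst if x == pivot]
            + quick_sort([x for x in lst if x > pivot]))

def merge_sort(lst):
    if len(lst) <= 1:
        return lst
    mid = len(lst) // 2
    left, right = merge_sort(lst[:mid]), merge_sort(lst[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

data = [random.randint(0, 10_000) for _ in range(10_000)]
print("quick:", timeit.timeit(lambda: quick_sort(data), number=3))
print("merge:", timeit.timeit(lambda: merge_sort(data), number=3))
```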
I have also made the graph below to compare the ratio between the times they take, as the difference is hard to see at lower values. And as you can see, the percentage of time taken by quick sort is in descending order.
The moral of the story is: Big O notation is only a mathematical analysis that provides a reference on the resources consumed by an algorithm. In practice, the results may differ. But it is generally good practice to try to chop down the complexity of our algorithms, unless we run into a case where we know what we are doing.
In the end…
I like coding, learning new things and sharing them with the community. If there is anything in which you are particularly interested, please let me know. I generally write on web design, software architecture, mathematics and data science. You can find some great articles I have written before if you are interested in any of the topics above.
Hope you have a great time learning computer science!!!
Translated from: https://www.freecodecamp.org/news/big-o-notation-why-it-matters-and-why-it-doesnt-1674cfa8a23c/