Bet you can’t solve this Google interview question.

by Kevin Ghadyani

Breaking tough problems into small pieces.

I wanted to see someone else’s take on software engineering and started binge watching TechLead on YouTube. I spent the next few days coming up with various solutions to an interview question he asked while working at Google.

This Video Got Me Excited

TechLead brought up a question he asked in over 100 interviews at Google. It got me curious to think up a solution in RxJS. This article’s going to go over traditional methods though.

The real goal of his question is to get information from the interviewee. Will they ask the right questions before coding? Is the solution going to fit into the guidelines of the project? He even notes it doesn’t matter at all if you get the right answer. He wants to figure out how you think and if you can even understand the problem.

He talked about a few solutions, one that was recursive (limited by stack size), another that was iterative (limited by memory size). We’ll be looking into both of these and more!

TechLead’s Question

In his question, he asks us to take this grid of items and get the count of the largest contiguous block where all colors are the same.

When I heard his question and saw the picture, I was thinking “oh man, I’ve gotta do some 2D image modeling to figure this out”. Sounds near-impossible to answer during an interview.

But after he explained it more, that’s really not the case. You’re processing the data that’s already been captured, not parsing an image. I realize now that calling it an image is actually a bit of a misnomer.

Data Modeling

Before you write any code, you need to define your data model. I can’t stress this enough. Before coding anything this advanced, first figure out what you’re working with and gather business requirements.

In our case, TechLead defined a lot of those requirements for us:

  • Concept of a colored square or “node” as we’ll call it.
  • There are 10K nodes in our dataset.
  • Nodes are organized into columns and rows (2D).
  • The number of columns and rows can be uneven.
  • Nodes have colors and some way to denote adjacencies.

We can also derive some more information from our data:

  • No two nodes will overlap.
  • Nodes will never be adjacent to themselves.
  • A node will never have duplicate adjacencies.
  • Nodes that reside on the sides and corners will be missing one or two adjacencies respectively.

What we don’t know:

  • The ratio of rows to columns.
  • The number of possible colors.
  • The chances of having only 1 color.
  • The rough distribution of colors.

The higher level you are as a developer, the more of these questions you’ll know to ask. It’s not a matter of experience either. While that helps, it doesn’t make you any better if you can’t pick out the unknowns.

I don’t expect most people to pick out these unknowns. Until I started working the algorithm in my head, I didn’t know them all either. Unknowns take time to figure out. It’s a lot of discussion and back-and-forth with business people to find all the kinks.

Looking at his image, it appears as though the distribution is random. He used only 3 colors and never said anything otherwise, so we will too. We’ll also assume there’s a likelihood all colors will be the same.

Since it could kill our algorithm, I’m going to assume we’re working with a 100x100 grid. That way, we don’t have to deal with the odd cases of 1 row and 10K columns.

In a typical setting, I would ask all these questions within the first few hours of data-discovery. That’s what TechLead really cared about. Are you gonna start by coding a random solution or are you going to figure out the problem?

You’re going to make mistakes in your data model. I know I did when first writing this article, but if you plan ahead, those issues will be a lot easier to manage. I ended up having to only rewrite small portions of my code because of it.

Creating the Data Model

We need to know how data is coming in and in what format we want to process it.

Since we don’t have a system in place for processing the data, we need to come up with a visualization ourselves.

The basic building blocks of our data:

  • Color
  • ID
  • X
  • Y

Why do we need an ID? Because we could come across the same item more than once. We want to prevent infinite loops so we need to mark where we’ve been in those cases.

Also, data like this will typically be assigned some sort of ID, hash, or other value. It’s a unique identifier so we have some way to identify that particular node. If we want to know the largest contiguous block, we need to know which nodes are in that block.

Since he shaped out the data in a grid, I’m going to assume we’ll get it back with X and Y values. Using just those properties, I was able to generate some HTML to ensure what we’re generating looks like what he’s given us.

This was done using absolute positioning just like his example:

It even works with a larger dataset:

Here’s the code which generates the nodes:

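A minimal sketch of such a generator (the function name, argument shape, and the three-color colorId range are assumptions rather than his exact code):

```javascript
// Build a 1D array sized columns * rows, then derive each node's x/y from
// its index and assign a random colorId.
const generateNodes = ({ numberOfColumns, numberOfRows }) => (
  Array(numberOfColumns * numberOfRows)
    .fill(null)
    .map((_, index) => ({
      colorId: Math.floor(Math.random() * 3), // assumes 3 possible colors
      id: index,
      x: index % numberOfColumns,
      y: Math.floor(index / numberOfColumns),
    }))
)

// A 100x100 grid gives us the 10K nodes from the requirements.
const nodes = generateNodes({ numberOfColumns: 100, numberOfRows: 100 })
```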

We take our columns and rows, create a 1D array out of the number of items, then generate our nodes off that data.

Instead of a color, I’m using colorId. First, because it’s cleaner to randomize. Second, we’d usually have to look up the color value ourselves.

While he never explicitly stated it, he only used 3 color values. I’m limiting our dataset to 3 colors as well. Just know it could be hundreds of colors and the final algorithms wouldn’t need to change.

As a simpler example, here’s a 2x2 nodes list:

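Something along these lines (illustrative values only):

```javascript
const nodes = [
  { colorId: 0, id: 0, x: 0, y: 0 },
  { colorId: 1, id: 1, x: 1, y: 0 },
  { colorId: 1, id: 2, x: 0, y: 1 },
  { colorId: 2, id: 3, x: 1, y: 1 },
]
```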

Data Processing

No matter which method we’re going to use, we want to know the adjacencies for each of these nodes. X and Y values aren’t going to cut it.

So given an X and Y, we need to figure out how to find adjacent X and Y values. It’s pretty simple though. We simply find nodes plus and minus 1 on both X and Y.

I wrote a helper function for that piece of the logic:

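A sketch of that helper, assuming it takes the node list plus an x/y pair and hands back the matching node’s id (or undefined for off-grid locations):

```javascript
// Find the node at a given x/y and return its id.
// Off-grid coordinates simply return undefined.
const getNodeAtLocation = (nodes) => ({ x, y }) => (
  (nodes.find(node => (
    node.x === x
    && node.y === y
  )) || {}).id
)
```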

The way we’re generating nodes, there’s actually a mathematical way to figure out the IDs of adjacent nodes. Instead, I’m going to assume nodes will come into our system in random order.

I ran all nodes through a second pass to add adjacencies:

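Roughly, that second pass could look like this sketch; his version differs in the details, but the idea is to look up the four cardinal neighbor ids for each node (reusing getNodeAtLocation from above; the colors palette is an assumed lookup):

```javascript
// Assumed palette for turning a colorId into a color.
const colors = ['red', 'green', 'blue']

// For every node, find the ids of the nodes directly north, east, south,
// and west of it, then keep only the ids that actually exist so corners
// and sides need no special handling.
const addAdjacencies = (nodes) => (
  nodes.map(({ colorId, id, x, y }) => ({
    color: colors[colorId],
    id,
    adjacentIds: [
      getNodeAtLocation(nodes)({ x, y: y - 1 }), // northId
      getNodeAtLocation(nodes)({ x: x + 1, y }), // eastId
      getNodeAtLocation(nodes)({ x, y: y + 1 }), // southId
      getNodeAtLocation(nodes)({ x: x - 1, y }), // westId
    ].filter(adjacentId => adjacentId !== undefined),
  }))
)
```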

I’ve avoided making any unnecessary optimizations in this preprocessor code. It won’t affect our final performance stats and will only help to simplify our algorithms.

I went ahead and changed the colorId into a color. It’s completely unnecessary for our algorithms, but I wanted to make it easier to visualize.

We call getNodeAtLocation for each set of adjacent X and Y values and find our northId, eastId, southId, and westId. We don’t pass on our X and Y values since they’re no longer required.

After getting our cardinal IDs, we convert them to a single adjacentIds array which includes only those that have a value. That way, if we have corners and sides, we don’t have to worry about checking if those IDs are null. It also allows us to loop an array instead of manually noting each cardinal ID in our algorithms.

Here’s another 2x2 example using a new set of nodes run through addAdjacencies:

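The output is shaped something like this (made-up colors and adjacencies, just to show the structure):

```javascript
const nodesWithAdjacencies = [
  { color: 'red',  id: 0, adjacentIds: [1, 2] },
  { color: 'blue', id: 1, adjacentIds: [0, 3] },
  { color: 'blue', id: 2, adjacentIds: [0, 3] },
  { color: 'red',  id: 3, adjacentIds: [1, 2] },
]
```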

Preprocessing Optimizations

I wanted to simplify the algorithms for this article greatly, so I added in another optimization pass. This one removes adjacent IDs that don’t match the current node’s color.

After rewriting our addAdjacencies function, this is what we have now:

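A sketch of that rewrite; the details are assumptions, but the two new steps are filtering adjacentIds down to same-color neighbors and then dropping any node whose adjacentIds list ends up empty:

```javascript
// Same pass as before, but adjacentIds now keeps only same-color neighbors,
// and nodes with no same-color neighbors are removed entirely.
const addAdjacencies = (nodes) => (
  nodes
    .map(({ colorId, id, x, y }) => ({
      color: colors[colorId],
      id,
      adjacentIds: [
        getNodeAtLocation(nodes)({ x, y: y - 1 }),
        getNodeAtLocation(nodes)({ x: x + 1, y }),
        getNodeAtLocation(nodes)({ x, y: y + 1 }),
        getNodeAtLocation(nodes)({ x: x - 1, y }),
      ]
      .filter(adjacentId => adjacentId !== undefined)
      .filter(adjacentId => (
        nodes.find(node => node.id === adjacentId).colorId === colorId
      )),
    }))
    .filter(({ adjacentIds }) => adjacentIds.length > 0)
)
```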

I slimmed down addAdjacencies while adding more functionality.

By removing nodes that don’t match in color, our algorithm can be 100% sure any IDs in the adjacentIds prop are contiguous nodes.

Lastly, I removed any nodes that don’t have same-color adjacencies. That simplifies our algorithms even more, and we’ve shrunk the total nodes to only those we care about.

The Wrong Way — Recursion

TechLead stated we couldn’t do this algorithm recursively because we’d hit a stack overflow.

While he’s partly correct, there are a few ways to mitigate this problem. Either iterating or using tail recursion. We’ll get to the iterative example, but JavaScript no longer has tail recursion as a native language feature.

While we can still simulate tail recursion in JavaScript, we’re going to keep this simple and create a typical recursive function instead.

Before we hit the code, we need to figure out our algorithm. For recursion, it makes sense to use a depth-first search. Don’t worry about knowing the computer science term. A coworker said it when I was showing him the different solutions I came up with.

The Algorithm

We’ll start with a node and go as far as we can until we’ve hit an endpoint. Then we’ll come back and take the next branching path until we’ve scanned the entire contiguous block.

That’s part of it. We also have to keep track of where we’ve been and the length of the largest contiguous block.

What I did was divide our function into 2 pieces. One would hold the largest list and previously scanned IDs while looping every node at least once. The other would start at an unscanned root node and do our depth-first traversal.

Here’s what those functions look like:

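Here’s a rough reconstruction of that pair; the parameter names and exact structure are guesses at his gist, but it follows the shape just described:

```javascript
// Depth-first walk: add the current node to contiguousIds, then recurse
// into any same-color neighbor we haven't visited yet.
const getContiguousIds = ({ contiguousIds = [], node, nodes }) => (
  node.adjacentIds.reduce(
    (contiguousIds, adjacentId) => (
      contiguousIds.includes(adjacentId)
        ? contiguousIds
        : getContiguousIds({
            contiguousIds,
            node: nodes.find(({ id }) => id === adjacentId),
            nodes,
          })
    ),
    contiguousIds.concat(node.id),
  )
)

// Outer loop: run the recursion once per unscanned node and keep whichever
// contiguous block turns out to be the largest.
const getLargestContiguousNodes = (nodes) => (
  nodes.reduce(
    ({ largestContiguousIds, scannedIds }, node) => {
      if (scannedIds.includes(node.id)) {
        return { largestContiguousIds, scannedIds }
      }

      const contiguousIds = getContiguousIds({ node, nodes })

      return {
        largestContiguousIds: (
          contiguousIds.length > largestContiguousIds.length
            ? contiguousIds
            : largestContiguousIds
        ),
        scannedIds: scannedIds.concat(contiguousIds),
      }
    },
    { largestContiguousIds: [], scannedIds: [] },
  )
  .largestContiguousIds
)
```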

Insane right? I even debated showing the code because it gets so gnarly.

To slim this down, let’s go step-by-step.

The Recursive Function

getContiguousIds is our recursive function. This gets called once for each node. Each time it returns, you get an updated list of contiguous nodes.

There is only one condition in this function: is our node already in the list? If not, call getContiguousIds again. When it returns, we’ll have an updated list of contiguous nodes, which is returned to our reducer and used as the state for the next adjacentId.

You might be wondering where we’re adding values to contiguousIds. That happens when we concat the current node onto contiguousIds. Each time we recurse further, we’re making sure to add the current node onto the list of contiguousIds before looping its adjacentIds.

Always adding the current node ensures we don’t infinitely recurse.

The Loop

The second half of this function also goes through each node once.

We have a reducer surrounding the recursive function. This one checks if our node’s been scanned. If so, keep looping until we find a node that hasn’t been or until we fall out of the loop.

If our node hasn’t been scanned, call getContiguousIds and wait till it’s done. This is synchronous, but it can take some time.

Once it comes back with a list of contiguousIds, check those against the largestContiguousIds list. If larger, store that value.

At the same time, we’re going to add those contiguousIds to our scannedIds list to mark where we’ve been.

It’s pretty simple when you see it all laid out.

Execution

Even with 10K items, it didn’t run into stack overflow issues with 3 randomized colors. If I changed everything to use a single color, I was able to run into a stack overflow. That’s because our recursive function was going through 10K recursions.

Sequential Iteration

Since memory is larger than the function call stack, my next thought was doing the whole thing in a single loop.

We’re going to keep track of a list of node lists. We’ll keep adding to them and linking them together until we fall out of the loop.

This method requires we keep all possible node lists in memory until we’ve completed the loop. In the recursive example, we only kept the largest list in memory.

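A sketch of how that might go (an approximation, not his actual gist):

```javascript
// Build up a list of node-id lists, merging lists whenever a node turns
// out to belong to more than one of them.
const getLargestContiguousNodes = (nodes) => {
  const contiguousIdsList = nodes.reduce(
    (contiguousIdsList, { adjacentIds, id }) => {
      // Every list this node already appears in, either by its own id or
      // because a previously visited neighbor queued it up.
      const linkedLists = contiguousIdsList.filter(contiguousIds => (
        contiguousIds.includes(id)
      ))

      const otherLists = contiguousIdsList.filter(contiguousIds => (
        !contiguousIds.includes(id)
      ))

      // Merge the linked lists (or start a new one), then add this node's
      // id and its same-color neighbors so later nodes can link up to it.
      const mergedIds = Array.from(new Set(
        linkedLists
          .reduce((merged, contiguousIds) => merged.concat(contiguousIds), [])
          .concat(id)
          .concat(adjacentIds)
      ))

      return otherLists.concat([mergedIds])
    },
    [],
  )

  // Whichever list ended up longest is our largest contiguous block.
  return contiguousIdsList.reduce(
    (largestContiguousIds, contiguousIds) => (
      contiguousIds.length > largestContiguousIds.length
        ? contiguousIds
        : largestContiguousIds
    ),
    [],
  )
}
```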

Another crazy one. Let’s break this down from the top. We’re looping each node once. But now we have to check if our id is in the list of node lists: contiguousIdsList.

If it’s not in any list of contiguousIds, we’ll add it and its adjacentIds. That way, while looping, something else will link up to it.

If our node is in one of the lists, it’s possible it’s in quite a few of them. We want to link all those together, and remove the unlinked ones from the contiguousIdsList.

That’s it.

After we’ve come up with a list of node lists, then we check which one’s the largest, and we’re done.

Execution

Unlike the recursive version, this one does finish when all 10K items are the same color.

Other than that, it’s pretty slow; much slower than I originally expected. I’d forgotten to account for looping the list of lists in my performance estimates, and that clearly had an impact on performance.

Random Iteration

I wanted to take the methodology behind the recursive method and apply it iteratively.

I spent the good portion of a night trying to remember how to change the index in the loop dynamically and then I remembered while(true). It’s been so long since I’ve written traditional loops, I’d completely forgotten about it.

Now that I’d had my weapon, I moved in for the attack. As I spent a lot of time trying to speed up the observable versions (more on that later), I decided to go in lazy and old-school-mutate the data.

The goal of this algorithm was to hit each node exactly once and only store the largest contiguous block:

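Here’s a rough reconstruction of that loop; his actual code differs, but the gist is mutating a copy of the node list and splicing nodes out as they’re visited:

```javascript
const getLargestContiguousNodes = (nodes) => {
  const remainingNodes = nodes.slice()
  let largestContiguousIds = []
  let contiguousIds = []
  let queuedIds = []
  let remainingNodesIndex = 0

  // A bounded loop instead of while (true), as a safety net while debugging.
  let iterations = 0
  const maxIterations = nodes.length * 10

  while (remainingNodes.length > 0 && iterations < maxIterations) {
    iterations++

    // Splice out the current node, record it, and queue up its neighbors.
    const [node] = remainingNodes.splice(remainingNodesIndex, 1)
    contiguousIds.push(node.id)
    queuedIds.push(...node.adjacentIds)

    // Work through the queue until we find an id that's still in
    // remainingNodes; already-visited ids just get dropped.
    remainingNodesIndex = -1

    while (queuedIds.length > 0 && remainingNodesIndex === -1) {
      const queuedId = queuedIds.shift()

      remainingNodesIndex = remainingNodes.findIndex(({ id }) => (
        id === queuedId
      ))
    }

    // Queue exhausted: this block is finished, so compare it against the
    // largest so far and start over from a fresh root node at index 0.
    if (remainingNodesIndex === -1) {
      if (contiguousIds.length > largestContiguousIds.length) {
        largestContiguousIds = contiguousIds
      }

      contiguousIds = []
      queuedIds = []
      remainingNodesIndex = 0
    }
  }

  return largestContiguousIds
}
```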

Even though I wrote this like most people probably would, it’s by far the least readable. I can’t even tell you what’s going on without going through it myself first, top-to-bottom.

Instead of adding to a list of previously scanned IDs, we’re splicing out values from our remainingNodes array.

Lazy! I wouldn’t ever recommend doing this yourself, but I was at the end of my rope creating these samples and wanted to try something different.

The Breakdown

I broke this out into 3 sections separated by if blocks.

Let’s start with the middle section. We’re checking for queuedIds. If we have some, we do another loop through the queued items to see if they’re in our remainingNodes.

In the third section, it depends on the results of the second section. If we don’t have any queuedIds, and remainingNodesIndex is -1, then we’re done with that node list and we need to start at a new root node. The new root node is always at index 0 because we’re splicing our remainingNodes.

Back at the top of our loop, I could’ve used while(true), but I wanted a way out in case something went wrong. This was helpful while debugging since infinite loops can be a pain to figure out.

After that, we’re splicing out our node. We’re adding it to our list of contiguousIds, and adding the adjacentIds onto the queue.

Execution

This ended up being nearly as fast as the recursive version. It was the fastest of all algorithms when all nodes are the same color.

Data-Specific Optimizations

Grouping Similar Colors

Since we know only blues go with blues, we could’ve grouped similarly colored nodes together for the sequential iteration version.

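As a sketch of the idea (not something from the original repo):

```javascript
// Bucket the preprocessed nodes by color so each bucket can be scanned on
// its own, or handed off to separate workers.
const groupNodesByColor = (nodes) => (
  nodes.reduce((groups, node) => ({
    ...groups,
    [node.color]: (groups[node.color] || []).concat(node),
  }), {})
)
```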

Splitting it up into 3 smaller arrays lowers our memory footprint and the amount of looping we need to do in our list of lists. Still, that doesn’t solve the situation where all colors are the same so this wouldn’t fix our recursive version.

It also means we could multi-thread the operation, lowering the execution time by nearly a third.

If we execute these sequentially, we just need to run the largest of the three first. If the largest is larger than the other two, you don’t need to check them.

Largest Possible Size

Instead of checking if we have the largest list at certain intervals, we could be checking each iteration.

If the largest set is greater than or equal to half the available nodes (5K or higher), it’s obvious we already have the largest.

Using the random iteration version, we could find the largest list size so far and see how many nodes are remaining. If there are less than the size of the largest, we’ve already got the largest.

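In code, that early-exit check is tiny; something like this, with names borrowed from the earlier sketches (totalNodes is just the original node count):

```javascript
// If the biggest block so far covers at least half of all nodes, or there
// aren't enough remaining nodes left to beat it, we can stop scanning early.
const canStopEarly = ({ largestContiguousIds, remainingNodes, totalNodes }) => (
  largestContiguousIds.length >= totalNodes / 2
  || remainingNodes.length < largestContiguousIds.length
)
```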

Use Recursion

While recursion has its limitations, we can still use it. All we have to do is check the number of remaining nodes. If it’s under the stack limit, we can switch to the faster recursive version. Risky, but it’d definitely improve the execution time as you got further along in your loop.

Use a `for` Loop

Since we know our max item count, there’ll be a minor benefit from switching the reduce function to a traditional for loop.

For whatever reason, Array.prototype methods are incredibly slow compared to for loops.

Use Tail Recursion

In the same way that I didn’t go over the observable versions in this particular article, I think tail recursion requires an article of its own.

It’s a big topic with a lot to explain, but while it would allow the recursive version to run, it might not end up being faster than the while loop like you’d expect.

RxJS: Maintainability vs Performance

There are ways to rewrite these functions where you’ll have an easier time comprehending and maintaining them. The primary solution I came up with used RxJS in the Redux-Observable style, but without Redux.

That was actually my challenge for the article. I wanted to code the regular ways, then stream the data using RxJS to see how far I could push it.

I made 3 versions in RxJS and took a few liberties to speed up the execution time. Unlike my transducer articles, all three wound up slower even if I increased rows and columns.

I spent my nights that week dreaming up possible solutions and combing over every inch of code. I’d even lie on the ground, close my eyes, and think think think. Each time, I came up with better ideas but kept hitting JavaScript speed limitations.

There’s a whole list of optimizations I could’ve done, but at the cost of code readability. I didn’t want that (still used one anyway).

I finally got one of the observable solutions — now the fastest — running in half the time. That was the best improvement overall.

The only time I could beat out the memory-heavy sequential iteration with observables was when every node is the same color. That was the only time. Technically that beats the recursive one too because it stack overflows in that scenario.

After all that work figuring out how to stream the data with RxJS, I realized it’s way too much for this one article. Expect a future article to go over those code examples in detail.

If you want to see the code early, you can see it on GitHub: https://github.com/Sawtaytoes/JavaScript-Performance-Interview-Question

Final Stats

In general, the largest contiguous block was anywhere from 30–80 nodes on average.

These are my numbers:

No matter how many times I ran the tests, the relative positions of each method remained the same.

The Redux-Observable Concurrent method suffered when all nodes were the same color. I tried a bunch of things to make it faster, but nothing worked :/.

Game Development

I’ve come across this code twice in my career. It was at a much smaller scale, in Lua, and happened while working on my indie game Pulsen.

In one situation, I was working on a world map. It had a predefined list of nodes, and I processed this list in real-time. This allowed hitting [LEFT], [RIGHT], [UP], and [DOWN] to move you around the world map even if the angle was slightly off.

I also wrote a node generator for an unknown list of items with X and Y values. Sound familiar? I also had to center that grid on the screen. It was a heckuva lot easier to do this in HTML than it was in the game engine though. Although, centering a bunch of absolutely-positioned divs isn’t easy either.

In both of these solutions, the real-time execution time wasn’t a big deal because I did a lot of preprocessing when you loaded up the game.

I want to emphasize that TechLead’s question might be something you come across in your career; maybe, but it’s rare that speed is ever a factor in typical JavaScript applications.

Based on TechLead’s other videos, he was using Java at Google. I’m assuming the positions he interviewed for cared about execution speed. They probably had a bunch of worker tasks processing huge chunks of data so a solution like this might’ve been necessary.

But then, it could’ve been a job working on HTML and CSS, and he was just trolling the interviewee; who knows!

Conclusion

As you saw with the final stats, the worst-looking code was almost the fastest and accomplished all our requirements. Good luck maintaining that sucker!

From my own experience, I spent longer developing the non-RxJS versions. I think it’s because the faster versions required thinking holistically. Redux-Observable allows you to think in small chunks.

This was a really fun and frustrating problem to solve. It seemed really difficult at first, but after breaking it up into pieces, the pieces all came together :).

More Reads

If you wanna read more on JavaScript performance, check out my other articles:

  • Speed Up JavaScript Array Processing
  • Using Transducers to Speed Up JavaScript Arrays

If you liked what you read, please check out my other articles on similar eye-opening topics:

  • Callbacks: The Definitive Guide
  • Promises: The Definitive Guide
  • Feature Flags: Be Truly Agile
  • Make Poop from Food Emojis

Translated from: https://www.freecodecamp.org/news/bet-you-cant-solve-this-google-interview-question-4a6e5a4dc8ee/
