An evolutionary journey of multitasking
In the beginning, computers had one CPU that executed a set of instructions written by a programmer one by one. No operating system (OS), no scheduling, no threads, no multitasking. This was how computers worked for a long time. We’re talking back when a program was assembled in a deck of punched cards, and you got in big trouble if you were so unfortunate that you dropped the deck onto the floor.
Operating systems were being researched very early on, and when personal computing started to grow in the 80s, operating systems such as DOS became the standard on most consumer PCs.
These operating systems usually yielded control of the entire CPU to the program currently executing, and it was up to the programmer to make things work and implement any kind of multitasking for their program. This worked fine, but as interactive UIs using a mouse and windowed operating systems became the norm, this model simply couldn’t work anymore.
Non-preemptive multitasking
Non-preemptive multitasking was the first method used to keep a UI interactive (and run background processes).
This kind of multitasking put the responsibility of letting the OS run other tasks, such as responding to input from the mouse or running a background task, in the hands of the programmer.
Typically, the programmer did this by yielding control back to the OS at regular points in the program.
Besides offloading a huge responsibility to every programmer writing a program for your platform, this method was naturally error-prone. A small mistake in a program’s code could halt or crash the entire system.
Another popular term for what we call non-preemptive multitasking is cooperative multitasking. Windows 3.1 used cooperative multitasking and required programmers to yield control to the OS by using specific system calls. One badly-behaving application could thereby halt the entire system.
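To make this concrete, here is a minimal, purely illustrative sketch of the idea in Rust (not how Windows 3.1 or any real OS implemented it): every task does one small slice of work and then returns control to a simple round-robin loop, which is its way of “yielding.” The task bodies and counters are made up for the example.

    // A cooperative "scheduler": each task is a closure that does one small
    // slice of work and returns true once it has nothing left to do.
    type Task = Box<dyn FnMut() -> bool>;

    fn main() {
        let mut app_slices = 0;
        let mut input_polls = 0;

        let mut tasks: Vec<Task> = vec![
            // "Application" work, chopped into slices so control returns
            // to the scheduler (the yield) after each one.
            Box::new(move || {
                app_slices += 1;
                println!("app: finished slice {app_slices}");
                app_slices == 3
            }),
            // Stand-in for OS housekeeping, e.g. reading the mouse position.
            Box::new(move || {
                input_polls += 1;
                println!("os: polled input ({input_polls})");
                input_polls == 3
            }),
        ];

        // Round-robin over the tasks until every one reports it is finished.
        while !tasks.is_empty() {
            let mut i = 0;
            while i < tasks.len() {
                if (tasks[i])() {
                    tasks.remove(i);
                } else {
                    i += 1;
                }
            }
        }
    }

Note that nothing in this loop can preempt a task: if a single slice looped forever, every other task, including the “OS” housekeeping, would be starved, which is exactly the weakness described above.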
Preemptive multitasking
While non-preemptive multitasking sounded like a good idea, it turned out to create serious problems as well. Letting every program and programmer be responsible for keeping the UI of an operating system responsive ultimately leads to a bad user experience, since a single bug can halt the entire system.
The solution was to place the responsibility of scheduling CPU resources among the programs that request them (including the OS itself) in the hands of the OS. The OS can stop the execution of a process, do something else, and switch back.
On such a system, if you write and run a program with a graphical user interface on a single-core machine, the OS will stop your program to update the mouse position before it switches back to your program to continue. This happens so frequently that we usually don’t notice it, whether the CPU has a lot of work to do or is mostly idle.
The OS is responsible for scheduling tasks and does this by switching contexts on the CPU. This process can happen many times each second, not only to keep the UI responsive but also to give some time to other background tasks and IO events.
This is now the prevailing way to design an operating system.
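As a contrast to the cooperative sketch above, here is a small Rust example of what this looks like from the programmer’s point of view on a typical preemptive OS (Linux, macOS, or Windows); the workload and names are arbitrary. Neither thread ever yields explicitly, yet both make progress, because the OS interrupts them and switches contexts on its own schedule.

    use std::thread;

    // Plain busy work: no sleeps, no yields, no cooperation of any kind.
    fn busy_work(name: &str) {
        let mut sum: u64 = 0;
        for i in 0..50_000_000u64 {
            sum = sum.wrapping_add(i);
            if i % 10_000_000 == 0 {
                println!("{name}: still making progress (i = {i})");
            }
        }
        println!("{name}: done (sum = {sum})");
    }

    fn main() {
        // Even on a single-core machine, both threads (and every other
        // process on the system) get CPU time, because the OS preempts
        // whichever one is running and schedules something else.
        let t1 = thread::spawn(|| busy_work("background task"));
        let t2 = thread::spawn(|| busy_work("event handler"));
        t1.join().unwrap();
        t2.join().unwrap();
    }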
Later in this book, we’ll write our own green threads and cover a lot of basic knowledge about context switching, threads, stacks, and scheduling that will give you more insight into this topic, so stay tuned.
Hyper-threading
As CPUs evolved and added more functionality, such as several arithmetic logic units (ALUs) and additional logic units, the CPU manufacturers realized that the entire CPU wasn’t fully utilized. For example, while one operation only required some parts of the CPU, another instruction could run on an idle ALU at the same time. This became the start of hyper-threading.
Your computer today, for example, may have 6 cores and 12 logical cores… This is exactly where hyper-threading comes in. It “simulates” two cores on the same physical core by using unused parts of the CPU to drive progress on thread 2 while simultaneously running the code on thread 1. It does this by using a number of smart tricks (such as the one with the ALU).
Now, using hyper-threading, we could offload some work to one thread while keeping the UI interactive by responding to events on a second thread, even though we only had one physical CPU core, thereby utilizing our hardware better.
It turns out that hyper-threading has been continuously improved since the 90s. Since you’re not actually running two CPUs, there will be some operations that need to wait for each other to finish. The performance gain of hyper-threading compared to multitasking in a single core seems to be somewhere close to 30% but it largely depends on the workload.
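If you’re curious what this looks like on your own machine, Rust’s standard library can report how many hardware threads the OS makes available to your program. This is usually the logical core count (physical cores times hardware threads per core on a hyper-threaded CPU), though it can be lower if the process is restricted to a subset of cores.

    use std::thread;

    fn main() {
        // Usually the number of logical cores, e.g. 12 on a 6-core CPU with
        // hyper-threading, but possibly fewer if the process is limited to a
        // subset of the cores.
        match thread::available_parallelism() {
            Ok(n) => println!("available parallelism: {n}"),
            Err(e) => println!("could not determine parallelism: {e}"),
        }
    }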
Multicore processors
As most know, the clock frequency of processors has been flat for a long time. Processors get faster by improving caches, branch prediction, and speculative execution, and by refining their processing pipelines, but the gains seem to be diminishing.
On the other hand, modern processor cores are so small that we can fit many of them on the same chip. Now, most CPUs have many cores, and most often, each core will also have the ability to perform hyper-threading.
Do you really write synchronous code?
Like many things, this depends on your perspective. From the perspective of your process and the code you write, everything will normally happen in the order you write it.
From the operating system’s perspective, it might or might not interrupt your code, pause it, and run some other code in the meantime before resuming your process.
From the perspective of the CPU, it will mostly execute instructions one at a time. It doesn’t care who wrote the code, though, so when a hardware interrupt happens, it will immediately stop and give control to an interrupt handler. This is how the CPU handles concurrency.
However, modern CPUs can also do a lot of things in parallel. Most CPUs are pipelined, meaning that the next instruction is loaded while the current one is executing. It might have a branch predictor that tries to figure out what instructions to load next.
The processor can also reorder instructions using out-of-order execution if it believes this will make things faster, without ‘asking’ or ‘telling’ the programmer or the OS, so you might not have any guarantee that A happens before B.
The CPU can also offload some work to separate ‘coprocessors’, such as the FPU for floating-point calculations, leaving the main CPU free to do other tasks.
As a high-level overview, it’s OK to model the CPU as operating in a synchronous manner, but for now, let’s just make a mental note that this is a model with some caveats that become especially important when talking about parallelism, synchronization primitives (such as mutexes and atomics), and the security of computers and operating systems.
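For a tiny taste of why these caveats matter, here is a sketch using Rust’s atomics (the DATA and READY names are made up for the example): the Release/Acquire pair is what guarantees that a thread observing READY == true also observes the earlier write to DATA, despite any reordering the compiler or CPU might otherwise do.

    use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};
    use std::thread;

    static DATA: AtomicU64 = AtomicU64::new(0);
    static READY: AtomicBool = AtomicBool::new(false);

    fn main() {
        let writer = thread::spawn(|| {
            DATA.store(42, Ordering::Relaxed);
            // Release: everything written before this store is visible to
            // any thread that sees READY == true with an Acquire load.
            READY.store(true, Ordering::Release);
        });

        // Acquire: once we observe READY == true, the write to DATA is
        // guaranteed to be visible as well. With Relaxed ordering on both
        // sides, the language would give no such guarantee.
        while !READY.load(Ordering::Acquire) {
            std::hint::spin_loop();
        }
        println!("DATA = {}", DATA.load(Ordering::Relaxed));

        writer.join().unwrap();
    }

On mainstream x86 machines the relaxed version would most likely still appear to work because of the hardware’s strong memory ordering, but the language itself makes no such promise, which is exactly the kind of caveat mentioned above.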