Cilium Hands-on Lab: Road to Mastery, Part 4: Cilium Gateway API - Lab
- 1. Environment Setup
- 2. API Gateway - HTTP
- 2.1 Deploy the Application
- 2.2 Deploy the Gateway
- 2.3 HTTP Path Matching
- 2.4 HTTP Header Matching
- 3. API Gateway - HTTPS
- 3.1 Create the TLS Certificate and Private Key
- 3.2 Deploy the HTTPS Gateway
- 3.3 Test HTTPS Requests
- 4. API Gateway - TLS Routes
- 4.1 Deploy the Application
- 4.2 Deploy the Gateway
- 4.3 Test TLS Requests
- 5. API Gateway - Traffic Splitting
- 5.1 Deploy the Application
- 5.2 Load-Balancing Traffic
- 5.3 Traffic Split: 50/50
- 5.4 Traffic Split: 99/1
- 5.5 Mini Quiz
- 6. Quiz
- 6.1 Questions
- 6.2 Solutions
1. Environment Setup
Lab access:
https://isovalent.com/labs/gateway-api/
This environment has 1 control-plane node and 2 worker nodes.
cilium install --version v1.17.1 \
  --namespace kube-system \
  --set kubeProxyReplacement=true \
  --set gatewayAPI.enabled=true
Confirm the environment status:
root@server:~# kubectl get crd \
  gatewayclasses.gateway.networking.k8s.io \
  gateways.gateway.networking.k8s.io \
  httproutes.gateway.networking.k8s.io \
  referencegrants.gateway.networking.k8s.io \
  tlsroutes.gateway.networking.k8s.io
NAME CREATED AT
gatewayclasses.gateway.networking.k8s.io 2025-05-27T23:51:41Z
gateways.gateway.networking.k8s.io 2025-05-27T23:51:41Z
httproutes.gateway.networking.k8s.io 2025-05-27T23:51:41Z
referencegrants.gateway.networking.k8s.io 2025-05-27T23:51:42Z
tlsroutes.gateway.networking.k8s.io 2025-05-27T23:51:42Z
root@server:~# cilium status --wait
    /ˉˉ\
 /ˉˉ\__/ˉˉ\    Cilium:             OK
 \__/ˉˉ\__/    Operator:           OK
 /ˉˉ\__/ˉˉ\    Envoy DaemonSet:    OK
 \__/ˉˉ\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled

DaemonSet              cilium             Desired: 3, Ready: 3/3, Available: 3/3
DaemonSet cilium-envoy Desired: 3, Ready: 3/3, Available: 3/3
Deployment cilium-operator Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium             Running: 3
                       cilium-envoy       Running: 3
                       cilium-operator    Running: 1
                       clustermesh-apiserver
                       hubble-relay
Cluster Pods: 3/3 managed by Cilium
Helm chart version: 1.17.1
Image versions         cilium             quay.io/cilium/cilium:v1.17.1@sha256:8969bfd9c87cbea91e40665f8ebe327268c99d844ca26d7d12165de07f702866: 3
                       cilium-envoy       quay.io/cilium/cilium-envoy:v1.31.5-1739264036-958bef243c6c66fcfd73ca319f2eb49fff1eb2ae@sha256:fc708bd36973d306412b2e50c924cd8333de67e0167802c9b48506f9d772f521: 3
                       cilium-operator    quay.io/cilium/operator-generic:v1.17.1@sha256:628becaeb3e4742a1c36c4897721092375891b58bae2bfcae48bbf4420aaee97: 1
root@server:~# k get nodes
NAME STATUS ROLES AGE VERSION
kind-control-plane Ready control-plane 3h2m v1.31.0
kind-worker Ready <none> 3h1m v1.31.0
kind-worker2 Ready <none> 3h1m v1.31.0
root@server:~# cilium config view | grep -w "enable-gateway-api"
enable-gateway-api true
enable-gateway-api-alpn false
enable-gateway-api-app-protocol false
enable-gateway-api-proxy-protocol false
enable-gateway-api-secrets-sync true
Verify that the GatewayClass has been deployed and accepted:
root@server:~# kubectl get GatewayClass
NAME CONTROLLER ACCEPTED AGE
cilium io.cilium/gateway-controller True 4m59s
A GatewayClass is a type of Gateway that can be deployed: in other words, it is a template. This lets infrastructure providers offer different kinds of Gateways, and users can then pick the Gateway they prefer.
For example, an infrastructure provider could create two GatewayClasses named `internet` and `private` to reflect Gateways that serve Internet-facing versus private, internal applications.
In our case, the Cilium Gateway API controller (`io.cilium/gateway-controller`) will be instantiated.
The diagram below shows the various components used by the Gateway API. With Ingress, all functionality is defined in a single API. By decomposing ingress routing requirements into several APIs, users benefit from a more generic, flexible, and role-oriented model.
The actual L7 traffic rules are defined in the `HTTPRoute` API.
2. API Gateway - HTTP
2.1 Deploy the Application
This project is a familiar face: Istio's Bookinfo. It consists of four microservices:
- details
- ratings
- reviews
- productpage

We will use some of these services as the basis for the Gateway API.
Contents of the manifest:
root@server:~# yq /opt/bookinfo.yml
# Copyright Istio Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

##################################################################################################
# This file defines the services, service accounts, and deployments for the Bookinfo sample.
#
# To apply all 4 Bookinfo services, their corresponding service accounts, and deployments:
#
# kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
#
# Alternatively, you can deploy any resource separately:
#
# kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -l service=reviews # reviews Service
# kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -l account=reviews # reviews ServiceAccount
# kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -l app=reviews,version=v3 # reviews-v3 Deployment
##################################################################################################

##################################################################################################
# Details service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: details
  labels:
    app: details
    service: details
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: details
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-details
  labels:
    account: details
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: details-v1
  labels:
    app: details
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: details
      version: v1
  template:
    metadata:
      labels:
        app: details
        version: v1
    spec:
      serviceAccountName: bookinfo-details
      containers:
      - name: details
        image: docker.io/istio/examples-bookinfo-details-v1:1.16.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
        securityContext:
          runAsUser: 1000
---
##################################################################################################
# Ratings service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: ratings
  labels:
    app: ratings
    service: ratings
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: ratings
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-ratings
  labels:
    account: ratings
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ratings-v1
  labels:
    app: ratings
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ratings
      version: v1
  template:
    metadata:
      labels:
        app: ratings
        version: v1
    spec:
      serviceAccountName: bookinfo-ratings
      containers:
      - name: ratings
        image: docker.io/istio/examples-bookinfo-ratings-v1:1.16.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
        securityContext:
          runAsUser: 1000
---
##################################################################################################
# Reviews service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: reviews
  labels:
    app: reviews
    service: reviews
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: reviews
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-reviews
  labels:
    account: reviews
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v1
  labels:
    app: reviews
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v1
  template:
    metadata:
      labels:
        app: reviews
        version: v1
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
      - name: reviews
        image: docker.io/istio/examples-bookinfo-reviews-v1:1.16.2
        imagePullPolicy: IfNotPresent
        env:
        - name: LOG_DIR
          value: "/tmp/logs"
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: wlp-output
          mountPath: /opt/ibm/wlp/output
        securityContext:
          runAsUser: 1000
      volumes:
      - name: wlp-output
        emptyDir: {}
      - name: tmp
        emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v2
  labels:
    app: reviews
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v2
  template:
    metadata:
      labels:
        app: reviews
        version: v2
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
      - name: reviews
        image: docker.io/istio/examples-bookinfo-reviews-v2:1.16.2
        imagePullPolicy: IfNotPresent
        env:
        - name: LOG_DIR
          value: "/tmp/logs"
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: wlp-output
          mountPath: /opt/ibm/wlp/output
        securityContext:
          runAsUser: 1000
      volumes:
      - name: wlp-output
        emptyDir: {}
      - name: tmp
        emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v3
  labels:
    app: reviews
    version: v3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v3
  template:
    metadata:
      labels:
        app: reviews
        version: v3
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
      - name: reviews
        image: docker.io/istio/examples-bookinfo-reviews-v3:1.16.2
        imagePullPolicy: IfNotPresent
        env:
        - name: LOG_DIR
          value: "/tmp/logs"
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: wlp-output
          mountPath: /opt/ibm/wlp/output
        securityContext:
          runAsUser: 1000
      volumes:
      - name: wlp-output
        emptyDir: {}
      - name: tmp
        emptyDir: {}
---
##################################################################################################
# Productpage services
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
    service: productpage
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: productpage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-productpage
  labels:
    account: productpage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: productpage-v1
  labels:
    app: productpage
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: productpage
      version: v1
  template:
    metadata:
      labels:
        app: productpage
        version: v1
    spec:
      serviceAccountName: bookinfo-productpage
      containers:
      - name: productpage
        image: docker.io/istio/examples-bookinfo-productpage-v1:1.16.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        securityContext:
          runAsUser: 1000
      volumes:
      - name: tmp
        emptyDir: {}
---
Deploy the application:
kubectl apply -f /opt/bookinfo.yml
Check that the application has been deployed correctly:
root@server:~# kubectl apply -f /opt/bookinfo.yml
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created
root@server:~# kubectl get pods
NAME READY STATUS RESTARTS AGE
details-v1-67894999b5-hswsw 1/1 Running 0 51s
productpage-v1-7bd5bd857c-shr9z 1/1 Running 0 51s
ratings-v1-676ff5568f-w467l 1/1 Running 0 51s
reviews-v1-f5b4b64f-sjk2s 1/1 Running 0 51s
reviews-v2-74b7dd9f45-rk2n6 1/1 Running 0 51s
reviews-v3-65d744df5c-zqljm 1/1 Running 0 51s
root@server:~# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
details ClusterIP 10.96.188.110 <none> 9080/TCP 93s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h10m
productpage ClusterIP 10.96.173.43 <none> 9080/TCP 93s
ratings ClusterIP 10.96.118.245 <none> 9080/TCP 93s
reviews ClusterIP 10.96.33.54 <none> 9080/TCP 93s
Note that with Cilium Service Mesh, no Envoy sidecar is created next to each of the demo application's microservices. With a sidecar implementation, the output would show 2/2 READY:
one container for the microservice and one for the Envoy sidecar.
2.2 Deploy the Gateway
The configuration file:
root@server:~# yq basic-http.yaml
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
spec:
  gatewayClassName: cilium
  listeners:
  - protocol: HTTP
    port: 80
    name: web-gw
    allowedRoutes:
      namespaces:
        from: Same
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: http-app-1
spec:
  parentRefs:
  - name: my-gateway
    namespace: default
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /details
    backendRefs:
    - name: details
      port: 9080
  - matches:
    - headers:
      - type: Exact
        name: magic
        value: foo
      queryParams:
      - type: Exact
        name: great
        value: example
      path:
        type: PathPrefix
        value: /
      method: GET
    backendRefs:
    - name: productpage
      port: 9080
Deploy the Gateway:
root@server:~# kubectl apply -f basic-http.yaml
gateway.gateway.networking.k8s.io/my-gateway created
httproute.gateway.networking.k8s.io/http-app-1 created
The configuration used by the Gateway:
spec:
  gatewayClassName: cilium
  listeners:
  - protocol: HTTP
    port: 80
    name: web-gw
    allowedRoutes:
      namespaces:
        from: Same
First, note that the `gatewayClassName` field in the `Gateway` section uses the value `cilium`. This refers to the Cilium `GatewayClass` configured earlier.
The Gateway will listen on port 80 for north-south HTTP traffic entering the cluster. `allowedRoutes` specifies the namespaces from which Routes may attach to this Gateway. `Same` means this Gateway can only use Routes from its own namespace.
Note that if we used `All` instead of `Same`, we would allow this Gateway to be associated with Routes in any namespace, letting a single Gateway be shared across multiple namespaces that may be managed by different teams.
The HTTPRoutes themselves can then live in those different namespaces.
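As a hedged sketch of that cross-namespace pattern (all names below are hypothetical and not part of this lab):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: shared-gateway      # hypothetical, owned by a platform team
  namespace: infra
spec:
  gatewayClassName: cilium
  listeners:
  - protocol: HTTP
    port: 80
    name: web-gw
    allowedRoutes:
      namespaces:
        from: All           # accept Routes from any namespace
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: team-a-route        # hypothetical, owned by an application team
  namespace: team-a
spec:
  parentRefs:
  - name: shared-gateway
    namespace: infra        # attach across namespaces
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /team-a
    backendRefs:
    - name: team-a-svc      # backend in the same namespace as the route
      port: 8080
```

Instead of `from: All`, `from: Selector` with a `namespaces.selector` label selector can restrict which namespaces may attach routes.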
Now let's review the `HTTPRoute` manifest. `HTTPRoute` is a Gateway API type that specifies the routing behavior of HTTP requests from a Gateway listener to Kubernetes Services.
It consists of rules that direct traffic according to your requirements.
The first rule is essentially a simple L7 proxy route: HTTP traffic with a path starting with `/details` is forwarded to the `details` Service on port 9080.
rules:
- matches:
  - path:
      type: PathPrefix
      value: /details
  backendRefs:
  - name: details
    port: 9080
The second rule is similar but uses different matching criteria. If an HTTP request has:
- an HTTP header named `magic` with the value `foo`,
- the HTTP method "GET",
- an HTTP query parameter named `great` with the value `example`,

then the traffic is sent to the `productpage` Service on port 9080.
rules:
- matches:
  - headers:
    - type: Exact
      name: magic
      value: foo
    queryParams:
    - type: Exact
      name: great
      value: example
    path:
      type: PathPrefix
      value: /
    method: GET
  backendRefs:
  - name: productpage
    port: 9080
As you can see, you can deploy complex L7 traffic rules consistently (with the Ingress API, annotations are typically required to achieve such routing goals, which creates inconsistencies between one Ingress controller and another).
One benefit of these new APIs is that the Gateway API is essentially split into separate functions: one to describe the Gateway and another for the routes to the backend services. By splitting these two functions, operators can change or swap gateways while keeping the same routing configuration.
In other words: if you decide to switch to another Gateway API controller, you will be able to reuse the same manifests.
Now let's look at the Services again, since the Gateway has been deployed:
root@server:~# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cilium-gateway-my-gateway LoadBalancer 10.96.212.15 172.18.255.200 80:30157/TCP 3m2s
details ClusterIP 10.96.188.110 <none> 9080/TCP 7m4s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h15m
productpage ClusterIP 10.96.173.43 <none> 9080/TCP 7m4s
ratings ClusterIP 10.96.118.245 <none> 9080/TCP 7m4s
reviews ClusterIP 10.96.33.54 <none> 9080/TCP 7m4s
You will see a `LoadBalancer` Service named `cilium-gateway-my-gateway`, which was created for the Gateway API.
The same external IP address is also associated with the Gateway:
root@server:~# kubectl get gateway
NAME CLASS ADDRESS PROGRAMMED AGE
my-gateway cilium 172.18.255.200 True 3m22s
Let's retrieve this IP address:
GATEWAY=$(kubectl get gateway my-gateway -o jsonpath='{.status.addresses[0].value}')
echo $GATEWAY
2.3 HTTP Path Matching
Now let's verify that traffic is proxied by the Gateway API based on the URL path.
Check that you can make an HTTP request to the external address:
root@server:~# curl --fail -s http://$GATEWAY/details/1 | jq
{
  "id": 1,
  "author": "William Shakespeare",
  "year": 1595,
  "type": "paperback",
  "pages": 200,
  "publisher": "PublisherA",
  "language": "English",
  "ISBN-10": "1234567890",
  "ISBN-13": "123-1234567890"
}
Since the path starts with `/details`, this traffic matches the first rule and is proxied to the `details` Service on port 9080.
2.4 HTTP Header Matching
This time, we will route traffic based on HTTP parameters such as header values, the method, and query parameters. Run the following command:
root@server:~# curl -v -H 'magic: foo' "http://$GATEWAY?great=example"
* Trying 172.18.255.200:80...
* Connected to 172.18.255.200 (172.18.255.200) port 80
> GET /?great=example HTTP/1.1
> Host: 172.18.255.200
> User-Agent: curl/8.5.0
> Accept: */*
> magic: foo
>
< HTTP/1.1 200 OK
< content-type: text/html; charset=utf-8
< content-length: 1683
< server: envoy
< date: Wed, 28 May 2025 00:11:15 GMT
< x-envoy-upstream-service-time: 9
<
<!DOCTYPE html>
<html><head><title>Simple Bookstore App</title>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1"><!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="static/bootstrap/css/bootstrap.min.css"><!-- Optional theme -->
<link rel="stylesheet" href="static/bootstrap/css/bootstrap-theme.min.css"></head><body><p><h3>Hello! This is a simple bookstore application consisting of three services as shown below</h3>
</p><table class="table table-condensed table-bordered table-hover"><tr><th>name</th><td>http://details:9080</td></tr><tr><th>endpoint</th><td>details</td></tr><tr><th>children</th><td><table class="table table-condensed table-bordered table-hover"><tr><th>name</th><th>endpoint</th><th>children</th></tr><tr><td>http://details:9080</td><td>details</td><td></td></tr><tr><td>http://reviews:9080</td><td>reviews</td><td><table class="table table-condensed table-bordered table-hover"><tr><th>name</th><th>endpoint</th><th>children</th></tr><tr><td>http://ratings:9080</td><td>ratings</td><td></td></tr></table></td></tr></table></td></tr></table><p><h4>Click on one of the links below to auto generate a request to the backend as a real user or a tester</h4>
</p>
<p><a href="/productpage?u=normal">Normal user</a></p>
<p><a href="/productpage?u=test">Test user</a></p><!-- Latest compiled and minified JavaScript -->
<script src="static/jquery.min.js"></script><!-- Latest compiled and minified JavaScript -->
<script src="static/bootstrap/js/bootstrap.min.js"></script></body>
</html>
* Connection #0 to host 172.18.255.200 left intact
The `curl` query should succeed, returning a `200` status code and a detailed HTML reply (note the `Hello! This is a simple bookstore application consisting of three services as shown below`).
3. API Gateway - HTTPS
3.1 Create the TLS Certificate and Private Key
In this task we will use the Gateway API to route HTTPS traffic, so we need a TLS certificate for data encryption.
For demonstration purposes we will use a TLS certificate signed by a made-up, self-signed certificate authority (CA). A simple way to do this is with `mkcert`, creating a certificate valid for `bookinfo.cilium.rocks` and `hipstershop.cilium.rocks`, since these are the hostnames used in this Gateway example:
root@server:~# mkcert '*.cilium.rocks'
Created a new local CA 💥
Note: the local CA is not installed in the system trust store.
Run "mkcert -install" for certificates to be trusted automatically

Created a new certificate valid for the following names 📜
 - "*.cilium.rocks"

Reminder: X.509 wildcards only go one level deep, so this won't match a.b.cilium.rocks

The certificate is at "./_wildcard.cilium.rocks.pem" and the key at "./_wildcard.cilium.rocks-key.pem"

It will expire on 28 August 2027 🗓
mkcert created a key (`_wildcard.cilium.rocks-key.pem`) and a certificate (`_wildcard.cilium.rocks.pem`) that we will use for the Gateway service.
Create a Kubernetes TLS Secret from this key and certificate:
root@server:~# kubectl create secret tls demo-cert \
  --key=_wildcard.cilium.rocks-key.pem \
  --cert=_wildcard.cilium.rocks.pem
secret/demo-cert created
3.2 Deploy the HTTPS Gateway
Review the HTTPS Gateway API example provided in the current directory:
root@server:~# yq basic-https.yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: tls-gateway
spec:
  gatewayClassName: cilium
  listeners:
  - name: https-1
    protocol: HTTPS
    port: 443
    hostname: "bookinfo.cilium.rocks"
    tls:
      certificateRefs:
      - kind: Secret
        name: demo-cert
  - name: https-2
    protocol: HTTPS
    port: 443
    hostname: "hipstershop.cilium.rocks"
    tls:
      certificateRefs:
      - kind: Secret
        name: demo-cert
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: https-app-route-1
spec:
  parentRefs:
  - name: tls-gateway
  hostnames:
  - "bookinfo.cilium.rocks"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /details
    backendRefs:
    - name: details
      port: 9080
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: https-app-route-2
spec:
  parentRefs:
  - name: tls-gateway
  hostnames:
  - "hipstershop.cilium.rocks"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: productpage
      port: 9080
It is almost identical to what we reviewed earlier. Just note the following in the Gateway manifest:
spec:
  gatewayClassName: cilium
  listeners:
  - name: https-1
    protocol: HTTPS
    port: 443
    hostname: "bookinfo.cilium.rocks"
    tls:
      certificateRefs:
      - kind: Secret
        name: demo-cert
And the following in the HTTPRoute manifest:
spec:
  parentRefs:
  - name: tls-gateway
  hostnames:
  - "bookinfo.cilium.rocks"
The HTTPS Gateway API example builds on the work done in the HTTP example and adds TLS termination for two HTTP routes:
- the `/details` prefix is routed to the `details` HTTP service deployed in the HTTP challenge
- the `/` prefix is routed to the `productpage` HTTP service deployed in the HTTP challenge

These services will be secured via TLS and reachable through two domain names:
- bookinfo.cilium.rocks
- hipstershop.cilium.rocks
In our example, the Gateway serves the TLS certificate defined in the `demo-cert` Secret resource for all requests to `bookinfo.cilium.rocks` and `hipstershop.cilium.rocks`.
Now let's deploy the Gateway to the cluster:
root@server:~# kubectl apply -f basic-https.yaml
gateway.gateway.networking.k8s.io/tls-gateway created
httproute.gateway.networking.k8s.io/https-app-route-1 created
httproute.gateway.networking.k8s.io/https-app-route-2 created
This creates a `LoadBalancer` Service, which after about 30 seconds should be populated with an external IP address.
Verify that the Gateway has been assigned a load balancer IP address:
root@server:~# kubectl get gateway tls-gateway
NAME CLASS ADDRESS PROGRAMMED AGE
tls-gateway cilium 172.18.255.201 True 49s
root@server:~# GATEWAY=$(kubectl get gateway tls-gateway -o jsonpath='{.status.addresses[0].value}')
echo $GATEWAY
172.18.255.201
3.3 Test HTTPS Requests
Install the mkcert CA into your system so that cURL can trust it:
root@server:~# mkcert -install
The local CA is now installed in the system trust store!
Now let's make a request to the Gateway:
root@server:~# curl -s \
  --resolve bookinfo.cilium.rocks:443:${GATEWAY} \
  https://bookinfo.cilium.rocks/details/1 | jq
{
  "id": 1,
  "author": "William Shakespeare",
  "year": 1595,
  "type": "paperback",
  "pages": 200,
  "publisher": "PublisherA",
  "language": "English",
  "ISBN-10": "1234567890",
  "ISBN-13": "123-1234567890"
}
The data should be retrieved correctly over HTTPS (meaning the TLS handshake completed successfully).
4. API Gateway - TLS Routes
4.1 Deploy the Application
We will use an NGINX web server. Review the NGINX configuration:
root@server:~# cat nginx.conf
events {
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] $status '
        '"$request" $body_bytes_sent "$http_referer" '
        '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    error_log /var/log/nginx/error.log;

    server {
        listen 443 ssl;
        root /usr/share/nginx/html;
        index index.html;
        server_name nginx.cilium.rocks;
        ssl_certificate /etc/nginx-server-certs/tls.crt;
        ssl_certificate_key /etc/nginx-server-certs/tls.key;
    }
}
As you can see, it listens for SSL traffic on port 443, and it references the certificate and key created earlier.
When deploying the server, we need to mount those files at the right path (`/etc/nginx-server-certs`).
The NGINX server configuration is stored in a Kubernetes ConfigMap. Let's create it:
root@server:~# kubectl create configmap nginx-configmap --from-file=nginx.conf=./nginx.conf
configmap/nginx-configmap created
Review the NGINX server Deployment and the Service in front of it:
root@server:~# yq tls-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 443
    protocol: TCP
  selector:
    run: my-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 443
        volumeMounts:
        - name: nginx-index-file
          mountPath: /usr/share/nginx/html/
        - name: nginx-config
          mountPath: /etc/nginx
          readOnly: true
        - name: nginx-server-certs
          mountPath: /etc/nginx-server-certs
          readOnly: true
      volumes:
      - name: nginx-index-file
        configMap:
          name: index-html-configmap
      - name: nginx-config
        configMap:
          name: nginx-configmap
      - name: nginx-server-certs
        secret:
          secretName: demo-cert
As you can see, we deploy a container with the `nginx` image and mount several files: the HTML index, the NGINX configuration, and the certificates. Note that we are reusing the `demo-cert` TLS Secret created earlier.
root@server:~# kubectl apply -f tls-service.yaml
service/my-nginx created
deployment.apps/my-nginx created
Verify that the Service and Deployment were deployed successfully:
root@server:~# kubectl get svc,deployment my-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/my-nginx ClusterIP 10.96.76.254 <none> 443/TCP 27s

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/my-nginx 1/1 1 1 27s
4.2 Deploy the Gateway
Review the Gateway API configuration files provided in the current directory:
root@server:~# yq tls-gateway.yaml tls-route.yaml
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: cilium-tls-gateway
spec:
  gatewayClassName: cilium
  listeners:
  - name: https
    hostname: "nginx.cilium.rocks"
    port: 443
    protocol: TLS
    tls:
      mode: Passthrough
    allowedRoutes:
      namespaces:
        from: All
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TLSRoute
metadata:
  name: nginx
spec:
  parentRefs:
  - name: cilium-tls-gateway
  hostnames:
  - "nginx.cilium.rocks"
  rules:
  - backendRefs:
    - name: my-nginx
      port: 443
They are almost identical to what we reviewed in the previous tasks. Just note the `Passthrough` mode set in the Gateway manifest:
spec:
  gatewayClassName: cilium
  listeners:
  - name: https
    hostname: "nginx.cilium.rocks"
    port: 443
    protocol: TLS
    tls:
      mode: Passthrough
    allowedRoutes:
      namespaces:
        from: All
Previously, we used an `HTTPRoute` resource. This time, we are using a `TLSRoute`:
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TLSRoute
metadata:
  name: nginx
spec:
  parentRefs:
  - name: cilium-tls-gateway
  hostnames:
  - "nginx.cilium.rocks"
  rules:
  - backendRefs:
    - name: my-nginx
      port: 443
You saw earlier how to terminate TLS connections at the Gateway; that was the Gateway API's `Terminate` mode. Here, the Gateway is in `Passthrough` mode: the difference is that traffic stays encrypted all the way between the client and the Pod.
With `Terminate`:
- Client -> Gateway: HTTPS
- Gateway -> Pod: HTTP
With `Passthrough`:
- Client -> Gateway: HTTPS
- Gateway -> Pod: HTTPS
Beyond using the SNI header for routing, the Gateway does not actually inspect the traffic. Concretely, the `hostnames` field defines a set of SNI names that should match the SNI attribute of the TLS ClientHello message in the TLS handshake.
Now let's deploy the Gateway and the TLSRoute to the cluster:
root@server:~# kubectl apply -f tls-gateway.yaml -f tls-route.yaml
gateway.gateway.networking.k8s.io/cilium-tls-gateway created
tlsroute.gateway.networking.k8s.io/nginx created
Verify that the Gateway has been assigned a LoadBalancer IP address:
root@server:~# kubectl get gateway cilium-tls-gateway
NAME CLASS ADDRESS PROGRAMMED AGE
cilium-tls-gateway cilium 172.18.255.202 True 25s
root@server:~# GATEWAY=$(kubectl get gateway cilium-tls-gateway -o jsonpath='{.status.addresses[0].value}')
echo $GATEWAY
172.18.255.202
Let's also double-check that the TLSRoute has been provisioned successfully and attached to the Gateway:
root@server:~# kubectl get tlsroutes.gateway.networking.k8s.io -o json | jq '.items[0].status.parents[0]'
{
  "conditions": [
    {
      "lastTransitionTime": "2025-05-28T00:30:09Z",
      "message": "Accepted TLSRoute",
      "observedGeneration": 1,
      "reason": "Accepted",
      "status": "True",
      "type": "Accepted"
    },
    {
      "lastTransitionTime": "2025-05-28T00:30:09Z",
      "message": "Service reference is valid",
      "observedGeneration": 1,
      "reason": "ResolvedRefs",
      "status": "True",
      "type": "ResolvedRefs"
    }
  ],
  "controllerName": "io.cilium/gateway-controller",
  "parentRef": {
    "group": "gateway.networking.k8s.io",
    "kind": "Gateway",
    "name": "cilium-tls-gateway"
  }
}
4.3 Test TLS Requests
Now let's make a request to the Gateway over HTTPS:
root@server:~# curl -v \
  --resolve "nginx.cilium.rocks:443:$GATEWAY" \
  "https://nginx.cilium.rocks:443"
* Added nginx.cilium.rocks:443:172.18.255.202 to DNS cache
* Hostname nginx.cilium.rocks was found in DNS cache
* Trying 172.18.255.202:443...
* Connected to nginx.cilium.rocks (172.18.255.202) port 443
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* CAfile: /etc/ssl/certs/ca-certificates.crt
* CApath: /etc/ssl/certs
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 / X25519 / RSASSA-PSS
* ALPN: server accepted http/1.1
* Server certificate:
* subject: O=mkcert development certificate; OU=root@server
* start date: May 28 00:13:47 2025 GMT
* expire date: Aug 28 00:13:47 2027 GMT
* subjectAltName: host "nginx.cilium.rocks" matched cert's "*.cilium.rocks"
* issuer: O=mkcert development CA; OU=root@server; CN=mkcert root@server
* SSL certificate verify ok.
* Certificate level 0: Public key type RSA (2048/112 Bits/secBits), signed using sha256WithRSAEncryption
* Certificate level 1: Public key type RSA (3072/128 Bits/secBits), signed using sha256WithRSAEncryption
* using HTTP/1.x
> GET / HTTP/1.1
> Host: nginx.cilium.rocks
> User-Agent: curl/8.5.0
> Accept: */*
>
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
< HTTP/1.1 200 OK
< Server: nginx/1.27.5
< Date: Wed, 28 May 2025 00:31:30 GMT
< Content-Type: text/html
< Content-Length: 100
< Last-Modified: Wed, 28 May 2025 00:27:14 GMT
< Connection: keep-alive
< ETag: "68365862-64"
< Accept-Ranges: bytes
<
<html>
<h1>Welcome to our webserver listening on port 443.</h1>
</br>
<h1>Cilium rocks.</h1>
</html
* Connection #0 to host nginx.cilium.rocks left intact
The data should be retrieved correctly over HTTPS (meaning the TLS handshake completed successfully).
A few things to note in the output:
- It should succeed: you should see HTML output ending with `Cilium rocks`.
- The connection is established over port 443: you should see `Connected to nginx.cilium.rocks (172.18.255.202) port 443`.
- You should see the TLS handshake and TLS version negotiation; expect the negotiation to result in TLSv1.3.
- Expect a successful certificate verification (note the `SSL certificate verify ok.`).
5. API Gateway - Traffic Splitting
5.1 Deploy the Application
First, let's deploy a sample echo application in the cluster. The application replies to clients, including in the reply body information about the Pod and node that received the original request. We will use this information to illustrate how traffic is split across multiple Kubernetes Services.
Review the YAML file with the following command. You will see that we are deploying several Pods and Services. The Services are called `echo-1` and `echo-2`, and traffic will be split between them:
root@server:~# yq echo-servers.yaml
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: echo-1
  name: echo-1
spec:
  ports:
  - port: 8080
    name: high
    protocol: TCP
    targetPort: 8080
  selector:
    app: echo-1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: echo-1
  name: echo-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo-1
  template:
    metadata:
      labels:
        app: echo-1
    spec:
      containers:
      - image: gcr.io/kubernetes-e2e-test-images/echoserver:2.2
        name: echo-1
        ports:
        - containerPort: 8080
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: echo-2
  name: echo-2
spec:
  ports:
  - port: 8090
    name: high
    protocol: TCP
    targetPort: 8080
  selector:
    app: echo-2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: echo-2
  name: echo-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo-2
  template:
    metadata:
      labels:
        app: echo-2
    spec:
      containers:
      - image: gcr.io/kubernetes-e2e-test-images/echoserver:2.2
        name: echo-2
        ports:
        - containerPort: 8080
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
Deploy the application:
root@server:~# kubectl apply -f echo-servers.yaml
service/echo-1 created
deployment.apps/echo-1 created
service/echo-2 created
deployment.apps/echo-2 created
Check that the application has been deployed correctly:
root@server:~# kubectl get pods
NAME READY STATUS RESTARTS AGE
details-v1-67894999b5-hswsw 1/1 Running 0 33m
echo-1-597b976bc7-5r4xb 1/1 Running 0 88s
echo-2-7ccd4fd567-2mgnn 1/1 Running 0 88s
my-nginx-7bd456664-s7mpc 1/1 Running 0 7m53s
productpage-v1-7bd5bd857c-shr9z 1/1 Running 0 33m
ratings-v1-676ff5568f-w467l 1/1 Running 0 33m
reviews-v1-f5b4b64f-sjk2s 1/1 Running 0 33m
reviews-v2-74b7dd9f45-rk2n6 1/1 Running 0 33m
reviews-v3-65d744df5c-zqljm 1/1 Running 0 33m
Take a quick look at the deployed Services:
root@server:~# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cilium-gateway-cilium-tls-gateway LoadBalancer 10.96.57.24 172.18.255.202 443:30846/TCP 5m20s
cilium-gateway-my-gateway LoadBalancer 10.96.212.15 172.18.255.200 80:30157/TCP 29m
cilium-gateway-tls-gateway LoadBalancer 10.96.211.194 172.18.255.201 443:31647/TCP 18m
details ClusterIP 10.96.188.110 <none> 9080/TCP 33m
echo-1 ClusterIP 10.96.235.22 <none> 8080/TCP 110s
echo-2 ClusterIP 10.96.204.162 <none> 8090/TCP 110s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h42m
my-nginx ClusterIP 10.96.76.254 <none> 443/TCP 8m15s
productpage ClusterIP 10.96.173.43 <none> 9080/TCP 33m
ratings ClusterIP 10.96.118.245 <none> 9080/TCP 33m
reviews ClusterIP 10.96.33.54 <none> 9080/TCP 33m
5.2 Load-Balancing Traffic
Let's review the `HTTPRoute` manifest:
root@server:~# yq load-balancing-http-route.yaml
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: load-balancing-route
spec:
  parentRefs:
  - name: my-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /echo
    backendRefs:
    - kind: Service
      name: echo-1
      port: 8080
      weight: 50
    - kind: Service
      name: echo-2
      port: 8090
      weight: 50
Deploy the HTTPRoute using this manifest:
root@server:~# kubectl apply -f load-balancing-http-route.yaml
httproute.gateway.networking.k8s.io/load-balancing-route created
This rule is essentially a simple L7 proxy route: HTTP traffic with a path starting with `/echo` is forwarded to the `echo-1` and `echo-2` Services on ports 8080 and 8090 respectively.
backendRefs:
- kind: Service
  name: echo-1
  port: 8080
  weight: 50
- kind: Service
  name: echo-2
  port: 8090
  weight: 50
5.3 Traffic Split: 50/50
Let's retrieve the IP address associated with the Gateway again:
GATEWAY=$(kubectl get gateway my-gateway -o jsonpath='{.status.addresses[0].value}')
echo $GATEWAY
Now let's verify that traffic is proxied by the Gateway API based on the URL path.
Check that you can make an HTTP request to the external address:
root@server:~# GATEWAY=$(kubectl get gateway my-gateway -o jsonpath='{.status.addresses[0].value}')
echo $GATEWAY
172.18.255.200
root@server:~# curl --fail -s http://$GATEWAY/echo

Hostname: echo-2-7ccd4fd567-2mgnn

Pod Information:
        node name:      kind-worker
        pod name:       echo-2-7ccd4fd567-2mgnn
        pod namespace:  default
        pod IP: 10.244.1.161

Server values:
        server_version=nginx: 1.12.2 - lua: 10010

Request Information:
        client_address=10.244.2.110
        method=GET
        real path=/echo
        query=
        request_version=1.1
        request_scheme=http
        request_uri=http://172.18.255.200:8080/echo

Request Headers:
        accept=*/*
        host=172.18.255.200
        user-agent=curl/8.5.0
        x-envoy-internal=true
        x-forwarded-for=172.18.0.1
        x-forwarded-proto=http
        x-request-id=b17459aa-5d2c-4cb4-9d93-ebdcc123a286

Request Body:
        -no body in request-
The reply contains the name of the Pod that received the query:
Hostname: echo-2-7ccd4fd567-2mgnn
Note that you can also see the headers of the original request in the reply. This will be very useful in upcoming tasks.
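For example, the headers Envoy adds to the forwarded request can be filtered out of the reply with grep. The sketch below is self-contained (it reuses a few header lines from the reply shown above); against the live cluster you would pipe `curl -s http://$GATEWAY/echo` into the same filter instead.

```shell
# Sketch: pick out the headers the Envoy proxy added to the forwarded request.
# The sample variable reuses header lines from the echo reply shown above.
reply='accept=*/*
host=172.18.255.200
x-envoy-internal=true
x-forwarded-for=172.18.0.1
x-forwarded-proto=http'
echo "$reply" | grep -E 'x-envoy|x-forwarded'
```

This prints only the x-envoy-internal, x-forwarded-for, and x-forwarded-proto lines, which is a quick way to confirm the request really went through the Gateway's proxy.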
You should see the replies balanced evenly across the two Pods/nodes.
root@server:~# curl --fail -s http://$GATEWAY/echo|grep Hostname
Hostname: echo-1-597b976bc7-5r4xb
root@server:~# curl --fail -s http://$GATEWAY/echo|grep Hostname
Hostname: echo-2-7ccd4fd567-2mgnn
root@server:~# curl --fail -s http://$GATEWAY/echo|grep Hostname
Hostname: echo-1-597b976bc7-5r4xb
root@server:~# curl --fail -s http://$GATEWAY/echo|grep Hostname
Hostname: echo-1-597b976bc7-5r4xb
root@server:~# curl --fail -s http://$GATEWAY/echo|grep Hostname
Hostname: echo-2-7ccd4fd567-2mgnn
Let's double-check that the traffic is spread evenly across the Pods by running a loop and counting the requests:
for _ in {1..500}; do
  curl -s -k "http://$GATEWAY/echo" >> curlresponses.txt
done
Verify that the responses are (more or less) evenly spread.
root@server:~# for _ in {1..500}; do
  curl -s -k "http://$GATEWAY/echo" >> curlresponses.txt
done
root@server:~# grep -o "Hostname: echo-." curlresponses.txt | sort | uniq -c
    258 Hostname: echo-1
    242 Hostname: echo-2
As you can see, the traffic is split almost 1:1, which matches our configuration. Let's review the configuration file once more:
backendRefs:
- kind: Service
  name: echo-1
  port: 8080
  weight: 50
- kind: Service
  name: echo-2
  port: 8090
  weight: 50
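The `uniq -c` counts can also be turned into percentages with a short awk script, which makes the split easier to read. This is just a local sketch; the counts below are the ones observed in this run (258 + 242 = 500).

```shell
# Compute each backend's share of the total from `uniq -c`-style counts.
counts='258 Hostname: echo-1
242 Hostname: echo-2'
echo "$counts" | awk '{ total += $1; count[$3] = $1 }
  END { for (h in count) printf "%s: %.1f%%\n", h, 100 * count[h] / total }' | sort
```

For this run that works out to 51.6% for echo-1 and 48.4% for echo-2, close to the configured 50/50 split.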
5.4 Traffic Splitting -- 99% vs 1%
This time, we change the weights to 99 vs 1 and apply the configuration.
root@server:~# yq load-balancing-http-route.yaml
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: load-balancing-route
spec:
  parentRefs:
  - name: my-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /echo
    backendRefs:
    - kind: Service
      name: echo-1
      port: 8080
      weight: 99
    - kind: Service
      name: echo-2
      port: 8090
      weight: 1
root@server:~# kubectl apply -f load-balancing-http-route.yaml
httproute.gateway.networking.k8s.io/load-balancing-route configured
Let's run another loop and count the replies again with the following commands:
for _ in {1..500}; do
  curl -s -k "http://$GATEWAY/echo" >> curlresponses991.txt
done
Verify that the responses are spread with roughly 99% of them going to echo-1 and roughly 1% going to echo-2.
root@server:~# for _ in {1..500}; do
  curl -s -k "http://$GATEWAY/echo" >> curlresponses991.txt
done
root@server:~# grep -o "Hostname: echo-." curlresponses991.txt | sort | uniq -c
    498 Hostname: echo-1
      2 Hostname: echo-2
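With 500 requests and 99:1 weights, the expected counts can be worked out directly. Backend selection by the proxy is probabilistic, so an observed 498/2 is normal variation around these expected values; this is just a sketch of the arithmetic.

```shell
# Expected request counts for a 99:1 weighted split of 500 requests.
requests=500
w1=99; w2=1
total=$((w1 + w2))
echo "expected echo-1: $((requests * w1 / total))"   # 500 * 99 / 100 = 495
echo "expected echo-2: $((requests * w2 / total))"   # 500 * 1 / 100 = 5
```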
5.5 Quiz
× Ingress API is the long-term replacement for Gateway API
√ One of the benefits of Gateway APIs is that it is role-oriented.
× The Gateway and HTTPRoute configuration is all defined in a single API resource.
√ Cilium Gateway API requires Kube-Proxy Replacement.
× Cilium Gateway API does not support L7 HTTP Routing.
6. Exam
6.1 Task
To wrap up this lab, let's finish with a simple exam. We will reuse the services created earlier (called echo-1 and echo-2).
To pass the exam, we need:
- a service accessible through the Gateway API, and
- HTTP traffic reaching the service based on the PathPrefix /exam
- traffic split between echo-1 and echo-2 at a 75:25 ratio: 75% of the traffic will reach the echo-1 service, while the remaining 25% will reach the echo-2 service.
- Check the exam-gateway.yaml and exam-http-route.yaml files in the /root/exam folder. You will need to update the XXXX fields with the correct values.
- The services listen on different ports - you can check which ports they listen on with kubectl get svc, or look at the echo-servers.yaml manifest used to deploy them.
- Remember that you need to reference the HTTPRoute to its parent Gateway.
- Make sure to apply the manifests.
- Assuming $GATEWAY is the IP address assigned to the Gateway, `curl --fail -s http://$GATEWAY/exam | grep Hostname` should return output such as:
Hostname: echo-X-aaaaaaa-bbbbb
where X identifies the server we are talking to. If set up correctly, echo-1 should receive roughly 3 times as many queries as echo-2.
- As mentioned before, the Gateway API IP address is also the external IP of the automatically created LoadBalancer Service.
- The check script will verify that the curl succeeds, and that the weight assigned to echo-1 is exactly 75 while the weight assigned to echo-2 is set to 25.
6.2 Solution
Configure exam-gateway.yaml and exam-http-route.yaml according to the task:
root@server:~# k get svc| grep echo-
echo-1 ClusterIP 10.96.235.22 <none> 8080/TCP 18m
echo-2 ClusterIP 10.96.204.162 <none> 8090/TCP 18m
root@server:~# yq exam/exam-gateway.yaml
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: exam-gateway
spec:
  gatewayClassName: cilium
  listeners:
  - protocol: HTTP
    port: 80
    name: web-gw-echo
    allowedRoutes:
      namespaces:
        from: Same
root@server:~# yq exam/exam-http-route.yaml
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: exam-route-1
spec:
  parentRefs:
  - name: exam-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /exam
    backendRefs:
    - kind: Service
      name: echo-1
      port: 8080
      weight: 75
    - kind: Service
      name: echo-2
      port: 8090
      weight: 25
Deploy the Gateway and the route:
root@server:~# k apply -f exam/exam-gateway.yaml
gateway.gateway.networking.k8s.io/exam-gateway created
root@server:~# k apply -f exam/exam-http-route.yaml
httproute.gateway.networking.k8s.io/exam-route-1 created
Testing
Get the Gateway address:
GATEWAY=$(kubectl get gateway exam-gateway -o jsonpath='{.status.addresses[0].value}')
echo $GATEWAY
Test access:
curl --fail -s http://$GATEWAY/exam | grep Hostname
Ratio test:
for _ in {1..500}; do
  curl -s -k "http://$GATEWAY/exam" >> exam.txt
done
grep -o "Hostname: echo-." exam.txt | sort | uniq -c
The measured split matches our expectation: roughly 76% vs 24%.
New badge GET!