Cilium Hands-On Lab: Journey to Mastery --- 15. Isovalent Enterprise for Cilium: Network Policies
- 1. Environment Information
- 2. Deploying the Test Environment
- 3. Default Rules
- 3.1 Testing the Default Rules
- 3.2 Quiz
- 4. Network Policy Visualization
- 4.1 Creating a Policy Through the Visual Editor
- 4.2 Quiz
- 5. Testing the Policy
- 5.1 Applying the Policy
- 5.2 Observing Traffic
- 5.3 Observing with Hubble UI
- 5.4 Quiz
- 6. Updating Network Policies from Hubble Flows
- 6.1 Creating a New Policy
- 6.2 Saving and Applying the Policy
- 6.3 Testing the Policy
- 6.4 Testing Denied Traffic
- 6.5 Quiz
- 7. Boss Fight
- 7.1 The Challenge
- 7.2 The Solution
1. Environment Information
Lab environment URL:
https://isovalent.com/labs/cilium-network-policies/
The Kind cluster is deployed with 1 control-plane node and 2 workers:
root@server:~# yq /etc/kind/${KIND_CONFIG}.yaml
---
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      # localhost.run proxy
      - containerPort: 32042
        hostPort: 32042
      # Hubble relay
      - containerPort: 31234
        hostPort: 31234
      # Hubble UI
      - containerPort: 31235
        hostPort: 31235
  - role: worker
  - role: worker
networking:
  disableDefaultCNI: true
  kubeProxyMode: none
root@server:~# echo $HUBBLE_SERVER
localhost:31234
root@server:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kind-control-plane Ready control-plane 105m v1.31.0
kind-worker Ready <none> 105m v1.31.0
kind-worker2 Ready <none> 105m v1.31.0
2. Deploying the Test Environment
Let's deploy a simple demo application to explore the network security capabilities of Isovalent Enterprise for Cilium. We will create 3 namespaces and deploy the demo services into each of them:
kubectl create ns tenant-a
kubectl create ns tenant-b
kubectl create ns tenant-c
kubectl create -f https://docs.isovalent.com/public/tenant-services.yaml -n tenant-a
kubectl create -f https://docs.isovalent.com/public/tenant-services.yaml -n tenant-b
kubectl create -f https://docs.isovalent.com/public/tenant-services.yaml -n tenant-c
While the application starts up, let's check that all Cilium components are deployed correctly. Note that it may take a few seconds for the results to appear!
root@server:~# cilium status --wait
    /ˉˉ\
 /ˉˉ\__/ˉˉ\    Cilium:             OK
 \__/ˉˉ\__/    Operator:           OK
 /ˉˉ\__/ˉˉ\    Envoy DaemonSet:    OK
 \__/ˉˉ\__/    Hubble Relay:       OK
    \__/       ClusterMesh:        disabled

DaemonSet cilium Desired: 3, Ready: 3/3, Available: 3/3
DaemonSet cilium-envoy Desired: 3, Ready: 3/3, Available: 3/3
Deployment cilium-operator Desired: 2, Ready: 2/2, Available: 2/2
Deployment hubble-relay Desired: 1, Ready: 1/1, Available: 1/1
Deployment hubble-ui Desired: 1, Ready: 1/1, Available: 1/1
Containers:       cilium                   Running: 3
                  cilium-envoy             Running: 3
                  cilium-operator          Running: 2
                  clustermesh-apiserver
                  hubble-relay             Running: 1
                  hubble-ui                Running: 1
Cluster Pods: 11/11 managed by Cilium
Helm chart version:
Image versions    cilium             quay.io/isovalent/cilium:v1.17.1-cee.beta.1: 3
                  cilium-envoy       quay.io/cilium/cilium-envoy:v1.31.5-1739264036-958bef243c6c66fcfd73ca319f2eb49fff1eb2ae@sha256:fc708bd36973d306412b2e50c924cd8333de67e0167802c9b48506f9d772f521: 3
                  cilium-operator    quay.io/isovalent/operator-generic:v1.17.1-cee.beta.1: 2
                  hubble-relay       quay.io/isovalent/hubble-relay:v1.17.1-cee.beta.1: 1
                  hubble-ui          quay.io/isovalent/hubble-ui-enterprise-backend:v1.3.2: 1
                  hubble-ui          quay.io/isovalent/hubble-ui-enterprise:v1.3.2: 1
Configuration: Unsupported feature(s) enabled: EnvoyDaemonSet (Limited). Please contact Isovalent Support for more information on how to grant an exception.
If everything is working, the first three lines should report OK.
Some services might not be available yet; you can wait a moment and try again.
You can also verify that you can connect properly to Hubble Relay (using port 31234 in our lab):
root@server:~# hubble status
Healthcheck (via localhost:31234): Ok
Current/Max Flows: 3,419/12,285 (27.83%)
Flows/s: 20.35
Connected Nodes: 3/3
and that all nodes are properly managed by Hubble:
root@server:~# hubble list nodes
NAME STATUS AGE FLOWS/S CURRENT/MAX-FLOWS
kind-control-plane Connected 3m15s 1.88 454/4095 ( 11.09%)
kind-worker Connected 3m14s 2.74 628/4095 ( 15.34%)
kind-worker2 Connected 3m15s 13.11 2665/4095 ( 65.08%)
root@server:~#
Before moving on, let's check that all the pods have been deployed:
root@server:~# kubectl get pods --all-namespaces | grep "tenant"
tenant-a backend-service 1/1 Running 0 79s
tenant-a frontend-service 1/1 Running 0 79s
tenant-b backend-service 1/1 Running 0 78s
tenant-b frontend-service 1/1 Running 0 78s
tenant-c backend-service 1/1 Running 0 78s
tenant-c frontend-service 1/1 Running 0 78s
3. Default Rules
3.1 Testing the Default Rules
In tenant-a, we can connect to the various services with the help of curl.
First, let's see whether the frontend-service pod can reach the backend-service service:
root@server:~# kubectl exec -n tenant-a frontend-service -- \
  curl -sI backend-service.tenant-a
HTTP/1.1 200 OK
X-Powered-By: Express
Vary: Origin, Accept-Encoding
Access-Control-Allow-Credentials: true
Accept-Ranges: bytes
Cache-Control: public, max-age=0
Last-Modified: Thu, 05 Oct 2023 14:24:44 GMT
ETag: W/"809-18b003a03e0"
Content-Type: text/html; charset=UTF-8
Content-Length: 2057
Date: Fri, 30 May 2025 00:32:32 GMT
Connection: keep-alive
Keep-Alive: timeout=5
We receive an HTTP/1.1 200 OK response, indicating that the traffic flows without restriction.
Now, let's test traffic to the backend-service service in the cluster's tenant-b namespace:
root@server:~# kubectl exec -n tenant-a frontend-service -- \
  curl -sI backend-service.tenant-b
HTTP/1.1 200 OK
X-Powered-By: Express
Vary: Origin, Accept-Encoding
Access-Control-Allow-Credentials: true
Accept-Ranges: bytes
Cache-Control: public, max-age=0
Last-Modified: Thu, 05 Oct 2023 14:24:44 GMT
ETag: W/"809-18b003a03e0"
Content-Type: text/html; charset=UTF-8
Content-Length: 2057
Date: Fri, 30 May 2025 00:36:39 GMT
Connection: keep-alive
Keep-Alive: timeout=5
Again, the traffic is allowed. Finally, let's check access to a service outside the cluster, for example api.twitter.com:
root@server:~# kubectl exec -n tenant-a frontend-service -- \
  curl -sI api.twitter.com
HTTP/1.1 301 Moved Permanently
Date: Fri, 30 May 2025 00:35:07 GMT
Connection: keep-alive
location: https://api.twitter.com/
x-connection-hash: 1cc2475a532213170f64d1fe4a4c9001be570ef332f3191a37b4cdd7ce23b402
cf-cache-status: DYNAMIC
Set-Cookie: __cf_bm=jslvhdd_RmYVfybX6fDVQnNj_.sn_gbCdv5eIYmXBU8-1748565307-1.0.1.1-k4zQCxDXmmfqx9Kg6nGLrGhR.M1ixe8bZI2434SQ8IvmLwJrb5tnqb.36DjodWCSRl4Sz2y0.WQI4_3bHUp2EvupVvOr5aY7pRr43H92dvk; path=/; expires=Fri, 30-May-25 01:05:07 GMT; domain=.twitter.com; HttpOnly
Server: cloudflare tsa_b
CF-RAY: 947a2650cf8e9ee3-CDG
This returns a 301 response, which also shows that the traffic is flowing.
We can see that, by default, all traffic from pods in the tenant-a namespace is allowed:
- within the tenant-a namespace
- to services in other namespaces (e.g. tenant-b)
- to external endpoints outside the Kubernetes cluster (e.g. api.twitter.com)
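The quiz below also covers the ingress direction (traffic coming into tenant-a from other namespaces). The cross-namespace case can be verified the same way; for example, this sketch reuses the lab's own pods but is not one of the lab's prescribed steps:
kubectl exec -n tenant-b frontend-service -- curl -sI backend-service.tenant-a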
The Hubble CLI connects to the Hubble Relay component in the cluster and retrieves logs called "flows". This command-line tool then lets you visualize and filter those flows.
Visualize the TCP traffic sent by the frontend-service pod in the tenant-a namespace with:
root@server:~# hubble observe --from-pod tenant-a/frontend-service --protocol tcp
May 30 00:34:48.257: tenant-a/frontend-service (ID:61166) <> tenant-b/backend-service:80 (ID:32849) post-xlate-fwd TRANSLATED (TCP)
May 30 00:34:48.257: tenant-a/frontend-service:41388 (ID:61166) -> tenant-b/backend-service:80 (ID:32849) to-endpoint FORWARDED (TCP Flags: SYN)
May 30 00:34:48.257: tenant-a/frontend-service:41388 (ID:61166) -> tenant-b/backend-service:80 (ID:32849) to-endpoint FORWARDED (TCP Flags: ACK)
May 30 00:34:48.257: tenant-a/frontend-service:41388 (ID:61166) -> tenant-b/backend-service:80 (ID:32849) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
May 30 00:34:48.260: tenant-a/frontend-service:41388 (ID:61166) <> tenant-b/backend-service (ID:32849) pre-xlate-rev TRACED (TCP)
May 30 00:34:48.265: tenant-a/frontend-service:41388 (ID:61166) -> tenant-b/backend-service:80 (ID:32849) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
May 30 00:34:48.267: tenant-a/frontend-service:41388 (ID:61166) -> tenant-b/backend-service:80 (ID:32849) to-endpoint FORWARDED (TCP Flags: ACK)
May 30 00:35:07.000: tenant-a/frontend-service:51098 (ID:61166) -> 172.66.0.227:80 (world) to-stack FORWARDED (TCP Flags: SYN)
May 30 00:35:07.005: tenant-a/frontend-service:51098 (ID:61166) -> 172.66.0.227:80 (world) to-stack FORWARDED (TCP Flags: ACK)
May 30 00:35:07.005: tenant-a/frontend-service:51098 (ID:61166) -> 172.66.0.227:80 (world) to-stack FORWARDED (TCP Flags: ACK, PSH)
May 30 00:35:07.109: tenant-a/frontend-service:51098 (ID:61166) -> 172.66.0.227:80 (world) to-stack FORWARDED (TCP Flags: ACK, FIN)
May 30 00:35:07.114: tenant-a/frontend-service:51098 (ID:61166) -> 172.66.0.227:80 (world) to-stack FORWARDED (TCP Flags: ACK)
May 30 00:36:39.823: tenant-a/frontend-service (ID:61166) <> 10.96.16.75:80 (world) pre-xlate-fwd TRACED (TCP)
May 30 00:36:39.823: tenant-a/frontend-service (ID:61166) <> tenant-b/backend-service:80 (ID:32849) post-xlate-fwd TRANSLATED (TCP)
May 30 00:36:39.823: tenant-a/frontend-service:33420 (ID:61166) -> tenant-b/backend-service:80 (ID:32849) to-endpoint FORWARDED (TCP Flags: SYN)
May 30 00:36:39.823: tenant-a/frontend-service:33420 (ID:61166) -> tenant-b/backend-service:80 (ID:32849) to-endpoint FORWARDED (TCP Flags: ACK)
May 30 00:36:39.823: tenant-a/frontend-service:33420 (ID:61166) -> tenant-b/backend-service:80 (ID:32849) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
May 30 00:36:39.823: tenant-a/frontend-service:33420 (ID:61166) <> tenant-b/backend-service (ID:32849) pre-xlate-rev TRACED (TCP)
May 30 00:36:39.824: tenant-a/frontend-service:33420 (ID:61166) -> tenant-b/backend-service:80 (ID:32849) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
May 30 00:36:39.825: tenant-a/frontend-service:33420 (ID:61166) -> tenant-b/backend-service:80 (ID:32849) to-endpoint FORWARDED (TCP Flags: ACK)
You should see a list of flow logs, each containing:
- a timestamp
- the source pod, with its namespace, port, and Cilium identity
- the flow direction (->, <-, or sometimes <> when the direction cannot be determined)
- the destination pod, with its namespace, port, and Cilium identity
- the trace observation point (e.g. to-endpoint, to-stack, to-overlay)
- the verdict (e.g. FORWARDED or DROPPED)
- the protocol (e.g. UDP, TCP), optionally with flags
Identify the three requests in the flows (to backend-service.tenant-a, api.twitter.com, and backend-service.tenant-b).
These flows confirm that all three requests were forwarded to their destinations, since all of them are marked FORWARDED.
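If you prefer to isolate specific flows rather than scan the full list, the Hubble CLI filters can be combined; for example (a sketch using standard hubble observe flags):
hubble observe --from-pod tenant-a/frontend-service --to-pod tenant-b/backend-service --last 10
hubble observe --from-pod tenant-a/frontend-service --verdict FORWARDED --protocol tcp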
3.2 Quiz
This one is straightforward: all of the statements are true.
√ All traffic is allowed from the namespace's pod to other pods in the same namespace
√ All traffic is allowed from the namespace's pod to pods in other namespaces
√ All traffic is allowed from the namespace's pod to external addresses
√ All traffic is allowed from pods in other namespaces to the namespace's pods
√ All traffic is allowed from external addresses to the namespace's pods
4. Network Policy Visualization
4.1 Creating a Policy Through the Visual Editor
- Click the Policies menu item on the left.
- On the left side of that view there is now a dropdown menu for selecting a namespace.
- Select the tenant-a namespace. Since there are no network policies in this namespace yet, the main pane is empty and the policy editor shows a note saying "No policy to show".
- In the bottom right corner you can see a list of flows, all marked forwarded, corresponding to the traffic Hubble knows about for this namespace.
Since everything is currently allowed in this namespace, let's create a policy!
- Click the "Create empty policy" button.
- You will see a new policy in the main pane, and its YAML representation in the editor pane in the lower corner.
- The central box in the visualizer corresponds to the pods targeted by the policy. All arrows connected to this box are currently green, because the policy allows all traffic for now.
📝 Click the button in the top right corner of the central box and enter the following values:
- Policy name: default
- Policy namespace: tenant-a
- Endpoint selector: (leave empty; an empty pod selector matches all pods in the namespace)
Then click the green Save button below it.
This updates the YAML document in the editor pane, which should now read:
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: default
  namespace: tenant-a
spec:
  endpointSelector: {}
In the central box, click the 🔓 Ingress Default Allow and 🔓 Egress Default Allow buttons in the bottom left and bottom right corners respectively. This updates the policy with rules that will drop all inbound and outbound connections for any pod in the tenant-a namespace.
All arrows in the visualizer have now turned red, and the YAML spec should now be:
spec:
  endpointSelector: {}
  ingress:
    - {}
  egress:
    - {}
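As an aside (not used in this lab), the same default-deny posture can also be expressed as a standard Kubernetes NetworkPolicy, which Cilium enforces too; a minimal equivalent would be:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: tenant-a
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress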
Now that we have a default deny, we can start allowing specific traffic in the policy.
We want to allow the following communication patterns:
- ingress from workloads in the same namespace
- egress to workloads in the same namespace
- egress from workloads in the namespace to KubeDNS/CoreDNS, so that pods in the namespace can perform DNS requests
To do so, on the left (ingress) side of the visualizer, find the second box, titled {} In Namespace, and click the Any pod text. In the popup, click the option to allow traffic from any pod. This adds a green arrow from the {} In Namespace box to the central box, and adds a new ingress rule to the YAML policy manifest:
ingress:
  - fromEndpoints:
      - {}
Repeat this step for the In Namespace box on the right-hand (egress) side.
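This adds the matching egress rule to the manifest (it appears again in the full policy below):
egress:
  - toEndpoints:
      - {}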
Then, in the In Cluster box on the right-hand (egress) side, click the Kubernetes DNS section, and click the Allow rule button in the popup. Hover over the same Kubernetes DNS section again and toggle the DNS proxy option. This adds a whole block to the YAML manifest, allowing DNS (UDP/53) traffic to the kube-system/kube-dns pods.
There should now be three green arrows in the visualizer, and the YAML manifest should look like this:
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: default
  namespace: tenant-a
spec:
  endpointSelector: {}
  ingress:
    - fromEndpoints:
        - {}
  egress:
    - toEndpoints:
        - {}
    - toEndpoints:
        - matchLabels:
            io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: UDP
          rules:
            dns:
              - matchPattern: "*"
Now we want to save the policy to our cluster. In the editor pane, select all the YAML code, copy it, and save it to a file named tenant-a-default-policy.yaml.
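A quick way to sanity-check the saved file is the same yq tool the lab uses elsewhere:
yq tenant-a-default-policy.yaml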
4.2 Quiz
√ Cilium supports standard Kubernetes Network Policies
× The Hubble UI only allows you create Cilium Network Policies
√ Adding an empty ingress rule blocks incoming traffic
× Adding an empty egress rule blocks incoming traffic
√ Cilium Network Policies allow to filter DNS requests to Kube DNS
5. Testing the Policy
5.1 Applying the Policy
Apply the policy:
kubectl apply -f tenant-a-default-policy.yaml
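You can confirm that the policy object was created before testing (cnp is the short name for ciliumnetworkpolicies):
kubectl get cnp -n tenant-a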
Let's test connectivity between the frontend-service and backend-service pods in the tenant-a namespace:
root@server:~# kubectl exec -n tenant-a frontend-service -- \
  curl -sI backend-service.tenant-a
HTTP/1.1 200 OK
X-Powered-By: Express
Vary: Origin, Accept-Encoding
Access-Control-Allow-Credentials: true
Accept-Ranges: bytes
Cache-Control: public, max-age=0
Last-Modified: Thu, 05 Oct 2023 14:24:44 GMT
ETag: W/"809-18b003a03e0"
Content-Type: text/html; charset=UTF-8
Content-Length: 2057
Date: Fri, 30 May 2025 01:17:58 GMT
Connection: keep-alive
Keep-Alive: timeout=5
We can see that the command succeeded, since we received an HTTP reply, which shows that communication within the tenant-a namespace, as well as to KubeDNS, is working correctly.
We can visualize this traffic with hubble:
root@server:~# hubble observe --from-pod tenant-a/frontend-service
May 30 01:17:58.706: tenant-a/frontend-service:42252 (ID:61166) -> kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-endpoint FORWARDED (UDP)
May 30 01:17:58.706: tenant-a/frontend-service:42252 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr (ID:64246) pre-xlate-rev TRACED (UDP)
May 30 01:17:58.706: tenant-a/frontend-service:42252 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr (ID:64246) pre-xlate-rev TRACED (UDP)
May 30 01:17:58.707: tenant-a/frontend-service (ID:61166) <> 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
May 30 01:17:58.707: tenant-a/frontend-service (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) post-xlate-fwd TRANSLATED (UDP)
May 30 01:17:58.707: tenant-a/frontend-service:40047 (ID:61166) -> kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) policy-verdict:L3-L4 EGRESS ALLOWED (UDP)
May 30 01:17:58.707: tenant-a/frontend-service:40047 (ID:61166) -> kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-proxy FORWARDED (UDP)
May 30 01:17:58.707: tenant-a/frontend-service:40047 (ID:61166) <> 172.18.0.2 (host) pre-xlate-rev TRACED (UDP)
May 30 01:17:58.707: tenant-a/frontend-service:40047 (ID:61166) <> 172.18.0.2 (host) pre-xlate-rev TRACED (UDP)
May 30 01:17:58.707: tenant-a/frontend-service:40047 (ID:61166) -> kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) dns-request proxy FORWARDED (DNS Query backend-service.tenant-a.svc.cluster.local. AAAA)
May 30 01:17:58.707: tenant-a/frontend-service:40047 (ID:61166) -> kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) dns-request proxy FORWARDED (DNS Query backend-service.tenant-a.svc.cluster.local. A)
May 30 01:17:58.707: tenant-a/frontend-service:40047 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-overlay FORWARDED (UDP)
May 30 01:17:58.707: tenant-a/frontend-service:40047 (ID:61166) -> kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-endpoint FORWARDED (UDP)
May 30 01:17:58.707: tenant-a/frontend-service:40047 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr (ID:64246) pre-xlate-rev TRACED (UDP)
May 30 01:17:58.707: tenant-a/frontend-service:40047 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-overlay FORWARDED (UDP)
May 30 01:17:58.707: tenant-a/frontend-service:40047 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr (ID:64246) pre-xlate-rev TRACED (UDP)
May 30 01:17:58.707: tenant-a/frontend-service (ID:61166) <> backend-service.tenant-a.svc.cluster.local:80 (world) pre-xlate-fwd TRACED (TCP)
May 30 01:17:58.707: tenant-a/frontend-service (ID:61166) <> tenant-a/backend-service:80 (ID:12501) post-xlate-fwd TRANSLATED (TCP)
May 30 01:17:58.707: tenant-a/frontend-service:41038 (ID:61166) -> tenant-a/backend-service:80 (ID:12501) policy-verdict:L3-Only EGRESS ALLOWED (TCP Flags: SYN)
May 30 01:17:58.707: tenant-a/frontend-service:41038 (ID:61166) -> tenant-a/backend-service:80 (ID:12501) policy-verdict:L3-Only INGRESS ALLOWED (TCP Flags: SYN)
May 30 01:17:58.707: tenant-a/frontend-service:41038 (ID:61166) -> tenant-a/backend-service:80 (ID:12501) to-endpoint FORWARDED (TCP Flags: SYN)
May 30 01:17:58.708: tenant-a/frontend-service:41038 (ID:61166) -> tenant-a/backend-service:80 (ID:12501) to-endpoint FORWARDED (TCP Flags: ACK)
May 30 01:17:58.708: tenant-a/frontend-service:41038 (ID:61166) -> tenant-a/backend-service:80 (ID:12501) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
May 30 01:17:58.708: tenant-a/frontend-service:41038 (ID:61166) <> tenant-a/backend-service (ID:12501) pre-xlate-rev TRACED (TCP)
May 30 01:17:58.709: tenant-a/frontend-service:41038 (ID:61166) -> tenant-a/backend-service:80 (ID:12501) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
May 30 01:17:58.709: tenant-a/frontend-service:41038 (ID:61166) -> tenant-a/backend-service:80 (ID:12501) to-endpoint FORWARDED (TCP Flags: ACK)
EVENTS LOST: HUBBLE_RING_BUFFER CPU(0) 1
Now let's test traffic that should be denied by the policy.
Test the connection to api.twitter.com:
root@server:~# kubectl exec -n tenant-a frontend-service -- \
  curl -sI --max-time 5 api.twitter.com
command terminated with exit code 28
The connection now hangs (because it is blocked at L3/L4) and times out after 5 seconds.
Similarly, let's test an in-cluster service in another namespace:
root@server:~# kubectl exec -n tenant-a frontend-service -- \
  curl -sI --max-time 5 backend-service.tenant-b
command terminated with exit code 28
Again, the connection hangs and times out.
This confirms that traffic to external services, as well as to other Kubernetes namespaces, is correctly denied by the policy.
5.2 Observing Traffic
We can see all the requests in the tenant-a namespace:
root@server:~# hubble observe --namespace tenant-a
May 30 01:19:22.111: tenant-a/frontend-service:38600 (ID:61166) -> kube-system/coredns-6f6b679f8f-w4l8q:53 (ID:64246) to-endpoint FORWARDED (UDP)
May 30 01:19:22.111: tenant-a/frontend-service:38600 (ID:61166) <> kube-system/coredns-6f6b679f8f-w4l8q (ID:64246) pre-xlate-rev TRACED (UDP)
May 30 01:19:22.111: tenant-a/frontend-service:38600 (ID:61166) <> kube-system/coredns-6f6b679f8f-w4l8q (ID:64246) pre-xlate-rev TRACED (UDP)
May 30 01:19:22.114: tenant-a/frontend-service:38600 (ID:61166) <- kube-system/coredns-6f6b679f8f-w4l8q:53 (ID:64246) to-overlay FORWARDED (UDP)
May 30 01:19:22.114: tenant-a/frontend-service:49697 (ID:61166) -> kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-endpoint FORWARDED (UDP)
May 30 01:19:22.114: tenant-a/frontend-service:49697 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr (ID:64246) pre-xlate-rev TRACED (UDP)
May 30 01:19:22.114: tenant-a/frontend-service:49697 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr (ID:64246) pre-xlate-rev TRACED (UDP)
May 30 01:19:22.116: tenant-a/frontend-service:49697 (ID:61166) <- kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-overlay FORWARDED (UDP)
May 30 01:19:22.116: tenant-a/frontend-service:37254 (ID:61166) -> kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-endpoint FORWARDED (UDP)
May 30 01:19:22.116: tenant-a/frontend-service:37254 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr (ID:64246) pre-xlate-rev TRACED (UDP)
May 30 01:19:22.116: tenant-a/frontend-service:37254 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr (ID:64246) pre-xlate-rev TRACED (UDP)
May 30 01:19:22.117: tenant-a/frontend-service:37254 (ID:61166) <- kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-overlay FORWARDED (UDP)
May 30 01:20:01.119: tenant-a/frontend-service:37720 (ID:61166) -> kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-endpoint FORWARDED (UDP)
May 30 01:20:01.119: tenant-a/frontend-service:37720 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr (ID:64246) pre-xlate-rev TRACED (UDP)
May 30 01:20:01.119: tenant-a/frontend-service:37720 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr (ID:64246) pre-xlate-rev TRACED (UDP)
May 30 01:20:01.119: tenant-a/frontend-service:37720 (ID:61166) <- kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-overlay FORWARDED (UDP)
May 30 01:20:01.120: tenant-a/frontend-service:56405 (ID:61166) -> kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-endpoint FORWARDED (UDP)
May 30 01:20:01.120: tenant-a/frontend-service:56405 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr (ID:64246) pre-xlate-rev TRACED (UDP)
May 30 01:20:01.120: tenant-a/frontend-service:56405 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-overlay FORWARDED (UDP)
May 30 01:20:01.120: tenant-a/frontend-service:56405 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr (ID:64246) pre-xlate-rev TRACED (UDP)
May 30 01:20:01.120: tenant-a/frontend-service:56405 (ID:61166) <- kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-overlay FORWARDED (UDP)
May 30 01:20:01.120: tenant-a/frontend-service:56405 (ID:61166) <- kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-proxy FORWARDED (UDP)
May 30 01:20:01.120: tenant-a/frontend-service:56405 (ID:61166) <- kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) dns-response proxy FORWARDED (DNS Answer TTL: 4294967295 (Proxy backend-service.tenant-b.svc.cluster.local. AAAA))
May 30 01:20:01.120: tenant-a/frontend-service:56405 (ID:61166) <- kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) dns-response proxy FORWARDED (DNS Answer "10.96.16.75" TTL: 30 (Proxy backend-service.tenant-b.svc.cluster.local. A))
May 30 01:20:01.120: kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) <> tenant-a/frontend-service (ID:61166) pre-xlate-rev TRACED (UDP)
May 30 01:20:01.120: 10.96.0.10:53 (world) <> tenant-a/frontend-service (ID:61166) post-xlate-rev TRANSLATED (UDP)
May 30 01:20:01.120: kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) <> tenant-a/frontend-service (ID:61166) pre-xlate-rev TRACED (UDP)
May 30 01:20:01.120: 10.96.0.10:53 (world) <> tenant-a/frontend-service (ID:61166) post-xlate-rev TRANSLATED (UDP)
May 30 01:20:01.120: tenant-a/frontend-service (ID:61166) <> backend-service.tenant-b.svc.cluster.local:80 (world) pre-xlate-fwd TRACED (TCP)
May 30 01:20:01.120: tenant-a/frontend-service (ID:61166) <> tenant-b/backend-service:80 (ID:32849) post-xlate-fwd TRANSLATED (TCP)
May 30 01:20:01.120: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:20:01.120: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:20:02.172: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:20:02.172: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:20:03.196: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:20:03.196: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:20:04.220: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:20:04.220: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:20:05.244: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:20:05.244: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) Policy denied DROPPED (TCP Flags: SYN)
You can see flows marked either FORWARDED or DROPPED.
You can filter on this with the --verdict flag, for example by running:
root@server:~# hubble observe --namespace tenant-a --verdict DROPPED
May 30 01:19:22.126: tenant-a/frontend-service:47400 (ID:61166) <> api.twitter.com:80 (world) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:19:22.126: tenant-a/frontend-service:47400 (ID:61166) <> api.twitter.com:80 (world) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:19:23.132: tenant-a/frontend-service:47400 (ID:61166) <> api.twitter.com:80 (world) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:19:23.132: tenant-a/frontend-service:47400 (ID:61166) <> api.twitter.com:80 (world) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:19:24.156: tenant-a/frontend-service:47400 (ID:61166) <> api.twitter.com:80 (world) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:19:24.156: tenant-a/frontend-service:47400 (ID:61166) <> api.twitter.com:80 (world) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:19:25.619: tenant-a/frontend-service:44836 (ID:61166) <> api.twitter.com:80 (world) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:19:25.619: tenant-a/frontend-service:44836 (ID:61166) <> api.twitter.com:80 (world) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:19:26.652: tenant-a/frontend-service:44836 (ID:61166) <> api.twitter.com:80 (world) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:19:26.652: tenant-a/frontend-service:44836 (ID:61166) <> api.twitter.com:80 (world) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:20:01.120: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:20:01.120: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:20:02.172: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:20:02.172: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:20:03.196: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:20:03.196: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:20:04.220: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:20:04.220: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:20:05.244: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:20:05.244: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) Policy denied DROPPED (TCP Flags: SYN)
You should be able to see the requests that were dropped in the previous challenge.
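To watch drops live while re-running the curl commands, hubble observe can also follow the flow stream (its standard -f/--follow flag):
hubble observe --namespace tenant-a --verdict DROPPED -f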
5.3 Observing with Hubble UI
In the Hubble UI, click Connections and select the tenant-a namespace.
This shows how the Hubble UI makes it easy to understand service connectivity, and displays connection failures caused by network policy drops.
In the service map, a red line at the end of an arrow indicates dropped flows, while grey indicates successful flows.
The flows table at the bottom of the pane also shows a simplified view of the connections for this namespace, including when each flow was last seen.
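In this lab, the Hubble UI is exposed through the NodePort mapped in the kind configuration shown earlier (host port 31235). On a cluster without such a mapping, a common alternative is to let the Cilium CLI set up the port-forward for you:
cilium hubble ui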
5.4 Quiz
√ The Hubble CLI allows to observe all Kubernetes traffic
× Hubble (CLI & UI) always display external DNS names
√ The Hubble service map displays connection drops
√ The Hubble CLI output can be filtered by pod
6. Updating Network Policies from Hubble Flows
In the Hubble UI, go to the Policies view and select the tenant-a namespace.
In the bottom right, we can see that Hubble has identified a set of flows observed in the tenant-a namespace that are not allowed by the current policies, and has marked them as dropped.
6.1 Creating a New Policy
To allow the additional traffic, we could add rules to the existing network policy; here we will instead create a separate one.
Click the + New button in the top left corner of the editor pane.
📝 Then click the icon in the central box and rename the policy to extra. Click Save.
Look at the flows table in the bottom right pane. Two of the requests have a dropped verdict: the requests to backend-service in tenant-b and to api.twitter.com.
Click the row corresponding to backend-service in tenant-b, then select Add rule to policy. The YAML manifest is now updated to accept this traffic!
Repeat the operation to allow traffic to api.twitter.com.
This results in a fine-grained network policy that allows the required connections while preserving the default-deny posture of a Zero Trust network policy.
These changes are also reflected in the policy visualization. For example, the lower box on the right, named In Cluster, now shows another rule below the DNS rule.
Since we are working on a new network policy, the main pane only shows the rules of this specific policy.
At the bottom of the left column, you can see the list of policies for this namespace, with extra shown in bold, letting you switch between policies.
Toggle the Visualize all button at the top of the list. The main pane now shows the result of all the policies applied together.
6.2 Saving and Applying the Policy
Save the contents of the extra policy to a file named tenant-a-extra-policy.yaml:
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: extra
  namespace: tenant-a
spec:
  endpointSelector: {}
  egress:
    - toFQDNs:
        - matchName: api.twitter.com
      toPorts:
        - ports:
            - port: "80"
Apply the new policy:
kubectl apply -f tenant-a-extra-policy.yaml
6.3 Testing the Policy
Let's verify that our policy works as intended by running the same curl commands as before.
Test within the tenant:
root@server:~# kubectl exec -n tenant-a frontend-service -- \
  curl -sI --max-time 5 backend-service.tenant-a
HTTP/1.1 200 OK
X-Powered-By: Express
Vary: Origin, Accept-Encoding
Access-Control-Allow-Credentials: true
Accept-Ranges: bytes
Cache-Control: public, max-age=0
Last-Modified: Thu, 05 Oct 2023 14:24:44 GMT
ETag: W/"809-18b003a03e0"
Content-Type: text/html; charset=UTF-8
Content-Length: 2057
Date: Fri, 30 May 2025 01:43:50 GMT
Connection: keep-alive
Keep-Alive: timeout=5
Test the external service:
root@server:~# kubectl exec -n tenant-a frontend-service -- \
  curl -sI --max-time 5 api.twitter.com
HTTP/1.1 301 Moved Permanently
Date: Fri, 30 May 2025 01:44:05 GMT
Connection: keep-alive
location: https://api.twitter.com/
x-connection-hash: 4f4309bf4e4addf64c0e07844f4bb940265e0c08bfc6401ff84ab5dc0f1c11de
cf-cache-status: DYNAMIC
Set-Cookie: __cf_bm=jH7pT_6q5Ahf1bi5orj4hLi5ds4iwg.B59McL_hIEd4-1748569445-1.0.1.1-uUluHlV3wQ905kI4lnii8I9CfYkTZO.HCh0j53.4wH7NB0gdlqwxgrVb4rmXAxrR7YnCuEfYhgzDM8RalcHCWMcqJ484UI0ZINnbgz_Gov4; path=/; expires=Fri, 30-May-25 02:14:05 GMT; domain=.twitter.com; HttpOnly
Server: cloudflare tsa_b
CF-RAY: 947a8b56cce20358-CDG
Test the service in another tenant:
root@server:~# kubectl exec -n tenant-a frontend-service -- \
  curl -sI --max-time 5 backend-service.tenant-b
command terminated with exit code 28
This matches what we applied: the extra policy we saved only allows api.twitter.com, so traffic to backend-service in tenant-b is still denied.
6.4 Testing Denied Traffic
Let's check that other destinations are still denied.
Another external service:
root@server:~# kubectl exec -n tenant-a frontend-service -- \
  curl -sI --max-time 5 www.google.com
command terminated with exit code 28
Another internal service:
root@server:~# kubectl exec -n tenant-a frontend-service -- \
  curl -sI --max-time 5 backend-service.tenant-c
command terminated with exit code 28
As you can see, these are still unreachable. We can inspect the flows with:
root@server:~# hubble observe --namespace tenant-a
May 30 01:45:15.992: tenant-a/frontend-service:37240 (ID:5591) -> kube-system/coredns-6f6b679f8f-54xhj:53 (ID:5794) to-endpoint FORWARDED (UDP)
May 30 01:45:15.992: tenant-a/frontend-service:37240 (ID:5591) <> kube-system/coredns-6f6b679f8f-54xhj (ID:5794) pre-xlate-rev TRACED (UDP)
May 30 01:45:15.992: tenant-a/frontend-service:37240 (ID:5591) <> kube-system/coredns-6f6b679f8f-54xhj (ID:5794) pre-xlate-rev TRACED (UDP)
May 30 01:45:15.994: tenant-a/frontend-service:37240 (ID:5591) <- kube-system/coredns-6f6b679f8f-54xhj:53 (ID:5794) to-overlay FORWARDED (UDP)
May 30 01:45:15.994: tenant-a/frontend-service:33400 (ID:5591) -> kube-system/coredns-6f6b679f8f-wbmhb:53 (ID:5794) to-endpoint FORWARDED (UDP)
May 30 01:45:15.994: tenant-a/frontend-service:33400 (ID:5591) <> kube-system/coredns-6f6b679f8f-wbmhb (ID:5794) pre-xlate-rev TRACED (UDP)
May 30 01:45:15.994: tenant-a/frontend-service:33400 (ID:5591) <> kube-system/coredns-6f6b679f8f-wbmhb (ID:5794) pre-xlate-rev TRACED (UDP)
May 30 01:45:15.995: tenant-a/frontend-service:33400 (ID:5591) <- kube-system/coredns-6f6b679f8f-wbmhb:53 (ID:5794) to-overlay FORWARDED (UDP)
May 30 01:45:15.996: tenant-a/frontend-service:52771 (ID:5591) -> kube-system/coredns-6f6b679f8f-54xhj:53 (ID:5794) to-endpoint FORWARDED (UDP)
May 30 01:45:15.996: tenant-a/frontend-service:52771 (ID:5591) <> kube-system/coredns-6f6b679f8f-54xhj (ID:5794) pre-xlate-rev TRACED (UDP)
May 30 01:45:15.996: tenant-a/frontend-service:52771 (ID:5591) <> kube-system/coredns-6f6b679f8f-54xhj (ID:5794) pre-xlate-rev TRACED (UDP)
May 30 01:45:15.998: tenant-a/frontend-service:52771 (ID:5591) <- kube-system/coredns-6f6b679f8f-54xhj:53 (ID:5794) to-overlay FORWARDED (UDP)
May 30 01:45:29.005: tenant-a/frontend-service:59030 (ID:5591) -> kube-system/coredns-6f6b679f8f-54xhj:53 (ID:5794) to-endpoint FORWARDED (UDP)
May 30 01:45:29.005: tenant-a/frontend-service:59030 (ID:5591) <> kube-system/coredns-6f6b679f8f-54xhj (ID:5794) pre-xlate-rev TRACED (UDP)
May 30 01:45:29.005: tenant-a/frontend-service:59030 (ID:5591) <> kube-system/coredns-6f6b679f8f-54xhj (ID:5794) pre-xlate-rev TRACED (UDP)
May 30 01:45:29.006: tenant-a/frontend-service:59030 (ID:5591) <- kube-system/coredns-6f6b679f8f-54xhj:53 (ID:5794) to-overlay FORWARDED (UDP)
May 30 01:45:29.006: tenant-a/frontend-service:49123 (ID:5591) -> kube-system/coredns-6f6b679f8f-wbmhb:53 (ID:5794) to-endpoint FORWARDED (UDP)
May 30 01:45:29.006: tenant-a/frontend-service:49123 (ID:5591) <> kube-system/coredns-6f6b679f8f-wbmhb:53 (ID:5794) to-overlay FORWARDED (UDP)
May 30 01:45:29.006: tenant-a/frontend-service:49123 (ID:5591) <> kube-system/coredns-6f6b679f8f-wbmhb (ID:5794) pre-xlate-rev TRACED (UDP)
May 30 01:45:29.006: tenant-a/frontend-service:49123 (ID:5591) <> kube-system/coredns-6f6b679f8f-wbmhb (ID:5794) pre-xlate-rev TRACED (UDP)
May 30 01:45:29.007: tenant-a/frontend-service:49123 (ID:5591) <- kube-system/coredns-6f6b679f8f-wbmhb:53 (ID:5794) to-overlay FORWARDED (UDP)
May 30 01:45:29.007: tenant-a/frontend-service:49123 (ID:5591) <- kube-system/coredns-6f6b679f8f-wbmhb:53 (ID:5794) to-proxy FORWARDED (UDP)
May 30 01:45:29.007: tenant-a/frontend-service:49123 (ID:5591) <- kube-system/coredns-6f6b679f8f-wbmhb:53 (ID:5794) dns-response proxy FORWARDED (DNS Answer TTL: 4294967295 (Proxy backend-service.tenant-c.svc.cluster.local. AAAA))
May 30 01:45:29.007: tenant-a/frontend-service:49123 (ID:5591) <- kube-system/coredns-6f6b679f8f-wbmhb:53 (ID:5794) dns-response proxy FORWARDED (DNS Answer "10.96.102.16" TTL: 30 (Proxy backend-service.tenant-c.svc.cluster.local. A))
May 30 01:45:29.007: kube-system/coredns-6f6b679f8f-wbmhb:53 (ID:5794) <> tenant-a/frontend-service (ID:5591) pre-xlate-rev TRACED (UDP)
May 30 01:45:29.007: 10.96.0.10:53 (world) <> tenant-a/frontend-service (ID:5591) post-xlate-rev TRANSLATED (UDP)
May 30 01:45:29.007: kube-system/coredns-6f6b679f8f-wbmhb:53 (ID:5794) <> tenant-a/frontend-service (ID:5591) pre-xlate-rev TRACED (UDP)
May 30 01:45:29.007: 10.96.0.10:53 (world) <> tenant-a/frontend-service (ID:5591) post-xlate-rev TRANSLATED (UDP)
May 30 01:45:29.007: tenant-a/frontend-service (ID:5591) <> backend-service.tenant-c.svc.cluster.local:80 (world) pre-xlate-fwd TRACED (TCP)
May 30 01:45:29.007: tenant-a/frontend-service (ID:5591) <> tenant-c/backend-service:80 (ID:3374) post-xlate-fwd TRANSLATED (TCP)
May 30 01:45:29.007: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:29.007: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:45:30.051: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:30.051: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:45:31.075: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:31.075: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:45:32.099: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:32.099: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:45:33.123: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:33.123: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) Policy denied DROPPED (TCP Flags: SYN)
Show only the dropped traffic:
root@server:~# hubble observe --namespace tenant-a --verdict DROPPED
May 30 01:45:18.083: tenant-a/frontend-service:49892 (ID:5591) <> www.google.com:80 (world) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:18.083: tenant-a/frontend-service:49892 (ID:5591) <> www.google.com:80 (world) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:45:19.095: tenant-a/frontend-service:52790 (ID:5591) <> www.google.com:80 (world) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:19.095: tenant-a/frontend-service:52790 (ID:5591) <> www.google.com:80 (world) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:45:20.364: tenant-a/frontend-service:43234 (ID:5591) <> www.google.com:80 (world) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:20.364: tenant-a/frontend-service:43234 (ID:5591) <> www.google.com:80 (world) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:45:20.677: tenant-a/frontend-service:53370 (ID:5591) <> www.google.com:80 (world) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:20.677: tenant-a/frontend-service:53370 (ID:5591) <> www.google.com:80 (world) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:45:20.831: tenant-a/frontend-service:35940 (ID:5591) <> www.google.com:80 (world) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:20.831: tenant-a/frontend-service:35940 (ID:5591) <> www.google.com:80 (world) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:45:29.007: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:29.007: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:45:30.051: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:30.051: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:45:31.075: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:31.075: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:45:32.099: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:32.099: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:45:33.123: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:33.123: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) Policy denied DROPPED (TCP Flags: SYN)
6.5 Quiz
√ Rules can be added to an existing Network Policy
√ Rules can be added by creating a new Network Policy
√ The Hubble Network Policy editor allows to edit existing Kubernetes Network Policies
× Modifying Network Policies in Hubble automatically applies them to the cluster
× Hubble cannot let you view all policies applying to namespace at the same time
7. Boss Fight
7.1 The Challenge
For this practical exam, you need to:
- create a policy named default-exam in the tenant-b namespace (using the default-exam.yaml file)
- allow traffic from all pods in the tenant-b namespace to google.com on port 443
- allow Kubernetes DNS traffic in tenant-b
- allow traffic to the backend-service pods in the tenant-c namespace on port 80
- apply the policy
You can test with the following command:
kubectl exec -n tenant-b frontend-service -- curl -sI --max-time 5 backend-service.tenant-c
7.2 The Solution
Based on requirements 1 to 3, create default-exam.yaml:
root@server:~# yq default-exam.yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: default-exam
  namespace: tenant-b
spec:
  endpointSelector: {}
  ingress:
    - {}
  egress:
    - toFQDNs:
        - matchName: google.com
      toPorts:
        - ports:
            - port: "443"
    - toEndpoints:
        - matchLabels:
            any:io.kubernetes.pod.namespace: kube-system
            any:k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: UDP
          rules:
            dns:
              - matchPattern: "*"
    - toEndpoints:
        - {}
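Note the any: prefix on the label keys here; it is the label-source prefix emitted by the policy editor. The earlier tenant-a policy selected the same kube-dns pods with unprefixed keys, and the two forms should select the same endpoints, as unprefixed keys default to the any source:
- toEndpoints:
    - matchLabels:
        io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns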
Deploy the policy:
k apply -f default-exam.yaml
Test access:
root@server:~# k apply -f default-exam.yaml
ciliumnetworkpolicy.cilium.io/default-exam created
root@server:~# kubectl exec -n tenant-b frontend-service -- curl -sI --max-time 5 backend-service.tenant-c
command terminated with exit code 28
The request to tenant-c is still dropped, so add the dropped traffic to the policy (requirement 4).
Copy the resulting CiliumNetworkPolicy into the default-exam.yaml file.
Check the file contents, then apply the configuration:
root@server:~# yq default-exam.yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: default-exam
  namespace: tenant-b
spec:
  endpointSelector: {}
  # ingress:
  #   - {}
  egress:
    - toEndpoints:
        - matchLabels:
            io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: UDP
          rules:
            dns:
              - matchPattern: "*"
    - toFQDNs:
        - matchName: google.com
      toPorts:
        - ports:
            - port: "443"
    - toEndpoints:
        - matchLabels:
            k8s:app: backend-service
            k8s:io.kubernetes.pod.namespace: tenant-c
      toPorts:
        - ports:
            - port: "80"
    - toEndpoints:
        - {}
root@server:~# k apply -f default-exam.yaml
ciliumnetworkpolicy.cilium.io/default-exam configured
Test access to the tenant-c service again:
root@server:~# kubectl exec -n tenant-b frontend-service -- curl -sI --max-time 5 backend-service.tenant-c
HTTP/1.1 200 OK
X-Powered-By: Express
Vary: Origin, Accept-Encoding
Access-Control-Allow-Credentials: true
Accept-Ranges: bytes
Cache-Control: public, max-age=0
Last-Modified: Thu, 05 Oct 2023 14:24:44 GMT
ETag: W/"809-18b003a03e0"
Content-Type: text/html; charset=UTF-8
Content-Length: 2057
Date: Fri, 30 May 2025 01:57:28 GMT
Connection: keep-alive
Keep-Alive: timeout=5
This time it succeeds. Let's also test access to google.com:
root@server:~# kubectl exec -n tenant-b frontend-service -- curl -sI --max-time 5 https://google.com
HTTP/2 301
location: https://www.google.com/
content-type: text/html; charset=UTF-8
content-security-policy-report-only: object-src 'none';base-uri 'self';script-src 'nonce--v6mW7uA2tULH4CO1MAfKQ' 'strict-dynamic' 'report-sample' 'unsafe-eval' 'unsafe-inline' https: http:;report-uri https://csp.withgoogle.com/csp/gws/other-hp
date: Fri, 30 May 2025 02:03:29 GMT
expires: Sun, 29 Jun 2025 02:03:29 GMT
cache-control: public, max-age=2592000
server: gws
content-length: 220
x-xss-protection: 0
x-frame-options: SAMEORIGIN
That works too, as expected.
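Before submitting, you can double-check from the Hubble side that the allowed destinations no longer produce drops, reusing the same filter pattern as earlier (a sketch):
hubble observe --from-pod tenant-b/frontend-service --verdict DROPPED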
All right, let's submit it and see.
New badge acquired!