Cilium Hands-on Lab: Mastery Journey --- 15. Isovalent Enterprise for Cilium: Network Policies

  • 1. Environment Information
  • 2. Deploying the Test Environment
  • 3. Default Rules
    • 3.1 Testing the Default Rules
    • 3.2 Quiz
  • 4. Network Policy Visualization
    • 4.1 Creating a Policy Visually
    • 4.2 Quiz
  • 5. Testing the Policy
    • 5.1 Applying the Policy
    • 5.2 Observing Traffic
    • 5.3 Observing with the Hubble UI
    • 5.4 Quiz
  • 6. Updating Network Policies from Hubble Flows
    • 6.1 Creating a New Policy
    • 6.2 Saving and Applying the Policy
    • 6.3 Testing the Policy
    • 6.4 Testing the Deny Policy
    • 6.5 Quiz
  • 7. Boss Fight
    • 7.1 The Challenge
    • 7.2 Solution

1. Environment Information

Lab environment URL:

https://isovalent.com/labs/cilium-network-policies/

Kind deploys 1 control-plane node and 2 workers:

root@server:~# yq /etc/kind/${KIND_CONFIG}.yaml
---
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      # localhost.run proxy
      - containerPort: 32042
        hostPort: 32042
      # Hubble relay
      - containerPort: 31234
        hostPort: 31234
      # Hubble UI
      - containerPort: 31235
        hostPort: 31235
  - role: worker
  - role: worker
networking:
  disableDefaultCNI: true
  kubeProxyMode: none
root@server:~# echo $HUBBLE_SERVER
localhost:31234
root@server:~# kubectl get nodes
NAME                 STATUS   ROLES           AGE    VERSION
kind-control-plane   Ready    control-plane   105m   v1.31.0
kind-worker          Ready    <none>          105m   v1.31.0
kind-worker2         Ready    <none>          105m   v1.31.0

2. Deploying the Test Environment

Let's deploy a simple demo application to explore the network security capabilities of Isovalent Enterprise for Cilium. We will create 3 namespaces and deploy the same set of demo services into each of them:

kubectl create ns tenant-a
kubectl create ns tenant-b
kubectl create ns tenant-c
kubectl create -f https://docs.isovalent.com/public/tenant-services.yaml -n tenant-a
kubectl create -f https://docs.isovalent.com/public/tenant-services.yaml -n tenant-b
kubectl create -f https://docs.isovalent.com/public/tenant-services.yaml -n tenant-c
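
If you want to make sure the demo pods are up before continuing, a minimal sketch (the timeout value is arbitrary):

# Wait until every pod in the three tenant namespaces reports Ready
for ns in tenant-a tenant-b tenant-c; do
  kubectl wait --for=condition=Ready pod --all -n "$ns" --timeout=120s
done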

While the application starts, let's check that all the Cilium components are deployed correctly. Note that it may take a few seconds for the results to show!

root@server:~# cilium status --wait
    /ˉˉ\
 /ˉˉ\__/ˉˉ\    Cilium:             OK
 \__/ˉˉ\__/    Operator:           OK
 /ˉˉ\__/ˉˉ\    Envoy DaemonSet:    OK
 \__/ˉˉ\__/    Hubble Relay:       OK
    \__/       ClusterMesh:        disabled

DaemonSet              cilium                   Desired: 3, Ready: 3/3, Available: 3/3
DaemonSet              cilium-envoy             Desired: 3, Ready: 3/3, Available: 3/3
Deployment             cilium-operator          Desired: 2, Ready: 2/2, Available: 2/2
Deployment             hubble-relay             Desired: 1, Ready: 1/1, Available: 1/1
Deployment             hubble-ui                Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium                   Running: 3
                       cilium-envoy             Running: 3
                       cilium-operator          Running: 2
                       clustermesh-apiserver
                       hubble-relay             Running: 1
                       hubble-ui                Running: 1
Cluster Pods:          11/11 managed by Cilium
Helm chart version:    
Image versions         cilium             quay.io/isovalent/cilium:v1.17.1-cee.beta.1: 3
                       cilium-envoy       quay.io/cilium/cilium-envoy:v1.31.5-1739264036-958bef243c6c66fcfd73ca319f2eb49fff1eb2ae@sha256:fc708bd36973d306412b2e50c924cd8333de67e0167802c9b48506f9d772f521: 3
                       cilium-operator    quay.io/isovalent/operator-generic:v1.17.1-cee.beta.1: 2
                       hubble-relay       quay.io/isovalent/hubble-relay:v1.17.1-cee.beta.1: 1
                       hubble-ui          quay.io/isovalent/hubble-ui-enterprise-backend:v1.3.2: 1
                       hubble-ui          quay.io/isovalent/hubble-ui-enterprise:v1.3.2: 1
Configuration:                            Unsupported feature(s) enabled: EnvoyDaemonSet (Limited). Please contact Isovalent Support for more information on how to grant an exception.

If everything is fine, the first 3 lines should indicate OK. Some services may not be available yet; you can wait a moment and try again.
You can also verify that you can connect properly to Hubble Relay (using port 31234 in our lab):

root@server:~# hubble status
Healthcheck (via localhost:31234): Ok
Current/Max Flows: 3,419/12,285 (27.83%)
Flows/s: 20.35
Connected Nodes: 3/3

and that all nodes are properly managed in Hubble:

root@server:~# hubble list nodes
NAME                 STATUS      AGE     FLOWS/S   CURRENT/MAX-FLOWS
kind-control-plane   Connected   3m15s   1.88      454/4095 ( 11.09%)
kind-worker          Connected   3m14s   2.74      628/4095 ( 15.34%)
kind-worker2         Connected   3m15s   13.11     2665/4095 ( 65.08%)
root@server:~# 

Before moving on, let's check that all the pods have been deployed:

root@server:~# kubectl get pods --all-namespaces | grep "tenant"
tenant-a             backend-service                              1/1     Running   0          79s
tenant-a             frontend-service                             1/1     Running   0          79s
tenant-b             backend-service                              1/1     Running   0          78s
tenant-b             frontend-service                             1/1     Running   0          78s
tenant-c             backend-service                              1/1     Running   0          78s
tenant-c             frontend-service                             1/1     Running   0          78s

3. Default Rules

3.1 Testing the Default Rules

From tenant-a, we can connect to various services with the help of curl.

First, let's check whether the frontend-service pod can access the backend-service service:

root@server:~# kubectl exec -n tenant-a frontend-service -- curl -sI backend-service.tenant-a
HTTP/1.1 200 OK
X-Powered-By: Express
Vary: Origin, Accept-Encoding
Access-Control-Allow-Credentials: true
Accept-Ranges: bytes
Cache-Control: public, max-age=0
Last-Modified: Thu, 05 Oct 2023 14:24:44 GMT
ETag: W/"809-18b003a03e0"
Content-Type: text/html; charset=UTF-8
Content-Length: 2057
Date: Fri, 30 May 2025 00:32:32 GMT
Connection: keep-alive
Keep-Alive: timeout=5

We receive an HTTP/1.1 200 OK response, indicating that traffic flows without restriction.

Now, let's test traffic to the backend-service service in the tenant-b namespace:

root@server:~# kubectl exec -n tenant-a frontend-service -- curl -sI backend-service.tenant-b
HTTP/1.1 200 OK
X-Powered-By: Express
Vary: Origin, Accept-Encoding
Access-Control-Allow-Credentials: true
Accept-Ranges: bytes
Cache-Control: public, max-age=0
Last-Modified: Thu, 05 Oct 2023 14:24:44 GMT
ETag: W/"809-18b003a03e0"
Content-Type: text/html; charset=UTF-8
Content-Length: 2057
Date: Fri, 30 May 2025 00:36:39 GMT
Connection: keep-alive
Keep-Alive: timeout=5

Again, the traffic is allowed. Finally, let's check access to a service outside the cluster, for example api.twitter.com:

root@server:~# kubectl exec -n tenant-a frontend-service -- curl -sI api.twitter.com
HTTP/1.1 301 Moved Permanently
Date: Fri, 30 May 2025 00:35:07 GMT
Connection: keep-alive
location: https://api.twitter.com/
x-connection-hash: 1cc2475a532213170f64d1fe4a4c9001be570ef332f3191a37b4cdd7ce23b402
cf-cache-status: DYNAMIC
Set-Cookie: __cf_bm=jslvhdd_RmYVfybX6fDVQnNj_.sn_gbCdv5eIYmXBU8-1748565307-1.0.1.1-k4zQCxDXmmfqx9Kg6nGLrGhR.M1ixe8bZI2434SQ8IvmLwJrb5tnqb.36DjodWCSRl4Sz2y0.WQI4_3bHUp2EvupVvOr5aY7pRr43H92dvk; path=/; expires=Fri, 30-May-25 01:05:07 GMT; domain=.twitter.com; HttpOnly
Server: cloudflare tsa_b
CF-RAY: 947a2650cf8e9ee3-CDG

This request returns a 301 response, which again shows that traffic is flowing.

We can see that, by default, all traffic from pods in the tenant-a namespace is allowed:

  • within the tenant-a namespace
  • to services in other namespaces (e.g. tenant-b)
  • to external endpoints outside the Kubernetes cluster (e.g. api.twitter.com)

The Hubble CLI connects to the Hubble Relay component in the cluster and retrieves logs called "flows". This command-line tool then lets you visualize and filter those flows.

Visualize the TCP traffic sent by the frontend-service pod in the tenant-a namespace with:

root@server:~# hubble observe --from-pod tenant-a/frontend-service --protocol tcp
May 30 00:34:48.257: tenant-a/frontend-service (ID:61166) <> tenant-b/backend-service:80 (ID:32849) post-xlate-fwd TRANSLATED (TCP)
May 30 00:34:48.257: tenant-a/frontend-service:41388 (ID:61166) -> tenant-b/backend-service:80 (ID:32849) to-endpoint FORWARDED (TCP Flags: SYN)
May 30 00:34:48.257: tenant-a/frontend-service:41388 (ID:61166) -> tenant-b/backend-service:80 (ID:32849) to-endpoint FORWARDED (TCP Flags: ACK)
May 30 00:34:48.257: tenant-a/frontend-service:41388 (ID:61166) -> tenant-b/backend-service:80 (ID:32849) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
May 30 00:34:48.260: tenant-a/frontend-service:41388 (ID:61166) <> tenant-b/backend-service (ID:32849) pre-xlate-rev TRACED (TCP)
May 30 00:34:48.265: tenant-a/frontend-service:41388 (ID:61166) -> tenant-b/backend-service:80 (ID:32849) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
May 30 00:34:48.267: tenant-a/frontend-service:41388 (ID:61166) -> tenant-b/backend-service:80 (ID:32849) to-endpoint FORWARDED (TCP Flags: ACK)
May 30 00:35:07.000: tenant-a/frontend-service:51098 (ID:61166) -> 172.66.0.227:80 (world) to-stack FORWARDED (TCP Flags: SYN)
May 30 00:35:07.005: tenant-a/frontend-service:51098 (ID:61166) -> 172.66.0.227:80 (world) to-stack FORWARDED (TCP Flags: ACK)
May 30 00:35:07.005: tenant-a/frontend-service:51098 (ID:61166) -> 172.66.0.227:80 (world) to-stack FORWARDED (TCP Flags: ACK, PSH)
May 30 00:35:07.109: tenant-a/frontend-service:51098 (ID:61166) -> 172.66.0.227:80 (world) to-stack FORWARDED (TCP Flags: ACK, FIN)
May 30 00:35:07.114: tenant-a/frontend-service:51098 (ID:61166) -> 172.66.0.227:80 (world) to-stack FORWARDED (TCP Flags: ACK)
May 30 00:36:39.823: tenant-a/frontend-service (ID:61166) <> 10.96.16.75:80 (world) pre-xlate-fwd TRACED (TCP)
May 30 00:36:39.823: tenant-a/frontend-service (ID:61166) <> tenant-b/backend-service:80 (ID:32849) post-xlate-fwd TRANSLATED (TCP)
May 30 00:36:39.823: tenant-a/frontend-service:33420 (ID:61166) -> tenant-b/backend-service:80 (ID:32849) to-endpoint FORWARDED (TCP Flags: SYN)
May 30 00:36:39.823: tenant-a/frontend-service:33420 (ID:61166) -> tenant-b/backend-service:80 (ID:32849) to-endpoint FORWARDED (TCP Flags: ACK)
May 30 00:36:39.823: tenant-a/frontend-service:33420 (ID:61166) -> tenant-b/backend-service:80 (ID:32849) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
May 30 00:36:39.823: tenant-a/frontend-service:33420 (ID:61166) <> tenant-b/backend-service (ID:32849) pre-xlate-rev TRACED (TCP)
May 30 00:36:39.824: tenant-a/frontend-service:33420 (ID:61166) -> tenant-b/backend-service:80 (ID:32849) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
May 30 00:36:39.825: tenant-a/frontend-service:33420 (ID:61166) -> tenant-b/backend-service:80 (ID:32849) to-endpoint FORWARDED (TCP Flags: ACK)

You should see a list of flow logs, each containing:

  • a timestamp
  • the source pod, with its namespace, port, and Cilium identity
  • the flow direction (->, <-, or sometimes <> when the direction cannot be determined)
  • the destination pod, with its namespace, port, and Cilium identity
  • the trace observation point (e.g. to-endpoint, to-stack, to-overlay)
  • the verdict (e.g. FORWARDED, DROPPED)
  • the protocol (e.g. UDP, TCP), optionally with TCP flags

Identify the three requests in the flows (to backend-service.tenant-a, api.twitter.com, and backend-service.tenant-b).

These flows confirm that all three requests were forwarded to their destinations, since they are all marked FORWARDED.
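
If the flow list is noisy, the filters already used in this lab can be combined to zero in on part of the traffic; for example (the --to-namespace flag is assumed to be available in this Hubble CLI version):

# Only TCP flows from the frontend pod that were actually forwarded
hubble observe --from-pod tenant-a/frontend-service --protocol tcp --verdict FORWARDED
# Only flows from the frontend pod towards the tenant-b namespace
hubble observe --from-pod tenant-a/frontend-service --to-namespace tenant-b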

3.2 Quiz

This one is straightforward: all of the statements are true.

√	All traffic is allowed from the namespace's pod to other pods in the same namespace
√	All traffic is allowed from the namespace's pod to pods in other namespaces
√	All traffic is allowed from the namespace's pod to external addresses
√	All traffic is allowed from pods in other namespaces to the namespace's pods
√	All traffic is allowed from external addresses to the namespace's pods

4. Network Policy Visualization

4.1 Creating a Policy Visually

  1. Click the Policies menu item on the left.
  2. On the left side, there is now a dropdown menu for selecting a namespace.
  3. Select the tenant-a namespace. Since there are no network policies in this namespace yet, the main pane is empty, and the policy editor shows a note saying "No policy to show".
  4. In the bottom-right corner, you can see a list of flows, all marked forwarded, corresponding to the traffic Hubble knows about for this namespace.

Since everything is currently allowed in this namespace, let's create a policy!

  1. Click the "Create empty policy" button.

  2. You will see a new policy in the main pane, and its YAML representation in the editor pane in the lower corner.

  3. The central box in the visualization corresponds to the pods targeted by the policy. All arrows connected to this box are currently green, because the policy currently allows all traffic.

    📝 Click the button in the top-right corner of the central box and specify the following values:

    1. Policy name: default
    2. Policy namespace: tenant-a
    3. Endpoint selector: (leave empty; an empty pod selector matches all pods in the namespace)

    Click the green Save button below it.

This updates the YAML document in the editor pane, which should now read:

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: default
  namespace: tenant-a
spec:
  endpointSelector: {}

In the central box, click the 🔓 Ingress Default Allow and 🔓 Egress Default Allow buttons in the bottom-left and bottom-right corners respectively. This updates the policy with rules that will drop all inbound and outbound connections for any pod in the tenant-a namespace.

All arrows in the visualization have now turned red, and the YAML spec should now be:

spec:
  endpointSelector: {}
  ingress:
    - {}
  egress:
    - {}

Now that we have a default deny in place, we can start allowing specific traffic in the policy.

We want to allow the following communication patterns:

  • ingress from workloads in the same namespace
  • egress to workloads in the same namespace
  • egress from workloads in the namespace to KubeDNS/CoreDNS, so that pods in the namespace can perform DNS requests

To achieve this, on the left-hand (ingress) side of the visualization, find the second box, titled {} In Namespace, and click the Any pod text. In the popup, click the button to allow traffic from any pod. This adds a green arrow from the {} In Namespace box to the central box, and adds a new ingress rule to the YAML policy manifest:

  ingress:
    - fromEndpoints:
        - {}

Repeat this step for the In Namespace box on the right-hand (egress) side.

Then, in the In Cluster box on the right-hand (egress) side, click the Kubernetes DNS section and click the Allow rule button in the popup. Hover over the same Kubernetes DNS section again and toggle the DNS proxy option. This adds a whole block to the YAML manifest allowing DNS (UDP/53) traffic to the kube-system/kube-dns pods.

There should now be three green arrows in the visualization, and the YAML manifest should look like this:

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: default
  namespace: tenant-a
spec:
  endpointSelector: {}
  ingress:
    - fromEndpoints:
        - {}
  egress:
    - toEndpoints:
        - {}
    - toEndpoints:
        - matchLabels:
            io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: UDP
          rules:
            dns:
              - matchPattern: "*"

Now we want to save the policy to our cluster. In the editor pane, select all the YAML code and copy it.

Save it to a file named tenant-a-default-policy.yaml.
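
A simple way to do this from the terminal is a heredoc; the manifest below is the one generated by the editor above:

cat <<'EOF' > tenant-a-default-policy.yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: default
  namespace: tenant-a
spec:
  endpointSelector: {}
  ingress:
    - fromEndpoints:
        - {}
  egress:
    - toEndpoints:
        - {}
    - toEndpoints:
        - matchLabels:
            io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: UDP
          rules:
            dns:
              - matchPattern: "*"
EOF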

4.2 Quiz

√	Cilium supports standard Kubernetes Network Policies
×	The Hubble UI only allows you create Cilium Network Policies
√	Adding an empty ingress rule blocks incoming traffic
×	Adding an empty egress rule blocks incoming traffic
√	Cilium Network Policies allow to filter DNS requests to Kube DNS

5. Testing the Policy

5.1 Applying the Policy

Apply the policy:

kubectl apply -f tenant-a-default-policy.yaml
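
A quick sanity check that the policy object exists (cnp is the short name for CiliumNetworkPolicy resources):

kubectl get cnp -n tenant-a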

Let's test connectivity between the frontend-service and backend-service pods in the tenant-a namespace:

root@server:~# kubectl exec -n tenant-a frontend-service -- curl -sI backend-service.tenant-a
HTTP/1.1 200 OK
X-Powered-By: Express
Vary: Origin, Accept-Encoding
Access-Control-Allow-Credentials: true
Accept-Ranges: bytes
Cache-Control: public, max-age=0
Last-Modified: Thu, 05 Oct 2023 14:24:44 GMT
ETag: W/"809-18b003a03e0"
Content-Type: text/html; charset=UTF-8
Content-Length: 2057
Date: Fri, 30 May 2025 01:17:58 GMT
Connection: keep-alive
Keep-Alive: timeout=5

We can see that the command succeeded, since we receive an HTTP reply; this shows that communication within the tenant-a namespace, as well as with KubeDNS, is working as intended.

We can visualize this traffic with hubble:

root@server:~# hubble observe --from-pod tenant-a/frontend-service
May 30 01:17:58.706: tenant-a/frontend-service:42252 (ID:61166) -> kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-endpoint FORWARDED (UDP)
May 30 01:17:58.706: tenant-a/frontend-service:42252 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr (ID:64246) pre-xlate-rev TRACED (UDP)
May 30 01:17:58.706: tenant-a/frontend-service:42252 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr (ID:64246) pre-xlate-rev TRACED (UDP)
May 30 01:17:58.707: tenant-a/frontend-service (ID:61166) <> 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
May 30 01:17:58.707: tenant-a/frontend-service (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) post-xlate-fwd TRANSLATED (UDP)
May 30 01:17:58.707: tenant-a/frontend-service:40047 (ID:61166) -> kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) policy-verdict:L3-L4 EGRESS ALLOWED (UDP)
May 30 01:17:58.707: tenant-a/frontend-service:40047 (ID:61166) -> kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-proxy FORWARDED (UDP)
May 30 01:17:58.707: tenant-a/frontend-service:40047 (ID:61166) <> 172.18.0.2 (host) pre-xlate-rev TRACED (UDP)
May 30 01:17:58.707: tenant-a/frontend-service:40047 (ID:61166) <> 172.18.0.2 (host) pre-xlate-rev TRACED (UDP)
May 30 01:17:58.707: tenant-a/frontend-service:40047 (ID:61166) -> kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) dns-request proxy FORWARDED (DNS Query backend-service.tenant-a.svc.cluster.local. AAAA)
May 30 01:17:58.707: tenant-a/frontend-service:40047 (ID:61166) -> kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) dns-request proxy FORWARDED (DNS Query backend-service.tenant-a.svc.cluster.local. A)
May 30 01:17:58.707: tenant-a/frontend-service:40047 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-overlay FORWARDED (UDP)
May 30 01:17:58.707: tenant-a/frontend-service:40047 (ID:61166) -> kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-endpoint FORWARDED (UDP)
May 30 01:17:58.707: tenant-a/frontend-service:40047 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr (ID:64246) pre-xlate-rev TRACED (UDP)
May 30 01:17:58.707: tenant-a/frontend-service:40047 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-overlay FORWARDED (UDP)
May 30 01:17:58.707: tenant-a/frontend-service:40047 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr (ID:64246) pre-xlate-rev TRACED (UDP)
May 30 01:17:58.707: tenant-a/frontend-service (ID:61166) <> backend-service.tenant-a.svc.cluster.local:80 (world) pre-xlate-fwd TRACED (TCP)
May 30 01:17:58.707: tenant-a/frontend-service (ID:61166) <> tenant-a/backend-service:80 (ID:12501) post-xlate-fwd TRANSLATED (TCP)
May 30 01:17:58.707: tenant-a/frontend-service:41038 (ID:61166) -> tenant-a/backend-service:80 (ID:12501) policy-verdict:L3-Only EGRESS ALLOWED (TCP Flags: SYN)
May 30 01:17:58.707: tenant-a/frontend-service:41038 (ID:61166) -> tenant-a/backend-service:80 (ID:12501) policy-verdict:L3-Only INGRESS ALLOWED (TCP Flags: SYN)
May 30 01:17:58.707: tenant-a/frontend-service:41038 (ID:61166) -> tenant-a/backend-service:80 (ID:12501) to-endpoint FORWARDED (TCP Flags: SYN)
May 30 01:17:58.708: tenant-a/frontend-service:41038 (ID:61166) -> tenant-a/backend-service:80 (ID:12501) to-endpoint FORWARDED (TCP Flags: ACK)
May 30 01:17:58.708: tenant-a/frontend-service:41038 (ID:61166) -> tenant-a/backend-service:80 (ID:12501) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
May 30 01:17:58.708: tenant-a/frontend-service:41038 (ID:61166) <> tenant-a/backend-service (ID:12501) pre-xlate-rev TRACED (TCP)
May 30 01:17:58.709: tenant-a/frontend-service:41038 (ID:61166) -> tenant-a/backend-service:80 (ID:12501) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
May 30 01:17:58.709: tenant-a/frontend-service:41038 (ID:61166) -> tenant-a/backend-service:80 (ID:12501) to-endpoint FORWARDED (TCP Flags: ACK)
EVENTS LOST: HUBBLE_RING_BUFFER CPU(0) 1

Now let's test what the policy denies.

Test the connection to api.twitter.com:

root@server:~# kubectl exec -n tenant-a frontend-service -- curl -sI --max-time 5 api.twitter.com
command terminated with exit code 28

The connection now hangs (since it is blocked at L3/L4) and times out after 5 seconds.

Similarly, let's test an in-cluster service in another namespace:

root@server:~# kubectl exec -n tenant-a frontend-service -- curl -sI --max-time 5 backend-service.tenant-b
command terminated with exit code 28

Again, the connection hangs and times out.

This confirms that traffic to external services, as well as to other Kubernetes namespaces, is correctly denied by the policy.

5.2 Observing Traffic

We can see all the requests in the tenant-a namespace with:

root@server:~# hubble observe --namespace tenant-a
May 30 01:19:22.111: tenant-a/frontend-service:38600 (ID:61166) -> kube-system/coredns-6f6b679f8f-w4l8q:53 (ID:64246) to-endpoint FORWARDED (UDP)
May 30 01:19:22.111: tenant-a/frontend-service:38600 (ID:61166) <> kube-system/coredns-6f6b679f8f-w4l8q (ID:64246) pre-xlate-rev TRACED (UDP)
May 30 01:19:22.111: tenant-a/frontend-service:38600 (ID:61166) <> kube-system/coredns-6f6b679f8f-w4l8q (ID:64246) pre-xlate-rev TRACED (UDP)
May 30 01:19:22.114: tenant-a/frontend-service:38600 (ID:61166) <- kube-system/coredns-6f6b679f8f-w4l8q:53 (ID:64246) to-overlay FORWARDED (UDP)
May 30 01:19:22.114: tenant-a/frontend-service:49697 (ID:61166) -> kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-endpoint FORWARDED (UDP)
May 30 01:19:22.114: tenant-a/frontend-service:49697 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr (ID:64246) pre-xlate-rev TRACED (UDP)
May 30 01:19:22.114: tenant-a/frontend-service:49697 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr (ID:64246) pre-xlate-rev TRACED (UDP)
May 30 01:19:22.116: tenant-a/frontend-service:49697 (ID:61166) <- kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-overlay FORWARDED (UDP)
May 30 01:19:22.116: tenant-a/frontend-service:37254 (ID:61166) -> kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-endpoint FORWARDED (UDP)
May 30 01:19:22.116: tenant-a/frontend-service:37254 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr (ID:64246) pre-xlate-rev TRACED (UDP)
May 30 01:19:22.116: tenant-a/frontend-service:37254 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr (ID:64246) pre-xlate-rev TRACED (UDP)
May 30 01:19:22.117: tenant-a/frontend-service:37254 (ID:61166) <- kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-overlay FORWARDED (UDP)
May 30 01:20:01.119: tenant-a/frontend-service:37720 (ID:61166) -> kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-endpoint FORWARDED (UDP)
May 30 01:20:01.119: tenant-a/frontend-service:37720 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr (ID:64246) pre-xlate-rev TRACED (UDP)
May 30 01:20:01.119: tenant-a/frontend-service:37720 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr (ID:64246) pre-xlate-rev TRACED (UDP)
May 30 01:20:01.119: tenant-a/frontend-service:37720 (ID:61166) <- kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-overlay FORWARDED (UDP)
May 30 01:20:01.120: tenant-a/frontend-service:56405 (ID:61166) -> kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-endpoint FORWARDED (UDP)
May 30 01:20:01.120: tenant-a/frontend-service:56405 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr (ID:64246) pre-xlate-rev TRACED (UDP)
May 30 01:20:01.120: tenant-a/frontend-service:56405 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-overlay FORWARDED (UDP)
May 30 01:20:01.120: tenant-a/frontend-service:56405 (ID:61166) <> kube-system/coredns-6f6b679f8f-rlcsr (ID:64246) pre-xlate-rev TRACED (UDP)
May 30 01:20:01.120: tenant-a/frontend-service:56405 (ID:61166) <- kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-overlay FORWARDED (UDP)
May 30 01:20:01.120: tenant-a/frontend-service:56405 (ID:61166) <- kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) to-proxy FORWARDED (UDP)
May 30 01:20:01.120: tenant-a/frontend-service:56405 (ID:61166) <- kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) dns-response proxy FORWARDED (DNS Answer  TTL: 4294967295 (Proxy backend-service.tenant-b.svc.cluster.local. AAAA))
May 30 01:20:01.120: tenant-a/frontend-service:56405 (ID:61166) <- kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) dns-response proxy FORWARDED (DNS Answer "10.96.16.75" TTL: 30 (Proxy backend-service.tenant-b.svc.cluster.local. A))
May 30 01:20:01.120: kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) <> tenant-a/frontend-service (ID:61166) pre-xlate-rev TRACED (UDP)
May 30 01:20:01.120: 10.96.0.10:53 (world) <> tenant-a/frontend-service (ID:61166) post-xlate-rev TRANSLATED (UDP)
May 30 01:20:01.120: kube-system/coredns-6f6b679f8f-rlcsr:53 (ID:64246) <> tenant-a/frontend-service (ID:61166) pre-xlate-rev TRACED (UDP)
May 30 01:20:01.120: 10.96.0.10:53 (world) <> tenant-a/frontend-service (ID:61166) post-xlate-rev TRANSLATED (UDP)
May 30 01:20:01.120: tenant-a/frontend-service (ID:61166) <> backend-service.tenant-b.svc.cluster.local:80 (world) pre-xlate-fwd TRACED (TCP)
May 30 01:20:01.120: tenant-a/frontend-service (ID:61166) <> tenant-b/backend-service:80 (ID:32849) post-xlate-fwd TRANSLATED (TCP)
May 30 01:20:01.120: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:20:01.120: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:20:02.172: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:20:02.172: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:20:03.196: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:20:03.196: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:20:04.220: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:20:04.220: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:20:05.244: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:20:05.244: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) Policy denied DROPPED (TCP Flags: SYN)

Note the flows marked FORWARDED and DROPPED.

You can filter on the verdict with the --verdict flag, for example:

root@server:~# hubble observe --namespace tenant-a --verdict DROPPED
May 30 01:19:22.126: tenant-a/frontend-service:47400 (ID:61166) <> api.twitter.com:80 (world) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:19:22.126: tenant-a/frontend-service:47400 (ID:61166) <> api.twitter.com:80 (world) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:19:23.132: tenant-a/frontend-service:47400 (ID:61166) <> api.twitter.com:80 (world) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:19:23.132: tenant-a/frontend-service:47400 (ID:61166) <> api.twitter.com:80 (world) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:19:24.156: tenant-a/frontend-service:47400 (ID:61166) <> api.twitter.com:80 (world) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:19:24.156: tenant-a/frontend-service:47400 (ID:61166) <> api.twitter.com:80 (world) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:19:25.619: tenant-a/frontend-service:44836 (ID:61166) <> api.twitter.com:80 (world) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:19:25.619: tenant-a/frontend-service:44836 (ID:61166) <> api.twitter.com:80 (world) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:19:26.652: tenant-a/frontend-service:44836 (ID:61166) <> api.twitter.com:80 (world) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:19:26.652: tenant-a/frontend-service:44836 (ID:61166) <> api.twitter.com:80 (world) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:20:01.120: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:20:01.120: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:20:02.172: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:20:02.172: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:20:03.196: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:20:03.196: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:20:04.220: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:20:04.220: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:20:05.244: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:20:05.244: tenant-a/frontend-service:40778 (ID:61166) <> tenant-b/backend-service:80 (ID:32849) Policy denied DROPPED (TCP Flags: SYN)

You should be able to see the requests that were dropped in the previous challenge.

5.3 Observing with the Hubble UI

Click Connections and select the tenant-a namespace.

This shows how the Hubble UI makes it easy to understand service connectivity, and how it surfaces connection failures caused by network policy drops.

In the service map, a red line at the end of an arrow indicates dropped flows, while grey indicates successful flows.

The flows table at the bottom of the pane also shows a simplified view of the connections for this namespace, including when each flow was last seen.

5.4 Quiz

√	The Hubble CLI allows to observe all Kubernetes traffic
×	Hubble (CLI & UI) always display external DNS names
√	The Hubble service map displays connection drops
√	The Hubble CLI output can be filtered by pod

6. Updating Network Policies from Hubble Flows

In the Hubble UI, go to the Policies view and select the tenant-a namespace.

In the bottom-right corner, we can see that Hubble has identified a set of flows observed in the tenant-a namespace that are not allowed by the current policy, and has marked them as dropped.

6.1 Creating a New Policy

To allow this additional traffic, we can add rules to the existing network policy, or create a new policy alongside it; here we do the latter.

Click the + New button in the top-left corner of the editor pane.

📝 Then click the icon in the central box and rename the policy to extra. Click Save.

Look at the flows table in the bottom-right pane. Two of the requests have a dropped verdict: the requests to backend-service in tenant-b and to api.twitter.com.

Click the row corresponding to backend-service in tenant-b and select Add rule to policy. The YAML manifest is now updated to accept this traffic!

重復該作以允許流量 api.twitter.com

這將產生一個精細的網絡策略,該策略允許所需的連接,同時保留 Zero Trust 網絡策略的默認拒絕方面。

這些更改也會反映在策略可視化中。例如,選中右側名為 In Cluster 的下框,現在在 DNS 規則下方顯示了另一條規則。

Since we are working on a new network policy, the main pane only shows the rules of this specific policy.

At the bottom of the left column, you can see the list of policies for this namespace, with extra shown in bold, which lets you switch between policies.

Toggle the Visualize all button at the top of the list. The main pane now shows the combined result of all the policies applied together.

6.2 Saving and Applying the Policy

Save the contents of the extra policy to a file named tenant-a-extra-policy.yaml:

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: extra
  namespace: tenant-a
spec:
  endpointSelector: {}
  egress:
    - toFQDNs:
        - matchName: api.twitter.com
      toPorts:
        - ports:
            - port: "80"
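
As before, the manifest can be written to the file from the terminal with a heredoc:

cat <<'EOF' > tenant-a-extra-policy.yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: extra
  namespace: tenant-a
spec:
  endpointSelector: {}
  egress:
    - toFQDNs:
        - matchName: api.twitter.com
      toPorts:
        - ports:
            - port: "80"
EOF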

Apply the new rule:

kubectl apply -f tenant-a-extra-policy.yaml

6.3 Testing the Policy

Let's verify that our policies work as expected. Run the same curl commands as before.

Test within the tenant:

root@server:~# kubectl exec -n tenant-a frontend-service -- curl -sI --max-time 5 backend-service.tenant-a
HTTP/1.1 200 OK
X-Powered-By: Express
Vary: Origin, Accept-Encoding
Access-Control-Allow-Credentials: true
Accept-Ranges: bytes
Cache-Control: public, max-age=0
Last-Modified: Thu, 05 Oct 2023 14:24:44 GMT
ETag: W/"809-18b003a03e0"
Content-Type: text/html; charset=UTF-8
Content-Length: 2057
Date: Fri, 30 May 2025 01:43:50 GMT
Connection: keep-alive
Keep-Alive: timeout=5

Test the external service:

root@server:~# kubectl exec -n tenant-a frontend-service -- curl -sI --max-time 5 api.twitter.com
HTTP/1.1 301 Moved Permanently
Date: Fri, 30 May 2025 01:44:05 GMT
Connection: keep-alive
location: https://api.twitter.com/
x-connection-hash: 4f4309bf4e4addf64c0e07844f4bb940265e0c08bfc6401ff84ab5dc0f1c11de
cf-cache-status: DYNAMIC
Set-Cookie: __cf_bm=jH7pT_6q5Ahf1bi5orj4hLi5ds4iwg.B59McL_hIEd4-1748569445-1.0.1.1-uUluHlV3wQ905kI4lnii8I9CfYkTZO.HCh0j53.4wH7NB0gdlqwxgrVb4rmXAxrR7YnCuEfYhgzDM8RalcHCWMcqJ484UI0ZINnbgz_Gov4; path=/; expires=Fri, 30-May-25 02:14:05 GMT; domain=.twitter.com; HttpOnly
Server: cloudflare tsa_b
CF-RAY: 947a8b56cce20358-CDG

Test a service in another tenant:

root@server:~# kubectl exec -n tenant-a frontend-service -- curl -sI --max-time 5 backend-service.tenant-b
command terminated with exit code 28

This matches our expectations: the extra policy we applied only allows egress to api.twitter.com, so traffic to backend-service in tenant-b is still denied.

6.4 Testing the Deny Policy

Let's check that other destinations are still denied.

Another external service:

root@server:~# kubectl exec -n tenant-a frontend-service -- curl -sI --max-time 5 www.google.com
command terminated with exit code 28

Another internal service:

root@server:~# kubectl exec -n tenant-a frontend-service -- curl -sI --max-time 5 backend-service.tenant-c
command terminated with exit code 28

As you can see, these are still unreachable. We can inspect the flows with:

root@server:~# hubble observe --namespace tenant-a
May 30 01:45:15.992: tenant-a/frontend-service:37240 (ID:5591) -> kube-system/coredns-6f6b679f8f-54xhj:53 (ID:5794) to-endpoint FORWARDED (UDP)
May 30 01:45:15.992: tenant-a/frontend-service:37240 (ID:5591) <> kube-system/coredns-6f6b679f8f-54xhj (ID:5794) pre-xlate-rev TRACED (UDP)
May 30 01:45:15.992: tenant-a/frontend-service:37240 (ID:5591) <> kube-system/coredns-6f6b679f8f-54xhj (ID:5794) pre-xlate-rev TRACED (UDP)
May 30 01:45:15.994: tenant-a/frontend-service:37240 (ID:5591) <- kube-system/coredns-6f6b679f8f-54xhj:53 (ID:5794) to-overlay FORWARDED (UDP)
May 30 01:45:15.994: tenant-a/frontend-service:33400 (ID:5591) -> kube-system/coredns-6f6b679f8f-wbmhb:53 (ID:5794) to-endpoint FORWARDED (UDP)
May 30 01:45:15.994: tenant-a/frontend-service:33400 (ID:5591) <> kube-system/coredns-6f6b679f8f-wbmhb (ID:5794) pre-xlate-rev TRACED (UDP)
May 30 01:45:15.994: tenant-a/frontend-service:33400 (ID:5591) <> kube-system/coredns-6f6b679f8f-wbmhb (ID:5794) pre-xlate-rev TRACED (UDP)
May 30 01:45:15.995: tenant-a/frontend-service:33400 (ID:5591) <- kube-system/coredns-6f6b679f8f-wbmhb:53 (ID:5794) to-overlay FORWARDED (UDP)
May 30 01:45:15.996: tenant-a/frontend-service:52771 (ID:5591) -> kube-system/coredns-6f6b679f8f-54xhj:53 (ID:5794) to-endpoint FORWARDED (UDP)
May 30 01:45:15.996: tenant-a/frontend-service:52771 (ID:5591) <> kube-system/coredns-6f6b679f8f-54xhj (ID:5794) pre-xlate-rev TRACED (UDP)
May 30 01:45:15.996: tenant-a/frontend-service:52771 (ID:5591) <> kube-system/coredns-6f6b679f8f-54xhj (ID:5794) pre-xlate-rev TRACED (UDP)
May 30 01:45:15.998: tenant-a/frontend-service:52771 (ID:5591) <- kube-system/coredns-6f6b679f8f-54xhj:53 (ID:5794) to-overlay FORWARDED (UDP)
May 30 01:45:29.005: tenant-a/frontend-service:59030 (ID:5591) -> kube-system/coredns-6f6b679f8f-54xhj:53 (ID:5794) to-endpoint FORWARDED (UDP)
May 30 01:45:29.005: tenant-a/frontend-service:59030 (ID:5591) <> kube-system/coredns-6f6b679f8f-54xhj (ID:5794) pre-xlate-rev TRACED (UDP)
May 30 01:45:29.005: tenant-a/frontend-service:59030 (ID:5591) <> kube-system/coredns-6f6b679f8f-54xhj (ID:5794) pre-xlate-rev TRACED (UDP)
May 30 01:45:29.006: tenant-a/frontend-service:59030 (ID:5591) <- kube-system/coredns-6f6b679f8f-54xhj:53 (ID:5794) to-overlay FORWARDED (UDP)
May 30 01:45:29.006: tenant-a/frontend-service:49123 (ID:5591) -> kube-system/coredns-6f6b679f8f-wbmhb:53 (ID:5794) to-endpoint FORWARDED (UDP)
May 30 01:45:29.006: tenant-a/frontend-service:49123 (ID:5591) <> kube-system/coredns-6f6b679f8f-wbmhb:53 (ID:5794) to-overlay FORWARDED (UDP)
May 30 01:45:29.006: tenant-a/frontend-service:49123 (ID:5591) <> kube-system/coredns-6f6b679f8f-wbmhb (ID:5794) pre-xlate-rev TRACED (UDP)
May 30 01:45:29.006: tenant-a/frontend-service:49123 (ID:5591) <> kube-system/coredns-6f6b679f8f-wbmhb (ID:5794) pre-xlate-rev TRACED (UDP)
May 30 01:45:29.007: tenant-a/frontend-service:49123 (ID:5591) <- kube-system/coredns-6f6b679f8f-wbmhb:53 (ID:5794) to-overlay FORWARDED (UDP)
May 30 01:45:29.007: tenant-a/frontend-service:49123 (ID:5591) <- kube-system/coredns-6f6b679f8f-wbmhb:53 (ID:5794) to-proxy FORWARDED (UDP)
May 30 01:45:29.007: tenant-a/frontend-service:49123 (ID:5591) <- kube-system/coredns-6f6b679f8f-wbmhb:53 (ID:5794) dns-response proxy FORWARDED (DNS Answer  TTL: 4294967295 (Proxy backend-service.tenant-c.svc.cluster.local. AAAA))
May 30 01:45:29.007: tenant-a/frontend-service:49123 (ID:5591) <- kube-system/coredns-6f6b679f8f-wbmhb:53 (ID:5794) dns-response proxy FORWARDED (DNS Answer "10.96.102.16" TTL: 30 (Proxy backend-service.tenant-c.svc.cluster.local. A))
May 30 01:45:29.007: kube-system/coredns-6f6b679f8f-wbmhb:53 (ID:5794) <> tenant-a/frontend-service (ID:5591) pre-xlate-rev TRACED (UDP)
May 30 01:45:29.007: 10.96.0.10:53 (world) <> tenant-a/frontend-service (ID:5591) post-xlate-rev TRANSLATED (UDP)
May 30 01:45:29.007: kube-system/coredns-6f6b679f8f-wbmhb:53 (ID:5794) <> tenant-a/frontend-service (ID:5591) pre-xlate-rev TRACED (UDP)
May 30 01:45:29.007: 10.96.0.10:53 (world) <> tenant-a/frontend-service (ID:5591) post-xlate-rev TRANSLATED (UDP)
May 30 01:45:29.007: tenant-a/frontend-service (ID:5591) <> backend-service.tenant-c.svc.cluster.local:80 (world) pre-xlate-fwd TRACED (TCP)
May 30 01:45:29.007: tenant-a/frontend-service (ID:5591) <> tenant-c/backend-service:80 (ID:3374) post-xlate-fwd TRANSLATED (TCP)
May 30 01:45:29.007: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:29.007: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:45:30.051: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:30.051: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:45:31.075: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:31.075: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:45:32.099: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:32.099: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:45:33.123: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:33.123: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) Policy denied DROPPED (TCP Flags: SYN)

Show only the dropped traffic:

root@server:~# hubble observe --namespace tenant-a --verdict DROPPED
May 30 01:45:18.083: tenant-a/frontend-service:49892 (ID:5591) <> www.google.com:80 (world) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:18.083: tenant-a/frontend-service:49892 (ID:5591) <> www.google.com:80 (world) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:45:19.095: tenant-a/frontend-service:52790 (ID:5591) <> www.google.com:80 (world) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:19.095: tenant-a/frontend-service:52790 (ID:5591) <> www.google.com:80 (world) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:45:20.364: tenant-a/frontend-service:43234 (ID:5591) <> www.google.com:80 (world) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:20.364: tenant-a/frontend-service:43234 (ID:5591) <> www.google.com:80 (world) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:45:20.677: tenant-a/frontend-service:53370 (ID:5591) <> www.google.com:80 (world) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:20.677: tenant-a/frontend-service:53370 (ID:5591) <> www.google.com:80 (world) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:45:20.831: tenant-a/frontend-service:35940 (ID:5591) <> www.google.com:80 (world) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:20.831: tenant-a/frontend-service:35940 (ID:5591) <> www.google.com:80 (world) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:45:29.007: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:29.007: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:45:30.051: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:30.051: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:45:31.075: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:31.075: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:45:32.099: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:32.099: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) Policy denied DROPPED (TCP Flags: SYN)
May 30 01:45:33.123: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) policy-verdict:none EGRESS DENIED (TCP Flags: SYN)
May 30 01:45:33.123: tenant-a/frontend-service:44084 (ID:5591) <> tenant-c/backend-service:80 (ID:3374) Policy denied DROPPED (TCP Flags: SYN)

6.5 Quiz

√	Rules can be added to an existing Network Policy
√	Rules can be added by creating a new Network Policy
√	The Hubble Network Policy editor allows to edit existing Kubernetes Network Policies
×	Modifying Network Policies in Hubble automatically applies them to the cluster
×	Hubble cannot let you view all policies applying to namespace at the same time

7. Boss Fight

7.1 The Challenge

For this practical exam, you need to:

  1. Create a policy named default-exam in the tenant-b namespace (using the default-exam.yaml file)
  2. Allow traffic from all pods in the tenant-b namespace to google.com on port 443
  3. Allow Kubernetes DNS traffic in tenant-b
  4. Allow traffic to the backend-service pod in the tenant-c namespace on port 80
  5. Apply the policy

You can test with the following command:

kubectl exec -n tenant-b frontend-service -- curl -sI --max-time 5 backend-service.tenant-c
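
The other requirements can be spot-checked the same way; for example, requirement 2 (and, implicitly, DNS resolution) with:

kubectl exec -n tenant-b frontend-service -- curl -sI --max-time 5 https://google.com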

7.2 Solution

Create default-exam.yaml from requirements 1-3:

root@server:~# yq default-exam.yaml 
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: default-exam
  namespace: tenant-b
spec:
  endpointSelector: {}
  ingress:
    - {}
  egress:
    - toFQDNs:
        - matchName: google.com
      toPorts:
        - ports:
            - port: "443"
    - toEndpoints:
        - matchLabels:
            any:io.kubernetes.pod.namespace: kube-system
            any:k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: UDP
          rules:
            dns:
              - matchPattern: "*"
    - toEndpoints:
        - {}

Deploy the policy:

k apply -f default-exam.yaml

Test access:

root@server:~# k apply -f default-exam.yaml 
ciliumnetworkpolicy.cilium.io/default-exam created
root@server:~# kubectl exec -n tenant-b frontend-service -- curl -sI --max-time 5 backend-service.tenant-c
command terminated with exit code 28
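
Before editing the policy, the dropped flows can be confirmed from the CLI with the same filter used earlier for tenant-a:

hubble observe --namespace tenant-b --verdict DROPPED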

In the Hubble UI, add the dropped flows to the policy:

Copy the resulting CiliumNetworkPolicy into the file default-exam.yaml.

Confirm the file contents, then apply the configuration:

root@server:~# yq default-exam.yaml 
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: default-exam
  namespace: tenant-b
spec:
  endpointSelector: {}
#  ingress:
#    - {}
  egress:
    - toEndpoints:
        - matchLabels:
            io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: UDP
          rules:
            dns:
              - matchPattern: "*"
    - toFQDNs:
        - matchName: google.com
      toPorts:
        - ports:
            - port: "443"
    - toEndpoints:
        - matchLabels:
            k8s:app: backend-service
            k8s:io.kubernetes.pod.namespace: tenant-c
      toPorts:
        - ports:
            - port: "80"
    - toEndpoints:
        - {}
root@server:~# k apply -f default-exam.yaml 
ciliumnetworkpolicy.cilium.io/default-exam configured

Test access to the tenant-c service again:

root@server:~# kubectl exec -n tenant-b frontend-service -- curl -sI --max-time 5 backend-service.tenant-c
HTTP/1.1 200 OK
X-Powered-By: Express
Vary: Origin, Accept-Encoding
Access-Control-Allow-Credentials: true
Accept-Ranges: bytes
Cache-Control: public, max-age=0
Last-Modified: Thu, 05 Oct 2023 14:24:44 GMT
ETag: W/"809-18b003a03e0"
Content-Type: text/html; charset=UTF-8
Content-Length: 2057
Date: Fri, 30 May 2025 01:57:28 GMT
Connection: keep-alive
Keep-Alive: timeout=5

It clearly succeeds. Let's also test access to google.com:

root@server:~# kubectl exec -n tenant-b frontend-service -- curl -sI --max-time 5 https://google.com
HTTP/2 301 
location: https://www.google.com/
content-type: text/html; charset=UTF-8
content-security-policy-report-only: object-src 'none';base-uri 'self';script-src 'nonce--v6mW7uA2tULH4CO1MAfKQ' 'strict-dynamic' 'report-sample' 'unsafe-eval' 'unsafe-inline' https: http:;report-uri https://csp.withgoogle.com/csp/gws/other-hp
date: Fri, 30 May 2025 02:03:29 GMT
expires: Sun, 29 Jun 2025 02:03:29 GMT
cache-control: public, max-age=2592000
server: gws
content-length: 220
x-xss-protection: 0
x-frame-options: SAMEORIGIN

That works as well.

Alright, let's submit and see.

New badge acquired!
