Docker world: Container
K8s world: Pod → wraps 1 or more Containers
Containers in the same Pod share the network (same IP, reachable via localhost) and storage. In most cases, one Pod runs exactly one container.
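To illustrate the sharing, here is a sketch of a two-container Pod (the logging sidecar is hypothetical, not part of this lab): both containers live in one network namespace, so the sidecar reaches the main container at localhost:8080.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: go-api-with-sidecar      # hypothetical example
spec:
  containers:
  - name: go-api
    image: go-api:0.0.2
    ports:
    - containerPort: 8080
  - name: log-agent              # sidecar: same IP as go-api, talks over localhost
    image: busybox:1.36
    command: ["sh", "-c", "while true; do wget -qO- localhost:8080; sleep 10; done"]
```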
Why not manage Containers directly
The smallest unit K8s manages is the Pod, not the container:
Scheduling: the Scheduler places the whole Pod onto a Node
IP: assigned to the Pod; containers in the same Pod share it
Lifecycle: when a Pod dies, every container inside it dies with it
Pod YAML
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: go-api
  labels:
    app: go-api            # label, used by Service to find this Pod
spec:
  containers:
  - name: go-api
    image: go-api:0.0.2
    ports:
    - containerPort: 8080
```
```shell
kubectl apply -f k8s/pod.yaml
kubectl describe pod go-api
```
How a Pod comes to life
The Events section of kubectl describe pod records the full sequence:
```
Events:
  Scheduled → default-scheduler → Successfully assigned default/go-api to devops-lab-control-plane
  Pulled    → kubelet           → Container image "go-api:0.0.2" already present on machine
  Created   → kubelet           → Container created
  Started   → kubelet           → Container started
```
```shell
kubectl get pod go-api -o wide
# IP: 10.244.0.5
kubectl delete pod go-api
kubectl get pods
# No resources found — gone forever
kubectl apply -f k8s/pod.yaml
kubectl get pod go-api -o wide
# IP: 10.244.0.6 — IP changed
```
Two key observations:
Delete the Pod, and nothing rebuilds it for you
Re-apply, and the Pod gets a different IP
This is why you never run bare Pods directly in production; you need a Deployment to manage them.
K8s Namespace ≠ Linux Namespace
Same name, completely different things:
Linux Namespace (the Phase 1 material): a kernel-level isolation mechanism (PID, Network, Mount…)
K8s Namespace: logical grouping, like folders that categorize the resources inside a cluster
```shell
kubectl get namespaces
# default      ← your Pod lives here
# kube-system  ← K8s internal components
```
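A Namespace is itself a K8s resource. As a sketch, you could create one and place a Pod in it like this (the staging name is just an example):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: staging              # example name
---
apiVersion: v1
kind: Pod
metadata:
  name: go-api
  namespace: staging         # without this field the Pod lands in "default"
spec:
  containers:
  - name: go-api
    image: go-api:0.0.2
```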
Deployment — the Controller that manages Pods
A Deployment solves the two problems of bare Pods: once deleted they are gone, and there is no way to do zero-downtime updates.
Three-layer architecture
```
Deployment (you define: I want 3 go-api instances)
  └── ReplicaSet (auto-created, maintains the count)
        ├── Pod 1
        ├── Pod 2
        └── Pod 3
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-api
spec:
  replicas: 3                # always maintain 3 Pods
  selector:
    matchLabels:
      app: go-api            # identifies which Pods belong to this Deployment
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # at most 1 extra Pod during update
      maxUnavailable: 0      # no Pod downtime allowed (zero-downtime)
  template:                  # Pod template
    metadata:
      labels:
        app: go-api
    spec:
      containers:
      - name: go-api
        image: go-api:0.0.2
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: 50m         # minimum 0.05 CPU cores
            memory: 32Mi     # minimum 32MB memory
          limits:
            cpu: 100m        # maximum 0.1 CPU cores
            memory: 64Mi     # maximum 64MB memory
```
```shell
kubectl get deployment
# go-api
kubectl get rs
# go-api-668dcc5dd         ← Deployment name + hash
kubectl get pods
# go-api-668dcc5dd-72s9z   ← ReplicaSet name + random suffix
# go-api-668dcc5dd-fbz2q
# go-api-668dcc5dd-zbk6l
```
The names alone reveal the Deployment → ReplicaSet → Pod hierarchy.
Reconciliation Loop: self-healing
```shell
kubectl delete pod go-api-668dcc5dd-72s9z
kubectl get pods
# go-api-668dcc5dd-5gcng   ← brand new, AGE is seconds
# go-api-668dcc5dd-fbz2q   ← unchanged
# go-api-668dcc5dd-zbk6l   ← unchanged
```
Completely unlike a bare Pod: delete one, and a replacement appears immediately:
```
one Pod deleted → actual count = 2
        ↓
ReplicaSet detects: desired=3, actual=2, short by 1
        ↓
auto-creates a new Pod to compensate
        ↓
actual count back to 3
```
This loop runs continuously, and it does not matter how a Pod dies (manual deletion, crash, Node failure): it gets replaced.
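The loop above can be sketched in a few lines of Python. This is illustrative pseudologic only, not the actual controller code, and the Pod names are made up:

```python
import uuid

def reconcile(desired: int, pods: set) -> set:
    """Bring the actual Pod set back to the desired count."""
    pods = set(pods)                     # work on a copy
    while len(pods) < desired:           # short of Pods → create replacements
        pods.add(f"go-api-{uuid.uuid4().hex[:5]}")
    while len(pods) > desired:           # too many → delete extras
        pods.pop()
    return pods

pods = {"go-api-a", "go-api-b", "go-api-c"}
pods.discard("go-api-a")                 # one Pod dies (delete, crash, Node failure)
pods = reconcile(3, pods)
print(len(pods))                         # → 3
```

The real ReplicaSet controller does the same comparison (desired vs. actual) every time the cluster state changes, which is why the replacement appears within seconds.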
Rolling Update
When you update the image version, the Deployment manages two ReplicaSets at the same time:
```
Deployment
  ├── ReplicaSet v1 (old version, scaling down)
  │     └── Pod (old)
  └── ReplicaSet v2 (new version, scaling up)
        ├── Pod (new)
        ├── Pod (new)
        └── Pod (new)
```
maxSurge: 1, maxUnavailable: 0 means: create a new Pod first and confirm it is Ready, then kill an old one, which achieves zero downtime.
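The arithmetic behind these two knobs can be sketched as follows; this is an illustration of the bounds they impose, not the real controller implementation:

```python
def rolling_update_bounds(replicas: int, max_surge: int, max_unavailable: int):
    """Bounds the Deployment must respect during a rolling update."""
    max_total = replicas + max_surge        # Pod count may temporarily rise to here
    min_ready = replicas - max_unavailable  # ready Pods may never drop below this
    return max_total, min_ready

# replicas=3, maxSurge=1, maxUnavailable=0:
print(rolling_update_bounds(3, 1, 0))  # → (4, 3): add 1 new Pod, keep all 3 ready
```

With min_ready equal to replicas, the controller can only delete an old Pod after a new one reports Ready, which is exactly the zero-downtime behavior described above.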
A normal workflow never applies bare Pods
In real work you write the Deployment directly:
```shell
kubectl apply -f k8s/deployment.yaml
```
The Deployment creates the ReplicaSet and Pods automatically; there is no need for a separate pod.yaml.
Declarative management
The core K8s mindset: you write YAML describing "the state I want", and K8s is responsible for achieving it:
```
write YAML (desired state)
        ↓
kubectl apply
        ↓
K8s continuously ensures actual state = desired state
        ↓
need changes → edit YAML → apply again
```
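The idempotent nature of apply can be sketched with a toy state store. This is a hypothetical dict-based model, not how K8s actually stores objects:

```python
def apply(current: dict, desired: dict) -> dict:
    """Converge `current` toward `desired`; applying the same YAML twice is a no-op."""
    for name, spec in desired.items():
        if current.get(name) != spec:
            current[name] = spec         # create or update to match the declaration
    for name in list(current):
        if name not in desired:
            del current[name]            # prune resources no longer declared
    return current

state: dict = {}
state = apply(state, {"go-api": {"replicas": 3}})
state = apply(state, {"go-api": {"replicas": 3}})   # re-apply: idempotent, no change
print(state)                                        # → {'go-api': {'replicas': 3}}
```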
This is the same mindset behind Terraform and Docker Compose; all of them are Infrastructure as Code (IaC):