Hi everyone. I'm deploying a test Kubernetes cluster, but I've run into a problem:
1. Installed Kubernetes and Docker on all nodes and did the preliminary setup.
2. Initialized the cluster on the master.
3. Joined 2 worker nodes to the cluster.
4. Installed the Calico network plugin (a rough sketch of the commands is below).
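For context, steps 2-4 were roughly the following. This is a sketch from memory, not a verbatim transcript: <master-ip>, <token> and <hash> are placeholders, and the --pod-network-cidr value is my guess based on the Calico tunnel address 172.16.36.192 that shows up in the describe output further down; the manifest version may also differ.

# on the master
sudo kubeadm init --pod-network-cidr=172.16.0.0/16

# on each worker, with the token and hash printed by kubeadm init
sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

# back on the master: install Calico from the standard manifest
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/calico.yaml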
After that the master went into Ready status, but the 2 worker nodes are still stuck in NotReady.
Can someone point me at where to look? In describe I see that the plugin is not installed, but as far as I understand it is installed from the master, and I did install everything on the master.
kubectl get pods -A --watch
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-6cdb97b867-2f5rn   1/1     Running   0          41m
kube-system   calico-node-j9tw9                          1/1     Running   0          41m
kube-system   calico-node-lshcn                          1/1     Running   1          41m
kube-system   calico-node-z9mfj                          1/1     Running   0          41m
kube-system   coredns-7db6d8ff4d-kf9dj                   1/1     Running   0          77m
kube-system   coredns-7db6d8ff4d-vrtt8                   1/1     Running   0          77m
kube-system   etcd-k8s-master-01                         1/1     Running   218        77m
kube-system   kube-apiserver-k8s-master-01               1/1     Running   216        77m
kube-system   kube-controller-manager-k8s-master-01      1/1     Running   0          77m
kube-system   kube-proxy-bv7g7                           1/1     Running   1          44m
kube-system   kube-proxy-f6lhr                           1/1     Running   1          49m
kube-system   kube-proxy-tk8tj                           1/1     Running   0          77m
kube-system   kube-scheduler-k8s-master-01               1/1     Running   229        78m
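One thing the list above does not show is which node each calico-node pod actually landed on; the wide output makes that visible (the k8s-app=calico-node label comes from the standard Calico manifest):

kubectl get pods -n kube-system -o wide -l k8s-app=calico-node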
kubectl describe node k8s-worker-01
Name: k8s-worker-01
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=k8s-worker-01
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
projectcalico.org/IPv4Address: 192.168.0.17/24
projectcalico.org/IPv4IPIPTunnelAddr: 172.16.36.192
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 12 Jul 2024 10:10:00 +0500
Taints: node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: k8s-worker-01
AcquireTime: <unset>
RenewTime: Fri, 12 Jul 2024 11:00:47 +0500
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Fri, 12 Jul 2024 10:33:58 +0500 Fri, 12 Jul 2024 10:33:58 +0500 CalicoIsUp Calico is running on this node
MemoryPressure False Fri, 12 Jul 2024 10:59:35 +0500 Fri, 12 Jul 2024 10:33:53 +0500 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 12 Jul 2024 10:59:35 +0500 Fri, 12 Jul 2024 10:33:53 +0500 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 12 Jul 2024 10:59:35 +0500 Fri, 12 Jul 2024 10:33:53 +0500 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Fri, 12 Jul 2024 10:59:35 +0500 Fri, 12 Jul 2024 10:33:53 +0500 KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Addresses:
InternalIP: 192.168.0.17
Hostname: k8s-worker-01
Capacity:
cpu: 6
ephemeral-storage: 101639152Ki
hugepages-2Mi: 0
memory: 12247292Ki
pods: 110
Allocatable:
cpu: 6
ephemeral-storage: 93670642329
hugepages-2Mi: 0
memory: 12144892Ki
pods: 110
System Info:
Machine ID: 26969e1f75724ee6b51bdf9c6411fae2
System UUID: 3adbc712-6e6b-4fe7-a2ca-956331191676
Boot ID: 102d47b1-8b56-40b5-a002-34e03aa7999e
Kernel Version: 6.1.0-22-amd64
OS Image: Debian GNU/Linux 12 (bookworm)
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.18
Kubelet Version: v1.30.2
Kube-Proxy Version: v1.30.2
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system calico-node-lshcn 250m (4%) 0 (0%) 0 (0%) 0 (0%) 43m
kube-system kube-proxy-f6lhr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 50m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 250m (4%) 0 (0%)
memory 0 (0%) 0 (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 50m kube-proxy
Normal Starting 26m kube-proxy
Normal NodeHasSufficientMemory 50m (x3 over 50m) kubelet Node k8s-worker-01 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 50m (x3 over 50m) kubelet Node k8s-worker-01 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 50m (x3 over 50m) kubelet Node k8s-worker-01 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 50m kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 50m node-controller Node k8s-worker-01 event: Registered Node k8s-worker-01 in Controller
Normal NodeHasSufficientMemory 26m (x3 over 26m) kubelet Node k8s-worker-01 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 26m (x3 over 26m) kubelet Node k8s-worker-01 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 26m (x3 over 26m) kubelet Node k8s-worker-01 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 26m kubelet Updated Node Allocatable limit across pods
Normal NodeNotReady 26m kubelet Node k8s-worker-01 status is now: NodeNotReady
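The Ready condition above says it outright: "cni plugin not initialized", i.e. the kubelet on the worker does not see a CNI config, even though the calico-node pod for this node (calico-node-lshcn) is Running. What I plan to check directly on the worker, assuming the default CNI paths /etc/cni/net.d and /opt/cni/bin and the container names from the standard Calico manifest:

# on k8s-worker-01: did Calico write its CNI config?
ls -l /etc/cni/net.d/
# are the CNI plugin binaries (calico, calico-ipam, ...) in place?
ls -l /opt/cni/bin/
# what exactly the kubelet complains about
sudo journalctl -u kubelet --no-pager | grep -i cni | tail -n 20

# from the master: logs of the init container that installs the CNI config,
# and of the main container of the calico-node pod on this worker
kubectl logs -n kube-system calico-node-lshcn -c install-cni
kubectl logs -n kube-system calico-node-lshcn -c calico-node --tail=50

Is this the right direction, or is there something else I should look at first?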