
Why does the flannel container keep crashing?

Good day.
To get hands-on with this tag's topic, I tried to deploy a local test cluster.

Basically, it came up and even let me create a first pod, but there are errors in the logs and the flannel container keeps crashing.

root@kubemain:~# kubectl get pods -A
NAMESPACE      NAME                               READY   STATUS             RESTARTS          AGE
default        amk                                1/1     Running            0                 18h
kube-flannel   kube-flannel-ds-ckjnd              0/1     CrashLoopBackOff   222 (3m51s ago)   18h
kube-flannel   kube-flannel-ds-kdfsx              0/1     Error              226 (38s ago)     18h
kube-system    coredns-787d4945fb-fwjz9           1/1     Running            0                 18h
kube-system    coredns-787d4945fb-mfjtp           1/1     Running            0                 18h
kube-system    etcd-kubemain                      1/1     Running            0                 18h
kube-system    kube-apiserver-kubemain            1/1     Running            0                 18h
kube-system    kube-controller-manager-kubemain   1/1     Running            0                 18h
kube-system    kube-proxy-9tghs                   1/1     Running            0                 18h
kube-system    kube-proxy-nfrsp                   1/1     Running            0                 18h
kube-system    kube-scheduler-kubemain            1/1     Running            0                 18h
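
The pod list alone does not show why flanneld exits. A way to pull the crashed container's own message (not captured in this post; pod and container names taken from the output above and from the describe output below) would be:

kubectl -n kube-flannel logs kube-flannel-ds-kdfsx -c kube-flannel --previous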

root@kubemain:~# kubectl get nodes -o wide
NAME       STATUS   ROLES           AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
kube3      Ready    <none>          20h   v1.26.1   192.168.29.43   <none>        Ubuntu 20.04.2 LTS   5.4.0-135-generic   containerd://1.6.16
kubemain   Ready    control-plane   20h   v1.26.1   192.168.29.40   <none>        Ubuntu 20.04.2 LTS   5.4.0-135-generic   containerd://1.6.16

root@kubemain:~# kubelet --version
Kubernetes v1.26.1

root@kubemain:~# containerd --version
containerd containerd.io 1.6.16 31aa4358a36870b21a992d3ad2bef29e1d693bec

root@kubemain:~# cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.2 LTS (Focal Fossa)"

root@kubemain:~# cat /run/flannel/subnet.env
FLANNEL_NETWORK=172.25.0.0/16
FLANNEL_SUBNET=172.25.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
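
As a cross-check (not done in the original post), the network flanneld is actually configured with sits in the kube-flannel-cfg ConfigMap (key net-conf.json), and it should match the podCIDR that kubeadm assigned to each node; something like:

kubectl -n kube-flannel get cm kube-flannel-cfg -o jsonpath='{.data.net-conf\.json}'
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'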


The errors in the journals on kubemain and the worker node are about being unable to sync the pod:

Feb 01 12:57:19 kubemain kubelet[451880]: I0201 12:57:19.599031  451880 scope.go:115] "RemoveContainer" containerID="6d3843c71904f1b51df3363bff1e81678140307005b6c847bd43e6ee34044e2b"
Feb 01 12:57:19 kubemain kubelet[451880]: E0201 12:57:19.599270  451880 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-ds-kdfsx_kube-flannel(dd5efdcf-385d-4380-a9a1-31beb7264993)\"" pod="kube-flannel/kube-flannel-ds-kdfsx" podUID=dd5efdcf-385d-4380-a9a1-31beb7264993

Feb 01 12:55:26 kube3 kubelet[4354]: I0201 12:55:26.392931    4354 scope.go:115] "RemoveContainer" containerID="9de75bdd2ec840d5bd3f8f9970122d4fb1cf8a83f426a2112e6adae30438fd1f"
Feb 01 12:55:26 kube3 kubelet[4354]: E0201 12:55:26.393741    4354 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-flannel\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-ds-ckjnd_kube-flannel(68a2271a-2ff9-4fbe-93f1-11a9b29918fd)\"" pod="kube-flannel/kube-flannel-ds-ckjnd" podUID=68a2271a-2ff9-4fbe-93f1-11a9b29918fd


root@kubemain:~# kubectl -n kube-flannel describe pod kube-flannel-ds-ckjnd
Name:                 kube-flannel-ds-ckjnd
Namespace:            kube-flannel
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      flannel
Node:                 kube3/192.168.29.43
Start Time:           Tue, 31 Jan 2023 16:26:25 +0300
Labels:               app=flannel
                      controller-revision-hash=6d89ffc7b6
                      pod-template-generation=1
                      tier=node
Annotations:          <none>
Status:               Running
IP:                   192.168.29.43
IPs:
  IP:           192.168.29.43
Controlled By:  DaemonSet/kube-flannel-ds
Init Containers:
  install-cni-plugin:
    Container ID:  containerd://8099a4151a0785e1ff8a4156db96455c6c3a8d09763b6a4fc756380cf830e15f
    Image:         docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
    Image ID:      docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
    Args:
      -f
      /flannel
      /opt/cni/bin/flannel
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 31 Jan 2023 16:26:35 +0300
      Finished:     Tue, 31 Jan 2023 16:26:35 +0300
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /opt/cni/bin from cni-plugin (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-86hsb (ro)
  install-cni:
    Container ID:  containerd://09961bb51319c02acf42b5506b8b4878f3c7271b5116652b73c28c565447f56c
    Image:         docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
    Image ID:      docker.io/rancher/mirrored-flannelcni-flannel@sha256:ec0f0b7430c8370c9f33fe76eb0392c1ad2ddf4ccaf2b9f43995cca6c94d3832
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
    Args:
      -f
      /etc/kube-flannel/cni-conf.json
      /etc/cni/net.d/10-flannel.conflist
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 31 Jan 2023 16:26:42 +0300
      Finished:     Tue, 31 Jan 2023 16:26:42 +0300
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/cni/net.d from cni (rw)
      /etc/kube-flannel/ from flannel-cfg (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-86hsb (ro)
Containers:
  kube-flannel:
    Container ID:  containerd://345d4bbd2e7221dcc417eb0cbf4fb0f7025170322ae5d7aa76909dc90b49f7b3
    Image:         docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
    Image ID:      docker.io/rancher/mirrored-flannelcni-flannel@sha256:ec0f0b7430c8370c9f33fe76eb0392c1ad2ddf4ccaf2b9f43995cca6c94d3832
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/bin/flanneld
    Args:
      --ip-masq
      --kube-subnet-mgr
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 01 Feb 2023 14:27:32 +0300
      Finished:     Wed, 01 Feb 2023 14:27:33 +0300
    Ready:          False
    Restart Count:  262
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_NAME:           kube-flannel-ds-ckjnd (v1:metadata.name)
      POD_NAMESPACE:      kube-flannel (v1:metadata.namespace)
      EVENT_QUEUE_DEPTH:  5000
    Mounts:
      /etc/kube-flannel/ from flannel-cfg (rw)
      /run/flannel from run (rw)
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-86hsb (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  run:
    Type:          HostPath (bare host directory volume)
    Path:          /run/flannel
    HostPathType:
  cni-plugin:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:
  cni:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:
  flannel-cfg:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-flannel-cfg
    Optional:  false
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  kube-api-access-86hsb:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 :NoSchedule op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason   Age                     From     Message
  ----     ------   ----                    ----     -------
  Normal   Pulled   43m (x255 over 22h)     kubelet  Container image "docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2" already present on machine
  Warning  BackOff  3m26s (x6057 over 22h)  kubelet  Back-off restarting failed container kube-flannel in pod kube-flannel-ds-ckjnd_kube-flannel(68a2271a-2ff9-4fbe-93f1-11a9b29918fd)
root@kubemain:~#
Solution
amk4
@amk4, question author
"...congratulations, Sharik, you're a blockhead!"

Re-reading the sequence of steps in the documentation helped...

I had taken the ready-made manifest from flannel-io/flannel:

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml


...but missed the text right below it: "If you use custom podCIDR (not 10.244.0.0/16) you first need to download the above manifest and modify the network to match your one."
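
For reference, the fragment that needs editing in the stock manifest looks roughly like this (quoted from memory of the flannel v0.20.x manifest; verify against the downloaded file):

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }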

And my initialization had used a different range...

kubeadm init --pod-network-cidr=172.25.0.0/16
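
If the flag used at init time has been forgotten, kubeadm records it in a ConfigMap; a cross-check (not part of my original steps) would be:

kubectl -n kube-system get cm kubeadm-config -o yaml | grep -i podSubnet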

Accordingly, I downloaded the manifest locally, fixed the range in it, and re-applied it:

root@kubemain:~# kubectl apply -f /kube-flannel.yml
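
A minimal sketch of that whole download-edit-apply sequence (the sed pattern assumes the stock manifest still defaults to 10.244.0.0/16):

curl -LO https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
# swap the stock Network for the CIDR that was passed to kubeadm init
sed -i 's|10.244.0.0/16|172.25.0.0/16|' kube-flannel.yml
kubectl apply -f kube-flannel.yml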


All the containers came up.