@nihi1ist

Why does cluster initialization fail with an error?

I'm trying to initialize a Kubernetes master node:
kubeadm init --pod-network-cidr=172.16.0.0/16 \
             --control-plane-endpoint "10.2.26.17:6443" \
             --upload-certs

where 10.2.26.17 is the HAProxy VIP. But the initialization fails with an error.
The last part of the log:

I0116 09:48:46.551386  105111 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0116 09:48:46.551532  105111 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0116 09:48:46.551582  105111 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0116 09:48:46.551618  105111 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0116 09:48:46.551658  105111 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.675827ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://10.1.62.52:6443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-controller-manager is not healthy after 4m0.000626064s
[control-plane-check] kube-scheduler is not healthy after 4m0.000944424s
[control-plane-check] kube-apiserver is not healthy after 4m0.000945396s

A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'

error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://10.1.62.52:6443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:262
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:450
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
        k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:135
github.com/spf13/cobra.(*Command).execute
        github.com/spf13/cobra@v1.9.1/command.go:1015
github.com/spf13/cobra.(*Command).ExecuteC
        github.com/spf13/cobra@v1.9.1/command.go:1148
github.com/spf13/cobra.(*Command).Execute
        github.com/spf13/cobra@v1.9.1/command.go:1071
k8s.io/kubernetes/cmd/kubeadm/app.Run
        k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:48
main.main
        k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        runtime/proc.go:285
runtime.goexit
        runtime/asm_amd64.s:1693

HAProxy config:
frontend kube-apiserver-front
    bind 10.2.26.17:6443
    mode tcp
    option tcplog
    default_backend kube-apiserver-back

backend kube-apiserver-back
    mode tcp
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100

    server node1 10.1.62.52:6443 check check-ssl verify none inter 10000
    server node2 10.1.62.53:6443 check check-ssl verify none inter 10000
    server node3 10.6.28.53:6443 check check-ssl verify none inter 10000
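A quick way to see whether the VIP actually reaches a backend apiserver is to probe both endpoints directly (a diagnostic sketch using the IPs from the question; the exact error `curl` returns — connection refused vs. a TLS/HTTP response — narrows down whether the apiserver or HAProxy is the failing hop):

```shell
# Probe the apiserver on the node directly, then through the HAProxy VIP.
# -k skips TLS certificate verification; /livez is the same health path
# kubeadm itself polls during the control-plane check.
curl -k https://10.1.62.52:6443/livez
curl -k https://10.2.26.17:6443/livez
```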

The command:
crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause

prints nothing. Can you suggest what the problem might be?
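For context: when `crictl ps -a` lists no kube containers at all, the static pods were most likely never created, and the kubelet journal usually explains why. A diagnostic sketch, assuming a systemd-managed kubelet and containerd (paths and unit name as in a default kubeadm install):

```shell
# The kubelet creates the control-plane static pods from
# /etc/kubernetes/manifests; its journal typically shows why they never
# started (cgroup driver mismatch, image pull failure, port conflict, etc.).
journalctl -u kubelet --no-pager | tail -n 50

# Confirm the static pod manifests were written by kubeadm.
ls -l /etc/kubernetes/manifests

# Check whether the control-plane images were actually pulled.
crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock images | grep kube
```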