I deployed a Kubernetes cluster on bare metal using kubespray.
inventory.ini
[all]
dev-kube-master01 ansible_host=192.168.11.1
dev-kube-master02 ansible_host=192.168.11.2
dev-kube-node01 ansible_host=192.168.11.3
dev-kube-node02 ansible_host=192.168.11.4
dev-kube-node03 ansible_host=192.168.11.5
dev-kube-node04 ansible_host=192.168.11.6
dev-kube-etcd01 ansible_host=192.168.11.7
dev-kube-etcd02 ansible_host=192.168.11.8
dev-kube-etcd03 ansible_host=192.168.11.9
[kube-master]
dev-kube-master01
dev-kube-master02
[etcd]
dev-kube-etcd01
dev-kube-etcd02
dev-kube-etcd03
[kube-node]
dev-kube-node01
dev-kube-node02
dev-kube-node03
dev-kube-node04
[k8s-cluster:children]
kube-master
kube-node
[calico-rr]
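For reference, the cluster was deployed with the standard kubespray playbook, roughly like this (the inventory path is just where I keep the file):

ansible-playbook -i inventory.ini --become cluster.yml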
group_vars/all/all.yml
...
etcd_kubeadm_enabled: true
...
## Internal loadbalancers for apiservers
loadbalancer_apiserver_localhost: true
# valid options are "nginx" or "haproxy"
loadbalancer_apiserver_type: nginx
...
## Option is "script", "none"
cert_management: script
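As far as I understand, with loadbalancer_apiserver_localhost: true and loadbalancer_apiserver_type: nginx, kubespray runs a local nginx proxy as a static pod on the nodes so that node components reach the apiservers via localhost. The proxy pods can be checked with something like:

kubectl -n kube-system get pods -o wide | grep nginx-proxy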
group_vars/k8s-cluster/k8s-cluster.yml
# configure arp_ignore and arp_announce to avoid answering ARP queries from kube-ipvs0 interface
# must be set to true for MetalLB to work
kube_proxy_strict_arp: true
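Strict ARP is needed so that the kube-ipvs0 interface does not answer ARP requests for MetalLB's addresses. Whether the setting actually reached kube-proxy can be verified via its ConfigMap (strictARP sits under the ipvs section):

kubectl -n kube-system get configmap kube-proxy -o yaml | grep strictARP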
group_vars/k8s-cluster/addons.yml
# Enable the deployment of NGINX Ingress
# But don't enable the HostNetwork stuff as we'll be using MetalLB as LoadBalancer
ingress_nginx_enabled: true
ingress_nginx_host_network: false
# MetalLB Config
# See https://github.com/kubernetes-sigs/kubespray/tree/master/roles/kubernetes-apps/metallb
metallb_enabled: true
metallb_ip_range:
- "192.168.11.3-192.168.11.6" # Choose IP range MetalLB can give out on the L2 network segment
metallb_protocol: "layer2"
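To make sure MetalLB hands out addresses from this range, one can check its pods and expose a test LoadBalancer service (the deployment name below is just an example):

kubectl -n metallb-system get pods
kubectl create deployment echo --image=nginx
kubectl expose deployment echo --port=80 --type=LoadBalancer
kubectl get svc echo   # EXTERNAL-IP should come from the MetalLB range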
The cluster deployed successfully and is running, but calico periodically reports warnings:
Liveness probe failed:
Readiness probe failed:
Everything keeps working, it's just that the calico pods restart from time to time.
What is causing these warnings and how do I fix them?
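If it helps, this is how I look at the probe failures (the pod name is a placeholder; the label selector is the standard one for calico-node):

kubectl -n kube-system get pods -l k8s-app=calico-node -o wide
kubectl -n kube-system describe pod calico-node-xxxxx   # the Events section shows the Liveness/Readiness probe failures
kubectl -n kube-system logs calico-node-xxxxx -c calico-node --previous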