I'm launching a second MongoDB cluster under a different name in the same namespace, using the Bitnami Helm chart. The Secret with the MongoDB password already exists, because we already have a deployed cluster that has been running fine for 2 years.
$ k get secret primary-mongodb -oyaml
apiVersion: v1
data:
  mongodb-replica-set-key: MUs..............
  mongodb-root-password: z2d......................
kind: Secret
-------------------------------
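Just to rule out a trivial mismatch: as far as I understand, in replicaset mode the chart reads fixed key names from auth.existingSecret, so a quick check is that both keys decode cleanly (key names are taken from the output above; whether these are exactly the keys your chart version expects is worth double-checking against its docs):
$ kubectl get secret primary-mongodb -o jsonpath='{.data.mongodb-root-password}' | base64 -d
$ kubectl get secret primary-mongodb -o jsonpath='{.data.mongodb-replica-set-key}' | base64 -d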
Nevertheless, the arbiter goes into CrashLoopBackOff, losing connection to testing-mongo-mongodb-0.testing-mongo-mongodb-headless.default.svc.cluster.local, but not right away. At first:
k logs testing-mongo-mongodb-arbiter-0
mongodb 10:27:40.82
mongodb 10:27:40.82 Welcome to the Bitnami mongodb container
mongodb 10:27:40.82 Subscribe to project updates by watching https://github.com/bitnami/containers
mongodb 10:27:40.83 Submit issues and feature requests at https://github.com/bitnami/containers/issues
mongodb 10:27:40.83
mongodb 10:27:40.84 INFO ==> ** Starting MongoDB setup **
mongodb 10:27:40.86 INFO ==> Validating settings in MONGODB_* env vars...
mongodb 10:27:40.92 INFO ==> Initializing MongoDB...
mongodb 10:27:40.94 INFO ==> Deploying MongoDB from scratch...
MongoNetworkError: connect ECONNREFUSED 10.11.40.81:27017
mongodb 10:27:42.11 INFO ==> Creating users...
mongodb 10:27:42.11 INFO ==> Users created
mongodb 10:27:42.11 INFO ==> Writing keyfile for replica set authentication...
mongodb 10:27:42.14 INFO ==> Configuring MongoDB replica set...
mongodb 10:27:42.14 INFO ==> Stopping MongoDB...
mongodb 10:27:45.71 INFO ==> Trying to connect to MongoDB server testing-mongo-mongodb-0.testing-mongo-mongodb-headless.default.svc.cluster.local...
mongodb 10:27:45.72 INFO ==> Found MongoDB server listening at testing-mongo-mongodb-0.testing-mongo-mongodb-headless.default.svc.cluster.local:27017 !
and later on:
mongodb 10:53:27.80
mongodb 10:53:27.80 Welcome to the Bitnami mongodb container
mongodb 10:53:27.81 Subscribe to project updates by watching https://github.com/bitnami/containers
mongodb 10:53:27.81 Submit issues and feature requests at https://github.com/bitnami/containers/issues
mongodb 10:53:27.82
mongodb 10:53:27.82 INFO ==> ** Starting MongoDB setup **
mongodb 10:53:27.84 INFO ==> Validating settings in MONGODB_* env vars...
mongodb 10:53:27.88 INFO ==> Initializing MongoDB...
mongodb 10:53:27.90 INFO ==> Deploying MongoDB from scratch...
MongoNetworkError: connect ECONNREFUSED 10.11.40.81:27017
mongodb 10:53:29.04 INFO ==> Creating users...
mongodb 10:53:29.04 INFO ==> Users created
mongodb 10:53:29.04 INFO ==> Writing keyfile for replica set authentication...
mongodb 10:53:29.07 INFO ==> Configuring MongoDB replica set...
mongodb 10:53:29.07 INFO ==> Stopping MongoDB...
mongodb 10:53:33.19 INFO ==> Trying to connect to MongoDB server testing-mongo-mongodb-0.testing-mongo-mongodb-headless.default.svc.cluster.local...
mongodb 10:53:33.21 INFO ==> Found MongoDB server listening at testing-mongo-mongodb-0.testing-mongo-mongodb-headless.default.svc.cluster.local:27017 !
mongodb 10:57:24.07 ERROR ==> Node testing-mongo-mongodb-0.testing-mongo-mongodb-headless.default.svc.cluster.local did not become available
mongodb 10:57:24.08 INFO ==> Stopping MongoDB...
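To check whether the arbiter can actually reach the node it reports as unavailable, here is a rough sketch of what can be run against the data pod (this assumes testing-mongo-mongodb-0 itself is Running and that the image ships mongosh; the ping command should not require authentication):
$ kubectl get endpoints testing-mongo-mongodb-headless
$ kubectl exec -it testing-mongo-mongodb-0 -- mongosh \
    "mongodb://testing-mongo-mongodb-0.testing-mongo-mongodb-headless.default.svc.cluster.local:27017" \
    --eval 'db.runCommand({ ping: 1 })'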
Maybe the arbiter is missing some data it needs for a normal deployment, or is it simply not possible to run two clusters in the same namespace?
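If the guess is that the arbiter pod is not getting the same credentials/keyfile as the data nodes, one way to check is to look at which Secret each StatefulSet actually references (a sketch; the resource names below are guessed from the pod names in the logs):
$ kubectl get statefulset testing-mongo-mongodb -o yaml | grep -B2 -A3 secretKeyRef
$ kubectl get statefulset testing-mongo-mongodb-arbiter -o yaml | grep -B2 -A3 secretKeyRef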
The helm install command:
helm install "testing-mongo-mongodb" bitnami/mongodb --wait --namespace default \
--set architecture=replicaset \
--set replicaCount=2 \
--set persistence.storageClass=efs-sc \
--set-string podLabels."admission\.datadoghq\.com/enabled"=false \
--set-string podAnnotations."podLabels\.admission\.datadoghq\.com/enabled"=false \
--set persistence.enabled=true \
--set-string arbiter.podLabels."admission\.datadoghq\.com/enabled"=false \
--set-string arbiter.podAnnotations."podLabels\.admission\.datadoghq\.com/enabled"=false \
--set resources.limits.cpu=1 \
--set resources.limits.memory=2Gi \
--set resources.requests.cpu=1 \
--set resources.requests.memory=2Gi \
--set auth.existingSecret=primary-mongodb \
--set enableIPv6=false \
--set image.pullPolicy=Always \
--set metrics.enabled=false \
--set podLabels.type=testing-mongo \
--set podLabels.vendor=testing-mongo \
--set volumePermissions.enabled=true \
--set podLabels.os=linux
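The same install can also be rendered offline with helm template to compare how the Secret is wired into the arbiter StatefulSet versus the data one (a sketch; only the flags relevant to auth are repeated here, the remaining flags shouldn't change the Secret wiring):
$ helm template testing-mongo-mongodb bitnami/mongodb --namespace default \
    --set architecture=replicaset \
    --set replicaCount=2 \
    --set auth.existingSecret=primary-mongodb > rendered.yaml
$ grep -n -A4 secretKeyRef rendered.yaml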
Neither the logs nor the events gave me any understanding of what the problem is.
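For completeness, by logs and events I mean roughly the following (a sketch; --previous only returns something once the container has already restarted at least once):
$ kubectl get events -n default --sort-by=.lastTimestamp | grep -i testing-mongo
$ kubectl describe pod testing-mongo-mongodb-arbiter-0
$ kubectl logs testing-mongo-mongodb-arbiter-0 --previous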