While following the tutorial at https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
I managed to create a single-node Elasticsearch cluster.
But when I run the following command to add a second Elasticsearch node to the existing cluster:
docker run -e ENROLLMENT_TOKEN="<token>" --name es02 --net elastic -it docker.elastic.co/elasticsearch/elasticsearch:8.3.2
I get the following error:
Unable to communicate with the node on https://172.18.0.92:9200/_security/enroll/node. Error was Connection timed out.
ERROR: Aborting enrolling to cluster. Could not communicate with the node on any of the addresses from the enrollment token. All of [172.18.0.92:9200] were attempted.
I would greatly appreciate hearing whether others are getting the same error, or if you know how to fix this issue. Thanks.
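For reference, the same tutorial generates the enrollment token on the first node with the command below (the container name es01 is an assumption based on es02 above). Note that enrollment tokens are only valid for a limited time (30 minutes by default), so a stale token is one possible cause of an enrollment failure:

docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node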
The error message is below:
MongoDB shell version: 3.2.11
connecting to: test
2020-05-16T20:53:47.438+0000 W NETWORK [thread1] Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
2020-05-16T20:53:47.440+0000 E QUERY [thread1] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed :
connect#src/mongo/shell/mongo.js:229:14
#(connect):1:6
By the way, is there any way to automatically seed the database in the Docker container? I have to manually seed the database every time.
Thank you, guys.
Did you map the port of your localhost to the MongoDB container? If not, add -p 27017:27017 to your docker run command.
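For example, a run command with the port published might look like this (the image tag and container name are assumptions):

docker run -d --name mongodb -p 27017:27017 mongo:3.2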
If you are looking to seed the database upon initialization, you can use the following as stated on docker hub:
When a container is started for the first time it will execute files with extensions .sh and .js that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order. .js files will be executed by mongo using the database specified by the MONGO_INITDB_DATABASE variable, if it is present, or test otherwise. You may also switch databases within the .js script.
You can do this by mounting a JavaScript or shell script file via -v "./path/to/file.js:/docker-entrypoint-initdb.d/file.js", or via the volumes: ["./path/to/file.js:/docker-entrypoint-initdb.d/file.js"] key of your mongodb service if you are using docker-compose.
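A minimal sketch of the whole flow (the database name, file paths, and seed data are all assumptions):

# write a hypothetical seed script; it runs once, on first container start
cat > ./seed.js <<'EOF'
db = db.getSiblingDB('mydb');             // switch to the target database
db.users.insertOne({ name: 'alice' });    // insert some initial data
EOF
# mount it into the init directory and publish the port
docker run -d --name mongodb -p 27017:27017 -v "$(pwd)/seed.js:/docker-entrypoint-initdb.d/seed.js" mongo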
I have a Db2 LUW database running in a Docker container. How can I increase its transaction log size?
I tried to run "db2 connect to UEQ1D" in the Docker container's CLI, but it responds with: command not found. Do I need to install something like db2cmd on the container, or how else can I run db2 commands in Docker? Or is there a simpler way?
Appreciate your help
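A hedged sketch of how this is often done with the official Db2 image, where the db2 CLI is only on the PATH of the instance user's environment (the instance user db2inst1, container name mydb2, and the log sizes are all assumptions):

# open a shell inside the container as the Db2 instance user
docker exec -ti mydb2 su - db2inst1
# the db2 command is now available; raise the transaction log configuration
db2 connect to UEQ1D
db2 update db cfg for UEQ1D using LOGFILSIZ 8192 LOGPRIMARY 20 LOGSECOND 40

The LOG* parameters control the size and number of transaction log files; changes to them generally take effect after the database is deactivated and reactivated.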
For some reason my master node can no longer connect to my cluster after upgrading from Kubernetes 1.11.9 to 1.12.9 via kops (version 1.13.0). In the manifest I'm upgrading kubernetesVersion from 1.11.9 -> 1.12.9; this is the only change I'm making. However, when I run kops rolling-update cluster --yes I get the following error:
Cluster did not pass validation, will try again in "30s" until duration "5m0s" expires: machine "i-01234567" has not yet joined cluster.
Cluster did not validate within 5m0s
After that, if I run kubectl get nodes, I no longer see that master node in my cluster.
Doing a little debugging by SSHing into the disconnected master node instance, I found the following error in my API server log by running sudo cat /var/log/kube-apiserver.log:
controller.go:135] Unable to perform initial IP allocation check: unable to refresh the service IP block: client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 127.0.0.1:4001: connect: connection refused
I suspect the issue might be related to etcd, because when I run sudo netstat -nap | grep LISTEN | grep etcd there is no output.
Anyone have any idea how I can get my master node back in the cluster or have advice on things to try?
I have done some research and have a few ideas for you:
If there is no output for the etcd grep, it means that your etcd server is down. Check for the exited etcd container with docker ps -a | grep Exited | grep etcd, and then view its logs with docker logs <etcd-container-id>.
Try this instruction I found (a shell sketch of the etcdctl steps follows the list):
1 - Remove the old master from the etcd cluster using etcdctl. You will need to connect to the etcd-server container to do this.
2 - On the new master node, stop the kubelet and protokube services.
3 - Empty the etcd data dirs (data and data-events).
4 - Edit /etc/kubernetes/manifests/etcd.manifest and etcd-events.manifest, changing ETCD_INITIAL_CLUSTER_STATE from new to existing.
5 - Get the name and PeerURLs from the new master and use etcdctl to add the new master to the cluster (etcdctl member add "name" "PeerURL"). You will need to connect to the etcd-server container to do this.
6 - Start the kubelet and protokube services on the new master.
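A hedged sketch of the etcdctl parts (steps 1 and 5), run from inside the etcd-server container; the member ID, member name, and peer URL below are placeholders:

# step 1: find and remove the old master's member entry
etcdctl member list
etcdctl member remove 8e9e05c52164694d
# step 5: re-add the new master with its name and peer URL
etcdctl member add master-us-east-1a http://10.0.0.5:2380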
If that is not the case, then you might have a problem with the certs. They are provisioned during the creation of the cluster, and some of them contain the allowed master endpoints. If that is the case, you'd need to create new certs and roll them out for the API server/etcd clusters.
Please let me know if that helped.
Actually, I'm trying to deploy Kubernetes via Rancher on a single server.
I created a new cluster and added a new node.
But after some time, an error occurred:
This cluster is currently Provisioning; areas that interact directly with it will not be available until the API is ready.
[controlPlane] Failed to bring up Control Plane: Failed to verify healthcheck: Failed to check https://localhost:6443/healthz for service [kube-apiserver] on host [172.26.116.42]: Get https://localhost:6443/healthz: dial tcp [::1]:6443: connect: connection refused, log: standard_init_linux.go:190: exec user process caused "permission denied"
And when I check my Docker containers, one of them is always restarting: rancher/hyperkube:v1.11.3-rancher1.
I ran docker logs my_container_id
and it shows standard_init_linux.go:190: exec user process caused "permission denied"
On the cloud vm, the config is:
OS: Ubuntu 18.04.1 LTS
Docker Version: 18.06.1-ce
Rancher: Rancher v2
Do you have any ideas about this error?
Thanks a lot ;)
What is your type of architecture?
Please run:
uname --all
or
docker info | grep -i "Architecture"
to check this.
Rancher is only supported on x86.
Finally, I called the VM sub-contractor and learned they had created the VM with a noexec /var partition.
After a remount, it worked.
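For anyone hitting the same "permission denied" from standard_init_linux.go, this is roughly how to check for and fix a noexec mount (the mount point is an assumption; adjust it to wherever Docker stores its data):

# look for "noexec" in the mount options of the /var partition
mount | grep ' /var '
# remount it with exec permitted (add the option in /etc/fstab to persist)
sudo mount -o remount,exec /var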
I created a cluster in gcloud with three nodes. So far so good. Thereafter I tried to run a pod, and it gives an error. I found out that kubectl is not configured correctly. I get the following error when I try to run the pod. I'd appreciate any help in this regard.
error: could not read an encoded object from nodejs.yaml: unable to connect to a server to handle "pods": couldn't read version from server: Get http://localhost:8080/api: dial tcp 127.0.0.1:8080: connection refused
thx
If your kubectl configuration is incorrect after creating a cluster, you can always run gcloud container clusters get-credentials NAME (see configuring kubectl) to restore a working kubeconfig file.
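For example (the cluster name and zone below are placeholders):

gcloud container clusters get-credentials my-cluster --zone us-central1-a
kubectl get nodes   # should now reach the cluster's real endpoint instead of localhost:8080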