Installing Rancher by running the Docker image on RHEL 8.5 fails - rhel
I have a new VM based on RHEL 8.5 and tried to install Rancher using the simple command below from the official Rancher page (Getting Started with Kubernetes | Rancher Quick Start):
sudo docker run --privileged -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher
but after the container starts it keeps restarting and the errors below appear.
Any idea?
[root@redhat-vm-01 ~]# docker ps
CONTAINER ID   IMAGE             COMMAND           CREATED         STATUS          PORTS                                                                      NAMES
e8fbd2a4d67e   rancher/rancher   "entrypoint.sh"   5 minutes ago   Up 12 seconds   0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp   trusting_shockley
[root@redhat-vm-01 ~]# docker logs -f e8fbd2a4d67e
2022/01/05 20:42:24 [INFO] Rancher version v2.6.3 (3c1d5fac3) is starting
2022/01/05 20:42:24 [INFO] Rancher arguments {ACMEDomains: AddLocal:true Embedded:false BindHost: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false Trace:false NoCACerts:false AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0 Features: ClusterRegistry:}
2022/01/05 20:42:24 [INFO] Listening on /tmp/log.sock
2022/01/05 20:42:24 [INFO] Waiting for k3s to start
time="2022-01-05T20:42:24Z" level=info msg="Acquiring lock file /var/lib/rancher/k3s/data/.lock"
time="2022-01-05T20:42:24Z" level=info msg="Preparing data dir /var/lib/rancher/k3s/data/e61cd97f31a54dbcd9893f8325b7133cfdfd0229ff3bfae5a4f845780a93e84c"
2022/01/05 20:42:25 [INFO] Waiting for k3s to start
2022/01/05 20:42:26 [INFO] Waiting for k3s to start
2022/01/05 20:42:28 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:42:30 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:42:32 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:42:34 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:42:36 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:42:48 [INFO] Running in single server mode, will not peer connections
2022/01/05 20:42:48 [INFO] Applying CRD features.management.cattle.io
2022/01/05 20:42:50 [INFO] Applying CRD navlinks.ui.cattle.io
2022/01/05 20:42:50 [INFO] Applying CRD clusters.management.cattle.io
2022/01/05 20:42:50 [INFO] Applying CRD apiservices.management.cattle.io
2022/01/05 20:42:50 [INFO] Applying CRD clusterregistrationtokens.management.cattle.io
2022/01/05 20:42:50 [INFO] Applying CRD settings.management.cattle.io
2022/01/05 20:42:50 [INFO] Applying CRD preferences.management.cattle.io
2022/01/05 20:42:50 [INFO] Applying CRD features.management.cattle.io
2022/01/05 20:42:50 [INFO] Applying CRD clusterrepos.catalog.cattle.io
2022/01/05 20:42:50 [INFO] Applying CRD operations.catalog.cattle.io
2022/01/05 20:42:50 [INFO] Applying CRD apps.catalog.cattle.io
2022/01/05 20:42:50 [INFO] Applying CRD fleetworkspaces.management.cattle.io
2022/01/05 20:42:50 [INFO] Applying CRD bundles.fleet.cattle.io
2022/01/05 20:42:50 [INFO] Applying CRD clusters.fleet.cattle.io
2022/01/05 20:42:50 [INFO] Applying CRD managedcharts.management.cattle.io
2022/01/05 20:42:50 [INFO] Applying CRD clusters.provisioning.cattle.io
2022/01/05 20:42:50 [INFO] Applying CRD clusters.provisioning.cattle.io
2022/01/05 20:42:51 [INFO] Applying CRD rkeclusters.rke.cattle.io
2022/01/05 20:42:51 [INFO] Applying CRD rkecontrolplanes.rke.cattle.io
2022/01/05 20:42:51 [INFO] Applying CRD rkebootstraps.rke.cattle.io
2022/01/05 20:42:51 [INFO] Applying CRD rkebootstraptemplates.rke.cattle.io
2022/01/05 20:42:51 [INFO] Applying CRD rkecontrolplanes.rke.cattle.io
2022/01/05 20:42:51 [INFO] Applying CRD custommachines.rke.cattle.io
2022/01/05 20:42:51 [INFO] Applying CRD clusters.cluster.x-k8s.io
2022/01/05 20:42:51 [INFO] Applying CRD machinedeployments.cluster.x-k8s.io
2022/01/05 20:42:51 [INFO] Applying CRD machinehealthchecks.cluster.x-k8s.io
2022/01/05 20:42:51 [INFO] Applying CRD machines.cluster.x-k8s.io
2022/01/05 20:42:51 [INFO] Applying CRD machinesets.cluster.x-k8s.io
2022/01/05 20:42:51 [INFO] Waiting for CRD machinesets.cluster.x-k8s.io to become available
2022/01/05 20:42:52 [INFO] Done waiting for CRD machinesets.cluster.x-k8s.io to become available
2022/01/05 20:42:52 [INFO] Creating CRD authconfigs.management.cattle.io
2022/01/05 20:42:52 [INFO] Creating CRD groupmembers.management.cattle.io
2022/01/05 20:43:23 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:43:25 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:43:27 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:43:29 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:43:35 [FATAL] k3s exited with: exit status 255
2022/01/05 20:43:53 [INFO] Rancher version v2.6.3 (3c1d5fac3) is starting
2022/01/05 20:43:53 [INFO] Rancher arguments {ACMEDomains: AddLocal:true Embedded:false BindHost: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false Trace:false NoCACerts:false AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0 Features: ClusterRegistry:}
2022/01/05 20:43:53 [INFO] Listening on /tmp/log.sock
2022/01/05 20:43:53 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:43:55 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:43:57 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:43:59 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:44:01 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:44:03 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:44:05 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:44:07 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:44:09 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:44:15 [FATAL] k3s exited with: exit status 255
2022/01/05 20:44:31 [INFO] Rancher version v2.6.3 (3c1d5fac3) is starting
2022/01/05 20:44:31 [INFO] Rancher arguments {ACMEDomains: AddLocal:true Embedded:false BindHost: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false Trace:false NoCACerts:false AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0 Features: ClusterRegistry:}
2022/01/05 20:44:31 [INFO] Listening on /tmp/log.sock
2022/01/05 20:44:31 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:44:33 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:44:35 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:44:37 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:44:39 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:44:41 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:44:43 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:44:45 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:44:50 [FATAL] k3s exited with: exit status 255
2022/01/05 20:45:08 [INFO] Rancher version v2.6.3 (3c1d5fac3) is starting
2022/01/05 20:45:08 [INFO] Rancher arguments {ACMEDomains: AddLocal:true Embedded:false BindHost: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false Trace:false NoCACerts:false AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0 Features: ClusterRegistry:}
2022/01/05 20:45:08 [INFO] Listening on /tmp/log.sock
2022/01/05 20:45:08 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:45:10 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:45:12 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:45:14 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:45:16 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:45:18 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:45:20 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:45:22 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:45:24 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:45:26 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:45:28 [INFO] Running in single server mode, will not peer connections
2022/01/05 20:45:28 [INFO] Applying CRD features.management.cattle.io
2022/01/05 20:45:29 [INFO] Applying CRD navlinks.ui.cattle.io
2022/01/05 20:45:29 [INFO] Applying CRD clusters.management.cattle.io
2022/01/05 20:45:29 [INFO] Applying CRD apiservices.management.cattle.io
2022/01/05 20:45:29 [INFO] Applying CRD clusterregistrationtokens.management.cattle.io
2022/01/05 20:45:29 [INFO] Applying CRD settings.management.cattle.io
2022/01/05 20:45:29 [INFO] Applying CRD preferences.management.cattle.io
2022/01/05 20:45:30 [INFO] Applying CRD features.management.cattle.io
2022/01/05 20:45:30 [INFO] Applying CRD clusterrepos.catalog.cattle.io
2022/01/05 20:45:30 [INFO] Applying CRD operations.catalog.cattle.io
2022/01/05 20:45:30 [INFO] Applying CRD apps.catalog.cattle.io
2022/01/05 20:45:30 [INFO] Applying CRD fleetworkspaces.management.cattle.io
2022/01/05 20:45:30 [INFO] Applying CRD managedcharts.management.cattle.io
2022/01/05 20:45:31 [FATAL] failed to update managedcharts.management.cattle.io apiextensions.k8s.io/v1, Kind=CustomResourceDefinition for managedcharts.management.cattle.io: Patch "https://127.0.0.1:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/managedcharts.management.cattle.io?timeout=15m0s": EOF
2022/01/05 20:45:47 [INFO] Rancher version v2.6.3 (3c1d5fac3) is starting
2022/01/05 20:45:47 [INFO] Rancher arguments {ACMEDomains: AddLocal:true Embedded:false BindHost: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false Trace:false NoCACerts:false AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0 Features: ClusterRegistry:}
2022/01/05 20:45:47 [INFO] Listening on /tmp/log.sock
2022/01/05 20:45:47 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:45:49 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:45:51 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:45:53 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:45:55 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:45:57 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:45:59 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:46:01 [INFO] Running in single server mode, will not peer connections
2022/01/05 20:46:01 [INFO] Applying CRD features.management.cattle.io
2022/01/05 20:46:03 [INFO] Applying CRD navlinks.ui.cattle.io
2022/01/05 20:46:03 [INFO] Applying CRD clusters.management.cattle.io
2022/01/05 20:46:03 [FATAL] failed to update clusters.management.cattle.io apiextensions.k8s.io/v1, Kind=CustomResourceDefinition for clusters.management.cattle.io: Patch "https://127.0.0.1:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusters.management.cattle.io?timeout=15m0s": EOF
2022/01/05 20:46:22 [INFO] Rancher version v2.6.3 (3c1d5fac3) is starting
2022/01/05 20:46:22 [INFO] Rancher arguments {ACMEDomains: AddLocal:true Embedded:false BindHost: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false Trace:false NoCACerts:false AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0 Features: ClusterRegistry:}
2022/01/05 20:46:22 [INFO] Listening on /tmp/log.sock
2022/01/05 20:46:22 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:46:24 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:46:26 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:46:28 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:46:30 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:46:32 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:46:34 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:46:36 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:46:38 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:46:40 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:46:46 [FATAL] k3s exited with: exit status 255
2022/01/05 20:47:03 [INFO] Rancher version v2.6.3 (3c1d5fac3) is starting
2022/01/05 20:47:03 [INFO] Rancher arguments {ACMEDomains: AddLocal:true Embedded:false BindHost: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false Trace:false NoCACerts:false AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0 Features: ClusterRegistry:}
2022/01/05 20:47:03 [INFO] Listening on /tmp/log.sock
2022/01/05 20:47:03 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:47:05 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:47:07 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:47:09 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:47:11 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:47:13 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:47:15 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:47:17 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:47:19 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:47:21 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:47:23 [INFO] Running in single server mode, will not peer connections
2022/01/05 20:47:23 [INFO] Applying CRD features.management.cattle.io
2022/01/05 20:47:24 [INFO] Applying CRD navlinks.ui.cattle.io
2022/01/05 20:47:24 [INFO] Applying CRD clusters.management.cattle.io
2022/01/05 20:47:24 [INFO] Applying CRD apiservices.management.cattle.io
2022/01/05 20:47:24 [INFO] Applying CRD clusterregistrationtokens.management.cattle.io
2022/01/05 20:47:24 [INFO] Applying CRD settings.management.cattle.io
2022/01/05 20:47:24 [INFO] Applying CRD preferences.management.cattle.io
2022/01/05 20:47:24 [INFO] Applying CRD features.management.cattle.io
2022/01/05 20:47:25 [INFO] Applying CRD clusterrepos.catalog.cattle.io
2022/01/05 20:47:25 [INFO] Applying CRD operations.catalog.cattle.io
2022/01/05 20:47:25 [INFO] Applying CRD apps.catalog.cattle.io
2022/01/05 20:47:25 [INFO] Applying CRD fleetworkspaces.management.cattle.io
2022/01/05 20:47:25 [INFO] Applying CRD managedcharts.management.cattle.io
2022/01/05 20:47:25 [INFO] Applying CRD clusters.provisioning.cattle.io
2022/01/05 20:47:25 [INFO] Applying CRD clusters.provisioning.cattle.io
2022/01/05 20:47:25 [INFO] Applying CRD rkeclusters.rke.cattle.io
2022/01/05 20:47:25 [INFO] Applying CRD rkecontrolplanes.rke.cattle.io
2022/01/05 20:47:25 [INFO] Applying CRD rkebootstraps.rke.cattle.io
2022/01/05 20:47:25 [INFO] Applying CRD rkebootstraptemplates.rke.cattle.io
2022/01/05 20:47:25 [INFO] Applying CRD rkecontrolplanes.rke.cattle.io
2022/01/05 20:47:25 [INFO] Applying CRD custommachines.rke.cattle.io
2022/01/05 20:47:25 [INFO] Applying CRD clusters.cluster.x-k8s.io
2022/01/05 20:47:26 [INFO] Applying CRD machinedeployments.cluster.x-k8s.io
2022/01/05 20:47:26 [INFO] Applying CRD machinehealthchecks.cluster.x-k8s.io
2022/01/05 20:47:26 [FATAL] failed to list apiextensions.k8s.io/v1, Kind=CustomResourceDefinition for machinehealthchecks.cluster.x-k8s.io: Get "https://127.0.0.1:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions?labelSelector=objectset.rio.cattle.io%!F(MISSING)hash%!D(MISSING)7372e5228a5d4df32e62db137ea6bf469c35d4c4&timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:47:44 [INFO] Rancher version v2.6.3 (3c1d5fac3) is starting
2022/01/05 20:47:44 [INFO] Rancher arguments {ACMEDomains: AddLocal:true Embedded:false BindHost: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false Trace:false NoCACerts:false AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0 Features: ClusterRegistry:}
2022/01/05 20:47:44 [INFO] Listening on /tmp/log.sock
2022/01/05 20:47:44 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:47:46 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:47:48 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:47:50 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:47:52 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:47:54 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:47:56 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:48:02 [FATAL] k3s exited with: exit status 255
2022/01/05 20:48:20 [INFO] Rancher version v2.6.3 (3c1d5fac3) is starting
2022/01/05 20:48:20 [INFO] Rancher arguments {ACMEDomains: AddLocal:true Embedded:false BindHost: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false Trace:false NoCACerts:false AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0 Features: ClusterRegistry:}
2022/01/05 20:48:20 [INFO] Listening on /tmp/log.sock
2022/01/05 20:48:21 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:48:23 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:48:25 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/01/05 20:48:27 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:48:29 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:48:31 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:48:33 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:48:35 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:48:37 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/01/05 20:48:39 [INFO] Running in single server mode, will not peer connections
2022/01/05 20:48:39 [INFO] Applying CRD features.management.cattle.io
2022/01/05 20:48:39 [INFO] Applying CRD navlinks.ui.cattle.io
2022/01/05 20:48:40 [INFO] Applying CRD clusters.management.cattle.io
2022/01/05 20:48:40 [INFO] Applying CRD apiservices.management.cattle.io
2022/01/05 20:48:40 [INFO] Applying CRD clusterregistrationtokens.management.cattle.io
2022/01/05 20:48:40 [INFO] Applying CRD settings.management.cattle.io
2022/01/05 20:48:40 [INFO] Applying CRD preferences.management.cattle.io
2022/01/05 20:48:40 [INFO] Applying CRD features.management.cattle.io
2022/01/05 20:48:40 [INFO] Applying CRD clusterrepos.catalog.cattle.io
2022/01/05 20:48:41 [FATAL] Get "https://127.0.0.1:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterrepos.catalog.cattle.io?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
The container is restarted every 40 seconds or so.
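For anyone hitting the same loop, the restart behaviour can be confirmed from the Docker side with a small diagnostic sketch like this (the container ID is the one from the docker ps output above; substitute your own):

```shell
# Container ID taken from `docker ps`; replace with your own.
CID=e8fbd2a4d67e

# Restart count and last exit code as Docker sees them.
docker inspect -f 'restarts={{.RestartCount}} exit={{.State.ExitCode}}' "$CID"

# Timestamped logs from the last minute, i.e. the current restart attempt.
docker logs --timestamps --since 1m "$CID" 2>&1 | tail -n 20
```

A growing RestartCount with a non-zero exit code matches the [FATAL] k3s exited with: exit status 255 lines in the log.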
Apparently something is incompatible with the 8.5 kernel; it has to do with systemd and something else. I tried a lot to get it to work, but the only thing that worked for me was to downgrade systemd and the kernel to the versions from 8.4.
So to do that, you need to add the 8.4 repo back and run:
sudo dnf update
sudo dnf install kernel-4.18.0-305.25.1.el8_4.x86_64 systemd-239-45.el8_4.3.x86_64
That is the only thing I have gotten to work so far. Obviously I hope the people working on Rancher fix it. It has something to do with cgroups; apparently this Rancher version is not compatible with the 8.5 versions of systemd and the kernel.
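Before downgrading, it may be worth confirming what the host is actually running. None of these commands come from the original post; they are just a generic sketch for collecting the details the answer above points at (cgroup mode, kernel, systemd):

```shell
# "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means legacy v1.
stat -fc %T /sys/fs/cgroup/ 2>/dev/null || true

# Kernel and systemd packages currently installed
# (rpm is RHEL-specific; the query is harmless elsewhere).
rpm -q kernel systemd 2>/dev/null || true

# Running kernel version.
uname -r
```

If the filesystem type reports cgroup2fs, that lines up with the cgroup incompatibility suspected above, and comparing the rpm output against the 8.4 versions listed in the answer shows exactly what the downgrade would roll back.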
You can check out this post
Related
502 Bad Gateway issue while starting JFrog
I'm trying to bring JFrog up. Locally, Tomcat is running and the Artifactory service also looks fine, but in the UI JFrog is not coming up and I get a 502 Bad Gateway error. Below is the console log:

[TRACE] [Service registry ping] operation attempt #94 failed. retrying in 1s. current error: error while trying to connect to local router at address 'http://localhost:8046/access/api/v1/system/ping': Get "http://localhost:8046/access/api/v1/system/ping": dial tcp 127.0.0.1:8046: connect: connection refused
[TRACE] [Service registry ping] running retry attempt #95
[INFO ] Cluster join: Retry 95: Service registry ping failed, will retry. Error: error while trying to connect to local router at address 'http://localhost:8046/access/api/v1/system/ping': Get "http://localhost:8046/access/api/v1/system/ping": dial tcp 127.0.0.1:8046: connect: connection refused
[TRACE] [Service registry ping] operation attempt #95 failed. retrying in 1s. current error: error while trying to connect to local router at address 'http://localhost:8046/access/api/v1/system/ping': Get "http://localhost:8046/access/api/v1/system/ping": dial tcp 127.0.0.1:8046: connect: connection refused
2022-09-10T06:14:20.271Z [jffe ] [INFO ] [ ] [ ] [main ] - pinging artifactory, attempt number 90
2022-09-10T06:14:20.274Z [jffe ] [INFO ] [ ] [ ] [main ] - pinging artifactory attempt number 90 failed with code : ECONNREFUSED
[TRACE] [Service registry ping] running retry attempt #96
[DEBUG] Cluster join: Retry 96: Service registry ping failed, will retry. Error: error while trying to connect to local router at address 'http://localhost:8046/access/api/v1/system/ping': Get "http://localhost:8046/access/api/v1/system/ping": dial tcp 127.0.0.1:8046: connect: connection refused
[TRACE] [Service registry ping] operation attempt #96 failed. retrying in 1s. current error: error while trying to connect to local router at address 'http://localhost:8046/access/api/v1/system/ping': Get "http://localhost:8046/access/api/v1/system/ping": dial tcp 127.0.0.1:8046: connect: connection refused
2022-09-10T06:14:21.188Z [jfrou] [INFO ] [2b4bfed554e45cf6] [join_executor.go:169 ] [main ] [] - Cluster join: Retry 100: Service registry ping failed, will retry. Error: could not parse error from service registry, status code: 404, raw body: <!doctype html><html lang="en"><head><title>HTTP Status 404 – Not Found</title><style type="text/css">body {font-family:Tahoma,Arial,sans-serif;} h1, h2, h3, b {color:white;background-color:#525D76;} h1 {font-size:22px;} h2 {font-size:16px;} h3 {font-size:14px;} p {font-size:12px;} a {color:black;} .line {height:1px;background-color:#525D76;border:none;}</style></head><body><h1>HTTP Status 404 – Not Found</h1></body></html>
[TRACE] [Service registry ping] running retry attempt #97
[DEBUG] Cluster join: Retry 97: Service registry ping failed, will retry. Error: error while trying to connect to local router at address 'http://localhost:8046/access/api/v1/system/ping': Get "http://localhost:8046/access/api/v1/system/ping": dial tcp 127.0.0.1:8046: connect: connection refused
[TRACE] [Service registry ping] operation attempt #97 failed. retrying in 1s. current error: error while trying to connect to local router at address 'http://localhost:8046/access/api/v1/system/ping': Get "http://localhost:8046/access/api/v1/system/ping": dial tcp 127.0.0.1:8046: connect: connection refused
2022-09-10T06:14:22.016Z [jfmd ] [INFO ] [ ] [accessclient.go:60 ] [main ] - Cluster join: Retry 100: Service registry ping failed, will retry. Error: Error while trying to connect to local router at address 'http://localhost:8046/access': Get "http://localhost:8046/access/api/v1/system/ping": dial tcp 127.0.0.1:8046: connect: connection refused [access_client]
[TRACE] [Service registry ping] running retry attempt #98
[DEBUG] Cluster join: Retry 98: Service registry ping failed, will retry. Error: error while trying to connect to local router at address 'http://localhost:8046/access/api/v1/system/ping': Get "http://localhost:8046/access/api/v1/system/ping": dial tcp 127.0.0.1:8046: connect: connection refused

And this is the error I am getting in the UI: 502 Bad Gateway.
Is it a newly installed Artifactory instance? If yes, we need to verify that the required ports are open at the firewall level. If the ports are already available, disable the IPv6 address on the VM where Artifactory is installed and restart Artifactory. This error can occur if the application picks up the IPv6 address for initialisation instead of IPv4.
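As a rough sketch of those two checks (the port number 8046 comes from the log above, but the service unit name artifactory is an assumption and may differ per install; this is not an official JFrog procedure):

```shell
# Is anything listening on the router port the log keeps dialing (8046)?
ss -lnt | grep -w 8046 || echo "nothing listening on 8046"

# Disable IPv6 at runtime; persist via /etc/sysctl.d/ if this fixes it.
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1

# Restart Artifactory afterwards (unit name may differ per install).
sudo systemctl restart artifactory
```

If nothing is listening on 8046 even after the restart, the router service itself is failing to start and its own log is the next place to look.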
Spring Cloud Kafka Streams terminating in Azure Kubernetes Service
I created a Kafka Streams application with Spring Cloud Stream that reads data from one topic and writes to another, and I'm trying to deploy and run the job in AKS with an ACR image, but the stream gets closed without any error after reading all the available messages (lag 0) in the topic. The strange thing is that it runs fine in IntelliJ. Here are my AKS pod logs:

[2021-03-02 17:30:39,131] [DEBUG] [latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1] [o.a.k.c.NetworkClient NetworkClient.java:840] [Consumer clientId=latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1-consumer, groupId=latest] Received FETCH response from node 3 for request with header RequestHeader(apiKey=FETCH, apiVersion=11, clientId=latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1-consumer, correlationId=62): org.apache.kafka.common.requests.FetchResponse#7b021a01
[2021-03-02 17:30:39,131] [DEBUG] [latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1] [o.a.k.c.FetchSessionHandler FetchSessionHandler.java:463] [Consumer clientId=latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1-consumer, groupId=latest] Node 0 sent an incremental fetch response with throttleTimeMs = 3 for session 614342128 with 0 response partition(s), 1 implied partition(s)
[2021-03-02 17:30:39,132] [DEBUG] [latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1] [o.a.k.c.c.i.Fetcher Fetcher.java:1177] [Consumer clientId=latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1-consumer, groupId=latest] Added READ_UNCOMMITTED fetch request for partition test.topic at position FetchPosition{offset=128, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=Optional[vm3.lab (id: 3 rack: 1)], epoch=1}} to node vm3.lab (id: 3 rack: 1)
[2021-03-02 17:30:39,132] [DEBUG] [latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1] [o.a.k.c.FetchSessionHandler FetchSessionHandler.java:259] [Consumer clientId=latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1-consumer, groupId=latest] Built incremental fetch (sessionId=614342128, epoch=49) for node 3. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
[2021-03-02 17:30:39,132] [DEBUG] [latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1] [o.a.k.c.c.i.Fetcher Fetcher.java:261] [Consumer clientId=latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1-consumer, groupId=latest] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(test.topic)) to broker vm3.lab (id: 3 rack: 1)
[2021-03-02 17:30:39,132] [DEBUG] [latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1] [o.a.k.c.NetworkClient NetworkClient.java:505] [Consumer clientId=latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1-consumer, groupId=latest] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=11, clientId=latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1-consumer, correlationId=63) and timeout 60000 to node 3: {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=614342128,session_epoch=49,topics=[],forgotten_topics_data=[],rack_id=}
[2021-03-02 17:30:39,636] [DEBUG] [latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1] [o.a.k.c.NetworkClient NetworkClient.java:840] [Consumer clientId=latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1-consumer, groupId=latest] Received FETCH response from node 3 for request with header RequestHeader(apiKey=FETCH, apiVersion=11, clientId=latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1-consumer, correlationId=63): org.apache.kafka.common.requests.FetchResponse#50fb365c
[2021-03-02 17:30:39,636] [DEBUG] [latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1] [o.a.k.c.FetchSessionHandler FetchSessionHandler.java:463] [Consumer clientId=latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1-consumer, groupId=latest] Node 0 sent an incremental fetch response with throttleTimeMs = 3 for session 614342128 with 0 response partition(s), 1 implied partition(s)
[2021-03-02 17:30:39,637] [DEBUG] [latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1] [o.a.k.c.c.i.Fetcher Fetcher.java:1177] [Consumer clientId=latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1-consumer, groupId=latest] Added READ_UNCOMMITTED fetch request for partition test.topic at position FetchPosition{offset=128, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=Optional[vm3.lab (id: 3 rack: 1)], epoch=1}} to node vm3.lab (id: 3 rack: 1)
[2021-03-02 17:30:39,637] [DEBUG] [latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1] [o.a.k.c.FetchSessionHandler FetchSessionHandler.java:259] [Consumer clientId=latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1-consumer, groupId=latest] Built incremental fetch (sessionId=614342128, epoch=50) for node 3. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
[2021-03-02 17:30:39,637] [DEBUG] [latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1] [o.a.k.c.c.i.Fetcher Fetcher.java:261] [Consumer clientId=latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1-consumer, groupId=latest] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(test.topic)) to broker vm3.lab (id: 3 rack: 1)
[2021-03-02 17:30:39,637] [DEBUG] [latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1] [o.a.k.c.NetworkClient NetworkClient.java:505] [Consumer clientId=latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1-consumer, groupId=latest] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=11, clientId=latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1-consumer, correlationId=64) and timeout 60000 to node 3: {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=614342128,session_epoch=50,topics=[],forgotten_topics_data=[],rack_id=}
[2021-03-02 17:30:39,710] [DEBUG] [SpringContextShutdownHook]
[o.s.c.a.AnnotationConfigApplicationContext AbstractApplicationContext.java:1006] Closing org.springframework.context.annotation.AnnotationConfigApplicationContext#dc9876b, started on Tue Mar 02 17:29:08 GMT 2021 [2021-03-02 17:30:39,715] [DEBUG] [SpringContextShutdownHook] [o.s.c.a.AnnotationConfigApplicationContext AbstractApplicationContext.java:1006] Closing org.springframework.context.annotation.AnnotationConfigApplicationContext#71391b3f, started on Tue Mar 02 17:29:12 GMT 2021, parent: org.springframework.context.annotation.AnnotationConfigApplicationContext#dc9876b [2021-03-02 17:30:39,718] [DEBUG] [SpringContextShutdownHook] [o.s.c.s.DefaultLifecycleProcessor DefaultLifecycleProcessor.java:369] Stopping beans in phase 2147483547 [2021-03-02 17:30:39,718] [DEBUG] [SpringContextShutdownHook] [o.s.c.s.DefaultLifecycleProcessor DefaultLifecycleProcessor.java:242] Bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry' completed its stop procedure [2021-03-02 17:30:39,719] [DEBUG] [SpringContextShutdownHook] [o.a.k.s.KafkaStreams KafkaStreams.java:1016] stream-client [latest-e07d649d-5178-4107-898b-08b8008d822e] Stopping Streams client with timeoutMillis = 10000 ms. 
[2021-03-02 17:30:39,719] [INFO] [SpringContextShutdownHook] [o.a.k.s.KafkaStreams KafkaStreams.java:287] stream-client [latest-e07d649d-5178-4107-898b-08b8008d822e] State transition from RUNNING to PENDING_SHUTDOWN [2021-03-02 17:30:39,729] [INFO] [kafka-streams-close-thread] [o.a.k.s.p.i.StreamThread StreamThread.java:1116] stream-thread [latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1] Informed to shut down [2021-03-02 17:30:39,729] [INFO] [kafka-streams-close-thread] [o.a.k.s.p.i.StreamThread StreamThread.java:221] stream-thread [latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1] State transition from RUNNING to PENDING_SHUTDOWN [2021-03-02 17:30:39,788] [DEBUG] [latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1] [o.a.k.s.p.i.StreamThread StreamThread.java:772] stream-thread [latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1] State already transits to PENDING_SHUTDOWN, skipping the run once call after poll request [2021-03-02 17:30:39,788] [DEBUG] [latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1] [o.a.k.s.p.i.StreamThread StreamThread.java:206] stream-thread [latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1] Ignoring request to transit from PENDING_SHUTDOWN to PENDING_SHUTDOWN: only DEAD state is a valid next state [2021-03-02 17:30:39,788] [INFO] [latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1] [o.a.k.s.p.i.StreamThread StreamThread.java:1130] stream-thread [latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1] Shutting down [2021-03-02 17:30:39,788] [DEBUG] [latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1] [o.a.k.s.p.i.AssignedStreamsTasks AssignedStreamsTasks.java:529] stream-thread [latest-e07d649d-5178-4107-898b-08b8008d822e-StreamThread-1] Clean shutdown of all active tasks Please advise.
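The SpringContextShutdownHook entries at the end of the log suggest the JVM exited and the Spring context was closed deliberately, rather than Kafka Streams crashing. Before changing the application, it can help to confirm from the Kubernetes side why the container stopped. A minimal diagnostic sketch; the pod name my-streams-pod is a placeholder for your actual pod:

```shell
# Check whether Kubernetes killed the container (OOMKilled, failed probe)
# or whether the process exited on its own.
kubectl describe pod my-streams-pod

# Logs from the previous container instance, if the pod restarted
kubectl logs my-streams-pod --previous

# Exit code of the last terminated container: 0 together with a
# SpringContextShutdownHook in the logs usually means the application
# finished main() and shut down cleanly rather than crashed.
kubectl get pod my-streams-pod \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'
```

If the exit code is 0 and there is no OOM or probe event, the next place to look is why the Spring application context is closing (for example, the main thread completing with no non-daemon threads keeping the JVM alive).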
EB CLI create failed with error: TypeError - expected str, bytes or os.PathLike object, not NoneType
This is my first time deploying a Node.js Express application to AWS using the EB CLI, following the AWS guide. As a first step, I ran eb init successfully. Next, I ran eb create my-env to create an environment, and I got an error message: ERROR: TypeError - expected str, bytes or os.PathLike object, not NoneType Update: This is the log from eb logs /var/log/eb-engine.log ---------------------------------------- 2021/02/24 02:37:43.600169 [INFO] Executing instruction: RunAppDeployPreDeployHooks 2021/02/24 02:37:43.600179 [INFO] The dir .platform/hooks/predeploy/ does not exist in the application. Skipping this step... 2021/02/24 02:37:43.600191 [INFO] Executing instruction: stop X-Ray 2021/02/24 02:37:43.600196 [INFO] stop X-Ray ... 2021/02/24 02:37:43.600212 [INFO] Running command /bin/sh -c systemctl show -p PartOf xray.service 2021/02/24 02:37:43.607595 [WARN] stopProcess Warning: process xray is not registered 2021/02/24 02:37:43.607611 [INFO] Running command /bin/sh -c systemctl stop xray.service 2021/02/24 02:37:43.615106 [INFO] Executing instruction: stop proxy 2021/02/24 02:37:43.615123 [INFO] Running command /bin/sh -c systemctl show -p PartOf httpd.service 2021/02/24 02:37:43.623248 [WARN] deregisterProcess Warning: process httpd is not registered, skipping... 2021/02/24 02:37:43.623266 [INFO] Running command /bin/sh -c systemctl show -p PartOf nginx.service 2021/02/24 02:37:43.628431 [WARN] deregisterProcess Warning: process nginx is not registered, skipping... 2021/02/24 02:37:43.628457 [INFO] Executing instruction: FlipApplication 2021/02/24 02:37:43.628462 [INFO] Fetching environment variables... 2021/02/24 02:37:43.628470 [INFO] setting default port 8080 to application 2021/02/24 02:37:43.628559 [INFO] Purge old process... 2021/02/24 02:37:43.628593 [INFO] Register application processes... 2021/02/24 02:37:43.628598 [INFO] Registering the proc: web Update 2: For now, I have deployed my application to AWS EB manually (by uploading a source zip), but I am still trying to find and fix the above problem. 
I have searched many times but have not found a solution. If you understand my problem, please help me. Thank you so much. FINAL UPDATE: I resolved the problem thanks to this comment; it was related to my git config.
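For context on the final update: when the project directory is a git repository, the EB CLI uses git to package the source bundle, so a repository without any commits (or with a broken git configuration) can surface as this kind of NoneType error. A minimal sketch of checks worth running, assuming your project is the current directory and my-env is your environment name:

```shell
# The EB CLI bundles the source via git when the project is a git repo,
# so HEAD must point at a commit.
git rev-parse --verify HEAD || echo "no commits yet - commit before 'eb create'"

# Make sure everything you want deployed is committed
git add -A
git commit -m "snapshot for eb create"

# Then retry
eb create my-env
```

If the repository state looks fine, running the EB CLI with `--debug` prints the full Python traceback, which shows exactly which value was None.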
How to connect Mist to the private blockchain on remote server (Azure)?
I've installed Mist on my local PC (Windows 10), but I don't want to sync the Main/Test networks. So I've used this Ethereum + Azure tutorial, and now I can work via SSH on my private network. geth --dev console Moreover, I know that it's possible to run Mist on a custom blockchain using a special flag: mist.exe --rpc http://YOUR_IP:PORT So, according to geth --help, I'm running geth --dev --rpc console on Azure's virtual machine; after that I'm running mist.exe --rpc http://VM_IP:8545 and there is an error: [2016-09-24 18:01:21.928] [INFO] Sockets/node-ipc - Connect to {"hostPort":"http://VM_IP:8545"} [2016-09-24 18:01:24.968] [ERROR] Sockets/node-ipc - Connection failed (3000ms elapsed) [2016-09-24 18:01:24.971] [WARN] EthereumNode - Failed to connect to node. Maybe it's not running so let's start our own... [2016-09-24 18:01:24.979] [INFO] EthereumNode - Node type: geth [2016-09-24 18:01:24.982] [INFO] EthereumNode - Network: test [2016-09-24 18:01:24.983] [INFO] EthereumNode - Start node: geth test [2016-09-24 18:01:32.284] [INFO] EthereumNode - 3000ms elapsed, assuming node started up successfully [2016-09-24 18:01:32.286] [INFO] EthereumNode - Started node successfully: geth test [2016-09-24 18:01:32.327] [INFO] Sockets/node-ipc - Connect to {"hostPort":"http://VM_IP:8545"} [2016-09-24 18:02:02.332] [ERROR] Sockets/node-ipc - Connection failed (30000ms elapsed) [2016-09-24 18:02:02.333] [ERROR] EthereumNode - Failed to connect to node Error: Unable to connect to socket: timeout P.S. Mist version - 0.8.2
Your approach is correct. I would say you have a network configuration issue that prevents your Mist from talking to geth. I would suggest doing the following test and seeing whether you run into the same issue: - on the machine where you have Mist, find the geth.exe executable - run geth with geth --testnet --rpc - start Mist with ./Mist --rpc /.../Ethereum/testnet/geth.ipc or ./Mist --rpc http://localhost:8545 I am on a Mac, so I guess you will have to reverse the slashes and add some C: decorations here and there.
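One detail worth adding alongside the test above: geth binds its HTTP-RPC server to 127.0.0.1 by default, so a Mist instance on another machine cannot reach it unless geth is told to listen on an external interface and the Azure network security group allows the port. A sketch using the flags from that geth era; VM_IP is a placeholder, and binding to 0.0.0.0 exposes the node publicly, so restrict access with an NSG rule in practice:

```shell
# On the Azure VM: expose the RPC endpoint beyond localhost
geth --dev --rpc --rpcaddr 0.0.0.0 --rpcport 8545 --rpccorsdomain "*" console

# Quick connectivity check from the local PC before starting Mist
curl -X POST -H "Content-Type: application/json" \
     --data '{"jsonrpc":"2.0","method":"web3_clientVersion","params":[],"id":1}' \
     http://VM_IP:8545

# Then point Mist at the VM
mist.exe --rpc http://VM_IP:8545
```

If the curl call times out, the problem is the network path (geth bind address or the Azure NSG), not Mist itself.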
Unable to Start Production Profile in JHipster Version 1.0.0
Hello everyone, I have just updated JHipster to 1.0.0, but when I create a project without WebSocket (i.e., Atmosphere), it does not work with the production profile and throws the exception below when I try to run the project. Please help me. [DEBUG] com.application.gom.config.AsyncConfiguration - Creating Async Task Executor [DEBUG] com.application.gom.config.MetricsConfiguration - Registring JVM gauges [INFO] com.application.gom.config.MetricsConfiguration - Initializing Metrics JMX reporting [INFO] com.hazelcast.instance.DefaultAddressPicker - null [dev] [3.2.5] Prefer IPv4 stack is true. [INFO] com.hazelcast.instance.DefaultAddressPicker - null [dev] [3.2.5] Picked Address[192.168.1.11]:5701, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5701], bind any local is true [INFO] com.hazelcast.system - [192.168.1.11]:5701 [dev] [3.2.5] Hazelcast 3.2.5 (20140814) starting at Address[192.168.1.11]:5701 [INFO] com.hazelcast.system - [192.168.1.11]:5701 [dev] [3.2.5] Copyright (coffee) 2008-2014 Hazelcast.com [INFO] com.hazelcast.instance.Node - [192.168.1.11]:5701 [dev] [3.2.5] Creating MulticastJoiner [INFO] com.hazelcast.core.LifecycleService - [192.168.1.11]:5701 [dev] [3.2.5] Address[192.168.1.11]:5701 is STARTING [INFO] com.hazelcast.nio.SocketConnector - [192.168.1.11]:5701 [dev] [3.2.5] Connecting to /192.168.1.12:5701, timeout: 0, bind-any: true [INFO] com.hazelcast.nio.TcpIpConnectionManager - [192.168.1.11]:5701 [dev] [3.2.5] 55994 accepted socket connection from /192.168.1.12:5701 [INFO] com.hazelcast.nio.TcpIpConnection - [192.168.1.11]:5701 [dev] [3.2.5] Connection [Address[192.168.1.12]:5701] lost. Reason: java.io.EOFException[Remote socket closed!] [WARN] com.hazelcast.nio.ReadHandler - [192.168.1.11]:5701 [dev] [3.2.5] hz.gomapplication.IO.thread-in-0 Closing socket to endpoint Address[192.168.1.12]:5701, Cause:java.io.EOFException: Remote socket closed! 
[INFO] com.hazelcast.nio.SocketConnector - [192.168.1.11]:5701 [dev] [3.2.5] Connecting to /192.168.1.12:5701, timeout: 0, bind-any: true [INFO] com.hazelcast.nio.TcpIpConnectionManager - [192.168.1.11]:5701 [dev] [3.2.5] 55995 accepted socket connection from /192.168.1.12:5701
The error you have has nothing to do with Atmosphere, but with Hazelcast. It looks like you selected the Hazelcast option and that it can't create its cluster because of an issue with host 192.168.1.12 -> this is probably a network error. Do you have a firewall running?
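To test the firewall hypothesis, a quick connectivity check between the two hosts shows whether Hazelcast's port is reachable. A sketch assuming a firewalld-based Linux host (adapt for ufw or iptables); the IPs are taken from the log above:

```shell
# From the node at 192.168.1.11, verify TCP reachability of the peer's
# Hazelcast port (5701); nc exits non-zero if the port is blocked.
nc -zv 192.168.1.12 5701

# On the peer, check which ports firewalld currently allows
sudo firewall-cmd --list-ports

# If 5701 is missing, open it and reload (firewalld example)
sudo firewall-cmd --add-port=5701/tcp --permanent && sudo firewall-cmd --reload
```

Note that the log also shows a connection being accepted and then dropped with an EOFException, which can also indicate a version or configuration mismatch between the two Hazelcast members rather than a blocked port.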