AKS cannot pull Docker image from private registry with Let's Encrypt certificate - azure

I am getting an x509 certificate issue when AKS tries to pull a Docker image from my private registry, which is secured with a Let's Encrypt certificate. How can I manage the certificate store in AKS to add the CA of my certificate, etc.?

Normal Scheduled 8m8s default-scheduler Successfully assigned default/proxy-deployment-568646f8d4-7gnnt to aks-default-26787434-vmss000000
Normal Pulling 6m34s (x4 over 8m7s) kubelet Pulling image "my registry/my-image:lts"
Warning Failed 6m34s (x4 over 8m7s) kubelet Failed to pull image "my registry/my-image:lts": rpc error: code = Unknown desc = Error response from daemon: Get https://my registry/v2/: x509: certificate signed by unknown authority
Warning Failed 6m34s (x4 over 8m7s) kubelet Error: ErrImagePull
Normal BackOff 6m18s (x6 over 8m7s) kubelet Back-off pulling image "my registry/my-image:lts"
Warning Failed 3m5s (x19 over 8m7s) kubelet Error: ImagePullBackOff
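With Let's Encrypt, this error often means either that the registry serves only the leaf certificate without the intermediate chain, or that the node's CA bundle is stale. A diagnostic sketch, not a confirmed fix; the hostname below is a placeholder for the redacted registry address:
# Placeholder for the redacted registry hostname.
REGISTRY=my-registry.example.com
# Show the chain the registry actually presents. With Let's Encrypt you should see
# the leaf plus an intermediate (e.g. "R3") chaining to "ISRG Root X1",
# not just the leaf certificate on its own.
openssl s_client -connect "$REGISTRY":443 -servername "$REGISTRY" -showcerts </dev/null 2>/dev/null \
  | grep -E ' s:| i:|Verify return code'
If only the leaf certificate is returned, fixing the server's certificate chain is usually enough; if the chain looks complete, the node image's trusted roots are the next thing to check.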

Related

Minikube not starting on Linux machine

Initially, minikube used to run fine on the same machine.
Now when I run the minikube start command it does not start; I have tried everything but the result is still the same.
Here are the logs of minikube start:
[Company\sainath.reddy#hostm repos]$ minikube start
😄 minikube v1.26.0 on Amazon 2 (xen/amd64)
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🏃 Updating the running docker "minikube" container ...
❗ This container is having trouble accessing https://k8s.gcr.io
💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳 Preparing Kubernetes v1.24.1 on Docker 20.10.17 ...
▪ kubelet.cgroup-driver=systemd
🤦 Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
▪ Generating certificates and keys ...
▪ Booting up control plane ...
💢 initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.14.285-215.501.amzn2.x86_64
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
stderr:
W0723 18:18:12.089975 49249 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.14.285-215.501.amzn2.x86_64\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
▪ Generating certificates and keys ...
▪ Booting up control plane ...
💣 Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.14.285-215.501.amzn2.x86_64
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
stderr:
W0723 18:22:16.760474 50480 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.14.285-215.501.amzn2.x86_64\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
╭───────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ 😿 If the above advice does not help, please let us know: │
│ 👉 https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
❌ Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.14.285-215.501.amzn2.x86_64
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
stderr:
W0723 18:22:16.760474 50480 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.14.285-215.501.amzn2.x86_64\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
💡 Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
🍿 Related issue: https://github.com/kubernetes/minikube/issues/4172
[Company\sainath.reddy#host repos]$
I have tried minikube start --extra-config=kubelet.cgroup-driver=systemd but the result is still the same.
I am running minikube on a Linux machine, which is Amazon Linux 2.
Here are the logs when I check the kubelet service:
[Company\sainath.reddy#host ~]$ systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Mon 2022-07-25 12:28:51 IST; 2s ago
Docs: https://kubernetes.io/docs/
Process: 744434 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
Main PID: 744434 (code=exited, status=1/FAILURE)
[Company\sainath.reddy#host ~]$
The error message (you can search for the error keyword K8S_KUBELET_NOT_RUNNING) is:
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
The kubelet has to be up and running. Check it with these commands:
systemctl status kubelet
journalctl -xeu kubelet
Steps:
Is systemd installed on your machine?
Check it with this:
rpm -qa | grep -i systemd
If not, install it with:
yum install -y /usr/bin/systemctl; systemctl --version
Try updating the cgroup driver in the /etc/docker/daemon.json file (create it if it is not there):
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
Restart the services (use sudo where needed):
systemctl daemon-reload
systemctl restart docker.service
systemctl restart kubelet
If the kubelet is running after this, the issue is resolved; you can verify with the commands below.
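A quick verification sketch (exact output wording varies by Docker version):
# Confirm Docker picked up the systemd cgroup driver.
docker info 2>/dev/null | grep -i 'cgroup driver'
# Confirm the kubelet came back up, and look at its recent log if it did not.
systemctl is-active kubelet
journalctl -u kubelet --since "5 minutes ago" --no-pager | tail -n 20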
If you then run into issues with swap, you can turn swap off; a sketch follows.
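The usual way to disable swap looks roughly like this (a sketch; review the fstab change before applying it on your machine):
# Turn swap off immediately.
sudo swapoff -a
# Keep it off across reboots by commenting out swap entries in /etc/fstab.
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab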

Not able to invoke/query chaincode using Fabric Node SDK

I have created a sample HLF network with 3 organizations. I have taken an orderer and a peer from each organization (3 orderers, 3 peers, 3 Fabric CAs, and 3 CouchDB instances in total).
I have successfully created the certificates, system channel, channel configuration, application channel and also successfully deployed the chaincode on each peer.
I am able to invoke/query any chaincode using the peer binary in the Docker CLI, but not able to invoke/query the same chaincode through the Fabric Node SDK.
I have created the connection profile as per the template provided in the test network, and I am also able to register any user for a specific organization. But whenever I try to query any chaincode function, I get the error below:
[ServiceEndpoint]: Error: Failed to connect before the deadline on
Committer- name: orderer.example.com:7050, url:grpcs://localhost:7050,
connected:false, connectAttempted:true [ServiceEndpoint]: waitForReady
Failed to connect to remote gRPC server orderer.example.com:7050 url:grpcs://localhost:7050 timeout:3000
When I checked the orderer logs, I found this error:
ServerHandshake -> ERRO 087 Server TLS handshake failed in 2.085859ms
with error EOF server=Orderer remoteaddress=172.23.0.1:45678
Why am I getting this error?
I am only trying to query, so why is it connecting to the orderer?
If there is a TLS issue, why am I able to query through the peer binary?
This link might help: Hyperledger Fabric CA releasing wrong certificates (wrong issuer) to Node SDK when TLS enabled.
If you are not running the test network on your local machine, then you will need to set the connection option discovery.asLocalhost to false.
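Independently of the SDK options, it can help to confirm that the orderer TLS endpoint is reachable and presents a certificate signed by the CA referenced in your connection profile. A hedged sketch (the CA file path is an assumption; substitute the orderer TLS CA certificate from your own crypto material):
# Placeholder path; point this at the orderer TLS CA cert used in your connection profile.
ORDERER_TLS_CA=crypto-config/ordererOrganizations/example.com/tlsca/tlsca.example.com-cert.pem
# A successful handshake here but a failing SDK connection usually points to the
# connection profile (wrong PEM, wrong hostname override, or asLocalhost).
openssl s_client -connect localhost:7050 -servername orderer.example.com \
  -CAfile "$ORDERER_TLS_CA" </dev/null 2>/dev/null | grep -E 'Verify return code|subject='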

Why does the pod terminate itself?

I am trying to install Fluentd with Elasticsearch and Kibana using the Bitnami Helm chart.
I am following the article mentioned below:
Integrate Logging Kubernetes Kibana ElasticSearch Fluentd
But when I deploy Elasticsearch, its pod goes into a Terminating or Back-off state.
I have been stuck on this for 3 days; any help is appreciated.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 41m (x2 over 41m) default-scheduler error while running "VolumeBinding" filter plugin for pod "elasticsearch-master-0": pod has unbound immediate PersistentVolumeClaims
Normal Scheduled 41m default-scheduler Successfully assigned default/elasticsearch-master-0 to minikube
Normal Pulling 41m kubelet, minikube Pulling image "busybox:latest"
Normal Pulled 41m kubelet, minikube Successfully pulled image "busybox:latest"
Normal Created 41m kubelet, minikube Created container sysctl
Normal Started 41m kubelet, minikube Started container sysctl
Normal Pulling 41m kubelet, minikube Pulling image "docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.6"
Normal Pulled 39m kubelet, minikube Successfully pulled image "docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.6"
Normal Created 39m kubelet, minikube Created container chown
Normal Started 39m kubelet, minikube Started container chown
Normal Created 38m kubelet, minikube Created container elasticsearch
Normal Started 38m kubelet, minikube Started container elasticsearch
Warning Unhealthy 38m kubelet, minikube Readiness probe failed: Get http://172.17.0.7:9200/_cluster/health?local=true: dial tcp 172.17.0.7:9200: connect: connection refused
Normal Pulled 38m (x2 over 38m) kubelet, minikube Container image "docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.6" already present on machine
Warning FailedMount 32m kubelet, minikube MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition
Normal SandboxChanged 32m kubelet, minikube Pod sandbox changed, it will be killed and re-created.
Normal Pulling 32m kubelet, minikube Pulling image "busybox:latest"
Normal Pulled 32m kubelet, minikube Successfully pulled image "busybox:latest"
Normal Created 32m kubelet, minikube Created container sysctl
Normal Started 32m kubelet, minikube Started container sysctl
Normal Pulled 32m kubelet, minikube Container image "docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.6" already present on machine
Normal Created 32m kubelet, minikube Created container chown
Normal Started 32m kubelet, minikube Started container chown
Normal Pulled 32m (x2 over 32m) kubelet, minikube Container image "docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.6" already present on machine
Normal Created 32m (x2 over 32m) kubelet, minikube Created container elasticsearch
Normal Started 32m (x2 over 32m) kubelet, minikube Started container elasticsearch
Warning Unhealthy 32m kubelet, minikube Readiness probe failed: Get http://172.17.0.6:9200/_cluster/health?local=true: dial tcp 172.17.0.6:9200: connect: connection refused
Warning BackOff 32m (x2 over 32m) kubelet, minikube Back-off restarting failed container
The issue here is that the pod has unbound immediate PersistentVolumeClaims. You can set master.persistence.enabled to false when deploying it with Helm. Alternatively, check whether a default StorageClass exists in the cluster; if it doesn't, create one and make it the default. A sketch of these checks follows.
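A sketch of those checks (the chart and release names are placeholders for whatever you installed; the --set flag comes from the answer above):
# Is there a default StorageClass? Look for "(default)" next to one of the entries.
kubectl get storageclass
# See which PersistentVolumeClaim is stuck and why.
kubectl get pvc
kubectl describe pvc <name-of-the-pending-pvc>
# Or redeploy with persistence disabled (placeholder chart/release names).
helm upgrade --install elasticsearch bitnami/elasticsearch --set master.persistence.enabled=false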
Short answer: it crashed. You can check the Pod status object for details such as the exit status and whether it was an OOMKill, and then look at the container logs to see if they show anything; see the sketch below.
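For example (a sketch; the pod and container names are taken from the events above):
# Why did the last container instance exit? Look for reason/exitCode (e.g. OOMKilled).
kubectl get pod elasticsearch-master-0 -o jsonpath='{.status.containerStatuses[*].lastState.terminated}'
# Logs from the previous (crashed) container instance.
kubectl logs elasticsearch-master-0 -c elasticsearch --previous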

Error while submitting transactions in Hyperledger Fabric

I am running Hyperledger Fabric with 4 peers in 1 organization, 1 orderer, and 1 CA. All 4 peers are on different VMs, and the orderer and CA are running on separate VMs. The chaincode is up and running on all the VMs. I want to set up a client on a different VM that can send transaction requests to the network. Using this code, I have changed the VM address to point to my peer0.
I run the following 2 files first:
node enrollAdmin.js
node registerUser.js
I am getting the following error on running the last command:
Store path:/root/gopath/src/github.com/hyperledger/fabric-samples/fabcar/hfc-key-store
Successfully loaded admin from persistence
Failed to register: Error: fabric-ca request register failed with errors [[{"code":20,"message":"Authentication failure"}]]
I checked the logs of the CA container. The container log is as follows:
2019/04/16 17:34:55 [DEBUG] Received request for /api/v1/register
2019/04/16 17:34:55 [DEBUG] Caller is using a x509 certificate
2019/04/16 17:34:55 [DEBUG] Failed to verify token based on new authentication header requirements: %!s(<nil>)
2019/04/16 17:34:55 [INFO] 192.168.1.22:44826 POST /api/v1/register 401 26 "Untrusted certificate: Failed to verify certificate: x509:
certificate signed by unknown authority (possibly because of
"x509: ECDSA verification failure" while trying to verify candidate authority certificate "ca.org1.example.com")"
I have copied the same generated crypto material to all the VMs, including the client. How can I resolve this error?
UPDATE: When I place the client code on one of the VMs running peer containers, it works fine. Transactions are executed successfully.
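No answer is recorded here, but given the "Untrusted certificate ... ca.org1.example.com" message, one thing worth checking is whether the admin identity stored on the client VM was enrolled against the same CA certificate the running CA container is using. A hedged sketch (the container name and in-container path are assumptions based on a typical fabric-samples setup; the key-store path comes from the output above):
# On the client VM: fingerprint of the org CA cert from the copied crypto material
# (path is an assumption based on the usual cryptogen layout).
openssl x509 -noout -fingerprint -sha256 \
  -in crypto-config/peerOrganizations/org1.example.com/ca/ca.org1.example.com-cert.pem
# On the CA VM: copy out and fingerprint the cert the running CA actually uses
# (container name and path are assumptions; adjust to your compose file).
docker cp ca.org1.example.com:/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem /tmp/ca-in-use.pem
openssl x509 -noout -fingerprint -sha256 -in /tmp/ca-in-use.pem
# If the fingerprints differ, the stored admin was enrolled against stale material;
# clear the key store and re-run the enrollment scripts from the question.
rm -rf /root/gopath/src/github.com/hyperledger/fabric-samples/fabcar/hfc-key-store
node enrollAdmin.js
node registerUser.js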

Timeout expired while starting chaincode error when instantiating chaincode

When I run the example here (fabric e2e examples), it fails at instantiating the chaincode. You can see a screenshot of the error here.
I can see that the chaincode instance/container was started but exited shortly after.
Any ideas on why this is happening and how to resolve it?
I had the same issue while testing the fabric-samples balance-transfer and fabcar samples, and during Fabric PTE testing.
I solved it by setting CORE_PEER_CHAINCODELISTENADDRESS to the peer's container-name:port in the Docker Compose file,
e.g., CORE_PEER_CHAINCODELISTENADDRESS=peer0.org1.example.com:7052
Issue: the chaincode container (which gets created and killed after a certain time) shows the error below in its log when inspected with docker logs CONTAINER-ID.
UTC [shim] userChaincodeStreamGetter -> ERRO 001 x509: cannot validate certificate for 172.18.0.5 because it doesn't contain any IP SANs
error trying to connect to local peer
github.com/hyperledger/fabric/core/chaincode/shim.userChaincodeStreamGetter
/opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/shim/chaincode.go:109
github.com/hyperledger/fabric/core/chaincode/shim.Start
/opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/shim/chaincode.go:148
main.main
/chaincode/input/src/github.com/example_cc/go/example_cc.go:199
runtime.main
/opt/go/src/runtime/proc.go:185
runtime.goexit
/opt/go/src/runtime/asm_amd64.s:2337
2017-12-26 09:59:52.823 UTC [example_cc0] Errorf -> ERRO 002 Error starting Simple chaincode: error trying to connect to local peer: x509: cannot validate certificate for 172.18.0.5 because it doesn't contain any IP SANs
