Cannot bring down started Hyperledger Fabric sample network - permission denied - Linux

I followed the instructions given in the edX course (LinuxFoundationX: LFS171x Blockchain for Business - An Introduction to Hyperledger Technologies), which are similar to the official Hyperledger Fabric guide (https://hyperledger-fabric.readthedocs.io/en/release-1.1/build_network.html).
VM snapshot image: Ubuntu 18.04 on VMware Workstation (host: Windows 10)
The most relevant steps:
$ curl -sSL https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap.sh | bash -s 1.1.0
$ export PATH=$PWD/bin:$PATH
$ git clone https://github.com/hyperledger/fabric-samples.git
$ cd fabric-samples/first-network
$ ./byfn.sh -m generate
$ ./byfn.sh -m up
========= All GOOD, BYFN execution completed ===========
 _____   _   _   ____
| ____| | \ | | |  _ \
|  _|   |  \| | | | | |
| |___  | |\  | | |_| |
|_____| |_| \_| |____/
Now the network is up and working. So let's bring it down without any changes:
t1@ubuntu:~/fabric-samples/first-network$ ./byfn.sh -m down
Stopping with channel 'mychannel' and CLI timeout of '10' seconds and CLI delay of '3' seconds
Continue? [Y/n]
proceeding ...
Stopping cli ... error
Stopping peer1.org1.example.com ... error
Stopping peer1.org2.example.com ... error
Stopping peer0.org2.example.com ... error
Stopping peer0.org1.example.com ... error
Stopping orderer.example.com ... error
ERROR: for cli cannot stop container: 743f05760adc094bf402c4d80f76212abe4013e274b4ce5fef49ab40d265431d: Cannot kill container 743f05760adc094bf402c4d80f76212abe4013e274b4ce5fef49ab40d265431d: unknown error after kill: docker-runc did not terminate sucessfully: container_linux.go:387: signaling init process caused "permission denied"
: unknown
Removing network net_byfn
ERROR: error while removing network: network net_byfn id d75ceca2566ded50e7e9a2dce912e54df9b6d243baa8b7e2dade3f72da5d3815 has active endpoints
Troubleshooting
$ docker rmi -f $(docker images -q)
This will not work:
Deleted: sha256:fd96d34cdd7035e9d7c4fdf4dae4e9c8d4a2e9a5f082a13043bafb5109992a0a
Deleted: sha256:809c70fab2ffe494878efb5afda03b2aaeda26a6113428f1a9a907a800c3bbb7
Deleted: sha256:833649a3e04c96faf218d8082b3533fa0674664f4b361c93cad91cf97222b733
Error response from daemon: conflict: unable to delete be773bfc074c (cannot be forced) - image is being used by running container 546cfd593673
Error response from daemon: conflict: unable to delete 0592b563eec8 (cannot be forced) - image is being used by running container d36402eba8e7
Error response from daemon: conflict: unable to delete 4460ed7ada01 (cannot be forced) - image is being used by running container 6247fdfca8f2
Error: No such image: 72617b4fa9b4
Error response from daemon: conflict: unable to delete b7bfddf508bc (cannot be forced) - image is being used by running container 743f05760adc
Error response from daemon: conflict: unable to delete b7bfddf508bc (cannot be forced) - image is being used by running container 743f05760adc
Error response from daemon: conflict: unable to delete ce0c810df36a (cannot be forced) - image is being used by running container 2314bf8b86b0
Error response from daemon: conflict: unable to delete ce0c810df36a (cannot be forced) - image is being used by running container 2314bf8b86b0
Error response from daemon: conflict: unable to delete b023f9be0771 (cannot be forced) - image is being used by running container 98307e956bd5
Error response from daemon: conflict: unable to delete b023f9be0771 (cannot be forced) - image is being used by running container 541ff05a7925
Error: No such image: 82098abb1a17
Error: No such image: c8b4909d8d46
Error: No such image: 92cbb952b6f8
Error: No such image: 554c591b86a8
Error: No such image: 7e73c828fc5b
Error response from daemon: conflict: unable to delete 220e5cf3fb7f (cannot be forced) - image has dependent child images
Thanks for your support.
(This is my first post here; I hope it meets expectations.)

First, deleting the images with docker rmi -f $(docker images -q) won't work, because there are still running containers using those images.
The ./byfn.sh -m down script tried to stop the containers spawned by Fabric, but there was an error, as you can see in the log: signaling init process caused "permission denied": unknown.
The cause of this error is usually AppArmor. Try to run:
sudo aa-remove-unknown
and/or stop the AppArmor service using:
sudo service apparmor stop
sudo update-rc.d -f apparmor remove
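For reference, a rough recovery sequence after dealing with AppArmor could look like this (a sketch; the aa-status check and the Docker restart are extra steps I'd add, assuming the containers' AppArmor confinement was the culprit):
sudo aa-status                 # see which AppArmor profiles are loaded and enforced
sudo aa-remove-unknown         # unload profiles that no longer have a matching profile file
sudo systemctl restart docker  # let Docker re-confine its containers cleanly
cd ~/fabric-samples/first-network
./byfn.sh -m down              # retry the teardown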

Try running the commands below one by one; they will clean up the Docker containers so you can start fresh:
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
docker volume prune
docker network prune
Note: This won't remove the Hyperledger images, so there is no need to worry about reinstalling anything.
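To double-check that the cleanup worked before bringing the network back up, a quick sanity check (assuming you are still in fabric-samples/first-network) could be:
docker ps -a                        # should list no containers
docker network ls | grep net_byfn   # should print nothing
docker volume ls                    # the net_* BYFN volumes should be gone
./byfn.sh -m generate               # then regenerate the artifacts and start again
./byfn.sh -m up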

I had the same problem on Ubuntu Linux; this fixed it:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
$ docker ps -qa | xargs docker rm
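If a container still refuses to stop after the daemon restart, force-removing it is a reasonable last resort (my addition, not strictly needed if the commands above succeed):
$ docker rm -f $(docker ps -aq)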

Related

Problem executing "minikube start" command

malik@malik:~$ minikube start
😄 minikube v1.12.0 on Ubuntu 18.04
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🎉 minikube 1.12.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.12.1
💡 To disable this notice, run: 'minikube config set WantUpdateNotification false'
🚜 Pulling base image ...
💾 Downloading Kubernetes v1.18.3 preload ...
E0727 07:25:35.757871 14015 cache.go:63] save image to file "k8s.gcr.io/kube-apiserver:v1.18.3" -> "/home/malik/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.3" failed: write: Get https://k8s.gcr.io/v2/kube-apiserver/blobs/sha256:83b4483280e5187b2801b449338d5755e5874ab80c44bf1ce615d258142e7c8b: dial tcp: lookup k8s.gcr.io: no such host
E0727 07:25:35.757643 14015 cache.go:63] save image to file "k8s.gcr.io/coredns:1.6.7" -> "/home/malik/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7" failed: write: Get https://k8s.gcr.io/v2/coredns/blobs/sha256:c6568d217a0023041ef9f729e8836b19f863bcdb612bb3a329ebc165539f5a80: dial tcp: lookup k8s.gcr.io: no such host
E0727 07:25:35.757512 14015 cache.go:63] save image to file "k8s.gcr.io/kube-scheduler:v1.18.3" -> "/home/malik/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.3" failed: write: Get https://k8s.gcr.io/v2/kube-scheduler/blobs/sha256:83b4483280e5187b2801b449338d5755e5874ab80c44bf1ce615d258142e7c8b: dial tcp: lookup k8s.gcr.io: no such host
E0727 07:26:22.529729 14015 cache.go:63] save image to file "kubernetesui/dashboard:v2.0.1" -> "/home/malik/.minikube/cache/images/kubernetesui/dashboard_v2.0.1" failed: nil image for kubernetesui/dashboard:v2.0.1: Get https://index.docker.io/v2/: dial tcp: lookup index.docker.io: no such host
E0727 07:26:22.544151 14015 cache.go:63] save image to file "kubernetesui/metrics-scraper:v1.0.4" -> "/home/malik/.minikube/cache/images/kubernetesui/metrics-scraper_v1.0.4" failed: nil image for kubernetesui/metrics-scraper:v1.0.4: Get https://index.docker.io/v2/: dial tcp: lookup index.docker.io: no such host
E0727 07:26:22.579102 14015 cache.go:63] save image to file "k8s.gcr.io/etcd:3.4.3-0" -> "/home/malik/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" failed: write: error calculating manifest: Get https://storage.googleapis.com/eu.artifacts.k8s-artifacts-prod.appspot.com/containers/images/sha256:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f: dial tcp: lookup storage.googleapis.com: no such host
E0727 07:26:22.579102 14015 cache.go:63] save image to file "k8s.gcr.io/kube-controller-manager:v1.18.3" -> "/home/malik/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.3" failed: write: error calculating manifest: Get https://storage.googleapis.com/eu.artifacts.k8s-artifacts-prod.appspot.com/containers/images/sha256:da26705ccb4b5eb623a7cc42e566d21b0e23c1f59a0b4d6acac3fb810538c0d5: dial tcp: lookup storage.googleapis.com: no such host
E0727 07:26:22.579194 14015 cache.go:63] save image to file "k8s.gcr.io/kube-proxy:v1.18.3" -> "/home/malik/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.3" failed: write: error calculating manifest: Get https://storage.googleapis.com/eu.artifacts.k8s-artifacts-prod.appspot.com/containers/images/sha256:3439b7546f29bec22edd737bc0a5770ead18b5ee5ce0aea5af9047a554715f9f: dial tcp: lookup storage.googleapis.com: no such host
E0727 07:26:22.579229 14015 cache.go:63] save image to file "gcr.io/k8s-minikube/storage-provisioner:v1.8.1" -> "/home/malik/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1" failed: write: error calculating manifest: Get https://storage.googleapis.com/artifacts.k8s-minikube.appspot.com/containers/images/sha256:4689081edb103a9e8174bf23a255bfbe0b2d9ed82edc907abab6989d1c60f02c: dial tcp: lookup storage.googleapis.com: no such host
E0727 07:26:22.619544 14015 cache.go:172] Error downloading kic artifacts: failed to download kic base image or any fallback image
❗ Executing "docker container inspect minikube --format={{.State.Status}}" took an unusually long time: 5.369799649s
💡 Restarting the docker service may improve performance.
🤷 docker "minikube" container is missing, will recreate.
🔥 Creating docker container (CPUs=2, Memory=2200MB) ...
🤦 StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --security-opt apparmor=unconfined --cpus=2 --memory=2200mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438: exit status 125
stdout:
stderr:
Unable to find image 'gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438' locally
docker: Error response from daemon: Get https://gcr.io/v2/k8s-minikube/kicbase/manifests/sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438: Get https://gcr.io/v2/token?scope=repository%3Ak8s-minikube%2Fkicbase%3Apull&service=gcr.io: net/http: request canceled (Client.Timeout exceeded while awaiting headers).
See 'docker run --help'.
🤷 docker "minikube" container is missing, will recreate.
🔥 Creating docker container (CPUs=2, Memory=2200MB) ...
😿 Failed to start docker container. "minikube start" may fix it: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --security-opt apparmor=unconfined --cpus=2 --memory=2200mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438: exit status 125
stdout:
stderr:
Unable to find image 'gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438' locally
docker: Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io: no such host.
See 'docker run --help'.
❌ [INVALID_PROXY_HOSTNAME] error provisioning host Failed to start host: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --security-opt apparmor=unconfined --cpus=2 --memory=2200mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438: exit status 125
stdout:
stderr:
Unable to find image 'gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438' locally
docker: Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io: no such host.
See 'docker run --help'.
💡 Suggestion: Verify that your HTTP_PROXY and HTTPS_PROXY environment variables are set correctly.
📘 Documentation: https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/
When I run the "minikube start" command, Docker fails to pull the images that are needed. Docker is being used as the virtual machine manager by Minikube, and kubectl provides the interface for using Minikube from the terminal. I tried the same commands and operations with the VirtualBox driver as well, but with no luck. The versions of Docker, kubectl and Minikube are up to date.
I have tried installing Minikube and kubectl several times using different packages and methods, but without success.
Please help me start a cluster and make my PC a worker node, so I can get going on the road of cloud computing development.
You might have a minikube VM that is running an old version and/or that minikube cannot connect to. You can try deleting the VM and/or wiping out ~/.minikube:
$ minikube delete
$ rm -rf ~/.minikube
If that doesn't work, then you have a problem with VirtualBox. Uninstall and reinstall it.
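Given that the log shows DNS lookups for k8s.gcr.io, gcr.io and index.docker.io failing with "no such host", and minikube itself suggests checking the proxy variables, it may also be worth verifying name resolution and proxy settings before retrying (a hedged extra check on top of the answer above):
$ nslookup k8s.gcr.io             # "no such host" here points at DNS/proxy issues rather than minikube itself
$ env | grep -i proxy             # confirm HTTP_PROXY, HTTPS_PROXY and NO_PROXY are what you expect
$ minikube start --driver=docker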
Three years ago, I spent a lot of time struggling with minikube, even though it has been the official way to run Kubernetes locally.
If you get stuck, I would suggest getting a KinD cluster up in a few seconds by just running this script:
kind_version="v0.8.1"
kind_bin_path=/usr/local/bin/kind
if [ ! -f ${kind_bin_path} ]; then
curl -Lo ./kind "https://kind.sigs.k8s.io/dl/${kind_version}/kind-$(uname)-amd64"
chmod +x ./kind
sudo mv ./kind ${kind_bin_path}
fi
cat <<EOF | kind create cluster --config -
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF
# kind v0.6+ merges the cluster's kubeconfig into ~/.kube/config automatically;
# if you need to re-export it: kind export kubeconfig --name kind
# Now check
kubectl get nodes
# Congrats!
To customize the cluster further, check the other YAML config options here.
Good luck with whichever approach works for you.
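If you later want to remove the KinD cluster again, the standard teardown is (added for completeness):
kind delete cluster --name kind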

Unable to run byfn.sh up

I'm trying to run Hyperledger Fabric's byfn on Amazon Lightsail instances. I ran the following launch script:
curl -o lightsail-compose.sh https://raw.githubusercontent.com/KY-Leung/Catena/master/setup/lightsail-compose.sh
chmod +x ./lightsail-compose.sh
./lightsail-compose.sh
curl -o /fabric-setup.sh https://raw.githubusercontent.com/KY-Leung/Catena/master/setup/fabric-setup.sh
chmod +x ./fabric-setup.sh
Subsequently, I SSH-ed into the instance and executed the following:
/fabric-setup.sh
cd fabric-samples/first-network/
./byfn.sh generate
./byfn.sh up
However, the following error occurred:
Starting for channel 'mychannel' with CLI timeout of '10' seconds and CLI delay of '3' seconds
Continue? [Y/n] y
proceeding ...
LOCAL_VERSION=1.4.0
DOCKER_IMAGE_VERSION=1.4.0
Creating network "net_byfn" with the default driver
Creating volume "net_orderer.example.com" with default driver
Creating volume "net_peer0.org1.example.com" with default driver
Creating volume "net_peer1.org1.example.com" with default driver
Creating volume "net_peer0.org2.example.com" with default driver
Creating volume "net_peer1.org2.example.com" with default driver
Creating peer1.org2.example.com ...
Creating peer0.org1.example.com ...
Creating orderer.example.com ...
Creating peer1.org1.example.com ...
Creating peer0.org2.example.com ...
./byfn.sh: line 151: 9978 Killed IMAGE_TAG=$IMAGETAG docker-compose -f $COMPOSE_FILE up -d 2>&1
ERROR !!!! Unable to start network
I tried to google but to no avail. Any help is appreciated. Thanks!
Following are the versions:
Docker version 18.09.3, build 774a1f4
docker-compose version 1.23.2, build 1110ad01
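For what it's worth, a bare "Killed" on the docker-compose line usually means the kernel's OOM killer terminated the process, which is common on small Lightsail instances; a quick way to check this hypothesis (my own diagnostic suggestion, not from the original post):
dmesg | grep -iE "killed process|out of memory"   # OOM-killer traces around the time of the failure
free -h                                           # how much memory is actually available for the BYFN containers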

Hyperledger Fabric chaincode instantiation error

I'm using the Fabric tools provided for Composer to deploy the Fabric network, as it deploys 1 peer, 1 orderer, 1 CouchDB and 1 fabric-ca. I am able to install the chaincode on the peer, but instantiation fails with the following error. I am running this command on the Fabric peer:
peer chaincode instantiate -o orderer.example.com:7050 -C composerchannel -n test -l node -v 1.0 -c '{"Args":["init","a", "100", "b","200"]}'
Error: could not assemble transaction, err Proposal response was not
successful, error code 500, msg failed to execute transaction
83b806a14ec33d47e11950581357cc0ab05ef51dfb53d35c6b9f00eca7a49051:
timeout expired while starting chaincode test:1.0 for transaction
83b806a14ec33d47e11950581357cc0ab05ef51dfb53d35c6b9f00eca7a49051
And if I check the orderer's logs, I get:
2018-09-01 11:09:16.205 UTC [orderer/common/broadcast] Handle -> WARN
973 Error reading from 172.19.0.14:33674: rpc error: code = Canceled
desc = context canceled
In my case (Windows 10) I stopped the network, removed all containers, then restarted, and it worked fine:
$ docker stop $(docker ps -a -q)
$ docker ps -qa|xargs docker rm
$ ./startFabric.sh
Check the logs on the node (VM) which hosts peer0 with:
docker ps -a
You will find the chaincode container ID with its exit code, e.g.:
CONTAINER ID: 718e367bf1db
IMAGE: dev-peer1-org1-mycc-0.2-9c1906
COMMAND: "/bin/sh -c 'cd /usr…"
where mycc-0.2 is your chaincode name and version. Once you find the container ID, you can check the error log with:
docker logs <container_id>
I assume there is a bug in your chaincode and the application can't start.
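To locate the failing chaincode container without scanning the whole container list, filtering on the dev-peer name prefix also works (a small convenience variant of the steps above; <container_id> is a placeholder):
docker ps -a --filter "name=dev-peer"   # chaincode containers are named dev-<peer>-<chaincode>-<version>-...
docker logs --tail 50 <container_id>    # then inspect the failure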

Docker daemon throwing error while starting in Linux RHEL

I am trying to start the Docker daemon with this command: dockerd &
Then I start getting the error below:
ERRO[0036] libcontainerd: failed to receive event from containerd: rpc error: code = 12 desc = unknown service types.API
This keeps repeating again and again, and I am unable to start any container after that. If I close the session and open a new one, I can see that docker ps is accessible, but I am still unable to start any container. When starting a container, I get this error:
docker run hello-world
docker: Error response from daemon: unknown service types.API. ERRO[0000] error waiting for container: context canceled
Please let me know if any logs are needed.
Why do you start the docker daemon using dockerd & and not systemctl start docker.service? This is probably the cause of your problem.
In order to start the daemon at boot, you need to run systemctl enable docker.service. See Getting Started with Containers.
Note that the kernel for Red Hat Enterprise Linux 6 only supports a limited subset of the functionality needed for container support, and I don't think anyone tests either the daemon or container images on that operating system version.
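Concretely, managing the daemon through systemd instead of a backgrounded dockerd would look like this (assuming RHEL 7 with systemd, as the answer implies):
sudo systemctl enable docker.service   # start the daemon at boot
sudo systemctl start docker.service    # start it now
systemctl status docker.service        # should report active (running)
docker run hello-world                 # re-test the original command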

dockerd: Error running deviceCreate (CreatePool) dm_task_run failed

I'm building some CentOS VMs with VMware, with no access to the internet, so I've downloaded and set up local repositories, including this one.
Then I installed docker-engine.x86_64, and when starting the Docker daemon, I get the following errors:
[root]# dockerd
DEBU[0000] docker group found. gid: 993
...
...
DEBU[0001] Error retrieving the next available loopback: open /dev/loop-control: no such device
ERRO[0001] There are no more loopback devices available.
ERRO[0001] [graphdriver] prior storage driver "devicemapper" failed: loopback attach failed
DEBU[0001] Cleaning up old mountid : start.
FATA[0001] Error starting daemon: error initializing graphdriver: loopback attach failed
After manually adding the loop module, which controls loop devices, with this command:
insmod /lib/modules/3.10.0-327.36.2.el7.x86_64/kernel/drivers/block/loop.ko
The error changes to:
[graphdriver] prior storage driver "devicemapper" failed: devicemapper: Error running deviceCreate (CreatePool) dm_task_run failed
I've read that it could be because I don't have enough disk space, but I don't think that's it. Any ideas?
[root]# df -k .
Filesystem              1K-blocks  Used    Available Use% Mounted on
/dev/mapper/centos-root 51887356 2436256 49451100 5% /
I got the "There are no more loopback devices available" error, which stopped dockerd from running.
I fixed it by ensuring the storage driver was 'overlay':
# /usr/bin/dockerd -D --storage-driver=overlay
This was on Debian Jessie and docker running as a systemd service/unit.
To make it permanent, I created a systemd drop-in:
$ cat /etc/systemd/system/docker.service.d/docker.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --storage-driver=overlay
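Applying that drop-in is just a matter of creating the file, reloading systemd and restarting Docker, after which docker info should report the overlay driver (a short recap of the steps implied above):
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo vi /etc/systemd/system/docker.service.d/docker.conf   # paste the [Service] snippet above
sudo systemctl daemon-reload
sudo systemctl restart docker
docker info | grep -i "storage driver"                     # should now show: overlay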
