./byfn.sh bring down shows warning messages - hyperledger-fabric

Following the Hyperledger Fabric tutorial, I used the byfn.sh tool to bring the network down:
$ ./byfn.sh -m down
It shows the following warning messages:
WARNING: Network net_byfn not found.
Removing volume net_peer0.org2.example.com
WARNING: Volume net_peer0.org2.example.com not found.
Removing volume net_peer1.org2.example.com
WARNING: Volume net_peer1.org2.example.com not found.
Removing volume net_peer1.org1.example.com
WARNING: Volume net_peer1.org1.example.com not found.
Removing volume net_peer0.org1.example.com
WARNING: Volume net_peer0.org1.example.com not found.
Removing volume net_orderer.example.com
WARNING: Volume net_orderer.example.com not found.
I wonder whether these warning messages could lead to an error.

It's not a problem: there was actually an extra line in byfn.sh that resulted in docker-compose down being called twice. The first call removes the network and volumes, so the second call only prints warnings that they are not found. The issue has been resolved in the master branch but was never backported to the release-1.1 branch.
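For illustration, the same harmless warnings can be reproduced by running docker-compose down twice in a row. The compose file name below is an assumption (byfn.sh in that release used a CLI compose file), so substitute whichever file your script references:
# First call removes the network and named volumes.
docker-compose -f docker-compose-cli.yaml down --volumes
# Second call finds nothing left to remove and only prints "not found" warnings.
docker-compose -f docker-compose-cli.yaml down --volumes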

Related

error [connectors/v2/FabricGateway] Failed to perform query transaction [ReadAsset] using arguments

Can someone please help me with this problem?
error [connectors/v2/FabricGateway] Failed to perform query transaction [ReadAsset] using arguments [2_4], with error: Error: error in simulation: failed to execute transaction 9ca49b08603ab086104fec8777546bbbc24d826a3900136b4a0e66aadf4bb6e4: could not launch chaincode basic_1:9820659c595e662a849033ca23b4424e87a126e8f40b5f81ace59820b81fe8e7: chaincode registration failed: error starting container: error starting container: API error (404): network _test not found
The report has been generated but all the transactions have failed.
It looks like the chaincode's Docker container failed to start for some reason. You will need to use the docker logs command to inspect the logs for the failure reason. Use the docker ps -a command to see what containers are available, including stopped / failed containers. Both the chaincode container (if it exists) and peer container logs may hold useful information.
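For example, something along these lines (the container names are placeholders; use whatever docker ps -a actually lists):
# List all containers, including exited ones; chaincode containers are usually named dev-<peer>-<chaincode>-<hash>
docker ps -a
# Logs of the chaincode container, if it was ever created
docker logs <chaincode-container-name>
# Logs of the peer that tried to launch it
docker logs peer0.org1.example.com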

0-glusterfs: failed to set volfile server: File exists

My Kafka deployment uses GlusterFS as storage. When I apply the Kafka YAML, the pod stays in the ContainerCreating status, so I checked the pod's describe output and got the following error:
Warning FailedMount 24m kubelet, 10.0.0.156 MountVolume.SetUp failed for volume "pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b" : mount failed: mount failed: exit status 1
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/a32117ca-3ce6-4fc4-b75a-15b63b859b71/volumes/kubernetes.io~glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b --scope -- mount -t glusterfs -o auto_unmount,backup-volfile-servers=10.0.0.154:10.0.0.155:10.0.0.156,log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b/kafka-0-glusterfs.log,log-level=ERROR 10.0.0.155:vol_5fcfa0f585ce3677e573cf97f40191d3 /var/lib/kubelet/pods/a32117ca-3ce6-4fc4-b75a-15b63b859b71/volumes/kubernetes.io~glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b
Output: Running scope as unit run-10840.scope.
[2020-03-14 13:56:14.771098] E [glusterfsd.c:825:gf_remember_backup_volfile_server] 0-glusterfs: failed to set volfile server: File exists
Mount failed. Please check the log file for more details.
, the following error information was pulled from the glusterfs log to help diagnose this issue:
[2020-03-14 13:56:14.782472] E [glusterfsd-mgmt.c:1958:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
[2020-03-14 13:56:14.782519] E [glusterfsd-mgmt.c:2151:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:vol_5fcfa0f585ce3677e573cf97f40191d3)
(The same FailedMount warning repeats for the subsequent mount attempts, this time using 10.0.0.154 as the volfile server, each with the same "failed to set volfile server: File exists" output and the same "failed to get the 'volume file' from server" / "failed to fetch volume file (key:vol_5fcfa0f585ce3677e573cf97f40191d3)" errors in the glusterfs log.)
How can I solve the problem?
Ensure you have the right name of your volume in the YAML file under path: <the_volume_name> (see the sketch after these steps).
To show all gluster volumes use:
sudo gluster volume status all
Restart the volume (in this case my volume is just called gfs):
gluster volume stop gfs
gluster volume start gfs
Now delete your pod and create it again.
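As a minimal sketch of where path: lives, assuming a statically provisioned GlusterFS PersistentVolume (all names below are placeholders; the path value must match a volume name reported by gluster volume status all):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-pv                  # placeholder PV name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster  # Endpoints object listing the gluster servers
    path: gfs                     # must match an existing gluster volume name
    readOnly: false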
Alternatively try Kadlu.io or Ceph Storage.

How to solve this Blockchain chaincode error

Error: endorsement failure during invoke. response: status:500 message:"make sure the chaincode irscc has been successfully instantiated and try again: chaincode irscc not found"
This can occur for various reasons. One reason could be that Docker fails to update the volumes; in that case you can try docker volume prune.
If you could provide the orderer logs, it would be easier to debug.
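A minimal sketch of the cleanup and log checks suggested above (the container names are assumptions based on a typical Fabric setup):
# Remove unused Docker volumes that may be stale
docker volume prune -f
# Check whether the irscc chaincode container exists, including exited ones
docker ps -a | grep irscc
# Inspect the orderer and peer logs for the endorsement failure
docker logs orderer.example.com
docker logs peer0.org1.example.com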

Monitoring JHipster error starting jhipster-alerter

I have installed monitoring out of the box according to this link:
http://www.jhipster.tech/monitoring/
When I start with:
docker-compose up -d
Everything starts but not Elastalert:
First log:
ERROR: for monitoring_jhipster-alerter_1 Cannot start service jhipster-alerter: OCI runtime create failed: container_linux.go:296: starting container process caused "process_linux.go:398: container init caused \"rootfs_linux.go:58: mounting \\"/Users/john/source/intellij/company/app/myservice/alerts/config.yaml\\" to rootfs \\"/var/lib/docker/overlay2/5657c6e9e7bb2be5cf4fa9860c04269e34be15641f4e3f0c1449af7cbf82ced5/merged\\" at \\"/var/lib/docker/overlay2/5657c6e9e7bb2be5cf4fa9860c04269e34be15641f4e3f0c1449af7cbf82ced5/merged/opt/elastalert/config.yaml\\" caused \\"not a directory\\"\"": unknown: Are you trying to mount a dire
Starting monitoring_jhipster-import-dashboards_1
Second log:
ERROR: for jhipster-alerter Cannot start service jhipster-alerter: OCI runtime create failed: container_linux.go:296: starting container process caused "process_linux.go:398: container init caused \"rootfs_linux.go:58: mounting \\"/Users/john/source/intellij/company/app/myservice/alerts/config.yaml\\" to rootfs \\"/var/lib/docker/overlay2/5657c6e9e7bb2be5cf4fa9860c04269e34be15641f4e3f0c1449af7cbf82ced5/merged\\" at \\"/var/lib/docker/overlay2/5657c6e9e7bb2be5cf4fa9860c04269e34be15641f4e3f0c1449af7cbf82ced5/merged/opt/elastalert/config.yaml\\" caused \\"not a directory\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
ERROR: Encountered errors while bringing up the project.
Using the default docker-compose.yml file that I got with:
curl -O https://raw.githubusercontent.com/jhipster/jhipster-console/master/bootstrap/docker-compose.yml
I'm not sure what this message means.
This is because the volumes path for JHipster Alerter is incorrect. Change:
jhipster-alerter:
  image: jhipster/jhipster-alerter:latest
  environment:
    - ES_HOST=jhipster-elasticsearch
    - ES_PORT=9200
  volumes:
    - ../jhipster-alerter/rules/:/opt/elastalert/rules/
    - ../alerts/config.yaml:/opt/elastalert/config.yaml
to:
    - ../alerts/rules/:/opt/elastalert/rules/
    - ../jhipster-alerter/config.yaml:/opt/elastalert/config.yaml
As shown in https://github.com/jhipster/jhipster-console/pull/102/commits/fa5bc75ec29ca357477ac1a22203ae6cbe2af2f7.
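After correcting the volume paths in docker-compose.yml, the alerter container can be recreated, for example:
# Recreate only the alerter service with the fixed bind mounts
docker-compose up -d --force-recreate jhipster-alerter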

Error mounting azure vhd to kubernetes pod

On Kubernetes v1.4.3 I'm trying to mount an Azure disk (VHD) to a pod using the following configuration:
volumes:
  - name: "data"
    azureDisk:
      diskURI: "https://testdevk8disks685.blob.core.windows.net/vhds/test-disk-01.vhd"
      diskName: "test-disk-01"
But it returns the following error while creating the pod:
MountVolume.SetUp failed for volume "kubernetes.io/azure-disk/0a0e1c0f-9b7a-11e6-8cc5-000d3a32f480-data" (spec.Name: "data") pod "0a0e1c0f-9b7a-11e6-8cc5-000d3a32f480" (UID: "0a0e1c0f-9b7a-11e6-8cc5-000d3a32f480") with: mount failed: exit status 32
Mounting arguments: /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/falkonry-dev-k8-ampool-locator-01 /var/lib/kubelet/pods/0a0e1c0f-9b7a-11e6-8cc5-000d3a32f480/volumes/kubernetes.io~azure-disk/data [bind]
Output: mount: special device /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/test-disk-01 does not exist
There was a bug in v1.4.3 that caused this problem. It was fixed in v1.4.7+; upgrading the Kubernetes cluster to an appropriate version solved the problem.
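To check which version each node is running before and after the upgrade, for example:
# The VERSION column shows the kubelet version on every node
kubectl get nodes
# Client and server (API server) versions
kubectl version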
