Hyperledger Fabric 2.x and Explorer connection - hyperledger-fabric

I am running the Hyperledger Fabric test-network from the documentation, but how can I integrate Hyperledger Explorer with it? The Fabric documentation says nothing about the UI part. The test-network Docker containers are listed below. I pulled the Explorer container, but I couldn't start it either.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b96963643368 hyperledger/fabric-tools:latest "/bin/bash" About an hour ago Up About an hour cli
836a3be4c897 hyperledger/fabric-peer:latest "peer node start" About an hour ago Up About an hour 0.0.0.0:9051->9051/tcp, 7051/tcp, 0.0.0.0:9445->9445/tcp peer0.org2.example.com
49564f47ce2b hyperledger/fabric-peer:latest "peer node start" About an hour ago Up About an hour 0.0.0.0:7051->7051/tcp, 0.0.0.0:9444->9444/tcp peer0.org1.example.com
8957033f991b hyperledger/fabric-orderer:latest "orderer" About an hour ago Up About an hour 0.0.0.0:7050->7050/tcp, 0.0.0.0:7053->7053/tcp, 0.0.0.0:9443->9443/tcp orderer.example.com

There is documentation on how to use Hyperledger Explorer with the test-network from the Fabric sample here (although, at the time of writing, I notice that the project has recently been marked "end of life"):
https://github.com/hyperledger/blockchain-explorer#quick-start-using-docker
If you are looking for a more complete UI-based management experience, it might be worth looking at this Hyperledger Lab project:
https://github.com/hyperledger-labs/fabric-operations-console#readme
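If you go the Explorer route against the test-network, the quick-start flow is roughly the following. This is only a sketch: the repository layout, the connection profile you have to edit, and the exact paths are defined in the Explorer README and may have changed, so verify them there.
# bring up the Fabric test network first (from fabric-samples/test-network)
./network.sh up createChannel
# get the Explorer quick-start artefacts (compose file, config, connection profile)
git clone https://github.com/hyperledger/blockchain-explorer.git
cd blockchain-explorer
# copy the test network's crypto material to where the Explorer compose file expects it
# (check the README for the exact target path), then edit the connection profile so the
# peer address and admin key/cert paths match your network
cp -r ../fabric-samples/test-network/organizations ./organizations
# start the Explorer UI (hyperledger/explorer) and its PostgreSQL database (hyperledger/explorer-db)
docker-compose up -d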

Related

Why do my Docker processes keep restarting on my Raspberry Pi?

I'm attempting to use deluge on my Raspberry Pi.
I've followed the guide as per: https://hub.docker.com/r/linuxserver/deluge
I've created a docker-compose.yml file which consists of the following:
version: "2.1"
services:
deluge:
image: lscr.io/linuxserver/deluge:latest
container_name: deluge
environment:
- PUID=1000
- PGID=1000
- TZ=Europe/London
- DELUGE_LOGLEVEL=error #optional
volumes:
- /path/to/deluge/config:/config
- /path/to/your/downloads:/downloads
ports:
- 8112:8112
- 6881:6881
- 6881:6881/udp
restart: unless-stopped
I can run the above using the command docker compose up -d
Once the service is running I check it using docker ps which shows the following:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2524b4bb191b lscr.io/linuxserver/deluge:latest "/init" 5 minutes ago Restarting (111) 2 seconds ago deluge
When running docker ps sometimes it shows the following:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2524b4bb191b lscr.io/linuxserver/deluge:latest "/init" 5 minutes ago Up Less than a second 0.0.0.0:6881->6881/tcp, :::6881->6881/tcp, 58846/tcp, 0.0.0.0:8112->8112/tcp, 0.0.0.0:6881->6881/udp, :::8112->8112/tcp, :::6881->6881/udp, 58946/tcp, 58946/udp deluge
But, soon after it shows the following again:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2524b4bb191b lscr.io/linuxserver/deluge:latest "/init" 7 minutes ago Restarting (111) 55 seconds ago deluge
Hence I cannot reach it via a browser.
Any ideas, anybody? I'm pulling my hair out!
There was clearly some corruption in my Docker installation.
I was about to "nuke" my Pi and re-image it, but decided to follow this YouTube video verbatim before doing so: https://www.youtube.com/watch?v=3ahV7DD_Oxk&t=310s
This fixed the issue and now Deluge stays up in Docker.
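As a general aside (separate from the fix above), when a container sits in a restart loop like the Restarting (111) state shown in the question, the container's own output usually tells you why it keeps exiting. These are standard Docker CLI commands:
# last log lines before the container exited
docker logs --tail 50 deluge
# exit code and error message recorded by Docker
docker inspect --format '{{.State.ExitCode}} {{.State.Error}}' deluge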

Remove Gitlab docker containers

Recently I tried to install GitLab on an Ubuntu machine using Docker and docker-compose. This was only done for testing, so I can later install it on another machine.
However, I have a problem with removing/deleting the GitLab containers.
I tried docker-compose down and killing all processes related to the GitLab containers, but they keep restarting even if I somehow manage to delete the images.
This is my docker-compose.yml file
version: "3.6"
services:
gitlab:
image: gitlab/gitlab-ee:latest
ports:
- "2222:22"
- "8080:80"
- "8081:443"
volumes:
- $GITLAB_HOME/data:/var/opt/gitlab
- $GITLAB_HOME/logs:/var/log/gitlab
- $GITLAB_HOME/config:/etc/gitlab
shm_size: '256m'
environment:
GITLAB_OMNIBUS_CONFIG: "from_file('/omnibus_config.rb')"
configs:
- source: gitlab
target: /omnibus_config.rb
secrets:
- gitlab_root_password
gitlab-runner:
image: gitlab/gitlab-runner:alpine
deploy:
mode: replicated
replicas: 4
configs:
gitlab:
file: ./gitlab.rb
secrets:
gitlab_root_password:
file: ./root_password.txt
Some of the commands I tried to kill the processes:
kill -9 $(ps aux | grep gitlab | awk '{print $2}')
docker rm -f $(docker ps -aqf name="gitlab") && docker rmi --force $(docker images | grep gitlab | awk '{print $3}')
I also tried to update containers with no restart policy:
docker update --restart=no container-id
But none of this seems to work.
This is the docker ps output:
591e43a3a8f8 gitlab/gitlab-ee:latest "/assets/wrapper" 4 minutes ago Up 4 minutes (health: starting) 22/tcp, 80/tcp, 443/tcp mystack_gitlab.1.0r77ff84c9iksmdg6apakq9yr
6f0887a8c4b1 gitlab/gitlab-runner:alpine "/usr/bin/dumb-init …" 16 minutes ago Up 16 minutes mystack_gitlab-runner.3.639u8ht9vt01r08fegclfyrr8
73febb9bb8ce gitlab/gitlab-runner:alpine "/usr/bin/dumb-init …" 16 minutes ago Up 16 minutes mystack_gitlab-runner.4.m1z1ntoewtf3ipa6hap01mn0n
53f63187dae4 gitlab/gitlab-runner:alpine "/usr/bin/dumb-init …" 16 minutes ago Up 16 minutes mystack_gitlab-runner.2.9vo9pojtwveyaqo166ndp1wja
0bc954c9b761 gitlab/gitlab-runner:alpine "/usr/bin/dumb-init …" 16 minutes ago Up 16 minutes mystack_gitlab-runner.1.pq0njz94v272s8if3iypvtdqo
Any ideas what I should be looking for?
I found the solution. The problem was that I didn't use
docker-compose up -d
to start my containers. Instead I used
docker stack deploy --compose-file docker-compose.yml mystack
as written in the documentation.
Since I didn't know much about docker stack, I did a quick internet search.
This is the article that I found:
https://vsupalov.com/difference-docker-compose-and-docker-stack/
The difference: docker stack ignores "build" instructions. You can't build new images using the stack commands; it needs pre-built images to exist. So docker-compose is better suited for development scenarios. There are also parts of the compose-file specification which are ignored by docker-compose or the stack commands.
As I understand it, the problem is that stack only uses pre-built images and ignores some of the docker-compose settings, such as the restart policy. That's why
docker update --restart=no container-id
didn't work.
I still don't understand why killing all the processes and removing the containers/images didn't work. I guess there must be some parent process that I didn't find.
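For reference: services created with docker stack deploy are swarm services, and the swarm orchestrator recreates any task (container) you kill or remove, which is why killing processes and deleting containers did not help. The clean way to get rid of them is to remove the stack (or take the node out of swarm mode):
# remove the whole stack and the services/tasks it created
docker stack rm mystack
# confirm nothing is left
docker service ls
# optional: leave swarm mode entirely on this node
docker swarm leave --force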

Hyperledger Fabric Multi-Org

I am following the official tutorial on deploying a Hyperledger Composer blockchain business network to Hyperledger Fabric (multiple organizations). I was able to bring up the network using the provided Org1 and Org2 example. Now I want to customize the organizations as my own, but upon executing the ./byfn.sh -m up -s couchdb -a command I get the error below. I inspected all the YAML files but was not able to find the root cause of the error. I really need help with this. Thank you.
Starting for channel 'mychannel' with CLI timeout of '10' seconds and CLI delay of '3' seconds and using database 'couchdb', and using Fabric CAs
Continue? [Y/n] Y
proceeding ...
LOCAL_VERSION=1.2.0
DOCKER_IMAGE_VERSION=1.2.0
WARNING: The COMPOSE_PROJECT_NAME variable is not set. Defaulting to a blank string.
ERROR: The Compose file is invalid because:
Service peer0.org2.example.com has neither an image nor a build context specified. At least one must be provided.
ERROR !!!! Unable to start network
It looks like your peer-base.yaml file is not correct. One problem is the COMPOSE_PROJECT_NAME variable: if it is not set, Fabric uses the folder name as the network name, and if that is not right there will be errors while bootstrapping the network. We are building a bidding network called trade-network, so the corresponding entry in our peer-base.yaml file is:
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_basic
Before bootstrapping we set COMPOSE_PROJECT_NAME to trade-network, so the network is called trade-network_basic. I'm not 100% sure, but I think after (or while) bootstrapping there is a point where Fabric uses the folder name anyway, so we decided to use the folder name by default and nothing went wrong.
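For illustration, one way to set the variable before bootstrapping (trade-network is just our example project name; substitute your own):
export COMPOSE_PROJECT_NAME=trade-network
./byfn.sh -m up -s couchdb -a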
The other problem could be the image entry for the peer. In our file it is:
image: hyperledger/fabric-peer:x86_64-1.1.0
You can run docker images to list the images you have; you have to use one of them for the peers. After the colon you can pin a specific tag, and I would suggest doing so.
Here is an example of our full peer-base.yaml file:
version: '2'
services:
  peer-base:
    image: hyperledger/fabric-peer:x86_64-1.1.0
    environment:
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # the following setting starts chaincode containers on the same
      # bridge network as the peers
      # https://docs.docker.com/compose/networking/
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_basic
      #- CORE_LOGGING_LEVEL=INFO
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_GOSSIP_USELEADERELECTION=true
      - CORE_PEER_GOSSIP_ORGLEADER=false
      - CORE_PEER_PROFILE_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: peer node start
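As context for the original error ("has neither an image nor a build context specified"): in the byfn sample each peer service normally inherits its image from this peer-base service via extends. The snippet below is a sketch of the usual base/docker-compose-base.yaml layout, not your exact file:
  peer0.org2.example.com:
    container_name: peer0.org2.example.com
    extends:
      file: peer-base.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer0.org2.example.com
If that extends reference is broken (wrong file path or service name), or the image line in peer-base.yaml is missing, Compose ends up with a service that has no image at all and reports exactly the error you are seeing.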

How can I deploy a hyperledger business network using docker-compose?

I am trying to deploy a Hyperledger BNA using docker-compose. How can I do it in the cleanest way?
version: '2'
networks:
  basic:
services:
  playground:
    container_name: composer-playground
    image: hyperledger/composer-playground
    ports:
      - 8080:8080
    networks:
      - basic
    command: composer-playground
  ca.example.com:
    image: hyperledger/fabric-ca
    ...
  orderer.example.com:
    container_name: orderer.example.com
    image: hyperledger/fabric-orderer
    ...
  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    image: hyperledger/fabric-peer
    ...
  cli:
    container_name: cli
    image: hyperledger/fabric-tools
    ...
This is a docker-compose.yml from this GitHub repository.
I know that I can deploy a network using this tutorial, but as I said, I want to deploy it using docker-compose. Thanks in advance!
If you want to use Composer, be aware that 'today' Composer supports Fabric 1.1, whereas you are looking at the Fabric 1.2 docs and will likely download the 'latest' Fabric 1.2 unless you specifically do something different. (Composer support for Fabric 1.2 is coming shortly; keep an eye out for the release notes for the latest information.)
There is a Docker container for the Composer CLI which will enable you to deploy your BNA.
There are Composer tutorials on creating and deploying BNAs, but they start from the assumption that the Composer CLI is installed locally rather than in a container, so you will need to adapt the approach. The key to connecting Composer to Fabric is the connection profile, so you will need to understand the addressing and networking aspects of your Docker environment to build the right connection profile.
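As a rough sketch of that approach: run the Composer CLI image on the same Docker network as the Fabric containers (basic in your compose file) and point it at your BNA. The card name, BNA file name, and mount paths below are placeholders to adapt, not values from your setup:
# install the BNA onto the peers using an admin business network card
docker run --rm --network basic \
  -v "$(pwd)":/home/composer -v "$HOME/.composer":/home/composer/.composer \
  hyperledger/composer-cli \
  composer network install --card PeerAdmin@fabric-network --archiveFile my-network.bna
# start the business network and create the network admin card
docker run --rm --network basic \
  -v "$(pwd)":/home/composer -v "$HOME/.composer":/home/composer/.composer \
  hyperledger/composer-cli \
  composer network start --networkName my-network --networkVersion 0.0.1 \
  --card PeerAdmin@fabric-network --networkAdmin admin --networkAdminEnrollSecret adminpw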

Docker service not running container

I need help understanding why my Dockerfile does not work properly.
I created the image, which I called hello-nodemon:
FROM node:latest
ENV HOME=/src/jv-agricultor
RUN mkdir -p $HOME/
WORKDIR $HOME/
ADD package* $HOME/
RUN npm install
EXPOSE 3000
ADD . $HOME/
CMD ["npm", "start"]
It works: when I run it with docker run -p 3000:3000 it works perfectly. But I want to use a docker-compose.yml:
version: "3"
services:
web:
image: hello-nodemon
deploy:
replicas: 5
resources:
limits:
cpus: "0.1"
memory: 50M
restart_policy:
condition: on-failure
ports:
- "3000:3000"
networks:
- webnet
networks:
webnet:
So I used the command docker stack deploy -c docker-compose.yml webservice, which returns:
ID NAME MODE REPLICAS IMAGE PORTS
y0furo1g22zs webservice_web replicated 5/5 hello-nodemon:latest *:3000->3000/tcp
Then docker service ps y0furo1g22zs returns:
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
nbgq8ln188dm webservice_web.1 hello-nodemon:latest abner Running Running 4 minutes ago
rrxjwudtorsm webservice_web.2 hello-nodemon:latest abner Running Running 4 minutes ago
7qrz9gtd4fan webservice_web.3 hello-nodemon:latest abner Running Running 4 minutes ago
lljmj01zlya8 webservice_web.4 hello-nodemon:latest abner Running Running 4 minutes ago
raqw3z0pdxqt webservice_web.5 hello-nodemon:latest abner Running Running 4 minutes ago
My containers
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6daf6afadfdc hello-nodemon:latest "npm start" 6 minutes ago Up 6 minutes 3000/tcp webservice_web.1.nbgq8ln188dmz8q8qeb60scbz
2d74f8e9a728 hello-nodemon:latest "npm start" 6 minutes ago Up 6 minutes 3000/tcp webservice_web.2.rrxjwudtorsm6to56t0srkzda
e3a3a039fdf9 hello-nodemon:latest "npm start" 6 minutes ago Up 6 minutes 3000/tcp webservice_web.3.7qrz9gtd4fanju4zt6zx3afsf
7f08dbdf0c8d hello-nodemon:latest "npm start" 6 minutes ago Up 6 minutes 3000/tcp webservice_web.5.raqw3z0pdxqtvkmkp00bp6tve
c6ce3762d6ae hello-nodemon:latest "npm start" 6 minutes ago Up 6 minutes 3000/tcp webservice_web.4.lljmj01zlya89gvmip5z0cf6f
But it does not work: the browser does not refuse the connection, yet the page never loads; it just keeps searching indefinitely.
I do not know what is happening; if someone can help me I will be very grateful.
This seems to be an issue with Chrome only. I have the same issue in Chrome; however, when I open it in Firefox it works fine.
Here's how I fixed Chrome. Looking into this more, I think it was either a Chrome issue or a network issue, as I was having the same problem. Here is how I resolved it:
Make sure your /etc/hosts file has 127.0.0.1 localhost (more than likely it's already there)
Clear cookies and cached files
Clear the host cache: go to chrome://net-internals/#dns and click Clear Host Cache
Restart Chrome
Reset the network adapter (note: this was unintentional, so I'm not sure whether it was part of the fix, but I wanted to include it just in case)
Unfortunately I'm not sure which step fixed the problem.
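Independently of the browser, a quick way to check whether the swarm service is actually reachable on the published port is to test it from the host's command line and look at what the replicas are logging:
# does anything answer on the published port?
curl -v http://localhost:3000/
# aggregated logs from all replicas of the service
docker service logs webservice_web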
