Hyperledger Composer got stuck when upgrading the network - hyperledger-fabric

Here are the steps that I went through:
Stop and tear down fabric
Start fabric
Create a Business network using yo hyperledger-composer
Create .bna archive and install it
Start network with version 0.0.1
Import card to the playground
All these steps work fine, but when I start the Playground and try to upgrade the business network with my changes, the browser gets stuck on
Please Wait: Your new business network is being upgraded
Upgrading business network using PeerAdmin#hlfv1 (2/2)
and never responds
Here is what I see in the logs of composer-playground:
info: [Hyperledger-Composer] :ConnectionProfileManager :getConnectionManagerByTyp Looking up a connection manager for type 0=hlfv1
Has someone already faced this kind of issue and knows how to solve it? Or should I upgrade it manually in the local environment?
P.S. I am new to Composer, so I took all these steps from the
Developer tutorial

The composer network upgrade command and its equivalent action in the Composer Playground generate a new docker "chaincode image" and "chaincode container". Creating the image and starting the container is what takes the time. You will see that you now have redundant docker containers and images of previous versions of the Business Network. This is intended behaviour of Hyperledger Fabric (and Composer) but you may want to do some housekeeping to remove the old versions.
If you are in the early stages of development and experimentation, generating lots of versions of networks, you can use the 'Web Profile' in the Playground, which simulates a Fabric in the browser's LocalStorage. It is much faster, but if you use it, be sure to export to a BNA periodically; otherwise you might lose work if there is a browser issue or upgrade.
Updated following Comment
The command docker ps can be used to see all running containers (docker ps -a will also show stopped containers). docker stop is used to stop a container and docker rm to remove the container.
Docker containers are running (or stopped) instances of docker images so you will also want to remove the redundant images. You list the images with docker images and remove them with docker rmi.
The docker web site has a full list of commands.
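As an illustrative sketch of that housekeeping (the container and image IDs below are placeholders for the ones docker ps and docker images report on your machine):
docker ps -a                 # list running and stopped containers
docker stop <container-id>   # stop a redundant chaincode container
docker rm <container-id>     # remove the stopped container
docker images                # list images, including old chaincode images
docker rmi <image-id>        # remove a redundant image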

Interestingly, the process of upgrading the network took more time than I expected, so the solution is simple:
Wait 3-4 minutes until the process finishes and do not click anywhere
in the browser (by mistake I tried to reconnect to the card, and in
that case, the process of upgrading fails).
Additionally, it is important to mention that the manual process of upgrading via the card (using the CLI) takes the same amount of time.
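For reference, a minimal sketch of that manual upgrade with the CLI, assuming a network named my-network, a new version 0.0.2, and the PeerAdmin@hlfv1 card from the Developer tutorial (adjust the names and version to your setup):
composer network install --card PeerAdmin@hlfv1 --archiveFile my-network@0.0.2.bna             # install the new version's chaincode
composer network upgrade --card PeerAdmin@hlfv1 --networkName my-network --networkVersion 0.0.2 # switch the channel to it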

Related

Restoring Hyperledger Fabric Network After It Is Off

I am working on a project using Hyperledger Fabric.
Currently, I use the test-network from the Hyperledger Fabric documentation to develop it.
But what I'm worried about is: if my computer turns off and the Hyperledger Fabric network goes down, will I be able to restore the previous network state again?
What should I do to make that possible?
In the current practice example, if you run the test-network's ./network.sh down command, you can see that the generated MSP and other authentication-related files are removed. Should I modify this network.sh file so that the generated authentication-related files are not deleted when the server goes down, and so that, if those files still exist when the server is turned on again, the network can be configured from them?
If you want to retain your current network, even in a stopped state, run the following command in the docker folder.
docker-compose -f docker-compose-couch.yaml -f docker-compose-ca.yaml -f docker-compose-test-net.yaml stop
This command stops the current containers from running. When you want to start the same network again, use the same command and use start instead of stop, as shown next.
docker-compose -f docker-compose-couch.yaml -f docker-compose-ca.yaml -f docker-compose-test-net.yaml start
Using the ./network.sh down command removes all traces of the network, other than the files that were there from the beginning.
But what I'm worried about is if my computer turns off and the hyperledger fabric network turns off, will I be able to restore the previous network values again?
If the containers are in the Exited state (not due to errors), you can bring those containers up again.
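For example (the container ID is a placeholder):
docker ps -a --filter "status=exited"   # list the stopped containers
docker start <container-id>             # bring a stopped container up again with its data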
Normally we maintain a persistent DB to store the data.
In the test network we are not bothered about restarting the network and generating new certs repeatedly. In a prod environment we execute the script (network.sh) only once.
You can also make changes (micro-scripts) to the existing scripts based on your use case/requirements.

Blockchain data will be lost in Hyperledger Composer network

I just created a Hyperledger Composer network at production level. There is a lot of data (participants and assets) in my Composer blockchain (which is on CouchDB). My main problem is that I need to set up Hyperledger Explorer for my existing network. I already use https://github.com/hyperledger/blockchain-explorer, but the issue is that my network's orderer port is not synced with Explorer (I already posted a question regarding this issue: Hyperledger explorer starting problem - orderer port communication issue. Unfortunately, no reply).
At this moment I have decided to stop the running Hyperledger Composer network and start it again without any data (participant and asset data) loss. Is restarting the network without data loss actually possible?
Are there any other suggestions available to resolve my issue?
Any suggestion is much appreciated.
Thank you.
OS: Ubuntu 16.04
Composer: 0.19.16
Fabric: 1.1.0
When you stop your business network using stopFabric.sh under fabric-dev-servers (or fabric-tools), it stops the Fabric containers, and when you afterwards run startFabric.sh, it recreates new containers from the Docker images. The impact of this is that you lose all the data (assets, participants, transactions, etc.) of your business network.
So if you want to stop and start your Fabric without losing existing data, follow the commands below:
Change to the directory where the docker-compose.yml file is (/home/<user>/fabric-dev-servers/fabric-scripts/hlfv11/composer), and
Run docker-compose stop to stop Fabric, then
Run docker-compose start to restart Fabric; it will start your network with the existing data. Make sure you are in the correct folder.
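Put together, a minimal sketch of those steps (assuming the default fabric-dev-servers install path mentioned above):
cd /home/<user>/fabric-dev-servers/fabric-scripts/hlfv11/composer
docker-compose stop    # stop the Fabric containers but keep them (and their data)
docker-compose start   # later: start the same containers again with the existing data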
Hope it will help you :)

How can I update my network in playground other than Deploy changes?

I use Composer 0.19.0.
When I make some changes in the model file, I click the Deploy changes button.
This action doesn't just update the network; it also creates a new image and starts a new Docker container. Eventually, my Docker host has many containers running.
How can I just update the current network in the existing container?
Since Native Fabric Deployment was introduced in Composer version 0.19.0, business networks are deployed, and therefore updated, as their own chaincode. This brings Composer business networks in line with how Fabric Go or nodejs chaincode is deployed and means that members of a business network have the same control over what gets deployed across all the programming models.
There is no way to update the code in an existing container; the runtime is deployed every time you update your business network. Unfortunately, as you've noticed, that does mean you'll end up with potentially large numbers of docker containers running.
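If you want to clean up old versions, one hedged approach: the chaincode containers and images that Fabric generates are conventionally named with a dev-<peer-name> prefix, so something like the following could remove them (check what the filters actually match on your machine before running it):
docker rm -f $(docker ps -aq --filter "name=dev-")   # remove old chaincode containers
docker rmi $(docker images -q "dev-*")               # remove the corresponding chaincode images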

Hyperledger Composer on a local network of Ubuntu servers

I was able to set up Hyperledger Composer on Docker containers on a Mac by following the instructions here: https://hyperledger.github.io/composer/installing/development-tools.html. I was also able to develop a proof-of-concept project with a mobile web app that connected to the blockchain via a REST API.
Now, I am trying to run the nodes on actual Ubuntu servers in a local network, but I can't seem to find any tutorial that explains how to do that.
I know I might have some gaps in my knowledge of computer architecture or networking in general; that's why I am struggling with this.
I was looking at the downloadFabric.sh script in fabric-tools and I see how the Docker images are pulled. I was thinking maybe I should just pull the Docker images onto the individual Linux servers.
### Pull and tag the latest Hyperledger Fabric base image.
docker pull hyperledger/fabric-peer:$ARCH-1.0.1      # on server 1
docker pull hyperledger/fabric-ca:$ARCH-1.0.1        # on server 2
docker pull hyperledger/fabric-ccenv:$ARCH-1.0.1     # on server 3
docker pull hyperledger/fabric-orderer:$ARCH-1.0.1
docker pull hyperledger/fabric-couchdb:$ARCH-1.0.1
and so on.
Please, how would you do this? Are there any resources I missed while researching this? Can you point me to some resources that I can read to help me understand how to do this?
Hyperledger Composer will connect to the Fabric that you configure. So your problem is to configure a Fabric environment and set of nodes, using the host name resolution etc. that you want for your network.
I would advise checking out the Fabric docs http://hyperledger-fabric.readthedocs.io/en/latest/build_network.html to build your network. They have some sample networks (see the GitHub repository here -> https://github.com/hyperledger/fabric/tree/release/examples ).
The 'Fabric' that is set up by Hyperledger Composer's dev environment is just a Dev environment with one peer configured (via docker containers) to get you going.
You need to understand from the Fabric Docs how to set up your network, then come back to Composer (once all that is set up) and use connection profiles to connect to the runtime Fabric.
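Once that Fabric is running, a rough sketch of pointing Composer at it: create a business network card from your own connection profile and identity, then import it (the file names below are placeholders for whatever your Fabric setup produces):
composer card create -p connection.json -u PeerAdmin -c admin-cert.pem -k admin-key.pem -r PeerAdmin -r ChannelAdmin -f PeerAdmin@my-fabric.card   # build a card from your connection profile and admin identity
composer card import -f PeerAdmin@my-fabric.card                                                                                                   # make the card available to Composer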
You could consider using docker-compose, which builds on the Docker client and makes it easier to run multiple Docker containers. Please consult the manual to install docker-compose.
After installing docker-compose, you can use the startFabric.sh script provided here to start the Fabric containers you downloaded.
You may want to consider deploying Fabric with its sister project Hyperledger Cello. Specifically, the Ansible driver for Kubernetes.
The samples provided in both Composer and Fabric are mostly for single-host deployment. You could adapt the Docker Compose files to use Swarm, but even Docker is moving away from Swarm now.
Of course, if you just want to try to run locally, then the Build Your First Network tutorial in Hyperledger Fabric will get you rolling with the docker images in your question.
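For reference, that tutorial boils down to roughly the following (the exact flags vary a little between Fabric releases, so check the docs for your version):
cd fabric-samples/first-network
./byfn.sh generate   # generate crypto material and channel artifacts
./byfn.sh up         # bring the sample network up
./byfn.sh down       # tear it down again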

Docker continuous deployment workflow

I'm planning to set up a jenkins-based CD workflow with Docker at the end.
My idea is to automatically build (with Jenkins) a Docker image for every green build, then deploy that image either by Jenkins or by hand (I'm not yet sure whether I want to automatically run each green build).
Getting to the point of having a new image built is easy. My question is about the deployment itself. What's the best practice to 'reload' or 'restart' a running Docker container? Suppose the image changed for the container, how do I gracefully reload it while having a service running inside? Do I need to do the traditional dance with multiple running containers and load balancing, or is there a 'dockery' way?
Suppose the image changed for the container, how do I gracefully reload it while having a service running inside?
You don't want this.
Docker is a simple system for managing apps and their dependencies. It's simple and robust because ALL dependencies of an application are bundled with it. If your app runs today on your laptop, it will run tomorrow on your server. This is because we have captured 100% of the "inputs" for your application.
As soon as you introduce concepts like "upgrade" and "restart", your application can (accidentally) store state internally. That means it might behave differently tomorrow than it does today (after being restarted and upgraded 100 times).
It's better to use a load balancer (or similar) to transition between your versions than to try and muck with the philosophy of Docker.
The Docker container itself should always be immutable, as you have to replace it for a new deployment. Storing state inside the Docker container will not work when you want to frequently ship new releases that you've built on your CI.
Docker supports volumes, which let you write files permanently into some folder on the host. When you then upgrade the Docker container, you use the same volume, so you have access to the same files written by the old container:
https://docs.docker.com/userguide/dockervolumes/
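A small sketch of that pattern (the image names, volume name, and path are made up for illustration):
docker volume create app-data                                          # named volume for persistent state
docker run -d --name app-v1 -v app-data:/var/lib/app myorg/myapp:1.0   # current release writes its state to the volume
docker stop app-v1 && docker rm app-v1                                 # upgrade: replace the container, keep the volume
docker run -d --name app-v2 -v app-data:/var/lib/app myorg/myapp:2.0   # new release sees the same data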
