I have searched and researched this topic, but could not find any solution so far.
Has anyone tried this scenario? Start Fabric, create a business network, and create a sample app to post transactions to this network. So far so good. Now shut down Fabric and restart it. Has anyone seen that the transactions are lost? How does one go about making the ledger survive restarts?
You need to mount a volume for the directory /var/hyperledger/production in the orderer and peer Docker containers. This is where all the persistent data is held: channel information, transactions and blocks.
If you are spinning up your containers through docker-compose you can add:
volumes:
- <some local dir>:/var/hyperledger/production
If you are spinning up your containers through docker run add the argument:
-v <some local dir>:/var/hyperledger/production
I haven't used Composer much myself, so I'm not quite sure how Composer builds the containers, if you are using that.
You will also need to make sure each node gets its own host directory so they don't conflict.
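As a rough illustration (the service names and host paths below are placeholders, adjust them to your own setup), the relevant part of a compose file could look like this:

peer0.org1.example.com:
  image: hyperledger/fabric-peer
  volumes:
    # each node writes its ledger and state data here, so give it its own host dir
    - ./ledger-data/peer0.org1:/var/hyperledger/production
orderer.example.com:
  image: hyperledger/fabric-orderer
  volumes:
    - ./ledger-data/orderer:/var/hyperledger/production

As long as those host directories are kept (and you don't regenerate the crypto material), the channel data and blocks will still be there when the containers come back up.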
So far I have used the fabric-samples repo and its network.sh script to start the network. It already provides a connection-org.yaml file which has the necessary information.
When I need to use Fabric for my app, I know I need to start the Fabric network, right? Then I also need to create a channel and a user in it. How do I do that? Should I just copy and paste network.sh from fabric-samples? What about connection-org.yaml? I think all of them are hardcoded, right? What should I do about it?
Every tutorial has these things prebuilt and never explains what they are. Any help would be greatly appreciated.
Since you mentioned that you have used the Fabric repo, I expect you to be familiar with the Hyperledger Fabric blockchain framework.
The following factors related to the network should be decided first.
Channel name.
How many, and which, organizations are participating in the consortium?
How many peers per Organization?
The ordering service will be Raft based, but how many orderer nodes?
Whether the state database will be CouchDB or LevelDB.
How the MSP crypto material will be generated (Fabric CA [and if so, with your own root certificate/rootCA?] or the cryptogen tool).
Once the above has been laid out, the next step is to start writing the network script.
The Docker images should already be loaded into the local Docker repository, and the Fabric binaries should be available in a location accessible to the script. If the Docker images are not loaded, the machine needs internet connectivity to reach Docker Hub.
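For example, you could check and pull the images roughly like this (the tags are only examples, use the release you are actually targeting):

# see which Fabric images are already present locally
docker images | grep hyperledger
# pull what is missing from Docker Hub
docker pull hyperledger/fabric-peer:2.2
docker pull hyperledger/fabric-orderer:2.2
docker pull hyperledger/fabric-tools:2.2
docker pull hyperledger/fabric-ca:1.5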
It would be good to start with a docker based network setup.
The network and persistent data stores (Docker network, ports and volumes) should be planned.
Once that is sorted out, writing the docker-compose files can start. The following points should be noted during this step.
Create a single compose file with all the organizations, or create individual compose files for each organization. Take a look at the docker-compose YAML files shipped alongside network.sh to get an idea.
Decide on the Docker subnet (network reference).
Provide the same network reference in each service and in each individual compose file.
Provide the environment variables for the items below (a compose sketch follows this list):
Map the MSP folders.
Decide on TLS/SSL as applicable.
Provide the CouchDB ports (if applicable), peer ports, gossip ports, orderer ports, etc.
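To make the points above concrete, here is a rough sketch of a single peer service (names, ports, paths and the network name are illustrative, and the list of environment variables is not exhaustive):

services:
  peer0.org1.example.com:
    image: hyperledger/fabric-peer:2.2
    environment:
      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/fabric/msp
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb0:5984
    volumes:
      # MSP material generated by cryptogen / Fabric CA
      - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
      # persistent ledger data
      - ./ledger-data/peer0.org1:/var/hyperledger/production
    ports:
      - 7051:7051
    networks:
      - fabric_net
networks:
  fabric_net:

Every service in every compose file should reference the same network (fabric_net here) so the containers can resolve each other by name.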
If you plan to use cryptogen, create the config files as per the org structure (a rough sketch follows). If you use Fabric CA, write the commands as per the org structure.
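If you go the cryptogen route, a minimal crypto-config.yaml for one orderer org and one peer org could look roughly like this (the org names, domains and counts are placeholders):

OrdererOrgs:
  - Name: Orderer
    Domain: example.com
    Specs:
      - Hostname: orderer
PeerOrgs:
  - Name: Org1
    Domain: org1.example.com
    Template:
      Count: 2        # number of peers to generate
    Users:
      Count: 1        # non-admin users; an admin is generated in addition

and the material is then generated with:

cryptogen generate --config=./crypto-config.yaml --output=crypto-config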
Now refer again to the network.sh script and try to figure out how the crypto is generated (as applicable to your choice). Also refer to the cleanup part of network.sh to understand how it is done, what is being removed, and what is being retained.
Every time the script fails, make sure that you clean up before starting again, i.e., remove all the Docker containers and volumes. You can retain your MSP crypto material if you want to.
Locate the commands for creating the channel and adding peers to the channel.
The content of env.sh is a good example of how to set the environment variables needed within your script.
Once all the members have joined the channel, set up the anchor peers per organization.
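Depending on the Fabric version the exact flow differs (newer releases create channels through the channel participation API), but the classic CLI flow extracted from such scripts looks roughly like this (paths, MSP IDs and the channel name are placeholders):

# act as the Org1 admin -- this is the kind of thing env.sh sets up
export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_MSPCONFIGPATH=<path to Admin@org1 msp>
export CORE_PEER_ADDRESS=localhost:7051
export CORE_PEER_TLS_ROOTCERT_FILE=<path to org1 peer tls ca cert>

# create the channel from the channel tx, join the peer, then set the anchor peer
peer channel create -o localhost:7050 -c mychannel -f ./channel-artifacts/mychannel.tx --tls --cafile <path to orderer tls ca cert>
peer channel join -b ./mychannel.block
peer channel update -o localhost:7050 -c mychannel -f ./channel-artifacts/Org1MSPanchors.tx --tls --cafile <path to orderer tls ca cert>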
Write a version of the script after referring to the example.
After properly executing the steps above, the script should be able to bring a Hyperledger Fabric network up and running.
I am working on a project using hyperledger fabric.
Currently, I use the test-network from the Hyperledger Fabric documentation to develop it.
But what I'm worried about is: if my computer turns off and the Hyperledger Fabric network goes down, will I be able to restore the previous network state again?
If so, what should I do?
In the current practice example, if you run the ./network.sh down command in test-network, you can see that the generated MSP files and other authentication-related files are removed. Should I modify this network.sh file so that the generated authentication-related files are not deleted when the server goes down, and so that, if the files exist when the server is turned on again, the network can be configured from them?
If you want to retain your current network, even in a stopped state, run the following command in the docker folder.
docker-compose -f docker-compose-couch.yaml -f docker-compose-ca.yaml -f docker-compose-test-net.yaml stop
This command stops the current containers from running. When you want to start the same network again, use the same command and use start instead of stop, as shown next.
docker-compose -f docker-compose-couch.yaml -f docker-compose-ca.yaml -f docker-compose-test-net.yaml start
Using the ./network.sh down command removes all traces of the network, other than the files that were there from the beginning.
But what I'm worried about is: if my computer turns off and the Hyperledger Fabric network goes down, will I be able to restore the previous network state again?
If the containers are in the Exited state (not due to errors), you can bring those containers up again.
Normally we maintain a persistent DB to store data.
In the test network we are not bothered about restarting the network and generating new certs repeatedly. In a prod environment we would execute the script (network.sh) only once.
You can also make changes (micro-scripts) to the existing scripts based on your use case/requirements.
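For example, assuming the containers are still present in the Exited state (the container names below are just examples from a typical test network):

# list stopped containers
docker ps -a --filter "status=exited"
# bring them back up without recreating them
docker start peer0.org1.example.com orderer.example.com couchdb0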
I just created a Hyperledger Composer network at production level. There is a lot of data (participants and assets) in my Composer blockchain (which is on CouchDB). My main problem is that I need to set up Hyperledger Explorer for my existing network. I already use https://github.com/hyperledger/blockchain-explorer. But the issue is that my network's orderer port is not synced with Explorer (I already posted a question regarding this issue, Hyperledger explorer starting problem - orderer port communication issue. Unfortunately, no reply).
At this moment I have decided to stop the running Hyperledger Composer network and start it again without any data (participant and asset) loss. Is restarting the network without data loss actually possible?
Is there any other suggestion available to resolve my issue?
Any suggestion is much appreciated..
Thank you.
OS: Ubuntu 16.04
Composer: 0.19.16
Fabric: 1.1.0
When you stop your business network using stopFabric.sh under fabric-dev-servers (or fabric-tools), it stops the Fabric containers, and running startFabric.sh afterwards recreates new containers from the Docker images. The impact of this is that you lose all the data (assets, participants, transactions etc.) of your business network.
So if you want to stop and start your Fabric without losing the existing data, follow the commands below:
Change to the directory where the docker-compose.yml file is (/home/<user>/fabric-dev-servers/fabric-scripts/hlfv11/composer), then
Run docker-compose stop to stop Fabric, then
Run docker-compose start to restart Fabric; it will start your network with the existing data. Make sure you are in the correct folder.
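Putting those steps together, the commands would look roughly like this:

cd /home/<user>/fabric-dev-servers/fabric-scripts/hlfv11/composer
docker-compose stop    # containers are stopped but not removed, data is kept
docker-compose start   # the same containers come back with the existing data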
Hope it will help you :)
I have successfully installed a few hyperledger demos, including the marbles one (https://github.com/IBM-Blockchain/marbles)
A few questions,
How can I move some of the marbles demo nodes to another host/s and still get this demo to work?
I have read the following two posts on the same topic already (where docker-swarm has been used for intra-host communication):
How can I set up hyperledger fabric with multiple hosts using Docker? (hyperledger-fabric-with-multiple-hosts-using-docker) and
How can I make a communication between several docker containers on my local network (communication-between-several-docker-containers-on-my-local-net)
I still couldn't decipher installing additional nodes and running them on different hosts.
As running blockchain nodes on multiple hosts seems to be a common task, how is it being done now? I saw references to Cello and an Ansible script, though they don't look like mature, sure-shot solutions.
Could I install the fabric nodes manually by pulling the hyperledger/fabric peer images from the docker hub? How do I then install & run the marbles demo on these pulled images?
Thanks
How can I move some of the marbles demo nodes to another host/s and still get this demo to work?
What do you want to do? I don't understand why you want to move a node. Does it make sense? If you move some nodes, you are removing them from your blockchain. If they are part of the ordering service, or their endorsement is required by the endorsement policy, your demo will not continue running.
Intra-host communication and communication among multiple Docker containers are different things from what you are asking.
Could I install the fabric nodes manually by pulling the hyperledger/fabric peer images from the docker hub? How do I then install & run the marbles demo on these pulled images?
You can install your nodes manually via docker-compose. You should define what you want to start up and then execute it. Of course, you should have the corresponding Docker images on your machine. Then, you should deploy the marbles smart contract on your peers. You have more info about it here.
I have been working with the Hyperledger Fabric Node SDK v1.0 and have successfully created a prototype based on Docker. However, now I want to implement this architecture on real systems. I haven't found any documentation which helps with setting up the environment on real systems. All I found is how to set up different peers and organizations using Docker and then invoke transactions etc. Can we connect different computer machines using Docker and then spin up the network on all these different machines to create a private blockchain?
Yes, you can do it. For that, first of all you should define your network configuration. Then, you would create the artifacts that are required for the network: the keys, the channel artifact, the genesis block... You would follow the steps that are defined in the Fabric documentation to Build your first network. Also, you should share the public keys and the genesis block.
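For example, with the Build-your-first-network style tooling the channel artifacts are generated with configtxgen roughly like this (the profile names come from your own configtx.yaml, so treat these as placeholders):

configtxgen -profile TwoOrgsOrdererGenesis -channelID sys-channel -outputBlock ./channel-artifacts/genesis.block
configtxgen -profile TwoOrgsChannel -channelID mychannel -outputCreateChannelTx ./channel-artifacts/mychannel.tx
configtxgen -profile TwoOrgsChannel -channelID mychannel -outputAnchorPeersUpdate ./channel-artifacts/Org1MSPanchors.tx -asOrg Org1MSP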
Then, on each machine you would install the Docker images, as explained in the Fabric documentation. After that, you would define the containers that you are going to set up on each machine. You do that in the Docker configuration files (docker-compose.yaml, docker-base.yaml...). There, take care to define the Docker network configuration well. You have more info about it in the answer to this question.
At the end you would start each container by executing docker-compose.
I don't know if I've given you enough information. If not, ask again please.
Docker is good for production systems. Docker swarm can be used for connecting multiple machines.
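A common pattern (sketched here with placeholder addresses and an illustrative network name) is to create a Swarm overlay network and attach each host's containers to it:

# on the first machine
docker swarm init --advertise-addr <host1-ip>
docker network create --driver overlay --attachable fabric_net
# on every other machine, join using the token printed by swarm init
docker swarm join --token <token> <host1-ip>:2377

Then reference that network as external in each docker-compose file, so containers on different hosts can reach each other by name:

networks:
  fabric_net:
    external: true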