Hyperledger Fabric setup in real systems

I have been working with the Hyperledger Fabric Node SDK v1.0 and successfully created a prototype based on Docker. However, now I want to implement this architecture on real systems, and I haven't found any documentation that helps with setting up the environment on real systems. All I found is how to set up different peers and organizations using Docker and then invoke transactions, etc. Can we connect different computer machines using Docker and then spin up the network across all these different machines to create a private blockchain?

Yes, you can do it. For that, first of all you should define your network configuration. Then you would create the artifacts that the network requires: the keys, the channel artifacts, the genesis block... You would follow the steps defined in the Fabric documentation to Build your first network. Also, you should share the public keys and the genesis block among the machines.
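As a rough sketch, assuming a crypto-config.yaml and configtx.yaml like the ones in the Build your first network sample (the profile names below come from that sample and must match your configtx.yaml):
# Generate keys and certificates for all organizations
cryptogen generate --config=./crypto-config.yaml
# Generate the genesis block for the ordering service
configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block
# Generate the channel creation transaction
configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID mychannel
The generated crypto material and the genesis block are what you would then copy to the other machines.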
Then, on each machine you would install the Docker images, as explained in the Fabric documentation. After that, you would define the containers that you are going to set up on each machine. You do that in the Docker configuration files (docker-compose.yaml, docker-base.yaml...). There, be careful to define the Docker network configuration correctly. You have more info about it in the answer to this question.
Finally, you would start each container by executing docker-compose on its machine.
I don't know if I've given you enough information. If not, please ask again.

Docker is good for production systems. Docker Swarm can be used for connecting multiple machines.

Related

Hyperledger starting project for my own use case

As of now I have used the fabric-samples repo and its network.sh to start the network. It already has the connection-org.yaml file with the necessary information.
When I need to use Fabric for my app, I know I need to start a Fabric network, right? Then I also need to create a channel and a user in it. How do I do that? Should I just copy and paste network.sh from fabric-samples? What about connection-org.yaml? I think all of it is hardcoded, right? What should I do about it?
Every tutorial has these things prebuilt without ever explaining what they are. Any help would be greatly appreciated.
Since you have mentioned that you have used the Fabric repo, I expect you are familiar with the Hyperledger Fabric blockchain framework.
The following factors related to the network should be decided first:
Channel name.
How many organizations are participating in the consortium, and which ones?
How many peers per organization?
The ordering service would be Raft-based, but how many orderer nodes?
Whether the state database will be CouchDB or LevelDB.
How the MSP crypto material will be generated: is a Fabric CA going to be used (and if so, with its own root certificate/root CA?) or the cryptogen tool?
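For example, if you go with the cryptogen tool, a minimal crypto-config.yaml reflecting those decisions could look like the following sketch; the org names, domains and counts are placeholders for your own layout:
cat > crypto-config.yaml <<'EOF'
OrdererOrgs:
  - Name: Orderer
    Domain: example.com
    Specs:
      - Hostname: orderer       # one orderer node; add more Specs for Raft
PeerOrgs:
  - Name: Org1
    Domain: org1.example.com
    Template:
      Count: 2                  # peers per organization
    Users:
      Count: 1                  # non-admin user certs to generate
  - Name: Org2
    Domain: org2.example.com
    Template:
      Count: 2
    Users:
      Count: 1
EOF
cryptogen generate --config=./crypto-config.yaml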
Once the above has been laid out, the next step is to start coding the network script.
The images should already be loaded into the local Docker repository, and the Fabric binaries should be available in a location accessible to the script. If the Docker images are not loaded, the machine needs connectivity to the internet and to Docker Hub.
It would be good to start with a Docker-based network setup.
The network and the persistent data stores (Docker network, ports and volumes) should be planned.
Once that is sorted out, the coding of the Docker Compose files can start. The following points should be noted during this step (a compose sketch follows the list):
Create a single compose file with all the organizations, or create individual compose files for each organization. Take a look at the Docker Compose YAML files present alongside network.sh to get an idea.
Decide on the Docker subnet (the network reference).
Provide the same network reference for each service in each individual compose file.
Provide the environment variables for the items below.
Map the MSP folders.
Decide on TLS as applicable.
Provide the CouchDB ports (if applicable), peer ports, gossip ports, orderer ports, etc.
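A hypothetical compose fragment for a single peer, illustrating the points above; the image tag, names, paths and ports are placeholders, and only a few of the required environment variables are shown:
cat > docker-compose-org1.yaml <<'EOF'
version: '3.5'
networks:
  fabric_net:
    name: fabric_net            # same network reference in every compose file
services:
  peer0.org1.example.com:
    image: hyperledger/fabric-peer:2.2
    environment:
      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_TLS_ENABLED=true
    volumes:
      - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
      - peer0.org1.data:/var/hyperledger/production
    ports:
      - "7051:7051"             # peer and gossip port
    networks:
      - fabric_net
volumes:
  peer0.org1.data:
EOF
docker-compose -f docker-compose-org1.yaml up -d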
If you plan to use cryptogen, create the config files as per the org structure. If it is Fabric CA, write the commands as per the org structure.
Now refer again to the network.sh script and try to figure out how the crypto material is generated (as applicable to your choice). Also refer to the cleanup part of network.sh to understand how it is done, what is removed, and what is retained.
Every time the script bombs, make sure that you clean up before starting again, i.e. remove all the Docker containers and volumes. You can retain your MSP crypto material if you want to.
Locate the commands to create the channel and to add peers to the channel.
The content of env.sh is a good example of how to set the environment variables needed within your script.
Once all the members have joined the channel, set up the anchor peers for each organization. A sketch of these steps follows.
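A rough sketch of those commands for one organization, assuming the admin MSP paths from a cryptogen layout and a channel named mychannel (TLS flags omitted for brevity):
# Act as the Org1 admin
export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_ADDRESS=peer0.org1.example.com:7051
export CORE_PEER_MSPCONFIGPATH=$PWD/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp

# Create the channel from the channel transaction, then join the peer to it
peer channel create -o orderer.example.com:7050 -c mychannel -f ./channel-artifacts/channel.tx
peer channel join -b mychannel.block

# Set the anchor peer for this organization
peer channel update -o orderer.example.com:7050 -c mychannel -f ./channel-artifacts/Org1MSPanchors.tx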
Write a version of the script after referring to the example.
After properly executing the steps above, the script should bring a Hyperledger Fabric network up and running.

Has anyone done Hyperledger Fabric multi-org on multiple hosts using a Docker Compose file?

Has anyone done Hyperledger Fabric multi-org on multiple hosts using a Docker Compose file?
I just need to know the feasibility; if possible, please share the reference material as well.
I also tried the Docker Compose commands mentioned in this link: https://medium.com/@wahabjawed/hyperledger-fabric-on-multiple-hosts-a33b08ef24f
It is very much feasible and we have tried it out.
There are two ways you can achieve this:
1. Either docker-compose up with the compose files split per organization, or with arguments passed as params, where each org is brought up separately.
2. Using docker stack deploy with a single file, keeping the placement constraints as per the version 3.5 schema for the Compose file.
The rest of the process remains the same.
You need to use docker swarm init to initialise the cluster, and then add the other VMs using the generated token. Refer to the Docker Swarm guide for help.
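For reference, the Swarm side of it is roughly this; the IPs and token are placeholders that the commands themselves print or that you supply:
# On the first VM: initialise the swarm; it prints the join token
docker swarm init --advertise-addr <manager-ip>

# On every other VM: join the cluster with that token
docker swarm join --token <token-from-init> <manager-ip>:2377

# Create an attachable overlay network that all Fabric containers can share
docker network create --driver overlay --attachable fabric_net

# Option 2 above: deploy a version 3.5 compose file as a stack
docker stack deploy -c docker-compose.yaml fabric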

Hyperledger Fabric multiple hosts setup marbles demo

I have successfully installed a few Hyperledger demos, including the marbles one (https://github.com/IBM-Blockchain/marbles).
A few questions:
How can I move some of the marbles demo nodes to other hosts and still get this demo to work?
I have read the following two posts on the same topic already (where docker-swarm has been used for inter-host communication):
How can I set up hyperledger fabric with multiple hosts using Docker?
How can I make a communication between several docker containers on my local network?
I still couldn't decipher installing additional nodes and running them on different hosts.
As running blockchain nodes on multiple hosts seems to be a common task, how is it being done now? I saw references to Cello and an Ansible script, though they don't look like mature, sure-shot solutions.
Could I install the Fabric nodes manually by pulling the hyperledger/fabric-peer images from Docker Hub? How do I then install and run the marbles demo on these pulled images?
Thanks
How can I move some of the marbles demo nodes to other hosts and still get this demo to work?
What do you want to do? I don't understand why you want to move a node. Does it make sense? If you move some nodes, you are removing them from your blockchain. If they are part of the ordering service, or their endorsement is required by the endorsement policy, your demo will not continue running.
The intra-host communication and the communication among several Docker containers are different things from what you are asking about.
Could I install the Fabric nodes manually by pulling the hyperledger/fabric-peer images from Docker Hub? How do I then install and run the marbles demo on these pulled images?
You can install your nodes manually via docker-compose. You should define what you want to start up and then execute it. Of course, you should have the corresponding Docker images on your machine. Then, you should deploy the marbles smart contract on your peers. You have more info about it here.
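With Fabric 1.x images, deploying the chaincode would look roughly like this; the -p chaincode path is an assumption about the marbles repo layout, and the channel and orderer names depend on your network:
# Install the marbles chaincode on the peer the CLI points at,
# then instantiate it once on the channel
peer chaincode install -n marbles -v 1.0 -p github.com/ibm-blockchain/marbles/chaincode/src/marbles
peer chaincode instantiate -o orderer.example.com:7050 -C mychannel -n marbles -v 1.0 -c '{"Args":["init"]}'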

Hyperledger Composer on a local network of Ubuntu servers

I was able to set up Hyperledger Composer in Docker containers on a Mac by following the instructions here: https://hyperledger.github.io/composer/installing/development-tools.html. I was also able to develop a proof-of-concept project with a mobile web app that connects to the blockchain via a REST API.
Now I am trying to run the nodes on actual Ubuntu servers in a local network, but I can't seem to find any tutorial that explains how to do that.
I know I might have some gaps in my knowledge of computer architecture or networking in general; that's why I am struggling with this.
I was looking at the downloadFabric.sh script in fabric-tools and I see how the Docker images are pulled. I was thinking maybe I should just pull the Docker images to the individual Linux servers.
### Pull and tag the latest Hyperledger Fabric base images
docker pull hyperledger/fabric-peer:$ARCH-1.0.1     # on server 1
docker pull hyperledger/fabric-ca:$ARCH-1.0.1       # on server 2
docker pull hyperledger/fabric-ccenv:$ARCH-1.0.1    # on server 3
docker pull hyperledger/fabric-orderer:$ARCH-1.0.1
docker pull hyperledger/fabric-couchdb:$ARCH-1.0.1
and so on.
Please, how would you do this? Are there any resources I missed while researching this? Can you point me to some resources that would help me understand how to do it?
Hyperledger Composer will connect to the Fabric that you configure, so your problem is to configure a Fabric environment and set of nodes, using the host name resolution etc. that you want for your network.
I would advise checking out the Fabric docs, http://hyperledger-fabric.readthedocs.io/en/latest/build_network.html, to build your network. They have some sample networks (see the GitHub repository here -> https://github.com/hyperledger/fabric/tree/release/examples).
The 'Fabric' that is set up by Hyperledger Composer's dev environment is just a dev environment with one peer configured (via Docker containers) to get you going.
You need to understand from the Fabric docs how to set up your network, then come back to Composer (once all that is set up) and use connection profiles to connect to the runtime Fabric.
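As a rough sketch, a connection profile in the common connection profile format used by newer Composer releases looks something like this; every name, host and port below is a placeholder for your own network, and the file is trimmed to the main sections:
cat > connection.json <<'EOF'
{
  "name": "my-network",
  "x-type": "hlfv1",
  "version": "1.0.0",
  "channels": {
    "mychannel": {
      "orderers": ["orderer.example.com"],
      "peers": { "peer0.org1.example.com": {} }
    }
  },
  "organizations": {
    "Org1": {
      "mspid": "Org1MSP",
      "peers": ["peer0.org1.example.com"],
      "certificateAuthorities": ["ca.org1.example.com"]
    }
  },
  "orderers": {
    "orderer.example.com": { "url": "grpc://<orderer-host>:7050" }
  },
  "peers": {
    "peer0.org1.example.com": { "url": "grpc://<peer-host>:7051" }
  },
  "certificateAuthorities": {
    "ca.org1.example.com": { "url": "http://<ca-host>:7054", "caName": "ca.org1.example.com" }
  }
}
EOF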
You could consider using docker-compose, which builds on the Docker client and makes it easier to run multiple Docker containers. Please consult the manual to install docker-compose.
After installing docker-compose, you can use the startFabric.sh script provided here to start the Fabric containers you downloaded.
You may want to consider deploying Fabric with its sister project Hyperledger Cello, specifically the Ansible driver for Kubernetes.
The samples provided in both Composer and Fabric are mostly for single-host deployment. You could adapt the Docker Compose files to use Swarm, but even Docker is moving away from Swarm now.
Of course, if you just want to run locally, the Build Your First Network tutorial in Hyperledger Fabric will get you rolling with the Docker images in your question.

Internal infrastructure with docker

I have a small company network with the following services/servers:
Jenkins
Stash (Atlassian)
Confluence (Atlassian)
LDAP
OwnCloud
Zabbix (monitoring)
Puppet
and some Java web apps
all running in separate KVM (libvirt) VMs in separate virtual subnets on 2 machines (1 internal, 1 Hetzner root server) with Shorewall in between. I'm thinking about switching to Docker.
But I have two questions:
How can I achieve network security between Docker containers (i.e. I want to prevent OwnCloud from accessing any host in the network except the LDAP host's SSL port)?
Just by using Docker linking? If yes: does Docker really allow access only to linked containers, and to no others?
By using Kubernetes?
By adding multiple bridging network interfaces for each container?
Would you switch all my infra services/servers to Docker, or go for a hybrid solution with just OwnCloud and the Java web apps on Docker?
Regarding the multi-host networking: you're right that Docker links won't work across hosts. With Docker 1.9+ you can use "Docker Networking" as described in their blog post http://blog.docker.com/2015/11/docker-multi-host-networking-ga/
They don't explain how to secure the connections, though. I strongly suggest enabling TLS on your Docker daemons, which should also secure your multi-host network (that's an assumption, I haven't tried it).
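For reference, a TLS-protected daemon is started with flags along these lines; generating the certificates is covered in the Docker docs under "Protect the Docker daemon socket":
# The daemon only accepts connections from clients whose cert is signed by ca.pem
dockerd --tlsverify \
  --tlscacert=ca.pem \
  --tlscert=server-cert.pem \
  --tlskey=server-key.pem \
  -H=0.0.0.0:2376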
With Kubernetes you're going to add another layer of abstraction, so you'll need to learn to work with the pod and service concepts. That's fine, but it might be a bit too much. Keep in mind that you can still decide to use Kubernetes (or an alternative) later, so the first step should be to learn how to wrap your services in Docker containers.
You won't necessarily have to switch everything to Docker. You could start with Jenkins, the Java apps, or OwnCloud and then get a bit more used to the Docker universe. Jenkins and OwnCloud will give you enough challenges to gain some experience in maintaining containers. Then you can evaluate much better whether Docker makes sense in your setup and whether you want to apply it to the other services.
I personally tend to wrap everything in Docker, but only for one reason: keeping the host clean. If you get to the point where everything runs in Docker, you'll have much more freedom to choose where a service can run, and you can move containers to other hosts much more easily.
You should also explore Docker Hub, where you can find ready-to-run solutions, e.g. Atlassian Stash: https://hub.docker.com/r/atlassian/stash/
If you need inspiration for special applications and how to wrap them in Docker, I recommend having a look at https://github.com/jfrazelle/dockerfiles - you'll find a bunch of good examples there.
You can give containers their own IPs from your subnet by creating a network like so:
docker network create \
--driver=bridge \
--subnet=135.181.x.y/28 \
--gateway=135.181.x.y+1 \
network
Your gateway is the IP of your subnet + 1, so if my subnet was 123.123.123.123 then my gateway would be 123.123.123.124.
Unfortunately I have not yet figured out how to make the containers appear to the public under their own IPs; at the moment they appear under the dedicated server's IP. Let me know if you know how I can fix that. I am able to access each container via its IP, though.
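For what it's worth, attaching a container with a fixed address from that subnet works like this; the address is a placeholder inside the /28 range:
# Run a container with a static IP on the user-defined network created above
docker run -d --name web --network network --ip 135.181.x.z nginx
Outgoing traffic still gets masqueraded to the host's IP by default, which may be why the containers don't appear under their own addresses; I believe this can be changed with the bridge option com.docker.network.bridge.enable_ip_masquerade=false plus upstream routing, but I haven't verified that.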
