We are running fabric (1.4.4) in an essentially closed network (no direct internet access, only private docker/npm/maven registries requiring authentication are accessible through a proxy).
While golang chaincode works just fine in this setup, node chaincode has proved impossible to instantiate, as fabric-ccenv executes npm install --production, which obviously won't work without internet access. In Fabric 1.4, the only thing customizable about fabric-ccenv seems to be which image is used.
But even if our security policy allowed proxy settings, private registry addresses and credentials to be included in a custom fabric-ccenv image, some native npm builds (x509, grpc) seem to need direct internet access, as they download e.g. OpenSSL header files or grpc pre-built binaries. Even if we were able to 'hack' around that for one particular node chaincode, it would still preclude the use of arbitrary node chaincode with arbitrary dependencies.
In Fabric 2.0 it seems to be possible to build chaincode containers externally - is there any chance of making this work for Fabric 1.4, too?
Are there any other ways to influence the node chaincode container build in Fabric 1.4 that we might have missed?
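For context, the image override mentioned above is configured via chaincode.builder and chaincode.node.runtime in core.yaml, or the equivalent peer environment variables. A minimal compose-file sketch, assuming custom pre-baked images in a private registry (the image names are hypothetical, and this alone does not solve the build-time internet-access problem):

```yaml
# Peer service fragment (sketch): point the peer at images that already
# contain proxy settings and a .npmrc for the private registry.
services:
  peer0:
    image: hyperledger/fabric-peer:1.4.4
    environment:
      # image used to build chaincode
      - CORE_CHAINCODE_BUILDER=registry.internal/custom-ccenv:1.4.4
      # runtime image for node chaincode
      - CORE_CHAINCODE_NODE_RUNTIME=registry.internal/custom-baseimage:1.4.4
```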
Related
I have a scenario in which the chaincode needs to call an external application to do a complex proprietary job.
I know that it is basically possible (though not recommended) to call an external service, e.g. via HTTP.
However, I'd like to call a binary that is locally installed on the peer, simply via e.g. exec.Command("some application") from the chaincode, and work with its result.
The problem I am facing is that Fabric runs the chaincode in a separate Docker container rather than directly in the peer container, making the binary unavailable. Is there a way to share, say, a peer volume with the runtime containers that Fabric creates for chaincode execution?
You can include the binary in the chaincode package, and the chaincode will then be able to execute it at runtime.
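As a sketch of that approach: ship the binary inside the chaincode source tree so peer chaincode install packages it along with the code, then invoke it with os/exec from the chaincode. The helper below is illustrative only; /bin/echo stands in for the proprietary binary, and in a real chaincode the path would point at the bundled file.

```go
package main

import (
	"fmt"
	"os/exec"
)

// runPackagedBinary executes a helper binary shipped inside the chaincode
// package and returns its stdout. In a real chaincode the path would be
// relative to the package contents; /bin/echo is used here as a stand-in.
func runPackagedBinary(path string, args ...string) (string, error) {
	out, err := exec.Command(path, args...).Output()
	if err != nil {
		return "", fmt.Errorf("running %s: %w", path, err)
	}
	return string(out), nil
}

func main() {
	out, err := runPackagedBinary("/bin/echo", "hello from packaged binary")
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
```

The same pattern would sit inside your Invoke handler, with the binary's output written into the ledger state or returned in the response.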
Here's my problem.
I have set up a development environment for HLF using Vagrant, made some changes to the HLF source, and built new Docker images.
Then I deployed a network with 3 peers and 1 orderer, starting from the custom HLF images, and installed the basic chaincode that uses the 'a' and 'b' variables as assets and performs some operations on them (chaincode_example02).
Now I would like to see blockchain info such as block hashes, ledger height and so on. How can I access this info? Is there a way without using an app built with the SDKs - for instance, some command executed via the CLI?
If not, what's the fastest way to get this info? Thanks.
Ok, solved the problem.
This does what I needed:
docker exec -it cli bash
peer channel getinfo -c <channel-name>
I have successfully installed a few hyperledger demos, including the marbles one (https://github.com/IBM-Blockchain/marbles)
A few questions:
How can I move some of the marbles demo nodes to one or more other hosts and still get this demo to work?
I have already read the following two posts on the same topic (where docker-swarm has been used for inter-host communication):
How can I set up hyperledger fabric with multiple hosts using Docker?
How can I make a communication between several docker containers on my local network
I still couldn't figure out how to install additional nodes and run them on different hosts.
As running blockchain nodes on multiple hosts seems to be a common task, how is it being done now? I saw references to Cello and an Ansible script, though they don't look like mature, sure-shot solutions.
Could I install the Fabric nodes manually by pulling the hyperledger/fabric-peer images from Docker Hub? How would I then install & run the marbles demo on these pulled images?
Thanks
How can I move some of the marbles demo nodes to another host/s and still get this demo to work?
What exactly do you want to do? I don't understand why you want to move a node - does that make sense for your use case? If you move some nodes, you are removing them from your blockchain. If they are part of the ordering service, or their endorsement is required by the endorsement policy, your demo will stop running.
Communication across hosts and communication among several Docker containers on one machine are different things from what you are asking about.
Could I install the Fabric nodes manually by pulling the hyperledger/fabric-peer images from Docker Hub? How would I then install & run the marbles demo on these pulled images?
You can install your nodes manually via docker-compose. You define what you want to start up and then execute it. Of course, you should have the corresponding Docker images on your machine. Then, you should deploy the marbles smart contract on your peers. You have more info about it here.
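For illustration, a single-peer service in docker-compose might look roughly like this. The names, ports, and mounted paths are placeholders; real crypto material generated with cryptogen/configtxgen still has to be mounted in:

```yaml
services:
  peer0.org1.example.com:
    image: hyperledger/fabric-peer:latest
    environment:
      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/peer
      # let the peer start chaincode containers via the host's Docker daemon
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    volumes:
      - /var/run/docker.sock:/host/var/run/docker.sock
      - ./crypto/peer0/msp:/etc/hyperledger/msp/peer   # hypothetical local path
    ports:
      - 7051:7051
```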
I was able to set up Hyperledger Composer in Docker containers on a Mac by following the instructions here: https://hyperledger.github.io/composer/installing/development-tools.html. I was also able to develop a proof-of-concept project with a mobile web app that connected to the blockchain via a REST API.
Now, I am trying to run the nodes on actual Ubuntu servers in a local network but I can't seem to find any tutorial to explains how to do that.
I know I might have some gap in my knowledge of computer architecture or networking in general that's why I am struggling with this.
I was looking at the downloadFabric.sh script in fabric-tools and I see how the Docker images are pulled. I was thinking maybe I should just pull the Docker images onto the individual Linux servers.
### Pull and tag the latest Hyperledger Fabric base images.
docker pull hyperledger/fabric-peer:$ARCH-1.0.1      # on server 1
docker pull hyperledger/fabric-ca:$ARCH-1.0.1        # on server 2
docker pull hyperledger/fabric-ccenv:$ARCH-1.0.1     # on server 3
docker pull hyperledger/fabric-orderer:$ARCH-1.0.1
docker pull hyperledger/fabric-couchdb:$ARCH-1.0.1
and so on.
Please, how would you do this? Are there any resources I missed while researching this? Can you point me to some resources that would help me understand how to do it?
Hyperledger Composer connects to the Fabric that you configure, so your problem is really to configure a Fabric environment and set of nodes, using the host-name resolution etc. that you want for your network.
I would advise checking out the Fabric docs http://hyperledger-fabric.readthedocs.io/en/latest/build_network.html to build your network. There are some sample networks as well (see the GitHub repository here -> https://github.com/hyperledger/fabric/tree/release/examples ).
The 'Fabric' that is set up by Hyperledger Composer's dev environment is just a Dev environment with one peer configured (via docker containers) to get you going.
You need to understand from the Fabric Docs how to set up your network, then come back to Composer (once all that is set up) and use connection profiles to connect to the runtime Fabric.
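Once the Fabric side is up, a Composer connection profile is essentially a JSON description of your endpoints. A rough sketch in the common connection profile format (the exact schema depends on your Composer version, and all names, addresses, and ports below are placeholders to be replaced with your own servers):

```json
{
  "name": "my-network",
  "x-type": "hlfv1",
  "version": "1.0.0",
  "client": { "organization": "Org1" },
  "channels": {
    "composerchannel": {
      "orderers": [ "orderer.example.com" ],
      "peers": { "peer0.org1.example.com": {} }
    }
  },
  "organizations": {
    "Org1": { "mspid": "Org1MSP", "peers": [ "peer0.org1.example.com" ] }
  },
  "orderers": { "orderer.example.com": { "url": "grpc://192.168.1.10:7050" } },
  "peers": { "peer0.org1.example.com": { "url": "grpc://192.168.1.20:7051" } },
  "certificateAuthorities": { "ca.org1.example.com": { "url": "http://192.168.1.20:7054" } }
}
```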
You could consider using docker-compose, which builds on the Docker client and makes it easier to run multiple Docker containers. Please consult the manual to install docker-compose.
After installing docker-compose, you can use the startFabric.sh script provided here to start Fabric containers you had downloaded.
You may want to consider deploying Fabric with its sister project Hyperledger Cello - specifically, the Ansible driver for Kubernetes.
The samples provided in both Composer and Fabric are mostly for single-host deployment. You could adapt the Docker Compose files to use Swarm, but even Docker is moving away from Swarm now.
Of course, if you just want to try to run locally, then the Build Your First Network tutorial in Hyperledger Fabric will get you rolling with the docker images in your question.
I have been working with the Hyperledger Fabric Node SDK v1.0 and have successfully created a prototype based on Docker. Now I want to implement this architecture on real systems, but I haven't found any documentation that helps with setting up the environment on real systems; all I found is how to set up different peers and organizations using Docker and then invoke transactions, etc. Can we connect different computer machines using Docker and then spin up the network across these machines to create a private blockchain?
Yes, you can do it. First of all, you should define your network configuration. Then, you would create the artifacts that are required for the network: the keys, the channel artifacts, the genesis block... You would follow the steps defined in the Fabric documentation to Build your first network. Also, you should share the public keys and the genesis block among the machines.
Then, on each machine you would install the Docker images, as explained in the Fabric documentation. After that, you would define the containers that you are going to set up on each machine. You do that in the Docker configuration files (docker-compose.yaml, docker-base.yaml...). There, be careful to define the Docker network configuration correctly. You have more info about it in the answer to this question.
Finally, you would start the containers on each machine by executing docker-compose.
I don't know if I've given you enough information. If not, please ask again.
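One simple way to let containers on different machines find each other by name, without Swarm or an overlay network, is to map the remote hosts' names to their IPs in each compose file. The names and addresses below are placeholders:

```yaml
services:
  peer0.org1.example.com:
    # ... rest of the peer definition ...
    extra_hosts:
      - "orderer.example.com:192.168.1.10"
      - "peer0.org2.example.com:192.168.1.20"
```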
Docker is also good for production systems, and Docker Swarm can be used to connect multiple machines.
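With Swarm, the usual pattern is an attachable overlay network that every machine joins; containers attached to it can then reach each other by container name across hosts. A sketch (the address and network name are placeholders):

```shell
# On the first machine:
docker swarm init --advertise-addr 192.168.1.10
# Run the "docker swarm join ..." command it prints on each other machine,
# then create a shared network for the Fabric containers:
docker network create --driver overlay --attachable fabric-net
```

In each docker-compose file you would then declare fabric-net as an external network and attach the peer/orderer services to it.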