How can I deploy a hyperledger business network using docker-compose? - hyperledger-fabric

I am trying to deploy a Hyperledger BNA using docker-compose. How can I do it in the cleanest way?
version: '2'

networks:
  basic:

services:
  playground:
    container_name: composer-playground
    image: hyperledger/composer-playground
    ports:
      - 8080:8080
    networks:
      - basic
    command: composer-playground

  ca.example.com:
    image: hyperledger/fabric-ca
    ...

  orderer.example.com:
    container_name: orderer.example.com
    image: hyperledger/fabric-orderer
    ...

  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    image: hyperledger/fabric-peer
    ...

  cli:
    container_name: cli
    image: hyperledger/fabric-tools
    ...
This is the docker-compose.yml from this GitHub repository.
I know that I can deploy a network using this tutorial, but as I said, I want to deploy it using docker-compose. Thanks in advance!

If you want to use Composer, be aware that 'today' Composer supports Fabric 1.1, while you are looking at the Fabric 1.2 docs and will likely download the 'latest' Fabric 1.2 unless you specifically do something different. (Composer support for Fabric 1.2 is coming shortly. Keep an eye out for the release notes for the latest information.)
There is a Docker container for the Composer CLI which will enable you to deploy your BNA.
There are Composer tutorials on creating and deploying BNAs, but they start from the assumption that the Composer CLI is installed locally, not in a container, so you will need to adapt the approach. The key to connecting Composer to the Fabric network is the connection profile, so you will need to understand the addressing and networking aspects of your Docker environment to build the right connection profile.
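As a rough sketch (not from the original answer): assuming a Composer 0.19 / Fabric 1.1 setup, a business network card named PeerAdmin@hlfv1 whose connection profile points at the peers and orderer on the compose network, a BNA called my-network.bna in the current directory, and a compose network named composer_basic, the CLI container could be driven roughly like this:

# Hedged sketch: card name, compose network name, and file names are assumptions.
# Assumes the card store (.composer) and the BNA live in the current directory,
# which is mounted at the container's home directory.
docker run --rm --network composer_basic -v "$PWD":/home/composer \
  hyperledger/composer-cli:0.19 \
  composer network install --card PeerAdmin@hlfv1 --archiveFile my-network.bna

docker run --rm --network composer_basic -v "$PWD":/home/composer \
  hyperledger/composer-cli:0.19 \
  composer network start --networkName my-network --networkVersion 0.0.1 \
    --card PeerAdmin@hlfv1 --networkAdmin admin --networkAdminEnrollSecret adminpw \
    --file my-network-admin.card

The important part is that the addresses inside the PeerAdmin card's connection profile (for example peer0.org1.example.com:7051) must resolve on the Docker network the CLI container is attached to.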

Related

How to deploy a multi-container docker with GitHub actions to Azure web application?

I want to deploy a web application, which is built with Docker, to an Azure web app.
There are a lot of tutorials and documentation about how to easily deploy a single docker image into Azure. But how to deploy multiple images into Azure?
I want to achieve this:
Local development with Docker-Compose. Works.
Versioning with GitHub. Works.
GitHub Actions > building the Docker images and pushing them to Docker Hub (maybe not necessary if the images are registered in Azure). Works.
Deploy everything to Azure and run the web application.
There is a similar question here: How to deploy a multi-container app to Azure with a Github action?
But I want to avoid the manual step which is mentioned in the answer.
My docker-compose.yml:
version: '3.8'

services:
  server-app:
    image: tensorflow/serving
    command:
      - --model_config_file=/models/models.config
    ports:
      - 8501:8501
    container_name: TF_serving
    tty: true
    volumes:
      - type: bind
        source: ./content
        target: /models/

  client-app:
    build:
      context: ./client-app
      dockerfile: dockerfile
    image: user1234/client-app:latest
    restart: unless-stopped
    ports:
      - 7862:80
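One possible shape for the missing deploy step (a sketch with placeholder names, not a confirmed solution from this thread) is the azure/webapps-deploy GitHub Action, which accepts a compose file for multi-container web apps:

# Hedged sketch of a GitHub Actions deploy step; the app name and secret name are placeholders.
- name: Deploy multi-container app to Azure Web App
  uses: azure/webapps-deploy@v2
  with:
    app-name: my-multicontainer-webapp
    publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
    images: |
      tensorflow/serving
      user1234/client-app:latest
    configuration-file: docker-compose.yml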

I cannot establish connection between two containers in Heroku

I have a web application built using Node.js and MongoDB. I containerized the app using Docker and it was working fine locally, but once I tried to deploy it to production I couldn't establish a connection between the backend and the MongoDB container. For some reason the environment variables are always undefined.
Here is my docker-compose.yml:
version: "3.7"
services:
food-delivery-db:
image: mongo:4.4.10
restart: always
container_name: food-delivery-db
ports:
- "27018:27018"
environment:
MONGO_INITDB_DATABASE: food-delivery-db
volumes:
- food-delivery-db:/data/db
networks:
- food-delivery-network
food-delivery-app:
image: thisk8brd/food-delivery-app:prod
build:
context: .
target: prod
container_name: food-delivery-app
restart: always
volumes:
- .:/app
ports:
- "3000:5000"
depends_on:
- food-delivery-db
environment:
- MONGODB_URI=mongodb://food-delivery-db/food-delivery-db
networks:
- food-delivery-network
volumes:
food-delivery-db:
name: food-delivery-db
networks:
food-delivery-network:
name: food-delivery-network
driver: bridge
This is expected behaviour:
Docker images run in dynos the same way that slugs do, and under the same constraints:
…
Network linking of dynos is not supported.
Your MongoDB container is great for local development, but you can't use it in production on Heroku. Instead, you can select and provision an addon for your app and connect to it from your web container.
For example, ObjectRocket for MongoDB sets an environment variable ORMONGO_RS_URL. Your application would connect to the database via that environment variable instead of MONGODB_URI.
If you'd prefer to host your database elsewhere, that's fine too. I believe MongoDB Atlas is the official offering.
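As a rough CLI sketch (the add-on slug, default plan, and app name are assumptions, not taken from the answer), provisioning the add-on and checking the connection string it sets could look like:

# Hedged sketch: provision a managed MongoDB add-on and read the URL it exposes.
heroku addons:create ormongo --app food-delivery-app
heroku config:get ORMONGO_RS_URL --app food-delivery-app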

softHSM integration with Hyperledger Fabric

I am trying to integrate softHSM with Hyperledger Fabric. I have followed the below steps:
I have cloned the repo from this link
https://github.com/hyperledger/fabric-ca (main-branch)
Executed the three commands below from that directory. After execution, I got the new binaries and the new Fabric CA image.
make fabric-ca-server GO_TAGS=pkcs11
make fabric-ca-client GO_TAGS=pkcs11
make docker GO_TAGS=pkcs11
I have replaced the old binaries (fabric-ca-client and fabric-ca-server).
I am trying to spin up the Fabric CA in a Docker container, passing the environment variables as per the official documentation.
ORG1_RCA:
  image: hyperledger/fabric-ca:1.5.1
  container_name: ORG1_RCA
  environment:
    - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
    - FABRIC_CA_SERVER_CA_NAME=ORG1_RCA
    - FABRIC_CA_SERVER_TLS_ENABLED=true
    - FABRIC_CA_SERVER_PORT=7054
    - FABRIC_CA_SERVER_BCCSP_DEFAULT=PKCS11
    - FABRIC_CA_SERVER_BCCSP_PKCS11_LIBRARY=/etc/hyperledger/fabric/libsofthsm2.so
    - FABRIC_CA_SERVER_BCCSP_PKCS11_PIN=
    - FABRIC_CA_SERVER_BCCSP_PKCS11_LABEL=
  ports:
    - 7054:7054
  command: sh -c 'fabric-ca-server start -b admin:adminpw -d'
  environment:
    - SOFTHSM2_CONF=/etc/hyperledger/fabric/config.file
  volumes:
    - ./fabric-ca/verizon:/etc/hyperledger/fabric-ca-server
    - /home/softhsm/config.file:/etc/hyperledger/fabric/config.file
    - /usr/local/lib/softhsm/libsofthsm2.so:/etc/hyperledger/fabric/libsofthsm2.so
  networks:
    - contract
I am not providing the PIN and label here for security purposes. When I run this container, the private keys still get saved into the msp/keystore folder instead of the HSM.
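For reference (not part of the original question), the PKCS11 label and PIN that the CA is configured with normally refer to a SoftHSM token created beforehand with softhsm2-util; the label and PINs below are placeholders that would have to match whatever is actually passed to the container:

# Hedged sketch: initialize and inspect a SoftHSM token; label and PINs are placeholders.
softhsm2-util --init-token --slot 0 --label "ForFabric" --so-pin 1234 --pin 98765432
softhsm2-util --show-slots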

How to fix the error when trying to bring up first-network

I am working with Hyperledger Fabric 1.3.0. I get the following error when I execute the "byfn.sh -m up" in the fabric-samples/first-network.
Starting for channel 'mychannel' with CLI timeout of '10' seconds and CLI delay of '3' seconds
Continue? [Y/n] Y
proceeding ...
LOCAL_VERSION=1.3.0
DOCKER_IMAGE_VERSION=1.3.0
Error: No such container: cli
ERROR !!!! Test failed
Please help
I don't have a docker-compose.yaml, but what I do have is docker-compose-cli.yaml. The contents are below:
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

version: '2'

volumes:
  orderer.example.com:
  peer0.org1.example.com:
  peer1.org1.example.com:
  peer0.org2.example.com:
  peer1.org2.example.com:

networks:
  byfn:

services:

  orderer.example.com:
    extends:
      file: base/docker-compose-base.yaml
      service: orderer.example.com
    container_name: orderer.example.com
    networks:
      - byfn

  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer0.org1.example.com
    networks:
      - byfn

  peer1.org1.example.com:
    container_name: peer1.org1.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer1.org1.example.com
    networks:
      - byfn

  peer0.org2.example.com:
    container_name: peer0.org2.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer0.org2.example.com
    networks:
      - byfn

  peer1.org2.example.com:
    container_name: peer1.org2.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer1.org2.example.com
    networks:
      - byfn

  cli:
    container_name: cli
    image: hyperledger/fabric-tools:$IMAGE_TAG
    tty: true
    stdin_open: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      #- CORE_LOGGING_LEVEL=DEBUG
      - CORE_LOGGING_LEVEL=INFO
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: /bin/bash
    volumes:
      - /var/run/:/host/var/run/
      - ./../chaincode/:/opt/gopath/src/github.com/chaincode
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
      - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on:
      - orderer.example.com
      - peer0.org1.example.com
      - peer1.org1.example.com
      - peer0.org2.example.com
      - peer1.org2.example.com
    networks:
      - byfn
Well, it seems you are having some issues with your versions. What I recommend is to clean up all of your Docker containers and images by running:
docker rmi $(docker images -a -q)    # for images
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)         # for containers
After that, re-download the fabric-samples (I think they updated the 1.3.0 stable version yesterday) and set up your cryptogen path again (this is very important: if you are pointing to an old version of the cryptogen tool, it won't work!), as sketched below.
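A rough sketch of those two steps (the bootstrap script URL, version argument, and paths are assumptions about a typical setup, not taken from the answer):

# Hedged sketch: fetch fabric-samples plus the 1.3.0 binaries, then point PATH at the matching tools.
curl -sSL https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap.sh | bash -s 1.3.0
export PATH=$PWD/fabric-samples/bin:$PATH
cryptogen version    # should report 1.3.0, i.e. not an old cryptogen left on the PATH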
Then give it another try. If that doesn't work, I recommend you give us more information, such as:
Where are you running your First-Network? Windows? Mac? Linux?
Version of Linux? Version of docker?
If you are still having trouble, you can check out my guide on how to set up a Hyperledger Fabric network from scratch using the Basic-Network example; it's fairly easy and explains all the concepts you need:
Setup Hyperledger Fabric in multiple physical machines
Update
Since you are on Windows, don't use your Users folder; create a simple folder structure like C:/HLF, for example.
After that, add this line to your .env file: COMPOSE_CONVERT_WINDOWS_PATHS=1.
This helps Docker understand the Windows paths, because they are different from Linux paths.
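For example, the .env file could end up looking something like this (the first two entries are a guess at what a typical first-network .env already contains):

# sketch of fabric-samples/first-network/.env; the existing values are assumptions
COMPOSE_PROJECT_NAME=net
IMAGE_TAG=1.3.0
COMPOSE_CONVERT_WINDOWS_PATHS=1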
Update #2
Let's try another solution then. Go to script.sh inside your scripts folder, look for the peer channel create... command, and add the line MSYS_NO_PATHCONV=1 just before the if statement that contains it, as sketched below.
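A minimal sketch of that edit (the exact placement depends on your script.sh; export is used here so the variable reaches the peer binary):

# sketch: in scripts/script.sh, just before the if-statement around `peer channel create`
export MSYS_NO_PATHCONV=1   # stops Git Bash / MSYS from rewriting container paths on Windows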
Also review the Windows Extras section in the Fabric documentation and check that you have everything installed:
Hyperledger Fabric - Windows Extras
After that, regenerate everything. Run the Docker commands from my first answer and add this:
docker network prune
After that:
./byfn.sh down
./byfn.sh generate
./byfn.sh -m up
Update #3
I tested the first-network using Windows 10 and Docker for Windows (using Linux containers) with the configuration I mentioned before, and it's working fine.
My docker version is: 18.06.1-ce
And I followed the official Hyperledger Fabric tutorial: Build Network
The only difference I saw is that I ran ./byfn.sh up instead of ./byfn.sh -m up.
I recommend you reinstall your Docker for Windows; maybe something is corrupted that doesn't allow you to start your network.
Hope that it helps!

VSTS push docker-compose to Azure Container Registry and WebApp

I would like to configure continuous integration from VSTS to Azure Container Registry and then to WebApp.
Here's my docker-compose.yml file:
As you can see, I'm using ASP.NET Core + MSSQL.
version: '3'

services:
  api:
    image: tbacr.azurecr.io/myservice/api
    container_name: api
    build:
      context: ./Api
      dockerfile: Dockerfile
    ports:
      - "8000:80"
    depends_on:
      - db

  db:
    image: "microsoft/mssql-server-linux"
    container_name: mssql
    environment:
      SA_PASSWORD: "testtest3030!"
      ACCEPT_EULA: "Y"
      MSSQL_PID: "Developer"
    ports:
      - "127.0.0.1:8001:1433"
Here's my task from VSTS:
And I think the major task is Build Services and PublishServices
So, please take a look below:
Build Services
PublishServices
And finally, in Azure Container Registry I have:
So, the question is: how can I deploy it to the Web App? I have tried right-clicking the api:latest repository and deploying to a Web App, but the endpoint does not respond.
There are two steps in VSTS: build and release. It seems your build part is OK, so your Docker image is pushed to your registry. Then you have to configure the release part in VSTS, which will pull the image you just pushed to the registry and deploy it to a server.
HTH
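One concrete option for the release side (a sketch with placeholder resource names, not taken from the original answer) is to point a Web App for Containers at the compose file with the Azure CLI, for example from an Azure CLI task in the release pipeline:

# Hedged sketch: create a multi-container Web App from the compose file; names are placeholders.
az webapp create --resource-group my-rg --plan my-linux-plan --name my-multicontainer-app \
  --multicontainer-config-type compose --multicontainer-config-file docker-compose.yml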
