I have a question regarding environment variables that are passed through the docker-compose file.
I have chaincode that performs security checks when security is enabled. It checks whether security is enabled through the core.SecurityEnabled() API. I enable/disable security using the docker-compose environment variable CORE_SECURITY_ENABLED.
This works fine in dev mode. However, when I deploy the chaincode in non-dev mode, core.SecurityEnabled() returns false although my environment variable is passed as true. I examined the Docker containers: the container running the peer shows CORE_SECURITY_ENABLED=true in the output of the env command, but the container running the chaincode does not have the CORE_SECURITY_ENABLED variable at all, so it would be picking up the value from core.yaml, which is set to false.
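For reference, the check I ran looked roughly like this (the chaincode container name below is a placeholder; the actual name can be found with docker ps):

# on the Docker host: the peer container has the variable
docker exec vp0 env | grep CORE_SECURITY_ENABLED
# -> CORE_SECURITY_ENABLED=true

# the chaincode container does not (container name is a placeholder)
docker exec dev-vp0-mycc-1.0 env | grep CORE_SECURITY_ENABLED
# -> (no output)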
Is this by design? In production mode, should we make changes in the core.yaml file rather than depend on an environment variable passed through docker-compose?
I am using the docker-compose file below to set CORE_SECURITY_ENABLED=true/false. Have you tried specifying the environment variables in this manner?
membersrvc:
  image: hyperledger/fabric-membersrvc
  ports:
    - "7054:7054"
  command: membersrvc
vp0:
  image: hyperledger/fabric-peer
  ports:
    - "8085:7050"
    - "8080:7053"
    - "30303:30303"
    - "30304:30304"
  environment:
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_SECURITY_ENABLED=true
    - CORE_SECURITY_PRIVACY=true
    - CORE_VM_ENDPOINT=http://172.17.0.1:2375
    - CORE_PEER_PKI_ECA_PADDR=membersrvc:7054
    - CORE_PEER_PKI_TCA_PADDR=membersrvc:7054
    - CORE_PEER_PKI_TLSCA_PADDR=membersrvc:7054
    - CORE_LOGGING_LEVEL=DEBUG
    - CORE_PEER_ID=vp0
    - CORE_SECURITY_ENROLLID=test_vp0
    - CORE_SECURITY_ENROLLSECRET=MwYpmSRjupbT
  links:
    - membersrvc
  command: sh -c "sleep 35; peer node start --logging-level=DEBUG"
If you already got your query answered from the FAB JIRA, kindly ignore this post.
Related
I am trying to access a Docker container that exposes an Express API (using Docker Compose services) in GitLab CI, in order to run a number of tests against it.
I set up and instantiate the Docker services as one job, then attempt to access the API via axios requests in my tests. I have set 0.0.0.0 as the endpoint base.
However, I keep receiving the error:
[Error: connect ECONNREFUSED 0.0.0.0:3000]
My docker-compose.yml:
version: "3"
services:
st-sample:
container_name: st-sample
image: sample
restart: always
build: .
expose:
- "3000"
ports:
- "3000:3000"
links:
- mongo
mongo:
container_name: mongo
image: mongo
volumes:
- /sampledb
expose:
- "27017"
ports:
- "27017:27017"
My gitlab-ci.yml:
image: docker:latest

services:
  - node
  - mongo
  - docker:dind

stages:
  - prepare_image
  - setup_application
  - test
  - teardown_application

prepare_image:
  stage: prepare_image
  script:
    - docker build -t sample .

setup_application:
  stage: setup_application
  script:
    - docker-compose -f docker-compose.yml up -d

test:
  image: node:latest
  stage: test
  allow_failure: true
  before_script:
    - npm install
  script:
    - npm test

teardown_application:
  stage: teardown_application
  script:
    - docker-compose -f docker-compose.yml stop
Note that I have also registered the runner on my machine, giving it privileged permissions.
Locally everything works as expected: the Docker containers are initiated and accessed for the tests.
However, I am unable to do this via GitLab CI. The Docker containers build and get set up normally, but I am unable to access the exposed API.
I have tried many things, like setting a hostname for accessing the container, setting a static IP, using the container name, etc., but with no success; I just keep receiving ECONNREFUSED.
I understand that GitLab runners have their own network isolation strategy for security reasons, but I am just unable to expose the Docker service to be tested.
Can you give me some insight into this, please? Thank you.
I finally figured this out, after 4 days of reading, searching, and lots of trial and error. The job running the tests was in a different container from the ones that exposed the API and the database.
I resolved this by creating a Docker network on the machine the runner was on:
sudo docker network create mynetwork
After that, I added the network to the docker-compose.yml file with the external config, and associated both services with it:
st-sample:
  # ....
  networks:
    - mynetwork
mongo:
  # ....
  networks:
    - mynetwork
networks:
  mynetwork:
    external: true
Also, I created a custom Docker image that includes the tests (name: test), and in gitlab-ci.yml I set up the job to run it within mynetwork:
docker run --network=mynetwork test
After that, the containers/services were accessible to each other by their names, so I was able to run the tests against http://st-sample.
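For completeness, a minimal sketch of what the test job ended up looking like (Dockerfile.test is a hypothetical file name here; adapt it to however the test image is actually built):

test:
  stage: test
  script:
    # build the test image, then run it on the same external network,
    # so that http://st-sample:3000 resolves from inside the test container
    - docker build -t test -f Dockerfile.test .
    - docker run --network=mynetwork test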
It was a long journey to figure it all out, but it was well worth it; I learned a lot!
docker compose up is able to build and bring up the services locally, but when doing the same on Azure Container Instances I get the error below:
containerinstance.ContainerGroupsClient#CreateOrUpdate: Failure
sending request: StatusCode=400 -- Original Error:
Code="InaccessibleImage" Message="The image
'docker/aci-hostnames-sidecar:1.0' in container group 'djangodeploy'
is not accessible. Please check the image and registry credential."
Also, what is the purpose of the docker/aci-hostnames-sidecar image?
The ACI deployment was working fine, and now suddenly it doesn't work anymore.
The docker-compose.yml contents are provided below:
version: '3.7'
services:
  django-gunicorn:
    image: oasisdatacr.azurecr.io/oasis_django:latest
    env_file:
      - .env
    build:
      context: .
    command: >
      sh -c "
      python3 manage.py makemigrations &&
      python3 manage.py migrate &&
      python3 manage.py wait_for_db &&
      python3 manage.py runserver 0:8000"
    ports:
      - "8000:8000"
  celery-worker:
    image: oasisdatacr.azurecr.io/oasis_django_celery_worker:latest
    restart: always
    build:
      context: .
    command: celery -A oasis worker -l INFO
    env_file:
      - .env
    depends_on:
      - django-gunicorn
  celery-beat:
    image: oasisdatacr.azurecr.io/oasis_django_celery_beat:latest
    build:
      context: .
    command: celery -A oasis beat -l INFO
    env_file:
      - .env
    depends_on:
      - django-gunicorn
      - celery-worker
UPDATE: There might have been some issue on Azure's end, as I was later able to deploy the containers as I usually do, without any changes whatsoever.
When you use docker-compose to deploy multiple containers to ACI, you first need to build the images locally and then push them to your ACR with the command docker-compose push; of course, you need to log in to your ACR first. See the example here.
And if you have already pushed the images to your ACR, then make sure you logged in to your ACR with the right credentials and that the image name, including its tag, is exactly right.
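A minimal sketch of that flow, using the registry name from the question's compose file and assuming the Azure CLI and the Docker ACI integration are available:

# log in to the registry referenced by the image names in docker-compose.yml
az acr login --name oasisdatacr

# build the images locally, then push them to ACR
docker-compose build
docker-compose push

# deploy to ACI through a Docker ACI context
docker context create aci myacicontext
docker --context myacicontext compose up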
I am trying to integrate SoftHSM with Hyperledger Fabric. I have followed the steps below:
I cloned the repo from this link:
https://github.com/hyperledger/fabric-ca (main branch)
I executed the 3 commands below from that directory. After execution, I got the new binaries and the new Fabric CA image.
make fabric-ca-server GO_TAGS=pkcs11
make fabric-ca-client GO_TAGS=pkcs11
make docker GO_TAGS=pkcs11
I replaced the old binaries (fabric-ca-client and fabric-ca-server).
I am trying to spin up the Fabric CA in a Docker container, passing the environment variables as per the official documentation.
ORG1_RCA:
  image: hyperledger/fabric-ca:1.5.1
  container_name: ORG1_RCA
  environment:
    - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
    - FABRIC_CA_SERVER_CA_NAME=ORG1_RCA
    - FABRIC_CA_SERVER_TLS_ENABLED=true
    - FABRIC_CA_SERVER_PORT=7054
    - FABRIC_CA_SERVER_BCCSP_DEFAULT=PKCS11
    - FABRIC_CA_SERVER_BCCSP_PKCS11_LIBRARY=/etc/hyperledger/fabric/libsofthsm2.so
    - FABRIC_CA_SERVER_BCCSP_PKCS11_PIN=
    - FABRIC_CA_SERVER_BCCSP_PKCS11_LABEL=
  ports:
    - 7054:7054
  command: sh -c 'fabric-ca-server start -b admin:adminpw -d'
  environment:
    - SOFTHSM2_CONF=/etc/hyperledger/fabric/config.file
  volumes:
    - ./fabric-ca/verizon:/etc/hyperledger/fabric-ca-server
    - /home/softhsm/config.file:/etc/hyperledger/fabric/config.file
    - /usr/local/lib/softhsm/libsofthsm2.so:/etc/hyperledger/fabric/libsofthsm2.so
  networks:
    - contract
I am not providing the PIN and label for security purposes. When I run this container, the private keys are still saved into the msp/keystore folder instead of the HSM.
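As a sanity check (assuming softhsm2-util is available inside the image, which may not be the case), one way to confirm the container can actually see the SoftHSM token through the mounted config is:

# list the slots/tokens visible through the mounted SOFTHSM2_CONF
docker exec -e SOFTHSM2_CONF=/etc/hyperledger/fabric/config.file \
  ORG1_RCA softhsm2-util --show-slots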
I am following the official tutorial about deploying a Hyperledger Composer blockchain business network to Hyperledger Fabric (multiple organizations). I was able to bring up the network using the provided Org1 and Org2 example. Now I want to customize the organizations as my own, but upon executing the ./byfn.sh -m up -s couchdb -a command I get the error below. I inspected all the YAML files but was not able to find the possible root cause of the error. I really need help on this. Thank you.
Starting for channel 'mychannel' with CLI timeout of '10' seconds and CLI delay of '3' seconds and using database 'couchdb', and using Fabric CAs
Continue? [Y/n] Y
proceeding ...
LOCAL_VERSION=1.2.0
DOCKER_IMAGE_VERSION=1.2.0
WARNING: The COMPOSE_PROJECT_NAME variable is not set. Defaulting to a blank string.
ERROR: The Compose file is invalid because:
Service peer0.org2.example.com has neither an image nor a build context specified. At least one must be provided.
ERROR !!!! Unable to start network
It looks like your peer-base.yaml file is not correct. One problem is the COMPOSE_PROJECT_NAME variable. If it is not set, Fabric uses the folder name as the network name, but if that is not right there will be errors while bootstrapping the network. We are building a bidding network called trade-network, so the entry in our peer-base.yaml file looks like this:
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_basic
Before bootstrapping, we define COMPOSE_PROJECT_NAME as trade-network, so the network is called trade-network_basic. I'm not 100% sure, but I think after (or while) bootstrapping there is a point where Fabric uses the folder name anyway, so we decided to use the folder name by default and nothing went wrong.
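A minimal sketch of setting the variable before bringing the network up (the compose file name is the usual default, not taken from the question):

# set the project name so the network is created as trade-network_basic
export COMPOSE_PROJECT_NAME=trade-network
docker-compose -f docker-compose.yml up -d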
The other problem could be the image entry for the peer. In our file it is:
image: hyperledger/fabric-peer:x86_64-1.1.0
You can run docker images to list the images you have; you have to use one of them for the peers. After the colon you can pin a specific tag, and I would suggest doing so.
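For instance, to check which fabric-peer images and tags are available locally:

# list local images for the fabric-peer repository, with their tags
docker images hyperledger/fabric-peer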
Here is an example of our full peer-base.yaml file:
version: '2'
services:
  peer-base:
    image: hyperledger/fabric-peer:x86_64-1.1.0
    environment:
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # the following setting starts chaincode containers on the same
      # bridge network as the peers
      # https://docs.docker.com/compose/networking/
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_basic
      #- CORE_LOGGING_LEVEL=INFO
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_GOSSIP_USELEADERELECTION=true
      - CORE_PEER_GOSSIP_ORGLEADER=false
      - CORE_PEER_PROFILE_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: peer node start
I'm having an issue running Docker Compose. Specifically, I'm getting this error:
ERROR: The Compose file './docker-compose.yml' is invalid because:
services.login-service.environment contains {"REDIS_HOST": "redis-server"}, which is an invalid type, it should be a string
And here's my yml file:
version: '3'
services:
  redis:
    image: redis
    ports:
      - 6379:6379
    networks:
      - my-network
  login-service:
    tty: true
    build: .
    volumes:
      - ./:/usr/src/app
    ports:
      - 3001:3001
    depends_on:
      - redis
    networks:
      - my-network
    environment:
      - REDIS_HOST: redis
    command: bash -c "./wait-for-it.sh redis:6379 -- npm install && npm run dev"
networks:
  my-network:
Clearly the issue is where I set my environment variable, even though I've seen multiple tutorials that use the same syntax. The purpose of it is to set REDIS_HOST to whatever IP address Docker assigns to Redis when building the image. Any insights on what I may need to change to get this working?
There are two different ways of writing it: one with an = sign and the other with a : sign. Check the following examples for more information.
Docker Compose environment variables with the = sign:
version: '3'
services:
  webserver:
    environment:
      - USER=john
      - EMAIL=john@gmail.com
Docker Compose environment variables with the : sign:
version: '3'
services:
  webserver:
    environment:
      USER: john
      EMAIL: john@gmail.com
It happens because of the leading dash. You can try it without the dash, like below:
environment:
  REDIS_HOST: redis
For me, when I had two environment variables for a service in my docker-compose.yml file, I was getting the error services.web.environment.1 must be a string, because one of the environment variables was:
- REDIS_DATABASE_PASSWORD: ${REDIS_DATABASE_PASSWORD}
With the leading dash, each environment entry is a list item and must be a plain KEY=VALUE string; this entry is parsed as a mapping instead, so it is not allowed and does not work.
When using - REDIS_DATABASE_PASSWORD: ${REDIS_DATABASE_PASSWORD}, your IDE's syntax highlighting will show REDIS_DATABASE_PASSWORD in blue and ${REDIS_DATABASE_PASSWORD} in red, a hint that the entry is not a single string, and an environment entry in list form must be a string. The way to solve this is to set the environment variable in your docker-compose.yml file using the syntax:
- REDIS_DATABASE_PASSWORD=${REDIS_DATABASE_PASSWORD}
Now your IDE should show the whole entry in red, meaning it is a string.
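Putting it together, a minimal sketch, assuming the value is supplied via a .env file next to docker-compose.yml (the service and image names here are hypothetical):

services:
  web:
    image: myapp          # hypothetical image name
    environment:
      # list syntax: each entry must be a single KEY=VALUE string
      - REDIS_HOST=redis
      - REDIS_DATABASE_PASSWORD=${REDIS_DATABASE_PASSWORD}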