I can run the Hyperledger Fabric 1.0 "first network" sample fine.
Now I am trying to add CouchDB persistence to this sample as described at https://hyperledger-fabric.readthedocs.io/en/latest/build_network.html#using-couchdb and https://hyperledger-fabric.readthedocs.io/en/latest/build_network.html#a-note-on-data-persistence
I edit the networkUp() function in fabric-samples/first-network/byfn.sh changing the line from:
CHANNEL_NAME=$CHANNEL_NAME TIMEOUT=$CLI_TIMEOUT docker-compose -f $COMPOSE_FILE up -d 2>&1
to:
CHANNEL_NAME=$CHANNEL_NAME TIMEOUT=$CLI_TIMEOUT docker-compose -f docker-compose-cli.yaml -f docker-compose-couch.yaml up -d 2>&1
I also edit the file fabric-samples/first-network/docker-compose-couch.yaml, changing this block:
services:
  couchdb0:
    container_name: couchdb0
    image: hyperledger/fabric-couchdb
    ports:
      - "5984:5984"
    networks:
      - byfn
to:
services:
  couchdb0:
    container_name: couchdb0
    image: hyperledger/fabric-couchdb
    ports:
      - "5984:5984"
    networks:
      - byfn
    volumes:
      - /var/hyperledger/couchdb0:/opt/couchdb/data
When I run it with commands:
yes | sudo ./byfn.sh -m generate
yes | sudo ./byfn.sh -m up
Right after it lists 'Channel "mychannel" is created successfully', I get the error:
UTC [grpc] Printf -> DEBU 003 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 172.18.0.8:7051: getsockopt: connection refused"; Reconnecting to {peer0.org1.example.com:7051 <nil>}
Any help greatly appreciated.
Thanks in advance!
This issue was not happening on RHEL 7.3, so I updated my CentOS from 7.2 to 7.3 as described at http://www.itzgeek.com/how-tos/linux/centos-how-tos/how-to-update-centos-7-07-17-2-to-centos-7-3.html and my issue was resolved.
Changes made:
1) Created a folder:
mkdir /home/vagrant/db0
2) Changed in networkUp() of byfn.sh:
# CHANNEL_NAME=$CHANNEL_NAME TIMEOUT=$CLI_TIMEOUT docker-compose -f $COMPOSE_FILE up -d 2>&1
CHANNEL_NAME=$CHANNEL_NAME TIMEOUT=$CLI_TIMEOUT docker-compose -f docker-compose-cli.yaml -f docker-compose-couch.yaml up -d 2>&1
3) Added the last two "volumes:" lines in docker-compose-couch.yaml (a quick check follows these steps):
services:
  couchdb0:
    container_name: couchdb0
    image: hyperledger/fabric-couchdb
    ports:
      - "5984:5984"
    networks:
      - byfn
    volumes:
      - /home/vagrant/db0:/opt/couchdb/data
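To confirm the change actually takes effect, a quick check along these lines can help (a sketch based on the port and host path above, not part of the original fix):
# Verify CouchDB answers on the published port and writes into the mounted host folder.
curl -s http://localhost:5984/      # should return CouchDB's welcome JSON
docker ps --filter name=couchdb0    # the container should be listed as Up
ls /home/vagrant/db0                # should no longer be empty once CouchDB has started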
The Docker container starts as the couchdb user. So when you try to share the volume for the data directory, the container tries to create this folder and assign ownership to root. I think the user variable should be left blank in the Dockerfile instead of specifying it as couchdb. My environment is RHEL release 7.3 (Maipo). To work around this, I started the container with the following command, specifying the user as root:
docker run --rm -itd --name couchdb0 --user root \
--publish 5984:5984 \
--volume /var/hyperledger/couchdb0:/opt/couchdb/data \
hyperledger/fabric-couchdb:x86_64-1.0.0
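If running the whole container as root is undesirable, another option (a sketch, untested here) is to pre-create the host directory and hand it to whatever UID the couchdb user has inside the image:
# Assumption: the image defines a 'couchdb' user (as described above) and ships the id command.
COUCH_UID=$(docker run --rm --entrypoint id hyperledger/fabric-couchdb:x86_64-1.0.0 -u couchdb)
sudo mkdir -p /var/hyperledger/couchdb0
sudo chown "$COUCH_UID" /var/hyperledger/couchdb0   # the couchdb user can now write to the bind mount
With the ownership fixed, the container can keep its default (non-root) user.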
I have a project that uses Redis.
This is my .gitlab-ci.yml:
image: node:latest
stages:
  - build
  - deploy
build:
  image: docker:stable
  services:
    - docker:dind
  stage: build
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build --pull -t "$DOCKER_IMAGE" .
    - docker push "$DOCKER_IMAGE"
deploy:
  stage: deploy
  services:
    - redis:latest
  script:
    - ssh $SSH_PRIVATE_KEY $USER_NAME@$HOST_NAME docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - ssh $SSH_PRIVATE_KEY $USER_NAME@$HOST_NAME
      docker run --name=$CI_PROJECT_NAME --restart=always --network="host" -v "/app/storage:/src/storage" -d $DOCKER_IMAGE
    - ssh -o StrictHostKeyChecking=no -i $SSH_PRIVATE_KEY $USER_NAME@$HOST_NAME docker compose up
And this is my docker-compose.yml:
version: "3.2"
services:
redis:
image: redis:latest
volumes:
- redisdata:/data/
command: --port 6379
ports:
- "6379:6379"
expose:
- "6379"
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 5s
retries: 5
volumes:
redisdata:
Everything is OK, but the last line does not work for me:
- ssh -o StrictHostKeyChecking=no -i $SSH_PRIVATE_KEY $USER_NAME@$HOST_NAME docker compose up
So my Redis image is not running on the host.
Note: I want one stack with two images: my project and Redis.
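For illustration only, here is one way that last deploy step is often written, assuming docker-compose is installed on the remote host and the compose file is copied over first (the /app path is just a placeholder):
# Copy the compose file to the host, then bring the stack up detached.
scp -o StrictHostKeyChecking=no -i $SSH_PRIVATE_KEY docker-compose.yml $USER_NAME@$HOST_NAME:/app/docker-compose.yml
ssh -o StrictHostKeyChecking=no -i $SSH_PRIVATE_KEY $USER_NAME@$HOST_NAME "cd /app && docker-compose up -d"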
In my docker-compose file there are more than 3 services. I am passing two variables to the docker-compose command from a makefile. But I'm facing a problem: after the first command executes, the similar second command does not execute.
See this example for a better understanding.
The docker-compose file is:
version: '3.7'
services:
  ping:
    container_name: ping_svc
    image: "${PING_IMAGE_NAME}${PING_IMAGE_TAG}"
    ports:
      - 8080:8080
    command: serve
    environment:
      - CONSUL_URL=consul_dev:8500
      - CONSUL_PATH=ping
    tty: true
  id:
    container_name: id_svc
    image: "${ID_IMAGE_NAME}${ID_IMAGE_TAG}"
    ports:
      - 8081:8081
    command: serve
    environment:
      - CONSUL_URL=consul_dev:8500
      - CONSUL_PATH=id
    tty: true
And my makefile commands are:
# setting ping_image
@PING_IMAGE_NAME="ping-svc:" PING_IMAGE_TAG="1.0" docker-compose up -d
# setting id_image
@ID_IMAGE_NAME="id-svc:" ID_IMAGE_TAG="1.0" docker-compose up -d
PING_IMAGE_NAME and PING_IMAGE_TAG were set successfully, but the next line does not execute. Why?
Is there a better way to do this?
I solved this by putting all the variables on one line, like this:
@ID_IMAGE_NAME="id-svc:" ID_IMAGE_TAG="1.0" \
PING_IMAGE_NAME="ping-svc:" PING_IMAGE_TAG="1.0" \
docker-compose up -d ping id
Here ping and id are my service names.
Maybe the issue was that each line was running docker-compose up again.
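Another option (a sketch, untested here) is to put the variables in a .env file next to the compose file; docker-compose reads .env automatically for variable substitution, so the makefile only needs one command:
# Write the image variables once, then a single docker-compose invocation picks them up.
cat > .env <<'EOF'
PING_IMAGE_NAME=ping-svc:
PING_IMAGE_TAG=1.0
ID_IMAGE_NAME=id-svc:
ID_IMAGE_TAG=1.0
EOF
docker-compose up -d ping id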
I'm trying to set up a lab using Docker containers with a centos7 base image and docker-compose.
Here is my docker-compose.yaml file:
version: "3"
services:
base:
image: centos_base
build:
context: base
master:
links:
- base
build:
context: master
image: centos_master
container_name: master01
hostname: master01
volumes:
- ansible_vol:/var/ans
networks:
- net
host01:
links:
- base
- master
build:
context: host
image: centos_host
container_name: host01
hostname: host01
command: ["/var/run.sh"]
volumes:
- ansible_vol:/var/ans
networks:
- net
networks:
net:
volumes:
ansible_vol:
My Dockerfiles are below.
Base image Dockerfile:
# For centos7.0
FROM centos:7
RUN yum install -y net-tools man vim initscripts openssh-server
RUN echo "12345" | passwd root --stdin
RUN mkdir /root/.ssh
Master Dockerfile:
FROM centos_base:latest
# install ansible package
RUN yum install -y epel-release
RUN yum install -y ansible openssh-clients
RUN mkdir /var/ans
# change working directory
WORKDIR /var/ans
RUN ssh-keygen -t rsa -N 12345 -C "master key" -f master_key
CMD /usr/sbin/sshd -D
Host Image Dockerfile:
FROM centos_base:latest
RUN mkdir /var/ans
COPY run.sh /var/
RUN chmod 755 /var/run.sh
My run.sh file:
#!/bin/bash
cat /var/ans/master_key.pub >> /root/.ssh/authorized_keys
# start SSH server
/usr/sbin/sshd -D
My Problems are:
If I run docker-compose up -d --build, I see no containers running; they all get created but then exit.
Successfully tagged centos_host:latest
Creating working_base_1 ... done
Creating master01 ... done
Creating host01 ... done
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
433baf2dd0d8 centos_host "/var/run.sh" 12 minutes ago Exited (1) 12 minutes ago host01
a2a57e480635 centos_master "/bin/sh -c '/usr/sb…" 13 minutes ago Exited (1) 12 minutes ago master01
a4acf6fb3e7b centos_base "/bin/bash" 13 minutes ago Exited (0) 13 minutes ago working_base_1
The ssh keys generated in the 'centos_master' image are not available in the centos_host container, even though I have added the volume mapping 'ansible_vol:/var/ans' in the docker-compose file.
My intention is that the ssh key files generated in master should be available in the host containers, so that the run.sh script can copy them into the authorized_keys of the host containers.
Any help is greatly appreciated.
Try putting this in base/Dockerfile:
RUN echo "12345" | passwd root --stdin; \
ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -b 4096 -t rsa
and rerun docker-compose build
/etc/ssh/ssh_host_rsa_key is the key used by sshd (the ssh daemon), so that the containers can start properly.
The key you generated and copied into authorized_keys will be used to allow an ssh client to connect to the container via ssh.
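To check whether that fixes the exiting containers, something like this can be used (a sketch based on the service and container names above):
# Rebuild so the base image change is picked up, then confirm sshd keeps the containers alive.
docker-compose build
docker-compose up -d
docker ps --filter name=master01 --filter name=host01   # both should show STATUS "Up ..."
docker exec master01 ls /var/ans                        # master_key and master_key.pub should be in the shared volume
docker logs host01                                      # any remaining error from run.sh will show up here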
Try using external: false, so that bringing the stack up does not attempt to recreate the volume and override the previous data:
version: "3"
services:
base:
image: centos_base
build:
context: base
master:
links:
- base
build:
context: master
image: centos_master
container_name: master01
hostname: master01
volumes:
- ansible_vol:/var/ans
networks:
- net
host01:
links:
- base
- master
build:
context: host
image: centos_host
container_name: host01
hostname: host01
command: ["/var/run.sh"]
volumes:
- ansible_vol:/var/ans
networks:
- net
networks:
net:
volumes:
ansible_vol:
external: false
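To see whether the named volume really survives between runs, it can be inspected directly (a sketch; the volume name is prefixed with the compose project name, so it may be, for example, working_ansible_vol given the working_base_1 container above):
docker volume ls | grep ansible_vol          # find the project-prefixed volume name
docker volume inspect working_ansible_vol    # assumption: the project prefix is "working"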
I am working with Hyperledger Fabric 1.3.0. I get the following error when I execute "byfn.sh -m up" in fabric-samples/first-network.
Starting for channel 'mychannel' with CLI timeout of '10' seconds and CLI delay of '3' seconds
Continue? [Y/n] Y
proceeding ...
LOCAL_VERSION=1.3.0
DOCKER_IMAGE_VERSION=1.3.0
Error: No such container: cli
ERROR !!!! Test failed
Please help
I don't have a docker-compose.yaml, but what I do have is docker-compose-cli.yaml. The contents are below:
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'

volumes:
  orderer.example.com:
  peer0.org1.example.com:
  peer1.org1.example.com:
  peer0.org2.example.com:
  peer1.org2.example.com:

networks:
  byfn:

services:
  orderer.example.com:
    extends:
      file: base/docker-compose-base.yaml
      service: orderer.example.com
    container_name: orderer.example.com
    networks:
      - byfn

  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer0.org1.example.com
    networks:
      - byfn

  peer1.org1.example.com:
    container_name: peer1.org1.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer1.org1.example.com
    networks:
      - byfn

  peer0.org2.example.com:
    container_name: peer0.org2.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer0.org2.example.com
    networks:
      - byfn

  peer1.org2.example.com:
    container_name: peer1.org2.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer1.org2.example.com
    networks:
      - byfn

  cli:
    container_name: cli
    image: hyperledger/fabric-tools:$IMAGE_TAG
    tty: true
    stdin_open: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      #- CORE_LOGGING_LEVEL=DEBUG
      - CORE_LOGGING_LEVEL=INFO
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: /bin/bash
    volumes:
      - /var/run/:/host/var/run/
      - ./../chaincode/:/opt/gopath/src/github.com/chaincode
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
      - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on:
      - orderer.example.com
      - peer0.org1.example.com
      - peer1.org1.example.com
      - peer0.org2.example.com
      - peer1.org2.example.com
    networks:
      - byfn
Well, it seems you are having some issues with your versions. What I recommend is to clean all of your Docker containers and images:
docker stop $(docker ps -a -q)       # stop all containers
docker rm $(docker ps -a -q)         # remove all containers
docker rmi $(docker images -a -q)    # remove all images
After that, re-download the fabric-samples (I think they updated the 1.3.0 stable version yesterday) and set up your cryptogen path again. This is very important: if you are pointing to an old version of the cryptogen tool, it won't work!
And give it another try. If that doesn't work, I recommend you give us more information, such as:
Where are you running your First-Network? Windows? Mac? Linux?
Version of Linux? Version of docker?
If you are still having trouble, you can check out my guide on how to set up Hyperledger Fabric from scratch using the Basic-Network example; it's fairly easy and explains all the concepts you need.
Setup Hyperledger Fabric in multiple physical machines
Update
Since you are on Windows, don't use your Users folder; create a simple folder structure like C:/HLF, for example.
After that, add this line to your .env file: COMPOSE_CONVERT_WINDOWS_PATHS=1.
This helps Docker understand Windows paths, since they are handled differently than on Linux.
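Concretely, that just means appending one line to the .env that docker-compose reads in first-network (a sketch; the existing contents of the file stay as they are):
echo "COMPOSE_CONVERT_WINDOWS_PATHS=1" >> .env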
Update #2
Let's try another solution, then. Go to the script.sh inside your scripts folder, look for the peer channel create... command, and add MSYS_NO_PATHCONV=1 right before it (just before the if statement it sits in).
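The edited line then looks roughly like this (a sketch only; the exact flags depend on your script.sh version):
# Prefixing the command stops Git Bash / MSYS from rewriting the container paths.
MSYS_NO_PATHCONV=1 peer channel create -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/channel.tx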
Review the Windows Extras section in the Fabric documentation and check that you have everything installed.
Hyperledger Fabric - Windows Extras
After that, regenerate everything. Run the Docker commands in my first answer and also run this:
docker network prune
After that:
./byfn.sh down
./byfn.sh generate
./byfn.sh -m up
Update #3
I tested the first-network using Windows 10 and Docker for Windows (using Linux containers) with the configuration I mentioned before, and it's working fine.
My docker version is: 18.06.1-ce
And I followed the Fabric Hyperledger official tutorial: Build Network
The only difference I saw is that I ran ./byfn.sh up instead of ./byfn.sh -m up.
I recommend you reinstall your Docker for Windows; maybe something corrupted is preventing your network from starting.
Hope that it helps!
At the command line I am executing:
fabric-ca-client register --id.name <> --id.type peer --id.affiliation peerorgs.1A --id.attrs <>
I am getting the error below:
"Failed getting affiliation"
But the affiliation entry is present in fabric-ca-server.db. Can someone help me understand why I'm getting this error?
Thanks,
Smitha
Assuming you are using Docker and Docker Compose, then you should be able to do the following:
1) Use a docker-compose.yaml with the following contents (this is a slightly modified version of the one in the repo):
#
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
fabric-ca-server:
  image: hyperledger/fabric-ca
  container_name: fabric-ca-server
  ports:
    - "7054:7054"
  environment:
    - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
    - FABRIC_CA_SERVER_DEBUG=true
  volumes:
    - "./fabric-ca-server:/etc/hyperledger/fabric-ca-server"
  command: sh -c 'fabric-ca-server start -b admin:adminpw'
2) The above mounts a volume where you can place your customized fabric-ca-server-config.yaml file. Simply create a directory named fabric-ca-server in the same directory as the docker-compose.yaml and then copy your fabric-ca-server-config.yaml there (a sketch of the relevant affiliations section follows these steps).
3) Run docker-compose up and check the logs. You should see that your affiliations have been created
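For the original question, the part of fabric-ca-server-config.yaml that matters is the affiliations section; registering with --id.affiliation peerorgs.1A requires an entry along these lines (a sketch, adjust to your actual hierarchy), and you can quickly check what the mounted config contains:
# The affiliation must exist under "affiliations:" in fabric-ca-server/fabric-ca-server-config.yaml
# before the server is (re)started, for example:
#
#   affiliations:
#     peerorgs:
#       - 1A
#
grep -A3 "affiliations:" fabric-ca-server/fabric-ca-server-config.yaml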