Azure Hyperledger Fabric Single Member Blockchain setup

I'm starting to use Azure to host a multi-node Hyperledger network. I've previously been running in a local environment, but would like to move to Azure. I've deployed the 'Hyperledger Fabric Single Member Blockchain' template, which creates five VMs (one for a CA, one for an orderer, and three for peers). My local environment uses a CLI, but this doesn't seem to be included in the Azure template. How do I interact with the blockchain network without the CLI?
Are there any tutorials for how to use the deployed environment after it's been set up?
I've SSH'd into each of the three VMs that host the peers, and there seems to be an error with the Docker container that the Azure template set up automatically. Running 'docker ps' shows that the containers are restarting regularly, and I can't connect using docker exec -it <container> bash (I get an error saying that the container is restarting). E.g.:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
817bbb256e5b hyperledger/fabric-peer:x86_64-1.0.0-alpha "peer node start -..." 7 hours ago Restarting (2) 6 seconds ago sad_clarke
Does anyone have the Hyperledger Fabric Single Member Blockchain template working?
Thanks
Paul

To interact with Hyperledger on Azure, you may indeed SSH into one of the VMs. Once there, you can use docker exec -it <container-name> bash to get a prompt in the Docker container running on that VM (CA, peer or orderer). In there you can issue commands.
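For example (peer0 below is an assumed container name; list the real names with docker ps first):
docker ps                      # find the peer container's name
docker exec -it peer0 bash     # open a shell inside it (peer0 is an assumed name)
peer channel list              # example peer CLI command; availability depends on your Fabric version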
I got the docker restarting error you mentioned after rebooting the VMs (but not immediately after deployment). I believe there is a bug in the MS deployment script: configure-fabric-azureuser.sh uses cacert="/etc/hyperledger/fabric-ca-server-config/${PEER_ORG_DOMAIN}-cert.pem". After replacing that with cacert="/etc/hyperledger/fabric-ca-server-config/ca.${PEER_ORG_DOMAIN}-cert.pem", all my problems with the docker images restarting and with running the Hyperledger Composer tutorial were gone. I posted a fix at https://github.com/fabienpe/azure-hyperledger-artifacts
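A hypothetical one-liner to apply that fix in place (the script's location on each VM is an assumption; find it first, and the -i.bak keeps a backup):
sudo sed -i.bak 's|config/${PEER_ORG_DOMAIN}-cert.pem|config/ca.${PEER_ORG_DOMAIN}-cert.pem|' configure-fabric-azureuser.sh   # single quotes keep ${PEER_ORG_DOMAIN} literal so sed matches the script text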
Have you checked the config.log files on each VM?

Related

Azure App Service doesn't install Docker properly

I am currently trying to start Elasticsearch on an Azure App Service using Docker. I installed Docker through the SSH console available in Azure App Services. Docker seems to install fine in the console; however, when I run
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.6.2
I get the following error in the SSH console:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I have installed and uninstalled Docker several times, but I still get the same error.
Azure App Service does not allow you to run Elasticsearch due to its limitations.
You may use Elastic as a managed service on Azure, or install it in AKS or on a VM.
https://azuremarketplace.microsoft.com/en-us/marketplace/apps/elastic.ec-azure?tab=Overview
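If you go the VM route, a single-node Elasticsearch container can be started like this (a dev/test sketch; single-node discovery is not suitable for production):
docker run -d --name elasticsearch -p 9200:9200 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.6.2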
You are trying to install Docker inside a Docker container.
App Service on Linux comes with a set of preconfigured containers for stacks such as Node, PHP, Java, Python, Ruby and .NET Core.
https://anthonychu.ca/post/jekyll-azure-app-service-linux/
The exact error you mentioned means that the Docker daemon is not running in your Linux environment.
To start the Docker daemon, use:
systemctl start docker
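If you also want the daemon to come back after a reboot (this applies to a regular Linux VM, not to the App Service sandbox itself), enabling the unit is the usual companion step:
sudo systemctl enable docker   # start on boot
sudo systemctl start docker    # start now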

How to run a Docker container as a Windows service

I have a Windows service that I want to run in a Docker container on Azure.
I would like to have the same setup when running the service locally, so I would like to run the same Docker container locally as a Windows service (I think?).
How would I do that? Or is there a better approach?
Thanks,
Michael
IMHO Michael asked how to start Docker containers without the need to have a user logged in. The docker restart flag actually only deals with starting containers once the Docker daemon is running. To get Docker to run without a logged-in user (or after automatic Windows updates), it seems to me you will also need to run Docker itself as a Windows service.
A good explanation of this part of the problem can be found here (no good solution has been found yet without paying for it; so far the Docker team has ignored requests to make this work without third-party tools):
How to start Docker daemon (windows service) at startup without the need to log-in?
You can use the flag --restart=unless-stopped with the docker run command, and the container will then start automatically even after the server is shut down and rebooted.
Further reading on the restart policy and flag is here.
But conditions apply: Docker itself must run on startup, which is its default setting.
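A minimal sketch of both cases (image and container names are placeholders): setting the policy on a new container, and attaching it to one that already exists:
docker run -d --restart=unless-stopped --name myservice myimage
docker update --restart=unless-stopped myservice   # for an already-created container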

Login to Azure container

I used the following quickstart doc to spin up my first Azure container.
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-quickstart#feedback
It worked fine. But how do I connect to the container if I want to debug something?
You cannot connect directly to the container itself to debug, i.e. you can't SSH or RDP to it; unlike a virtual machine, a container does not run a full guest OS for you to log in to.
You can, however, pull logs from your container via the container engine. In your case you would want to use the following command in the Azure CLI: az container logs.
https://aka.ms/container_logs
When you invoke the CLI through the Portal, you should already be connected through your subscription. To debug or troubleshoot, you can look at the container logs. Check out this documentation for the exact commands:
https://learn.microsoft.com/en-us/cli/azure/container?view=azure-cli-latest#az-container-logs
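For example (the resource group and container names below follow the quickstart and may differ in your deployment):
az container logs --resource-group myResourceGroup --name mycontainer
az container logs --resource-group myResourceGroup --name mycontainer --follow   # stream the logs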
When I am building containers to run on ACI, I build them first in a local Docker instance where they can be connected to and interactively debugged. When you're happy with how they run locally, push them into ACI and debug from the output logs if needed.
I get to a bash shell in my Azure containers either with the azure-cli package, as the OP noted in a comment (the resource group and container group names below are placeholders):
az container exec --resource-group <resource-group> --name <container-group> --exec-command "/bin/bash"
Or by navigating to the container instance in the Azure portal: under Settings > Containers there is a "Connect" tab.

Hyperledger Composer cannot connect with dockerized Node.js app

I am working on a POC using Hyperledger Composer v0.16.0 and the Node.js SDK. I have deployed my Hyperledger Fabric instance following this developer tutorial, and when I run my Node.js app locally via the node server.js command it works correctly; I can retrieve participants, assets, etc.
However, when I Dockerize my Node.js app and run the container, I am not able to reach the Hyperledger Fabric instance. So, how can I set the credentials so that my Node.js app can reach my Hyperledger Fabric instance (or another one) from inside the container?
My Dockerfile looks like this:
FROM node:8.9.1
WORKDIR /usr/src/app
# copy the package manifests first so npm install is cached between builds
COPY package*.json ./
RUN npm install
# copy the rest of the application source
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
I run my Docker/Node.js image with this command:
docker run --network composer_default -p 3000:3000 -d myuser/node-web-app
There are two pitfalls to watch out for when Dockerizing your app: 1. the location of the cards, and 2. the network address of the Fabric servers.
1. The Business Network Card(s) used by your app to connect to the Fabric: these cards are in a hidden folder under your default home folder, e.g. /home/thatcher/.composer on a Linux machine. You need to 'pass' these into the container, or share them via a shared volume as suggested by the previous answer. So when running your container for the first time, try adding -v ~/.composer:/home/<user>/.composer to the command, where <user> is the name of the default user in your container. Be aware also that the folder on your Docker host machine must allow write access to the UID of the user inside the container.
2. When you have sorted out the sharing of the cards, you need to consider what connection information is in the card. It is quite likely that the Business Network Card you are using has localhost as the address of your Fabric servers; port forwarding from your Docker host into the Fabric containers means that localhost is easy and works there. However, inside your container localhost resolves to the container itself, so the app will not see the Fabric. The --network composer_default argument on the docker command puts your new container on the same Docker network as the Fabric containers, so your container could use the 'addresses' of the Fabric servers, e.g. orderer.example.com, but your card would then fail outside your container. The best way forward is to put the IP address of your Docker host machine into the connection.json file instead of localhost; then your card works both inside and outside your container. A combined run command is sketched below.
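Putting the two pitfalls together, a first attempt might look like this (a sketch: the mount target assumes the app runs as root, the default for the node base image; adjust the path if your image switches user):
docker run --network composer_default -p 3000:3000 -v ~/.composer:/root/.composer -d myuser/node-web-app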
So, credentials here are just config info. The two ways to pass config info into a basic Docker container are:
environment variables (-e)
mounting a volume (-v) with the config info in it
You can also have scripts, installed from the Dockerfile, that modify files and such.
The Docker logs may give clues as to the exact problem or set of problems:
docker logs mynode
You can also enter a running container and snoop around using the command
docker exec -it mynode bash

Deploy WSO2 ESB in a Docker container with Kubernetes

Can someone help with how to deploy WSO2 ESB in a Docker container with Kubernetes?
Currently I'm running only one node/master on a local machine with Ubuntu Server 14.04 LTS.
If I run this:
sudo docker run --name esb isim/wso2esb
it instantly triggers the service inside the container,
but if I run this:
kubectl run esb1 --image=isim/wso2esb
the container just runs, without triggering the service inside the container.
BTW, I'm using isim/wso2esb from Docker Hub.
Hope someone can help me.
From the comments above, it looks like you were connecting to the wrong IP address, which you discovered by running kubectl logs esb1.
In general, you can follow the Kubernetes Debugging FAQ when you see an issue like this to see if it is a common problem that has already been documented.
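For reference, a typical first-pass sequence for a pod like this (depending on your Kubernetes version, kubectl run may create a deployment with a suffixed pod name, so check kubectl get pods first):
kubectl get pods                       # find the actual pod name
kubectl logs esb1                      # read the service's startup output
kubectl describe pod esb1              # look at events for pull/scheduling errors
kubectl exec -it esb1 -- /bin/bash     # shell in to inspect the container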