I am working on a PoC using Hyperledger Composer v0.16.0 and the Node.js SDK. I have deployed my Hyperledger Fabric instance following this developer tutorial, and when I run my Node.js app locally via the node server.js command it works correctly; I can retrieve participants, assets, etc.
However, when I Dockerize my Node.js app and run its container, I am not able to reach the Hyperledger Fabric instance. So, how can I set the credentials so that I can reach my Hyperledger Fabric (or another one) from my Node.js app?
My Dockerfile looks like this:
FROM node:8.9.1
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
I run my docker/node.js image with this command:
docker run --network composer_default -p 3000:3000 -d myuser/node-web-app
There are two pitfalls to watch out for when Dockerizing your app: 1. the location of the cards, and 2. the network addresses of the Fabric servers.
The Business Network Card(s) used by your app to connect to the Fabric: these cards live in a hidden folder under your default home folder, e.g. /home/thatcher/.composer on a Linux machine. You need to 'pass' these into the container or share them via a shared volume, as suggested by the previous answer. So, when running your container, try adding -v ~/.composer:/home/<user>/.composer to the command, where <user> is the name of the default user in your container. Be aware also that the folder on your Docker host machine must allow write access to the UID of the user inside the container.
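As a sketch, assuming the container runs as root (the default in the stock node images), so that its home folder is /root, the run command from the question could become:
docker run --network composer_default \
  -v ~/.composer:/root/.composer \
  -p 3000:3000 -d myuser/node-web-app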
When you have sorted out the sharing of the cards, you need to consider what connection information is in the card. It is quite likely that the Business Network Card you are using has localhost as the address of your Fabric servers; the port forwarding from your Docker host into the Fabric containers means that localhost is easy and works. However, inside your container localhost refers to the container itself, so it will not see the Fabric. The --network composer_default argument on the docker command puts your new container on the same Docker network as the Fabric containers, so your container could see the 'addresses' of the Fabric servers, e.g. orderer.example.com, but your card would then fail outside your container. The best way forward is to put the IP address of your Docker host machine into the connection.json file instead of localhost; then your card will work both inside and outside of your container.
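For illustration only (the exact fields depend on the connection profile your card was created from, and 192.168.1.50 is a made-up Docker host address), the idea is that connection.json ends up pointing at the host IP instead of localhost:
{
  "type": "hlfv1",
  "orderers": [
    { "url": "grpc://192.168.1.50:7050" }
  ],
  "peers": [
    { "requestURL": "grpc://192.168.1.50:7051", "eventURL": "grpc://192.168.1.50:7053" }
  ],
  "ca": { "url": "http://192.168.1.50:7054" }
}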
So, the credentials are really just config info. The two ways to pass config info into a basic Docker container (illustrated in the sketch below) are:
environment variables (-e)
mount a volume (-v) with the config info
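For example (the CARD_STORE variable name and paths are hypothetical, purely to illustrate the two mechanisms):
# pass a setting as an environment variable and mount the card folder as a volume
docker run --network composer_default -p 3000:3000 -d \
  -e CARD_STORE=/root/.composer \
  -v ~/.composer:/root/.composer \
  myuser/node-web-app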
You can also have scripts, installed from the Dockerfile, that modify files and such.
The docker logs may give clues as to the exact problem or set of problems.
docker logs mynode
You can also enter a running container and snoop around using the command
docker exec -it mynode bash
Now I have two Linux PCs. MongoDB is on the first PC, whose IP is 192.168.1.33, and a Java application on the other Linux machine connects to the MongoDB instance on 192.168.1.33.
What I want to do is prepare everything and turn both Linux systems into Docker images, so that in the production environment I can simply restore the images I prepared and everything works, without complex deployment steps.
But the problem is that the IP of MongoDB will change, and the IP 192.168.1.33 is written in the configuration file of my Java application; it will not change automatically. Is there an automated way?
Basics
We create a Dockerfile with minimal installation steps.
We create a Docker image from the Dockerfile in step 1.
We create a container from the step-2 image and expose the important ports as required (a sketch of these steps follows below).
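A minimal sketch of those three steps (the image and container names are placeholders):
# steps 1 & 2: build an image from the Dockerfile in the current directory
docker build -t my_new_mongodb .
# step 3: create a container from that image and expose the important port
docker run -d --name mongodb -p 27017:27017 my_new_mongodb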
For your problem.
1. creating-a-docker-image-with-mongodb: this article will help you dockerize MongoDB.
But the problem is that the IP of MongoDB will change, and the IP 192.168.1.33 is written in the configuration file of my Java application; it will not change automatically. Is there an automated way?
If you expose the MongoDB port to the Docker host, you can use the same docker-host-IP:<exposed-port>.
Reference from the article: sudo docker run -p 27017:27017 -i -t my_new_mongodb
Example: 192.168.1.33 is your Docker host, where the MongoDB container is running with port 27017 exposed. You can point your Java app at 192.168.1.33:27017.
What I want to do is prepare everything and turn both Linux systems into Docker images
You cannot convert your VMs directly into Docker images. Instead, you can follow the steps written in Basics and dockerize both the DB and the application layer.
2. dockerize-your-java-application: refer to this link and dockerize your application based on your requirements.
Steps 1 & 2 will help you build Docker images which you can deploy to multiple servers (see the sketch below for wiring the two together without hard-coding the MongoDB IP).
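One way to avoid hard-coding the MongoDB IP in the Java configuration is to pass the address in when the application container starts. This is only a sketch; the my_java_app image name and the MONGO_URL variable are made up, and your application would need to read that variable at startup:
# MongoDB container with its port published on the Docker host
docker run -d --name mongodb -p 27017:27017 my_new_mongodb
# Java application container, told where MongoDB lives via an environment variable
docker run -d --name myjavaapp -e MONGO_URL=192.168.1.33:27017 my_java_app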
I have installed Docker on my Ubuntu 16.04 machine.
Is there any way to break out of a Docker container to the host (RCE, privilege escalation, etc.)? In other words, is there any way to access the host machine from inside the Docker container?
Below is the command which I am using it to launch the container.
docker run --rm -ti ubuntu:16.04
I am going to give out access to Docker containers at my college for testing purposes, and I have hosted everything on my personal cloud. Is it possible to compromise the host machine from within a container?
Please let me know; before I start giving out access at my college I need to be sure about this.
PS: I have configured macvlan and containers cannot talk to each other.
Thanks!!
I have two fairly simple Docker containers: one contains a Node.js application, the other is just a MongoDB container.
Dockerfile.nodeJS
FROM node:boron
ENV NODE_ENV production
# Create app directory
RUN mkdir -p /node/api-server
WORKDIR /node/api-server
# Install app dependencies
COPY /app-dir/package.json /node/api-server/
RUN npm install
# Bundle app source
COPY /app-dir /node/api-server
EXPOSE 3000
CMD [ "node", "." ]
Dockerfile.mongodb
FROM mongo:3.4.4
# Create database storage directory
VOLUME ["/data/db"]
# Define working directory.
WORKDIR /data
# Define default command.
CMD ["mongod"]
EXPOSE 27017
They both work independently from each other, but when I create two separate containers from them, they no longer communicate with each other (why?). Online there are a lot of tutorials about doing this with or without docker-compose, but they all use --link, which is a deprecated legacy feature of Docker, so I don't want to go down that path. What is the way to go in 2017 to make this connection between two Docker containers?
You can create a specific network (on a single host the default bridge driver is sufficient; the overlay driver requires swarm mode):
docker network create boron_mongo
and then you launch both containers with a command like:
docker run --network=boron_mongo...
extract from
https://docs.docker.com/compose/networking/
The preferred way is to use docker-compose
Have a look at
Configuring the default network
https://docs.docker.com/compose/networking/#specifying-custom-networks
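A minimal docker-compose.yml sketch along those lines (the service and network names are placeholders; the Node.js container can then reach MongoDB at the hostname mongo):
version: '2'
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile.nodeJS
    ports:
      - "3000:3000"
    networks:
      - boron_mongo
  mongo:
    build:
      context: .
      dockerfile: Dockerfile.mongodb
    networks:
      - boron_mongo
networks:
  boron_mongo: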
I'm trying to work on a dev environment with Node.js and Docker.
I want to be able to:
run my docker container when I boot my computer once and for all;
make changes in my local source code and see the changes without interacting with the docker container (with a mount).
I've tried the Node image and, if I understand correctly, it is not what I'm looking for.
I know how to make the mount point, but I'm missing how the server is supposed to detect the changes and "relaunch" itself.
I'm new to Node.js so if there is a better way to do things, feel free to share.
run my docker container when I boot my computer once and for all;
Start containers automatically with the Docker daemon (e.g. via a restart policy) or with your process manager.
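For example, a restart policy tells the Docker daemon to bring the container back up after a reboot (assuming the daemon itself is started at boot):
docker run -d --restart unless-stopped --name myapp image/app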
make changes in my local source code and see the changes without interacting with the docker container (with a mount).
You need to mount your dev app folder as a volume
$ docker run --name myapp -v /app/src:/app image/app
and set in your Node.js Dockerfile:
CMD ["nodemon", "-L", "/app"]
I need to start, stop and restart containers from inside another container.
For Example:
Container A -> start Container B
Container A -> stop Container C
My Dockerfile:
FROM node:7.2.0-slim
WORKDIR /docker
COPY . /docker
CMD [ "npm", "start" ]
Docker Version 1.12.3
I want to avoid using an SSH connection. Any ideas?
A container per se runs in an isolated environment (e.g. with its own file system and network stack) and thus has no direct way to interact with the host it is running on. This is of course intentional, to allow for real isolation.
But there is a way to run containers with some more privileges. To talk to the docker daemon on the host, you can for example mount the docker socket of the host system into the container. This works the same way as you probably would mount some host folder into the container.
docker run -v /var/run/docker.sock:/var/run/docker.sock yourimage
For an example, please see the docker-compose file of the Traefik proxy, which is a process that listens for containers starting and stopping on the host in order to activate proxy routes to them. You can find the example in the Traefik proxy repository.
To be able to talk to the Docker daemon on the host, you then also need a Docker client installed in the container, or you can use a Docker API client library for your programming language. There is an official list of such libraries for different programming languages in the Docker docs.
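As a sketch, assuming you install a client library such as dockerode (npm install dockerode) into container A's image and that a container named some-container exists on the host, starting and stopping it from inside container A could look roughly like this:
// talk to the host's Docker daemon through the mounted socket
const Docker = require('dockerode');
const docker = new Docker({ socketPath: '/var/run/docker.sock' });

const target = docker.getContainer('some-container');

// start the target container, then stop it again
target.start()
  .then(() => target.stop())
  .catch(err => console.error(err));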
Of course you should be aware of what privileges you give to the container. Someone who manages to exploit your application could possibly shut down your other containers or, even worse, start their own containers on your system, which can easily be used to gain control over your system. Keep that in mind when you build your application.