How can I connect the filesystems of two docker containers - node.js

So I have static files (a web app) running on container1, and a Node.js app running on container2. I want the Node app to have write access to the static files in container1. How can I achieve this?
What I tried so far:
docker-compose, but it only allows for communication between containers (network access), not sharing the same filesystem. Therefore, Node can't access the files on container1.

A way to do it is a docker-compose volume.
An example configuration YAML file for docker-compose v3 is shown below.
/share in the host OS file system will be shared across these two containers.
version: "3"
services:
webapp:
image: webapp:1.0
volumes:
- /share:/share
nodeapp:
image: nodeapp:1.0
volumes:
- /share:/share

Running a simple HTTP server (a simple Node one can be found here) in one of the containers lets you host the static files. They can then be accessed from the other containers over the network all your containers are on.
Another option would be to mount a volume into both your containers. Any changes made from one container are reflected in the other if the same volume is mounted. More info can be found here.
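If you would rather let Docker manage the storage than bind-mount a host path, a named volume works the same way. A minimal sketch (the volume name and the container paths are assumptions for illustration, not from the question):

version: "3"
services:
  webapp:
    image: webapp:1.0
    volumes:
      - static-files:/usr/share/nginx/html
  nodeapp:
    image: nodeapp:1.0
    volumes:
      - static-files:/usr/share/nginx/html
volumes:
  static-files:

Both services mount the same named volume, so anything nodeapp writes under its mount point is immediately visible to webapp.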

Related

Dockerfile, mount host windows folder over server

I am trying to mount a folder of the host machine into a docker container, but without success. I have the following setup:
1. Windows machine
2. From 1 I access a Linux server
3. On 2 I create a docker container that should be able to access files on 1
In the Dockerfile I do the following:
ADD //G/foo/boo /my_project/boo
This throws an error that the folder cannot be found, since the container tries to access the folder on the Linux server. However, I want the container to access the Windows machine.
Ideally without copying the files from the source to the target folder. I am not sure whether ADD copies the files or just gives access to them.
Volumes are designed to be attached to running containers, not to the containers used to build the docker image. If you would like your running container to access a shared file system, you need to attach the volume to the application container at creation time. How to do this depends on what you are using to deploy the containers, but with docker-compose it can be done as shown below
nginxplus:
  image: bhc-nginxplus
  volumes:
    - "${path_on_the_host}:${path_in_the_container}"
or with plain docker commands:
docker run -v ${path_on_the_host}:${path_in_the_container} $image
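In this particular setup the folder lives on the Windows machine, not on the Linux docker host, so there has to be a path on the Linux server that actually reaches it before anything can be bind-mounted. One way is to mount the Windows folder on the Linux server first, assuming it is shared over SMB (the share name, mount point and credentials below are placeholders, not from the question):

# On the Linux server: mount the Windows share (requires cifs-utils),
# then bind-mount the mounted path into the container at run time
sudo mkdir -p /mnt/winshare
sudo mount -t cifs //WINDOWS-HOST/foo /mnt/winshare -o username=myuser,password=mypass
docker run -v /mnt/winshare/boo:/my_project/boo $image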

Writing log files in NodeJS Docker container

I want to write log files to the host file system, so it is persisted, even if the Docker container dies.
Do I need to mount a volume in my Docker yaml?
VOLUME /var/log/myApp
Then do I just reference the mount like this?
const fs = require('fs');
var stream = fs.createWriteStream('/var/log/myApp/myLog.log');
stream.write('Hello World!');
Then outside of my container, I can go to the /var/log/myApp/ directory and see my logs.
I am trying to find an example of this, but haven't seen anything.
When you're setting up your container, you just use the -v argument:
-v ./path/to/local/directory:/var/log/myApp
The first path is where the data lives on the host system (the leading ./ is resolved relative to where you run the command; note that docker-compose resolves relative paths, while older versions of plain docker run expect an absolute host path). The path on the right-hand side is where it's available inside the container.
Once more, in docker-compose:
volumes:
  - "./path/to/local/directory:/var/log/myApp"
And yes, data written to the volume will persist even if the container dies.
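On the Node side the snippet from the question works once the directory is mounted; a slightly fuller sketch (the LOG_DIR environment variable is an assumption for illustration, not something Docker sets for you):

// Append to a log file inside the mounted directory
const fs = require('fs');
const path = require('path');

const logDir = process.env.LOG_DIR || '/var/log/myApp';
if (!fs.existsSync(logDir)) fs.mkdirSync(logDir); // only needed when running without the mount

const stream = fs.createWriteStream(path.join(logDir, 'myLog.log'), { flags: 'a' });
stream.write('Hello World!\n');
stream.end();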

Define priority for docker container with restart policy always

On a Linux server, I have several Docker containers running. For example, some Compose stacks for Wordpress hosting, but also internal applications like Confluence. After a reboot, it seems that the internal containers are started first, so the hosting containers (like Wordpress) are down for several minutes.
That's not good, since the internal apps are used by only a few people, whereas the external ones have much more traffic. So I want to define some kind of priority: like starting the Wordpress containers before Confluence, to name a concrete example.
How can this be done? All containers have the restart policy always. But it seems it is not possible to define in which order the containers should start...
version 3+ : Version 3 no longer supports the condition form of depends_on.
version 2 : depends_on will help in your case if you run docker-compose up, but it is ignored when you deploy in swarm mode.
docker-compose.yml (works after version 1.6.0 and before 2.1)
version: '2'
services:
  web:
    build: .
    depends_on:
      - db
      - redis
  redis:
    image: redis
  db:
    image: postgres
DOCS :
depends_on
Controlling startup order in Compose
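If you can stay on the 2.1 file format, the condition form lets you go a step further and wait until a dependency is actually healthy, not just started. A sketch (the postgres healthcheck command is an illustration, not part of the original answer):

version: '2.1'
services:
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

Keep in mind that this ordering only applies when Compose starts the services; after a host reboot, containers with restart policy always are restarted by the docker daemon itself, without regard to depends_on.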

Starting and stopping a docker container from another container

I need to start, stop and restart containers from inside another container.
For Example:
Container A -> start Container B
Container A -> stop Container C
My Dockerfile:
FROM node:7.2.0-slim
WORKDIR /docker
COPY . /docker
CMD [ "npm", "start" ]
Docker Version 1.12.3
I want to avoid using an ssh connection. Any ideas?
By itself, a container runs in an isolated environment (e.g. with its own file system and network stack) and thus has no direct way to interact with the host it is running on. This is of course intended, to allow for real isolation.
But there is a way to run containers with some more privileges. To talk to the docker daemon on the host, you can for example mount the docker socket of the host system into the container. This works the same way as you probably would mount some host folder into the container.
docker run -v /var/run/docker.sock:/var/run/docker.sock yourimage
For an example, please see the docker-compose file of the traefik proxy, a process that listens for containers starting and stopping on the host in order to activate proxy routes to them. You can find the example in the traefik proxy repository.
To be able to talk to the docker daemon on the host, you also need a docker client installed in the container, or a Docker API library for your programming language. There is an official list of such libraries for different languages in the Docker docs.
Of course you should be aware of what privileges you give to the container. Someone who manages to exploit your application could possibly shut down your other containers or - even worse - start their own containers on your system, which can easily be used to gain control over your system. Keep that in mind when you build your application.
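As a concrete illustration, here is a minimal sketch using dockerode, one such Node.js client library (not named in the answer). It assumes the host's socket was mounted as shown above and that containers called containerB and containerC already exist:

// Talk to the host's docker daemon through the mounted socket
const Docker = require('dockerode');
const docker = new Docker({ socketPath: '/var/run/docker.sock' });

async function manage() {
  await docker.getContainer('containerB').start(); // Container A -> start Container B
  await docker.getContainer('containerC').stop();  // Container A -> stop Container C
}

manage().catch(console.error);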

Sharing a configuration file to multiple docker containers

Suppose I have the following configuration file on my Docker host, and I want multiple Docker containers to be able to access this file.
/opt/shared/config_file.yml
In a typical non-Docker environment I could use symbolic links, such that:
/opt/app1/config_file.yml -> /opt/shared/config_file.yml
/opt/app2/config_file.yml -> /opt/shared/config_file.yml
Now suppose app1 and app2 are dockerized. I want to be able to update config_file.yml in one place and have all consumers (docker containers) pick up this change without requiring the container to be rebuilt.
I understand that symlinks cannot be used to access files on the host machine that are outside of the docker container.
The first two options that come to mind are:
Set up an NFS share from docker host to docker containers
Put the config file in a shared Docker volume, and use docker-compose to connect app1 and app2 to the shared config volume
I am trying to identify other options and then ultimately decide upon the best course of action.
What about host-mounted volumes? If each application is only reading the configuration and the requirement is that it lives in different locations within the container, you could do something like:
docker run --name app1 --volume /opt/shared/config_file.yml:/opt/app1/config_file.yml:ro app1image
docker run --name app2 --volume /opt/shared/config_file.yml:/opt/app2/config_file.yml:ro app2image
The file on the host can be mounted at a separate location in each container. As of Docker 1.9 you can also have named volumes backed by volume plugins (such as Flocker) hold the data. However, both of these solutions are still per host, and the data isn't available on multiple hosts at the same time.
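The docker-compose equivalent of the two commands above would look roughly like this (service and image names are taken from the question; it is the same single host file bind-mounted read-only at a different path in each container):

version: "3"
services:
  app1:
    image: app1image
    volumes:
      - /opt/shared/config_file.yml:/opt/app1/config_file.yml:ro
  app2:
    image: app2image
    volumes:
      - /opt/shared/config_file.yml:/opt/app2/config_file.yml:ro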
