Docker volume mapping not working - node.js

I'm working from the Dockerizing a Node.js web app example, trying to understand Docker from first principles. I've uploaded it to repl.it with server.js renamed to index.js (due to a bug/feature where repl.it forces the existence of index.js). Here are the links:
Project: https://repl.it/repls/BurlyAncientTrust
Live demo: https://BurlyAncientTrust--five-nine.repl.co
Download: https://repl.it/repls/BurlyAncientTrust.zip
I've also put together some core operations that derive containers from images in a functional/declarative manner, by image name rather than by container name (surprisingly, there's no central reference for these); a sketch wrapping them into reusable shell functions follows the list:
# commands to start, list, login and remove containers/images associated with current directory's image
# build and run docker image (if it was removed with "docker rmi -f <image>" then this restores IMAGE in "docker ps")
(image=<image_name> && docker build -t $image . && docker run --rm -p <host_port>:<container_port> -d $image)
# list container id for image name
(image=<image_name> && docker ps -a -q --filter=ancestor=$image)
(image=<image_name> && docker ps -a | awk '{print $1,$2}' | grep -w $image | awk '{print $1}')
# run/exec bash inside container (similar to "vagrant ssh")
(image=<image_name> && docker exec -it $(docker ps -a -q -n=1 --filter=ancestor=$image) bash)
# remove containers for image name
(image=<image_name> && docker ps -a -q --filter=ancestor=$image | xargs docker rm -f)
# remove containers and specified image
(image=<image_name> && docker ps -a -q --filter=ancestor=$image | xargs docker rm -f && docker rmi $image)
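For convenience, here is a sketch of my own wrapping these into shell functions (the function names are arbitrary; usage: dkr_run node-web-app 49160 8080):
# put in ~/.bashrc or similar
dkr_run() { docker build -t "$1" . && docker run --rm -p "$2:$3" -d "$1"; }
dkr_ps()  { docker ps -a -q --filter=ancestor="$1"; }
dkr_sh()  { docker exec -it "$(docker ps -a -q -n=1 --filter=ancestor="$1")" bash; }
dkr_rm()  { docker ps -a -q --filter=ancestor="$1" | xargs docker rm -f; }
dkr_rmi() { dkr_rm "$1" && docker rmi "$1"; }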
To build and run the example:
Download and unzip BurlyAncientTrust.zip
cd <path_to>/BurlyAncientTrust
Then:
(image=node-web-app && docker build -t $image . && docker run --rm -p 49160:8080 -d $image)
Visit:
http://localhost:49160/
You should see:
Hello world
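Or check from the terminal (my own verification, not part of the original example):
curl -i http://localhost:49160/
# should return HTTP/1.1 200 OK with the body "Hello world"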
The problem is that I can't get the -v option for volume mapping (directory sync) working:
(image=node-web-app && docker ps -a -q --filter=ancestor=$image | xargs docker rm -f && docker rmi $image)
(image=node-web-app && docker build -t $image . && docker run --rm -v "$(pwd)":/usr/src/app -p 49160:8080 -d $image)
I see:
This site can’t be reached
And docker ps no longer shows the container. I'm on Mac OS X High Sierra so the "$(pwd)" portion may differ on other platforms. You can just substitute that with the absolute path of your current working directory. Here's the full output:
Zacks-Macbook:hello-system zackmorris$ (image=node-web-app && docker ps -a -q --filter=ancestor=$image | xargs docker rm -f && docker rmi $image)
Untagged: node-web-app:latest
Deleted: sha256:117288d6b7424798766b288518e741725f8a6cba657d51cd5f3157ff5cc9b784
Deleted: sha256:e2fb2f92c1fd4697c1d217957dc048583a14ebc4ebfc73ef36e54cddc0eefe06
Deleted: sha256:d274f86b6093a8e44afe1720040403e3fb5793f5fe6b9f0cf2c12c42ae6476aa
Deleted: sha256:9116e43368aba02f06eb1751e6912e4063326ce93ca1724fead8a8c1e1c6c56b
Deleted: sha256:902d4d1718530f6c7a50dd11331ee9ea85a04464557d699377115625da571b61
Deleted: sha256:261c92dc9ba95e2447e0250ea435717c855c6b184184fa050fc15fc78b1447f8
Deleted: sha256:559b16060e30ea3875772aae28a2c47508dfebda35529e87e7ff46f035669798
Deleted: sha256:4316607ec7e64e54ad59c3e46288a9fb03d9ec149b428a8f70862da3daeed4e5
Zacks-Macbook:hello-system zackmorris$ (image=node-web-app && docker build -t $image . && docker run --rm -v "$(pwd)":/usr/src/app -p 49160:8080 -d $image)
Sending build context to Docker daemon 57.34kB
Step 1/7 : FROM node:carbon
---> baf6417c4cac
Step 2/7 : WORKDIR /usr/src/app
---> Using cache
---> 00b2b9912592
Step 3/7 : COPY package*.json ./
---> f39ed074815e
Step 4/7 : RUN npm install
---> Running in b1d9bf79d502
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN docker_web_app#1.0.0 No repository field.
npm WARN docker_web_app#1.0.0 No license field.
added 50 packages in 4.449s
Removing intermediate container b1d9bf79d502
---> cf2a5fce981c
Step 5/7 : COPY . .
---> 46d46102772b
Step 6/7 : EXPOSE 8080
---> Running in cd92fbacacf1
Removing intermediate container cd92fbacacf1
---> ac13f4eda9a2
Step 7/7 : CMD [ "npm", "start" ]
---> Running in b6cd6811b0ce
Removing intermediate container b6cd6811b0ce
---> 06f887984da8
Successfully built 06f887984da8
Successfully tagged node-web-app:latest
effc653267558c80fbcf017d4c10db3e46a7c944997c7e5a5fe5d8682c5c9dad
Docker for Mac's file sharing includes the working directory:
$ pwd
/Users/zackmorris/Desktop/hello-system
I know that something as mission critical as volume mapping has to work.
UPDATE: I opened an issue for this, and it looks like it may not be possible (it could be a bug/feature from the early history of Docker). The best answer so far: the Dockerfile runs RUN npm install before "$(pwd)" is mounted at /usr/src/app. The mount replaces the directory's contents, so /usr/src/app/node_modules is replaced with nothing; Node.js then crashes because it can't find the express module, which causes the container to quit.
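Both halves of that explanation can be checked directly (my own verification commands, not from the issue):
# run in the foreground instead of -d to see the crash that kills the container
# (I expect an "Error: Cannot find module 'express'" here, per the explanation above)
(image=node-web-app && docker run --rm -v "$(pwd)":/usr/src/app -p 49160:8080 $image)
# confirm that the bind mount shadows the image's /usr/src/app
(image=node-web-app && docker run --rm -v "$(pwd)":/usr/src/app $image ls /usr/src/app)
# lists the host directory's contents; the node_modules from "npm install" is gone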
So I'm looking for an answer that works around this and makes this directory mapping possible in a general sense, without any weird gotchas like having to rearrange the contents of the image.

I dug further into the distinction between Docker build time and runtime, specifically regarding Docker Compose, and stumbled onto this:
https://blog.codeship.com/using-docker-compose-for-nodejs-development/
He was able to make it work by mapping node_modules as an additional volume in his docker-compose.yml (note that my path is /usr/src/app and his is /usr/app/, so don't copy-paste this):
volumes:
- .:/usr/app/
- /usr/app/node_modules
I believe this works because the second entry creates an anonymous volume mounted at node_modules on top of the bind mount; Docker seeds that volume from the image's node_modules, so the files installed at build time are preserved rather than hidden.
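Adapted to this example's paths, the docker-compose.yml equivalent would look something like this (a sketch of my own, untested; the service name is arbitrary):
version: "3"
services:
  web:
    build: .
    ports:
      - "49160:8080"
    volumes:
      - .:/usr/src/app             # bind mount the project directory
      - /usr/src/app/node_modules  # anonymous volume, seeded from the image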
I tried it as a raw Docker command, adding -v /usr/src/app/node_modules, and it worked! Here is a new standalone example that's identical to BurlyAncientTrust but has a node_modules directory added:
Project: https://repl.it/repls/RoundedImpishStructures
Live demo: https://RoundedImpishStructures--five-nine.repl.co
Download: https://repl.it/repls/RoundedImpishStructures.zip
To build and run the example:
Download and unzip RoundedImpishStructures.zip, then:
cd <path_to>/RoundedImpishStructures
Remove the old container and image if you were using them:
(image=node-web-app && docker ps -a -q --filter=ancestor=$image | xargs docker rm -f && docker rmi $image)
Run the new example:
(image=node-web-app && docker build -t $image . && docker run --rm -v "$(pwd)":/usr/src/app -v /usr/src/app/node_modules -p 49160:8080 -d $image)
You should see:
Hello world
Please don't upvote this answer, as I don't believe it to be a general solution. Hopefully it helps someone though.

Related

Cannot find the directory in a docker instance deployed to AWS ECS?

My Dockerfile has the following lines.
RUN useradd -r -m -d /app -s /bin/bash xyz
RUN usermod -a -G root xyz
RUN mkdir -p /app/xyz
WORKDIR /app
However, I can't find the directory /app after SSHing into the Docker instance. ps auwx can see the program running from /app/.....
ssh -i /et/ssl/certs/....crt login#docker_instance_address
# in the ssh session
ls /app # No such file or directory
ps auwx | grep '...' # can see the programming launched by docker start
# shows /app/xyz/.....
I also couldn't run docker exec -it xyz /bin/bash from the ECS cluster instance - permission denied.
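A likely explanation (my assumption; this question went unanswered here): /app exists only in the container's filesystem, not on the ECS host you SSH into, so you have to go through Docker to see it. On the host:
# the docker CLI on an ECS instance typically needs sudo, which would also
# explain the "permission denied" from docker exec
sudo docker ps
sudo docker exec -it <container_id> ls /app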

Is it possible to map a user inside the docker container to an outside user?

I know that one can use the --user option with Docker to run a container as a certain user, but in my case, my Docker image has a user inside it, let us call that user manager. Now is it possible to map that user to a user on host? For example, if there is a user john on the host, can we map john to manager?
Yes, you can set the user from the host, but you should modify your Dockerfile a bit to deal with the run-time user.
FROM alpine:latest
# Override user name at build. If build-arg is not passed, will create user named `default_user`
ARG DOCKER_USER=default_user
# Create a group and user
RUN addgroup -S $DOCKER_USER && adduser -S $DOCKER_USER -G $DOCKER_USER
# Tell docker that all future commands should run as this user
USER $DOCKER_USER
Now, build the Docker image:
docker build --build-arg DOCKER_USER=$(whoami) -t docker_user .
The new user inside the container will now be the host user:
docker run --rm docker_user ash -c "whoami"
Another way is to pass host user ID and group ID without creating the user in Dockerfile.
export UID=$(id -u)
export GID=$(id -g)
docker run -it \
--user $UID:$GID \
--workdir="/home/$USER" \
--volume="/etc/group:/etc/group:ro" \
--volume="/etc/passwd:/etc/passwd:ro" \
--volume="/etc/shadow:/etc/shadow:ro" \
alpine ash -c "whoami"
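To verify the mapping, a quick check of my own (reusing the exported UID/GID and the read-only passwd/group mounts):
docker run --rm \
--user $UID:$GID \
--volume="/etc/passwd:/etc/passwd:ro" \
--volume="/etc/group:/etc/group:ro" \
alpine ash -c "id"
# should print your host uid, gid, and user name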
Another way is through an entrypoint.
Example
This example relies on gosu, which is present in recent Debian derivatives but not yet in Alpine 3.13 (it is in edge).
You could run this image as follows:
docker run --rm -it \
--env UID=$(id -u) \
--env GID=$(id -g) \
-v "$(pwd):$(pwd)" -w "$(pwd)" \
imagename
tree
.
├── Dockerfile
└── files/
└── entrypoint
Dockerfile
FROM ...
# [...]
ARG DOCKER_USER=default_user
# create the group and user with shadow utils (assumed Debian base, matching the
# usermod/groupmod calls in the entrypoint)
RUN groupadd "$DOCKER_USER" \
&& useradd -g "$DOCKER_USER" -m "$DOCKER_USER"
RUN wget -O- https://github.com/tianon/gosu/releases/download/1.12/gosu-amd64 |\
install /dev/stdin /usr/local/bin/gosu
COPY files /
RUN chmod 0755 /entrypoint \
&& sed "s/\$DOCKER_USER/$DOCKER_USER/g" -i /entrypoint
ENTRYPOINT ["/entrypoint"]
files/entrypoint
#!/bin/sh
set -e
set -u
: "${UID:=0}"
: "${GID:=${UID}}"
if [ "$#" = 0 ]
then set -- "$(command -v bash 2>/dev/null || command -v sh)" -l
fi
if [ "$UID" != 0 ]
then
usermod -u "$UID" "$DOCKER_USER" 2>/dev/null && {
groupmod -g "$GID" "$DOCKER_USER" 2>/dev/null ||
usermod -a -G "$GID" "$DOCKER_USER"
}
set -- gosu "${UID}:${GID}" "$@"
fi
exec "$@"
Notes
UID is normally a read-only variable in bash, but it will work as expected if set by the docker --env flag
I chose gosu for its simplicity, but you could make it work with su or sudo; it would need more configuration, however
if you don't want to specify two --env switches, you could pass something like --env user="$(id -u):$(id -g)" and, in the entrypoint, split it with uid=${user%:*} gid=${user#*:}; note that at this point the UID variable would be read-only in bash, which is why I switched to lower-case; the rest of the adaptation is left to the reader
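A sketch of my own for that single-variable variant:
# invocation
docker run --rm -it --env user="$(id -u):$(id -g)" imagename
# in the entrypoint, split it back apart before handing off to gosu
uid="${user%:*}"
gid="${user#*:}"
set -- gosu "${uid}:${gid}" "$@"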
There is no simple solution that handles all use cases. Solving these problems is continuous work, a part of life in the containerized world.
There is no magical parameter you could add to a docker exec or docker run invocation that would reliably keep containerized software from hitting permissions issues on host-mapped volumes. Unless your mapped directories are chmod-0777-and-come-what-may (DON'T), you will run into permissions issues, and you will solve them as you go; that is the task to get efficient at, rather than hunting for a miracle once-and-forever solution that will never exist.
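A typical round of "solving them as you go" might look like this (my own illustration; ./data is a hypothetical mapped directory):
# see the numeric uids/gids the container sees on the mapped volume
docker run --rm -v "$PWD":/work alpine ls -ln /work
# if a container wrote files as root, reclaim them on the host
sudo chown -R "$(id -u):$(id -g)" ./data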

Dockerfile VOLUME not working while -v works

When I pass a volume like -v /dir:/dir it works like it should, but when I use VOLUME in my Dockerfile it gets mounted empty.
My Dockerfile looks like this:
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install -y nano
ENV Editor="/usr/bin/nano"
ARG UID=1000
RUN useradd -u "$UID" -G root writer
RUN mkdir -p "/home/writer" && chown -R "$UID":1000 "/home/writer"
RUN mkdir -p "/home/stepik"
RUN chown -R "$UID":1000 "/home/stepik"
VOLUME ["/home/stepik"]
USER writer
WORKDIR /home/stepik
ENTRYPOINT ["bash"]
Defining the volume in the Dockerfile only tells Docker that the volume needs to exist inside the container, not where to get the volume from. It's the same as passing the option -v /dir instead of -v /dir:/dir: the result is an "anonymous" volume with a guid you can see in docker volume ls. You can't specify inside the Dockerfile where to mount the volume from; by design, an image you pull from Docker Hub cannot mount an arbitrary directory from your host and send the contents of that directory to a black hat machine on the internet.
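You can watch the anonymous volume appear (a sketch of my own; the image and container names are arbitrary):
docker build -t stepik-writer .
# the image's ENTRYPOINT is bash, so pass -c to keep a container alive
docker run -d --name w1 stepik-writer -c 'sleep 600'
docker volume ls                              # lists a volume named by a long guid
docker inspect -f '{{ json .Mounts }}' w1     # shows it mounted at /home/stepik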
Note that I don't recommend defining volumes inside the Dockerfile. See my blog post on the topic for more details.

Should using a temporary docker container remove a volume?

Running a docker container with the --rm option deletes a mounted volume after exit. I'm wondering whether this is intended behavior.
Here is the exact sequence.
ole#MKI:~$ docker volume create --name a-volume-test
ole#MKI:~$ sudo ls /var/lib/docker/volumes/ | grep a-
a-volume-test
ole#MKI:~$ docker run --rm -it -v a-volume-test:/data alpine /bin/ash
/ # touch /data/test
/ # ls /data
test
/ # exit
ole#MKI:~$ sudo ls /var/lib/docker/volumes/ | grep a-
After I exit, the volume is gone.
This was a bug, fixed in Docker 1.11 - https://github.com/docker/docker/pull/19568
According to the docs, no, that is not intended: because you are mounting a named volume, it should not be deleted. Maybe submit a GitHub issue?
Note: When you set the --rm flag, Docker also removes the volumes associated with the container when the container is removed. This is similar to running docker rm -v my-container. Only volumes that are specified without a name are removed. For example, with docker run --rm -v /foo -v awesome:/bar busybox top, the volume for /foo will be removed, but the volume for /bar will not. Volumes inherited via --volumes-from will be removed with the same logic -- if the original volume was specified with a name it will not be removed.
Source: Docker Docs
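The named-versus-anonymous distinction is easy to verify (my own sketch):
docker volume create keepme
docker run --rm -v keepme:/data -v /scratch alpine sh -c 'touch /data/x /scratch/y'
docker volume ls
# "keepme" is still listed; the anonymous volume mounted at /scratch is gone
# (on Docker versions before the 1.11 fix, the named volume was removed too)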

Why can't I touch a file when the Docker image has a volume?

I have a mybase:latest image like this:
FROM ubuntu:latest
VOLUME /var
Then I encountered an error when running docker run:
docker run -it mybase:latest mkdir -p /var/test && touch /var/test/test.txt
touch: cannot touch ‘/var/test/test.txt’: No such file or directory
I noticed this question: Building Dockerfile fails when touching a file after a mkdir
But it did not solve my problem as it said:
You can only create files there while the container is running
I think that when Docker creates the container, mkdir -p /var/test && touch /var/test/test.txt is executed after all the volumes are ready, so it should work. Where is my thinking wrong?
The && part isn't in the same shell as the one created for the container; it is actually parsed by the shell where you type the docker run command. So only mkdir -p /var/test runs inside the container, while touch /var/test/test.txt runs on the host, where /var/test doesn't exist.
Try:
docker run -it mybase:latest sh -c 'mkdir -p /var/test && touch /var/test/test.txt'
That way, the && is interpreted inside the container, in the same shell as the mkdir command.
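The split is easy to demonstrate (my own sketch):
# the host shell parses the '&&': the first echo runs in the container,
# the second runs on the host after the container exits
docker run --rm alpine echo ran-in-container && echo ran-on-host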
