Docker not running a command - node.js

PS C:\E Drive\Docker\api> docker run --name myapp_c_nodemon -p 4000:4000 --rm -v C:\E Drive\Docker\api:/app -v /app/node_modules myapp:nodemon
docker: invalid reference format.
It's giving "invalid reference format".

You have a space in your host directory path, so you need to enclose it in quotes, like this:
docker run --name myapp_c_nodemon -p 4000:4000 --rm -v "C:\E Drive\Docker\api":/app -v /app/node_modules myapp:nodemon
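Since the prompt above is PowerShell, a related sketch (my own, not from the original answer) is to avoid hardcoding the path by using PowerShell's automatic $PWD variable, assuming the command is run from C:\E Drive\Docker\api:
docker run --name myapp_c_nodemon -p 4000:4000 --rm -v "${PWD}:/app" -v /app/node_modules myapp:nodemon
The quotes are still required because the expanded path contains a space.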

Related

Running "docker exec" displays "bash-4.2" in "Shell command prompt"

docker run -itd --restart always --network host --name test_db -h test_db -v share:/home/share -v /etc/timezone:/etc/timezone -v /etc/localtime:/etc/localtime -v /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged mariadb_10.8 init
Running docker exec -it test_db bash used to display root@test_db in the shell command prompt, but after some days bash-4.2 appeared instead.
There is also no ll command and no /root folder.
How to fix it?
Thanks in advance.
Regards

How to set machine hostname as docker container hostname

I want to set the docker container hostname to the hostname of the machine on which Docker is installed. Please note that I want to set the hostname dynamically and don't want to hardcode the machine's hostname in my docker run command.
How do I achieve this?
My docker run command:
sudo docker run --name=rabbitmq -d -p 5672:5672 -p 15672:15672 \
-e RABBITMQ_DEFAULT_USER=admin \
-e RABBITMQ_DEFAULT_PASS=admin \
--hostname ?? \
-v rmq_vol:/var/lib/rabbitmq \
rabbitmq:3.9.0
What KamilCuk said.
Add --hostname "$(hostname)" to your docker run command.
You're just passing the result of the Linux hostname command into your docker run configuration.
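Applied to the command above, a sketch would be:
sudo docker run --name=rabbitmq -d -p 5672:5672 -p 15672:15672 \
-e RABBITMQ_DEFAULT_USER=admin \
-e RABBITMQ_DEFAULT_PASS=admin \
--hostname "$(hostname)" \
-v rmq_vol:/var/lib/rabbitmq \
rabbitmq:3.9.0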

docker logs within a bash script doesn't work

I'm seeing some weird behaviour from Docker in a bash script.
Consider these two examples:
logs-are-showed() {
    docker rm -f mybash &>/dev/null
    docker run -it --rm -d --name mybash bash -c "echo hello; tail -f /dev/null"
    docker logs mybash
}
# usage:
# $ localtunnel 8080
localtunnel() {
    docker rm -f localtunnel &>/dev/null
    docker run -it -d --network host --name localtunnel efrecon/localtunnel --port $1
    docker logs localtunnel
}
In the first function, logs-are-showed, the docker logs command returns the logs of the mybash container.
In the second function, localtunnel, the docker logs command doesn't return anything.
After calling the localtunnel function, if I ask for the container logs from outside the script, the logs show up correctly.
Why does this happen?
Processes take time to start up. There may be no logs right after starting a container - it hasn't written anything yet. Wait a bit.
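A minimal sketch of that workaround, assuming a short fixed delay is acceptable (the 2-second value is my own choice):
localtunnel() {
    docker rm -f localtunnel &>/dev/null
    docker run -it -d --network host --name localtunnel efrecon/localtunnel --port "$1"
    sleep 2            # give the container a moment to produce its first output
    docker logs localtunnel
}
Alternatively, docker logs -f localtunnel follows the log stream as it is written, at the cost of blocking the script.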

Docker volume mapping not working

I'm working from the Dockerizing a Node.js web app example, trying to understand Docker from first principles. I've uploaded it to repl.it with server.js renamed to index.js (due to a bug/feature where repl.it forces the existence of index.js), here are the links:
Project: https://repl.it/repls/BurlyAncientTrust
Live demo: https://BurlyAncientTrust--five-nine.repl.co
Download: https://repl.it/repls/BurlyAncientTrust.zip
I've also put together some core operations that derive container(s) from images in a functional/declarative manner rather than using names (surprisingly there's no central source for these):
# commands to start, list, login and remove containers/images associated with current directory's image
# build and run docker image (if it was removed with "docker rmi -f <image>" then this restores IMAGE in "docker ps")
(image=<image_name> && docker build -t $image . && docker run --rm -p <host_port>:<container_port> -d $image)
# list container id for image name
(image=<image_name> && docker ps -a -q --filter=ancestor=$image)
(image=<image_name> && docker ps -a | awk '{print $1,$2}' | grep -w $image | awk '{print $1}')
# run/exec bash inside container (similar to "vagrant ssh")
(image=<image_name> && docker exec -it $(docker ps -a -q -n=1 --filter=ancestor=$image) bash)
# remove containers for image name
(image=<image_name> && docker ps -a -q --filter=ancestor=$image | xargs docker rm -f)
# remove containers and specified image
(image=<image_name> && docker ps -a -q --filter=ancestor=$image | xargs docker rm -f && docker rmi $image)
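These one-liners could also be wrapped as reusable shell functions, for example (a sketch; the function names and argument order are my own):
# build the image from the current directory and run it detached
drun() { (image=$1 && docker build -t $image . && docker run --rm -p $2:$3 -d $image); }
# open a bash shell in the newest container created from the image
dsh() { (image=$1 && docker exec -it $(docker ps -a -q -n=1 --filter=ancestor=$image) bash); }
# remove all containers derived from the image, then the image itself
dclean() { (image=$1 && docker ps -a -q --filter=ancestor=$image | xargs docker rm -f && docker rmi $image); }
# usage:
# drun node-web-app 49160 8080
# dsh node-web-app
# dclean node-web-app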
To build and run the example:
Download and unzip BurlyAncientTrust.zip
cd <path_to>/BurlyAncientTrust
Then:
(image=node-web-app && docker build -t $image . && docker run --rm -p 49160:8080 -d $image)
Visit:
http://localhost:49160/
You should see:
Hello world
The problem is that I can't get the -v option for volume mapping (directory sync) working:
(image=node-web-app && docker ps -a -q --filter=ancestor=$image | xargs docker rm -f && docker rmi $image)
(image=node-web-app && docker build -t $image . && docker run --rm -v "$(pwd)":/usr/src/app -p 49160:8080 -d $image)
I see:
This site can’t be reached
And docker ps no longer shows the container. I'm on Mac OS X High Sierra so the "$(pwd)" portion may differ on other platforms. You can just substitute that with the absolute path of your current working directory. Here's the full output:
Zacks-Macbook:hello-system zackmorris$ (image=node-web-app && docker ps -a -q --filter=ancestor=$image | xargs docker rm -f && docker rmi $image)
Untagged: node-web-app:latest
Deleted: sha256:117288d6b7424798766b288518e741725f8a6cba657d51cd5f3157ff5cc9b784
Deleted: sha256:e2fb2f92c1fd4697c1d217957dc048583a14ebc4ebfc73ef36e54cddc0eefe06
Deleted: sha256:d274f86b6093a8e44afe1720040403e3fb5793f5fe6b9f0cf2c12c42ae6476aa
Deleted: sha256:9116e43368aba02f06eb1751e6912e4063326ce93ca1724fead8a8c1e1c6c56b
Deleted: sha256:902d4d1718530f6c7a50dd11331ee9ea85a04464557d699377115625da571b61
Deleted: sha256:261c92dc9ba95e2447e0250ea435717c855c6b184184fa050fc15fc78b1447f8
Deleted: sha256:559b16060e30ea3875772aae28a2c47508dfebda35529e87e7ff46f035669798
Deleted: sha256:4316607ec7e64e54ad59c3e46288a9fb03d9ec149b428a8f70862da3daeed4e5
Zacks-Macbook:hello-system zackmorris$ (image=node-web-app && docker build -t $image . && docker run --rm -v "$(pwd)":/usr/src/app -p 49160:8080 -d $image)
Sending build context to Docker daemon 57.34kB
Step 1/7 : FROM node:carbon
---> baf6417c4cac
Step 2/7 : WORKDIR /usr/src/app
---> Using cache
---> 00b2b9912592
Step 3/7 : COPY package*.json ./
---> f39ed074815e
Step 4/7 : RUN npm install
---> Running in b1d9bf79d502
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN docker_web_app@1.0.0 No repository field.
npm WARN docker_web_app@1.0.0 No license field.
added 50 packages in 4.449s
Removing intermediate container b1d9bf79d502
---> cf2a5fce981c
Step 5/7 : COPY . .
---> 46d46102772b
Step 6/7 : EXPOSE 8080
---> Running in cd92fbacacf1
Removing intermediate container cd92fbacacf1
---> ac13f4eda9a2
Step 7/7 : CMD [ "npm", "start" ]
---> Running in b6cd6811b0ce
Removing intermediate container b6cd6811b0ce
---> 06f887984da8
Successfully built 06f887984da8
Successfully tagged node-web-app:latest
effc653267558c80fbcf017d4c10db3e46a7c944997c7e5a5fe5d8682c5c9dad
Docker file sharing:
$ pwd
/Users/zackmorris/Desktop/hello-system
I know that something as mission critical as volume mapping has to work.
UPDATE: I opened an issue for this, and it's looking like it may not be possible (it could be a bug/feature from the early history of Docker). The best explanation so far is that the Dockerfile runs RUN npm install before "$(pwd)" is mounted at /usr/src/app. The bind mount replaces the directory's contents, so /usr/src/app/node_modules is emptied out, Node.js crashes because it can't find the express module, and the container quits.
So I'm looking for an answer that works around this and makes this directory mapping possible in a general sense, without any weird gotchas like having to rearrange the contents of the image.
I dug further into the distinction between Docker buildtime and runtime, specifically regarding Docker Compose, and stumbled onto this:
https://blog.codeship.com/using-docker-compose-for-nodejs-development/
He was able to make it work by mapping node_modules as an additional volume in his docker-compose.yml (note that my path is /usr/src/app and his is /usr/app/ so don't copypaste this):
volumes:
- .:/usr/app/
- /usr/app/node_modules
I'm thinking this works because node_modules becomes an anonymous volume: when the container starts with that volume empty, Docker populates it from the image's contents at the mount path, so the installed modules are preserved instead of being hidden by the bind mount.
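Adapted to this project's paths, a docker-compose.yml sketch might look like this (the service name web is my own choice; the port and paths come from the example above):
version: "3"
services:
  web:
    build: .
    ports:
      - "49160:8080"
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules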
I tried it as a raw Docker command -v /usr/src/app/node_modules and it worked! Here is a new standalone example that's identical to BurlyAncientTrust but has a node_modules directory added:
Project: https://repl.it/repls/RoundedImpishStructures
Live demo: https://RoundedImpishStructures--five-nine.repl.co
Download: https://repl.it/repls/RoundedImpishStructures.zip
To build and run the example:
Download and unzip RoundedImpishStructures.zip then:
cd <path_to>/RoundedImpishStructures
Remove the old container and image if you were using them:
(image=node-web-app && docker ps -a -q --filter=ancestor=$image | xargs docker rm -f && docker rmi $image)
Run the new example:
(image=node-web-app && docker build -t $image . && docker run --rm -v "$(pwd)":/usr/src/app -v /usr/src/app/node_modules -p 49160:8080 -d $image)
You should see:
Hello world
Please don't upvote this answer, as I don't believe it to be a general solution. Hopefully it helps someone though.

jenkins container without root authority

I run a docker container with:
[root@compute maven]# docker run -d -p 8083:8080 --name jenkins -v /usr/bin/docker:/usr/bin/docker -v /var/run/docker.sock:/var/run/docker.sock -v /root/maven-tar/:/root csphere/jenkins:1.609
e40f704478e5e37ee3f214a1469f5851a78c324099610a12e79238b8599e194a
Then I get into the container with "docker exec":
[root@compute maven]# docker exec -it jenkins /bin/bash
jenkins@e40f704478e5:
When I try to run "docker ps", it shows this:
jenkins@e40f704478e5:/$ docker ps
/usr/bin/docker: 2: .: Can't open /etc/sysconfig/docker
