The Problem
When I start a new container in Docker, I want to mount a volume so that I get the latest updates to any files on my host machine and can work with them in my container. However, what I am finding is that Docker is mounting my volumes when I build the image. What I want instead is to mount the volumes when I create a new container.
Because I am using Docker to manage my development environment, this means that whenever I update a small piece of code, I have to rebuild my development environment Docker image, which usually takes around 20-30 minutes. Obviously, this is not the functionality I want from Docker.
Here is what I am using to build my development environment container:
Dockerfile
# This Dockerfile constructs an Ubuntu development environment and configures the compiler/libs needed
FROM ubuntu:latest
ADD . /gdms-rcon/liaison
WORKDIR /gdms-rcon/liaison
RUN rm -rf ./build
RUN apt-get update
RUN apt-get install -y -f gcc g++ qtbase5-dev cmake mysql-client
fig.yml
liaison:
  build: ./liaison/
  command: /bin/bash
  volumes:
    - liaison:/gdms-rcon/liaison
  working_dir: /gdms-rcon/liaison
I also use a fig.yml file to make it easier to build.
To run, I use: fig build
To access my container to compile my source code, I use: docker run -it <container_id>
Maybe I'm doing something wrong with my commands? I don't use fig up because it won't give me an interactive shell, so I use docker run -it <container_id> instead. I chose to use fig so that it would mount my volumes automatically, but it isn't working as I would have hoped.
If you're not using fig to start the container, the volumes line in your fig.yml isn't doing anything useful. If you need an interactive shell, fig is not really the tool for you.
Just docker build your image like normal, and then use the -v flag to docker run to mount the volume:
docker run -it -v <hostpath>:<containerpath> <imageid>
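For example, with the layout from the question, that could look something like this (the liaison-dev image name and the host path are just placeholders):
# Build the image once; no volumes are involved at build time
docker build -t liaison-dev ./liaison
# Start a container with the host source directory mounted over the image's copy
docker run -it -v "$(pwd)/liaison":/gdms-rcon/liaison liaison-dev /bin/bash
Edits made on the host then show up inside the container immediately, with no image rebuild needed.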
Related
I have a Node.js application inside a Docker container, and I'm trying to run another Docker image from within that container.
I mounted the Docker socket into the container, started it, and opened a shell inside it:
docker run -it -v /var/run/docker.sock:/var/run/docker.sock -w /root node bash
When I type docker in the terminal, I get an error:
bash: docker: command not found.
This only happens with the Node.js image; if, for example, I run this test instead:
docker run -v /var/run/docker.sock:/var/run/docker.sock \
-ti docker
It works great.
Why can't I run docker in the node image?
This doesn't work because mounting the socket only gives the container access to the host's Docker daemon; the container also needs the docker client binary installed inside it, and the official node image does not include it.
Try any other general-purpose image besides the docker image and you will hit the same error.
Search for an image that bundles Node.js together with the Docker CLI and use that, and it will work.
If no such image exists, you have to build a new image that combines the two, for example by starting from the node image and installing the Docker client on top.
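A minimal sketch of that last option, assuming a Debian-based node tag where the docker.io package is available:
FROM node:18
# Install the Docker CLI so the container can talk to the daemon behind the mounted socket
RUN apt-get update && apt-get install -y docker.io && rm -rf /var/lib/apt/lists/*
Run it with the socket mounted as in the question, and the docker command inside the container will reach the host's daemon.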
I am working on a Jenkins SSH agent for my builds.
I want to have Docker installed in it so it can run and build Docker images.
I currently have the following in my Dockerfile:
RUN curl -fsSL get.docker.com -o /opt/get-docker.sh
RUN chmod +x /opt/get-docker.sh
RUN sh /opt/get-docker.sh
This works fine when I run the container with:
docker run -v /var/run/docker.sock:/var/run/docker.sock <image>
The issue I'm having is that when I run docker ps inside the container, it shows all of the host's containers as well. Is there a way to prevent this?
If you mount the host's /var/run/docker.sock, your docker client will connect to the host's Docker daemon, and so it will see everything that is running on the host.
To let your containers run Docker in a way that is isolated from the host, you should investigate Docker-in-Docker (dind).
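A rough sketch of the Docker-in-Docker route (the container name, network name and TLS setting here are illustrative, not a drop-in config):
# Run a separate, privileged Docker daemon in its own container
docker network create jenkins-net
docker run -d --privileged --name dind --network jenkins-net -e DOCKER_TLS_CERTDIR="" docker:dind
# Point the agent's docker client at that daemon instead of the host's socket
docker run -it --network jenkins-net -e DOCKER_HOST=tcp://dind:2375 <image>
With that setup, docker ps inside the agent only shows containers started through the inner daemon.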
I would like to make my own Node-RED docker image so when I start it the flows are loaded and Node-RED is ready to go.
The flow I want to load is stored in a 'flows.json' file, and when I import it manually via the interface it works fine.
The Node-RED documentation for docker suggests the following line for starting Node-RED with a custom flow
$ docker run -it -p 1880:1880 -e FLOWS=my_flows.json nodered/node-red-docker
However when I try to do this the flow ends up empty.
I suspect this has to do something with the fact that the flow I'm trying to load is using the 'node-red-node-mongodb' plug-in, which is not installed by default.
How can I build a Node-RED image where the 'node-red-node-mongodb' is already installed?
If any more information is required, please ask.
UPDATE
I made the following Dockerfile:
FROM nodered/node-red-docker
RUN npm install node-red-node-mongodb
Then I build it with:
docker build -t testenvironment/nodered .
And started it with:
docker run -d -p 1880:1880 -e FLOWS=flows.json --name node-red testenvironment/nodered
But when I go to the Node-RED interface there is no flow, and I don't see the MongoDB node in the sidebar either.
The documentation on the Node-RED site includes instructions for how to customise a Docker image and add extra nodes. You can either do it by opening a shell in the running container with docker exec and installing the node by hand with npm:
# Open a shell in the container
docker exec -it mynodered /bin/bash
# Once inside the container, npm install the nodes in /data
cd /data
npm install node-red-node-mongodb
exit
# Restart the container to load the new nodes
docker stop mynodered
docker start mynodered
Alternatively, you can extend the image by creating your own Dockerfile:
FROM nodered/node-red-docker
RUN npm install node-red-node-mongodb
And then build it with
docker build -t mynodered:<tag> .
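To get your flow loaded as well, the flows.json file also has to end up in the container's /data directory; one way (the paths here are illustrative) is to mount it when starting your extended image:
docker run -d -p 1880:1880 -v $(pwd)/flows.json:/data/flows.json -e FLOWS=flows.json --name mynodered mynodered:<tag>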
I have a perplexing Docker problem. I am running Docker on my Mint laptop and on an Ubuntu VPS. I have been able to build images in the past locally and send them to the server and have them run there. However, for clarity, the ones that worked were probably built when I was running Ubuntu locally (more on that later).
I have an example based on Alpine:
FROM alpine:3.5
# Do a system update
RUN apk update
ENTRYPOINT ["sleep", "3"]
I build like so, and send to the remote:
docker build -t alpine-sleep .
docker save alpine-sleep | gzip > alpine-sleep.tgz
rsync --progress alpine-sleep.tgz myserver.example.com:/path/to/images/
I then unpack/import on the remote, and run, thus:
docker import /path/to/images/alpine-sleep.tgz alpine-sleep
docker run -it alpine-sleep
I get this console reply:
docker: Error response from daemon: No command specified.
See 'docker run --help'.
However, if I copy the Dockerfile to the remote, then do this:
docker build -t alpine-sleep-localbuild .
docker run -it alpine-sleep-localbuild
then I get the sleep working fine.
My Docker and kernel versions locally:
jon@jvb ~/alpine_test $ uname -r
4.4.0-79-generic
jon@jvb ~/alpine_test $ docker -v
Docker version 1.12.6, build 78d1802
And remotely:
root@vps:~/alpine-sleep# uname -r
3.13.0-24-generic
root@vps:~/alpine-sleep# docker -v
Docker version 17.05.0-ce, build 89658be
I wonder, does the major difference in the kernel make a difference? I expect 3.13 to 4.4 is quite a big jump. I don't recall what version of the kernel I was using when I built things while I was running Ubuntu locally, but it would not surprise me if it was 3.x.
The other thing that strikes me as unexpected is the high variation in Docker version numbers. How do I have version 1.x locally, and 17.x remotely? Has the project been through a version re-numbering?
Update
I've just checked the kernel version when I was running Ubuntu locally, and that was:
4.4.0-75-generic
So, this makes me think that a major kernel discrepancy could not be to blame.
The issue is that Docker won't warn you when you use the wrong combination of save/load and export/import. You save/load an image, and you export/import a tar file from a container. Since you are doing a docker save to save your image, you need to do a docker load to restore it on the other host:
docker load < /path/to/images/alpine-sleep.tgz
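For reference, the two pairings look like this (the container id and the alpine-rootfs name are placeholders):
# Images: save on one host, load on the other; metadata such as ENTRYPOINT survives
docker save alpine-sleep | gzip > alpine-sleep.tgz
docker load < alpine-sleep.tgz
# Containers: export a container's filesystem, import it as a new image; metadata is lost
docker export <container_id> | gzip > rootfs.tgz
docker import rootfs.tgz alpine-rootfs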
I have found this very old issue: https://github.com/moby/moby/issues/1826
An image imported via docker import won't know what command to run. Any image will lose all of its associated metadata on export, so the default command won't be available after importing it somewhere else.
So, run it with the entrypoint:
docker run --entrypoint sleep alpine-sleep 3
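You can see the missing metadata directly by inspecting the imported image (a quick check; the format string just prints the stored entrypoint and default command):
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' alpine-sleep
Running the same check against the locally built image, or against one restored with docker load, shows the sleep entrypoint from the Dockerfile.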
I am looking for a Docker image that is just some *nix flavor with NPM and Node.js installed.
This image
https://hub.docker.com/_/node/
requires that a package.json file is present: its Docker build uses COPY to copy the package.json file over, and it also expects a Node.js start script to run when the container starts.
...I just need a container to run a shell script using this technique:
docker exec mycontainer /path/to/test.sh
Which I discovered via:
Running a script inside a docker container using shell script
I don't need a package.json file or a Node.js start script; all I want is:
- a container image
- Node.js and NPM installed
Does anyone know if there is a Docker image for Node.js / NPM that does not require a package.json file? Perhaps I should just use a plain old base image and add the commands to install Node myself?
Alright, I tried to make this a simple question, unfortunately nobody could provide a simple answer...until now!
Instead of using this base image:
FROM node:5-onbuild
We use this instead:
FROM node:5
I read about onbuild and could not figure out what it's about, but it adds more than I needed for my use case.
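For context, the onbuild variant bakes in ONBUILD triggers roughly like the following (paraphrased from memory of the old node onbuild images, so treat it as an approximation), which is why it insists on a package.json and a start script:
ONBUILD COPY package.json /usr/src/app/
ONBUILD RUN npm install
ONBUILD COPY . /usr/src/app
CMD ["npm", "start"]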
The code below is in our Dockerfile:
# 1. start with this image as a base
FROM node:5
# 2. copy the script from real-life into the container (magic)
COPY script.sh /usr/src/app/
# 3. define container entry point which will run our script
ENTRYPOINT ["/bin/bash", "/usr/src/app/script.sh"]
you build the docker image like so:
docker build -t foo .
then you run the image like so, which will "run the entrypoint":
docker run -it --rm foo
The container's stdout should stream to the terminal where you ran docker run, which is good (am I asking too much?).
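For completeness, a trivial script.sh you could drop next to that Dockerfile just to prove the toolchain is there (the contents are only an illustration):
#!/bin/bash
# Confirm Node.js and npm are available inside the container
node --version
npm --version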