Run docker command inside a node container

I have a Node.js application inside a docker container, and I'm trying to run another docker image from within that container.
I mounted the docker socket into the container, started it, and opened a shell inside:
docker run -it -v /var/run/docker.sock:/var/run/docker.sock -w /root node bash
When I type docker in the terminal, I get an error:
bash: docker: command not found
This happens specifically with the Node.js image; if, for example, I run the same test with the docker image:
docker run -v /var/run/docker.sock:/var/run/docker.sock \
-ti docker
It works great.
Why can't I run docker in the node image?

This doesn't work because mounting the docker socket only gives the container access to the host's docker daemon; the container also needs the docker client binary to talk to it. The official node image does not ship the docker CLI, hence command not found, while the docker image does. The same will happen with any other general-purpose image that doesn't bundle the client.
Either look for a node image that also includes the docker client, or build your own image that installs the docker CLI on top of node.
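A minimal sketch of that second option, assuming a Debian-based node image (the docker.io package name is Debian's; other distributions differ):
FROM node:latest
# Install the docker client so the container can talk to the mounted socket
RUN apt-get update && apt-get install -y docker.io && rm -rf /var/lib/apt/lists/*
Build it with docker build -t node-docker . and run it with the same -v /var/run/docker.sock:/var/run/docker.sock mount as above; docker should then be on the PATH.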

How to get back to a shell in the node:latest docker image?

I'm a newbie to docker. I tried this command:
docker run -it node:latest
then, I was in the node REPL,
Welcome to Node.js v16.3.0.
Type ".help" for more information.
>
I tried Ctrl+C, but that exits the container.
Is there any way to get to a shell in this image?
In order to override the entrypoint of the docker image you're using, you need to pass the --entrypoint flag to docker run:
docker run -it --entrypoint bash node:latest
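Alternatively, since the official node image starts the REPL via its default command (node) rather than its entrypoint, overriding just the command drops you into a shell as well:
docker run -it node:latest bash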
For a better understanding of how to work with an already running docker container, you can refer to the following question.

Parent Docker Containers using Docker in Docker

I am working on a Jenkins SSH agent for my builds.
I want to have docker installed in it so it can run and build docker images.
I currently have the following in my Dockerfile:
RUN curl -fsSL get.docker.com -o /opt/get-docker.sh
RUN chmod +x /opt/get-docker.sh
RUN sh /opt/get-docker.sh
This works fine when I run the container with (options have to come before the image name):
docker run -v /var/run/docker.sock:/var/run/docker.sock <image>
The issue I'm having is that when I run docker ps within the container, it shows all my parent containers as well. Is there a way to prevent this?
If you mount the host's /var/run/docker.sock, your docker client connects to the host's docker daemon and therefore sees everything that is running on the host.
To make your containers run docker in a way that is isolated from the host, you should investigate Docker-in-Docker (dind).
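A minimal sketch of that approach using the official docker:dind image (the network and container names here are made up, and DOCKER_TLS_CERTDIR="" disables the TLS that newer dind images enable by default):
# Run an isolated docker daemon; dind requires --privileged
docker network create ci
docker run --privileged -d --name dind --network ci -e DOCKER_TLS_CERTDIR="" docker:dind
# A client pointed at the inner daemon sees only that daemon's containers
docker run --rm --network ci -e DOCKER_HOST=tcp://dind:2375 docker docker ps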

Building a custom Node-RED image

I would like to make my own Node-RED docker image so when I start it the flows are loaded and Node-RED is ready to go.
The flow I want to load is in a 'flows.json' file, and when I import it manually via the interface it works fine.
The Node-RED documentation for docker suggests the following line for starting Node-RED with a custom flow
$ docker run -it -p 1880:1880 -e FLOWS=my_flows.json nodered/node-red-docker
However, when I try to do this, the flow ends up empty.
I suspect this has something to do with the fact that the flow I'm trying to load uses the 'node-red-node-mongodb' plug-in, which is not installed by default.
How can I build a Node-RED image with 'node-red-node-mongodb' already installed?
If any more information is required, please ask.
UPDATE
I made the following Dockerfile:
FROM nodered/node-red-docker
RUN npm install node-red-node-mongodb
Then I built it with:
docker build -t testenvironment/nodered .
And started it with:
docker run -d -p 1880:1880 -e FLOWS=flows.json --name node-red testenvironment/nodered
But when I go to the Node-RED interface there is no flow. Also, I don't see the MongoDB node in the sidebar.
The documentation on the Node-RED site includes instructions for customising a Docker image and adding extra nodes. You can either do it by logging into the running container using docker exec and installing the node by hand with npm:
# Open a shell in the container
docker exec -it mynodered /bin/bash
# Once inside the container, npm install the nodes in /data
cd /data
npm install node-red-node-mongodb
exit
# Restart the container to load the new nodes
docker stop mynodered
docker start mynodered
Alternatively, you can extend the image by creating your own Dockerfile:
FROM nodered/node-red-docker
RUN npm install node-red-node-mongodb
And then build it with
docker build -t mynodered:<tag> .
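To also get the flow preloaded, the missing piece in the update above, here is a sketch that bakes the flow file into the image; it assumes flows.json sits next to the Dockerfile and that the image's user directory is /data:
FROM nodered/node-red-docker
RUN npm install node-red-node-mongodb
# Bake the flow into the user directory so FLOWS=flows.json can find it
COPY flows.json /data/flows.json
Then docker run -d -p 1880:1880 -e FLOWS=flows.json mynodered:<tag> should come up with both the MongoDB node and the flow in place.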

Docker image for Sails.js development on macOS hangs

I have a docker image built on Arch Linux (sailsjs-dev) with Node and Sails.js, which I would like to use for development, mounting the app directory inside the container as follows:
docker run --rm --name testapp -p 1337:1337 -v $PWD:/app \
sailsjs-dev sails lift
$PWD is the directory with the sails project.
This works fine on Linux, but if I try to run it on macOS (with docker-machine) it hangs forever at the very beginning, even with the log level set to silly (in config/log.js):
info: Starting app...
There is no other output; this is all we get.
Note that the same docker image also works perfectly on the Mac with an Express app. What could be peculiar to Sails that causes the problem?
I can also add that on the Mac, docker runs inside a VirtualBox instance managed by docker-machine.
We solved it by running npm install from within the docker container:
docker run --rm --name testapp -p 1337:1337 -ti -v $PWD:/app \
sailsjs-dev /bin/bash
npm install --no-bin-links
--no-bin-links avoids the creation of symlinks, which VirtualBox shared folders do not support.
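A one-shot variant of the same fix, assuming the image's working directory is /app as the sails lift invocation above implies:
docker run --rm --name testapp -p 1337:1337 -ti -v $PWD:/app \
sailsjs-dev /bin/bash -c "npm install --no-bin-links && sails lift"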

Docker - Volume not mounting latest files in container

The Problem
When I start a new container in Docker, I want to mount a volume so that I get the latest updates to any files on my host machine and can work with them in my container. However, what I am finding is that Docker is mounting my volumes when I build the image. What I want instead is to mount the volumes when I create a new container.
Because I am using Docker to manage my development environment, this means that whenever I update a little piece of code, I have to rebuild my development environment Docker image, which usually takes around 20-30 minutes. Obviously, this is not the functionality I want from Docker.
Here is what I am using to build my development environment container:
Dockerfile
# This docker file constructs an Ubuntu development environment and configures the compiler/libs needed
FROM ubuntu:latest
ADD . /gdms-rcon/liaison
WORKDIR /gdms-rcon/liaison
RUN rm -rf ./build
RUN apt-get update
RUN apt-get install -y -f gcc g++ qtbase5-dev cmake mysql-client
fig.yml
liaison:
  build: ./liaison/
  command: /bin/bash
  volumes:
    - liaison:/gdms-rcon/liaison
  working_dir: /gdms-rcon/liaison
I also use a fig.yml file to make it easier to build.
To run, I use: fig build
To access my container to compile my source code, I use: docker run -it <container_id>
Maybe I'm doing something wrong with my commands? I don't use fig up because it won't give me an interactive shell, so I use docker run -it <container_id> instead. I chose to use fig so that it would mount my volumes automatically, but it isn't working as I would have hoped.
If you're not using fig to start the container, the volumes line in your fig.yml isn't doing anything useful. If you need an interactive shell, fig is not really the tool for you.
Just docker build your image like normal, and then use the -v flag to docker run to mount the volume:
docker run -it -v <hostpath>:<containerpath> <imageid>
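For this particular project that could look like the following (the liaison-dev tag is made up; host paths must be absolute, hence $PWD):
docker build -t liaison-dev ./liaison
docker run -it -v "$PWD/liaison":/gdms-rcon/liaison liaison-dev /bin/bash
Because a bind mount is resolved at docker run time, edits on the host show up in the container immediately, with no image rebuild.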
