How to know which node:alpine image version is in running container - node.js

I have a running container which uses node:alpine as its base image. I want to know the version of this image.

You can check the Dockerfile the container was built from, if you have it handy.
The first line will be FROM node:<version>-alpine.
For example:
FROM node:12.18.1-alpine
ENV NODE_ENV=production
WORKDIR /app
You can also use the command docker image inspect
https://docs.docker.com/engine/reference/commandline/image_inspect/
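For example, a couple of inspect commands that can help here (the container name is a placeholder, and the exact fields depend on how the image was built; the official node images usually bake a NODE_VERSION variable into the environment):
# Which image the running container was started from
docker inspect --format '{{.Config.Image}}' <container-name>
# Environment baked into the image; node images usually list NODE_VERSION here
docker image inspect node:alpine --format '{{.Config.Env}}'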
You can also exec into the container to check the Node version:
docker exec -ti <Container name> sh -c "node --version"

The <version> in FROM node:<version>-alpine is the Node release the image was built from.
You can also check which Alpine release the image is based on from inside the running container, as shown below.
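For instance, a quick way to read the Alpine release inside the running container (the container name is a placeholder):
# /etc/alpine-release holds the Alpine version string, e.g. 3.12.0
docker exec -ti <container-name> cat /etc/alpine-release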

Related

Docker image "node" terminates before I can access it (using azure cli)

I am trying to deploy my image, which is based on node (node:latest), to Azure. When I do, it terminates automatically and does not let me do what I need to do with it.
My docker file:
WORKDIR /usr/src/app
COPY package.json .
COPY artillery-scripts.sh .
COPY images images
COPY src src
EXPOSE 80
RUN npm install -g artillery && \
npm install faker && \
npm install worker && \
npm install -g node-fetch -save && \
npm install -g https://github.com/preguica/artillery-plugin-metrics-by-endpoint.git
I have tried adding && \ while true; do echo SLEEP; sleep 10; done at the end so it wouldn't terminate automatically but that produces an error.
Anyone know what this problem is?
It is probably good to first try it all locally. It seems you misunderstand some fundamental parts of Docker.
Writing something that will pause in your Dockerfile makes no sense at all, since that file is for building the image, not running the container.
Once you have the image, you can run one or more containers based on this image.
Usually you will want to put a CMD or ENTRYPOINT at the end that will tell the container what command to run. Read this article which gives a pretty good explanation of both.
If you want to interact with the container, look into the -i and -t (or -it for short) flags of the run command. When you run your container, you can also provide a command; this will override any command given in CMD, or be appended to anything in ENTRYPOINT.
If you do not write an ENTRYPOINT or CMD it will default to running a shell.
However, if you run it without -it, it will start the shell, consider its work done and stop immediately.
Again, if you want to start a specific script for instance, you can add a line to the end of your Dockerfile such as
CMD ["node", "somefile.js"]
So first build your image based on the dockerfile, then run the container based on the image:
docker build -t some-image:some-tag .
docker run -it some-image:some-tag        # will run the CMD, ["node", "somefile.js"], or:
docker run -it some-image:some-tag node   # will override it and just run node
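As a rough sketch of the ENTRYPOINT behaviour mentioned above (this assumes the Dockerfile had an exec-form ENTRYPOINT ["node"] instead; the image name and file are placeholders):
# With ENTRYPOINT ["node"] in the Dockerfile, anything after the image name is
# appended as arguments, so this effectively runs "node otherfile.js":
docker run -it some-image:some-tag otherfile.js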
You can install Docker locally and do all of that on your local machine; once you get a feel for it, and once you are sure your Dockerfile is correct, see how to deploy it to Azure. That way it is easier to debug and learn.
Extra tip: you wrote EXPOSE 80. Read the docs on EXPOSE and publishing because it can be confusing when you start out. EXPOSE is just there for documentation; it does NOT actually expose anything. If you want to connect to the container from the outside world, you have to publish the port. This is done in the run command:
docker run -it -p 80:80 some-image:some-tag   # the first is the host port, the second is the container port

How to get back to shell in nodejs:latest docker image?

I'm a newbie to Docker. I tried this command:
docker run -it node:latest
Then I was in the Node REPL:
Welcome to Node.js v16.3.0.
Type ".help" for more information.
>
I tried Ctrl+C, but this exits the container.
Is there any way to get to a shell in this image?
In order to override the entrypoint of the Docker image you're using, you need to use the --entrypoint flag in the run command.
docker run -it --entrypoint bash node:latest
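Alternatively, you can simply pass the shell as the command when starting the container; node:latest is Debian-based, so bash is available there:
# Overrides the default CMD (node) and drops you into a shell instead
docker run -it node:latest bash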
For a better understanding of how to work with an already running Docker container, you can refer to the following question.

Building a custom Node-RED image

I would like to make my own Node-RED docker image so when I start it the flows are loaded and Node-RED is ready to go.
The flow I want to load is placed in a 'flows.json' file. And when I import it manually via the interface it works fine.
The Node-RED documentation for docker suggests the following line for starting Node-RED with a custom flow
$ docker run -it -p 1880:1880 -e FLOWS=my_flows.json nodered/node-red-docker
However when I try to do this the flow ends up empty.
I suspect this has something to do with the fact that the flow I'm trying to load uses the 'node-red-node-mongodb' plug-in, which is not installed by default.
How can I build a Node-RED image where the 'node-red-node-mongodb' is already installed?
If any more information is required, please ask.
UPDATE
I made the following Dockerfile:
FROM nodered/node-red-docker
RUN npm install node-red-node-mongodb
Then I build it with:
docker build -t testenvironment/nodered .
And started it with:
docker run -d -p 1880:1880 -e FLOWS=flows.json --name node-red testenvironment/nodered
But when I go to the Node-RED interface there is no flow. Also I don't see the MongoDB node in the sidebar.
The documentation on the Node-RED site includes instructions for how to customise a Docker image and add extra nodes. You can either do it by logging into the running container using docker exec and installing the node by hand with npm:
# Open a shell in the container
docker exec -it mynodered /bin/bash
# Once inside the container, npm install the nodes in /data
cd /data
npm install node-red-node-mongodb
exit
# Restart the container to load the new nodes
docker stop mynodered
docker start mynodered
Or you can extend the image by creating your own Dockerfile:
FROM nodered/node-red-docker
RUN npm install node-red-node-mongodb
And then build it with
docker build -t mynodered:<tag> .
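Once that image is built, a run command along these lines should pick up both the extra node and your flow file (the paths and names are just examples; this assumes /data is the Node-RED user directory, as in the standard image):
# Mount the flow file into the user directory and point FLOWS at it
docker run -d -p 1880:1880 -v $(pwd)/flows.json:/data/flows.json -e FLOWS=flows.json --name mynodered mynodered:<tag>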

Create docker container with Node.js/NPM preinstalled but no package.json

I am looking for a Docker image that is just some *nix flavor with NPM and Node.js installed.
This image
https://hub.docker.com/_/node/
requires that a package.json file is present, and the Docker build uses COPY to copy the package.json file over, and it also looks for a Node.js script to start when the build is run.
...I just need a container to run a shell script using this technique:
docker exec mycontainer /path/to/test.sh
Which I discovered via:
Running a script inside a docker container using shell script
I don't need a package.json file or a Node.js start script; all I want is:
a container image
Node.js and NPM installed
Does anyone know if there is a Docker image for Node.js/NPM that does not require a package.json file? Perhaps I should just use a plain old base image and add the commands to install Node myself?
Alright, I tried to make this a simple question, unfortunately nobody could provide a simple answer...until now!
Instead of using this base image:
FROM node:5-onbuild
We use this instead:
FROM node:5
I read about onbuild and could not figure out what it's about, but it adds more than I needed for my use case.
The code below is in our Dockerfile:
# 1. start with this image as a base
FROM node:5
# 2. copy the script from real-life into the container (magic)
COPY script.sh /usr/src/app/
# 3. define container entry point which will run our script
ENTRYPOINT ["/bin/bash", "/usr/src/app/script.sh"]
You build the Docker image like so:
docker build -t foo .
Then you run the image like so, which will run the entrypoint:
docker run -it --rm foo
The container's stdout should stream to the terminal where you ran docker run, which is good (am I asking too much?).
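For reference, script.sh can be any ordinary shell script; a trivial placeholder that just proves Node and NPM are available inside the container might look like:
#!/bin/bash
# Print the tool versions so the container produces some visible output
node --version
npm --version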

Cannot find module 'express' (Node app with Docker)

I'm a newbie with Docker and I'm trying to start with Node.js, so here is my question.
I have this Dockerfile inside my project:
FROM node:argon
# Create app directory
RUN mkdir -p /home/Documents/node-app
WORKDIR /home/Documents/node-app
# Install app dependencies
COPY package.json /home/Documents/node-app
RUN npm install
# Bundle app source
COPY . /home/Documents/node-app
EXPOSE 8080
CMD ["npm", "start"]
When I run a container with docker run -d -p 49160:8080 node-container it works fine.
But when I try to map my host project with the container directory (docker run -p 49160:8080 -v ~/Documentos/nodeApp:/home/Documents/node-app node-cont) it doesn't work.
The error I get is: Error: Cannot find module 'express'
I've tried other solutions from related questions but nothing seems to work for me (I know, I'm just too much of a rookie at this).
Thank you !!
When you run your container with the -v flag, which mounts a directory from the Docker host into the container, the mount will hide whatever the image already has at /home/Documents/node-app, such as the result of npm install.
So you cannot see the node_modules directory in the container.
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /webapp. If the path /webapp already exists inside the container’s image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
That is how you mount a host directory as a data volume. As the docs say, the pre-existing content will not be removed, but they say little about what happens to the existing directory inside the container while the mount is in place.
Here is an example to support this.
Dockerfile
FROM alpine:latest
WORKDIR /usr/src/app
COPY . .
I create a test.t file in the same directory as the Dockerfile.
Proof:
Run docker build -t test-1 .
Run docker run --name test-c-1 -it test-1 /bin/sh; this drops you into a shell in the container.
Run ls -l in that shell; it will show the test.t file.
Now use the same image.
Run docker run --name test-c-2 -v /home:/usr/src/app -it test-1 /bin/sh. You cannot find the test.t file in the test-c-2 container.
That's all. I hope it helps.
I recently faced a similar issue.
Upon digging into the Docker docs I discovered that when you run the command
docker run -p 49160:8080 -v ~/Documentos/nodeApp:/home/Documents/node-app node-cont
the directory on your host machine (the left side of the ':' in the -v option argument) will be mounted onto the target directory in the container, /home/Documents/node-app,
and since your target directory is the working directory and therefore non-empty,
"the directory's existing contents are obscured by the bind mount."
I faced a similar problem recently. It turned out the problem was my package-lock.json: it was out of date relative to the package.json, and that was causing my packages not to be downloaded when running npm install.
I just deleted it and the build went OK.
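In shell terms that fix is roughly the following (the image tag here is just the one from the question):
# Remove the stale lock file and rebuild without the cache so npm install runs fresh
rm package-lock.json
docker build --no-cache -t node-container .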
