I need to run a Node.js application in a Docker container. I'm not an expert in Linux, so it's a bit hard for me to understand how to do that. The whole application is stored on GitHub (https://github.com/kashesandr/NRTC). The app uses the serialport module (https://github.com/voodootikigod/node-serialport), which is compiled with node-gyp, and in my case the serial port is a virtual one provided by a USB2Serial driver
(http://www.prolific.com.tw/US/ShowProduct.aspx?pcid=41).
I want to create a separate docker container for the app. Could you please help me?
This question is very vague.
There is an official image on Docker Hub for building Node-based images, and there is plenty of "how to" info in the image's readme. The only tricky part, as I see it, is how to access the serial port from within the container. I believe this requires running the container in privileged mode, while ensuring that the device node exists inside the container as well. Of course, the USB2Serial driver needs to be installed on the host operating system.
I'd suggest spinning up the official node image in interactive mode and trying to install/run your app inside it manually; then you can figure out a script based on that later:
docker run -it --privileged -v /dev:/dev -v path-to-your-app:/usr/src/your-app node:4.4.0 /bin/bash
root@3dd71f11f02f:/# node --version
v4.4.0
root@3dd71f11f02f:/# npm --version
2.14.20
root@3dd71f11f02f:/# gcc --version
gcc (Debian 4.9.2-10) 4.9.2
As you can see, this gives you interactive (-it) root access inside the container, which has everything you probably need, with the same /dev structure as on the host OS (-v /dev:/dev binds it), so there should be no problem accessing ports. (Refine the -v /dev:/dev binding to something more specific later for security reasons, as shown below.) If you need anything else that is not installed by default, add it via apt-get (e.g. apt-get update && apt-get install [package]), as the official node image is based on Debian Jessie.
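For example, if your Prolific adapter shows up on the host as /dev/ttyUSB0 (check dmesg or ls /dev/tty* on the host; the exact device name is an assumption here), you could pass just that device instead of the whole /dev tree — with --device you may not even need --privileged, but test that for your setup:

docker run -it --device=/dev/ttyUSB0 -v path-to-your-app:/usr/src/your-app node:4.4.0 /bin/bash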
After you've figured out how to run the app (npm install, node-gyp, whatever), writing a Dockerfile should be straightforward.
FROM node:4.4.0
RUN npm install ...\
&& steps\
&& to && be && executed && inside && the && image
CMD /your/app/start/script.sh
... and do a docker build, then run your image with --privileged in non-interactive mode (without -it) in production.
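For example (a sketch only; your-app is a placeholder image tag, the device path is the same assumption as above, and the Dockerfile's CMD is what actually starts the app):

docker build -t your-app .
docker run -d --privileged --device=/dev/ttyUSB0 --name your-app your-app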
Related
I am building some base Docker images for my organization, to be used by application teams when they deploy their applications in OpenShift. One of the images I have to make is a NodeJS image (we want our images to be internal rather than sourced from DockerHub). I am building on RedHat's RHEL7 Universal Base Image (ubi). However, I am having trouble configuring NodeJS to work in the container. Here is my Dockerfile:
FROM myimage_rhel7_base:1.0
USER root
RUN INSTALL_PKGS="rh-nodejs10 rh-nodejs10-npm rh-nodejs10-nodejs-nodemon nss_wrapper" && \
yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS && \
rpm -V $INSTALL_PKGS && \
yum clean all
USER myuser
However, when I run the image, there are no node or npm commands available unless I run scl enable rh-nodejs10 bash. This does not work in the Dockerfile, as it creates a subshell that will not be usable by a user accessing the container.
I have tried installing from source, but I ran into a different issue: I would need to upgrade the gcc/g++ versions, which are not available in the repos my org has configured. I also figure that if I can get NodeJS to work from the package manager, it will help with getting security patches and such should the package be updated.
My question is, what are the recommended steps to create an image that can be used to build applications running on NodeJS?
Possibly this is a case where the best code is code you don't write at all. Take a look at https://github.com/sclorg/s2i-nodejs-container
It is a project that creates an image that has nodejs installed. This might be a perfect solution out of the box, or it could also serve as a great example of what you're trying to build.
Also, their readme attempts to describe how they get around the scl enable command.
Normally, SCL requires manual operation to enable the collection you want to use. This is burdensome and can be prone to error. The OpenShift S2I approach is to set Bash environment variables that serve to automatically enable the desired collection:
BASH_ENV: enables the collection for all non-interactive Bash sessions
ENV: enables the collection for all invocations of /bin/sh
PROMPT_COMMAND: enables the collection in interactive shell
Two examples:
* If you specify BASH_ENV, then all your #!/bin/bash scripts do not need to call scl enable.
* If you specify PROMPT_COMMAND, then on execution of the podman exec ... /bin/bash command, the collection will be automatically enabled.
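In a plain Dockerfile that usually translates to pointing those variables at a small script that sources the collection. A minimal sketch, assuming the rh-nodejs10 collection and a helper script at /opt/app-root/etc/scl_enable (both the script and its path are choices you would make, not something the base image provides):

FROM myimage_rhel7_base:1.0
USER root
RUN INSTALL_PKGS="rh-nodejs10 rh-nodejs10-npm rh-nodejs10-nodejs-nodemon nss_wrapper" && \
    yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS && \
    yum clean all && \
    mkdir -p /opt/app-root/etc && \
    echo "source scl_source enable rh-nodejs10" > /opt/app-root/etc/scl_enable
# Every bash, sh and interactive shell now sources the enable script automatically
ENV BASH_ENV=/opt/app-root/etc/scl_enable \
    ENV=/opt/app-root/etc/scl_enable \
    PROMPT_COMMAND=". /opt/app-root/etc/scl_enable"
USER myuser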
In the end I decided to install node using the binaries rather than our rpm server. Here is the implementation:
FROM myimage_rhel7_base:1.0
USER root
# Get node distribution from nexus and install it
RUN wget -P /tmp http://myrepo.example.com/repository/node/node-v10.16.3-linux-x64.tar.xz && \
tar -C /usr/local --strip-components 1 -xf /tmp/node-v10.16.3-linux-x64.tar.xz && \
rm /tmp/node-v10.16.3-linux-x64.tar.xz
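The tarball extracts node and npm into /usr/local/bin, which is already on the PATH, so an optional sanity check at the end of the build is enough to confirm the install (an extra layer you can drop later):

RUN node --version && npm --version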
I have a React app that I would like to Dockerize for Windows containers. This is my Dockerfile:
FROM stefanscherer/node-windows
# Override the base log level (info).
ENV NPM_CONFIG_LOGLEVEL warn
# Expose port for service
EXPOSE 80
# Install and configure `serve`.
RUN npm install -g serve
# Copy source code to image
COPY . .
# Install dependencies
RUN npm install
# Build app and start server from script
CMD [ "npm", "start" ]
The image is successfully built, but when I try to run it I get this error:
Error response from daemon: container 3b4b9e2bab346bbd95b9dc144429026c1abbe7f4d088f1f10d4c959364f50e9e encountered an error during CreateProcess: failure in a Windows system call: The system cannot find the file specified. (0x2) extra info: {"CommandLine":"npm start","WorkingDirectory":"C:\\","Environment":{"NPM_CONFIG_LOGLEVEL":"warn"},"CreateStdInPipe":true,"CreateStdOutPipe":true,"CreateStdErrPipe":true,"ConsoleSize":[0,0]}.
I am new to Docker, so I'm not sure if I am missing something. Any ideas?
This error is probably because the base image is Nano Server in this case, and react-scripts doesn't work well there. Also, the docker images from stefanscherer/node-windows aren't updated (the latest NodeJS versions in those images are 12.x).
Because of this, I made a new docker image with some LTS versions, such as 14.19.0 and 16.17.0.
The docker image is henriqueholtz/node-win, where the tags are the NodeJS versions.
Note: for now, NodeJS doesn't have an official image for Windows containers.
In the README on Docker Hub, you can see an example and links to some articles with more examples.
See some articles with examples:
How to run ReactJs app on Windows container
How to execute windows container with NodeJs
Below is an example of running your create-react-app (obviously, you must change the volume to your folder; use PowerShell):
docker run -t -p 3000:3000 --name=my-own-cra-windows-container -v C:\Projects\my-own-cra\:C:\app\ henriqueholtz/node-win:16.17.0 cmd /c "npm -v & node -v & npm start"
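If you prefer to bake the app into an image instead of mounting it, a Dockerfile on top of that image could look roughly like this (an untested sketch; the tag, working directory and start command are assumptions you would adjust):

FROM henriqueholtz/node-win:16.17.0
WORKDIR C:\\app
COPY . .
RUN npm install
CMD ["cmd", "/c", "npm start"]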
I am new to Docker, and I ran these two commands in my Mac terminal:
docker pull amazonlinux
docker run -v $(pwd):/lambda-project -it amazonlinux
After running these two commands, I ended up in the Linux terminal, where I installed NodeJS and a few node modules:
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.6/install.sh | bash
. ~/.nvm/nvm.sh
nvm install 6.11.5
npm -v
npm install --global serverless
Everything worked fine so far: I was able to run npm -v and it showed me the npm version, and serverless -v also worked fine.
Then I ran exit and came out of the container into my local terminal.
Then I entered my container again using the command below:
docker run -v $(pwd):/lambda-project -it amazonlinux
This time my installations were gone; npm -v gave me "command not found".
My question is: how can I save the state or the modules installed in a container, and how can I log back into the container to keep working after exiting it?
Each docker run command starts another new container. Run docker ps --all and you will see all containers (including exited ones) and their IDs. You can restart an exited container with docker restart <id>; it is then running again. With docker attach <id> you are back inside the container. All installed libraries should still be present, but:
The downloaded nvm script only sets some shell variables in the current shell. After attaching to the container, you can source it again: . ~/.nvm/nvm.sh. Now you can access npm again. (The install script also prints out what it did and what you should do to keep those changes.)
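Putting that together, the round trip looks roughly like this (<id> is a placeholder for whatever docker ps prints for your container):

docker ps --all        # find the exited container's ID
docker restart <id>    # start it again
docker attach <id>     # get a shell back inside it
. ~/.nvm/nvm.sh        # re-enable nvm/npm in this shell
npm -v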
If you want to keep all those changes and use them regularly, you can write a Dockerfile that builds an image with all those libraries already installed. This official page gets you started with writing Dockerfiles: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
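A minimal sketch of such a Dockerfile, based on the exact commands from the question (the nvm paths and the extra tar/gzip packages are assumptions; adjust versions as needed):

FROM amazonlinux
# tar and gzip are needed by nvm to unpack node; harmless if already present
RUN yum install -y tar gzip && \
    curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.6/install.sh | bash && \
    /bin/bash -c '. /root/.nvm/nvm.sh && nvm install 6.11.5 && npm install -g serverless'
# Put the nvm-managed node on the PATH for containers started from this image
ENV NVM_DIR=/root/.nvm
ENV PATH=$NVM_DIR/versions/node/v6.11.5/bin:$PATH
WORKDIR /lambda-project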
I would like to make my own Node-RED docker image so that when I start it, the flows are loaded and Node-RED is ready to go.
The flow I want to load is in a 'flows.json' file, and when I import it manually via the interface it works fine.
The Node-RED documentation for docker suggests the following line for starting Node-RED with a custom flow
$ docker run -it -p 1880:1880 -e FLOWS=my_flows.json nodered/node-red-docker
However, when I try to do this, the flow ends up empty.
I suspect this has to do something with the fact that the flow I'm trying to load is using the 'node-red-node-mongodb' plug-in, which is not installed by default.
How can I build a Node-RED image where the 'node-red-node-mongodb' is already installed?
If any more information is required, please ask.
UPDATE
I made the following Dockerfile:
FROM nodered/node-red-docker
RUN npm install node-red-node-mongodb
Then I build it with:
docker build -t testenvironment/nodered .
And started it with:
docker run -d -p 1880:1880 -e FLOWS=flows.json --name node-red testenvironment/nodered
But when I go to the Node-RED interface, there is no flow, and I don't see the MongoDB node in the sidebar either.
The documentation on the Node-RED site includes instructions for how to customise a Docker image and add extra nodes. You can either do it by logging into the running container with docker exec and installing the node by hand with npm:
# Open a shell in the container
docker exec -it mynodered /bin/bash
# Once inside the container, npm install the nodes in /data
cd /data
npm install node-red-node-mongodb
exit
# Restart the container to load the new nodes
docker stop mynodered
docker start mynodered
Or you can extend the image by creating your own Dockerfile:
FROM nodered/node-red-docker
RUN npm install node-red-node-mongodb
And then build it with
docker build -t mynodered:<tag> .
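When you run the new image, the flow file named in FLOWS also has to exist inside the container's user directory (/data in this image). A sketch, assuming flows.json sits in your current directory (you could also COPY it into the image in the same Dockerfile):

docker run -d -p 1880:1880 -v $(pwd)/flows.json:/data/flows.json -e FLOWS=flows.json --name node-red mynodered:<tag>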
I have a perplexing Docker problem. I am running Docker on my Mint laptop and on an Ubuntu VPS. I have been able to build images locally in the past, send them to the server, and have them run there. However, for clarity, the ones that work were probably built when I was running Ubuntu locally (more on that later).
I have an example based on Alpine:
FROM alpine:3.5
# Do a system update
RUN apk update
ENTRYPOINT ["sleep", "3"]
I build it like so, and send it to the remote:
docker build -t alpine-sleep .
docker save alpine-sleep | gzip > alpine-sleep.tgz
rsync --progress alpine-sleep.tgz myserver.example.com:/path/to/images/
I then unpack/import it on the remote and run it:
docker import /path/to/images/alpine-sleep.tgz alpine-sleep
docker run -it alpine-sleep
I get this console reply:
docker: Error response from daemon: No command specified.
See 'docker run --help'.
However, if I copy the Dockerfile to the remote, then do this:
docker build -t alpine-sleep-localbuild .
docker run -it alpine-sleep-localbuild
then I get the sleep working fine.
My Docker and kernel versions locally:
jon@jvb ~/alpine_test $ uname -r
4.4.0-79-generic
jon@jvb ~/alpine_test $ docker -v
Docker version 1.12.6, build 78d1802
And remotely:
root@vps:~/alpine-sleep# uname -r
3.13.0-24-generic
root@vps:~/alpine-sleep# docker -v
Docker version 17.05.0-ce, build 89658be
I wonder, does the major difference in the kernel make a difference? I expect 3.13 to 4.4 is quite a big jump. I don't recall what kernel version I was using when I built things while running Ubuntu locally, but it would not surprise me if it was 3.x.
The other thing that strikes me as unexpected is the high variation in Docker version numbers. How do I have version 1.x locally, and 17.x remotely? Has the project been through a version re-numbering?
Update
I've just checked the kernel version when I was running Ubuntu locally, and that was:
4.4.0-75-generic
So, this makes me think that a major kernel discrepancy could not be to blame.
The issue is that Docker won't warn you when you use the wrong combination of save/load and export/import. You save/load an image, and you export/import a tar file from a container. Since you are doing a docker save to save your image, you need to do a docker load to restore it on the other host:
docker load < /path/to/images/alpine-sleep.tgz
I have found this very old issue: https://github.com/moby/moby/issues/1826
An image imported via docker import won't know what command to run. Any image will lose all of its associated metadata on export, so the default command won't be available after importing it somewhere else.
So, run it with the entrypoint:
docker run --entrypoint sleep alpine-sleep 3
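Alternatively, because docker import strips the image metadata, you can re-attach it at import time with the --change flag (adjust the instruction to match whatever your original Dockerfile declared):

docker import --change 'ENTRYPOINT ["sleep", "3"]' /path/to/images/alpine-sleep.tgz alpine-sleep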