Creating a custom NodeJSDocker image on rhel7 - node.js

I am building some base Docker images for my organization, to be used by application teams when they deploy their applications in OpenShift. One of the images I have to make is a NodeJS image (we want our images to be internal rather than sourced from DockerHub). I am building on RedHat's RHEL7 Universal Base Image (UBI). However, I am having trouble configuring NodeJS to work in the container. Here is my Dockerfile:
FROM myimage_rhel7_base:1.0
USER root
RUN INSTALL_PKGS="rh-nodejs10 rh-nodejs10-npm rh-nodejs10-nodejs-nodemon nss_wrapper" && \
yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS && \
rpm -V $INSTALL_PKGS && \
yum clean all
USER myuser
However, when I run the image there are no node or npm commands available unless I run scl enable rh-nodejs10 bash. This does not work in the Dockerfile, as it creates a subshell that will not be usable by a user accessing the container.
I have tried installing from source, but I ran into a different issue: it requires newer gcc/g++ versions, which are not available in the repos configured by my org. I also figure that if I can get NodeJS working from the package manager, it will help with security patches should the package be updated.
My question is, what are the recommended steps to create an image that can be used to build applications running on NodeJS?

Possibly this is a case where the best code is code you don't write at all. Take a look at https://github.com/sclorg/s2i-nodejs-container
It is a project that creates an image that has nodejs installed. This might be a perfect solution out of the box, or it could also serve as a great example of what you're trying to build.
Also, their readme attempts to describe how they get around the scl enable command.
Normally, SCL requires manual operation to enable the collection you
want to use. This is burdensome and can be prone to error. The
OpenShift S2I approach is to set Bash environment variables that serve
to automatically enable the desired collection:
BASH_ENV: enables the collection for all non-interactive Bash sessions
ENV: enables the collection for all invocations of /bin/sh
PROMPT_COMMAND: enables the collection in interactive shell
Two examples:
* If you specify BASH_ENV, then all your #!/bin/bash scripts do not need to call scl enable.
* If you specify PROMPT_COMMAND, then on execution of the podman exec ... /bin/bash command, the collection will be automatically
enabled.
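Applied to the Dockerfile from the question, that approach might look like the sketch below. The /opt/app-root/etc/scl_enable path is the convention used by the sclorg s2i images; the file itself just needs to contain something like `source scl_source enable rh-nodejs10`, and you would create it in your own image:

```dockerfile
FROM myimage_rhel7_base:1.0
USER root
RUN INSTALL_PKGS="rh-nodejs10 rh-nodejs10-npm rh-nodejs10-nodejs-nodemon nss_wrapper" && \
    yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS && \
    rpm -V $INSTALL_PKGS && \
    yum clean all
# A one-line script that runs: source scl_source enable rh-nodejs10
COPY scl_enable /opt/app-root/etc/scl_enable
# Enable the collection for non-interactive bash, /bin/sh, and interactive shells
ENV BASH_ENV=/opt/app-root/etc/scl_enable \
    ENV=/opt/app-root/etc/scl_enable \
    PROMPT_COMMAND=". /opt/app-root/etc/scl_enable"
USER myuser
```

With those variables set, `docker exec ... node --version` and interactive shells both see the collection on PATH without anyone calling scl enable by hand.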

I decided in the end to install node from the binaries rather than our rpm server. Here is the implementation:
FROM myimage_rhel7_base:1.0
USER root
# Get node distribution from nexus and install it
RUN wget -P /tmp http://myrepo.example.com/repository/node/node-v10.16.3-linux-x64.tar.xz && \
tar -C /usr/local --strip-components 1 -xf /tmp/node-v10.16.3-linux-x64.tar.xz && \
rm /tmp/node-v10.16.3-linux-x64.tar.xz
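One caveat when pinning binaries this way: it's worth checking the archive against a published checksum before extracting, so a corrupted or tampered download fails the build rather than producing a broken image. A minimal sketch of the pattern (the file and hash here are stand-ins for the real Node tarball and the hash published on nodejs.org):

```shell
# Stand-in for the downloaded node-v10.16.3-linux-x64.tar.xz
echo "pretend this is the node tarball" > /tmp/node-demo.tar.xz

# In a real Dockerfile you would hard-code the published hash;
# here we compute it so the demo is self-contained.
EXPECTED=$(sha256sum /tmp/node-demo.tar.xz | awk '{print $1}')

# sha256sum -c exits non-zero (failing the docker build) on a mismatch
echo "$EXPECTED  /tmp/node-demo.tar.xz" | sha256sum -c -
```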

Related

Laradock - add custom npm package

This isn't a normal way of doing things, but it works as a temporary solution.
I have laradock installed in a system and laravel app.
Everything I use from laradock is brought up with the command below:
docker-compose up -d nginx mysql php-worker workspace redis
I need the node package tiktok-scraper (https://www.npmjs.com/package/tiktok-scraper) installed globally in my docker setup, so I can get results by executing PHP code like below:
exec('tiktok-scraper user username-n 3 -t json');
This needs to be available for php-fpm and php-worker level, as I need this in jobs and for endpoints, that should invoke scrape.
I know that I'm doing it wrong, but I have tried installing it within the workspace using
docker-compose exec workspace bash
npm i -g tiktok-scraper
and after this it's available in my workspace (I can run, for instance, tiktok-scraper --help and it shows me the different options).
But this doesn't solve the issue, as I get nothing from exec('tiktok-scraper user username-n 3 -t json'); in my Laravel app.
I'm not that familiar with Docker, and I'm not sure which Dockerfile I should put something like this in:
RUN npm i -g tiktok-scraper
Any help will be appreciated
Thanks
To execute the npm package from inside your php-worker, you would need to install it in the php-worker container. But for the PHP exec() to have an effect on your workspace, that workspace would need to be in the same container as your php-worker.
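Concretely, laradock ships one Dockerfile per service (laradock/php-worker/Dockerfile, laradock/php-fpm/Dockerfile, and so on), so the install has to be appended to each image that will call the binary. A sketch, assuming an Alpine-based php-worker image; the package manager and package names may differ in your laradock version (Debian-based images would use apt-get instead of apk):

```dockerfile
# At the end of laradock/php-worker/Dockerfile
# (and likewise in php-fpm/Dockerfile if endpoints also invoke the scraper)
USER root
RUN apk add --no-cache nodejs npm && \
    npm install -g tiktok-scraper
```

Then rebuild and restart the affected services, e.g. `docker-compose build php-worker php-fpm && docker-compose up -d`, so the new layers are actually used.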

Installing a software and setting up environment variable in Dockerfile

I have a jar file from which I need to create a Docker image. My jar file depends on an application called ImageMagick. Basically, ImageMagick will be installed and the path to ImageMagick added as an environment variable. I am new to Docker, and based on my understanding, I believe a container can only access resources within the container.
So I created a Dockerfile like this:
FROM openjdk:8
ADD target/eureka-server-1.0.0-RELEASE.jar eureka-server-1.0.0-RELEASE.jar
EXPOSE 9991
RUN ["yum","install","ImageMagick"]
RUN ["export","imagemagix_home", "whereis ImageMagick"]
(Here is where I am struggling: I need to set the env variable from the installation directory of ImageMagick. Currently I am getting null.)
ENTRYPOINT ["java","-jar","eureka-server-1.0.0-RELEASE.jar"]
Please let me know, whether the solution am trying is proper, or is there any better solution for my problem.
Update:
As I am installing the application and setting the env variable at build time, passing an argument with -e at runtime is no use. I have updated my Dockerfile as below:
FROM openjdk:8
ADD target/eureka-server-1.0.0-RELEASE.jar eureka-server-1.0.0-RELEASE.jar
EXPOSE 9991
RUN ["yum","install","ImageMagick"]
ENV imagemagix_home = $(whereis ImageMagick)
RUN ["wget","https://johnvansickle.com/ffmpeg/builds/ffmpeg-git-64bit-static.tar.xz"]
RUN ["tar","xvf","ffmpeg-git-*.tar.xz"]
RUN ["cd","./ffmpeg-git-*"]
RUN ["cp","ff*","qt-faststart","/usr/local/bin/"]
ENV ffmpeg_home = $(whereis ffmpeg)
ENTRYPOINT ["java","-jar","eureka-server-1.0.0-RELEASE.jar"]
And while building, I am getting the error:
OCI runtime create failed: conatiner_linux.go: starting container process caused "exec": "\yum": executable file not found in $PATH: unknow.
Update
yum is not available in my base image, so I changed yum to apt-get as below:
RUN apt-get install build-essential checkinstall && apt-get build-dep imagemagick -y
Now I am getting an error that the packages build-essential and checkinstall were not found, and the command returned a non-zero code: 100.
Kindly let me know what's going wrong.
It seems build-essential or checkinstall is not available. Try installing them in separate commands, or search for them.
Maybe you need to do apt-get update to refresh the repository cache before installing them.
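Putting the pieces together, a sketch of a Dockerfile that should work on the Debian-based openjdk:8 image. Two points worth noting: apt-get update must run in the same RUN layer as the install, and ENV takes literal values only, so $(whereis ...) is never evaluated at build time, which is why the variable came out null:

```dockerfile
FROM openjdk:8
# openjdk:8 is Debian-based: use apt-get (not yum), and refresh the
# package lists in the same layer as the install
RUN apt-get update && \
    apt-get install -y imagemagick && \
    rm -rf /var/lib/apt/lists/*
# ENV cannot run $(whereis ...); the Debian package puts its binaries
# on the default PATH, so a fixed value suffices if the app needs one
ENV imagemagix_home=/usr/bin
ADD target/eureka-server-1.0.0-RELEASE.jar eureka-server-1.0.0-RELEASE.jar
EXPOSE 9991
ENTRYPOINT ["java","-jar","eureka-server-1.0.0-RELEASE.jar"]
```

If the variable genuinely has to be computed at build time, a shell-form RUN (e.g. `RUN echo "export imagemagix_home=$(which convert)" >> /etc/profile.d/imagemagick.sh`) is the usual workaround, since RUN does execute a shell.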

NodeJS API with external deps in other language

I am developing a NodeJS API and everything is ok.
For one specific feature I am using a local CLI dependency that processes some input files and outputs other files, which I then return from the API.
I wanted to know (maybe open my mind on) what kind of service I can use to serve this API in production.
The idea is to have a Node environment (like my local one) where an external dependency, not necessarily written in Node, can be installed on the same machine.
My specific dependencies are fontforge and a few other small things.
Thanks in advance.
It's hard to beat a good VPS if you need to install custom software that is not easy to install with npm. My favorite VPS provider is Digital Ocean. You can have two months of a basic server for free with this link, so you can see if it's ok for you before you pay anything. My second favorite VPS provider is Vultr, because you can install custom ISOs on their servers. You can try it for free with this link. But a VPS means taking care of the server yourself. With services like Heroku all of that is taken care of for you, but you can't install whatever you want there. With a VPS you get your own server with root access. Usually it's Linux, but Digital Ocean also supports FreeBSD, and some people install OpenBSD, though it's not officially supported. With a VPS you can install whatever you want, but you have to do it yourself. There is always a trade-off.
More info
Installing Node
To install Node on the VPS, my recommendation is to install in /opt with a versioned directory and a symlink - this is an example procedure that I wrote for a different answer:
# change dir to your home:
cd ~
# download the source:
curl -O https://nodejs.org/dist/v6.1.0/node-v6.1.0.tar.gz
# extract the archive:
tar xzvf node-v6.1.0.tar.gz
# go into the extracted dir:
cd node-v6.1.0
# configure for installation:
./configure --prefix=/opt/node-v6.1.0
# build and test:
make && make test
# install:
sudo make install
# make a symlink to that version:
sudo ln -svf /opt/node-v6.1.0 /opt/node
See this answer for more info.
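The point of the /opt/node symlink is that upgrades become atomic: install the next version under its own prefix, then repoint the link, and PATH entries like /opt/node/bin never change. A self-contained sketch of that pattern in a scratch directory (note the -n flag, so an existing link is replaced rather than descended into):

```shell
# Two versioned install prefixes, as ./configure --prefix + make install create
mkdir -p /tmp/opt/node-v6.1.0/bin /tmp/opt/node-v6.2.0/bin

# Point the stable path at the current version
ln -sfn /tmp/opt/node-v6.1.0 /tmp/opt/node

# Upgrading is just repointing the symlink
ln -sfn /tmp/opt/node-v6.2.0 /tmp/opt/node
readlink /tmp/opt/node
```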
Your start scripts
To have your own application nicely started on server startup - here is an example Upstart script based on the one that I'm using - it should work on Ubuntu 14.04, not tested on newer versions - save it in /etc/init/YOURAPP.conf:
# When to start the service
start on runlevel [2345]
# When to stop the service
stop on runlevel [06]
# If the process quits unexpectedly trigger a respawn
respawn
# Start the process
exec start-stop-daemon --start --chuid node --make-pidfile --pidfile /www/YOURAPP/run/node-upstart.pid --exec /opt/node/bin/node -- /www/YOURAPP/app/app.js >> /www/YOURAPP/log/node-upstart.log 2>&1
Just change:
YOURAPP to the name of your own app
/opt/node/bin/node to your path to node
/www/YOURAPP/app/app.js to the path of your Node app
/www/YOURAPP/run to where you want your PID file
/www/YOURAPP/log to where you want your logs
--chuid node to --chuid OTHERUSER if you want it to run as a different user than node
(make sure to add a user with a name from --chuid above)
With your /etc/init/YOURAPP.conf in place you can safely restart your server and have your app still running. You can run:
start YOURAPP
restart YOURAPP
stop YOURAPP
to start, restart and stop your app - which would also happen automatically during the system boot or shutdown.
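Since Upstart was replaced by systemd on Ubuntu 15.04 and later, an equivalent unit under the same path assumptions might look like the sketch below (save it as /etc/systemd/system/YOURAPP.service, then run `systemctl enable --now YOURAPP`; stdout/stderr go to the journal, readable with `journalctl -u YOURAPP`):

```ini
[Unit]
Description=YOURAPP node service
After=network.target

[Service]
# Run as an unprivileged user, like --chuid node above
User=node
ExecStart=/opt/node/bin/node /www/YOURAPP/app/app.js
# Equivalent of Upstart's respawn
Restart=always

[Install]
WantedBy=multi-user.target
```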

Running nodejs serialport in a docker container

I need to run a nodejs application in a docker container. I'm not an expert in Linux, so it's a bit hard for me to understand how to do that. The whole application is stored on GitHub (https://github.com/kashesandr/NRTC). The app uses the serialport module (https://github.com/voodootikigod/node-serialport), which is compiled with node-gyp; in my case the serial port is a virtual one provided by a USB2Serial driver
(http://www.prolific.com.tw/US/ShowProduct.aspx?pcid=41).
I want to create a separate docker container for the app. Could you please help me?
This question is very vague.
There is an official image on Docker Hub for building node-based images, and plenty of "how to" info in the image's readme. The only tricky part, it seems to me, is how to access the serial port from within the container. I believe it's only possible by running the container in privileged mode, while ensuring that the device node exists inside the container as well. Of course, the USB2Serial driver needs to be installed on the host operating system.
I'd suggest spinning up the official node image in interactive mode and trying to install/run your app inside it manually; you can figure out a script based on that later:
docker run -it --privileged -v /dev:/dev -v path-to-your-app:/usr/src/your-app node:4.4.0 /bin/bash
root@3dd71f11f02f:/# node --version
v4.4.0
root@3dd71f11f02f:/# npm --version
2.14.20
root@3dd71f11f02f:/# gcc --version
gcc (Debian 4.9.2-10) 4.9.2
As you can see, this gives you interactive (-it) root access inside the container, which has everything you probably need, with an identical /dev structure to the host OS (-v /dev:/dev binds it), so there should be no problem accessing ports. (Refine the -v /dev:/dev volume binding to something more specific later, for security reasons.) If you need anything else that is not installed by default, add it via apt-get (e.g. apt-get update && apt-get install [package]), as the official node image is based on Debian Jessie.
After you figured out how to run the app (npm install, gyp whatever), writing a Dockerfile should be trivial.
FROM node:4.4.0
RUN npm install ...\
&& steps\
&& to && be && executed && inside && the && image
CMD /your/app/start/script.sh
... then do a docker build, and run your image with --privileged, non-interactively (without -it), in production.

How can I ssh into my meteor app?

I am working on a meteor project which I deployed at meteor.com. I used the ffmpeg library for some audio options, so I need to install ffmpeg on the meteor server.
I successfully executed the following commands to install ffmpeg on the meteor server:
1]. git clone https://github.com/FFmpeg/FFmpeg.git
2]. cd FFmpeg && ./configure --disable-yasm
3]. cd FFmpeg && make
but with the 4th command I am facing an issue:
4]. cd FFmpeg && make install
then I am getting an error like:
cannot create directory /usr/local/man/man1: permission denied
and when I used cd FFmpeg && sudo make install I got the error:
sudo: no tty present and no askpass program specified
So what should I do to solve this error and install the ffmpeg library?
Thanks.
It is not possible to get ssh access to apps on meteor.com, and I don't think they allow you to use custom binaries on their infrastructure.
Each instance runs in a sort of VM which doesn't give you root access, so you can't build any binaries.
If you want to use ffmpeg/custom binaries with your app, you would have to use your own infrastructure, like Heroku (which also has a free tier), AWS, or DigitalOcean.
The dev-ops that meteor deploy affords is a deployment of the bundled meteor app only. There is no other access (ftp, ssh, or otherwise) given besides the mongo database (via meteor mongo <siteurl>).