node.js docker image for all environments - including production

Currently we are using the node:4.2.3 (LTS) Docker image, which is around 642 MB; with node_modules adding roughly 140 MB, our web application image comes to about 800 MB in total.
Publishing these images to our private registry and pulling them in every environment is becoming time-consuming.
Since we can't reduce the node_modules size (suggestions for shrinking it would be welcome), we are looking for a smaller Node.js Docker image we can use in all environments, including production.

You can build your own Docker image using the following Dockerfile:
FROM ubuntu:14.04
# sudo is unnecessary here; Docker builds run as root by default
RUN apt-get update && apt-get install -y wget
# install node v4.2.6
RUN wget https://nodejs.org/dist/v4.2.6/node-v4.2.6-linux-x64.tar.gz && \
tar -C /usr/local --strip-components 1 -xzf node-v4.2.6-linux-x64.tar.gz && \
rm node-v4.2.6-linux-x64.tar.gz
# install express 4.13.4
RUN npm install express@4.13.4
Use the following command to build the image:
sudo docker build -t ubuntu-node .
The resulting image is only 255 MB:
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu-node latest 7ed1b88adb46 7 seconds ago 255 MB
Of course, you can install any necessary dependencies.
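If a different base distribution is acceptable, an Alpine-based Node image can be considerably smaller still. A minimal sketch, assuming an Alpine Node image exists for a Node version your app supports and that any native modules build cleanly against musl libc:
FROM node:alpine
WORKDIR /app
# package.json and server.js are placeholders for your own app layout
COPY package.json ./
RUN npm install --production
COPY . .
CMD ["node", "server.js"]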

Related

NodeJS: I am using ffmpeg inside the container and I think docker doesn't free memory

I am using a Docker container to host a music bot that uses ffmpeg. With every song it processes, memory usage keeps climbing until it eventually runs out of memory. Is there any fix? When running it on Windows/Linux without Docker it works fine, using at most 150 MB and never more than 40% CPU, but in Docker it grows to about 2.3 GB and doesn't drop below that.
P.S. I am using fluent-ffmpeg.
Here is the Dockerfile:
FROM ubuntu:latest
USER root
WORKDIR /app
RUN apt-get update
RUN apt-get -y install curl gnupg
RUN curl -fsSL https://deb.nodesource.com/setup_18.x | bash -
RUN apt-get -y install nodejs
RUN apt-get install -y ffmpeg
COPY . .
RUN mkdir /app/storage/
RUN npm install
CMD ["node" , "."]

installing nodejs in dockerfile issue

I am trying to install Node.js in a Dockerfile that also uses pyenv, but I keep getting the error below when the build runs through my GitLab runner. I am trying to install version 16.12.0. Is there a better solution to this issue?
Dockerfile
#install npm
ENV NODE_VERSION=16.12.0
RUN apt install -y curl
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
ENV NVM_DIR=/root/.nvm
RUN . "$NVM_DIR/nvm.sh" && nvm install ${NODE_VERSION}
RUN . "$NVM_DIR/nvm.sh" && nvm use v${NODE_VERSION}
RUN . "$NVM_DIR/nvm.sh" && nvm alias default v${NODE_VERSION}
ENV PATH="/root/.nvm/versions/node/v${NODE_VERSION}/bin/:${PATH}"
RUN node --version
RUN npm --version
output
Step 15/25 : ENV NODE_VERSION=16.12.0
---> Running in 7623dfe4669c
Removing intermediate container 7623dfe4669c
---> c1486340596a
Step 16/25 : RUN apt install -y curl
---> Running in a4661b68566b
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
Reading package lists...
Building dependency tree...
Reading state information...
curl is already the newest version (7.68.0-1ubuntu2.7).
0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
Removing intermediate container a4661b68566b
---> d727779ba39b
Step 17/25 : RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
---> Running in c3661e8eead3
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (60) SSL certificate problem: self signed certificate in certificate chain
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
Removing intermediate container c3661e8eead3
---> beffd784c86b
Step 18/25 : ENV NVM_DIR=/root/.nvm
---> Running in 22b69a1563b2
Removing intermediate container 22b69a1563b2
---> 821b73dfd5fa
Step 19/25 : RUN . "$NVM_DIR/nvm.sh" && nvm install ${NODE_VERSION}
---> Running in 54c168c88ec7
/bin/sh: 1: .: Can't open /root/.nvm/nvm.sh
The command '/bin/sh -c . "$NVM_DIR/nvm.sh" && nvm install ${NODE_VERSION}' returned a non-zero code: 127
Cleaning up project directory and file based variables
00:00
ERROR: Job failed: exit code 127
In general you should avoid using version managers like nvm in Docker. You shouldn't need more than one version of a language interpreter at a time, and version managers often require shell dotfile setup that's complicated to reproduce in Docker. Note, too, what your log actually shows: curl could not verify the server's TLS certificate (likely something on your GitLab runner's network intercepts HTTPS), so the nvm install script was never downloaded, /root/.nvm/nvm.sh doesn't exist, and the later steps fail with exit code 127.
You don't say why you need Node. The easiest thing to do is to use the Docker Hub node image:
FROM node:lts
# and none of what you show in the question
If you're using this to build the front end of your otherwise-Python application, you can use a multi-stage build:
FROM node:lts AS frontend
WORKDIR /frontend
COPY frontend/package*.json ./
RUN npm ci
COPY frontend/ ./
RUN npm run build
FROM python:3.10
...
COPY --from=frontend /frontend/dist ./static/
If you really need Node in your otherwise Ubuntu-based image, the next easiest thing to do is just install it. The default Ubuntu nodejs package should work fine for most practical uses.
FROM ubuntu:20.04
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install --no-install-recommends --assume-yes \
nodejs
If you really need it in a custom image, and it really needs to be a specific version of Node, you should be able to just download the official binary tarball and unpack it.
FROM ubuntu:20.04
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install --no-install-recommends --assume-yes \
curl \
xz-utils
ARG node_version=v16.14.2
RUN cd /opt \
&& curl -LO https://nodejs.org/dist/${node_version}/node-${node_version}-linux-x64.tar.xz \
&& tar xJf node-${node_version}-linux-x64.tar.xz \
&& rm node-${node_version}-linux-x64.tar.xz
ENV PATH=/opt/node-${node_version}-linux-x64/bin:${PATH}
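With that ARG in place, the exact version from the question can be selected at build time:
docker build --build-arg node_version=v16.12.0 .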

Attempting to host a flutter project on Azure App Services using a docker image; local image and cloud image behave differently

I am having trouble with Azure and Docker where my local machine image is behaving differently than the image I push to ACR. While trying to deploy to the web, I get this error:
ERROR - failed to register layer: error processing tar file(exit status 1): Container ID 397546 cannot be mapped to a host ID. Err: 0, Message: mapped to a host ID
So in trying to fix it, I have come to find out that Azure has a limit of 65000 on UID numbers. Easy enough, just change ownership of the affected files to root, right?
Not so. I put the following command into my Dockerfile:
RUN chown -R root:root /usr/local/flutter/bin/cache/artifacts/gradle_wrapper/
It works great locally for changing the UIDs of the affected files from 397546 to 0. I run a command in the CLI of the container:
find / -uid 397546
It finds none of the same files it found before. Yay! I even navigate to the directories where the affected files are and do a quick ls -n to double-confirm they are fine, and sure enough the UIDs are now 0 on all of them. Good to go?
Next step, push to the cloud. When I push and reset the app service, I still get the same exact error above. I have confirmed on multiple fronts that it is indeed pushing the correct image to the cloud.
All of this means that somehow my local image and the cloud image are behaving differently.
I am stumped, guys, please help.
The Dockerfile is as below:
RUN apt-get update
RUN apt-get install -y curl git wget unzip libgconf-2-4 gdb libstdc++6 libglu1-mesa fonts-droid-fallback lib32stdc++6 python3 psmisc
RUN apt-get clean
# Clone the flutter repo
RUN git clone https://github.com/flutter/flutter.git /usr/local/flutter
# Set flutter path
ENV PATH="/usr/local/flutter/bin:/usr/local/flutter/bin/cache/dart-sdk/bin:${PATH}"
# Enable flutter web
RUN flutter upgrade
RUN flutter config --enable-web
# Run flutter doctor
RUN flutter doctor -v
# Change ownership to root of affected files
RUN chown -R root:root /usr/local/flutter/bin/cache/artifacts/gradle_wrapper/
# Copy the app files to the container
COPY ./build/web /usr/local/bin/app
COPY ./startup /usr/local/bin/app/server
COPY ./pubspec.yaml /usr/local/bin/app/pubspec.yaml
# Set the working directory to the app files within the container
WORKDIR /usr/local/bin/app
# Get App Dependencies
RUN flutter pub get
# Build the app for the web
# Document the exposed port
EXPOSE 4040
# Set the server startup script as executable
RUN ["chmod", "+x", "/usr/local/bin/app/server/server.sh"]
# Start the web server
ENTRYPOINT [ "/usr/local/bin/app/server/server.sh" ]```
So basically we made a shell script that builds the web bundle BEFORE building the Docker image. We then use the static JS from the build/web folder and host that on the server, so there is no need to download all of Flutter in the image. It makes pipelines a little harder, but at least it works; a sketch of such a script follows the new Dockerfile below.
New Dockerfile:
FROM ubuntu:20.04 as build-env
RUN apt-get update && \
apt-get install -y --no-install-recommends apt-utils && \
apt-get -y install sudo
## for apt to be noninteractive
ENV DEBIAN_FRONTEND noninteractive
ENV DEBCONF_NONINTERACTIVE_SEEN true
## preseed tzdata, update package index, upgrade packages and install needed software
RUN echo "tzdata tzdata/Areas select US" > /tmp/preseed.txt; \
echo "tzdata tzdata/Zones/US select Colorado" >> /tmp/preseed.txt; \
debconf-set-selections /tmp/preseed.txt && \
apt-get update && \
apt-get install -y tzdata
RUN apt-get install -y curl git wget unzip libstdc++6 libglu1-mesa fonts-droid-fallback lib32stdc++6 python3 nginx nano vim
RUN apt-get clean
# Copy files to container and build
RUN mkdir /app/
COPY . /app/
WORKDIR /app/
# Configure nginx and remove secret files
RUN mv /app/build/web/ /var/www/html/patient
RUN cp -f /app/default /etc/nginx/sites-enabled/default
RUN cd /app/ && rm -r .dart_tool .vscode assets bin ios android google_place lib placepicker test .env .flutter-plugins .flutter-plugins-dependencies .gitignore .metadata analysis_options.yaml flutter_01.png pubspec.lock pubspec.yaml README.md
# Record the exposed port
EXPOSE 5000
# Start the python server
RUN ["chmod", "+x", "/app/server/server.sh"]
ENTRYPOINT [ "/app/server/server.sh"]

Minimize docker size in headless container along with node

I want to install Node inside the chrome-headless-trunk image below:
alpeware/chrome-headless-trunk (https://hub.docker.com/r/alpeware/chrome-headless-trunk/).
The size of alpeware/chrome-headless-trunk is around 300 MB, but after installing Node.js from NodeSource the image size comes to around 900 MB.
Installing Node inside the image:
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash -
RUN apt-get -y install nodejs
Is there any way to minimize the size of the chrome-headless-trunk image with Node installed as well?
I recommend using an Alpine-based image: the tag mentioned below is only around 228 MB and includes both Node.js and Chrome. Your current image is based on Ubuntu, which is heavy compared to Alpine's roughly 5 MB base.
FROM zenika/alpine-chrome
USER root
RUN apk add --no-cache tini make gcc g++ python git nodejs nodejs-npm yarn \
&& apk add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/testing wqy-zenhei \
&& rm -rf /var/lib/apt/lists/* \
/var/cache/apk/* \
/usr/share/man \
/tmp/*
USER chrome
ENTRYPOINT ["tini", "--"]
The Docker image that has both Node and Chrome is:
zenika/alpine-chrome:with-node
You can find more details in the alpine-chrome repository.
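If that image fits your needs, the simplest route may be to use it directly as your base; a sketch, with app.js standing in for your own entry point:
FROM zenika/alpine-chrome:with-node
USER root
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install --production
COPY . .
USER chrome
CMD ["node", "app.js"]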

How to get npm to use cache

UPDATE
I changed direction from this question and ended up taking advantage of Docker image layers to cache the npm install unless there are changes to package.json; see here.
Note, in relation to this question: I still build my AngularJS Docker image in a slave Jenkins Docker container, but I no longer run npm install in the Docker slave. Instead, I copy my app files into my AngularJS Docker image and run npm install there, thus getting a Docker cache layer of the npm install; inspiration from this great idea/answer here.
-------------------------------END UPDATE------------------------------
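The layer-caching pattern referred to in the update above generally looks like the following sketch (a standard npm project layout is assumed; server.js is a placeholder):
FROM node:lts
WORKDIR /app
# Copy only the manifests first; this layer and the install layer below stay
# cached as long as package.json / package-lock.json do not change
COPY package*.json ./
RUN npm install
# Source changes invalidate only the layers below, not the npm install above
COPY . .
CMD ["node", "server.js"]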
Ok, I should add the caveat that I am in a Docker container, but that really shouldn't matter much: I do not stop the container, and I have volumes for the npm cache folder as well as the /home folder of the user running npm commands.
The purpose of the Docker container, with npm installed, is to be a build slave, spun up by Jenkins to build an AngularJS application. The problem is that it is incredibly slow, downloading all the needed npm packages every time.
jenkins is the user: a jenkins account on the build server is who runs npm install.
I have volumes for both the npm folder of the user running the npm install command (/home/jenkins/.npm) and the folder that npm config get cache says is my cache directory (/root/.npm). Not that container volumes should even matter, because I have not stopped the container after running npm install.
Ok, to start debugging, I "bash into the container" with this command:
docker exec -it <container_id> bash
All commands I run from this point forward I am connected to the running container with npm installed.
echo "$HOME" results in /root
npm config get cache results in /root/.npm
Any time jenkins runs npm install in this container, after the command finishes successfully, I run npm cache ls, which always yields empty, nothing cached: ~/.npm
Many packages were downloaded, however, as we can see with ls -a /home/jenkins/.npm/.
So I tried setting cache-min to a very long expiration time: npm config set cache-min 9999999. That didn't help.
I am not sure what else to do; it just seems that none of my npm packages are being cached. How do I get npm to cache packages?
Here is truncated npm install output:
Downloading binary from https://github.com/sass/node-sass/releases/download/v4.5.3/linux-x64-48_binding.node
Download complete
Binary saved to /home/jenkins/workspace/tsl.frontend.development/node_modules/node-sass/vendor/linux-x64-48/binding.node
Caching binary to /home/jenkins/.npm/node-sass/4.5.3/linux-x64-48_binding.node
Binary found at /home/jenkins/workspace/tsl.frontend.development/node_modules/node-sass/vendor/linux-x64-48/binding.node
Testing binary
Binary is fine
typings WARN deprecated 3/24/2017: "registry:dt/core-js#0.9.7+20161130133742" is deprecated (updated, replaced or removed)
+-- app (global)
`-- core-js (global)
And here is my Dockerfile:
FROM centos:7
MAINTAINER Brian Ogden
RUN yum update -y && \
yum clean all
#############################################
# Jenkins Slave setup
#############################################
RUN yum install -y \
git \
openssh-server \
java-1.8.0-openjdk \
sudo \
make && \
yum clean all
# gen dummy keys, centos doesn't autogen them like ubuntu does
RUN /usr/bin/ssh-keygen -A
# Set SSH Configuration to allow remote logins without /proc write access
RUN sed -ri 's/^session\s+required\s+pam_loginuid.so$/session optional pam_loginuid.so/' /etc/pam.d/sshd
# Create Jenkins User
RUN useradd jenkins -m -s /bin/bash
# Add public key for Jenkins login
RUN mkdir /home/jenkins/.ssh
COPY /files/id_rsa.pub /home/jenkins/.ssh/authorized_keys
#setup permissions for the new folders and files
RUN chown -R jenkins /home/jenkins
RUN chgrp -R jenkins /home/jenkins
RUN chmod 600 /home/jenkins/.ssh/authorized_keys
RUN chmod 700 /home/jenkins/.ssh
# Add the jenkins user to sudoers
RUN echo "jenkins ALL=(ALL) ALL" >> etc/sudoers
#############################################
# Expose SSH port and run SSHD
EXPOSE 22
#Technically, the Docker Plugin enforces this call when it starts containers by overriding the entry command.
#I place this here because I want this build slave to run locally as it would if it was started in the build farm.
CMD ["/usr/sbin/sshd","-D"]
#############################################
# Docker and Docker Compose Install
#############################################
#install required packages
RUN yum install -y \
yum-utils \
device-mapper-persistent-data \
lvm2 \
curl && \
yum clean all
#add Docker CE stable repository
RUN yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
#Update the yum package index.
RUN yum makecache fast
#install Docker CE
RUN yum install -y docker-ce-17.06.0.ce-1.el7.centos
#install Docker Compose 1.14.0
#download Docker Compose binary from github repo
RUN curl -L https://github.com/docker/compose/releases/download/1.14.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
#Apply executable permissions to the binary
RUN chmod +x /usr/local/bin/docker-compose
#############################################
ENV NODE_VERSION 6.11.1
#############################################
# NodeJs Install
#############################################
RUN yum install -y \
wget
#Download NodeJs package
RUN wget https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.gz
#extract the binary package into our system's local package hierarchy with the tar command.
#The archive is packaged within a versioned directory, which we can get rid of by passing the --strip-components 1 option.
#We will specify the target directory of our command with the -C command:
#This will install all of the components within the /usr/local branch
RUN tar --strip-components 1 -xzvf node-v* -C /usr/local
#############################################
#############################################
# npm -setup volume for package cache
# this will speed up builds
#############################################
RUN mkdir /home/jenkins/.npm
RUN chown jenkins /home/jenkins/.npm
RUN mkdir /root/.npm
RUN chown jenkins /root/.npm
#for npm cache, this cannot be expressed in docker-compose.yml
#the reason for this is that Jenkins spins up slave containers using
#the docker plugin, which means there is no docker-compose.yml in play when the slave containers start
VOLUME /home/jenkins/.npm
VOLUME /root/.npm
#############################################
When you run docker exec -it <container> bash you access the Docker container as the root user. npm install thus saves the cache to /root/.npm, which isn't a volume saved by the container. Jenkins, on the other hand, uses the jenkins user, which saves to /home/jenkins/.npm, which is being cached. So in order to emulate the functionality of the actual Jenkins workflow, you need to su jenkins before you can npm install.
That being said, the npm cache is not a perfect solution (especially if you have a ton of automated Jenkins builds). Some things to look into that would be better long-term solutions:
Install a local NPM Cache like sinopia. I found this guide to be particularly helpful.
Use Docker to build your app (which works fine with Docker-in-Docker). Docker caches after each build step, saving the repeated fetching of dependencies.
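As a rough illustration of the first option (a sketch; the port is sinopia's documented default, not something configured in this answer):
# Install and start a local caching npm registry
npm install -g sinopia
sinopia &
# Point npm at it; after the first fetch, packages come from the local cache
npm set registry http://localhost:4873/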
