How to get npm to use cache - node.js

UPDATE
I changed direction from this question and ended up taking advantage of Docker image layers to cache the npm install unless there are changes to package.json, see here.
Note, in relation to this question: I still build my AngularJs Docker image in a slave Jenkins Docker image, but I no longer run the npm install in the Docker slave. Instead, I copy my app files to my AngularJs Docker image and run the npm install in the AngularJs Docker image, thus getting a Docker cache layer of the npm install, inspiration from this great idea/answer here.
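For reference, a minimal sketch of that layer-caching pattern (the base image and file layout are assumptions, not my exact setup): copy only package.json into the image first and run npm install, then copy the rest of the app, so Docker reuses the install layer until package.json changes.

FROM node:6.11.1
WORKDIR /app
# copy only the dependency manifest first; this layer and the npm install
# layer below stay cached until package.json changes
COPY package.json .
RUN npm install
# copy the rest of the app; source edits no longer invalidate the install layer
COPY . .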
-------------------------------END UPDATE------------------------------
Ok, I should add the caveat that I am in a Docker container, but that really shouldn't matter much: I do not stop the container, and I have volumes for the npm cache folder as well as the /home folder for the user running npm commands.
The purpose of the Docker container, with npm installed, is to be a build slave, spun up by Jenkins to build an AngularJs application. The problem is that it is incredibly slow, downloading all the needed npm packages every time.
jenkins is the user; a jenkins account on a build server is "who" is running npm install.
I have volumes for both the npm folder of the user running the npm install command, /home/jenkins/.npm, and also the folder that the command npm config get cache says is my cache directory: /root/.npm. Not that container volumes should even matter, because I have not stopped the container after running npm install.
Ok, the steps I take to start debugging. First, I "bash into the container" with this command:
docker exec -it <container_id> bash
All commands from this point forward are run while connected to the running container that has npm installed.
echo "$HOME" results in /root
npm config get cache results in /root/.npm
Any time jenkins runs npm install in this container, after that command finishes successfully I run npm cache ls, which always yields empty, nothing cached: ~/.npm
Many packages were downloaded, however, as we can see with ls -a /home/jenkins/.npm/.
So I tried setting cache-min to a very long expiration time with npm config set cache-min 9999999, but that didn't help.
I am not sure what else to do, it just seems that none of my npm packages are being cached, how do I get npm to cache packages?
Here is a truncated npm install output:
Downloading binary from https://github.com/sass/node-sass/releases/download/v4.5.3/linux-x64-48_binding.node
Download complete
Binary saved to /home/jenkins/workspace/tsl.frontend.development/node_modules/node-sass/vendor/linux-x64-48/binding.node
Caching binary to /home/jenkins/.npm/node-sass/4.5.3/linux-x64-48_binding.node
Binary found at /home/jenkins/workspace/tsl.frontend.development/node_modules/node-sass/vendor/linux-x64-48/binding.node
Testing binary
Binary is fine
typings WARN deprecated 3/24/2017: "registry:dt/core-js#0.9.7+20161130133742" is deprecated (updated, replaced or removed)
+-- app (global)
`-- core-js (global)
And here is my Dockerfile:
FROM centos:7
MAINTAINER Brian Ogden
RUN yum update -y && \
yum clean all
#############################################
# Jenkins Slave setup
#############################################
RUN yum install -y \
git \
openssh-server \
java-1.8.0-openjdk \
sudo \
make && \
yum clean all
# gen dummy keys, centos doesn't autogen them like ubuntu does
RUN /usr/bin/ssh-keygen -A
# Set SSH Configuration to allow remote logins without /proc write access
RUN sed -ri 's/^session\s+required\s+pam_loginuid.so$/session optional pam_loginuid.so/' /etc/pam.d/sshd
# Create Jenkins User
RUN useradd jenkins -m -s /bin/bash
# Add public key for Jenkins login
RUN mkdir /home/jenkins/.ssh
COPY /files/id_rsa.pub /home/jenkins/.ssh/authorized_keys
#setup permissions for the new folders and files
RUN chown -R jenkins /home/jenkins
RUN chgrp -R jenkins /home/jenkins
RUN chmod 600 /home/jenkins/.ssh/authorized_keys
RUN chmod 700 /home/jenkins/.ssh
# Add the jenkins user to sudoers
RUN echo "jenkins ALL=(ALL) ALL" >> etc/sudoers
#############################################
# Expose SSH port and run SSHD
EXPOSE 22
#Technically, the Docker Plugin enforces this call when it starts containers by overriding the entry command.
#I place this here because I want this build slave to run locally as it would if it was started in the build farm.
CMD ["/usr/sbin/sshd","-D"]
#############################################
# Docker and Docker Compose Install
#############################################
#install required packages
RUN yum install -y \
yum-utils \
device-mapper-persistent-data \
lvm2 \
curl && \
yum clean all
#add Docker CE stable repository
RUN yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
#Update the yum package index.
RUN yum makecache fast
#install Docker CE
RUN yum install -y docker-ce-17.06.0.ce-1.el7.centos
#install Docker Compose 1.14.0
#download Docker Compose binary from github repo
RUN curl -L https://github.com/docker/compose/releases/download/1.14.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
#Apply executable permissions to the binary
RUN chmod +x /usr/local/bin/docker-compose
#############################################
ENV NODE_VERSION 6.11.1
#############################################
# NodeJs Install
#############################################
RUN yum install -y \
wget
#Download NodeJs package
RUN wget https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.gz
#extract the binary package into our system's local package hierarchy with the tar command.
#The archive is packaged within a versioned directory, which we can get rid of by passing the --strip-components 1 option.
#We will specify the target directory of our command with the -C command:
#This will install all of the components within the /usr/local branch
RUN tar --strip-components 1 -xzvf node-v* -C /usr/local
#############################################
#############################################
# npm -setup volume for package cache
# this will speed up builds
#############################################
RUN mkdir /home/jenkins/.npm
RUN chown jenkins /home/jenkins/.npm
RUN mkdir /root/.npm
RUN chown jenkins /root/.npm
#for npm cache, this cannot be expressed in docker-compose.yml
#the reason for this is that Jenkins spins up slave containers using
#the docker plugin, this means that there is no docker-compose.yml in play when the slave container is created
VOLUME /home/jenkins/.npm
VOLUME /root/.npm
#############################################

When you run docker exec -it <container> bash, you access the Docker container as the root user. npm install therefore saves its cache to /root/.npm, which is not the cache directory the Jenkins builds are populating. Jenkins, on the other hand, runs as the jenkins user, whose cache lives in /home/jenkins/.npm, which is being cached. So, in order to emulate the actual Jenkins workflow, you need to su jenkins before running npm install.
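To inspect the cache the way Jenkins actually sees it, something like the following inside the container should work (a sketch; switch to the jenkins user first, then query npm as that user):

docker exec -it <container_id> bash   # enters the container as root
su - jenkins                          # become the user the Jenkins builds run as
npm config get cache                  # should now report /home/jenkins/.npm
npm cache ls                          # list what the jenkins user has actually cached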
That being said, the npm cache is not a perfect solution (especially if you have a ton of automated Jenkins builds). Some things to look into that would be better long-term solutions:
Install a local npm cache/registry proxy like sinopia (see the configuration sketch after this list). I found this guide to be particularly helpful.
Use Docker to build your app (which works fine with Docker-in-Docker). Docker caches each build step, saving the repeated fetching of dependencies.
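For the first option, pointing npm at the local registry is a one-line config change once sinopia is running; a minimal sketch, assuming sinopia listens on its default port 4873 (the host name here is made up for illustration):

# route all package resolution through the local sinopia registry
npm config set registry http://npm-cache.internal:4873
# or per invocation, without changing the global config
npm install --registry http://npm-cache.internal:4873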

Related

Unable to install package using docker: kpt not found or ambiguous repo/dir@version specify '.git' in argument

I am using the script below and it is giving me an error: /bin/sh: 1: kpt: not found
FROM nginx
RUN apt update
RUN apt -y install git
RUN apt -y install curl
# install kpt package
RUN mkdir -p ~/bin
RUN curl -L https://github.com/GoogleContainerTools/kpt/releases/download/v1.0.0-beta.1/kpt_linux_amd64 --output ~/bin/kpt && chmod u+x ~/bin/kpt
RUN export PATH=${HOME}/bin:${PATH}
RUN SRC_REPO=https://github.com/kubeflow/manifests
RUN kpt pkg get $SRC_REPO/tf-training@v1.1.0 tf-training
But if I create the image using
FROM nginx
RUN apt update
RUN apt -y install git
RUN apt -y install curl
and perform
docker exec -it container_name bash
and manually do the task then I am able to install kpt package. Sharing below the screenshot of the process
The error changes if I provide the full path to /bin/kpt
Error: ambiguous repo/dir@version specify '.git' in argument
FROM nginx
RUN apt update
RUN apt -y install git
RUN apt -y install curl
RUN mkdir -p ~/bin
RUN curl -L https://github.com/GoogleContainerTools/kpt/releases/download/v1.0.0-beta.1/kpt_linux_amd64 --output ~/bin/kpt && chmod u+x ~/bin/kpt
RUN export PATH=${HOME}/bin:${PATH}
# Below line of code is to ensure that kpt is installed and working fine
RUN ~/bin/kpt pkg get https://github.com/ajinkya101/kpt-demo-repo.git/Packages/Nginx
RUN SRC_REPO=https://github.com/kubeflow/manifests
RUN ~/bin/kpt pkg get $SRC_REPO/tf-training@v1.1.0 tf-training
What is happening when I am using Docker that makes me unable to install it?
First, make sure SRC_REPO is declared as a Dockerfile environment variable:
ENV SRC_REPO=https://github.com/kubeflow/manifests.git
And make sure the URL ends with .git.
As mentioned in kpt get:
In most cases the .git suffix should be specified to delimit the REPO_URI from the PKG_PATH, but this is not required for widely recognized repo prefixes.
Second, to be sure, specify the full path of kpt, without ~ or ${HOME}.
/root/bin/kpt
For testing, add a RUN id -a && pwd to be sure who and where you are when using the nginx image.
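Putting those points together, a corrected sketch of the question's Dockerfile (still nginx-based, as in the question; the ENV declaration with the .git suffix and the full /root/bin/kpt path are the two fixes):

FROM nginx
RUN apt update && apt -y install git curl
# install kpt to a fixed absolute location; note that RUN export PATH=...
# does not persist into later RUN steps, so the binary is called by full path
RUN mkdir -p /root/bin
RUN curl -L https://github.com/GoogleContainerTools/kpt/releases/download/v1.0.0-beta.1/kpt_linux_amd64 --output /root/bin/kpt && chmod u+x /root/bin/kpt
# sanity check: confirm who and where we are during the build
RUN id -a && pwd
# declare the repo as a real ENV variable (RUN SRC_REPO=... lasts one step only)
# and end the URL with .git to delimit REPO_URI from PKG_PATH
ENV SRC_REPO=https://github.com/kubeflow/manifests.git
RUN /root/bin/kpt pkg get ${SRC_REPO}/tf-training@v1.1.0 tf-training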

Attempting to host a flutter project on Azure App Services using a docker image; local image and cloud image behave differently

I am having trouble with Azure and Docker where my local machine image behaves differently than the image I push to ACR. While trying to deploy to the web, I get this error:
ERROR - failed to register layer: error processing tar file(exit status 1): Container ID 397546 cannot be mapped to a host IDErr: 0, Message: mapped to a host ID
So in trying to fix it, I have come to find out that Azure has a limit on UID numbers of 65000. Easy enough, just change ownership of the affected files to root, right?
Not so. I put the following command into my Dockerfile:
RUN chown -R root:root /usr/local/flutter/bin/cache/artifacts/gradle_wrapper/
Works great locally for changing the UIDs of the affected files from 397546 to 0. I run a command in the CLI of the container:
find / -uid 397546
It finds none of the same files it found before. Yay! I even navigate to the directories where the affected files are, and do a quick
ls -n to double confirm they are fine, and sure enough the uids are now 0 on all of them. Good to go?
Next step, push to cloud. When I push and reset the app service, I still continue to get the same exact error above. I have confirmed on multiple fronts that it is indeed pushing the correct image to the cloud.
All of this means that somehow my local image and the cloud image are behaving differently.
I am stumped, guys, please help.
The Dockerfile is as below:
RUN apt-get update
RUN apt-get install -y curl git wget unzip libgconf-2-4 gdb libstdc++6 libglu1-mesa fonts-droid-fallback lib32stdc++6 python3 psmisc
RUN apt-get clean
# Clone the flutter repo
RUN git clone https://github.com/flutter/flutter.git /usr/local/flutter
# Set flutter path
ENV PATH="/usr/local/flutter/bin:/usr/local/flutter/bin/cache/dart-sdk/bin:${PATH}"
# Enable flutter web
RUN flutter upgrade
RUN flutter config --enable-web
# Run flutter doctor
RUN flutter doctor -v
# Change ownership to root of affected files
RUN chown -R root:root /usr/local/flutter/bin/cache/artifacts/gradle_wrapper/
# Copy the app files to the container
COPY ./build/web /usr/local/bin/app
COPY ./startup /usr/local/bin/app/server
COPY ./pubspec.yaml /usr/local/bin/app/pubspec.yaml
# Set the working directory to the app files within the container
WORKDIR /usr/local/bin/app
# Get App Dependencies
RUN flutter pub get
# Build the app for the web
# Document the exposed port
EXPOSE 4040
# Set the server startup script as executable
RUN ["chmod", "+x", "/usr/local/bin/app/server/server.sh"]
# Start the web server
ENTRYPOINT [ "/usr/local/bin/app/server/server.sh" ]```
So basically we made a shell script to build the web bundle BEFORE building the Docker image. We then take the static JS from the build/web folder and host that on the server. No need to download all of Flutter. It makes pipelines a little harder, but at least it works.
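A sketch of what that pre-build step could look like (the script contents, image tag, and registry name are assumptions, not the actual pipeline):

#!/bin/sh
# build the static web bundle on the host, so the image never needs Flutter
flutter build web
# bake the build/web output into the image and push it (see the Dockerfile below)
docker build -t myregistry.azurecr.io/patient-web:latest .
docker push myregistry.azurecr.io/patient-web:latest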
New Dockerfile:
FROM ubuntu:20.04 as build-env
RUN apt-get update && \
apt-get install -y --no-install-recommends apt-utils && \
apt-get -y install sudo
## for apt to be noninteractive
ENV DEBIAN_FRONTEND noninteractive
ENV DEBCONF_NONINTERACTIVE_SEEN true
## preesed tzdata, update package index, upgrade packages and install needed software
RUN echo "tzdata tzdata/Areas select US" > /tmp/preseed.txt; \
echo "tzdata tzdata/Zones/US select Colorado" >> /tmp/preseed.txt; \
debconf-set-selections /tmp/preseed.txt && \
apt-get update && \
apt-get install -y tzdata
RUN apt-get install -y curl git wget unzip libstdc++6 libglu1-mesa fonts-droid-fallback lib32stdc++6 python3 nginx nano vim
RUN apt-get clean
# Copy files to container and build
RUN mkdir /app/
COPY . /app/
WORKDIR /app/
# Configure nginx and remove secret files
RUN mv /app/build/web/ /var/www/html/patient
RUN cp -f /app/default /etc/nginx/sites-enabled/default
RUN cd /app/ && rm -r .dart_tool .vscode assets bin ios android google_place lib placepicker test .env .flutter-plugins .flutter-plugins-dependencies .gitignore .metadata analysis_options.yaml flutter_01.png pubspec.lock pubspec.yaml README.md
# Record the exposed port
EXPOSE 5000
# Start the python server
RUN ["chmod", "+x", "/app/server/server.sh"]
ENTRYPOINT [ "/app/server/server.sh"]

node.js docker image for all environments - including production

Currently we are using the node:4.2.3 (LTS) Docker image, which is around 642 MB in size, plus node_modules at around 140 MB, in total ~800 MB, to build our web application Docker image.
Publishing these images to our private registry and pulling them in all environments is becoming a time-consuming process.
Since we can't reduce the node_modules size (it would be helpful if any reduction methods are available), I am looking for suggestions for any other Node Docker image to use in all environments, including production.
You can build your own Docker image using the following Dockerfile:
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y wget
# install node v4.2.6
RUN wget https://nodejs.org/dist/v4.2.6/node-v4.2.6-linux-x64.tar.gz && \
tar -C /usr/local --strip-components 1 -xzf node-v4.2.6-linux-x64.tar.gz && \
rm node-v4.2.6-linux-x64.tar.gz
# install express 4.13.4
RUN npm install express@4.13.4
Using following command to build the image:
sudo docker build -t ubuntu-node .
The image is only 255MB
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu-node latest 7ed1b88adb46 7 seconds ago 255 MB
Of course, you can install any necessary dependencies.
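A sketch of how an application image could then extend that base (file names and start command are assumptions):

FROM ubuntu-node
# bundle the app source and install its dependencies on top of the base image
ADD . /src
WORKDIR /src
RUN npm install
CMD ["node", "server.js"]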

Switching users inside Docker image to a non-root user

I'm trying to switch user to the tomcat7 user in order to setup SSH certificates.
When I do su tomcat7, nothing happens.
whoami still returns root after doing su tomcat7
Doing a more /etc/passwd, I get the following result which clearly shows that a tomcat7 user exists:
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/bin/sh
bin:x:2:2:bin:/bin:/bin/sh
sys:x:3:3:sys:/dev:/bin/sh
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/bin/sh
man:x:6:12:man:/var/cache/man:/bin/sh
lp:x:7:7:lp:/var/spool/lpd:/bin/sh
mail:x:8:8:mail:/var/mail:/bin/sh
news:x:9:9:news:/var/spool/news:/bin/sh
uucp:x:10:10:uucp:/var/spool/uucp:/bin/sh
proxy:x:13:13:proxy:/bin:/bin/sh
www-data:x:33:33:www-data:/var/www:/bin/sh
backup:x:34:34:backup:/var/backups:/bin/sh
list:x:38:38:Mailing List Manager:/var/list:/bin/sh
irc:x:39:39:ircd:/var/run/ircd:/bin/sh
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/bin/sh
nobody:x:65534:65534:nobody:/nonexistent:/bin/sh
libuuid:x:100:101::/var/lib/libuuid:/bin/sh
messagebus:x:101:104::/var/run/dbus:/bin/false
colord:x:102:105:colord colour management daemon,,,:/var/lib/colord:/bin/false
saned:x:103:106::/home/saned:/bin/false
tomcat7:x:104:107::/usr/share/tomcat7:/bin/false
What I'm trying to work around is this error in Hudson:
Command "git fetch -t git#________.co.za:_______/_____________.git +refs/heads/*:refs/remotes/origin/*" returned status code 128: Host key verification failed.
This is my Dockerfile; it takes an existing Hudson war file and config that is tarred and builds an image. Hudson runs fine, it just can't access git due to certificates not existing for the tomcat7 user.
FROM debian:wheezy
# install java on image
RUN apt-get update
RUN apt-get install -y openjdk-7-jdk tomcat7
# install hudson on image
RUN rm -rf /var/lib/tomcat7/webapps/*
ADD ./ROOT.tar.gz /var/lib/tomcat7/webapps/
# copy hudson config over to image
RUN mkdir /usr/share/tomcat7/.hudson
ADD ./dothudson.tar.gz /usr/share/tomcat7/
RUN chown -R tomcat7:tomcat7 /usr/share/tomcat7/
# add ssh certificates
RUN mkdir /root/.ssh
ADD ssh.tar.gz /root/
# install some dependencies
RUN apt-get update
RUN apt-get install -y maven
RUN apt-get install -y git
RUN apt-get install -y subversion
# background script
ADD run.sh /root/run.sh
RUN chmod +x /root/run.sh
# expose port 8080
EXPOSE 8080
CMD ["/root/run.sh"]
I'm using the latest version of Docker (Docker version 1.0.0, build 63fe64c/1.0.0). Is this a bug in Docker, or am I missing something in my Dockerfile?
You should not use su in a Dockerfile; instead, use the USER instruction.
At each stage of the Dockerfile build a new container is created, so any change you make to the user will not persist into the next build stage.
For example:
RUN whoami
RUN su test
RUN whoami
This would never report the user as test, since a new container is spawned for the second whoami. The output would be root both times (unless of course you run USER beforehand).
If however you do:
RUN whoami
USER test
RUN whoami
You should see root then test.
Alternatively you can run a command as a different user with sudo with something like
sudo -u test whoami
But it seems better to use the official supported instruction.
As a different approach to the other answer: instead of setting the user at image-creation time in the Dockerfile, you can do so via the command line on a particular container, on a per-command basis.
With docker exec, use --user to specify which user account the interactive terminal will use (the container should be running and the user has to exist in the containerized system):
docker exec -it --user [username] [container] bash
See https://docs.docker.com/engine/reference/commandline/exec/
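For example, to get a shell as the tomcat7 user from this question (the container name here is hypothetical):

# open an interactive shell inside the running container as tomcat7
docker exec -it --user tomcat7 hudson_container bash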
In case you need to perform privileged tasks like changing permissions of folders you can perform those tasks as a root user and then create a non-privileged user and switch to it.
FROM <some-base-image:tag>
# Switch to the root user (usually not needed; depends on the base image)
USER root
# Run privileged command
RUN apt install <packages>
RUN apt <privileged command>
# Set user and group
ARG user=appuser
ARG group=appuser
ARG uid=1000
ARG gid=1000
RUN groupadd -g ${gid} ${group}
# the '-m' flag creates the user's home directory
RUN useradd -u ${uid} -g ${group} -s /bin/sh -m ${user}
# Switch to user
USER ${uid}:${gid}
# Run non-privileged command
RUN apt <non-privileged command>
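Since user, group, uid, and gid are build ARGs, they can be overridden at build time without editing the Dockerfile; a sketch (the image tag is an assumption):

# match the container user to a specific host uid/gid, e.g. for bind mounts
docker build --build-arg user=ci --build-arg group=ci --build-arg uid=1200 --build-arg gid=1200 -t myapp .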
Add this line to your Dockerfile, using the Docker USER instruction:
USER <your_user_name>
You should also be able to do:
apt install sudo
sudo -i -u tomcat
Then you should be the tomcat user. It's not clear which Linux distribution you're using, but this works with Ubuntu 18.04 LTS, for example.
There's no real way to do this. As a result, things like mysqld_safe fail, and you can't install mysql-server in a Debian Docker container without jumping through 40 hoops, because... well, it aborts if it's not root.
You can use USER, but you won't be able to apt-get install if you're not root.

Dockerfile/npm: create a sudo user to run npm install

Creating a Dockerfile to install a node framework that we've created (per my earlier post here):
# Install dependencies and nodejs
RUN apt-get update
RUN apt-get install -y python-software-properties python g++ make
RUN add-apt-repository ppa:chris-lea/node.js
RUN apt-get update
RUN apt-get install -y nodejs
# Install git
RUN apt-get install -y git
# Bundle app source
ADD . /src
# Create a nonroot user, and switch to it
RUN /usr/sbin/useradd --create-home --home-dir /usr/local/nonroot --shell /bin/bash nonroot
RUN /usr/sbin/adduser nonroot sudo
RUN chown -R nonroot /usr/local/
RUN chown -R nonroot /usr/lib/
RUN chown -R nonroot /usr/bin/
RUN chown -R nonroot /src
USER nonroot
# Install app source
RUN cd /src; npm install
The problem is that npm expects not to be run as root. Is there a way to chain a series of sudo useradd commands to create a temp user that has sudo privileges, which I can then switch to with USER to run the npm install?
EDIT: I updated the above; now, after successfully creating a user, the build gets to the npm install line and chokes with this issue:
Error: Attempt to unlock javascript-brunch@1.7.1, which hasn't been locked
at unlock (/usr/lib/node_modules/npm/lib/cache.js:1304:11)
at cb (/usr/lib/node_modules/npm/lib/cache.js:646:5)
at /usr/lib/node_modules/npm/lib/cache.js:655:20
at /usr/lib/node_modules/npm/lib/cache.js:1282:20
at afterMkdir (/usr/lib/node_modules/npm/lib/cache.js:1013:14)
at /usr/lib/node_modules/npm/node_modules/mkdirp/index.js:37:53
at Object.oncomplete (fs.js:107:15)
If you need help, you may report this *entire* log,
including the npm and node versions, at:
<http://github.com/npm/npm/issues>
The "Attempt to unlock" issue is often caused by not having the environment variable HOME set properly. npm needs this to be set to a directory that it can edit (it sets up and manages an .npm directory there).
You can specify environment variables in your docker run call with e. g. docker run -e "HOME=/home/docker".
To solve your "Attempt to unlock" issue, try cleaning the npm cache first by issuing
npm cache clean
After that, run
npm install
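If you are doing this in a Dockerfile rather than a running container, the same two steps can be combined into a single build step; a sketch:

# clear any stale per-user cache state, then install
RUN npm cache clean && npm install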
I came across a similar npm install error when I was trying to execute it as a non-root user in my Dockerfile. Svante's explanation of the issue is bang on: npm does some caching under the $HOME dir. Here's a simple Dockerfile that works with npm install:
FROM dockerfile/nodejs
# Assumes you have a package.json in the current dir
ADD . /src
# Create a nonroot user, and switch to it
RUN /usr/sbin/useradd --create-home --home-dir /usr/local/nonroot --shell /bin/bash nonroot
RUN chown -R nonroot /src
# Switch to our nonroot user
USER nonroot
# Set the HOME var, npm install gets angry if it can't write to the HOME dir,
# which will be /root at this point
ENV HOME /usr/local/nonroot
# Install app source
WORKDIR /src
RUN npm install
