npm login via a node docker container - node.js

I'm trying to dockerize a project. I've got a few services as containers, i.e. Redis, Postgres, RabbitMQ, and Node. I have a docker-compose.yml that has all the services needed.
In my node build Dockerfile:
FROM node:16
ARG PAT
WORKDIR /app
COPY package.json .
COPY .npmrc /root/.npmrc
RUN npm install
COPY . .
WORKDIR /app/project1
RUN npm install
WORKDIR /app/project2
RUN npm install
The above fails because, within project2, I have a private GitHub package that I need to authenticate for. I have generated a PAT, and I can run npm login --scope=@OWNER --registry=https://npm.pkg.github.com, enter the correct credentials, and then run npm install, which successfully gets the package that needed authenticating.
Is there a way to automate this via docker-compose/Dockerfile? Somehow add the token, owner, username, etc to the .yml file and use that to login?
My node services in my docker-compose.yml:
node:
  container_name: node
  build:
    context: ..
    dockerfile: ./docker/build/node/Dockerfile
    args:
      PAT: TOKEN
  ports:
    - 3150:3150

As I see it, you need your credentials during the build phase of your image. You can do that as follows.
Create a .npmrc file in your Docker build context:
@OWNER:registry=https://npm.pkg.github.com/
//npm.pkg.github.com/:_authToken=${PAT}
user.email=email@example.com
user.name=foo bar
and copy that file in the Dockerfile
FROM node:16-alpine
ARG PAT
COPY --chown=node .npmrc /home/node/.npmrc
and then, during the image build, set the value of the PAT build argument from the GITHUB_PAT environment variable of the host:
docker build --build-arg PAT=${GITHUB_PAT} .
i.e. --build-arg sets the build argument at build time of the image. But be aware that any value passed via --build-arg is only available during the build of the image; it is not available when the container is running. But again, you don't seem to need it at the runtime of the container, as the installation of your npm packages happens during the build of the image.
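If you drive the build through docker-compose instead, the same build argument can be filled from a host environment variable in the compose file. A minimal sketch based on the service definition from the question, assuming the token is exported on the host as GITHUB_PAT:
node:
  container_name: node
  build:
    context: ..
    dockerfile: ./docker/build/node/Dockerfile
    args:
      # Compose substitutes ${GITHUB_PAT} from the host environment (or a .env file)
      PAT: ${GITHUB_PAT}
  ports:
    - 3150:3150
Running GITHUB_PAT=<your token> docker-compose build node then behaves like the plain docker build command above.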

Related

Does GCP Cloud Build Docker remove files created during the Dockerfile execution?

I have a build step in the Dockerfile that generates some files. Since I also need those files locally (when testing), I don't generate them in Cloud Build itself but in the Dockerfile (a simple node script executed via npx). Locally this works perfectly fine and my Docker image does contain those generated files. But whenever I run this Dockerfile in Cloud Build, it executes the script but does not keep the generated files in the resulting image. I also scanned the logs and so on but found no error (such as a permission error or something similar).
Is there any flag or something I am missing here that prevents my Dockerfile from generating those files and storing them into the image?
Edit:
The deployment pipeline is a trigger on a GitHub pull request that runs the cloudbuild.yaml in which the docker build command is located. Afterwards the image is pushed to Artifact Registry and deployed to Cloud Run. On Cloud Run itself the files are gone. I can't check the steps in between, but when building locally the files are generated and persist in the image.
Dockerfile
FROM node:16
ARG ENVIRONMENT
ARG GOOGLE_APPLICATION_CREDENTIALS
ARG DISABLE_CLOUD_LOGGING
ARG DISABLE_CONSOLE_LOGGING
ARG GIT_ACCESS_TOKEN
WORKDIR /usr/src/app
COPY ./*.json ./
COPY ./src ./src
COPY ./build ./build
ENV ENVIRONMENT="${ENVIRONMENT}"
ENV GOOGLE_APPLICATION_CREDENTIALS="${GOOGLE_APPLICATION_CREDENTIALS}"
ENV DISABLE_CLOUD_LOGGING="${DISABLE_CLOUD_LOGGING}"
ENV DISABLE_CONSOLE_LOGGING="${DISABLE_CONSOLE_LOGGING}"
ENV PORT=8080
RUN git config --global url."https://${GIT_ACCESS_TOKEN}@github.com".insteadOf "ssh://git@github.com"
RUN npm install
RUN node ./build/generate-files.js
RUN rm -rf ./build
EXPOSE 8080
ENTRYPOINT [ "node", "./src/index.js" ]
Cloud Build (stuff before and after is just normal deployment to Cloud Run stuff)
...
- name: 'gcr.io/cloud-builders/docker'
entrypoint: 'bash'
args: [ '-c', 'docker build --build-arg ENVIRONMENT=${_ENVIRONMENT} --build-arg DISABLE_CONSOLE_LOGGING=true --build-arg GIT_ACCESS_TOKEN=$$GIT_ACCESS_TOKEN -t location-docker.pkg.dev/$PROJECT_ID/atrifact-registry/docker-image:${_ENVIRONMENT} ./' ]
secretEnv: ['GIT_ACCESS_TOKEN']
...
I figured it out. Somehow the build process does not fail when a RUN statement crashes. This led me to think there was no problem, when in fact it could not authorize my generation script. Adding --network=cloudbuild to the docker build command fixed the authorization problem.
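For reference, a sketch of the Cloud Build step above with only that flag added (everything else unchanged):
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: [ '-c', 'docker build --network=cloudbuild --build-arg ENVIRONMENT=${_ENVIRONMENT} --build-arg DISABLE_CONSOLE_LOGGING=true --build-arg GIT_ACCESS_TOKEN=$$GIT_ACCESS_TOKEN -t location-docker.pkg.dev/$PROJECT_ID/atrifact-registry/docker-image:${_ENVIRONMENT} ./' ]
  secretEnv: ['GIT_ACCESS_TOKEN']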

copy build from container in different context

So I'm trying to get the environment for my project set up to use Docker. The project structure is as follows.
/client
/server
/nginx
docker-compose.yml
docker-compose.override.yml
docker-compose.prod.yml
In the Dockerfile for each of /client, /server, and /nginx I have a base stage that installs my dependencies, then a development stage that installs dev-dependencies, and a production stage that builds or runs the app for client and server respectively.
ex.
# start from a node image
FROM node:14.8.0-alpine as base
WORKDIR /client
COPY package.json package-lock.json ./
RUN npm i --only=prod
FROM base as development
RUN npm install --only=dev
CMD [ "npm", "run", "start" ]
FROM base as production
COPY . .
RUN npm run build
So here is where my problem comes in.
In /nginx I want nginx in development to just act as a reverse proxy for create-react-app, but in production I want to take client/build from the production client image and copy it into the nginx image to be served statically, without the overhead of the entire React build toolchain.
ie.
FROM nginx:stable-alpine as base
FROM base as development
COPY development.conf /etc/nginx/nginx.conf
FROM base as production
COPY production.conf /etc/nginx/nginx.conf
COPY --from=??? /client/build /usr/share/nginx/html
^
what goes here?
If anyone has any clue how to get this to work without having to pull from Docker Hub and push images up to Docker Hub every time a change is made, that would be great.
You can COPY --from= another image by name. Just like docker run, the image needs to be local, and Docker won't contact Docker Hub or another registry server if you already have the image.
# Most basic form; "myapp" is the containing directory name
COPY --from=myapp_client /client/build /usr/share/nginx/html
Compose doesn't directly have a way to specify this build dependency, but running docker-compose build twice should do the trick.
If you're planning to deploy this, you probably want some control over the name and tag of the image. In docker-compose.yml you can specify both build: and image:, which will tell Compose what name to use when it builds the image. You can also use environment variables almost everywhere in the Compose file, and pass ARG into a build to configure it. Combining all of these would give you:
version: '3.8'
services:
  client:
    build: ./client
    image: registry.example.com/my/client:${TAG:-latest}
  nginx:
    build:
      context: ./nginx
      args:
        TAG: ${TAG:-latest}
    image: registry.example.com/my/nginx:${TAG:-latest}
FROM nginx:stable-alpine
ARG TAG=latest
COPY --from=registry.example.com/my/client:${TAG} /client/build /usr/share/nginx/html
TAG=20210113 docker-compose build
TAG=20210113 docker-compose build
TAG=20210113 docker-compose up -d
# TAG=20210113 docker-compose push

Cannot dockerize app with docker compose with secrets says file not found but file exists

I am trying to dockerize an API which uses Firebase. The credentials file is proving difficult to handle. I'll be deploying using docker-compose. My files are:
docker-compose:
version: "3.7"
services:
  api:
    restart: always
    build: .
    secrets:
      - source: google_creds
        target: auth_file
    env_file: auth.env
    ports:
      - 1234:8990
secrets:
  google_creds:
    file: key.json
the key.json is the private key file
The Dockerfile looks like:
FROM alpine
# Install the required packages
RUN apk add --update git go musl-dev
# Install the required dependencies
RUN go get github.com/gorilla/mux
RUN go get golang.org/x/crypto/sha3
RUN go get github.com/lib/pq
RUN go get firebase.google.com/go
# Setup the proper workdir
WORKDIR /root/go/src/secure-notes-api
# Copy individual files at the end to leverage caching
COPY ./LICENSE ./
COPY ./README.md ./
COPY ./*.go ./
COPY db db
RUN go build
#Executable command needs to be static
CMD ["/root/go/src/secure-notes-api/secure-notes-api"]
I've set the GOOGLE_APPLICATION_CREDENTIALS env var in my auth.env to: /run/secrets/auth_file
The program panics with:
panic: google: error getting credentials using GOOGLE_APPLICATION_CREDENTIALS environment variable: open "/run/secrets/auth_file": no such file or directory
I've tried:
Mounting a volume to a path and setting the env var to that path, which results in the same error
Copying the key into the docker image (out of desperation), which resulted in the same error
Overriding the start command to cat the secret file - this worked, I could see the entire file being output
Curiously enough, if I mount a volume, shell into the container and execute the binary manually, it works perfectly well.
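One detail worth checking, as an assumption rather than something stated above: Go's file-open errors don't normally add quotes around the path, so the quotes in open "/run/secrets/auth_file" suggest the quote characters are part of the env var value itself. That can happen when the entry in auth.env is quoted, since with some docker-compose versions env_file values are passed through verbatim, quotes included. A minimal auth.env that avoids this, assuming that is the cause:
# auth.env - no quotes around the value; env_file values may be passed through verbatim
GOOGLE_APPLICATION_CREDENTIALS=/run/secrets/auth_file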

Handling node modules with docker-compose

I am building a set of connected node services using docker-compose and can't figure out the best way to handle node modules. Here's what should happen in a perfect world:
Full install of node_modules in each container happens on initial build via each service's Dockerfile
Node modules are cached after the initial load -- i.e. functionality so that npm only installs when package.json has changed
There is a clear method for installing npm modules -- whether it needs to be rebuilt or there is an easier way
Right now, whenever I npm install --save some-module and subsequently run docker-compose build or docker-compose up --build, I end up with the module not actually being installed.
Here is one of the Dockerfiles
FROM node:latest
# Create app directory
WORKDIR /home/app/api-gateway
# Install app dependencies (cached if package.json is unchanged)
COPY package.json .
RUN npm install
# Bundle app source
COPY . .
# Run the start command
CMD [ "npm", "run", "dev" ]
and here is the docker-compose.yml
version: '3'
services:
  users-db:
    container_name: users-db
    build: ./users-db
    ports:
      - '27018:27017'
    healthcheck:
      test: exit 0
  api-gateway:
    container_name: api-gateway
    build: ./api-gateway
    command: npm run dev
    volumes:
      - './api-gateway:/home/app/api-gateway'
      - /home/app/api-gateway/node_modules
    ports:
      - '3000:3000'
    depends_on:
      - users-db
    links:
      - users-db
It looks like this line might be overwriting your node_modules directory:
# Bundle app source
COPY . .
If you ran npm install on your host machine before running docker build to create the image, you have a node_modules directory on your host machine that is being copied into your container.
What I like to do to address this problem is copy the individual code directories and files only, eg:
# Copy each directory and file
COPY ./src ./src
COPY ./index.js ./index.js
If you have a lot of files and directories this can get cumbersome, so another method would be to add node_modules to your .dockerignore file. This way it gets ignored by Docker during the build.
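For example, a minimal .dockerignore for a layout like this might look as follows (the exact entries are an assumption; node_modules is the one that matters here):
node_modules
npm-debug.log
.git
This keeps any host-installed node_modules out of the build context, so COPY . . can no longer overwrite the modules installed by RUN npm install inside the image.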

How do I populate a volume in a docker-compose.yaml

I am starting to write my first docker-compose.yml file to set up a combination of services that make up my application (all Node.js). One of the services (web-server - bespoke, not express) has both a large set of modules it needs and an even larger set of bower_components.
In order to provide separation of concerns, and so I can control the versioning more closely I want to create two named volumes which hold the node_modules and bower_components, and mount those volumes on to the relevant directories of the web-server service.
The question that is confusing me is how I get these two volumes populated on service startup. There are two reasons for my confusion:
The behaviour of docker-compose with the -d flag versus the docker run command with the -d flag - the web service obviously needs to keep running (and indeed needs to be restarted if it fails), whereas a container that populates one or other of the volumes should run only once as the whole application is brought up with the docker-compose up command. Can I control this?
A running service versus the build commands of that service. Could I actually use a Dockerfile to run npm install and bower install? In particular, if I change the source code of the web application but the modules and bower_components don't change, will this build step be near-instantaneous because of a cached result?
I have been unable to find examples of this sort of behaviour, so I am puzzled as to how to go about doing it. Can someone help?
I did something like that without bower, but with Node.js tools like Sass, Hall, live reload, Jasmine...
I used npm for all installations inside the npm project (no global installs).
For that, the official node image works quite well; I only have to set the PATH to app/node_modules/.bin. So my Dockerfile looks like this (very simple):
FROM node:7.5
ENV PATH /usr/src/app/node_modules/.bin/:$PATH
My docker-compose.yml file is :
version: '2'
services:
  mydata:
    image: busybox
    stdin_open: true
    volumes:
      - .:/usr/src/app
  node:
    build: .
    image: mynodecanvassvg
    working_dir: /usr/src/app
    stdin_open: true
    volumes_from:
      - mydata
  sass:
    depends_on:
      - node
    image: mynodecanvassvg
    working_dir: /usr/src/app
    volumes_from:
      - mydata
    #entrypoint: "node-sass -w -r -o public/css src/scss"
    stdin_open: true
  jasmine:
    depends_on:
      - node
    image: mynodecanvassvg
    working_dir: /usr/src/app
    volumes_from:
      - mydata
    #entrypoint: "jasmine-node --coffee --autoTest tests/coffee"
    stdin_open: true
  live:
    depends_on:
      - node
    image: mynodecanvassvg
    working_dir: /usr/src/app
    volumes_from:
      - mydata
    ports:
      - 35729:35729
    stdin_open: true
I only have some trouble with the entrypoints, which all need a terminal to display their output while working. So I use stdin_open: true to keep each container active, and then I use docker exec -it on each container to start its watch service.
And of course I launch docker-compose with -d to keep it running as a daemon.
Next you have to put your npm package.json in your app folder (next to the Dockerfile and docker-compose.yml) and run npm update to download and install the modules.
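Put together, that workflow looks roughly like this (a sketch; the service names and watch commands come from the compose file above, and docker-compose exec is just a shortcut for docker exec -it on the generated container names):
docker-compose up -d
docker-compose exec node npm update
docker-compose exec sass node-sass -w -r -o public/css src/scss
docker-compose exec jasmine jasmine-node --coffee --autoTest tests/coffee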
I'll start with the standard way first
2. Dockerfile
Using a Dockerfile avoids trying to work out how to set up docker-compose service dependencies or external build scripts to get volumes populated and working before a docker-compose up.
A Dockerfile can be set up so that only changes to bower.json or package.json trigger a reinstall of node_modules or bower_components.
The command that installs first will, at some point, have to invalidate the second command's cache though, so the order you put them in matters. Whichever updates the least, or is significantly slower, should go first. You may need to manually install bower globally if you want to run the bower command first.
If you are worried about NPM versioning, look at using yarn and a yarn.lock file. Yarn will speed things up a little bit too. Bower can just set specific versions as it doesn't have the same sub module versioning issues NPM does.
File Dockerfile
FROM mhart/alpine-node:6.9.5
RUN npm install bower -g
WORKDIR /app
COPY package.json /app/
RUN npm install --production
COPY bower.json /app/
RUN bower install
COPY / /app/
CMD ["node", "server.js"]
File .dockerignore
node_modules/
bower_components/
This is all supported in a docker-compose build: stanza
1. Docker Compose + Volumes
The easiest/quickest way to populate a volume is by defining a VOLUME in the Dockerfile after the directory has been populated in the image. This will work via compose. I'd question the point of using a volume when the image already has the required content though...
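As a sketch of that idea (assuming the same base image as the Dockerfile above), the VOLUME declaration comes after the install step, so a new volume mounted at that path is seeded from the image's node_modules on first use:
FROM mhart/alpine-node:6.9.5
WORKDIR /app
COPY package.json /app/
RUN npm install --production
# Declared after npm install, so a fresh named or anonymous volume mounted here
# is initialised with the node_modules content baked into the image
VOLUME /app/node_modules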
Any other methods of population will require some custom build scripts outside of compose. One option would be to docker run a container with the required volume attached and populate it with npm/bower install.
docker run \
  --volume myapp_bower_components:/bower_components \
  --volume "$PWD/bower.json:/bower.json" \
  mhart/alpine-node:6.9.5 \
  sh -c 'npm install bower -g && bower install'
and
docker run \
  --volume myapp_node_modules:/node_modules \
  --volume "$PWD/package.json:/package.json" \
  mhart/alpine-node:6.9.5 \
  npm install --production
Then you will be able to mount the populated volume on your app container
docker run \
  --volume myapp_bower_components:/bower_components \
  --volume myapp_node_modules:/node_modules \
  --publish 3000:3000 \
  my/app
You'd probably need to come up with some sort of versioning scheme for the volume name as well so you could roll back. Sounds like a lot of effort for something an image already does for you.
Or possibly look at rocker, which provides an alternate docker build system and lets you do all the things Docker devs rail against, like mounting a directory during a build. Again this is stepping outside of what Docker Compose supports.

Resources