Private node module pull inside docker - node.js

I have a private repository for a node module, which I install by referencing it in package.json:
ssh://git@github.com/iamsaquib/<private-repo>.git
When I copy all the server files into the Docker image and run npm install, it fails to install the package and reports that I don't have the proper access rights. I think I have to authenticate by copying my id_rsa.pub into the image via the Dockerfile and adding it as an authorized key. What is the correct way to do this?
Dockerfile
FROM node:12-slim
ENV NODE_ENV=development
WORKDIR /app
USER root
COPY . .
RUN ./install.sh
RUN ./build.sh
EXPOSE 8000
CMD ["./run.sh"]

You need to make your SSH private key (/home/yourname/.ssh/id_rsa) available to the build.
You should avoid baking the private key into Docker images. One workaround could be a multi-stage build (its security might still be debatable).
FROM node:12-slim as installer
ENV NODE_ENV=development
WORKDIR /app
USER root
# COPY can only read from the build context, so the key and .gitconfig
# must first be placed inside the context; also note that root's home
# in this image is /root, not /home/root.
COPY .ssh /root/.ssh
COPY .gitconfig /root/.gitconfig
COPY . .
RUN ./install.sh
RUN ./build.sh
# Removing the key here only cleans up this stage; the final image
# below copies /app only, so the key does not reach it either way.
RUN rm -rf /root/.ssh /root/.gitconfig
# Final image
FROM node:12-slim
WORKDIR /app
ENV NODE_ENV=development
USER root
COPY --from=installer /app .
EXPOSE 8000
CMD ["./run.sh"]

Related

Dockerfile not reading build arg or secret from file/stdin but works when hardcoded

I have the following Dockerfile:
FROM node:14-alpine
RUN mkdir /app && chown -R node:node /app
WORKDIR /app
USER node
RUN --mount=type=secret,id=ghtoken echo "//npm.pkg.github.com/:_authToken=$(cat /run/secrets/ghtoken)" > ~/.npmrc
COPY package.json yarn.lock .npmrc ./
RUN yarn install
RUN yarn install --only=dev
COPY --chown=node:node . .
CMD ["yarn", "start"]
This is a Next.js application and it has a private node module as a dependency, hosted on the GitHub npm registry. GitHub requires a personal access token for installing private node modules, so I cannot hardcode the token in the ~/.npmrc file. I have two options: a) pass the token via a build arg, or b) use Docker secrets. Unfortunately, neither of these works for me. In line 5 of the Dockerfile, you can see that I am expecting a secret called ghtoken, which I am loading from a file called secret.txt when running the build command:
DOCKER_BUILDKIT=1 docker build . --secret id=ghtoken,src=secret.txt -t dashboard:latest --no-cache
This doesn't work. I get 401 unauthorized from GitHub's end. For some reason, the Dockerfile acts as if I didn't pass any secret at all.
If I change line 5 from this:
RUN --mount=type=secret,id=ghtoken echo "//npm.pkg.github.com/:_authToken=$(cat /run/secrets/ghtoken)" > ~/.npmrc
...to this:
RUN echo "//npm.pkg.github.com/:_authToken=<Real-Token-Here>" > ~/.npmrc
...then it works.
Similarly, if I go for the build arg approach, with a slight modification to my Dockerfile:
ARG GHTOKEN
FROM node:14-alpine
RUN mkdir /app && chown -R node:node /app
WORKDIR /app
USER node
RUN echo "//npm.pkg.github.com/:_authToken=$GHTOKEN" > ~/.npmrc
COPY package.json yarn.lock .npmrc ./
RUN yarn install
RUN yarn install --only=dev
COPY --chown=node:node . .
CMD ["yarn", "start"]
...and pass the GHTOKEN as the build arg:
docker build --build-arg GHTOKEN=<Real-Token-Here> . -t dashboard:latest --no-cache
...then I get 401 Unauthorized again. If I change line 6 from this:
RUN echo "//npm.pkg.github.com/:_authToken=$GHTOKEN" > ~/.npmrc
...to this:
RUN echo "//npm.pkg.github.com/:_authToken=<Real-Token-Here>" > ~/.npmrc
I don't get errors. Apparently, my Dockerfile cannot read from the build arg or secret. How can I fix this?
UPDATE:
The ARG approach was not working because I was using the ARG statement before the FROM statement. If I update the first two lines to look like this:
FROM node:14-alpine
ARG GHTOKEN
...then the token is properly retrieved. But I'm still in the dark as to why the Docker secret approach wouldn't work. Besides, this arg-passing approach is insecure.
I'd advise you not to use build arguments for passing secrets:
Warning: It is not recommended to use build-time variables for passing secrets like github keys, user credentials etc. Build-time variable values are visible to any user of the image with the docker history command.
Maybe you just need to take a different approach, such as:
create the .npmrc file dynamically outside of your container
spin up a vanilla node container and mount the .npmrc file and a node_modules directory or volume
run npm within the container
At this stage, you have the node_modules directory/volume ready and you can mount/copy it into your application container/image (see the sketch below).
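For illustration, a rough sketch of that first approach (the paths are assumptions; the registry URL is the one from the question):
# 1. Create .npmrc on the host, outside of any image layer
echo "//npm.pkg.github.com/:_authToken=${GHTOKEN}" > .npmrc
# 2. Run the install in a throwaway node container, writing node_modules to the host
docker run --rm -v "$PWD":/work -w /work node:14-alpine yarn install
# 3. node_modules/ now exists on the host and can be copied or mounted into
#    the application image/container; remove the token file afterwards
rm .npmrc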
Or... (you should do it wisely, so the file won't be stored in a layered cache):
create .npmrc file dynamically outside of your container
copy .npmrc into your application container
run npm
delete the .npmrc file from the container
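As for why the secret mount itself returned 401: one plausible cause (an assumption, not confirmed in the question) is that BuildKit mounts secrets owned by root with mode 0400 by default, so after USER node the cat /run/secrets/ghtoken fails and an empty token is written to ~/.npmrc. Letting the non-root user read the secret would look roughly like this:
# syntax=docker/dockerfile:1
FROM node:14-alpine
RUN mkdir /app && chown -R node:node /app
WORKDIR /app
USER node
# uid=1000 makes the mounted secret readable by the node user
RUN --mount=type=secret,id=ghtoken,uid=1000 \
    echo "//npm.pkg.github.com/:_authToken=$(cat /run/secrets/ghtoken)" > ~/.npmrc
COPY package.json yarn.lock .npmrc ./
RUN yarn install
COPY --chown=node:node . .
CMD ["yarn", "start"]
The build command from the question stays the same (DOCKER_BUILDKIT=1 docker build --secret id=ghtoken,src=secret.txt ...).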

Docker: node_modules symlink not working for typescript

I am working on containerizing an Express app in TS, but I am not able to link node_modules installed outside the container. A volume is also mounted for development, but I still get an error in the editor (VS Code): Cannot find module 'typeorm' or its corresponding type declarations, and similar errors for all dependencies.
volumes:
  - .:/usr/src/app
Dockerfile:
FROM node:16.8.0-alpine3.13 as builder
WORKDIR /usr/src/app
COPY package.json .
COPY transformPackage.js .
RUN ["node", "transformPackage"]
FROM node:16.8.0-alpine3.13
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app/package-docker.json package.json
RUN apk update && apk upgrade
RUN npm install --quiet && mv node_modules ../ && ln -sf ../node_modules node_modules
COPY . .
EXPOSE 3080
ENV NODE_PATH=./dist
RUN npm run build
CMD ["npm", "start"]
I have one workaround where I install the dependencies locally and then use those, but I need another solution where the dependencies are installed only in the container, not outside it.
Thanks in advance.
Your first code section implies you use docker-compose, and the build (of the Dockerfile) is probably also done there.
The point is that the volume mappings in docker-compose are not available during the build phase of that same service.
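If the goal is just to keep the node_modules that the image installs from being hidden by the bind mount at run time, a common pattern (a sketch, not taken from the question) is an anonymous volume for node_modules:
volumes:
  - .:/usr/src/app
  # anonymous volume: the image's node_modules is used instead of being
  # shadowed by the (likely empty) host directory
  - /usr/src/app/node_modules
Note that this keeps node_modules inside the container, so editor IntelliSense on the host still needs either a local install or a remote-container setup.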

Use dockerfile to build npm project

I have Dockerfile like:
FROM node:10-alpine
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
USER node
RUN npm install
COPY --chown=node:node . .
RUN npm run build
I need the compiled files on my local drive, not in the docker container.
VOLUME looks like what I need, I think, but I don't know how to do the build and share those build files.
Can someone help me? Thanks!
Assuming npm run build in your Dockerfile produces a build directory, you can indeed get it locally using a volume:
docker build -t <yourcontainername> .
docker run \
-v ${PWD}/build:/home/node/app/build \
-it <yourcontainername>
You can use
docker cp <containerId>:/file/path/within/container /host/path/target
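For example, with a temporary container (a sketch; the image name is made up and the path matches the WORKDIR above):
docker build -t myapp .
id=$(docker create myapp)                      # create, but do not start, a container
docker cp "$id":/home/node/app/build ./build   # copy the build output to the host
docker rm "$id"                                # remove the temporary container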

Generate .env file for docker image

I want to generate a .env file and add it to the docker image during the docker build with a Dockerfile.
The problem is that the .env file is not copied into the docker image.
I tried to add COPY .env /website/.env but it can't find the file.
I execute a Node.js script during the build process which fetches env values from AWS and then creates a .env file.
FROM node:10.15.3-alpine
ARG SERVER_ENV
# Create app directory
RUN mkdir /website
WORKDIR /website
COPY . /website/
RUN node aws ${SERVER_ENV}
COPY .env /website/.env
COPY package.json yarn.lock ./
RUN ls -al
RUN pwd
# Install Yarn
RUN npm install -g yarn@1.15.2
# Install app dependencies
RUN yarn install
# Build source files
RUN yarn run build
UPDATE
I finally found a way to fix it.
I separated the tasks in my Dockerfile like this:
FROM mhart/alpine-node:10 as env
ARG SERVER_ENV
WORKDIR /usr/src
COPY aws.js /usr/src
RUN yarn add aws-sdk
COPY . .
RUN node aws ${SERVER_ENV}
FROM mhart/alpine-node:10 as base
WORKDIR /usr/src
COPY package.json yarn.lock /usr/src/
RUN yarn install
COPY . .
COPY --from=env /usr/src .
RUN yarn build
FROM mhart/alpine-node:10
WORKDIR /usr/src
COPY --from=base /usr/src .
Maybe you have a .dockerignore file that blocks the .env file from being copied to the image?
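If so, a .dockerignore along these lines (hypothetical, since the actual file isn't shown) would produce exactly this symptom:
# .dockerignore
node_modules
# the next entry excludes .env from the build context, so COPY .env ... fails
.env
Removing the .env entry, or adding a !.env negation rule after it, lets the COPY find the file.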

Docker - Override content of linked volume

I have a simple Node.js docker container.
docker-compose.yml:
app:
  build: ./dockerfiles/app
  volumes:
    - /Users/home/work/app:/usr/app
Dockerfile:
FROM node:6.7-slim
COPY package.json /tmp
RUN cd /tmp && npm install
RUN mkdir -p /usr/app
WORKDIR /usr/app
CMD ["node", "./src/app.js"]
What I want to achieve is a container where I have package.json and the installed node modules (npm install). The part where I copy package.json and install the modules inside the container is pretty straightforward, but the problem occurs when I want to use these node_modules inside the linked app. I can't find any way to copy /tmp/node_modules into /usr/app/node_modules.
Is there any Docker way to do that? If not, can I tell my node app to look for node_modules somewhere other than the root directory?
You can achieve what you want by changing the CMD used when starting the container, either in your Dockerfile, or in the docker-compose.yml file.
Instead of just starting node ./src/app.js, you want to do two things:
Copy the node_modules over.
Start Node
Using the docker-compose.yml, I would do the following:
app:
  build: ./dockerfiles/app
  volumes:
    - /Users/home/work/app:/usr/app
  command: >
    bash -c "
    rm -rf /usr/app/node_modules
    && cp -R /tmp/node_modules /usr/app/node_modules
    && node ./src/app.js
    "
This will delete the existing node modules on the mapped-in volume, then copy in the ones from the container, and finally start the node app. This is going to happen every time the container is started.
As @schovi has mentioned, in order for the node_modules inside the container not to be overridden by the contents of the host machine's folder, it is necessary to create another, internal volume in the docker-compose.yml file:
volumes:
  - ${APP_PATH}:/usr/app
  - /usr/app/node_modules
Doing that makes it safe to copy the files from /tmp/node_modules into /usr/app/node_modules using these instructions:
FROM node
# Node modules
COPY *.json /tmp/
RUN cd /tmp && yarn
# App
RUN mkdir -p /usr/app
WORKDIR /usr/app
RUN cp -a /tmp/node_modules /usr/app/node_modules
ENV NODE_ENV docker
CMD ["run-node", "src/app.js"]
However, I would first create the app folder and install node_modules directly in it, which reduces the cache layers considerably and speeds up the build.
# Always mind the version
FROM node:12.8.1
# Node modules
RUN mkdir -p /usr/app
WORKDIR /usr/app
# Mind that dot: it copies into the WORKDIR
COPY package*.json ./
RUN yarn
ENV NODE_ENV docker
CMD ["run-node", "src/app.js"]
I hope it helps! :D
The thing that helped me is the following usage of volumes:
volumes:
  - ${APP_PATH}:/usr/app
  # Empty node_modules directory
  - /usr/app/node_modules
Then in Dockerfile:
FROM node
# Node modules
COPY *.json /tmp/
RUN cd /tmp && yarn
ENV NODE_PATH /tmp/node_modules:${NODE_PATH}
# App
RUN mkdir -p /usr/app
WORKDIR /usr/app
ENV NODE_ENV docker
CMD ["run-node", "src/app.js"]
This allows me to have node_modules in another directory, and the app will look for them there.
