Does GCP Cloud Build Docker remove files created during the Dockerfile execution? - node.js

I have a build step in the Dockerfile that generates some files. Since I also need those files locally (when testing), the generation happens not in Cloud Build itself but in the Dockerfile (a simple Node script executed via npx). Locally this works perfectly fine and my Docker image does contain the generated files. But whenever I run this Dockerfile through Cloud Build, it executes the script but does not keep the generated files in the resulting image. I also scanned the logs and found no error (such as a permission error or similar).
Is there any flag or something I am missing here that prevents my Dockerfile from generating those files and storing them in the image?
Edit:
The deployment pipeline is a trigger on a GitHub pull request that runs the cloudbuild.yaml in which the docker build command is located. Afterwards the image is pushed to Artifact Registry and deployed to Cloud Run. On Cloud Run itself the files are gone. The steps in between I can't inspect, but when building locally the files are generated and persist in the image.
Dockerfile
FROM node:16
ARG ENVIRONMENT
ARG GOOGLE_APPLICATION_CREDENTIALS
ARG DISABLE_CLOUD_LOGGING
ARG DISABLE_CONSOLE_LOGGING
ARG GIT_ACCESS_TOKEN
WORKDIR /usr/src/app
COPY ./*.json ./
COPY ./src ./src
COPY ./build ./build
ENV ENVIRONMENT="${ENVIRONMENT}"
ENV GOOGLE_APPLICATION_CREDENTIALS="${GOOGLE_APPLICATION_CREDENTIALS}"
ENV DISABLE_CLOUD_LOGGING="${DISABLE_CLOUD_LOGGING}"
ENV DISABLE_CONSOLE_LOGGING="${DISABLE_CONSOLE_LOGGING}"
ENV PORT=8080
RUN git config --global url."https://${GIT_ACCESS_TOKEN}@github.com".insteadOf "ssh://git@github.com"
RUN npm install
RUN node ./build/generate-files.js
RUN rm -rf ./build
EXPOSE 8080
ENTRYPOINT [ "node", "./src/index.js" ]
Cloud Build (the steps before and after are just the usual Cloud Run deployment steps)
...
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: [ '-c', 'docker build --build-arg ENVIRONMENT=${_ENVIRONMENT} --build-arg DISABLE_CONSOLE_LOGGING=true --build-arg GIT_ACCESS_TOKEN=$$GIT_ACCESS_TOKEN -t location-docker.pkg.dev/$PROJECT_ID/atrifact-registry/docker-image:${_ENVIRONMENT} ./' ]
  secretEnv: ['GIT_ACCESS_TOKEN']
...

I figured it out. Somehow the build process does not fail when a RUN statement crashes. This led me to think there was no problem, when in fact the build could not authorize my generation script. Adding --network=cloudbuild to the docker build command fixed the authorization problem.
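For reference, a sketch of the adjusted build step; it is identical to the step above except for the added --network=cloudbuild flag, which attaches the containers started during the build to Cloud Build's network so they can reach the metadata server for credentials:

- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: [ '-c', 'docker build --network=cloudbuild --build-arg ENVIRONMENT=${_ENVIRONMENT} --build-arg DISABLE_CONSOLE_LOGGING=true --build-arg GIT_ACCESS_TOKEN=$$GIT_ACCESS_TOKEN -t location-docker.pkg.dev/$PROJECT_ID/atrifact-registry/docker-image:${_ENVIRONMENT} ./' ]
  secretEnv: ['GIT_ACCESS_TOKEN']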

Related

npm login via a node docker container

I'm trying to dockerize a project. I've got a few services as containers: Redis, Postgres, RabbitMQ, and Node. I have a docker-compose.yml with all the services needed.
In my node build Dockerfile:
FROM node:16
ARG PAT
WORKDIR /app
COPY package.json .
COPY .npmrc /root/.npmrc
RUN npm install
COPY . .
WORKDIR /app/project1
RUN npm install
WORKDIR /app/project2
RUN npm install
The above fails because, within project2, I have a private GitHub package that needs authentication. I have generated a PAT, and if I run npm login --scope=@OWNER --registry=https://npm.pkg.github.com and enter the correct credentials, a subsequent npm install successfully gets the package that needed authenticating.
Is there a way to automate this via docker-compose/Dockerfile? Somehow add the token, owner, username, etc. to the .yml file and use that to log in?
My node services in my docker-compose.yml:
node:
  container_name: node
  build:
    context: ..
    dockerfile: ./docker/build/node/Dockerfile
    args:
      PAT: TOKEN
  ports:
    - 3150:3150
As I see it, you need your credentials in the build phase of your image. You can do as follows.
Create a .npmrc in your Docker build context:
@OWNER:registry=https://npm.pkg.github.com/
//npm.pkg.github.com/:_authToken=${PAT}
user.email=email@example.com
user.name=foo bar
and copy that file in the Dockerfile
FROM node:16-alpine
ARG PAT
COPY --chown=node .npmrc /home/node/.npmrc
and then, during the image build, set the value for the PAT build argument from the GITHUB_PAT environment variable of the host:
docker build --build-arg PAT=${GITHUB_PAT} .
That is, --build-arg sets the variable during the build of the image. But be aware that any variable set via --build-arg is only available at build time; it is not available while the container is running. Then again, you don't seem to need it at the runtime of the container, as the installation of your npm packages happens during the build of the image.
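Applied to the compose file from the question, a minimal sketch would substitute the host's GITHUB_PAT into the build arg instead of hard-coding a token (Compose performs ${...} substitution from the host environment):

node:
  container_name: node
  build:
    context: ..
    dockerfile: ./docker/build/node/Dockerfile
    args:
      PAT: ${GITHUB_PAT}
  ports:
    - 3150:3150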

copy build from container in different context

So I'm trying to get the environment for my project set up to use Docker. The project structure is as follows.
/client
/server
/nginx
docker-compose.yml
docker-compose.override.yml
docker-compose.prod.yml
In the Dockerfile for each of /client, /server, and /nginx I have a base image that installs my dependencies, then a development image that installs dev-dependencies, and a production image that builds or runs the image for client and server respectively.
For example:
# start from a node image
FROM node:14.8.0-alpine as base
WORKDIR /client
COPY package.json package-lock.json ./
RUN npm i --only=prod
FROM base as development
RUN npm install --only=dev
CMD [ "npm", "run", "start" ]
FROM base as production
COPY . .
RUN npm run build
So here is where my problem comes in.
In /nginx I want nginx in development to just act as a reverse proxy for create-react-app, but in production I want to take client/build from the production client image and copy it into the nginx server, to be served statically without the overhead of the entire build toolchain for React.
i.e.
FROM nginx:stable-alpine as base
FROM base as development
COPY development.conf /etc/nginx/nginx.conf
FROM base as production
COPY production.conf /etc/nginx/nginx.conf
COPY --from=??? /client/build /usr/share/nginx/html
^
what goes here?
If anyone has any clue how to get this to work without having to pull from Docker Hub and push images up to Docker Hub every time a change is made, that would be great.
You can COPY --from= another image by name. Just like docker run, the image needs to be local, and Docker won't contact Docker Hub or another registry server if you already have the image.
# Most basic form; "myapp" is the containing directory name
COPY --from=myapp_client /client/build /usr/share/nginx/html
Compose doesn't directly have a way to specify this build dependency, but running docker-compose build twice should do the trick.
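As a sketch, the same thing can be done deterministically by building the services by name in dependency order (service names as in the compose file below), so the client image exists before the nginx build needs it:

docker-compose build client   # produces the myapp_client image locally
docker-compose build nginx    # COPY --from=myapp_client can now resolve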
If you're planning to deploy this, you probably want some control over the name and tag of the image. In docker-compose.yml you can specify both build: and image:, which will tell Compose what name to use when it builds the image. You can also use environment variables almost everywhere in the Compose file, and pass ARG into a build to configure it. Combining all of these would give you:
version: '3.8'
services:
  client:
    build: ./client
    image: registry.example.com/my/client:${TAG:-latest}
  nginx:
    build:
      context: ./nginx
      args:
        TAG: ${TAG:-latest}
    image: registry.example.com/my/nginx:${TAG:-latest}
# nginx/Dockerfile
FROM nginx:stable-alpine
ARG TAG=latest
COPY --from=registry.example.com/my/client:${TAG} /client/build /usr/share/nginx/html
TAG=20210113 docker-compose build
TAG=20210113 docker-compose build
TAG=20210113 docker-compose up -d
# TAG=20210113 docker-compose push

How can a dockerized create-react-app access Azure release pipeline variables in the Docker environment

Here is the complete flow of the problem:
1) The Azure build pipeline creates an artifact (Docker image) using the following Dockerfile
FROM hub.docker.prod.private.com/library/node:10.16-alpine as buildImage
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ENV PATH /app/node_modules/.bin:$PATH
ENV REACT_APP_SERVER_URL=${REACT_APP_SERVER_URL}
ENV REACT_APP_AD_APP_ID=${REACT_APP_AD_APP_ID}
ENV REACT_APP_REDIRECT_URL=${REACT_APP_REDIRECT_URL}
ENV REACT_APP_AUTHORITY=${REACT_APP_AUTHORITY}
COPY package.json /usr/src/app/
RUN npm install
RUN npm install react-scripts@3.0.1 -g
COPY . /usr/src/app
RUN npm run build
FROM hub.docker.prod.private.com/library/nginx:1.15.12-alpine
COPY --from=buildImage /usr/src/app/build /usr/share/nginx/html
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/conf.d
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
2) And pushes the Docker image to Azure Container Registry (ACR).
3) A multistage release pipeline pulls the image from ACR and deploys it to Azure App Service(s) (QA -> Stage -> Prod).
4) The release pipeline uses variable values from a variable group defined in the release pipeline, and I expect these variables to be available in the Docker environment so that they replace the ENV variable placeholders in the Dockerfile.
But after deployment, all environment variables used inside the application remain undefined. Can you please correct me if it is possible to use the Docker environment the way I mentioned above?
But after deployment, all environment variables used inside the application remain undefined. Can you please correct me if it is possible to use the Docker environment the way I mentioned above?
The way mentioned above does not work: during the deployment process, release variables are not automatically substituted for the placeholders in the Dockerfile, stage-scoped or otherwise.
As a workaround, you can try the Replace Tokens task from the Replace Tokens extension; add it to all three stages before the deploy task. The task order in each stage should then be: Replace Tokens (to write release variables into the Dockerfile) => docker build and push => deploy.
To use this task, your Dockerfile should be:
ENV REACT_APP_SERVER_URL=#{REACT_APP_SERVER_URL}#
ENV REACT_APP_AD_APP_ID=#{REACT_APP_AD_APP_ID}#
ENV REACT_APP_REDIRECT_URL=#{REACT_APP_REDIRECT_URL}#
ENV REACT_APP_AUTHORITY=#{REACT_APP_AUTHORITY}#
For details about how this task works, see my answer to another question.
If you don't want to make any change to the Dockerfile itself, another way is to pass the values in command-line arguments. You can check this similar post for more details.
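A rough sketch of that command-line route, assuming the release stage invokes docker build directly; the $(...) syntax is Azure Pipelines variable expansion, the image name is illustrative, and the Dockerfile would need matching ARG declarations ahead of the ENV lines:

docker build \
  --build-arg REACT_APP_SERVER_URL=$(REACT_APP_SERVER_URL) \
  --build-arg REACT_APP_AD_APP_ID=$(REACT_APP_AD_APP_ID) \
  --build-arg REACT_APP_REDIRECT_URL=$(REACT_APP_REDIRECT_URL) \
  --build-arg REACT_APP_AUTHORITY=$(REACT_APP_AUTHORITY) \
  -t myregistry.azurecr.io/react-app:$(Build.BuildId) .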

Running Jest in --watch mode in a Docker container in a subdirectory of git root

I have built a web project with the following file structure:
-root_folder/
  -docker-compose.yml
  -.git/
  -backend/
    -.dockerignore
    -docker/
      -dev.dockerfile
  -frontend/
    -.dockerignore
    -docker/
      -dev.dockerfile
I run the frontend app (Angular) in a Docker container. I also run the backend app (ExpressJS) in another container but the backend is not relevant to my problem.
I have mounted the volume ./frontend to /app in the container in order to allow hot reloads.
This configuration works to run Angular just fine. However, when running Jest with the --watch flag, it gives the error: --watch is not supported without git/hg, please use --watchAll
I went back into the dockerfile and added:
RUN apk update -q && \
apk add -q git
But this doesn't fix the problem. From all the research I've done, it seems the issue is that Jest's watch mode uses Git somehow to detect changes, but my .git folder is not in the frontend subdirectory.
I tried modifying my container to copy all the files to /app/frontend instead, and to also copy in and mount the .git folder at /app/.git, but that had no effect.
I do not want to run Jest with --watchAll (but I tested it and that does run properly). Any suggestions?
EDIT: Answered my own question. I was on the right track with mounting the .git folder; the missing step was setting the GIT_WORK_TREE and GIT_DIR environment variables.
I was able to get this working exactly as I wanted. The problem is that Jest's watch mode finds changed files by asking Git. I got it working by setting up a directory structure in the container similar to my host system:
-app/
  -.git/
  -frontend/
Then, most importantly, I set the GIT_WORK_TREE and GIT_DIR environment variables.
Here is my dockerfile:
FROM node:alpine3.11 as dev
WORKDIR /app/frontend
# To use packages in CLI without global install
ENV PATH /app/frontend/node_modules/.bin:$PATH
COPY . .
RUN npm install --silent
EXPOSE 4200
CMD ["/bin/sh", "-c", "npm run start:dev"]
##########################################################
FROM dev as unit-test
ENV GIT_WORK_TREE=/app/frontend GIT_DIR=/app/.git
RUN apk update && \
apk add git
CMD ["/bin/sh", "-c", "jest --watch"]
Without the env vars set, Jest keeps reporting that it can't work without Git. I'm assuming that's because git init was never run, and Git probably does some other things behind the scenes that copying in the .git folder doesn't accomplish.
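As a quick sanity check (a sketch; the container name comes from the compose file below), you can run Git inside the running container with the same variables and confirm it resolves the repository:

docker exec f-test-unit sh -c 'GIT_DIR=/app/.git GIT_WORK_TREE=/app/frontend git status'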
Here is the docker-compose I used for the test service in case it helps someone:
f-test-unit:
  container_name: "f-test-unit"
  build:
    context: "frontend"
    dockerfile: "docker/dev.dockerfile"
    target: "unit-test"
  volumes:
    - "./frontend:/app/frontend"
    - "/app/frontend/node_modules/"
    - "./.git:/app/.git"
  tty: true
  stdin_open: false
Side note: if you add the tty and stdin_open lines, it allows for your logs in the docker container to be colorized, which is very useful with Jest.

Project dependency not found when running DockerFile build command on Azure DevOps

This is the Dockerfile generated by VS2017. I changed it a little to use it on Azure DevOps.
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY ["WebApi.csproj", "WebApi/"]
COPY ["./MyProject.Common/MyProject.Common.csproj", "MyProj.Common/"]
RUN dotnet restore "MyProject.WebApi/MyProject.WebApi.csproj"
COPY . .
WORKDIR "/src/MyProject.WebApi"
RUN dotnet build "MyProject.WebApi.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "MyProject.WebApi.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "MyProject.WebApi.dll"]
Solution structure
MyProject.sln
-MyProject.Common
...
-MyProject.WebApi
...
Dockerfile
I have created a build pipeline in Azure DevOps to run the Docker build, with these steps:
Get Sources step from Azure Repos Git
Agent job (Hosted Ubuntu 1604)
Command Line script: docker build -t WebApi .
I have this error
2019-02-02T18:14:33.4984638Z ---> 9af3faec3d9e
2019-02-02T18:14:33.4985440Z Step 7/17 : COPY ["./MyProject.Common/MyProject.Common.csproj", "MyProject.Common/"]
2019-02-02T18:14:33.4999594Z COPY failed: stat /var/lib/docker/tmp/docker-builder671248463/MyProject.Common/MyProject.Common.csproj: no such file or directory
2019-02-02T18:14:33.5327830Z ##[error]Bash exited with code '1'.
2019-02-02T18:14:33.5705235Z ##[section]Finishing: Command Line Script
Screenshot attached showing the working directory used.
I don't understand whether I have to change something inside the Dockerfile or in the Command Line script step on DevOps.
This is just a hunch, but considering that your Dockerfile is located under MyProject.WebApi and you want to copy files from MyProject.Common, which is at the same level, you might need to specify a different context root directory when running docker build:
docker build -t WebApi -f Dockerfile ../
When Docker builds an image, it collects a context: the set of files that are accessible during the build and can be copied into the image.
When you run docker build -t WebApi . it runs inside the MyProject.WebApi directory, and all files in the directory . (unless excluded by a .dockerignore file), which is MyProject.WebApi in this case, are included in the context. But MyProject.Common is not part of the context, and thus you can't copy anything from it.
Hope this helps
EDIT: Perhaps you don't need to specify the Working Directory (shown in the screenshot); then the command would change to:
docker build -t WebApi -f MyProject.WebApi/Dockerfile .
In this case Docker will use the Dockerfile located inside MyProject.WebApi and include all files belonging to the solution in the context.
You can also read about context in the Extended description for the docker build command in the official documentation.
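One caveat with the solution-root context: it sends the whole solution to the Docker daemon. A .dockerignore next to MyProject.sln can keep that upload small; a minimal sketch (typical .NET entries, hypothetical, adjust to your layout):

**/bin/
**/obj/
.git/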
