Node.js App Engine GitHub-triggered builds not getting published to development? - node.js

I have a Node.js website that runs fine locally with node server.js. I added a Dockerfile:
FROM node:carbon
VOLUME ["/root"]
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm install --only=production
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "npm", "start" ]
If I deploy my app with gcloud app deploy, it becomes accessible online via a URL. I believe my project is an 'App Engine' project? Subsequent gcloud app deploy commands push my new code to the live site. But I can't get GitHub master commits to trigger and publish a new build.
I tried adding a trigger so that every time new code lands on the master branch of my public GitHub repo, it gets deployed to my production URL.
Full Trigger:
So I merge a PR into the master branch of my GitHub repo. In my build history I see a new build, and clicking the commit takes me to the PR I just merged into master.
But if I access my website URL, the new code is not there. If I run gcloud app deploy again, it eventually appears. Judging from the logs, my trigger seems to be working fine, so why is my build not getting published?

I think the problem might be that you're using a Dockerfile instead of a Cloud Build configuration file... unless there's something else I'm not seeing.
Look here under the fourth step, fifth bullet, for the solution. It says:
Under Build configuration, select Cloud Build configuration file.
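For context: a trigger pointed at a Dockerfile only runs docker build, so a new image is produced but nothing ever deploys it. A Cloud Build configuration file can include the deploy step itself. A minimal sketch (the timeout value is illustrative, and this assumes the Cloud Build service account has been granted permission to deploy to App Engine):

```yaml
# cloudbuild.yaml -- builds AND publishes, unlike a bare Dockerfile trigger
steps:
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy"]
timeout: "1600s"
```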

Related

Deploying Angular Universal 9 to Google App Engine

I am not sure whether the situation has changed, but it seems I am stuck with the versions I am using.
Previously, in Angular 7, we could generate the server files for Angular Universal at the root level, so we could put node main.js in app.yaml and Google App Engine would figure out how to run our web application. This no longer seems possible with Angular 9.
We are using Angular SSR for our production web site. It compiles all the server files into the /dist-server folder. There is a Dockerfile to deploy it on Google App Engine:
FROM node:12-alpine as buildContainer
WORKDIR /app
COPY ./package.json ./package-lock.json /app/
RUN npm install
COPY . /app
RUN npm run build:ssr  # Creates the dist/ and dist-server/ folders inside the image
FROM node:12-alpine
WORKDIR /app
COPY --from=buildContainer /app/package.json /app
COPY --from=buildContainer /app/dist /app/dist
COPY --from=buildContainer /app/dist-server /app/dist-server
EXPOSE 4000
CMD ["npm", "run", "serve:ssr"]
In package.json we have :
"serve:ssr": "node dist-server/main.js",
To start the deployment, we type gcloud app deploy in the terminal, and the process itself works fine. The main problem is that it takes almost 25 minutes to finish, and the main bottleneck is the compilation.
I thought we could compile the repo on our local dev machine, copy only the dist/ and dist-server/ folders into the image, and add node dist-server/main.js to the Dockerfile to run our web application. But whenever I tried to copy only the dist and dist-server folders, I got the error message below:
COPY failed: stat /var/lib/docker/tmp/docker-builder{random numbers}/dist: no such file or directory
I also tried to compile main.js, the main server file for Angular Universal, at the same level as app.yaml. I assumed this is required by Google App Engine's Node.js deployment rules, since there is an example repo from Google that does this. But I cannot compile our main.js file into the root folder; it gives the error message below:
An unhandled exception occurred: Output path MUST not be project root directory!
So I am looking for a solution that does not require Google App Engine to rebuild our repo (since we can do that on our dev machine and upload the compiled files to save time), making the deployment process faster.
Thanks for your help.
I found that the .dockerignore file had the dist and dist-server folders in it. After removing those entries, I am able to compile and deploy the Dockerfile on Google App Engine.
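To make the failure mode concrete: anything listed in .dockerignore is stripped from the build context before COPY runs, which is why Docker reported `no such file or directory`. A sketch of the relevant entries (names follow this question's folders):

```
# .dockerignore
node_modules
# dist         <- was listed here; removing it lets COPY ./dist work
# dist-server  <- same for the server build output
```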

Problem deploying MERN app with Docker to GCP App Engine - should deploy take multiple hours?

I am inexperienced with Dev Ops, which drew me to using Google App Engine to deploy my MERN application. Currently, I have the following Dockerfile and entrypoint.sh:
# Dockerfile
FROM node:13.12.0-alpine
WORKDIR /app
COPY . ./
RUN npm install --silent
WORKDIR /app/client
RUN npm install --silent
WORKDIR /app
RUN chmod +x /app/entrypoint.sh
ENTRYPOINT [ "/app/entrypoint.sh" ]
# Entrypoint.sh
#!/bin/sh
node /app/index.js &
cd /app/client
npm start
The React front end is in a client folder located in the base directory of the Node application. I am attempting to deploy these together, and would generally prefer deploying together over separately. Running docker-compose up --build successfully redeploys my application on localhost.
I have created a very simple app.yaml file which is needed for Google App Engine:
# app.yaml
runtime: custom
env: standard
I read in the docs here to use runtime: custom when using a Dockerfile to configure the runtime environment. I initially selected a standard environment over a flexible environment, and so I've added env: standard as the other line in the app.yaml.
After installing gcloud and running gcloud app deploy, things kicked off; however, for the last several hours this is what I've seen in my terminal window:
Hours seems like a higher order of magnitude than deploying an application should take, and I've begun to think that I've done something wrong.
You are probably uploading more files than you need.
Use a .gcloudignore file to list the files/folders that you do not want to upload.
You may need to change the file structure of your current project.
Additionally, it might be worth researching the Standard nodejs10 runtime further. It uploads and starts much faster than the Flexible alternative (the custom env is part of App Engine Flex). Then you can deploy each part to a different service.
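A sketch of a .gcloudignore for a MERN layout like this one (the entries are examples to adapt; with a Dockerfile build, npm install runs inside the image, so node_modules never needs to be uploaded):

```
# .gcloudignore
.git
.gitignore
node_modules/
client/node_modules/
npm-debug.log
```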

Azure App Service setting ASPNETCORE_ENVIRONMENT=Development result in 404

I noticed this issue when deploying my ASP.NET Core MVC application's Docker image to Azure App Service. Whether I set ASPNETCORE_ENVIRONMENT in the Dockerfile (ENV ASPNETCORE_ENVIRONMENT Development) or in docker-compose:
environment:
- ASPNETCORE_ENVIRONMENT=Development
I always get a 404 when accessing the website.
The weird part is that if I set ASPNETCORE_ENVIRONMENT to any other value (Staging, Production, etc.), the 404 goes away and the website can be accessed normally.
How to reproduce:
Create an ASP.NET Core MVC project (just a bare-bones project; don't change any code or add any logic)
Build this project on the local machine (the Dockerfile is below)
FROM microsoft/dotnet:2.2-sdk AS build-env
WORKDIR /app
# Copy necessary files and restore
COPY *.csproj ./
RUN dotnet restore
# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM microsoft/dotnet:2.2-aspnetcore-runtime
COPY --from=build-env /app/out .
# Start
ENV ASPNETCORE_ENVIRONMENT Development
ENTRYPOINT ["dotnet", "CICDTesting.dll"]
Push this image to Azure Container Registry
I have a webhook attached to the App Service, so deployment is triggered automatically
From the Log Stream I can see the image is pulled successfully and the container is up and running
Accessing the website then gives a 404
If I change ENV ASPNETCORE_ENVIRONMENT Development to ENV ASPNETCORE_ENVIRONMENT Staging and repeat the build and deploy steps, the website becomes accessible.
It's the same if I remove ENV ASPNETCORE_ENVIRONMENT Development from the Dockerfile and configure it in docker-compose instead.

GCP Cloud build ignores timeout settings

I use Cloud Build for copying the configuration file from storage and deploying the app to App Engine flex.
The problem is that the build fails every time it lasts more than 10 minutes. I've specified a timeout in my cloudbuild.yaml, but it looks like it's ignored. I also configured app/cloud_build_timeout and set it to 1000. Could somebody explain to me what is wrong here?
My cloudbuild.yaml looks like this:
steps:
- name: gcr.io/cloud-builders/gsutil
  args: ["cp", "gs://myproj-dev-247118.appspot.com/.env.cloud", ".env"]
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy"]
  timeout: 1000s
timeout: 1600s
My app.yaml uses a custom env that builds from the Dockerfile and looks like this:
runtime: custom
env: flex
manual_scaling:
  instances: 1
env_variables:
  NODE_ENV: dev
The Dockerfile also contains nothing special, just dependency installation and the app build:
FROM node:10 as front-builder
WORKDIR /app
COPY front-end .
RUN npm install
RUN npm run build:web
FROM node:12
WORKDIR /app
COPY api .
RUN npm install
RUN npm run build
COPY .env .env
EXPOSE 8080
COPY --from=front-builder /app/web-build web-build
CMD npm start
When running gcloud app deploy directly for an App Engine Flex app, from your local machine for example, under the hood it spawns a Cloud Build job to build the image that is then deployed to GAE (you can see that build in Cloud Console > Cloud Build). This build has a 10min timeout that can be customized via:
gcloud config set app/cloud_build_timeout 1000
Now, the issue here is that you're issuing the gcloud app deploy command from within Cloud Build itself. Since each individual Cloud Build step is running in its own Docker container, you can't just add a previous step to customize the timeout since the next one will use the default gcloud setting.
You've got several options to solve this:
Add a build step to first build the image with docker build and upload it to Google Container Registry. You can set a custom timeout on these steps to fit your needs. Finally, deploy your app with gcloud app deploy --image-url=IMAGE-URL.
Create your own custom gcloud builder where app/cloud_build_timeout is set to your custom value. You can derive it from the default gcloud builder Dockerfile and add /builder/google-cloud-sdk/bin/gcloud config set app/cloud_build_timeout 1000
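The first option could be sketched as a cloudbuild.yaml like this ($PROJECT_ID is a built-in Cloud Build substitution; the image name and timeout values are illustrative). Because the image is prebuilt, gcloud app deploy no longer spawns its own inner build, so the inner 10-minute timeout stops being an issue:

```yaml
steps:
- name: "gcr.io/cloud-builders/docker"
  args: ["build", "-t", "gcr.io/$PROJECT_ID/my-app", "."]
  timeout: 1600s
- name: "gcr.io/cloud-builders/docker"
  args: ["push", "gcr.io/$PROJECT_ID/my-app"]
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy", "--image-url=gcr.io/$PROJECT_ID/my-app"]
timeout: 3600s
```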
Just in case you are using Google Cloud Build with Skaffold: remember to check skaffold.yaml for the timeout option inside the googleCloudBuild section under build. For example:
build:
  googleCloudBuild:
    timeout: 3600s
Skaffold will ignore the gcloud config of the machine from which you run the deploy. For example, it will ignore this CLI command: gcloud config set app/cloud_build_timeout 3600

Building a Docker image for a Node.js app in GitLab CI

I'm working on a Node.js application for which my current Dockerfile looks like this:
# Stage 0
# =======
FROM node:10-alpine as build-stage
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
COPY . ./
RUN yarn build
# Stage 1
# =======
FROM nginx:mainline-alpine
COPY --from=build-stage /app/build /usr/share/nginx/html
I'd like to integrate this into a GitLab CI pipeline but I'm not sure if I got the basic idea. So far I know that I need to create a .gitlab-ci.yml file which will be later picked up by GitLab.
My basic idea is:
I push my code changes to GitLab.
GitLab builds a new Docker image based on my Dockerfile.
GitLab pushes this newly created image to a "production" server (later).
So, my question is:
My .gitlab-ci.yml should then contain something like a build job which triggers... what? The docker build command? Or do I need to "copy" the Dockerfile content to the CI file?
GitLab CI executes the pipeline in Runners that need to be registered with the project using generated tokens (Settings > CI/CD > Runners). You can also use Shared Runners across multiple projects. The pipeline is configured with the .gitlab-ci.yml file, and from that file you can build, test, push, and deploy Docker images whenever something happens in the repo (push to a branch, merge request, etc.).
It's also useful when your application already has a Dockerfile that
can be used to create and test an image
So basically you need to install the runner, register it with the token of your project (or use Shared Runners), and configure your CI YAML file. The recommended approach is Docker-in-Docker, but it is up to you. You can also check this basic example. Finally, you can deploy your container directly to Kubernetes, Heroku, or Rancher. Remember to safely configure your credentials and secrets in Settings/Variables.
Conclusion
GitLab CI is awesome, but I recommend that you first think about the git workflow you want to use, in order to set the stages in the .gitlab-ci.yml file. This will let you configure your Node project as a pipeline, and then it would be easy to port to other tools such as Jenkins pipelines or Travis, for example.
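As a concrete starting point, a minimal .gitlab-ci.yml using the Docker-in-Docker approach might look like this (a sketch; the docker image tags are illustrative, and the $CI_REGISTRY* variables are GitLab's predefined ones for the built-in container registry):

```yaml
stages:
  - build

build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    # Uses the repo's existing Dockerfile -- no need to copy its content here
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```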
build job trigger:
option 1:
add when: manual to the job, and you can run it manually under CI/CD > Pipelines
option 2:
only:
- <branchname>
in this case the job starts when you push to the defined branch
(this is my personal suggestion)
option 3:
add nothing, and the job will run every time you push code
Of course you can combine the options above.
In addition, you can start the job with a web request by using the job token.
The docker build command will work in the pipeline, in the script section of the job.
Requirement: a Docker engine on the gitlab-runner that picks up the job.
Or do I need to "copy" the Dockerfile content to the CI file?
no
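Putting the options above together, a job that runs docker build in its script section and only triggers on pushes to a specific branch could look like this (a sketch; the branch and image names are illustrative):

```yaml
build-image:
  stage: build
  script:
    - docker build -t my-app:$CI_COMMIT_SHORT_SHA .
  only:
    - master
```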
