GitLab Runner runs too long - GitLab

I made a static site and am now trying to configure GitLab CI/CD.
The source code for the site is on a remote server.
This is my .gitlab-ci.yml:
image: ruby:2.6
variables:
  JEKYLL_ENV: production
before_script:
  - gem install bundler
  - bundle install
deploy:
  stage: deploy
  script:
    - bundle exec jekyll build --watch
  only:
    - master
I use --watch because without it the job passes correctly and the changes show up on my local computer, but no updates appear on the remote server.
There is a line in the job output:
Auto-regeneration: disabled for '/builds/wiki/docplus'. Use --watch to enable
But with --watch, when I push my commits the gitlab-runner runs too long and the job fails:
$ bundle exec jekyll build --watch
Configuration file: /builds/wiki/docplus/_config.yml
Source: /builds/wiki/docplus
Destination: /builds/wiki/docplus/_site
Incremental build: disabled. Enable with --incremental
Generating...
done in 1.863 seconds.
Auto-regeneration: enabled for '/builds/wiki/docplus'
Pulling docker image gitlab/gitlab-runner-helper:x86_64-003fe500 ...
ERROR: Job failed: execution took longer than 1h0m0s seconds
What's wrong?

The --watch flag keeps Jekyll running so it can rebuild on file changes; in CI the process therefore never exits and the job hits the 1-hour timeout. Update your deploy.script to just build (without --watch):
deploy:
  stage: deploy
  script:
    - bundle exec jekyll build
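If the goal is to actually publish the generated site from CI rather than only build it, one common pattern is to expose the output as a job artifact. This is only a sketch assuming GitLab Pages is the target; the pages job name and the public output folder are GitLab Pages conventions, not something from the question:

pages:
  stage: deploy
  script:
    # -d sets Jekyll's destination folder; Pages serves the "public" artifact
    - bundle exec jekyll build -d public
  artifacts:
    paths:
      - public
  only:
    - master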

Related

How to deploy an Angular app with GitLab CI/CD

I've been trying to set up a CI/CD pipeline on my repo which runs common tasks like linting, tests, etc. I've successfully set up a GitLab Runner which is working fine. The only part I'm stuck on is the "deploy" part.
When I run my build, how do I actually get the files into my /var/www/xyz folder?
I get that everything is running in a Docker container and I can't just magically copy-paste my files there, but I don't get how to move the files into my actual server directory. I've been searching for days for good docs/explanations, so as always, Stack Overflow is my last resort for help.
I'm running on an Ubuntu 20.04 LTS VPS with a SaaS GitLab repository, if that info is needed. This is my .gitlab-ci.yml:
image: timbru31/node-alpine-git
before_script:
  - git fetch origin

stages:
  - setup
  - test
  - build
  - deploy

# All Setup Jobs
Install Dependencies:
  stage: setup
  interruptible: true
  script:
    - npm install
    - npm i -g @nrwl/cli
  artifacts:
    paths:
      - node_modules/

# All Test Jobs
Lint:
  stage: test
  script: npx nx run nx-fun:lint

Tests:
  stage: test
  script: npx nx run nx-fun:test

Deploy:
  stage: build
  script:
    - ls /var/www/
    - npx nx build --prod --output-path=dist/
    - cp -r dist/* /var/www/html/neostax/
  only:
    refs:
      - master
Normally I would SSH into my server, run the build, and then copy the build to the corresponding web directory.
TL;DR - How do I get files from a GitLab Runner into an actual directory on the server?
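One common pattern is to push the build output to the server over SSH from the deploy job. The following is only a sketch under a few assumptions not in the question: a passphrase-less deploy key stored in an SSH_PRIVATE_KEY CI/CD variable, a matching deploy user on the VPS, and your-server.example standing in for the real host:

Deploy:
  stage: deploy
  before_script:
    # The alpine-based image ships without an SSH client or rsync
    - apk add --no-cache openssh-client rsync
    # Load the deploy key from the CI/CD variable into an ssh-agent
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | ssh-add -
    - mkdir -p ~/.ssh
    - ssh-keyscan your-server.example >> ~/.ssh/known_hosts
  script:
    - npx nx build --prod --output-path=dist/
    # Copy the build output into the web root on the VPS
    - rsync -avz --delete dist/ deploy@your-server.example:/var/www/html/neostax/
  only:
    refs:
      - master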

How to make GitHub Actions run tests on (production) build results instead of develop mode

I currently have a GitHub Actions workflow like this in a Create React App:
name: Percy
on: [push]
jobs:
  percy:
    name: Visual Testing
    runs-on: ubuntu-16.04
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Cypress run
        uses: cypress-io/github-action@v2
        env:
          PERCY_TOKEN: ${{ secrets.PERCY_TOKEN }}
        with:
          start: yarn start
          wait-on: 'http://localhost:3000'
          command-prefix: 'percy exec -- npx'
But I would like to yarn build (instead of yarn start) and serve that result for my tests (Cypress, etc.), so I can see how the tests behave on something that has gone through webpack.
I have tried a lot of different things (like start: yarn build && yarn serve -s build -p 3000) but have come to the conclusion that I need some guidance.
...
$ react-scripts build '&&' yarn serve -s build -p 3000
Creating an optimized production build...
Compiled successfully.
File sizes after gzip:
49.3 KB build/static/js/2.98954ae7.chunk.js
3.01 KB build/static/js/main.9bc31c1d.chunk.js
1.13 KB build/static/css/main.9e43f7ef.chunk.css
818 B build/static/css/2.a2fbc952.chunk.css
779 B build/static/js/runtime-main.fe4fcbcb.js
The project was built assuming it is hosted at /.
You can control this with the homepage field in your package.json.
The build folder is ready to be deployed.
You may serve it with a static server:
yarn global add serve
serve -s build
Find out more about deployment here:
bit.ly/CRA-deploy
Done in 10.36s.
http://localhost:3000 timed out on retry 61 of 2
Error: connect ECONNREFUSED 127.0.0.1:3000
You can use the build parameter to build the app using yarn build and the start parameter to start the server using npx serve.
- name: Cypress run
  uses: cypress-io/github-action@v2
  env:
    PERCY_TOKEN: ${{ secrets.PERCY_TOKEN }}
  with:
    build: yarn build
    start: npx serve -s build -l 3000
    wait-on: 'http://localhost:3000'
    command-prefix: 'percy exec -- npx'

GitLab CI deploy of an Express.js app doesn't finish the deploy stage

I just finished setting up the CI/CD build on GitLab, using a Node.js image with Docker to build, and in the last step of deploy the log shows that yarn dev is running fine, but GitLab CI has a 1-hour limit on running pipelines.
What do I need to do to run the Express.js app and finish the pipeline execution without stopping the app?
I know that with Docker I can run with the detached option, but is there any way to do it without building the app's Docker image?
My CI config (the CI/CD log shows the app running):
image: node:12.18.1

stages:
  - build
  - test
  - deploy

before_script:
  - yarn

build-min-code:
  stage: build
  script:
    - yarn

deploy-staging:
  stage: deploy
  script:
    - yarn dev
  only:
    - dev
Like this it works fine, but after one hour the timeout stops the runner execution.
Bump up the timeout. The default is 60 minutes: https://docs.gitlab.com/ee/ci/pipelines/settings.html#timeout
Note: the maximum job timeout can also be set on the gitlab-runner itself - https://docs.gitlab.com/ee/ci/runners/README.html#set-maximum-job-timeout-for-a-runner
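Besides the project-level setting, the timeout can also be raised for a single job with the timeout keyword. A minimal sketch (the 3h value is only an example; the runner's maximum job timeout still caps it):

deploy-staging:
  stage: deploy
  timeout: 3h  # per-job override of the project-level timeout
  script:
    - yarn dev
  only:
    - dev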

Cache build folder during GitLab CI build

I have a remote server where I serve my project via Nginx. I am using GitLab CI to automate my deploy process, and I have run into a problem. When I push my commits to the master branch, the gitlab-runner runs nicely, but it removes my React build folder (which is expected, since I put it into .gitignore). Because it always removes the build folder, my Nginx cannot serve any files until the build finishes and a new build folder is created. Is there any solution to this problem? It would be nice if I could keep the old build folder until the build process finishes. I attached my .gitlab-ci.yml. Thanks in advance!
image: docker:latest
services:
  - docker:dind

stages:
  - build
  - deploy

variables:
  GIT_SSL_NO_VERIFY: "1"

build-step:
  stage: build
  tags:
    - shell
  script:
    - docker image prune -f
    - docker-compose -f docker-compose.yml -f docker-compose.prod.yml build

deploy-step:
  stage: deploy
  tags:
    - shell
  script:
    - docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
It should be possible to use git fetch and to disable git clean when your deploy job starts. Here are links to the variables that control this:
https://docs.gitlab.com/ee/ci/yaml/#git-clean-flags
https://docs.gitlab.com/ee/ci/yaml/#git-strategy
It would look something like this:
deploy-step:
  variables:
    GIT_STRATEGY: fetch
    GIT_CLEAN_FLAGS: none
  stage: deploy
  tags:
    - shell
  script:
    - docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
This should make GitLab use git fetch instead of git clone, and not run any git clean ... commands. The build artifacts from your previous run should then not be removed.
There are some problems with this, though. If something goes wrong with a build, you might end up in a situation where you have to manually log into the server where the runner is to fix it. The reason GitLab uses git clean is to prevent these types of problems.
A more robust solution is to use nginx as a sort of double buffer. You can have two different build folders, change the config in nginx, and then send a signal to nginx to reload the config. nginx will then gracefully switch to the new version of your application without any interruption. Here is a link to someone who has done this:
https://syshero.org/2016-06-09-zero-downtime-deployments-using-nginx/
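As a rough sketch of the symlink-swap variant of that idea, assuming nginx's root points at /var/www/html/current and the job runs on the server via the shell runner (the build-... directory naming is made up for illustration):

deploy-step:
  stage: deploy
  tags:
    - shell
  script:
    # Copy the fresh build next to the live one instead of over it
    - NEW=/var/www/html/build-$(date +%s)
    - cp -r build/ "$NEW"
    # Swap the symlink atomically so nginx never serves a half-written folder
    - ln -s "$NEW" /var/www/html/current.tmp
    - mv -T /var/www/html/current.tmp /var/www/html/current
    # Reload nginx only if its config itself changed
    - nginx -s reload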

Docker/CI: Get access to a Node.js app which is created in another stage

In my CI pipeline (GitLab) there is a build stage and an end2end-testing stage. In the build stage the files of the application are created. Then I want to copy the generated files to the e2e_testing container to do some tests with this application.
How do I copy the generated files (/opt/project/build/core/bundle) to the image?
For e2e testing I want to use nightwatchJS - see the e2e docker image below. Maybe it is possible to use the build image within the e2e image?
What I need to do is Nightwatch.js e2e testing of the generated Node.js application.
My attempt
Copy the generated files to the e2e_testing container with the docker cp command:
build:
  stage: build
  before_script:
    - meteor build /opt/project/build/core --directory
  script:
    - cd /opt/jaqua/build/core/bundle
    - docker build -t $CI_REGISTRY_IMAGE:latest .
  after_script:
    - docker cp /opt/project/build/core/bundle e2e_testing:/opt/project/build/core/
But this is not working: the next stage (e2e) creates a container from the e2e:latest image, and in that container no bundle folder exists, so this sample script fails.
e2e:
  image: e2e:latest
  stage: e2e
  before_script:
    - cd /opt/project/build/core/bundle && ls -la
  script:
    # - run nightwatchJS to do some e2e testing with the build bundle
e2e:latest image Dockerfile
FROM java:8-jre
## Node.js setup
RUN curl -sL https://deb.nodesource.com/setup_4.x | bash -
RUN apt-get install -y nodejs
## Nightwatch
RUN npm install -g nightwatch
A container called e2e_testing is created from this image and it is running all the time, so at the time the CI pipeline runs, the container already exists.
But at the time this image is created, the application files do not exist yet, as they are generated in the build stage. So I cannot put those files into the Docker image using a Dockerfile.
So how can I get access to the files generated in the build stage from the e2e stage?
Or is it possible to use the build image ($CI_REGISTRY_IMAGE:latest) within the nightwatch image (e2e)?
What about using artifacts?
Basically move the bundle folder to the root of your repository after the build and define the bundle folder as an artifact. Then, from the e2e job, the bundle folder will be downloaded from the artifacts of the build stage and you will be able to use its contents. Here's an example of how to do this:
build:
  stage: build
  before_script:
    - meteor build /opt/project/build/core --directory
  script:
    # Copy the contents of the bundle folder to ./bundle
    - cp -r /opt/project/build/core/bundle ./bundle
    - cd /opt/jaqua/build/core/bundle
    - docker build -t $CI_REGISTRY_IMAGE:latest .
  artifacts:
    paths:
      - bundle

e2e:
  image: e2e:latest
  stage: e2e
  dependencies:
    - build
  script:
    - mkdir -p /opt/project/build/core/bundle
    - cp -r ./bundle /opt/project/build/core/bundle
    # - run nightwatchJS to do some e2e testing with the build bundle
I don't know if you still need the docker build part, so I left it in there in case you want to push that container somewhere.
