I'm trying to build a Jekyll blog using a GitLab runner (for GitLab Pages), and I get the following error: ERROR: Build failed: exit code 1. Everything worked up to this point. Link to the project: https://gitlab.com/dash.plus/dashBlog
Just add
- apt-get update && apt-get install -y nodejs
and of course
- bundle install
inside .gitlab-ci.yml (presumably a gem in the build needs a JavaScript runtime, which installing nodejs provides):
image: ruby:2.3

test:
  stage: test
  script:
    - gem install jekyll
    - bundle install
    - apt-get update && apt-get install -y nodejs
    - bundle exec jekyll build -d test/
  artifacts:
    paths:
      - test
  except:
    - master

pages:
  stage: deploy
  script:
    - gem install jekyll
    - bundle install
    - apt-get update && apt-get install -y nodejs
    - bundle exec jekyll build -d public/
  artifacts:
    paths:
      - public
  only:
    - master
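Since both jobs repeat the same three setup commands, one possible tidy-up (my suggestion, not part of the original answer) is to hoist them into a global before_script, so each job's script shrinks to just the build command:

image: ruby:2.3

# Shared setup that runs before every job's script section.
before_script:
  - apt-get update && apt-get install -y nodejs
  - gem install jekyll
  - bundle install

test:
  stage: test
  script:
    - bundle exec jekyll build -d test/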
Here is the relevant part of my .gitlab-ci.yml:
.deploy: &deploy
  before_script:
    - apt-get update -y
  script:
    - cd source/
    - npm install multi-file-swagger
    - multi-file-swagger -o yaml temp.yml > swagger.yml
I want to install the multi-file-swagger package to compile temp.yml (which has been split into multiple files) into swagger.yml. So before using npm, I need to install Node.js. How can I do that?
As the image is Debian-based, you should be able to add the NodeSource repository and install the package from there. The relevant section of your .gitlab-ci.yml would look like this:
.deploy: &deploy
  before_script:
    - apt-get update -y
  script:
    - curl -sL https://deb.nodesource.com/setup_17.x | bash
    - apt-get install nodejs -yq
    - cd source/
    - npm install multi-file-swagger
    - multi-file-swagger -o yaml temp.yml > swagger.yml
Please note that these additional steps will add a significant amount of time to your build process. If you execute them frequently, consider creating your own build image derived from the one you're using now, with these steps baked into the image itself.
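A minimal sketch of such a derived image; the base image tag and the registry path are placeholders, not from the original post:

# Dockerfile for a build image with Node.js preinstalled,
# so CI jobs can skip the curl/apt-get steps entirely.
# FROM is a placeholder: use whatever Debian-based image the job runs now.
FROM ruby:2.3
RUN apt-get update -y \
 && apt-get install -y curl \
 && curl -sL https://deb.nodesource.com/setup_17.x | bash - \
 && apt-get install -yq nodejs \
 && rm -rf /var/lib/apt/lists/*

Build and push it to your registry, then point the job's image: key at it.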
I'm setting up GitLab CI/CD to automate deployment to a Heroku app on every push.
Currently my .gitlab-ci.yml file looks like this:
production:
  type: deploy
  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    - dpl --provider=heroku --app=koober-production --api-key=$HEROKU_PRODUCTION_API_KEY
  only:
    - master
This works fine; deployment is successful and the application is working.
But I need to run a few commands after a successful deployment to migrate the database.
At present, I do this manually by running this command from a terminal:
heroku run python manage.py migrate -a myapp
How can I automate running this command after deployment?
First, types are deprecated; you should use stages instead.
Back to the original question: I think you can use a new stage for this purpose, declaring something like:
stages:
  - build
  - test
  - deploy
  - post_deploy

post_production:
  stage: post_deploy
  script:
    - heroku run python manage.py migrate -a myapp
  only:
    - master
This should then execute only if the deployment succeeds.
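Note that heroku run needs the Heroku CLI inside the job's container, plus an API key for non-interactive authentication; neither is shown above. A minimal sketch under those assumptions (the install URL is Heroku's official script; mapping your existing variable onto HEROKU_API_KEY is my addition):

post_production:
  stage: post_deploy
  script:
    # Install the Heroku CLI; assumes curl is present in the image.
    - curl https://cli-assets.heroku.com/install.sh | sh
    # The CLI authenticates non-interactively via HEROKU_API_KEY.
    - export HEROKU_API_KEY=$HEROKU_PRODUCTION_API_KEY
    - heroku run python manage.py migrate -a myapp
  only:
    - master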
Solved using the --run flag to run the command through dpl:
stages:
  - deploy

production:
  stage: deploy
  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    - dpl --provider=heroku --app=koober-production --api-key=$HEROKU_PRODUCTION_API_KEY --run='python manage.py migrate && python manage.py create_initial_users'
  only:
    - master
I am using GitLab CI to build my Middleman application, which also uses some Node tooling for the front end (Gulp).
Here is my .gitlab-ci.yml (mostly copied from here):
image: ruby:2.3

cache:
  paths:
    - vendor
    - node_modules

before_script:
  - apt-get update -yqqq
  - apt-get install -y npm
  - ln -s /usr/bin/nodejs /usr/bin/node
  - npm install
  - bundle install --path vendor

test:
  script:
    - bundle exec middleman build
  except:
    - master

pages:
  script:
    - bundle exec middleman build
  artifacts:
    paths:
      - public
  only:
    - master
Everything goes fine apart from the vital problem that an old version of Node is being used for npm install. I'm getting lots of this:
npm WARN engine gulp-babel#7.0.0: wanted: {"node":">=4"} (current: {"node":"0.10.29","npm":"1.4.21"})
before finally failing on the "const path" SyntaxError.
I included a line to symlink the new nodejs binary to the old name (- ln -s /usr/bin/nodejs /usr/bin/node), but it seems to have no effect.
I've been banging my head for long enough; there's got to be someone out there who has made this work?
Debian Jessie ships with a fixed Node.js major version, so follow the NodeSource instructions to install a specific version. This fits into your .gitlab-ci.yml as follows (you probably need to install curl first, since it's not included in the ruby:2.3 image):
before_script:
  - apt-get update -q && apt-get -qqy install curl
  - curl -sL https://deb.nodesource.com/setup_9.x | bash -
  # NodeSource's nodejs package bundles npm and already provides /usr/bin/node,
  # so there is no need to install Debian's npm package or create a symlink.
  - apt-get -qqy install nodejs
  - npm install
  - bundle install --path vendor
I have the following .gitlab-ci.yml file:
before_script:
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - eval $(ssh-agent -s)
  - ssh-add /root/gitlab-runner/.ssh/id_rsa
  - mkdir -p ~/.ssh
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  - apt-get update -qq && apt-get install -y -qq apt-utils sqlite3 libsqlite3-dev nodejs tree
  - gem install bundler --no-ri --no-rdoc
  - bundle install --jobs $(nproc) "${FLAGS[@]}"
  - cp /root/gitlab-runner/.database.gitlab-ci.yml config/database.yml
  - RAILS_ENV=test rake parallel:setup

rspec:
  script:
    - rake parallel:spec
The issue is that we have many projects using exactly the same before_script actions, and these actions sometimes change, so we have to update the file in every project. Is there a way to configure the runner to execute these actions automatically, so that the .gitlab-ci.yml in this case becomes:
rspec:
  script:
    - rake parallel:spec
You can save all the before_script commands in a Bash script, store it on the server hosting the runner, and then just reference it in all the projects:
before_script:
  - /[path on the host]/script.sh
If you are using Docker, you can either include the file in your own image or use volumes to mount the host directory into the Docker container.
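For the volumes variant with the Docker executor, the mount is declared in the runner's config.toml on the host; the paths below are hypothetical:

# /etc/gitlab-runner/config.toml (excerpt)
[[runners]]
  [runners.docker]
    # Mount the host directory holding the shared scripts
    # read-only into every job container.
    volumes = ["/opt/ci-scripts:/opt/ci-scripts:ro"]

Jobs can then call /opt/ci-scripts/script.sh from before_script.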
It would be a bit more complicated if you have multiple runners on different servers.
I have the following configuration as my .gitlab-ci.yml, but I found that after successfully passing the build stage (which creates a virtualenv called venv), the test stage starts with a brand-new environment (there's no venv directory at all). So I wonder whether I should put the setup script in before_script, so that it runs in each phase (build/test/deploy). Is that the right way to do it?
before_script:
  - uname -r

types:
  - build
  - test
  - deploy

job_install:
  type: build
  script:
    - apt-get update
    - apt-get install -y libncurses5-dev
    - apt-get install -y libxml2-dev libxslt1-dev
    - apt-get install -y python-dev libffi-dev libssl-dev
    - apt-get install -y python-virtualenv
    - apt-get install -y python-pip
    - virtualenv --no-site-packages venv
    - source venv/bin/activate
    - pip install -q -r requirements.txt
    - ls -al
  only:
    - master

job_test:
  type: test
  script:
    - ls -al
    - source venv/bin/activate
    - cp crawler/settings.sample.py crawler/settings.py
    - cd crawler
    - py.test -s -v
  only:
    - master
GitLab CI jobs are supposed to be independent, because they could run on different runners, so this is not a bug. There are two ways to pass files between stages:
The right way: using artifacts.
The wrong way: using cache with a cache-key "hack"; this still requires the same runner.
So yes, the way GitLab intends it, everything your job depends on should be set up in before_script.
Artifacts example:
artifacts:
  when: on_success
  expire_in: 1 mos
  paths:
    - some_project_files/
Cache example:
cache:
  key: "$CI_BUILD_REF_NAME"
  untracked: true
  paths:
    - node_modules/
    - src/bower_components/
For a correct running environment, I suggest using Docker with an image that already contains your apt-get dependencies, and using artifacts to pass job results between jobs. Note that artifacts are also uploaded to the GitLab web interface, where they can be downloaded; if they are heavy, use a short expire_in so they are removed once all jobs are done.
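Applied to the original question, a hedged sketch: publish the venv directory from the build job as an artifact and pull it into the test job with dependencies. This works only because both jobs run in the same image, so the virtualenv's absolute paths stay valid:

job_install:
  type: build
  script:
    - virtualenv --no-site-packages venv
    - source venv/bin/activate
    - pip install -q -r requirements.txt
  artifacts:
    expire_in: 1 hour   # keep it short; a venv can be heavy
    paths:
      - venv/

job_test:
  type: test
  dependencies:
    - job_install   # fetch the venv artifact built in the previous stage
  script:
    - source venv/bin/activate
    - py.test -s -v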