Introduction: I am new to creating GitLab pipelines.
Details:
The executor I am using for the runner is Shell.
(I am not sure whether this can be used, or whether a new runner needs to be registered with a different executor.)
GitLab Runner 13.11.0
When I try to execute the code below, which I have written in the .gitlab-ci.yml file, it throws an error.
image: "ruby:2.6"

test:
  script:
    - sudo apt-get update -qy
    - sudo apt-get -y install unzip zip
    - gem install cucumber
    - gem install rspec-expectations
    # TODO: grep all folders for .feature files
    - find . -name "*.feature"
The error I am receiving is as follows.
Output from the GitLab pipeline execution:
Could you please help me fix this so that the pipeline runs successfully?
Thanks.
Related
.deploy: &deploy
  before_script:
    - apt-get update -y
  script:
    - cd source/
    - npm install multi-file-swagger
    - multi-file-swagger -o yaml temp.yml > swagger.yml
I want to install the multi-file-swagger package to compile temp.yml (which has been split into multiple files) into swagger.yml. So before using npm, I need to install Node.js. How can I do that?
As the image is Debian-based, you should be able to add the NodeSource repository and install the package from there. The relevant section of your GitLab file would look like this:
.deploy: &deploy
  before_script:
    - apt-get update -y
  script:
    - curl -sL https://deb.nodesource.com/setup_17.x | bash
    - apt-get install nodejs -yq
    - cd source/
    - npm install multi-file-swagger
    - multi-file-swagger -o yaml temp.yml > swagger.yml
Please note that these additional steps add a significant amount of time to your build process. If you execute them frequently, consider creating your own build image derived from the one you're using now, with these steps baked into the image itself.
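As a sketch, such a derived image could be built from a Dockerfile like the following (the base image and package list here are assumptions, not part of the original setup):

```dockerfile
# Hypothetical Dockerfile for a prebuilt CI image with Node.js preinstalled.
FROM debian:bullseye

# Install curl first so the NodeSource setup script can be fetched,
# then install Node.js and clean up the apt cache to keep the image small.
RUN apt-get update -y \
 && apt-get install -y curl \
 && curl -sL https://deb.nodesource.com/setup_17.x | bash - \
 && apt-get install -y nodejs \
 && rm -rf /var/lib/apt/lists/*
```

The pipeline jobs could then reference this image directly and skip the apt-get and curl steps entirely.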
For an old codebase, we're trying to move from just uploading changes through FTP to using GitLab CI/CD. However, none of us has extensive GitLab experience, and I've been trying to set up the deployment by following this guide:
https://savjee.be/2019/04/gitlab-ci-deploy-to-ftp-with-lftp/
I'm running a gitlab-runner on my own Mac right now; however, it seems the Docker image in my yml file is not loaded correctly. When using the yml from the article:
image: ubuntu:18.04

before_script:
  - apt-get update -qy
  - apt-get install -y lftp

build:
  script:
    # Sync to FTP
    - lftp -e "open ftp.mywebhost.com; user $FTP_USERNAME $FTP_PASSWORD; mirror -X .* -X .*/ --reverse --verbose --delete local-folder/ destination-folder/; bye"
It tells me apt-get: command not found. I've tried with apk-get as well, but no difference. I've tried to find a different Docker image that has lftp installed ahead of time, but then I just get lftp: command not found:
image: minidocks/lftp:4

before_script:
  # - apt-get update -qy
  # - apt-get install -y lftp

build:
  script:
    - lftp -e "open ftp.mywebhost.com; user $FTP_USERNAME $FTP_PASSWORD; mirror -X .* -X .*/ --reverse --verbose --delete local-folder/ destination-folder/; bye"
    - echo 'test this'
If I comment out the lftp/apt-get bits, however, I do get to the echo command (and it works).
I can't seem to find any reason for this when searching online. Apologies if this is a duplicate question or if I've just been looking in the wrong places.
From your question, it seems you are executing your tasks on a gitlab-runner that uses the shell executor.
The shell executor does not handle the image keyword, as shown in the runner compatibility matrix.
Moreover, since you want your jobs to run in Docker containers, you need the docker executor anyway.
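For reference, registering an additional runner with the docker executor could look roughly like this (the URL, token, and default image are placeholders, not values from your setup):

```shell
gitlab-runner register \
  --non-interactive \
  --url https://gitlab.example.com/ \
  --registration-token YOUR_REGISTRATION_TOKEN \
  --executor docker \
  --docker-image ubuntu:18.04 \
  --description "docker-executor runner"
```

With the docker executor, the image keyword in .gitlab-ci.yml is honored and each job runs in a fresh container.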
I'm setting up GitLab CI/CD to automate deployment to a Heroku app with every push.
Currently my .gitlab-ci.yml file looks like this:
production:
  type: deploy
  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    - dpl --provider=heroku --app=koober-production --api-key=$HEROKU_PRODUCTION_API_KEY
  only:
    - master
This works fine: the deployment is successful and the application is working.
But I need to run a few commands after a successful deployment to migrate the database.
At present, I have to do this manually by running a command from the terminal:
heroku run python manage.py migrate -a myapp
How can I automate this to run this command after deployment?
First, types are deprecated; you should use stages.
Back to the original question, I think you can use a new stage/type for this purpose.
Declaring something like:
stages:
  - build
  - test
  - deploy
  - post_deploy

post_production:
  stage: post_deploy
  script:
    - heroku run python manage.py migrate -a myapp
  only:
    - master
This should then execute only if the deployment succeeds.
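One caveat (an assumption on my part, not from the original answer): the job that runs heroku run needs the Heroku CLI available and an API key in its environment. A sketch of what that could look like:

```yaml
post_production:
  stage: post_deploy
  variables:
    # The Heroku CLI reads HEROKU_API_KEY for non-interactive authentication.
    HEROKU_API_KEY: $HEROKU_PRODUCTION_API_KEY
  script:
    # Official Heroku CLI install script.
    - curl https://cli-assets.heroku.com/install.sh | sh
    - heroku run python manage.py migrate -a myapp
  only:
    - master
```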
Solved it by using the --run flag of dpl to run commands after the deployment:
stages:
  - deploy

production:
  stage: deploy
  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    - dpl --provider=heroku --app=koober-production --api-key=$HEROKU_PRODUCTION_API_KEY --run='python manage.py migrate && python manage.py create_initial_users'
  only:
    - master
I have the following configuration as my .gitlab-ci.yml, but I found that after the build stage passes successfully (creating a virtualenv called venv), the test stage gets a brand-new environment (there's no venv directory at all). So I wonder whether I should put the setup script in before_script, so that it runs in each phase (build/test/deploy). Is that the right way to do it?
before_script:
  - uname -r

types:
  - build
  - test
  - deploy

job_install:
  type: build
  script:
    - apt-get update
    - apt-get install -y libncurses5-dev
    - apt-get install -y libxml2-dev libxslt1-dev
    - apt-get install -y python-dev libffi-dev libssl-dev
    - apt-get install -y python-virtualenv
    - apt-get install -y python-pip
    - virtualenv --no-site-packages venv
    - source venv/bin/activate
    - pip install -q -r requirements.txt
    - ls -al
  only:
    - master

job_test:
  type: test
  script:
    - ls -al
    - source venv/bin/activate
    - cp crawler/settings.sample.py crawler/settings.py
    - cd crawler
    - py.test -s -v
  only:
    - master
GitLab CI jobs are supposed to be independent, because they could run on different runners. This is not an issue. There are two ways to pass files between stages:
The right way: using artifacts.
The wrong way: using cache, with a cache-key "hack". This still needs the same runner.
So yes, the way intended by GitLab is to have everything your job depends on in before_script.
Artifacts example:

artifacts:
  when: on_success
  expire_in: 1 mos
  paths:
    - some_project_files/

Cache example:

cache:
  key: "$CI_BUILD_REF_NAME"
  untracked: true
  paths:
    - node_modules/
    - src/bower_components/
For a correct running environment, I suggest using Docker with an image that contains your apt-get dependencies, and using artifacts for passing job results between jobs. Note that artifacts are also uploaded to the GitLab web interface, where they can be downloaded. So if they are quite heavy, use a small expire_in time so that they are removed after all jobs are done.
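Putting this together for the virtualenv question above, wiring the two jobs with artifacts could look like the sketch below (the paths and expiry time are assumptions). Note that a virtualenv contains absolute paths, so this only works if both jobs run in the same environment and location; a prebuilt Docker image is the cleaner option:

```yaml
stages:
  - build
  - test

job_install:
  stage: build
  script:
    - virtualenv --no-site-packages venv
    - source venv/bin/activate
    - pip install -q -r requirements.txt
  artifacts:
    expire_in: 1 hour
    paths:
      - venv/

job_test:
  stage: test
  script:
    # venv/ is restored here from the build job's artifacts.
    - source venv/bin/activate
    - py.test -s -v
```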
I have just installed gitlab-ci-multi-runner by following the documentation https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/blob/master/docs/install/linux-repository.md
I use the public server ci.gitlab.com and the registration of the runner seems OK (the runner appears with a green light).
With debug output activated, I can see that the runner regularly polls the CI server.
But when a new commit is pushed, no build is run.
Everything is green: https://ci.gitlab.com/projects/4656 but no test is done...
My .gitlab-ci.yml is pretty simple:
before_script:
  - apt install python3-pip
  - pip3 install -q -r requirements.txt

master:
  script: "make test"
  only:
    - master
  script:
    - python setup.py test
By the way, I can't find any error message, and I don't know where to search.
I am pretty new to CI, and there is perhaps an obvious point I am missing.
Give this a try. This assumes your PyUnit tests are in a file called runtests.py in the working directory.
before_script:
  - apt install python3-pip
  - pip3 install -q -r requirements.txt

master:
  script: "python runtests.py"
  only:
    - master
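For completeness, a minimal runtests.py could look like this (a sketch; the test case here is a placeholder, not part of the original project):

```python
# Hypothetical runtests.py: a minimal PyUnit entry point for the CI job.
import unittest


class SmokeTest(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)


if __name__ == "__main__":
    # exit=False is used here so the example can be run interactively;
    # in CI you would use plain unittest.main() so a failing test makes
    # the script (and therefore the job) exit non-zero.
    unittest.main(exit=False)
```

Because the CI job's success is determined by the script's exit code, any failing assertion in these tests will fail the pipeline.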