Here is my .gitlab-ci.yml file:
image: python:3.6

before_script:
  - python --version
  - pip install -r requirements.txt

stages:
  - test

test:
  stage: test
  script:
    - chmod +x ./scripts/lint.sh
    - ./scripts/lint.sh
    - chmod +x ./scripts/tests.sh
    - ./scripts/tests.sh
Note that on my local machine the job runs without any problem and uses Python 3.6.13.
Running the test job on GitLab, however, I get an error that makes no sense to me.
Below are the configuration of the runner (which can run untagged jobs) and the error message.
In the screenshot you've shown, the job is run using the shell executor, so it uses whatever Python version is installed on the machine where you installed gitlab-runner.
It looks like you want the docker executor so that image: python:3.6 actually takes effect, so I would re-register the runner with the docker executor.
Alternatively, you can update the machine the shell executor runs on so that it has Python 3 installed.
Another issue could be that you have not tagged your runners and are picking up the wrong gitlab-runner. Make sure you've tagged your shell/docker runners, e.g. with shell-runner or docker-runner, and then add to the test job:
tags:
  - docker-runner
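For context, here is a sketch of how the test job from above could look with such a tag; docker-runner is an assumed tag name, use whatever tag you gave your docker-executor runner:

test:
  stage: test
  # only runners tagged "docker-runner" (assumed name) pick this job up,
  # so the image keyword at the top of the file is honoured by the docker executor
  tags:
    - docker-runner
  script:
    - chmod +x ./scripts/lint.sh
    - ./scripts/lint.sh
    - chmod +x ./scripts/tests.sh
    - ./scripts/tests.sh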
Related
I am trying to add a .gitlab-ci.yml file to my gitlab project; the file looks like:
image: continuumio/miniconda3:latest

before_script:
  - conda env create -f environment.yml
  - conda activate py3p10
  - export MY_PROJECT_ROOT=$PWD
  - export PYTHONPATH+=:$PWD

tests:
  stage: test
  script:
    - pytest tests -W ignore::DeprecationWarning
Now, environment.yml contains about 30 packages, and whenever I push to a branch the job seems to download and install all of them from scratch. This makes the job take about 10 minutes, which seems pretty wasteful. Is there a way to tell GitLab to cache that conda environment so that it gets reused?
From
https://docs.gitlab.com/ee/ci/caching/#cache-python-dependencies
it seems that we can cache, but only for virtualenv, not conda. And in
"Caching virtual environment for gitlab-ci"
the top answer discourages caching with conda.
Cheers.
I am expecting to be able to cache the environment so that the full job takes around 20 seconds.
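One hedged sketch of what this could look like, assuming the environment is created inside the project directory so GitLab's cache can pick it up; the .conda_env path and the cache key on environment.yml are my own choices, not an official conda recipe:

image: continuumio/miniconda3:latest

# cache the environment directory, keyed on environment.yml so the cache
# is rebuilt whenever the dependency list changes
cache:
  key:
    files:
      - environment.yml
  paths:
    - .conda_env/

before_script:
  # only create the environment if the cache did not restore it
  - test -d .conda_env || conda env create -p ./.conda_env -f environment.yml
  # legacy-style activation; depending on the conda version you may need
  # to source conda.sh and use "conda activate" instead
  - source activate ./.conda_env
  - export MY_PROJECT_ROOT=$PWD
  - export PYTHONPATH+=:$PWD

tests:
  stage: test
  script:
    - pytest tests -W ignore::DeprecationWarning

Whether this actually pays off is exactly the concern raised in the linked answer: conda environments do not always survive being cached and restored cleanly, so test it on your own project.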
I have a project that requires npm and gradle for build, and docker for building and pushing the image.
At first I thought I should create my own Ubuntu image with Gradle and npm set up, but I found out that is not what Docker images are for.
So I hoped to run the official Gradle and Node images as services so that my script could call those commands, but that is not working for some reason.
My .gitlab-ci.yml:
variables:
  IMAGE_NAME: my.registry.production/project
  IMAGE_TAG: $CI_COMMIT_BRANCH
  GIT_SUBMODULE_STRATEGY: recursive

stages:
  - build
  - deploy

build_project:
  stage: build
  image: ubuntu:jammy
  services:
    - name: node:12.20
      alias: npm
    - name: gradle:6.3.0-jre8
      alias: gradle
  before_script:
    - git submodule init && git submodule update --remote --recursive
  script:
    - cd project-server && npm install && gradle clean build -Pprod -Pwar -x test -x integrationTest

deploy_image:
  stage: deploy
  image: docker:20.10.17
  services:
    - name: docker:20.10.17-dind
      alias: docker
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
    DOCKER_DRIVER: overlay2
  script:
    - docker login -u $REGISTRY_USER -p $REGISTRY_PASSWORD my.registry.production
    - docker build -t $IMAGE_NAME:$IMAGE_TAG .
    - docker push $IMAGE_NAME:$IMAGE_TAG
If anyone has any info on how to solve this I would greatly appreciate it, since I'm a novice at DevOps.
Edit 1:
My Dockerfile for the custom image with Gradle and Node installed:
FROM ubuntu:jammy
LABEL key=DevOps
SHELL ["/bin/bash", "--login", "-i", "-c"]
RUN apt update && apt upgrade -y && apt install curl -y
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash
RUN source /root/.bashrc && nvm install 12.14.1
RUN nvm install 12.20.0
RUN apt install zip unzip
RUN curl -s "https://get.sdkman.io" | bash
RUN source "$HOME/.sdkman/bin/sdkman-init.sh"
RUN sdk install java 8.0.302-open
RUN sdk install gradle 3.4.1
SHELL ["/bin/bash", "--login", "-c"]
CMD [ "bin/bash" ]
After I run it, it says that npm is not found in $PATH; I tried java and gradle as well, but they weren't found in the path either.
I don't know why, since I installed them, as you can tell from the Dockerfile.
As far as I know, a Docker image corresponds to one build. So if you have multiple services, you need to build each one into its own Docker image, and then you can tie all the images together in a docker-compose.yml file.
I think you can do the following:
1. Build the npm project into a Docker image.
2. Build the Gradle project into a Docker image.
3. Write a docker-compose.yml file that references both images.
Once you have done that, the pipeline calls docker-compose with that file; a rough sketch is below.
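A minimal docker-compose.yml along those lines might look like this; the image names are hypothetical placeholders for the two images you would have built and pushed:

# hypothetical compose file tying the two separately built images together
services:
  frontend:
    # npm-built image (assumed name)
    image: my.registry.production/project-frontend:latest
    ports:
      - "8080:80"
  backend:
    # Gradle-built image (assumed name)
    image: my.registry.production/project-backend:latest
    ports:
      - "8081:8080"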
I hope this will be helpful.
Consider a few suggestions based on some fundamental concepts about deployment in a CI/CD pipeline:
Remove the services keyword. Check GitLab's official documentation on what the services keyword in the .gitlab-ci.yml file is (and is not) for: it is used to provide network-accessible services to your job at runtime, such as a database: https://docs.gitlab.com/ee/ci/services/index.html
Your project uses npm as a dependency management system and Gradle as a build tool. Both of these tools are perfectly suited to run directly inside the container that executes your pipeline job, because you need them to assemble the build artifact on the same host where the Runner has checked out your code.
Think about the overall size of the base image in your build_project job and consider how the time to download that image onto the Runner will impact your job and overall pipeline duration. If performance can be improved by baking build dependencies into a custom Dockerfile, do that. If the resulting image would be too large, instead use shell commands inside the script block to install the tools at job runtime. There are pros and cons to both.
Break shell scripts up into one command per line for easier troubleshooting of failures. In your job logs you will then be able to see exactly which command returned a non-zero exit code:
...
script:
  - cd project-server
  - npm install
  - gradle clean build -Pprod -Pwar -x test -x integrationTest
...
It's recommended to use the Gradle wrapper (gradlew) rather than the gradle executable directly most of the time. Configure the wrapper within your project and check its configuration files into your version control system; this removes Gradle itself as an external build dependency: https://docs.gradle.org/current/userguide/gradle_wrapper.html
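As a sketch, assuming you have generated and committed gradlew and its gradle/wrapper files, the build step would then call the wrapper instead of a preinstalled Gradle:

script:
  - cd project-server
  - npm install
  # the wrapper downloads the exact Gradle version pinned in
  # gradle/wrapper/gradle-wrapper.properties, so the job no longer
  # depends on Gradle being present in the image
  - ./gradlew clean build -Pprod -Pwar -x test -x integrationTest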
I am trying to run a CI pipeline in GitLab:
image: node:latest

stages:
  - deploy

production:
  stage: deploy
  before_script:
    - npm config set prefix /usr/local
    - npm install -g serverless
  script:
    - serverless deploy
I am using the Docker image like they suggest, but it cannot find npm (or node).
How can I get this working?
Well, this is a bit weird, as your CI config is correct.
If you are just using gitlab.com and their shared runners, then this .gitlab-ci.yml will work.
One possible reason could be that you have runners registered with the ssh/shell executor in the project repo. If so, the image keyword you specified will simply be ignored.
An error like command not found can then occur because the server where you added the runner doesn't have Node.js installed; the npm config ... command in before_script will fail with exit code 127 and the pipeline will stop right there.
If you have multiple runners, tag them and tag your jobs in the CI file as well.
And if you are trying to run the job on your own server, you need to install Docker first.
BTW, with the Docker image node:latest you don't need npm config set prefix /usr/local, as the prefix is already /usr/local.
I'm experiencing a random problem.
before_script:
  - cd sources
  - php -v

test:
  script:
    - phpunit -c mypath
70% of the time I'm getting this error:
$ php -v
bash: line 24: php: command not found
ERROR: Build failed with: exit code 1
But the weird thing is that if I keep re-running the same build, it'll eventually pass.
Any ideas?
Actually several runners are available, but I can only use one of them. All I had to do was add tags to my job to select that runner:
before_script:
  - cd sources
  - php -v

test:
  script:
    - phpunit -c mypath
  tags:
    - php
PHP is not installed in the runner environment where the tests are executed.
You have to make sure that the runner has an environment with PHP installed.
You did not specify what kind of runner you are using in your question, so I assume you have a runner that runs Docker containers (the standard setup).
To accomplish your goal (avoiding bash: line 24: php: command not found) you can go two ways:
Let your project run in a Docker image which has PHP installed:
image: php

before_script:
  - cd sources
  - php -v

test:
  script:
    - phpunit -c mypath
OR
Use a rudimentary image and install PHP yourself:
image: debian

before_script:
  - cd sources
  - apt-get update && apt-get install -y php5*
  - php -v

test:
  script:
    - phpunit -c mypath
If you are not using Docker as the runner executor, then install PHP on the machine where the runner runs.
I am new to GitLab CI and Docker, and I am stuck getting a runner to run my PHPUnit builds. I am following the instructions here:
https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/blob/master/docs/install/linux-repository.md
However, the container from their instructions obviously doesn't contain the tools I need. So the question is: what configuration do I need when registering a multi-runner so that it supports PHPUnit and Composer, so I can test my Laravel builds?
Follow their instructions to register gitlab-ci-multi-runner with the docker executor. Then, in your .gitlab-ci.yml, specify an appropriate image that contains the bare minimum for your requirements. Everything else you can install from the command line in before_script. Here is a sample working config:
image: tetraweb/php

services:
  - mysql

variables:
  MYSQL_DATABASE: effiocms_db
  MYSQL_USER: effio_user
  MYSQL_PASSWORD: testpassword
  WITH_XDEBUG: "1"

before_script:
  # enable necessary php extensions
  - docker-php-ext-enable zip && docker-php-ext-enable mbstring && docker-php-ext-enable gd && docker-php-ext-enable pdo_mysql
  # composer update
  - composer self-update && composer --version
  - composer global require --no-interaction --quiet "fxp/composer-asset-plugin:~1.1.0"
  - export PATH="$HOME/.composer/vendor/bin:$PATH"
  - composer install --dev --prefer-dist --no-interaction --quiet
  # codeception install
  - composer global require --no-interaction --quiet "codeception/codeception=2.0.*" "codeception/specify=*" "codeception/verify=*"
  # setup application
  - |
    php ./init --env=Development --overwrite=All
    cd tests/codeception/backend && codecept build
    cd ../common && codecept build
    cd ../console && codecept build
    cd ../frontend && codecept build
    cd ../../../
  - cd tests/codeception/bin && php yii migrate --interactive=0 && cd ../../..

codeception:
  stage: test
  script:
    - |
      php -S localhost:8080 > /dev/null 2>&1 &
      cd tests/codeception/frontend
      codecept run
Obviously this config is for my application, which runs on Yii2, so you need to adjust it to your own requirements.
In line with what Arman P. said: when running your tests, make sure you have a Docker image which contains all the tools you will need for your build/test.
You have two options:
1. You can build your own image, with all the tools you need, and maintain it as your project evolves; or,
2. you can simply start from a basic image on Docker Hub and install all the tools before you run your jobs.
Both options have their pros and cons:
Option (1) gives you complete control, but you would need to push the image to a registry which the CI can pull from (GitLab gives you private registry support). The only slight con here is that you would have to set up the private registry (e.g. GitLab's) first. Not too difficult, and you only need to do it once.
But then it is up to you to maintain the images, etc.
Option (2) allows you to run all the tools without having to maintain a private registry or the Docker images. You simply run the install scripts before your jobs, as Arman P. mentioned. The disadvantage is that your builds/tests take longer, as you now have to wait for the install to happen before every run.
A simple example using option (2):
We need PHPUnit and Composer.
So, use a container from the public Docker Hub which has PHP: pick one with the version of PHP you want to test against (e.g. 5.6 or 7). Let's assume 5.6.
In that container we need to install Composer and PHPUnit before we can run our tests. This is what your CI file might look like:
.gitlab-ci.yml
image: php:5.6.25-cli

stages:
  - build
  - test

.install_composer_template: &install_composer
  - apt-get update && apt-get install -y git
  - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

before_script:
  - *install_composer

build app:
  stage: build
  script:
    - composer install
    - composer dump-autoload -o

test app:
  stage: test
  before_script:
    - *install_composer
    - composer global require -q phpunit/phpunit
  script:
    - phpunit --colors=always
Quick summary of what this actually does
Every job -- here we only have "build app" and "test app" -- will run through the official PHP 5.6 image.
We have two stages defined, build and test -- note: these are defined by default, but here we are being explicit for clarity.
The first job that runs will be "build app", as it occurs in the first stage, build. Before the script in this job runs, the global before_script runs, which installs Git and Composer (Composer requires Git). Then the script runs and installs all our Composer dependencies. Done.
The next stage, test, then executes, so the jobs attached to it run (in parallel if we had more than one); for us that is just "test app". Here we use YAML features to reuse the install-composer instructions: the local before_script overrides the global before_script, installing Composer (via the YAML anchor) and PHPUnit. Then the actual script runs our unit tests. The unit tests assume you have a phpunit config in the project root; you can customise this command as you would in your own terminal.
Note here that, because of the stage setup, output from the build stage can be handed on to the test stage, but it is not automatic: GitLab only passes files such as the vendor/ directory between jobs if you declare them as artifacts (or put them in a cache), as shown below.
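A minimal sketch of such an artifacts declaration on the build job (the vendor/ path assumes a standard Composer layout):

build app:
  stage: build
  script:
    - composer install
    - composer dump-autoload -o
  # hand the installed dependencies to jobs in later stages
  artifacts:
    paths:
      - vendor/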
The main reason for using the install_composer template is to improve job execution time. Most jobs are going to require Composer, so we have a global template and a before_script that runs for every job. If a job needs something more specific, with tools that are only required for that job (e.g. PHPUnit), then override the before_script locally in that job and install the required tools there. This way we don't have to install a whole load of tools which we might not need in every job!