I am trying to set up a CI/CD pipeline for a Python project using a Windows runner on GitLab.
However, when executing pytest, pytest collects 10 items and opens the first test file. After that the job keeps running, but nothing more happens, and the pipeline eventually fails with a timeout. All tests work correctly locally and take around 30 seconds in total.
The root directory for pytest is correct.
This is my GitLab YAML file:
image: python:3.11.0-buster

cache:
  paths:
    - .cache/pip
    - venv/

tests:
  tags:
    - windows
  before_script:
    - mkdir -p ~/.ssh
    - echo "$DEPLOY_KEY" > ~/.ssh/id_rsa
    - echo "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
    - Import-Module "$env:ChocolateyInstall\helpers\chocolateyProfile.psm1"
    - choco install python --version=3.11 -y -f
    # - dir
    - C:\\Python311\\python.exe -m pip install --upgrade pip
    - C:\\Python311\\python.exe -m pip install --upgrade setuptools
    - C:\\Python311\\python.exe -m pip install -r requirements.txt
    # - where python
    - C:\\Python311\\python.exe -m pip install virtualenv
  script:
    - C:\\Python311\\python.exe -m pytest -p no:faulthandler
I've also tried - C:\Python311\python.exe -m pytest, which had the same result.
Related
I am trying to write a GitLab CI file as follows:
image: ubuntu:latest

variables:
  GIT_SUBMODULE_STRATEGY: recursive
  AWS_DEFAULT_REGION: eu-central-1
  S3_BUCKET: $BUCKET_TRIAL

stages:
  - deploy

.before_script_template: &before_script_definition
  stage: deploy
  before_script:
    - apt-get -y update
    - apt-get -y install python3-pip python3.7 zip
    - python3.7 -m pip install --upgrade pip
    - python3.7 -V
    - pip3.7 install virtualenv

.after_script_template: &after_script_definition
  after_script:
    # Upload package to S3
    # Install AWS CLI
    - pip install awscli --upgrade # --user
    - export PATH=$PATH:~/.local/bin # Add to PATH
    # Configure AWS connection
    - aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
    - aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
    - aws configure set default.region $AWS_DEFAULT_REGION
    - aws sts get-caller-identity --output text --query 'Account' # current account
    - aws s3 cp ~/forlambda/archive.zip $BUCKET_TRIAL/${LAMBDA_NAME}-deployment.zip

monatliche_strom:
  variables:
    LAMBDA_NAME: monthly_strom
  before_script: *before_script_definition
  script:
    - mv some.py ~
    - mv requirements.txt ~
    # Move submodules
    - mv submodule1/submodule1 ~
    - mv submodule1/submodule2/submodule2 ~
    # Setup virtual environment
    - mkdir ~/forlambda
    - cd ~/forlambda
    - virtualenv -p python3 venv
    - source venv/bin/activate
    - pip3.7 install -r ~/requirements.txt -t ~/forlambda/venv/lib/python3.7/site-packages/
    # Package environment and dependencies
    - cd ~/forlambda/venv/lib/python3.7/site-packages/
    - zip -r9 ~/forlambda/archive.zip .
    - cd ~
    - zip -g ~/forlambda/archive.zip some.py
    - zip -r ~/forlambda/archive.zip submodule1/*
    - zip -r ~/forlambda/archive.zip submodule2/*
  after_script: *after_script_definition
When I run it through the GitLab CI lint, it gives me the following errors:

jobs:monatliche_strom:before_script config should be an array containing strings and arrays of strings
jobs:monatliche_strom:after_script config should be an array containing strings and arrays of strings

I am fairly new to GitLab CI, so can someone please tell me what I am doing wrong?
Try this:
image: ubuntu:latest

variables:
  GIT_SUBMODULE_STRATEGY: recursive
  AWS_DEFAULT_REGION: eu-central-1
  S3_BUCKET: $BUCKET_TRIAL

stages:
  - deploy

.before_script_template: &before_script_definition
  stage: deploy
  before_script:
    - apt-get -y update
    - apt-get -y install python3-pip python3.7 zip
    - python3.7 -m pip install --upgrade pip
    - python3.7 -V
    - pip3.7 install virtualenv

.after_script_template: &after_script_definition
  after_script:
    # Upload package to S3
    # Install AWS CLI
    - pip install awscli --upgrade # --user
    - export PATH=$PATH:~/.local/bin # Add to PATH
    # Configure AWS connection
    - aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
    - aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
    - aws configure set default.region $AWS_DEFAULT_REGION
    - aws sts get-caller-identity --output text --query 'Account' # current account
    - aws s3 cp ~/forlambda/archive.zip $BUCKET_TRIAL/${LAMBDA_NAME}-deployment.zip

monatliche_strom:
  variables:
    LAMBDA_NAME: monthly_strom
  <<: *before_script_definition
  script:
    - mv some.py ~
    - mv requirements.txt ~
    # Move submodules
    - mv submodule1/submodule1 ~
    - mv submodule1/submodule2/submodule2 ~
    # Setup virtual environment
    - mkdir ~/forlambda
    - cd ~/forlambda
    - virtualenv -p python3 venv
    - source venv/bin/activate
    - pip3.7 install -r ~/requirements.txt -t ~/forlambda/venv/lib/python3.7/site-packages/
    # Package environment and dependencies
    - cd ~/forlambda/venv/lib/python3.7/site-packages/
    - zip -r9 ~/forlambda/archive.zip .
    - cd ~
    - zip -g ~/forlambda/archive.zip some.py
    - zip -r ~/forlambda/archive.zip submodule1/*
    - zip -r ~/forlambda/archive.zip submodule2/*
  <<: *after_script_definition
Since you already described before_script and after_script inside the anchors, you have to use << to merge the given hash into the current one.
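For comparison, here is a minimal sketch of the two notations (the job and anchor names here are made up for illustration):

.defaults: &defaults
  before_script:
    - echo "preparing"

job_wrong:
  before_script: *defaults   # assigns the whole anchored mapping as the value, so before_script becomes a hash, not an array
  script:
    - echo "building"

job_right:
  <<: *defaults              # merges the anchored mapping's keys into the job, giving it a proper before_script array
  script:
    - echo "building"

With *defaults alone you get exactly the "config should be an array containing strings" lint error from above; with <<: *defaults the lint passes.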
I have a set of scripts in .travis.yml which runs perfectly fine at the moment
...
install:
  - scripts/travis/install_deps.sh
  - virtualenv -p /opt/pyenv/versions/3.6/bin/python3.6 venv
  - source venv/bin/activate
  - pip install -r requirements.txt
before_script:
  - scripts/test.sh
script:
  - scripts/travis/build.sh
after_success:
  - deactivate
  - virtualenv -p /opt/pyenv/versions/2.7/bin/python2.7 venv
  - source venv/bin/activate
  - pip install -r requirements.txt
...
However, I would like to clean it up a bit so there's less repetition, such that .travis.yml looks like:
...
install:
  - scripts/travis/install_deps.sh
  - export PYTHON_VERSION=3.6
  - scripts/travis/install_python_deps.sh
before_script:
  - scripts/test.sh
script:
  - scripts/travis/build.sh
after_success:
  - export PYTHON_VERSION=2.7
  - scripts/travis/install_python_deps.sh
...
where install_python_deps.sh looks like:
#!/usr/bin/env bash
set -e
if [ ! -z "$VIRTUAL_ENV" ]; then deactivate; fi
virtualenv -p "/opt/pyenv/versions/${PYTHON_VERSION}/bin/python${PYTHON_VERSION}" venv
source venv/bin/activate
pip install -r requirements.txt
The problem arises when this is run on Travis. The build breaks in test.sh, which runs a Python script that relies on a module declared in requirements.txt: the module is not found. Any pointers as to why this is occurring would be greatly appreciated.
The source venv/bin/activate inside scripts/travis/install_python_deps.sh only has effect until the script install_python_deps.sh exits (there is a short illustration of this after the scripts below).
If you want to use the installed modules outside the install_python_deps.sh script,
you need to run source venv/bin/activate (again) outside the script too, for example:
...
install:
- scripts/travis/install_deps.sh
- scripts/travis/install_python_deps.sh 3.6
- source venv/bin/activate
before_script:
- scripts/test.sh
script:
- scripts/travis/build.sh
after_success:
- scripts/travis/install_python_deps.sh 2.7
- source venv/bin/activate
...
Note that to make it shorter, I replaced the PYTHON_VERSION environment variable with a command line parameter. You could adjust the scripts/travis/install_python_deps.sh script accordingly:
#!/usr/bin/env bash
set -euo pipefail
PYTHON_VERSION=$1
# Use ${VIRTUAL_ENV:-} so that `set -u` does not abort when no virtualenv is active
if [ "${VIRTUAL_ENV:-}" ]; then deactivate; fi
virtualenv -p "/opt/pyenv/versions/${PYTHON_VERSION}/bin/python${PYTHON_VERSION}" venv
source venv/bin/activate
pip install -r requirements.txt
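If it helps to see why the original version failed: a sourced script runs in the build's own shell, while a plainly executed script runs in a child process whose environment changes disappear when it exits. A tiny illustration (set_env.sh is a hypothetical script containing nothing but export FOO=bar):

install:
  - ./set_env.sh          # child process: FOO is set there, then lost as soon as the script exits
  - source ./set_env.sh   # current shell: FOO stays set for all the following commands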
GitLab Runner doesn't give any output or error. It just stays at the loading screen forever when running this command:
python manage.py test
image: python:3.6

services:
  - postgres:10

before_script:
  - apt-get update
  - pip install pipenv
  - pipenv install --system --deploy --ignore-pipfile

stages:
  - test

test:
  script:
    - export DATABASE_URL=postgres://postgres:#postgres:5432/test-master-tool
    - python manage.py migrate
    - python manage.py test
    - coverage report
I have the following configuration as .gitlab-ci.yml, but I found out that after successfully passing the build stage (which creates a virtualenv called venv), the test stage starts with a brand-new environment (there's no venv directory at all). So I wonder whether I should put the setup script in before_script, so that it runs in each stage (build/test/deploy). Is that the right way to do it?
before_script:
  - uname -r

types:
  - build
  - test
  - deploy

job_install:
  type: build
  script:
    - apt-get update
    - apt-get install -y libncurses5-dev
    - apt-get install -y libxml2-dev libxslt1-dev
    - apt-get install -y python-dev libffi-dev libssl-dev
    - apt-get install -y python-virtualenv
    - apt-get install -y python-pip
    - virtualenv --no-site-packages venv
    - source venv/bin/activate
    - pip install -q -r requirements.txt
    - ls -al
  only:
    - master

job_test:
  type: test
  script:
    - ls -al
    - source venv/bin/activate
    - cp crawler/settings.sample.py crawler/settings.py
    - cd crawler
    - py.test -s -v
  only:
    - master
GitLab CI jobs are supposed to be independent, because they can run on different runners, so this is not a bug. There are two ways to pass files between stages:
The right way: using artifacts.
The wrong way: using cache with a cache-key "hack" (and you still need the same runner).
So yes, the way GitLab intends it, every job should set up everything it depends on in its before_script.
Artifacts example:
artifacts:
  when: on_success
  expire_in: 1 mos
  paths:
    - some_project_files/
Cache example:
cache:
  key: "$CI_BUILD_REF_NAME"
  untracked: true
  paths:
    - node_modules/
    - src/bower_components/
For a correct running environment, I suggest using Docker with an image that already contains your apt-get dependencies, and using artifacts to pass job results between jobs. Note that artifacts are also uploaded to the GitLab web interface, where you can download them. So if they are quite heavy, use a small expire_in time so they are removed after all jobs are done.
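As a sketch of how the artifacts approach could look for the build/test jobs above (assuming both jobs run on the same image, since a virtualenv is only usable in a matching environment):

job_install:
  type: build
  script:
    - virtualenv --no-site-packages venv
    - source venv/bin/activate
    - pip install -q -r requirements.txt
  artifacts:
    expire_in: 1 day
    paths:
      - venv/
  only:
    - master

job_test:
  type: test
  dependencies:
    - job_install          # fetch job_install's artifacts (the venv/ directory) into this job
  script:
    - source venv/bin/activate
    - py.test -s -v
  only:
    - master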
I have just installed gitlab-ci-multi-runner by following the documentation https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/blob/master/docs/install/linux-repository.md
I use the public server ci.gitlab.com and the registration of the runner seems OK (the runner appears with a green light).
With debug activated I can see that the runner regularly polls the CI server.
But when a new commit is pushed, no build is done.
Everything is green: https://ci.gitlab.com/projects/4656 but no test is run...
My .gitlab-ci.yml is pretty simple:
before_script:
  - apt install python3-pip
  - pip3 install -q -r requirements.txt

master:
  script: "make test"
  only:
    - master

script:
  - python setup.py test
By the way, I can't find any error message and I don't know where to search.
I am pretty new to CI, so there is perhaps an obvious point I am missing.
Give this a try. This assumes your PyUnit tests are in a file called runtests.py in the working directory.
before_script:
  - apt install python3-pip
  - pip3 install -q -r requirements.txt

master:
  script: "python runtests.py"
  only:
    - master