Command does not end - GitLab

The GitLab runner doesn't give any output or error; it just stays on the loading screen forever when running this command:
python manage.py test
image: python:3.6

services:
  - postgres:10

before_script:
  - apt-get update
  - pip install pipenv
  - pipenv install --system --deploy --ignore-pipfile

stages:
  - test

test:
  script:
    - export DATABASE_URL=postgres://postgres:#postgres:5432/test-master-tool
    - python manage.py migrate
    - python manage.py test
    - coverage report
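One thing worth checking here: the DATABASE_URL has no @ between the credentials and the host, so the # is parsed as the start of a URL fragment and the host and port never reach the database client, which can leave the job waiting on a connection indefinitely (presumably something like postgres://postgres:<password>@postgres:5432/test-master-tool was intended). The standard library shows how the malformed URL decomposes:

```python
from urllib.parse import urlsplit

# parse the DATABASE_URL exactly as written in the job above
parts = urlsplit("postgres://postgres:#postgres:5432/test-master-tool")

print(parts.hostname)  # 'postgres' -- this was meant to be the username
print(parts.port)      # None -- the ':5432' never reaches the netloc
print(parts.fragment)  # 'postgres:5432/test-master-tool' -- swallowed by the '#'
```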


GitLab CICD Pytest not starting tests on Windows Runner

I am trying to set up a CI/CD pipeline for a Python project using a Windows runner on GitLab.
However, when executing pytest, pytest collects 10 items and opens the first test file. After that the pipeline keeps running, but nothing happens, and it stops after a time-out. All tests work correctly locally and take around 30 seconds in total.
The root directory for pytest is correct.
This is my GitLab YAML file:
image: python:3.11.0-buster

cache:
  paths:
    - .cache/pip
    - venv/

tests:
  tags:
    - windows
  before_script:
    - mkdir -p ~/.ssh
    - echo "$DEPLOY_KEY" > ~/.ssh/id_rsa
    - echo "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
    - Import-Module "$env:ChocolateyInstall\helpers\chocolateyProfile.psm1"
    - choco install python --version=3.11 -y -f
    # - dir
    - C:\\Python311\\python.exe -m pip install --upgrade pip
    - C:\\Python311\\python.exe -m pip install --upgrade setuptools
    - C:\\Python311\\python.exe -m pip install -r requirements.txt
    # - where python
    - C:\\Python311\\python.exe -m pip install virtualenv
  script:
    - C:\\Python311\\python.exe -m pytest -p no:faulthandler
I've also tried C:\Python311\python.exe -m pytest, which had the same result.
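When a pipeline hangs with no output like this, one low-tech way to narrow it down is to run each test file separately under a hard timeout: whichever file never returns is the one blocking. A minimal sketch (the run_with_timeout helper and the test file path are hypothetical, not part of pytest):

```python
import subprocess
import sys

def run_with_timeout(cmd, timeout_s):
    """Run cmd, returning (returncode, timed_out)."""
    try:
        proc = subprocess.run(cmd, capture_output=True, timeout=timeout_s)
        return proc.returncode, False
    except subprocess.TimeoutExpired:
        return None, True

if __name__ == "__main__":
    # e.g. run one suspect test file with a 60-second ceiling
    rc, timed_out = run_with_timeout(
        [sys.executable, "-m", "pytest", "tests/test_suspect.py"], 60
    )
    print("timed out" if timed_out else "exit code %s" % rc)
```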

run command after deploy using Gitlab CI/CD

I'm setting up GitLab CI/CD to automate deployment to a Heroku app with every push.
Currently my .gitlab-ci.yml file looks like this:
production:
  type: deploy
  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    - dpl --provider=heroku --app=koober-production --api-key=$HEROKU_PRODUCTION_API_KEY
  only:
    - master
This works fine; the deployment is successful and the application is working.
But I need to run a few commands after a successful deployment to migrate the database.
At present I have to do this manually by running this command from a terminal:
heroku run python manage.py migrate -a myapp
How can I automate running this command after deployment?
First, types are deprecated; you should use stages instead.
Back to the original question: I think you can use a new stage for this purpose.
Declare something like:
stages:
  - build
  - test
  - deploy
  - post_deploy

post_production:
  stage: post_deploy
  script:
    - heroku run python manage.py migrate -a myapp
  only:
    - master
This should then execute only if the deployment succeeds.
Solved using the --run flag to run the command through dpl:
stages:
  - deploy

production:
  stage: deploy
  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    - dpl --provider=heroku --app=koober-production --api-key=$HEROKU_PRODUCTION_API_KEY --run='python manage.py migrate && python manage.py create_initial_users'
  only:
    - master
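The && in the --run string matters here: the shell runs the second command only if the first succeeded, so create_initial_users is skipped when the migration fails. A small sketch of that behaviour, driving /bin/sh through Python's subprocess:

```python
import subprocess

# the second command runs only when the first exits with status 0
ok = subprocess.run("true && echo ran", shell=True, capture_output=True, text=True)
skipped = subprocess.run("false && echo ran", shell=True, capture_output=True, text=True)

print(ok.stdout.strip())     # 'ran'
print(repr(skipped.stdout))  # '' -- echo never executed, and the exit code is non-zero
```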

Correct Dockerfile syntax for pyramid app for use with Python 3.5?

I want to run a Pyramid app in a Docker container, but I'm struggling with the correct syntax in the Dockerfile. Pyramid doesn't have an official Dockerfile, but I found this site that recommends using an Ubuntu base image.
https://runnable.com/docker/python/dockerize-your-pyramid-application
But this is for Python 2.7. Any ideas how I can change this to 3.5? This is what I tried:
Dockerfile
FROM ubuntu:16.04

RUN apt-get update -y && \
    apt-get install -y python3-pip python3-dev && \
    pip3 install --upgrade pip setuptools

# We copy this file first to leverage docker cache
COPY ./requirements.txt /app/requirements.txt

WORKDIR /app

RUN pip3 install -r requirements.txt

COPY . /app

ENTRYPOINT [ "python" ]
CMD [ "pserve development.ini" ]
and I run this from the command line:
docker build -t testapp .
but that generates a slew of errors ending with this
FileNotFoundError: [Errno 2] No such file or directory: '/usr/local/lib/python3.5/dist-packages/appdirs-1.4.3.dist-info/METADATA'
The command '/bin/sh -c pip3 install -r requirements.txt' returned a non-zero code: 2
And even if that did build, how would pserve execute under 3.5 instead of 2.7? I tried modifying the Dockerfile to create a virtual environment to force execution in 3.5, but still no luck. For what it's worth, this works just fine on my machine with a 3.5 virtual environment.
So, can anyone help me build a proper Dockerfile that runs this Pyramid application with Python 3.5? I'm not married to the Ubuntu image.
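One likely issue with the Dockerfile above is independent of the Python version: exec-form ENTRYPOINT and CMD are not run through a shell, so the container executes python with the single argument "pserve development.ini" rather than two separate words (CMD [ "pserve", "development.ini" ] would be needed). Python's shlex illustrates the difference between the unsplit string and shell-style word splitting:

```python
import shlex

# what the exec form above actually passes: one unsplit argument
argv_exec = ["python", "pserve development.ini"]

# what a shell would produce from the same string
argv_shell = ["python"] + shlex.split("pserve development.ini")

print(argv_exec)   # ['python', 'pserve development.ini']
print(argv_shell)  # ['python', 'pserve', 'development.ini']
```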
If that can help, here's my Dockerfile for a Pyramid app that we develop using Docker. It's not running in production using Docker though.
FROM python:3.5.2

ADD . /code
WORKDIR /code

ENV PYTHONUNBUFFERED 0

RUN echo deb http://apt.postgresql.org/pub/repos/apt/ jessie-pgdg main >> /etc/apt/sources.list.d/pgdg.list
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
RUN apt-get update
RUN apt-get install -y \
    gettext \
    postgresql-client-9.5

RUN pip install -r requirements.txt
RUN python setup.py develop
As you may notice, we use Postgres and gettext, but you can install whatever dependencies you need.
The line ENV PYTHONUNBUFFERED 0 was added, I think, because Python would otherwise buffer all output, so nothing would be printed to the console.
And we use Python 3.5.2 for now. We tried a version a bit more recent, but we ran into issues. Maybe that's fixed now.
Also, if that can help, here's an edited version of the docker-compose.yml file:
version: '2'

services:
  db:
    image: postgres:9.5
    ports:
      - "15432:5432"

  rabbitmq:
    image: "rabbitmq:3.6.6-management"
    ports:
      - '15672:15672'

  worker:
    image: image_from_dockerfile
    working_dir: /code
    command: command_for_worker development.ini
    env_file: .env
    volumes:
      - .:/code

  web:
    image: image_from_dockerfile
    working_dir: /code
    command: pserve development.ini --reload
    ports:
      - "6543:6543"
    env_file: .env
    depends_on:
      - db
      - rabbitmq
    volumes:
      - .:/code
We build the image with
docker build -t image_from_dockerfile .
instead of passing the Dockerfile path directly in the docker-compose.yml config, because we use the same image for the web app and the worker; otherwise we would have to rebuild twice every time.
And one last thing, if you run locally for development like we do, you have to run
docker-compose run web python setup.py develop
once in the console; otherwise you'll get an error as if the app were not installed when you docker-compose up. This happens because mounting the volume with the code in it hides the corresponding directory from the image, so the package files (like .egg) are "removed".
Update
Instead of running docker-compose run web python setup.py develop to generate the .egg locally, you can tell Docker to use the .egg directory from the image by including the directory in the volumes.
E.g.
volumes:
  - .:/code
  - /code/packagename.egg-info

Gitlab runner error "Build failed: exit code 1"

I'm trying to build a Jekyll blog using a GitLab runner (for GitLab Pages). I get the following error: ERROR: Build failed: exit code 1. Everything worked so far. Link to project: https://gitlab.com/dash.plus/dashBlog
Just add
- apt-get update && apt-get install -y nodejs
and of course
- bundle install
inside .gitlab-ci.yml:
image: ruby:2.3

test:
  stage: test
  script:
    - gem install jekyll
    - bundle install
    - apt-get update && apt-get install -y nodejs
    - bundle exec jekyll build -d test/
  artifacts:
    paths:
      - test
  except:
    - master

pages:
  stage: deploy
  script:
    - gem install jekyll
    - bundle install
    - apt-get update && apt-get install -y nodejs
    - bundle exec jekyll build -d public/
  artifacts:
    paths:
      - public
  only:
    - master

GitLab CI runner doesn't build

I have just installed gitlab-ci-multi-runner by following the documentation: https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/blob/master/docs/install/linux-repository.md
I use the public server ci.gitlab.com, and the registration of the runner seems OK (the runner appears with a green light).
With debug activated I can see that the runner regularly polls the CI server.
But when a new commit is pushed, no build is done.
Everything is green (https://ci.gitlab.com/projects/4656), but no test is run...
My .gitlab-ci.yml is pretty simple:
before_script:
  - apt install python3-pip
  - pip3 install -q -r requirements.txt

master:
  script: "make test"
  only:
    - master
  script:
    - python setup.py test
By the way, I can't find any error message and I don't know where to look.
I am pretty new to CI, and there is perhaps an obvious point I am missing.
Give this a try. This assumes your PyUnit tests are in a file called runtests.py in the working directory.
before_script:
  - apt install python3-pip
  - pip3 install -q -r requirements.txt

master:
  script: "python runtests.py"
  only:
    - master
