I am just trying to use pre-commit as a git hook on my Python project. What I want to do is run black only on the committed files. I am also running black through poetry.
At the moment my config looks like:
fail_fast: true
repos:
  - repo: local
    hooks:
      - id: system
        name: Black
        entry: poetry run black .
        pass_filenames: false
        language: system
This, of course, runs black on the whole project structure and this is not what I want.
Is it possible to just run black on the committed files?
If you are not constrained to using poetry, you can use the following, which will do what you want:
- repo: https://github.com/psf/black
  rev: 22.6.0
  hooks:
    - id: black
      language_version: python3.9 # Change this to your Python version
      args:
        - --target-version=py39 # Change this to your Python version
This will respect any and all settings you may have defined in pyproject.toml.
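For reference, black reads its settings from the [tool.black] table in pyproject.toml. A minimal sketch (the values here are illustrative, not taken from the question):
[tool.black]
line-length = 120
target-version = ["py39"]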
This will work:
fail_fast: true
repos:
  - repo: local
    hooks:
      - id: black
        name: Black
        entry: poetry run black
        language: system
        types: [file, python]
        stages: [commit]
Explanation:
Remove pass_filenames: false to use the default true.
Remove . from the poetry run black command. When pre-commit runs, it will append the file names to the end, i.e. poetry run black file1 file2 file3 (see the example after this list).
Specify the file types.
Optional: specify the stage.
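To sanity-check the hook locally, you can install it and run it against specific files. A minimal sketch (the file names are placeholders):
pre-commit install                          # register the git hook
pre-commit run black --files app.py cli.py  # pre-commit invokes: poetry run black app.py cli.py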
Related
I am trying to use the extends keyword in the .gitlab-ci.yml of a GitLab Python project. It's not working, and I can't figure out why not.
I am using GitLab's CI/CD framework to test my Python project. The project has unit tests written with pytest and the following Dockerfile.
# syntax=docker/dockerfile:1
FROM python:3.9
WORKDIR /install
COPY . .
RUN pip install --no-cache-dir --upgrade .
EXPOSE 8000
CMD ["uvicorn", "sample_api.api:app"]
When I have the following .gitlab-ci.yml, GitLab's CI/CD system starts the python:3.9.16-slim-buster image and successfully runs the test job.
include:
  - template: Auto-DevOps.gitlab-ci.yml

test:
  stage: test
  image: python:3.9.16-slim-buster
  before_script:
    - pip install .
  script:
    - pytest tests/unit
  services: []
However, the test job fails when I change it to use the extends keyword like so.
include:
  - template: Auto-DevOps.gitlab-ci.yml

.tests:
  stage: test
  image: python:3.9.16-slim-buster
  before_script:
    - pip install .
  services: []

test:
  extends: .tests
  script:
    - pytest tests/unit
The log of the failed test job looks like this.
...
Preparing the "docker" executor
00:11
Using Docker executor with image gliderlabs/herokuish:latest ...
...
Executing "step_script" stage of the job script
00:03
Using docker image sha256:686c154e24a2373406bdf9c8f44904b5dbe5cd36060c61d3da137086389d18d3 for gliderlabs/herokuish:latest with digest gliderlabs/herokuish@sha256:5d5914135908a234c20eec80daaa6a386bfa74293310bc0c79148fe7a7e4a926 ...
$ pip install .
/bin/bash: line 154: pip: command not found
Cleaning up project directory and file based variables
00:02
ERROR: Job failed: exit code 1
It is failing because the default herokuish:latest image is being used instead of python:3.9.16-slim-buster. It appears that the .tests section is never used.
I assume there's something wrong with my syntax, but it seems so simple and I can't figure out what it is.
I have two projects, JWT and RELEASE-MGMT, under the same group in GitLab. I have the pipelines as follows.
gitlab-ci.yml
JWT:
stages:
  - prjname

include:
  - project: 'testing-group/RELEASE-MGMT'
    ref: 'main'
    file:
      - '/scripts/testing-prj-name.yml'
RELEASE-MGMT: (/scripts/testing-prj-name.yml)

testyqcommand:
  stage: prjname
  before_script:
    - pip3 install jq
    - pip3 install awscli
    - pip3 install yq
  script:
    - pwd
    - ls -ltr
    - echo $CI_PROJECT_NAME
    - yq -r '.$CI_PROJECT_NAME.projectname' projectnames.yml
I am getting the error below:
yq: error: argument files: can't open
'./scripts/testing-service-name.yml': [Errno 2] No such file or
directory: './scripts/testing-service-name.yml'
I was thinking that since the two projects exist in the same group, we could do this without using multi-project pipelines, especially since RELEASE-MGMT is included in all the microservices we have.
include: is a logical mechanism in rendering a pipeline configuration. It won't actually bring any files to the workspace of the project running the pipeline.
If you want to run yq against a YAML file in another project, you'll have to clone the project first or otherwise retrieve the file as part of your CI job -- for example by using the files API or cloning the repo with the job token:
script:
  - git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/<namespace>/<project>
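Alternatively, here is a sketch of the files API route, assuming your GitLab version accepts CI job tokens on this endpoint (the URL-encoded project path, ref, and file name are taken from the question):
script:
  # Fetch projectnames.yml from RELEASE-MGMT via the repository files API
  - 'curl --header "JOB-TOKEN: ${CI_JOB_TOKEN}" "${CI_API_V4_URL}/projects/testing-group%2FRELEASE-MGMT/repository/files/projectnames.yml/raw?ref=main" -o projectnames.yml'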
I am trying to configure a variable, in this case _PATH, using Cloud Build. I have multiple paths (folders) in my GitHub repo with tf files, and I want Terraform to recognize any change made in any folder at the moment a push fires the trigger.
I was wondering if there is any way to loop over values separated by commas in the trigger options and then use a for loop in a bash script, or perhaps there is another, better way that I don't know about yet.
Thanks for the help!
Sadly, I haven't found a way yet to set variables at the cloudbuild.yaml level.
Note that Cloud Build was originally called Cloud Container Builder, which is why it behaves differently from other CI/CD tools.
I think there may be another way to get the behavior you want, though:
Implement the bash looping logic in a script (e.g. sh/run_terraform_applys.sh; a sketch follows the build step below), and create a Dockerfile for it in your repo:
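# Note (assumption): hashicorp/terraform images are Alpine-based and do not
# ship Python/pip, so you may need `RUN apk add --no-cache python3 py3-pip`
# before the pip install step below.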
FROM hashicorp/terraform:1.0.0
WORKDIR /workspace
COPY sh/ /workspace/sh/
COPY requirements.txt /workspace/
RUN pip install -r requirements.txt
RUN . sh/run_terraform_applys.sh
COPY . /workspace/
RUN . sh/other_stuff_to_do.sh
Use the cloud-builders docker image to build your image; as a consequence, the docker build will run sh/run_terraform_applys.sh within the Docker image (you can push your image to GCR to allow for layer caching):
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
    - '-c'
    - |-
      # Build Dockerfile
      docker build -t ${MY_GCR_CACHE} --cache-from ${MY_GCR_CACHE} -f cicd/Dockerfile .
      docker push ${MY_GCR_CACHE}
  id: 'Run Terraform Apply'
  waitFor: ['-']
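For reference, the looping logic inside sh/run_terraform_applys.sh could look something like this minimal sketch; TF_PATHS is a hypothetical comma-separated variable, not something Cloud Build provides:
#!/bin/sh
# Loop over a comma-separated list of Terraform folders,
# e.g. TF_PATHS="networking,compute,storage", and apply each one.
set -e
for dir in $(echo "${TF_PATHS}" | tr ',' ' '); do
  echo "Applying Terraform in ${dir}"
  (cd "${dir}" && terraform init -input=false && terraform apply -auto-approve)
done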
I don't understand why the job finished with exit code 1 in GitLab CI. If I don't use flake8 (e.g. run the script line echo "hello world" > $FOLDER_NAME/my_test.txt), everything is fine.
But I see that flake8 found errors in the directory:
...
$ mkdir -p $FOLDER_NAME
$ flake8 --max-line-length=120 --ignore=W605,W504 --tee --output-file=$FOLDER_NAME/$LOG_NAME $CHECKING_FOLDER
./framework/tests/test_5_helper.py:30:30: W292 no newline at end of file
./framework/tests/test_1_start.py:2:1: F401 'pprint.pprint' imported but unused
Cleaning up file based variables
00:00
ERROR: Job failed: exit code 1
yml-file:
stages:
  - check

pep8_check:
  stage: check
  image: python:3.8-alpine
  variables:
    FOLDER_NAME: 'logs'
    LOG_NAME: 'linter.log'
    CHECKING_FOLDER: './framework/tests'
  when: always
  before_script:
    - python -m pip install --upgrade pip
    - pip install flake8
    - export
    - mkdir -p $FOLDER_NAME
  script:
    - flake8 --max-line-length=120 --ignore=W605,W504 --tee --output-file=$FOLDER_NAME/$LOG_NAME $CHECKING_FOLDER
  artifacts:
    expire_in: 7 days
    paths:
      - $FOLDER_NAME/
Flake8 finds 2 errors and so exits with 1, which makes the GitLab pipeline fail.
You have a few options:
if you want GitLab to ignore any error flake8 may find, just add the --exit-zero flag; this makes flake8 exit with 0, which makes the GitLab pipeline pass (see the example after this list)
if you want to ignore those specific errors from your output:
./framework/tests/test_5_helper.py:30:30: W292 no newline at end of file
./framework/tests/test_1_start.py:2:1: F401 'pprint.pprint' imported but unused
then you just have to add those to the ignore list like you did for the others:
change this --ignore=W605,W504 to this --ignore=W605,W504,W292,F401
you can also go and "fix/amend/change" your code so flake8 stops flagging those lines when parsing your source code
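For example, the first option applied to your job changes only the script line (everything else stays the same):
script:
  - flake8 --max-line-length=120 --ignore=W605,W504 --exit-zero --tee --output-file=$FOLDER_NAME/$LOG_NAME $CHECKING_FOLDER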
In any case, reading flake8 --help may give some more ideas on how to tackle these corner cases, depending on what you want to achieve.
Also see here the lists of error/warning/violation codes E***, W***, F***:
https://pycodestyle.pycqa.org/en/latest/intro.html#error-codes
https://flake8.pycqa.org/en/latest/user/error-codes.html
My goal is to show badges (e.g. a pylint score badge) based on pipeline results.
I have a private GitLab CE Omnibus instance with the following .gitlab-ci.yml:
image: python:3.6

stages:
  - lint
  - test

before_script:
  - python -V
  - pip install pipenv
  - pipenv install --dev

lint:
  stage: lint
  script:
    - pipenv run pylint --output-format=text --load-plugins pylint_django project/ | tee pylint.txt
    - score=$(sed -n 's/^Your code has been rated at \([-0-9.]*\)\/.*/\1/p' pylint.txt)
    - echo "Pylint score was $score"
    - ls
    - pwd
    - pipenv run anybadge --value=$score --file=pylint.svg pylint
  artifacts:
    paths:
      - pylint.svg

test:
  stage: test
  script:
    - pipenv run python manage.py test
So I thought that I would store the image in the artifacts of the lint job and display it via the badge feature.
But I encounter the following issue: when I browse https://example.com/[group]/[project]/-/jobs/[ID]/artifacts/file/pylint.svg, instead of seeing the badge I get the following message:
The image could not be displayed because it is stored as a job artifact. You can download it instead.
And anyway, I feel like this is the wrong way, because even if I could get the image, there doesn't seem to be a way to get the image from the last job, since GitLab URLs for badge images only support %{project_path}, %{project_id}, %{default_branch}, and %{commit_sha}.
So how would one add a badge to a GitLab project based on an SVG generated from results in a GitLab pipeline?
My guess is that I could push to a .badge folder, but that doesn't sound like a clean solution.
You can indeed get the artifact(s) for the latest job (see documentation here), but the trick is that you need to use a slightly different URL:
https://example.com/[group]/[project]/-/jobs/artifacts/[ref]/raw/pylint.svg?job=lint
where [ref] is the reference to your branch/commit/tag.
Speaking of the badge placeholders available in GitLab, you can potentially put %{default_branch} or %{commit_sha} into [ref]. This won't allow you to get the correct badge for every branch, but at least your default branch will get one.
Please also note that the ?job=lint query parameter is required; without it the URL won't work.
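Putting it together, the badge image URL you enter under Settings > General > Badges could look like this (host and file name as in the question; GitLab resolves the placeholders):
https://example.com/%{project_path}/-/jobs/artifacts/%{default_branch}/raw/pylint.svg?job=lint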