pre-commit hooks terminate if previous stage fails

I'm using the pre-commit framework (https://pre-commit.com/) to manage my git pre-commit hooks:
repos:
- repo: local
  hooks:
  - id: pytest-check
    name: pytest-check
    entry: pytest
    language: system
    pass_filenames: false
    always_run: false
  - id: flake8
    name: flake8
    entry: flake8
    language: python
    types: [python]
    args: ['src/']
If pytest-check fails, the flake8 hook still runs.
Is it possible to terminate the execution when a previous hook fails? In this case flake8 would not run if pytest-check failed.
I read through the docs but couldn't find any information on this...

you're looking for fail_fast: true
it can be specified both at the top level and at the hook level
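for example, a minimal sketch reusing the hooks from your config:

fail_fast: true  # top level: stop the whole run after any failing hook
repos:
- repo: local
  hooks:
  - id: pytest-check
    name: pytest-check
    entry: pytest
    language: system
    pass_filenames: false
    fail_fast: true  # hook level: only this hook aborts the rest of the run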
as an aside, you have a few unrelated problems with your configuration:
- generally you don't want to run tests as part of pre-commit: they'll be slow, which will frustrate your users and often leads to them turning the whole thing off
- always_run: false is the default, no need to specify it
- flake8 as you've configured it is both double-linting and fork bombing (you're passing src/ while pre-commit is additionally passing filenames, and you've misconfigured the multiprocessing mode) -- I'd recommend using the pycqa/flake8 repository directly, which configures this correctly (sketch below)
- for repo: local hooks there's no reason to use args -- just specify it directly in entry
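for example, a minimal sketch of that (the rev is a placeholder -- pin whatever the latest tag is):

- repo: https://github.com/pycqa/flake8
  rev: 6.0.0  # placeholder, use the latest tag
  hooks:
  - id: flake8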
disclaimer: I wrote pre-commit

Related

Gitlab runner does not always update environment

I have a gitlab job that does not seem to update the repository before running. Sometimes it leaves some files in their old state and runs the script... Any idea?
For instance, when I have:
packagePython:
  stage: package
  script:
    - .\scripts\PackagePython.ps1
  tags:
    - myServer
  cache:
    paths:
      - .\python\cache\
  only:
    changes:
      - python/**/*
I finally managed to understand what was happening:
I realised that the gitlab-runner did not use exactly the same path for each run on my server, and my script assumed that it did... So I ended up pointing at a build made on the wrong path.
I guess if you think that it is not updating the repository (like I did), make sure you are not referencing a hardcoded path/package in your scripts that could refer to a previous version!
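For example, a rough sketch of the fix (assuming a Windows runner with the PowerShell shell, as the script suggests): GitLab's predefined CI_PROJECT_DIR variable always points at the checkout directory of the current run, so resolving paths relative to it avoids pointing at a previous build:

packagePython:
  stage: package
  script:
    # $env:CI_PROJECT_DIR is set by gitlab-runner to this run's checkout
    - cd $env:CI_PROJECT_DIR
    - .\scripts\PackagePython.ps1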

How to run Cypress tests on a CircleCI orb using cypress-tags

I am trying to run a small test collection with Cypress using the cucumber plugin and the official CircleCI orb.
I've been going through the documentation and I've got the tests running without issues locally. The script I'm using to run them locally is this one:
"test:usa": "cypress-tags run --headless -b chrome -e TAGS='#usa'"
*Note the cypress-tags command and the TAGS option.
For the CI I use the official CircleCI orb and have a configuration like this:
- cypress/run:
    name: e2e tests
    executor: with-chrome
    record: true
    parallel: true
    parallelism: 2
    tags: 'on-pr'
    group: 2x-chrome
    ci-build-id: '${CIRCLE_BUILD_NUM}'
    requires:
      - org/deployment
As you can see, I want to spin up 2 machines across which my feature files are divided, setting the tag to 'on-pr' and grouping the run under '2x-chrome', using the ci-build-id as well.
The thing is that the official orb uses the cypress run command, which does not filter scenarios by their tags, so it is of no use here. My options were:
Using the command parameter in the orb to call the required script as I do locally:
command: npm run test:usa
My problem with this option is that the parallel configuration does not work as expected, so I discarded it.
I tried to pass the TAGS parameter as an env var within the CircleCI executor to see if the orb was able to see it, but it was of no use, as the orb does not use cypress-tags run but cypress run.
environment:
  TAGS: '#usa'
At this point my question is: is there a workaround for this (using cypress-tags with the CircleCI orb), or shall I opt for a different way of testing this?
Thanks in advance

Is it possible to list the files affected by pre-commit run?

While using pre-commit, sometimes I just want to know which filenames are going to be passed to the hook, just to verify that my --from-ref and --to-ref are correct. For example, I was running:
pre-commit run flake8 --from-ref $(git merge-base master HEAD) --to-ref HEAD
and I wasn't sure which files were passed to my flake8 hook; adding --verbose does not help because flake8 will not output the filenames either.
So is there any way to tell pre-commit to only output the list of filenames without running the actual hook?
pre-commit provides a special identity hook for this purpose
you can configure it by doing:
- repo: meta
  hooks:
  - id: identity
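then run it with the same refs you were passing to flake8, e.g.:

pre-commit run identity --from-ref $(git merge-base master HEAD) --to-ref HEAD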
alternatively, if you're just trying to figure out --from-ref / --to-ref -- you can use git diff A...B --name-only as that's what pre-commit is using behind the scenes
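with the refs from your example that would be:

git diff --name-only $(git merge-base master HEAD)...HEAD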
disclaimer: I'm the author of pre-commit
As a workaround you can define a new hook in your .pre-commit-config.yaml that runs ls:
- repo: local
  hooks:
  - id: ls
    name: show files
    language: system
    entry: ls
    pass_filenames: true
Then you can run:
git add .pre-commit-config.yaml
git commit # so that the new config takes effect
pre-commit run ls --verbose --from-ref $(git merge-base master HEAD) --to-ref HEAD
and you will get a list of the files "selected" by pre-commit.

Concourse CI - Build Artifacts inside source, pass all to next task

I want to set up a build pipeline in Concourse for my web application. The application is built using Node.
The plan is to do something like this:
                                         ,-> build style guide -> dockerize
source code -> npm install -> npm test -|
                                         `-> build website -> dockerize
The problem is, after npm install a new container is created, so the node_modules directory is lost. I want to pass node_modules into the later tasks, but because it is "inside" the source code, Concourse doesn't like it and gives me:
invalid task configuration:
you may not have more than one input or output when one of them has a path of '.'
Here's my job setup:
jobs:
- name: test
  serial: true
  disable_manual_trigger: false
  plan:
  - get: source-code
    trigger: true
  - task: npm-install
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: node, tag: "6"}
      inputs:
      - name: source-code
        path: .
      outputs:
      - name: node_modules
      run:
        path: npm
        args: [ install ]
  - task: npm-test
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: node, tag: "6"}
      inputs:
      - name: source-code
        path: .
      - name: node_modules
      run:
        path: npm
        args: [ test ]
Update 2016-06-14
Inputs and outputs are just directories. So you put what you want to output into an output directory, and you can then pass it to another task in the same job. Inputs and outputs cannot overlap, so in order to do it with npm you'd have to copy either node_modules or the entire source folder from the input folder to an output folder, then use that in the next task (see the sketch below).
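A rough sketch of that copy approach (the built-code output name is just illustrative):

- task: npm-install
  config:
    platform: linux
    image_resource:
      type: docker-image
      source: {repository: node, tag: "6"}
    inputs:
    - name: source-code
    outputs:
    - name: built-code
    run:
      path: sh
      args:
      - -exc
      - |
        # copy the source (including dotfiles) into the output, then install there
        cp -a source-code/. built-code/
        cd built-code
        npm install

The npm-test task can then take built-code as a normal input and run npm test inside it.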
This doesn't work between jobs though. The best suggestion I've seen so far is to use a temporary git repository or bucket to push everything up. There has to be a better way of doing this, since part of what I'm trying to do is avoid huge amounts of network IO.
There is a resource specifically designed for this use case of npm between jobs. I have been using it for a couple of weeks now:
https://github.com/ymedlop/npm-cache-resource
It basically allows you to cache the first npm install and just inject it as a folder into the next job of your pipeline. You could quite easily set up your own caching resources by reading the source of that one as well, if you want to cache more than node_modules.
I am actually using this npm-cache-resource in combination with a Nexus proxy to speed up the initial npm install further.
Be aware that some npm packages have native bindings that need to be built against standard libraries matching the container's Linux version, so if you move between different types of containers a lot you may experience issues with musl libc and the like. In that case I recommend either standardizing on the same container type throughout the pipeline, or rebuilding the node_modules in question...
There is a similar resource for gradle (on which the npm one is based):
https://github.com/projectfalcon/gradle-cache-resource
"This doesn't work between jobs though."
This is by design. Each step (get, task, put) in a job is run in an isolated container. Inputs and outputs are only valid inside a single job.
What connects jobs is resources. Pushing to git is one way. It'd almost certainly be faster and easier to use a blob store (e.g. S3) or file store (e.g. FTP).
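A rough sketch with the s3 resource (the bucket, file pattern, and credential vars are placeholders):

resources:
- name: node-modules
  type: s3
  source:
    bucket: my-ci-artifacts               # placeholder bucket
    regexp: node_modules-(.*).tgz         # versioned tarball name
    access_key_id: ((aws-access-key))     # placeholder credentials
    secret_access_key: ((aws-secret-key))

One job then puts a tarball of node_modules at the end of its plan, and the next job gets it back at the start of its plan.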

Puppet + Beaker + Travis: acceptance test failing

I have a Puppet module with acceptance tests based on Beaker. The module is working fine, and when running locally all the acceptance tests pass. But when I run the tests on Travis I get the following error in the module execution:
/Stage[main]/Alfred::Services/Service[alfred]: Could not evaluate: undefined method `[]' for nil:NilClass
Alfred is a system service based on upstart that is part of my module.
I am using Puppet 4.3.2. Here is the travis build: https://travis-ci.org/nicopaez/alfred-puppet
Any idea?
Looking at the code, there are a few issues.
One is that the environment variable you're using in Travis doesn't set the Puppet version. You need to add this code to your spec_helper_acceptance.rb:
hosts.each do |host|
  install_puppet_on(host,
    :puppet => ENV['PUPPET_VERSION'] || '4.3.2',
  )
end
Right now it's still installing Puppet 3.8 (the default, which is the latest).
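For completeness, the matching .travis.yml entry would be something like this (a sketch; the variable name must match what spec_helper_acceptance.rb reads):

env:
  - PUPPET_VERSION=4.3.2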
For further information on what is actually causing the issues in Travis, I forked your repo and did a build where I enabled the debug and trace options for beaker:
result = apply_manifest(pp, :trace => true, :debug => true)
From this, looking at the Travis build, there's an issue with the git clone exec:
Debug: Exec[clone-repo](provider=posix): Executing 'git clone https://github.com/fiuba/alfred.git /var/www/alfred'
Debug: Executing 'git clone https://github.com/fiuba/alfred.git /var/www/alfred'
Notice: /Stage[main]/Alfred::App/Exec[clone-repo]/returns: fatal: Could not change back to '/root': Permission denied
Error: git clone https://github.com/fiuba/alfred.git /var/www/alfred returned 128 instead of one of [0]
You could fix this by using the vcsrepo module, which performs git clones in a more idempotent way:
vcsrepo { '/var/www/alfred':
  ensure   => present,
  source   => 'https://github.com/fiuba/alfred.git',
  provider => git,
  user     => 'deployer',
}
There are a few other fixes; I'm PRing them to your module, and will add a summary to this answer after you've reviewed and merged, as some are significant refactors with a few different approaches.
