My team uses pre-commit in our repositories to run various code checks and formatters. Most of my teammates use it, but some skip it entirely by committing with git commit --no-verify. Is there any way to run something in CI/CD to ensure all pre-commit hooks pass (we're using GitHub Actions)? If at least one hook fails, it should throw an error.
There are several choices:
an ad hoc job which runs pre-commit run --all-files, as demonstrated in the pre-commit.com docs (you probably also want --show-diff-on-failure if you're using formatters); a minimal workflow sketch follows this list
pre-commit.ci, a CI system specifically built for this purpose
pre-commit/action, a GitHub action which is in maintenance-only mode (not recommended for new use cases)
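For the first option, a minimal sketch of such a workflow (the action versions are assumptions; pin whatever your project uses):

    # .github/workflows/pre-commit.yml -- a minimal sketch
    name: pre-commit
    on: [push, pull_request]
    jobs:
      pre-commit:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-python@v5
          - run: pip install pre-commit
          # fails the job (and thus the check) if any hook fails
          - run: pre-commit run --all-files --show-diff-on-failure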
Disclaimer: I created pre-commit, pre-commit.ci, and the GitHub action.
Related
We have a lot of Python code residing in a local git repository. Having installed GitLab locally, we need to implement a CI/CD pipeline. The need is to ensure that all code is sanitized before being pushed to the remote git repository. The pre-commit hooks that come by default with git should help in doing so. The question is: will it help to integrate git hooks with the CI/CD pipeline? How?
That hook is a client-side hook.
CI/CD, on the other hand, runs on the server side. That means the hook itself is not integrated, but the script/command used by that hook can be reused in a gateway pipeline (on a runner configured to run Python).
(See also those CI/CD pipeline examples.) The typical flow is:
you push your topic/feature branch
the gateway pipeline is triggered (by the push event)
if it passes, it in turn merges your code to an integration branch (like development)
if it does not pass, your code does not end up on the dev branch, forcing you to fix whatever issue was highlighted by the gateway pipeline execution.
You also have Code Quality reports, to analyze how your improvements are impacting your code’s quality.
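As a minimal sketch of such a gateway job (the stage name, job name, image, and lint command are all assumptions; reuse whatever your client-side hook actually runs):

    # .gitlab-ci.yml -- a minimal gateway sketch
    stages:
      - gate

    sanitize:
      stage: gate
      image: python:3.12
      script:
        # run the same command your client-side pre-commit hook runs,
        # e.g. a lint step (flake8 is an illustrative choice)
        - pip install flake8
        - flake8 .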
I'm trying to set up GitLab CI/CD for an old client-side project that makes use of Grunt (https://github.com/yeoman/generator-angular).
Up to now the deployment worked like this:
run '$ grunt build' locally, which built the project and created files in a 'dist' folder in the root of the project
commit changes
changes pulled onto production server
After creating the .gitlab-ci.yml and making a commit, the GitLab CI/CD job passes, but the files in the 'dist' folder in the repository are not updated. If I define an artifact, I get the changed files in the download. However, I would prefer the files in the 'dist' folder in the repository to be updated so we can carry on with the same workflow, which suits us. Is this achievable?
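For reference, the kind of .gitlab-ci.yml I have in mind looks roughly like this (the image and install steps are illustrative, not my exact config):

    build:
      image: node:8        # illustrative; an old Node image to match the project
      script:
        - npm install
        - npm install -g grunt-cli
        - grunt build
      artifacts:
        paths:
          - dist/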
I don't think committing into your repo inside a pipeline is a good idea. Version control history wouldn't be as clear, and since some people trigger pipelines automatically when their repo is pushed, that would set off a loop of pipelines.
Instead, you might reorganize your environment to use Docker; there are numerous reasons for using Docker in professional and development environments. To name just a few: it would enable you to save the freshly built project into a registry and reuse it whenever needed, at exactly the version you require and with the desired /dist inside, so that you can easily run it in multiple places, scale it, manage it, etc.
If you switched to Docker, you wouldn't actually have to do a thing to keep dist persistent: just push the image to the registry after the build is done.
But to actually answer your question:
There is a feature request that has been hanging around for a very long time for the same problem you asked about: here. As GitLab members state, there is currently no safe and professional way to do it. You can, however, push changes back, as one of the GitLab members (Kamil Trzciński) suggested:
git push http://gitlab.com/group/project.git HEAD:my-branch
Just put it in the script section of your .gitlab-ci.yml file.
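For example, a rough sketch (PUSH_TOKEN is an assumed secret CI variable holding a token with write access, and [ci skip] in the commit message keeps the push from triggering yet another pipeline):

    build:
      stage: build
      script:
        - grunt build
        - git config user.email "ci@example.com"    # identity for the CI commit
        - git config user.name "GitLab CI"
        - git add dist
        # commit only if the build actually changed something
        - git commit -m "Rebuild dist [ci skip]" || exit 0
        # PUSH_TOKEN is an assumed secret variable; on older GitLab versions
        # the branch variable is CI_BUILD_REF_NAME instead of CI_COMMIT_REF_NAME
        - git push "https://oauth2:${PUSH_TOKEN}@gitlab.com/group/project.git" "HEAD:${CI_COMMIT_REF_NAME}"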
There are more hacky methods presented there, but be sure to acknowledge the risks that come with them (such pipelines are more error-prone, and if configured in the wrong way they might, for example, publish confidential information or trigger an infinite loop of pipelines).
I hope you found this useful.
We are using GitLab Community Edition 8.15.2 and are using custom global git hooks for all our repos (i.e. all repos use the same hooks).
For one of our repos, I want to use <project>.git/custom_hooks hooks and NOT the global hooks.
According to the GitLab documentation for chained git hooks (https://docs.gitlab.com/ce/administration/custom_hooks.html), it goes through all the possible locations and executes each hook as long as the previous ones exit successfully.
I don't want it to execute both the custom_hooks and the global hooks...just the custom one. Is this possible?
The problem is that, since gitlab-shell merge request 93, <project>.git/hooks/ is a symlink to the global gitlab-shell/hooks directory.
So if you want to be sure the global hooks are not used, you would either:
change the symlink to an empty folder (but that might have side effects if gitlab-shell expects to run, for instance, a common global pre-receive or update hook)
change the global scripts to detect the Git repo they are executed in, and exit immediately (with status 0) if the repo matches one for which you don't want global hooks, as in the sketch after this list
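A sketch of such a guard at the top of a global hook script (the repository path is a placeholder):

    #!/bin/sh
    # top of the global hook: bail out early for repos that manage their own hooks
    # (server-side hooks run with the bare repository as the working directory)
    case "$(pwd)" in
      */repositories/mygroup/special-project.git)   # placeholder path
        exit 0   # let <project>.git/custom_hooks handle this repo instead
        ;;
    esac
    # ... existing global hook logic continues here ...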
We have set up a perfectly running gitlab + gitlab-ci installation. We are now looking at how to do cross-project builds. Our project is divided into several repositories, and everything is joined during the build process via Composer.
What I would like to achieve is this: when you commit to any of those repositories, they trigger a build of the main repository. I was trying to achieve this via webhooks; unfortunately, I need a lot of information about the commit from the main repository that I don't have.
Any idea how to do it?
I updated the gitlab-ci code a little bit: https://github.com/gitlabhq/gitlab-ci/commit/7c7066b0d5a35097a04bb31848d6b622195940ed
I can now call the API.
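For reference, recent GitLab versions have this built in as the pipeline triggers API; a call like the following (the project ID and TRIGGER_TOKEN are placeholders) starts a build of the main project from any other repo's build step:

    # trigger a pipeline on the main project (ID 42 is a placeholder)
    curl --request POST \
      --form "token=${TRIGGER_TOKEN}" \
      --form "ref=master" \
      "https://gitlab.example.com/api/v4/projects/42/trigger/pipeline"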
For my Node.js application, I am trying to run a hook through gitolite which performs the following actions (on the server side):
Update the repo to take into account the new changes (git fetch + git reset --hard newref)
Update the application dependencies (bower update, npm update / install)
Check some basic rules (coding rules, unit tests 100% ok, etc). Basically, it runs something like grunt test (jshint, mocha, ...)
Compile everything (grunt build)
Run the application
If one of these steps somehow fails, the whole process is stopped, the old application is restored, and the push is denied.
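In script form, the process would look roughly like this (restore_old_application and restart_app are placeholders for my actual setup):

    #!/bin/bash
    # rough sketch of the server-side deployment steps
    set -e
    trap restore_old_application ERR          # undo everything if any step fails
    newref=$1                                 # new SHA passed in by the hook
    git fetch origin
    git reset --hard "$newref"
    bower update && npm install
    grunt test                                # jshint, mocha, ...
    grunt build
    restart_app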
My first approach was to add a pre-receive hook to this specific repo, but since it is triggered before the gitolite update hook (which checks your access rights), this was a bad approach anyway.
So I am now thinking about using VREFs, which work somewhat like a secondary update hook. I'm fairly sure it would work this way; however, it seems VREFs are usually there to perform only basic checks, and are not intended to be used for something like a full deployment process.
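From my reading of the gitolite docs, such a VREF would look roughly like the following sketch (VREF/DEPLOY is a name I made up, and deploy.sh stands for the script above):

    #!/bin/sh
    # sketch of a VREF script; gitolite passes the same first three
    # arguments as an update hook
    ref=$1 oldsha=$2 newsha=$3
    if ! /path/to/deploy.sh "$newsha"; then
        # printing the VREF name makes it match a deny rule
        # such as:  - VREF/DEPLOY = @all
        echo "VREF/DEPLOY deployment failed, push denied"
    fi
    exit 0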
I've done some research, and it seems that people usually use a post-receive hook to deploy their app. This means that if something fails, the push is accepted anyway. I would really like to avoid accepting a commit which breaks the application at some point.
Is there a better way to achieve this deployment?