For my nodejs application, I am trying to run a hook through gitolite which performs the following actions (on the server side):
Update the repo to take into account the new changes (git fetch + git reset --hard newref)
Update the application dependencies (bower update, npm update / install)
Check some basic rules (coding rules, unit tests 100% ok, etc). Basically, it runs something like grunt test (jshint, mocha, ...)
Compile everything (grunt build)
Run the application
If one of these steps somehow fails, the whole process is stopped, the old application is restored and the push is denied.
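Concretely, the script such a hook would run boils down to something like the following rough sketch; the path, the $newref variable, and the use of pm2 are placeholders, not part of my actual setup:
#!/bin/sh
# rough sketch of the server-side deploy script the hook would call; any failing step aborts
set -e
cd /srv/myapp                    # server-side checkout of the repo (placeholder path)
git fetch origin
git reset --hard "$newref"       # $newref would be the new ref passed in by the hook
npm install && bower update      # refresh dependencies
grunt test                       # jshint, mocha, ...
grunt build
pm2 restart myapp                # only reached if every step above succeeded (pm2 is just one option)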
My first approach was to add a pre-receive hook to this specific repo, but since it is triggered before gitolite's own update hook (which checks your access rights), that approach was flawed anyway.
So I am now thinking about using VREFs, which work somewhat like a secondary update hook. I'm fairly sure it would work, but VREFs seem to be meant for basic checks only, not for something like a full deployment process.
I've done some research and it seems people usually use a post-receive hook to deploy their app. This means that if something fails, the push is accepted anyway, and I really would like to avoid accepting a commit that breaks the application at some point.
Is there a better way to achieve this deployment?
Related
My team uses pre-commit in our repositories to run various code checks and formatters. Most of my teammates use it, but some skip it entirely by committing with git commit --no-verify. Is there any way to run something in CI/CD to ensure all pre-commit hooks pass (we're using GitHub Actions)? If at least one hook fails, it should throw an error.
There are several choices:
an ad-hoc job which runs pre-commit run --all-files, as demonstrated in the pre-commit.com docs (you probably also want --show-diff-on-failure if you're using formatters); a minimal sketch follows this list
pre-commit.ci, a CI system specifically built for this purpose
pre-commit/action, a GitHub Action which is in maintenance-only mode (not recommended for new use cases)
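For that first option, the job essentially boils down to two commands (this assumes Python is already available on the CI runner; pinning versions is up to you):
# what the ad-hoc CI job runs; fails the build if any hook fails
pip install pre-commit
pre-commit run --all-files --show-diff-on-failure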
disclaimer: I created pre-commit, and pre-commit.ci, and the github action
I have a project running on a remote server. I cloned it onto the server to run it. The problem is that every time I make a change to the code via git, I have to go onto the remote server, delete the folder, and clone it once again. How can it automatically detect a change in the repo and update itself?
You're looking for what's called continuous delivery/deployment.
Since you're using GitHub, you may want to look at GitHub Actions. This is one of many mechanisms that are available.
You can configure Actions to trigger various steps (including building, testing, and deploying your code [to the Digital Ocean droplet]) every time you make a commit.
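As an illustration only, the deploy step of such a workflow could simply run a command over SSH once build and tests pass; the user, host, and path below are placeholders:
# hypothetical deploy step executed by the CI job after build/test succeed
ssh deploy@your-droplet 'cd /srv/myproject && git pull --ff-only'
# ...followed by whatever restart command your application needs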
I’m trying to set up GitLab CI/CD for an old client-side project that makes use of Grunt (https://github.com/yeoman/generator-angular).
Up to now the deployment worked like this:
run ’$ grunt build’ locally which built the project and created files in a ‘dist’ folder in the root of the project
commit changes
changes pulled onto production server
After creating the .gitlab-ci.yml and making a commit, the GitLab CI/CD job passes, but the files in the ‘dist’ folder in the repository are not updated. If I define an artifact, I get the changed files in the download. However, I would prefer the files in the ‘dist’ folder in the repository to be updated so we can carry on with the same workflow, which suits us. Is this achievable?
I don't think committing into your repo from inside a pipeline is a good idea. It makes version control less clear, and since many projects trigger a pipeline automatically whenever the repo is pushed to, a commit made by the pipeline could set off a loop of pipelines.
Instead, you might reorganize your environment to use Docker; there are numerous reasons for using Docker in professional and development environments. To name just a few: it lets you save the freshly built project into a registry and reuse it whenever needed, in exactly the version you require and with the desired /dist inside, so you can easily run it in multiple places, scale it, manage it, and so on.
If you switched to Docker, you wouldn't actually have to do a thing to keep dist persistent: just push the image to the registry after the build is done.
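A minimal sketch of what that build job's script could run, using GitLab's predefined CI variables for the registry (the tag scheme and a Dockerfile that copies the built dist/ are assumptions):
# build an image containing the freshly built dist/, then push it to the GitLab registry
docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"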
But to actually answer your question:
There is a feature request that has been open for a very long time about exactly this problem: here. As GitLab members state, there is currently no safe and professional way to do it, although you can push changes back, as one of the GitLab members (Kamil Trzciński) suggested:
git push http://gitlab.com/group/project.git HEAD:my-branch
Just put it in the script section of your .gitlab-ci.yml file.
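For example, the script section could stage the rebuilt dist folder and push it back; the token variable and branch below are placeholders, and putting [skip ci] in the commit message helps avoid the pipeline loop mentioned above:
# hypothetical push-back from inside the job; CI_PUSH_TOKEN is a variable you would have to define yourself
git config user.email "ci@example.com"
git config user.name "CI"
git add dist/
git commit -m "Update dist [skip ci]" || echo "nothing to commit"
git push "https://oauth2:${CI_PUSH_TOKEN}@gitlab.com/group/project.git" HEAD:my-branch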
There are more hack'y methods presented there, but be sure you understand the risks that come with them: such pipelines are more error-prone and, if configured the wrong way, they might for example leak confidential information or trigger an infinite loop of pipelines.
I hope you found this useful.
I am having a build issue on Travis with my node.js project. The issue stems from the fact that I have a rather complex test that I want to run, which requires building and running some test scaffolding framework on the VM before I get to 'npm test'. Somewhere along the line it is failing, and I find myself adding debugging statements to my .travis.yml to try to root out the problem, but it's annoying to have my commit history littered with these changes/attempted fixes.
I guess I want to be able to either (a) get on the travis box at the time the test is running (or afterwards) so I can inspect what is going on/went wrong, or (b) at least be able to tweak and run my .travis.yml file and associated scripts somehow and re-run immediately without having to formally check those changes in in order to kick off travis again.
I find myself adding debugging statements to my .travis.yml to try to root out the problem, but it's annoying to have my commit history littered with these changes/attempted fixes.
If the history is important, maybe because your changelog is generated from it, then my suggestion is to create a private sandbox for experiments by cloning the repo.
clone the organization repo to a user repo
activate Travis on the user repo
commit trial-and-error changes to your .travis.yml for as long as you need
when everything works the way you want, squash the git commits into one
open a pull request of this single commit from the user repo to the organization repo
et voilà: the history stays clean
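In shell terms, that sandbox workflow looks roughly like this (the repository and branch names are placeholders, and Travis is enabled for the user repo through its settings page):
# point a "sandbox" remote at your personal copy of the repo
git remote add sandbox git@github.com:youruser/project.git
# iterate: push as many .travis.yml experiments to the sandbox as you need
git push sandbox my-ci-fixes
# once the build is green, squash the experiments into a single commit and open the pull request
git rebase -i origin/master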
Big Warning: when you have no contributors with forks to worry about, you could simply commit until you get it right, squash the history into a single commit, and do a force push.
get on the travis box at the time the test is running (or afterwards) so I can inspect what is going on/went wrong
That's not possible. But you can view or download the log from the builds.
If you view the build log directly after a push, you get a live view of the processing steps in the Travis environment. You can also cancel the build manually.
at least be able to tweak and run my .travis.yml file and associated scripts somehow and re-run immediately without having to formally check those changes in in order to kick off travis again.
When you are logged in on Travis, you will find a button to rerun a build.
You could try executing your build commands inside a normal Ubuntu VM.
Back in the day, box images were available at http://files.travis-ci.org/boxes/provisioned/travis-ruby.box
But Travis switched from Vagrant to BlueBox and stopped providing the downloads.
You could try asking on IRC to get access to your “box” for debugging.
I'm not sure if you get access.
I am looking for a way to easily deploy a nodejs app via a command line script.
I found one solution:
https://github.com/Skookum/nimbus
I also heard that the whole thing can be done with git and post commit hooks.
What would people recommend?
Edit: I am deploying it to my own box where I have root.
You have two options on a self-hosted setup.
Do it all yourself
This entails git post-receive hooks. In short, you set up your production box to host a copy of your repository, and on your local machine you set up a remote pointing at it; let's call the remote production.
Now when you run git push production master on your local machine, the updates are sent and the server executes its post-receive hook, which runs whatever you wish.
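The one-time setup could look like this (the server path and repository name are placeholders):
# on the server: create a bare repository to push to
ssh user@yourserver 'git init --bare /srv/git/myapp.git'
# on your machine: add it as the "production" remote and push
git remote add production user@yourserver:/srv/git/myapp.git
git push production master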
Actions you may want are: checking out/writing the data in the repo to files/folders (the git repo on the server is stored as a bare repo); restarting your webserver; notifying you that there's been a deployment etc.
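A minimal post-receive hook along those lines might look like the sketch below; the work tree path, branch, and process manager are placeholders (the hook lives in hooks/post-receive inside the bare repo and must be executable):
#!/bin/sh
# write the pushed files from the bare repo into the live directory
GIT_WORK_TREE=/var/www/myapp git checkout -f master
cd /var/www/myapp
npm install --production
# restart the app; pm2 is just one option (forever, systemd, etc. work too)
pm2 restart myapp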
I'd suggest reading up on it at http://git-scm.com/book/en/Customizing-Git-Git-Hooks and taking a look at a few tutorials; this one (http://ryanflorence.com/deploying-websites-with-a-tiny-git-hook/) looks pretty legit.
Use a service to manage it for you: http://www.deployhq.com/ is the only one that springs to mind, but I'm sure there are others.
Good Luck and Happy Hacking :)
There is a tool called shipit.js (https://github.com/shipitjs/shipit) which allows you to perform different deployment tasks like:
moving code from the repo to the server
restarting server
installing node_modules
etc.
You create a config file and then run npx shipit deploy, and all the tasks you specify are performed. In case of failure, it has a rollback mechanism.
There is a nice screencast about it: https://youtu.be/8PpBySjkWEM.