I have a small project on GitLab and I keep a CHANGELOG.md file in it. I want to update it with every merge to master, but occasionally I forget. I'm using GitLab CI and so I'd like to employ it to check if the changelog file was changed. How can I do it?
This was my solution after a lot of trial and error; hope it helps. Basically, I only want to trigger the job when there is a new merge request. Using git, I get the list of files that changed within the merge request; if the file I want to track is in that list, the job succeeds, and if not, it fails.
update_file_check:
  stage: test
  script:
    - git fetch
    - FILES_CHANGED=$(git diff --name-only $CI_MERGE_REQUEST_DIFF_BASE_SHA...HEAD)
    - |+
      for i in $FILES_CHANGED
      do
        if [[ "$i" == "<filename>" ]]
        then
          exit 0
        fi
      done
      exit 1
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
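If you don't need the explicit loop, a shorter sketch of the same idea lets grep's exit status decide the job result (CHANGELOG.md here stands in for whatever file you want to track; everything else uses the same predefined CI variables as above):

check_changelog:
  stage: test
  script:
    - git fetch
    - git diff --name-only $CI_MERGE_REQUEST_DIFF_BASE_SHA...HEAD | grep -qx "CHANGELOG.md"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

grep -qx exits with 0 only when some line matches the filename exactly, so the job passes or fails just like the loop version.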
It is possible, and there are several ways to achieve it. I would propose using the same host where GitLab is located as a runner with the shell executor. Basically, that gives you a way to run a few commands on that GitLab runner. There are a lot of resources on the internet, and even in the official GitLab docs, but to sum up, you will need to follow this flow:
1. .gitlab-ci.yml
This file should be in the root of your project. It is read and interpreted by GitLab when running CI/CD tasks. It can be as complex as you wish, but I like to keep things simple, so I will just invoke a script when the master branch changes.
The content might be something like this:
Check Changelog:
  script:
    - sh .gitlab/CI-CD-Script.sh ## Execute the script on the GitLab runner.
  only:
    - master ## Run only when the master branch changes.
2. .gitlab/CI-CD-Script.sh
As mentioned, I prefer to call a script that manages all the CI/CD logic, though, as said before, there are multiple ways to achieve the same result. You could build the script as follows:
#!/bin/bash
# Download the original changelog from the master branch.
wget -O /tmp/CHANGELOG.md http://<yourgitlabAddress>/<pathToProject>/-/raw/master/CHANGELOG.md
if cmp -s /tmp/CHANGELOG.md CHANGELOG.md; then ## cmp -s succeeds when the files are identical.
    echo "Changelog not changed"
    exit 1 ## Job will fail
else
    echo "Changelog changed"
    exit 0 ## Job will pass
fi
That should be all. I can't test it in your environment, but I hope it helps.
I have a CI/CD pipeline in GitHub Actions, and one of its steps commits to another repository: it clones the external repository, moves files into it, and then commits them back to the external repository.
The thing is, there is no guarantee that there will be a new file to commit.
When that happens, the step fails because Git exits with an error.
How can I get around that?
You could use the git status --porcelain command (reference 1 + reference 2) to check whether any changes occurred.
It could look like this using bash:
run: |
  if [[ -n "$(git status --porcelain)" ]]; then
    echo "OK: Changes detected."
  else
    echo "WARNING: No changes were detected."
  fi
shell: bash
Note: I'm using it in an action that commits and pushes changes.
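To tie it together, here is a minimal sketch of a workflow step that only commits and pushes when there is something to commit (the step name and commit message are just placeholders):

- name: Commit changes if any
  shell: bash
  run: |
    if [[ -n "$(git status --porcelain)" ]]; then
      git add -A
      git commit -m "Update generated files"
      git push
    else
      echo "No changes to commit."
    fi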
When working with many projects and branches at the same time, I occasionally make stupid mistakes like pulling into the wrong branch. For example, while on branch master I ran git pull origin dangerous_code and didn't notice for quite some time. This small mistake caused a lot of mess.
Is there any way to make git ask for confirmation when I attempt to pull a branch other than branch that is currently checked out? Basically I want it to ask for confirmation if the branch name doesn't match (checked out and the one being pulled).
For now, I'll focus on how to prompt the user for confirmation before any pull is carried out.
Unfortunately, because there is no such thing as a pre-pull hook, I don't think you can get the actual pull command to directly do that for you. As I see it, you have two options:
1 - Use fetch then merge (instead of pull)
Instead of running git pull, run git fetch, then git merge or git rebase; breaking down pull into the two steps it naturally consists of will force you to double-check what you're about to merge/rebase into what.
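For example, a quick sketch of that two-step flow (origin and master are placeholders for your remote and branch):

git fetch origin
git log --oneline HEAD..origin/master   # review what you are about to merge
git merge origin/master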
2 - Define an alias that asks for confirmation before a pull
Define and use a pull wrapper (as a Git alias) that prompts you for confirmation if you attempt to pull from a remote branch whose name is different from the current local branch.
Write the following lines to a script file called git-cpull.sh (for confirm, then pull) in ~/bin/:
#!/bin/sh
# git-cpull.sh
if [ "$2" != "$(git symbolic-ref --short HEAD)" ]
then
    while true; do
        printf %s "Are you sure about this pull? "
        read yn
        case "$yn" in
            [Yy]*)
                git pull "$@"
                break
                ;;
            [Nn]*)
                exit
                ;;
            *)
                printf %s\\n "Please answer yes or no."
        esac
    done
else
    git pull "$@"
fi
Then define the alias:
git config --global alias.cpull '!sh git-cpull.sh'
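Note that for the alias to find the script, ~/bin has to be on your PATH (when given a script name without a slash, sh typically searches PATH for it); making the script executable doesn't hurt either:

chmod +x ~/bin/git-cpull.sh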
After that, if, for example, you run
git cpull origin master
but the current branch is not master, you'll be asked for confirmation before any pull is actually carried out.
Example
$ git branch
* master
$ git cpull origin foobar
Are you sure about this pull? n
$ git cpull origin master
From https://github.com/git/git
* branch master -> FETCH_HEAD
Already up-to-date.
Is there any way to block deletion of remote branches?
I want to block deletion of remote branches, but the normal flow, like checking in and checking out code, should keep working fine.
Is it possible without using gitolite?
Please help!
Yes, it is possible. Just add a suitable server side git hook.
You probably want to use a pre-receive hook. For details, have a look here or here.
Example:
# create repositories
git init a
git init --bare b

# add the hook in "b"
echo -e '#!/usr/bin/bash\nread old new ref\ntest $new != 0000000000000000000000000000000000000000' > b/hooks/pre-receive
chmod +x b/hooks/pre-receive

# create a commit in "a"
cd a
echo foo > test
git add .
git commit -m testcommit

# push it to "b"
git push ../b master

# try to delete the remote branch (the hook rejects this)
git push ../b :master
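The one-line hook above, written out as a readable standalone pre-receive script, might look like this (a sketch; the error message is my own addition):

#!/bin/sh
# pre-receive: reads "<oldrev> <newrev> <refname>" lines from stdin.
zero="0000000000000000000000000000000000000000"
while read old new ref
do
    # A deletion pushes the all-zeros object id as the new revision.
    if [ "$new" = "$zero" ]; then
        echo "*** Deleting $ref is not allowed in this repository" >&2
        exit 1
    fi
done
exit 0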
This fragment from Git's sample update hook did it for me (the surrounding case statement and the $refname, $newrev_type, and $allowdeletebranch variables all come from that sample, which reads the hooks.allowdeletebranch config option):

case "$refname","$newrev_type" in
    refs/heads/*,delete)
        # delete branch
        if [ "$allowdeletebranch" != "true" ]; then
            echo "*** Deleting a branch is not allowed in this repository" >&2
            exit 1
        fi
        ;;
esac

Adding this to the update hook solved my problem. Hope this will help someone else too.
I'm not sure why you're avoiding gitolite (which is sort of the end point of all access control, as it were), but I have a sample pre-receive script here that uses hooks.* git config entries to do some simple access controls. It's not as fancy as gitolite but it does some things I cared about once. :-)
I've got a git workflow set up similar to this: http://joemaller.com/990/a-web-focused-git-workflow/. Essentially, I have local repositories that report to a bare remote repository. I also have my deployment directory, accessible via web, set up as a repository that reports to the same bare repository.
It's set up with git hooks so that when a local developer pushes changes to the remote repository, a hook goes into the web folder and pulls from the repository, so it always has the latest and greatest. It all works pretty well.
My crux is that I'm looking to accommodate the people who don't want to use Git and just want to upload files to the web folder via FTP. I've kind of got this working by setting up an inotifywait monitor on the web folder for whenever files are written, modified, moved, deleted, created, etc. My bash script for this is as follows.
#!/bin/sh
inotifywait --exclude '\.swp$' -rm -e modify,move,create,delete,delete_self,unmount /var/www/html/mysite | while read line
do
    now=$(date +"%m_%d_%Y:%T")
    echo $now >> temp.txt
    cd /var/www/html/mysite || exit
    git add --all
    git commit -m "ftp update $now" -a
done
This actually works too, but what I'm observing is that I'm stuck in the while loop once I trigger inotifywait by modifying a file in my web folder. Does anyone have any ideas on this? Ideally, I would love it to do its thing and not be stuck in the while loop continuously running unnecessary git commands.
Thanks!
The man page for inotifywait suggests that you do a different loop style:
while inotifywait -e modify /var/log/messages; do
…
done
Have you tried that?
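Adapted to your script, a sketch might look like this (the .git exclusion is my own addition, to keep Git's writes to the repository from re-triggering the watch):

#!/bin/sh
while inotifywait -r -e modify,move,create,delete \
        --exclude '(/\.git/|\.swp$)' /var/www/html/mysite
do
    now=$(date +"%m_%d_%Y:%T")
    cd /var/www/html/mysite || exit
    git add --all
    git commit -m "ftp update $now"
done

Without -m, inotifywait exits after the first event, so the loop body runs once per change instead of consuming an endless event stream.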
I would like a local Git repository in my home directory that autosaves to the repository every five minutes.
I have two questions:
Is this a sane thing to do?
How does one go about writing a script that implements this functionality for a specified set of directories in the home directory on Linux?
The aim is to capture the history of all the important files in my home directory automatically, without any input from me, so I can use it whenever I screw up.
Sanity is all relative!
I guess it depends on why you are backing up. If it's for hardware failure, then this won't work, because the repository is in the same folder (/home/), so if the folder goes, the repo goes. Unless, of course, you are pushing it to a storage repo on another machine somewhere as the actual backup.
We do use git to store important things, especially research papers and PDFs, so we can easily share them.
You would write a cron job that runs a script every so often. Basically, you would write a simple bash script that does a git commit -a -m "commit message" periodically in your folder. The tricky part is doing the git add on the new files that were created so they are tracked. You will likely need to do a git status and parse the output from it in your script to find the new files, then git add that list. Python may be the easiest way to do that. Then you register that with cron.
Google is your friend here, there are plenty of examples on how to register scripts with cron.
Write a shell script that would enter each directory you want and run
git add .
git commit -m "new change"
git push
and then use cron to run the script every 5 minutes, as shown below.
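For instance, a crontab entry like this would do it (the script path is just an example):

*/5 * * * * /home/you/bin/autosave.sh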
Write a shell script to do the following:
1) git status -uno // It gives you the files which are modified
2) Iterate through the file list from step 1 and do git add <file>
3) git commit -m "latest change <date:time>"
Schedule this script in cron.
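A concrete sketch of such a script (the directory list is an assumption; using git add -A stages new, modified, and deleted files, so there is no need to parse the git status output):

#!/bin/sh
# autosave.sh -- commit any changes in each watched directory.
for dir in "$HOME/projects" "$HOME/notes"   # example directories
do
    cd "$dir" || continue
    git add -A
    # git commit exits non-zero when there is nothing to commit; that is fine here.
    git commit -m "autosave $(date +"%Y-%m-%d %T")"
done

Schedule it with the same kind of cron entry as shown above.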