How do I convert this post-checkout hook to be usable with pre-commit?

I have a post-checkout hook that I'm trying to convert to be usable with pre-commit.
#!/bin/bash
# 0 means 'git checkout somefile' (don't do anything)
# 1 means 'git checkout branchname'
echo "> $*"
(($3)) || exit 0
declare -a blocked
blocked+=('master' 'main' 'examples')
printf -v blocked_rx '%s|' "${blocked[@]}"
blocked_rx="${blocked_rx%?}"
# shellcheck disable=SC2034
read -r prev cur < <(git reflog | awk 'NR==1{ print $6 " " $8; exit }')
[[ $cur =~ $blocked_rx ]] \
&& echo "WARNING: You cannot push $cur branch to remote!"
exit 0
I've created a .pre-commit-hooks.yaml file.
- id: warn-branch-on-checkout
  name: Message to stderr if branch
  language: script
  pass_filenames: false
  always_run: true
  stages: [post-checkout]
  entry: pre-commit-hooks/warn-branch-on-checkout
And my .pre-commit-config.yaml file looks like:
default_install_hook_types:
  - pre-commit
  - post-checkout
repos:
  - repo: https://MyCompany@dev.azure.com/MyCompany/MyProject/_git/myrepo
    rev: v0.1.12
    hooks:
      - id: warn-branch-on-checkout
        args: ['examples']
The bash script lives in pre-commit-hooks off the top level of the repository.
As far as I can tell, pre-commit is not calling warn-branch-on-checkout (I added the echo "> $*" in the script).
pre-commit.log in the cache dir is not being created.
What am I doing wrong?
Added an example of a run:
$ git checkout examples
Switched to branch 'examples'
Your branch is up to date with 'origin/examples'.
HERE: /home/harleypig/projects/guardrail/.git/hooks
1: /usr/bin/python3 -mpre_commit hook-impl --config=.pre-commit-config.yaml --hook-type=post-checkout --hook-dir /home/harleypig/projects/guardrail/.git/hooks -- 79d1096b98caa40e672a502855cb139d72de2ada 79d1096b98caa40e672a502855cb139d72de2ada 1
Message to stderr if branch..............................................Passed
I added a couple of echo statements to the pre-commit generated hook (the HERE: and 1: lines above).
I don't see the "> blah blah blah" output, so the script isn't being called at all.

Thanks for adding the output.
pre-commit hides a hook's output by default unless the hook fails -- this keeps your output clean and noise-free (noisy output tends to get ignored).
You can force the output to always be displayed by setting the verbose: true option on the hook (or by exiting nonzero -- which doesn't affect post-checkout since it is too late to affect "success"). Note that verbose: true is intended mainly as a debugging mechanism, so adding noise to the output is generally discouraged.
Disclaimer: I created pre-commit.
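For example, a minimal sketch of the consuming config with verbose enabled (repo, rev, and hook id reused from the question above):
repos:
  - repo: https://MyCompany@dev.azure.com/MyCompany/MyProject/_git/myrepo
    rev: v0.1.12
    hooks:
      - id: warn-branch-on-checkout
        args: ['examples']
        verbose: true   # always show this hook's output, even when it passes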

Related

Suppress gitlab CI stage if push only changes README.md

I have a CI build stage that runs whenever someone pushes to the repo, and it takes a long time to run. So I want to configure a .gitlab-ci.yml rule that says if the user is only updating the documentation in README.md, it doesn't need to execute the build stage.
Following the tips in How to exclude gitlab-ci.yml changes from triggering a job, it seems like the way to do this would be to add something like:
stages:
  - A

Stage A:
  stage: A
  rules:
    - changes:
        - "README.md"
      when: never
However, based on the documentation in https://docs.gitlab.com/ee/ci/yaml/#ruleschanges, I think this will also suppress the stage if the push contains multiple files, if one of them is README.md.
In that situation, I want the stage to run, not be suppressed. Is there a way to handle this distinction?
I use this syntax to not trigger pipelines when modifying *.md files
workflow:
  rules:
    - if: '$CI_COMMIT_BRANCH && $CI_COMMIT_BEFORE_SHA !~ /0{40}/'
      changes:
        - "{*[^.]md*,*.[^m]*,*.m,*.m[^d]*,*.md?*,*[^d]}"
    - if: '$CI_PIPELINE_SOURCE == "web"'
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
    - if: '$CI_PIPELINE_SOURCE == "pipeline"'
    - if: '$CI_COMMIT_TAG'
Following the idea from @j_b, I was able to solve this (with the caveat that it exits with the pipeline having been marked as failed, as described in the code below). Here is the relevant code I added to my .gitlab-ci.yml.
stages:
  - Check Only Docs Changed
  - mybuild

# Stage that checks whether only documentation changed. If so, then we won't build a new release.
check-only-docs-changed:
  stage: Check Only Docs Changed
  script:
    - BRANCH=origin/$CI_COMMIT_BRANCH
    - echo "Checking commit to see if only docs have changed. "
    # The line below is from
    # https://stackoverflow.com/questions/424071/how-do-i-list-all-the-files-in-a-commit
    - GET_CMD="git diff-tree --no-commit-id --name-only -r $CI_COMMIT_SHA"
    - FILES_CHANGED=`$GET_CMD`
    - echo "Files in this commit:" $FILES_CHANGED
    - NUM_FILES_CHANGED=`$GET_CMD | wc -l`
    # We consider any file that ends in .md to be a doc file.
    # The "|| true" trick on the line below is to deal with the fact that grep will exit with non-zero if it doesn't find any matches.
    # See https://stackoverflow.com/questions/42251386/the-return-code-from-grep-is-not-as-expected-on-linux
    - NUM_DOC_FILES_CHANGED=`$GET_CMD | grep -c ".*\.md$" || true`
    - echo $NUM_FILES_CHANGED "files changed," $NUM_DOC_FILES_CHANGED "of which were documentation."
    - |
      # We have to test whether NUM_FILES_CHANGED is > 0 because when one branch gets merged into another
      # it will be 0, as will NUM_DOC_FILES_CHANGED.
      if [[ $NUM_FILES_CHANGED -gt 0 && $NUM_FILES_CHANGED -eq $NUM_DOC_FILES_CHANGED ]]
      then
        DID_ONLY_DOCS_CHANGE="1"
        # Write out the env file before we exit. Otherwise, gitlab will complain that the doccheck.env artifact
        # didn't get generated.
        echo "DID_ONLY_DOCS_CHANGE=$DID_ONLY_DOCS_CHANGE" >> doccheck.env
        echo "Only documentation files have changed. Exiting in order to skip further stages of the pipeline."
        # Ideally it would be great to not have to exit with a non-zero code, because this will make the gitlab pipeline
        # look like it failed. However, there is currently no easy way to do this, as discussed in
        # https://stackoverflow.com/questions/67269109/how-do-i-exit-a-gitlab-pipeline-early-without-failure
        # The only way would be to use child pipelines, which is more effort than it's worth for this.
        # See https://stackoverflow.com/questions/67169660/dynamically-including-excluding-jobs-in-gitlab-pipeline and
        # https://stackoverflow.com/questions/71017961/add-gitlab-ci-job-to-pipeline-based-on-script-command-result
        exit 1
      else
        DID_ONLY_DOCS_CHANGE="0"
        echo "DID_ONLY_DOCS_CHANGE=$DID_ONLY_DOCS_CHANGE" >> doccheck.env
      fi
  # The section below makes the environment variable available to other jobs, but those jobs
  # unfortunately cannot access this environment variable in their "rules:" section to control
  # whether they execute or not.
  artifacts:
    reports:
      dotenv: doccheck.env
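For reference, a later job can read the exported variable from the dotenv report in its script section (just not in its rules). A hypothetical sketch, where the job name build-job and the ./do_build.sh command are illustrative and not part of the original pipeline:
# Runs only when check-only-docs-changed succeeded (i.e. it did not exit 1).
build-job:
  stage: mybuild
  script:
    - echo "DID_ONLY_DOCS_CHANGE is $DID_ONLY_DOCS_CHANGE"
    - ./do_build.sh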

Is it possible to check if a rebase is required in a GitLab merge request pipeline?

I need to perform some technical checks on other systems before I can allow branches to be rebased in GitLab. This is why I want to add a pipeline step to the merge request to perform these checks in case a rebase is required. Is it possible to check if a rebase is required in the pipeline? I didn't find any CI variable for this use case.
Thanks for your help!
Thanks to Austin's post I ended up with this script:
script:
  - LATEST_ON_TARGET_BRANCH=$(git log -n 1 --pretty=format:"%H" origin/$CI_MERGE_REQUEST_TARGET_BRANCH_NAME)
  - echo "Latest commit on the target branch is $LATEST_ON_TARGET_BRANCH"
  - COMMON_ANCESTOR=$(git merge-base origin/$CI_MERGE_REQUEST_TARGET_BRANCH_NAME origin/$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME)
  - echo "Common ancestor of source and target branch is $COMMON_ANCESTOR"
  - |
    if [[ "${LATEST_ON_TARGET_BRANCH}" = "${COMMON_ANCESTOR}" ]]; then
      echo "The source branch is up to date with the target branch"
    else
      echo "The source branch is not up to date with the target branch"
      exit 1
    fi
As far as I know there is no GitLab way to check whether or not a branch needs to be rebased.
Basing this response on this previous StackOverflow solution, I would suggest trying to use Git on the command line to determine if a rebase is required:
job:
  script:
    - export BRANCH_NAME=${CI_MERGE_REQUEST_SOURCE_BRANCH_NAME:-$CI_COMMIT_BRANCH}
    - hash1=$(git show-ref --heads -s $CI_DEFAULT_BRANCH)
    - hash2=$(git merge-base $CI_DEFAULT_BRANCH $BRANCH_NAME)
    - |
      if [[ "${hash1}" = "${hash2}" ]]; then
        echo "No rebase is required"
      else
        echo "A rebase is required"
      fi
I have not tested this myself. Please notify me if this fails.
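An alternative, also untested, is git merge-base --is-ancestor, which exits 0 when its first argument is an ancestor of its second. Assuming both remote-tracking refs are available in the CI checkout, something like this could go in the job's script section:
# No rebase is needed if the target branch tip is already contained in the source branch.
if git merge-base --is-ancestor "origin/$CI_MERGE_REQUEST_TARGET_BRANCH_NAME" "origin/$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME"; then
  echo "No rebase is required"
else
  echo "A rebase is required"
  exit 1
fi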

Don't launch a job if the only changes are on a folder

I want to disable some jobs if the only changes that happen in last git commit are in a docs folder. The idea is to be able to push new documentation on master (in a docs folder) without launching jobs responsible for pushing to production.
What I tried:
prod:
  script:
    - whatever
  rules:
    - changes:
        - docs/*
      when: never
It does not work because I want to be able to launch this job if there are changes in docs AND other files.
I also tried with a regular expression (probably wrong), but it does not seem to be supported by GitLab. Something like this:
prod:
  script:
    - whatever
  rules:
    - changes:
        - (docs/*)!*
      when: never
I also tried a suboptimal solution, with a CI variable $CI_COMMIT_MESSAGE:
prod:
  script:
    - whatever
  rules:
    - if: $CI_COMMIT_MESSAGE == "doconly"
      when: never
and it does not work, except if the variable is set manually. Or this:
prod:
  script:
    - whatever
  except:
    variables:
      - if: $CI_COMMIT_MESSAGE == "doconly"
I also tried with a bash script that would set a variable, but variables can't be passed between jobs.
Here is the bash script:
LAST_COMMIT=$(git log --pretty=format:'%H' -n 1)
FILES=$(git diff-tree --no-commit-id --name-only -r ${LAST_COMMIT})
doconly=true
while read -r line && $doconly; do
  if [[ ! "$line" =~ docs/* ]] ; then
    doconly=false
    exit 1
  fi
done <<< "$FILES"
exit 0
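For what it's worth, a variant of that script which anchors the pattern (so only paths under docs/ count as documentation) and exits as soon as it sees anything else might look like this; it is a sketch, not a tested pipeline step:
#!/bin/bash
# List the files touched by the last commit and fail on the first non-docs path.
FILES=$(git diff-tree --no-commit-id --name-only -r HEAD)
while read -r line; do
  if [[ ! "$line" =~ ^docs/ ]]; then
    echo "Non-documentation change detected: $line"
    exit 1
  fi
done <<< "$FILES"
exit 0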

Check if local git repo is ahead/behind remote

I'm developing a git plug-in, and I need to know when a local repo is changed (can commit changes), ahead (can push to remote) or behind (can pull from remote) using the command line.
This is what I am doing so far:
Can commit?
If git diff-index --name-only --ignore-submodules HEAD -- returns something,
then yes, there are changes to commit.
Can push?
If git status -sb contains the word ahead in its output, then yes, there are commits to push.
Can pull?
Nothing implemented yet.
The "Can commit?" part seems to work properly. "Can push?" only works for the master branch, and this is a huge problem.
How can I safely check if, on every branch, a git repo has changes to commit, commits to push, or needs a git pull?
For future reference: as of Git v2.17.0,
git status -sb
contains the word behind, so that can be used directly to check for pulls.
Note: remember to run git fetch before running git status -sb.
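Put together, the check might look like this (a sketch; note that a branch name containing the word ahead or behind would also match):
git fetch
if git status -sb | grep -q behind; then
  echo "can pull"
fi
if git status -sb | grep -q ahead; then
  echo "can push"
fi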
From this answer.
Do a fetch: git fetch.
Get how many commits the current branch is behind: behind_count=$(git rev-list --count HEAD..@{u}).
Get how many commits the current branch is ahead: ahead_count=$(git rev-list --count @{u}..HEAD). (This assumes that where you fetch from is where you push to; see the push.default configuration option.)
If both behind_count and ahead_count are 0, then the current branch is up to date.
If behind_count is 0 and ahead_count is greater than 0, then the current branch is ahead.
If behind_count is greater than 0 and ahead_count is 0, then the current branch is behind.
If both behind_count and ahead_count are greater than 0, then the current branch has diverged.
Explanation:
git rev-list lists all commits in the given commit range. The --count option outputs how many commits would have been listed and suppresses all other output.
HEAD names the current branch.
@{u} refers to the local upstream of the current branch (configured with branch.<name>.remote and branch.<name>.merge). There is also @{push}, which usually points to the same thing as @{u}.
<rev1>..<rev2> specifies a commit range that includes the commits reachable from <rev2> but excludes those reachable from <rev1>. When either revision is omitted, it defaults to HEAD.
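As a sketch, those steps in shell form (assuming the current branch has an upstream configured):
git fetch
behind_count=$(git rev-list --count "HEAD..@{u}")
ahead_count=$(git rev-list --count "@{u}..HEAD")
if (( behind_count == 0 && ahead_count == 0 )); then
  echo "up to date"
elif (( behind_count == 0 )); then
  echo "ahead"
elif (( ahead_count == 0 )); then
  echo "behind"
else
  echo "diverged"
fi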
You can do this with a combination of git merge-base and git rev-parse. If git merge-base <branch> <remote branch> returns the same as git rev-parse <remote branch>, then your local branch is ahead. If it returns the same as git rev-parse <branch>, then your local branch is behind. If merge-base returns a different answer than either rev-parse, then the branches have diverged and you'll need to do a merge.
It would be best to do a git fetch before checking the branches, though, otherwise your determination of whether or not you need to pull will be out of date. You'll also want to verify that each branch you check has a remote tracking branch. You can use git for-each-ref --format='%(upstream:short)' refs/heads/<branch> to do that. That command will return the remote tracking branch of <branch>, or the empty string if it doesn't have one. Somewhere on SO there's a different version which will return an error if the branch doesn't have a remote tracking branch, which may be more useful for your purpose.
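For example, a small sketch of that tracking-branch check (the branch name mybranch is a placeholder):
branch=mybranch   # placeholder
upstream=$(git for-each-ref --format='%(upstream:short)' "refs/heads/$branch")
if [ -z "$upstream" ]; then
  echo "$branch has no remote tracking branch"
else
  echo "$branch tracks $upstream"
fi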
In the end, I implemented this in my C++11 git-ws plugin.
string currentBranch = run("git rev-parse --abbrev-ref HEAD");
bool canCommit = run("git diff-index --name-only --ignore-submodules HEAD --").empty();
bool canPush = stoi(run("git rev-list HEAD...origin/" + currentBranch + " --ignore-submodules --count")[0]) > 0;
Seems to work so far. canPull still needs to be tested and implemented.
Explanation:
currentBranch gets the console output, which is a string of the current branch name
canCommit gets whether the console outputs something (difference between current changes and HEAD, ignoring submodules)
canPush gets the count of changes between origin/currentBranch and the local repo - if > 0, the local repo can be pushed
Thanks to @Trebor, I just threw together a simple fish function for the purpose:
#! /usr/bin/fish
#
# Echos (to stdout) whether your branch is up-to-date, behind, ahead or diverged from another branch.
# Don't forget to fetch before calling.
#
# @param branch
# @param otherbranch
#
# @echo string up-to-date/behind/ahead/diverged
#
# @example
#
#   # if master is ahead of origin/master you can find out like this:
#   if test ( branch-status master origin/master ) = ahead
#       echo "We should push"
#   end
#
function branch-status
    set -l a $argv[1]
    set -l b $argv[2]
    set -l base ( git merge-base $a $b )
    set -l aref ( git rev-parse $a )
    set -l bref ( git rev-parse $b )
    if [ $aref = $bref ]; echo up-to-date
    else if [ $aref = $base ]; echo behind
    else if [ $bref = $base ]; echo ahead
    else ; echo diverged
    end
end
I made a bash version of @user1115652's answer.
function branch_status() {
    local a="master" b="origin/master"
    local base=$( git merge-base $a $b )
    local aref=$( git rev-parse $a )
    local bref=$( git rev-parse $b )
    if [[ $aref == "$bref" ]]; then
        echo up-to-date
    elif [[ $aref == "$base" ]]; then
        echo behind
    elif [[ $bref == "$base" ]]; then
        echo ahead
    else
        echo diverged
    fi
}
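Usage mirrors the fish example above; for instance, after a fetch:
git fetch
if [[ $(branch_status) == ahead ]]; then
  echo "We should push"
fi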

Bash script to capture input, run commands, and print to file

I am trying to do a homework assignment and it is very confusing. I am not sure if the professor's example is in Perl or bash, since it has no header. Basically, I just need help with the meat of the problem: capturing the input and outputting it. Here is the assignment:
In the session, provide a command prompt that includes the working directory, e.g.,
$ ./logger
/home/it244/it244/hw8$
Accept user’s commands, execute them, and display the output on the screen.
During the session, create a temporary file “PID.cmd” (PID is the process ID) to store the command history in the following format (index: command):
1: ls
2: ls -l
If the script is aborted by CTRL+C (signal 2), output a message “aborted by ctrl+c”.
When you quit the logging session (either by “exit” or CTRL+C),
a. Delete the temporary file
b. Print out the total number of the commands in the session and the numbers of successful/failed commands (according to the exit status).
Here is my code so far (which did not go well, I would not try to run it):
#!/bin/sh
trap 'exit 1' 2
trap 'ctrl-c' 2
echo $(pwd)
while true
do
    read -p command
    echo "$command:" $command >> PID.cmd
done
Currently when I run this script I get
command read: 10: arg count
What is causing that?
======UPDATE=========
OK, I made some progress. It's not quite working all the way; it doesn't like my bashtrap or incremental index.
#!/bin/sh
index=0
trap bashtrap INT
bashtrap(){
    echo "CTRL+C aborting bash script"
}
echo "starting to log"
while :
do
    read -p "command:" inputline
    if [ $inputline="exit" ]
    then
        echo "Aborting with Exit"
        break
    else
        echo "$index: $inputline" > output
        $inputline 2>&1 | tee output
        (( index++ ))
    fi
done
This can be achieved in bash, perl, or other languages.
Some hints to get you started in bash:
Question 1: the command prompt (e.g., /home/it244/it244/hw8$)
1) Make sure of the prompt format in the user's .bashrc setup file: see the PS1 variable for debian-like distros.
2) cd into that directory within your bash script.
Question 2: run the user command
1) Get the user input:
read -p "command : " input_cmd
2) Run the user command, sending its output to STDOUT:
bash -c "$input_cmd"
3) Track the user command's exit code:
echo $?
It should be "0" if everything worked fine (you can also find exit codes in the command man pages).
4) Track the command PID if the exit code is OK:
echo $$ >> /tmp/pid_Ok
But take care: the assignment asks you to keep the user command input, not the PID itself as shown here.
5) Trap on exit:
See man trap, as you misunderstood its use: you can create a function that is called on the caught exit or CTRL+C signals.
6) Increment the index in your while loop (on the exit code condition):
index=0
while ...
do
    ...
    ((index++))
done
I guess you have enough to start your homework.
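Putting those hints together, a rough, untested sketch of the main loop might look like this (the prompt format and log file handling are only illustrative):
#!/bin/bash
index=0 success=0 fail=0
logfile="$$.cmd"

# Report, clean up, and show counts on CTRL+C per the assignment.
trap 'echo "aborted by ctrl+c"; rm -f "$logfile"; echo "$index commands: $success ok, $fail failed"; exit 1' INT

while true
do
    # Prompt with the current working directory.
    read -p "$(pwd)\$ " input_cmd || break
    [ "$input_cmd" = "exit" ] && break
    ((index++))
    echo "$index: $input_cmd" >> "$logfile"
    # Run the command and track its exit status.
    if bash -c "$input_cmd"; then
        ((success++))
    else
        ((fail++))
    fi
done

rm -f "$logfile"
echo "$index commands: $success ok, $fail failed"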
Since the example posted used sh, I'll use that in my reply. You need to break down each requirement into its specific lines of supporting code. For example, in order to "provide a command prompt that includes the working directory" you need to actually print the current working directory as the prompt string for the read command, rather than setting the $PS1 variable. This leads to a read command that looks like:
read -p "`pwd -P`\$ " _command
(I use leading underscores for private variables - just a matter of style.)
Similarly, the requirement to do several things on either a trap or a normal exit suggests that a function should be created, which could then be called either by the trap or to exit the loop based on user input. If you wanted to pretty-print the exit message, you might also wrap it in echo commands, and it might look like this:
_cleanup() {
    rm -f $_LOG
    echo
    echo $0 ended with $_success successful commands and $_fail unsuccessful commands.
    echo
    exit 0
}
So after analyzing each of the requirements, you'd need a few counters and a little bit of glue code such as a while loop to wrap them in. The result might look like this:
#!/usr/bin/sh

# Define a function to call on exit
_cleanup() {
    # Remove the log file as per specification #5a
    rm -f $_LOG

    # Display success/fail counts as per specification #5b
    echo
    echo $0 ended with $_success successful commands and $_fail unsuccessful commands.
    echo
    exit 0
}

# Where are we? Get absolute path of $0
_abs_path=$( cd -P -- "$(dirname -- "$(command -v -- "$0")")" && pwd -P )

# Set the log file name based on the path & PID.
# Keep this constant so the log file doesn't wander
# around with the user if they enter a cd command.
_LOG=${_abs_path}/$$.cmd

# Print ctrl+c msg per specification #4,
# then run the cleanup function
trap "echo aborted by ctrl+c;_cleanup" 2

# Initialize counters
_line=0
_fail=0
_success=0

while true
do
    # Count lines to support the required logging format per specification #3
    ((_line++))

    # Set prompt per specification #1 and read command
    read -p "`pwd -P`\$ " _command

    # Echo command to log file as per specification #3
    echo "$_line: $_command" >>$_LOG

    # Arrange to exit on user input with value 'exit' as per specification #5
    if [[ "$_command" == "exit" ]]
    then
        _cleanup
    fi

    # Execute whatever command was entered as per specification #2
    eval $_command

    # Capture the success/fail counts to support specification #5b
    _status=$?
    if [ $_status -eq 0 ]
    then
        ((_success++))
    else
        ((_fail++))
    fi
done
