In a bash script, git does not recognize its own directory? - linux

So I have written a bash script within Atlassian Stash for post-receive events. In this script, after a commit has been made, it creates a CodeCollaborator code review. To create a code review, it needs the commit title, the commit user and the git SHAs of the changes, and it uploads the changes to the review. To get this information, I clone the repository (with and without --depth=1) and work with git log (options).
The problem I am seeing is that if I run the script manually, it works just fine. However, if it runs after a commit has been made, it errors out after it clones the directory, saying it is not a git directory. If I cd into the directory after the script exits, I am able to run git log (and other git commands).
Things I tried to troubleshoot:
1. Checked for permission issues (running it as root); I am not seeing any.
2. Traced it with bash -xv; up to that point everything looks good.
3. Added status checks with $?.
4. Moved .git to git-backup, waited 3 seconds and moved it back; still the same issue.
5. Ran ls -ltra to make sure all the files and the .git directory are there.
Now I am out of options. Has anyone run into this kind of problem before?
Does anyone know where I might be doing something wrong or missing something?
I tried to be as descriptive as possible; if the question does not make sense or you need a sample script, please let me know.
Adding the script and its error output below.
#!/bin/bash -xv
CCollabExe='/usr/local/bin/ccollab'
CCollabUrl='--url http://***:8080'
CCollabUser='--user ******'
CCollabPassword='--password ******'
CCollabConnection="${CCollabExe} ${CCollabUrl} ${CCollabUser} ${CCollabPassword}"
CCollabStuff='/home/stash/repositories/tmp'
CloneDir="${CCollabStuff}/ClonnedDir"
StashUser='******'
StashPass='******'
RepoURLlinkGit="http://${StashUser}:${StashPass}@******:7990/scm/t/test1.git"
unset SSH_ASKPASS
# Test function to check if a variable is empty
CheckIfVarEmpty () {
    local Variable="$1"
    if [[ -z ${Variable} ]] ; then
        echo "Variable $1 '\${Variable}' is empty, exiting"
        echo "Lets try to go back in the git dir" && cd ${CloneDir} && git log -10
        cd /root && cd ${CloneDir}
        [[ -d .git ]] && cp -rp .git git-backup && rm -rf .git && echo "sleeping 3" && sleep 3 && mv git-backup .git
        git log -10
        exit 0
    fi
}
#Create a new CCollab temp dir, clone the directory and get commit title, user and SHA info
rm -rf ${CCollabStuff} && mkdir ${CCollabStuff} && cd ${CCollabStuff}
git clone ${RepoURLlinkGit} ${CloneDir}
cd ${CloneDir}
# below is where it's erroring out.
CommitTitle=$(git log --pretty=format:"%s" -1)
CheckIfVarEmpty ${CommitTitle}
CommitUser=$(git log --pretty=format:"%an" -1)
CheckIfVarEmpty ${CommitUser}
CommitSHA=$(git log --pretty=format:"%h" -2)
CheckIfVarEmpty ${CommitSHA}
CommitSHA1=$(echo $CommitSHA | awk -F' ' '{ print $1 }')
CommitSHA2=$(echo $CommitSHA | awk -F' ' '{ print $2 }')
echo "=========="
The error output is:
remote: rm -rf ${CCollabStuff} && mkdir ${CCollabStuff} && cd ${CCollabStuff}
remote: + rm -rf /home/stash/repositories/tmp
remote: + mkdir /home/stash/repositories/tmp
remote: + cd /home/stash/repositories/tmp
remote: git clone ${RepoURLlinkGit} ${CloneDir}
remote: + git clone http://******:******@******:7990/scm/t/test1.git /home/stash/repositories/tmp/ClonnedDir
remote: Cloning into '/home/stash/repositories/tmp/ClonnedDir'...
remote: cd ${CloneDir}
remote: + cd /home/stash/repositories/tmp/ClonnedDir
remote: CommitTitle=$(git log --pretty=format:"%s" -1)
remote: git log --pretty=format:"%s" -1
remote: ++ git log --pretty=format:%s -1
remote: fatal: Not a git repository: '.'

I know nothing about Atlassian but it's clear from the error output that you're tripping over one of the hook traps I noted in an answer I can't find now:
In a git hook, the environment variable GIT_DIR is set (to . in --bare repos, to .git in non-bare ones). This is valid only until you cd to some other directory, often in a sub-process run from the hook script that has no idea that $GIT_DIR is pointing off to some now-inappropriate place.
(The git clone step works because it is not looking for a git directory, it's just creating a new one.)
The quick and easy fix is unset GIT_DIR.
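Applied to the script above, a minimal sketch of the fix (reusing the question's own variables) would be to clear GIT_DIR before running any git command in the clone:
#!/bin/bash
# The post-receive hook inherits GIT_DIR from Stash; after the cd below it
# points at the wrong place, so clear it and let git discover the repository
# from the current directory instead.
unset GIT_DIR

git clone ${RepoURLlinkGit} ${CloneDir}
cd ${CloneDir}
CommitTitle=$(git log --pretty=format:"%s" -1)   # now runs against the clone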

Related

Change the remote of all git repositories on a system from http to ssh

Recently Github came up with a deprecation notice that the HTTP method of pushing to our repositories is going to expire soon. I've decided to change to the SSH method. On doing that I found that we need to change the remote URL of the repos after setting up keys.
But the change is a tedious process and to do it for all the repositories I have on my local system is quite a lengthy job. Is there some way we can write a Bash script that will go through the directories one by one and then change the remote URL from the HTTP version to the SSH version?
This makes the necessary change from HTTP -> SSH.
git remote set-url origin git@github.com:username/repo-name
The things that we need to change would be the repo-name which can be the same as the directory name.
What I thought about was to run a nested for loop on the parent directory that contains all the git repos. This would be something like:
for DIR in *; do
    for SUBDIR in DIR; do
        ("git remote set-url..."; cd ..;)
    done
done
This will identify all subfolders containing a file or folder named .git, consider it a repo, and run your command.
I strongly recommend you make a backup before running it.
#!/bin/bash
USERNAME="yourusername"
for DIR in $(find . -type d); do
    if [ -d "$DIR/.git" ] || [ -f "$DIR/.git" ]; then
        # Using ( and ) to create a subshell, so the working dir doesn't
        # change in the main script
        # subshell start
        (
            cd "$DIR"
            REMOTE=$(git config --get remote.origin.url)
            REPO=$(basename `git rev-parse --show-toplevel`)
            if [[ "$REMOTE" == "https://github.com/"* ]]; then
                echo "HTTPS repo found ($REPO) $DIR"
                git remote set-url origin git@github.com:$USERNAME/$REPO
                # Check if the conversion worked
                REMOTE=$(git config --get remote.origin.url)
                if [[ "$REMOTE" == "git@github.com:"* ]]; then
                    echo "Repo \"$REPO\" converted successfully!"
                else
                    echo "Failed to convert repo $REPO from HTTPS to SSH"
                fi
            elif [[ "$REMOTE" == "git@github.com:"* ]]; then
                echo "SSH repo - skip ($REPO) $DIR"
            else
                echo "Not Github - skip ($REPO) $DIR"
            fi
        )
        # subshell end
    fi
done
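If you would rather derive the new URL from the existing remote instead of rebuilding it from the directory name, a rough sketch (assuming standard https://github.com/<user>/<repo> remotes) is:
# Rewrite the scheme/host part of the existing remote URL in place.
old_url=$(git config --get remote.origin.url)
new_url=$(printf '%s\n' "$old_url" | sed -E 's#^https://github\.com/#git@github.com:#')
git remote set-url origin "$new_url"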

Verifying multiple directories exist on their appropriate branches

I need to create a new Makefile that sources the master Makefile, and then uses the variables defined within to check if the directories exist in their appropriate local branches. I've read a lot of posts on StackOverflow about checking if directories exist, but I'm stuck on how to find out if they're in the appropriate branches.
#!/bin/ksh
DIRLOC=/var/tmp
DIRNAMES="SchemaExtract SQL Count SchExtArchive"
for DIRNAME in ${DIRNAMES}
do
    if [ -d ${DIRLOC}/${DIRNAME} ]
    then
        echo ${DIRLOC}/${DIRNAME} already exists
    else
        echo ${DIRLOC}/${DIRNAME} Creating ...
        mkdir ${DIRLOC}/${DIRNAME}
        chmod 755 ${DIRLOC}/${DIRNAME}
    fi
done
Any help would be appreciated!
Clarification:
I want to specify in my new Makefile what git branch each directory is supposed to be in. So I need code that reads the directories from the master Makefile, checks if they exist, and if so, compares the locations of the directories found with the locations I specify in the new Makefile to determine that everything is in its correct git branch.
You can use the git ls-tree command to check for a directory's existence in a given branch.
As an example, consider the following repository:
# There are 3 branches.
$ git branch
branch1
branch2
* master
# master contains master_dir
$ ls
master_dir
# branch1 contains master_dir and branch1_dir
$ git checkout branch1
Switched to branch 'branch1'
$ ls
branch1_dir master_dir
# branch2 contains master_dir and branch2_dir
$ git checkout branch2
Switched to branch 'branch2'
$ ls
branch2_dir master_dir
# switch back to the master branch
$ git checkout master
Switched to branch 'master'
$ ls
master_dir
The following commands are run from the master branch.
For branch1:
$ git ls-tree -d branch1:branch1_dir
$ git ls-tree -d branch1:branch2_dir
fatal: Not a valid object name branch1:branch2_dir
For branch2:
$ git ls-tree -d branch2:branch2_dir
$ git ls-tree -d branch2:branch1_dir
fatal: Not a valid object name branch2:branch1_dir
In your shell script, you can use the return value of the command in your conditional:
$ git ls-tree -d branch1:branch1_dir &> /dev/null; \
> if [[ $? -eq 0 ]]; then echo "Exists"; else echo "Does not exist"; fi
Exists
$ git ls-tree -d branch1:branch2_dir &> /dev/null; \
> if [[ $? -eq 0 ]]; then echo "Exists"; else echo "Does not exist"; fi
Does not exist
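Equivalently, the command can be used directly as the condition, which avoids the explicit $? check:
if git ls-tree -d branch1:branch1_dir &> /dev/null; then
    echo "Exists"
else
    echo "Does not exist"
fi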
EDIT: Example shell script using directory definitions in an external file.
$ cat branch-dirs.txt
branch1:branch1_dir
branch2:branch2_dir
branch2:non_existent_dir
$ cat check_dirs.sh
#!/bin/bash
readonly BRANCH_DIR_FILE="./branch-dirs.txt"
for dir_to_check in $(cat "$BRANCH_DIR_FILE"); do
    git ls-tree -d "${dir_to_check}" &> /dev/null
    if [[ $? -eq 0 ]]; then
        echo "${dir_to_check} exists."
    else
        echo "${dir_to_check} does not exist."
    fi
done
$ ./check_dirs.sh
branch1:branch1_dir exists.
branch2:branch2_dir exists.
branch2:non_existent_dir does not exist.
So I was browsing through and came across this post. Wouldn't this work a little better for what I need it to do in the long run since I need it to work from the top-level down?
MY_DIRNAME=../External
ifneq "$(wildcard $(MY_DIRNAME) )" ""
# if directory MY_DIRNAME exists:
INCLUDES += -I../External
else
# if it doesn't:
INCLUDES += -I$(HOME)/Code/External
endif
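Note that $(wildcard ...) only looks at the working tree, not at a particular branch. If the goal is really "this directory exists and lives on the expected branch", a shell sketch combining the filesystem check from your script with the git ls-tree check above (EXPECTED_BRANCH is a hypothetical variable) could look like:
DIRNAME=SchemaExtract
EXPECTED_BRANCH=branch1
# Succeeds only if the directory exists on disk AND is present in the branch.
# (Run the git command from inside the repository so the branch can be resolved.)
if [ -d "${DIRLOC}/${DIRNAME}" ] && git ls-tree -d "${EXPECTED_BRANCH}:${DIRNAME}" > /dev/null 2>&1
then
    echo "${DIRNAME} exists and is on ${EXPECTED_BRANCH}"
else
    echo "${DIRNAME} is missing locally or not on ${EXPECTED_BRANCH}"
fi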

How to create a git alias to recursively run a command on all submodules?

I have the following command I run manually:
$ git fetch --all && git reset --hard @{u} && git submodule foreach --recursive "git fetch --all && git reset --hard @{u}"
I'm using Git v2.13.0. My goals are:
Run the specified command on the parent (current) repository first.
Execute the same command on all submodules, recursively.
I tried to create an alias to do this like so:
[alias]
    run = "!f() { \"$@\" && git submodule foreach --recursive \"$@\"; }; f"
Which would be run like this (using the earlier example):
$ git run "git fetch --all && git reset --hard @{u}"
However I get the following error (with git trace enabled for diagnostics):
09:38:31.170812 git.c:594 trace: exec: 'git-run' 'git fetch --all && git reset --hard @{u}'
09:38:31.170899 run-command.c:369 trace: run_command: 'git-run' 'git fetch --all && git reset --hard @{u}'
09:38:31.172819 run-command.c:369 trace: run_command: 'f() { "$@" && git submodule foreach --recursive "$@"; }; f' 'git fetch --all && git reset --hard @{u}'
09:38:31.173268 run-command.c:228 trace: exec: '/bin/sh' '-c' 'f() { "$@" && git submodule foreach --recursive "$@"; }; f "$@"' 'f() { "$@" && git submodule foreach --recursive "$@"; }; f' 'git fetch --all && git reset --hard @{u}'
f() { "$@" && git submodule foreach --recursive "$@"; }; f: git fetch --all && git reset --hard @{u}: command not found
fatal: While expanding alias 'run': 'f() { "$@" && git submodule foreach --recursive "$@"; }; f': No such file or directory
How can I get the alias to work as I want?
This should work:
[alias]
    # Forcefully perform a git command in the current repo and recursively in all submodules (regardless of exit status).
    # NOTE: The command (after `git run`) must be in quotes
    # Example: git run "git checkout master"
    run = !sh -c '\
        $@ && \
        git submodule foreach --recursive \"$@ || :\" && \
        :' -
I tend to just write commands using !sh -c. I find that it's a bit easier for more complicated commands. The tricky part with this approach usually has to do with escaping quote marks and arguments.
On a related note, I really wanted to write an alias that would run a git command recursively, like git r checkout master. I tried writing a git alias like:
[alias]
    r = !sh -c '\
        git $@ && \
        git submodule foreach --recursive \"git $@ || :\" && \
        :' -
This works great for single-argument git commands (like git r status), but when trying to run a multi-argument git command, it breaks. For example, running git r checkout master results in the following output:
Already on 'master'
Your branch is up-to-date with 'origin/master'.
Entering 'submodules/MY_SUBMODULE'
/usr/local/git/libexec/git-core/git-submodule: line 360: git checkout: command not found
Stopping at 'submodules/MY_SUBMODULE'; script returned non-zero status.
The workaround is to pass in the remaining arguments in a string, like git r "checkout master".
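One possible tweak (an untested sketch, not from the answer above) is to hand the submodule side a single string built with $*, so a multi-word command like git r checkout master survives without the extra quotes:
[alias]
    # $* joins the arguments into one word, so `git r checkout master`
    # passes "git checkout master || :" to foreach as a single command.
    r = !sh -c '\
        git $@ && \
        git submodule foreach --recursive \"git $* || :\" && \
        :' -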

Find a specific folder in all remote and local GIT branches

I have a few hundred remote and local branches. I wonder whether there is a command to help me find a folder with a specific name in all branches.
My git version is 1.8.3.1. I also have smartgit installed if it matters.
Thanks in advance.
The following command will output all refs (local and remote) that point to a commit which contains the path specified in the variable SEARCH_PATH:
SEARCH_PATH="somePath"
git for-each-ref --format="%(refname)" refs/heads refs/remotes |
while read ref
do
    if [[ `git ls-tree -r --name-only $ref` =~ "$SEARCH_PATH" ]] ; then
        echo $ref;
    fi
done
You can run the following to list your required folders/files:
for line in `git for-each-ref --format="%(refname)" refs/heads`;
do
    git ls-tree -r $line | grep 'file_regex'
done

How to run a series of commands with a single command in the command line?

I typically run the following commands to deploy a particular app:
compass compile -e production --force
git add .
git commit -m "Some message"
git push
git push production master
How can I wrap that up into a single command?
I'd need to be able to customize the commit message. So the command might look something like:
deploy -m "Some message"
There are two possibilities:
1. a script, as others answered
2. a function, defined in your .bash_profile:
deploy() {
    compass compile -e production --force &&
    git add . &&
    git commit -m "$@" &&
    git push &&
    git push production master
}
Without arguments, you'd have a third option, namely an alias:
alias deploy="compass compile -e production --force &&
    git add . &&
    git commit -m 'Dumb message' &&
    git push &&
    git push production master"
You could create a function that does what you want, and pass the commit message as argument:
function deploy() {
    compass compile -e production --force
    git add .
    git commit "$@"
    git push
    git push production master
}
Put that in your .bashrc and you're good to go.
You can make a shell script. Something that looks like this (note no input validation etc):
#!/bin/sh
compass compile -e production --force
git add .
git commit -m "$1"
git push
git push production master
Save that to myscript.sh, chmod +x it, then do something like ./myscript.sh "Some message".
You can write a shell script for this
#!/bin/bash
compass compile -e production --force
git add .
git commit -m "$1"
git push
git push production master
Save this to 'deploy' and do a chmod 7xx on it. Now you can use it as ./deploy "Some message"
You could write these commands into a file named deploy.sh.
Then make it executable and run it as sh deploy.sh.
You could even add it to your PATH by exporting the path where you save the script.
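For example (a rough sketch; ~/bin is just an assumed location for the script):
mkdir -p ~/bin && cp deploy.sh ~/bin/
chmod +x ~/bin/deploy.sh
export PATH="$HOME/bin:$PATH"   # add this line to ~/.bashrc to make it permanent
deploy.sh "Some message"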
Everyone mentions writing a script, and this is probably the best way of doing it.
However, you might someday want another way: chaining commands with &&, for example:
cd ../ && touch abc
will create a file "abc" in the parent directory :)
It is just to let you know about such thing, for this particular scenario (and 99% of the others) please take a look at other answers :)
I would go through the effort of making the command work for more than just the current directory. One of the most versatile ways of doing this is to use getopt in a Bash script. Make sure you have getopt installed, create deploy.sh, then chmod 755 deploy.sh, and then do something like this:
#!/bin/bash
declare -r GETOPT=/usr/bin/getopt
declare -r ECHO='builtin echo'
declare -r CAT=/bin/cat                # used by usage()
declare -r COMPASS=/path/to/compass
declare -r GIT=/path/to/git
SCRIPTNAME=$(basename "$0")            # used by usage() and the getopt error messages
sanity() {
    # Sanity check our runtime environment to make sure all needed apps are there.
    # ($ECHO is a shell builtin, so it is not checked as a file here.)
    for bin in $GETOPT $COMPASS $GIT
    do
        if [ ! -x $bin ]
        then
            $ECHO "Cannot find binary $bin"
            return 1
        fi
    done
    return 0
}
usage() {
    $CAT <<!
${SCRIPTNAME}: Compile, add and commit directories
Usage: ${SCRIPTNAME} -e <env> [-v]
    -p|--path=<path to add>
    -c|--comment="Comment to add"
    -e|--environment=<production|staging|dev>
Example:
    $SCRIPTNAME -p /opt/test/env -c "This is the comment" -e production
!
}
checkopt() {
    # Since getopt is used within this function, it must be called as
    # checkopt "$@"
    local SHORTOPT="-hp::c::e::"
    local LONGOPT="help,path::,comment::,environment::"
    eval set -- "`$GETOPT -u -o $SHORTOPT --long $LONGOPT -n $SCRIPTNAME -- $@`"
    while true
    do
        case "$1" in
            -h|--help)
                return 1
                ;;
            -p|--path)
                ADD_PATH="$2"    # distinct name so the shell's PATH is not overwritten
                shift 2
                ;;
            -c|--comment)
                COMMENT=$2
                shift 2
                ;;
            -e|--environment)
                ENV="$2"
                shift 2
                ;;
            --)
                shift
                break
                ;;
            *)
                $ECHO "what is $1?"
                ;;
        esac
    done
}
if ! sanity
then
    $ECHO "Sanity check failed - can't find proper system binaries"
    exit 1
fi
if checkopt "$@"
then
    $ECHO "Running Compass Compile & Git commit sequence..."
    $COMPASS compile -e $ENV --force
    $GIT add $ADD_PATH
    $GIT commit -m "$COMMENT"
    $GIT push
    $GIT push $ENV master
else
    usage
    exit 1
fi
exit 0
