Shell script to clone a GitHub Repo - linux

I am trying to automate a process that contains a series of git commands.
I want the shell script to deal with some interactive commands, like passing the username and password to git clone url -v. I verified that if I just run git clone url -v it will show the following in order:
cloning into someRepo
asking for username
asking for password
I've tried:
echo -e 'username\n' | git clone url -v
echo -e 'username\npassword\n' | git clone url -v
git clone url -v <<< username\npassword\n
(sleep 5;echo -e 'username\n' | git clone url -v)
I thought that the first message (cloning into someRepo) might take some time, which is why I added the sleep. None of these works; all of them stop at the same Username for url: prompt.
Having spent a lot of time on this, I know that
git clone https://$username:$password@enterpriseGithub.com/org/repo
works, but it is UNSAFE to use since the shell history and logs show the username and password explicitly.

Better practice would be to avoid username/password authentication altogether (for example by configuring agent-based auth, ideally backed by private keys stored on physical tokens), or to set up credential storage in a keystore provided (and hopefully secured) by your operating system -- but if you just want to keep credentials off the command line, that can be done:
# Assume that we already know the credentials we want to store...
gitUsername="some"; gitPassword="values"
# Create a file containing the credentials readable only to the current user
mkdir -p "$HOME/.git-creds/https"
chmod 700 "$HOME/.git-creds"
cat >"$HOME/.git-creds/https/enterprise-github.com" <<EOF
username=$gitUsername
password=$gitPassword
EOF
# Generate a script that can retrieve stored credentials
mkdir -p -- "$HOME/bin"
cat >"$HOME/bin/git-retrieve-creds" <<'EOF'
#!/usr/bin/env bash
declare -A args=( )
while IFS= read -r line; do
  case $line in
    *..*) echo "ERROR: Invalid request" >&2; exit 1;;
    *=*)  args[${line%%=*}]=${line#*=} ;;
    '')   break ;;
  esac
done
[[ ${args[protocol]} && ${args[host]} ]] || {
  echo "Did not retrieve protocol and host" >&2; exit 1;
}
f="$HOME/.git-creds/${args[protocol]}/${args[host]}"
[[ -s $f ]] && cat -- "$f"
EOF
chmod +x "$HOME/bin/git-retrieve-creds"
# And configure git to use that
git config --global credential.helper "$HOME/bin/git-retrieve-creds"
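To check that the helper is wired up, you can ask git to resolve credentials for that host directly; git credential fill speaks the same key=value protocol the helper script reads (the host below matches the file created above):
printf 'protocol=https\nhost=enterprise-github.com\n\n' | git credential fill
If everything is in place, the output includes the username= and password= lines stored earlier, and git clone against that host will no longer prompt.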

Related

Change the remote of all git repositories on a system from http to ssh

Recently Github came up with a deprecation notice that the HTTP method of pushing to our repositories is going to expire soon. I've decided to change to the SSH method. On doing that I found that we need to change the remote URL of the repos after setting up keys.
But the change is a tedious process and to do it for all the repositories I have on my local system is quite a lengthy job. Is there some way we can write a Bash script that will go through the directories one by one and then change the remote URL from the HTTP version to the SSH version?
This makes the necessary change from HTTP -> SSH.
git remote set-url origin git@github.com:username/repo-name
The things that we need to change would be the repo-name which can be the same as the directory name.
What I thought about was to run a nested for loop on the parent directory that contains all the git repos. This would be something like:
for DIR in *; do
  for SUBDIR in DIR; do
    ("git remote set-url..."; cd ..;)
  done
done
This will identify all subfolders containing a file or folder named .git, consider it a repo, and run your command.
I strongly recommend you make a backup before running it.
#!/bin/bash
USERNAME="yourusername"
for DIR in $(find . -type d); do
  if [ -d "$DIR/.git" ] || [ -f "$DIR/.git" ]; then
    # Using ( and ) to create a subshell, so the working dir doesn't
    # change in the main script
    # subshell start
    (
      cd "$DIR"
      REMOTE=$(git config --get remote.origin.url)
      REPO=$(basename `git rev-parse --show-toplevel`)
      if [[ "$REMOTE" == "https://github.com/"* ]]; then
        echo "HTTPS repo found ($REPO) $DIR"
        git remote set-url origin git@github.com:$USERNAME/$REPO
        # Check if the conversion worked
        REMOTE=$(git config --get remote.origin.url)
        if [[ "$REMOTE" == "git@github.com:"* ]]; then
          echo "Repo \"$REPO\" converted successfully!"
        else
          echo "Failed to convert repo $REPO from HTTPS to SSH"
        fi
      elif [[ "$REMOTE" == "git@github.com:"* ]]; then
        echo "SSH repo - skip ($REPO) $DIR"
      else
        echo "Not Github - skip ($REPO) $DIR"
      fi
    )
    # subshell end
  fi
done
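Save the script under any name (convert-to-ssh.sh is just an example), make it executable, and run it from the parent directory that holds all your repositories:
chmod +x convert-to-ssh.sh
./convert-to-ssh.sh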

Using git command with -q but doesn't stay quiet when failed?

I'm creating a script where I clone git repositories. I want my script to print "Cloning OK" when the cloning is successful and "Cloning FAILED" when it fails and ignore all command output for both occasions. This is the code I'm using:
(git clone -q "$url" && echo "$url: Cloning OK") || echo "$url: Cloning FAILED" >&2
The problem is that for successful cloning the command stays quiet but for unsuccessful cloning it doesn't. How can I make it quiet for both occasions?
Thanks in advance
You have to silence the command by sending standard output and standard error somewhere other than the terminal. This is most easily achieved by sending both output streams to /dev/null:
(git clone -q "$url" >/dev/null 2>&1 && …
Please note that silencing this command will make your script harder to debug.
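As a minimal sketch, the same redirection inside a loop over several repositories might look like this (the URLs are placeholders):
for url in https://github.com/example/one.git https://github.com/example/two.git; do
  if git clone -q "$url" >/dev/null 2>&1; then
    echo "$url: Cloning OK"
  else
    echo "$url: Cloning FAILED" >&2
  fi
done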

run gpg encryption command through cronjob

I have an sh script that runs a gpg encryption command and is executed through a cron job.
This is a part of my script
do
gpg --batch --no-tty --yes --recipient $Key --output $Outputdir/${v}.pgp --encrypt ${v}
echo "$?"
if ["$?" -eq 0 ];
then
mv $Inputdir/${v} $Readydir/
echo "file moved"
else
echo "error in encryption"
fi
done
The echo $? prints 2.
I also tried the below command:
gpg --batch --home-dir dir --recipient $Key --output $Outputdir/${v}.pgp --encrypt ${v}
where dir=/usr/bin/gpg
My complete script
#set -x
PT=/gonm1_apps/xfb/ref/phoenix_drop
Inputdir=`grep Inputdir ${PT}/param.cfg | cut -d "=" -f2`
Outputdir=`grep Outputdir ${PT}/param.cfg | cut -d "=" -f2`
Key=`grep Key ${PT}/param.cfg | cut -d "=" -f2`
Readydir=`grep Readydir ${PT}/param.cfg | cut -d "=" -f2`
echo $USER
if [ "$(ls -la $Inputdir | grep -E 'S*.DAT')" ]; then
echo "Take action $Inputdir is not Empty"
cd $Inputdir
for v in `ls SID_090_*`
do
gpg --recipient $Key --output $Outputdir/${v}.pgp --encrypt ${v}
echo "$?"
if ["$?" -eq 0 ];
then
mv $Inputdir/${v} $Readydir/
echo "file moved"
else
echo "error in encryption"
fi
done
cd ${PT}
else
echo "$Inputdir is Empty"
fi
GnuPG manages individual keyrings and "GnuPG home directories" per user. A common problem when calling GnuPG from web services or cron jobs is that it executes as another user.
This means that the other user's GnuPG looks up keys in the wrong keyring (home directory); and even if that is fixed, that user should not have access permissions to the GnuPG home directory at all (not an issue when running a cron job or web server as root, but that shouldn't be done, for pretty much this reason first of all).
There are different ways to mitigate the issue:
Run the web server or cron job under another user. This might be a viable solution for cron jobs, but very likely not for web services. sudo or su might help at running GnuPG as another user.
Import the required (private/public) keys to the other user's GnuPG home directory, for example by switching to the www-data or root user (or whatever it's called on your machine).
Change GnuPG's behavior to use another user's home directory. You can do so with --homedir /home/[username]/.gnupg, or the shorter --homedir ~username/.gnupg if your shell resolves the short-hand. Better not to do this: GnuPG is very strict about verifying access privileges and refuses to work if those are too relaxed. GnuPG doesn't like permissions that allow any user but the owner to access a GnuPG home directory, for good reasons.
Change GnuPG's behavior to use a completely unrelated folder as home directory, for example somewhere your application is storing data anyway. This is usually the best solution. Make sure to set the owner and access permissions appropriately. An example would be the option --homedir /var/lib/foo-product/gnupg.
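As a rough sketch of that last option (the owner cronuser and the key file path are placeholders, and install -d needs root to set the owner):
install -d -m 700 -o cronuser /var/lib/foo-product/gnupg
gpg --homedir /var/lib/foo-product/gnupg --batch --yes --recipient "$Key" --output "$Outputdir/${v}.pgp" --encrypt "${v}"
The recipient's key of course has to be imported into that home directory first (e.g. gpg --homedir /var/lib/foo-product/gnupg --import /path/to/public.key).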
If
the echo $USER prints as root when executed from the cron job and as
username when executed manually,
then you need to log in as that user and use a command such as "crontab -e" to add a cron job for that user that runs your script.
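For example, after switching to that user, run crontab -e and add an entry along these lines (the script path is only a placeholder):
0 2 * * * /home/youruser/bin/encrypt_and_move.sh
This runs the script daily at 02:00 with that user's environment and GnuPG home directory.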

How to run a series of commands with a single command in the command line?

I typically run the following commands to deploy a particular app:
compass compile -e production --force
git add .
git commit -m "Some message"
git push
git push production master
How can I wrap that up into a single command?
I'd need to be able to customize the commit message. So the command might look something like:
deploy -m "Some message"
There are two possibilities:
a script, as others answered
a function, defined in your .bash_profile:
deploy() {
compass compile -e production --force &&
git add . &&
git commit -m "$#" &&
git push &&
git push production master
}
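With this function you would call deploy "Some message" rather than deploy -m "Some message": the function already supplies the -m flag and forwards its arguments to git commit.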
Without arguments, you'd have a third option, namely an alias:
alias deploy="compass compile -e production --force &&
git add . &&
git commit -m 'Dumb message' &&
git push &&
git push production master"
You could create a function that does what you want, and pass the commit message as argument:
function deploy() {
compass compile -e production --force
git add .
git commit "$#"
git push
git push production master
}
Put that in your .bashrc and you're good to go.
You can make a shell script. Something that looks like this (note no input validation etc):
#!/bin/sh
compass compile -e production --force
git add .
git commit -m "$1"
git push
git push production master
Save that to myscript.sh, chmod +x it, then do something like ./myscript.sh "Some message".
You can write a shell script for this
#!/bin/bash
compass compile -e production --force
git add .
git commit -m "$1"
git push
git push production master
Save this to 'deploy' and do a chmod 7xx on it. Now you can use it as ./deploy "Some message"
You could write these commands into a file named deploy.sh.
Then make it executable and run it as ./deploy.sh (or with sh deploy.sh).
You could even put it on your PATH by exporting the directory where you save the script.
Everyone mentions writing a script, and this is probably the best way of doing it.
However you might someday want to use another way - merge commands with &&, for example:
cd ../ && touch abc
will create a file "abc" in the parent directory :)
It is just to let you know about such a thing; for this particular scenario (and 99% of the others) please take a look at the other answers :)
I would go through the effort of making the command work for more than just the current directory. One of the most versatile ways of doing this is to use getopt in a Bash script. Make sure you have getopt installed, create deploy.sh, chmod 755 deploy.sh, and then do something like this:
#!/bin/bash
declare -r GETOPT=/usr/bin/getopt
declare -r ECHO='builtin echo'
declare -r COMPASS=/path/to/compass
declare -r GIT=/path/to/git
sanity() {
# Sanity check our runtime environment to make sure all needed apps are there.
for bin in $GETOPT $ECHO $COMPASS $GIT
do
if [ ! -x $bin ]
then
log error "Cannot find binary $bin"
return 1
fi
done
return 0
}
usage() {
$CAT <<!
${SCRIPTNAME}: Compile, add and commit directories
Usage: ${SCRIPTNAME} -e <env> [-v]
-p|--path=<path to add>
-c|--comment="Comment to add"
-e|--environment=<production|staging|dev>
Example:
$SCRIPTNAME -p /opt/test/env -c "This is the comment" -e production
!
}
checkopt() {
# Since getopt is used within this function, it must be called as
# checkopt "$#"
local SHORTOPT="-hp::c::e::"
local LONGOPT="help,path::,comment::,environment::"
eval set -- "`$GETOPT -u -o $SHORTOPT --long $LONGOPT -n $SCRIPTNAME -- $#`"
while true
do
case "$1" in
-h|--help)
return 1
;;
-p|--path)
PATH="$2"
shift 2
;;
-c|--comment)
COMMENT=$2
shift 2
;;
-e|--environment)
ENV="$2"
shift 2
;;
--)
shift
break
;;
*)
$ECHO "what is $1?"
;;
esac
done
}
if ! sanity
then
die "Sanity check failed - Cant find proper system binaries"
fi
if checkopt "$@"
then
$ECHO "Running Compass Compile & Git commit sequence..."
$COMPASS compile -e $ENV --force
$GIT add $PATH
$GIT commit -m $COMMENT
$GIT push
$GIT push $ENV master
else
usage
exit 1
fi
exit 0

Retaining file permissions with Git

I want to version control my web server as described in Version control for my web server, by creating a git repo out of my /var/www directory. My hope was that I would then be able to push web content from our dev server to github, pull it to our production server, and spend the rest of the day at the pool.
Apparently a kink in my plan is that Git won't respect file permissions (I haven't tried it, only reading about it now.) I guess this makes sense in that different boxes are liable to have different user/group setups. But if I wanted to force permissions to propagate, knowing my servers are configured the same, do I have any options? Or is there an easier way to approach what I'm trying to do?
Git is a version control system created for software development, so from the whole set of modes and permissions it stores only the executable bit (for regular files) and the symlink bit. If you want to store full permissions, you need a third-party tool, like git-cache-meta (mentioned by VonC) or Metastore (used by etckeeper). Or you can use IsiSetup, which IIRC uses git as its backend.
See Interfaces, frontends, and tools page on Git Wiki.
The git-cache-meta mentioned in the SO question "git - how to recover the file permissions git thinks the file should be?" (and the git FAQ) is the more straightforward approach.
The idea is to store in a .git_cache_meta file the permissions of the files and directories.
It is a separate file not versioned directly in the Git repo.
That is why the usage for it is:
$ git bundle create mybundle.bdl master; git-cache-meta --store
$ scp mybundle.bdl .git_cache_meta machine2:
#then on machine2:
$ git init; git pull mybundle.bdl master; git-cache-meta --apply
So you:
bundle your repo and save the associated file permissions,
copy those two files to the remote server,
restore the repo there and apply the permissions.
This is quite late but might help some others. I do what you want to do by adding two git hooks to my repository.
.git/hooks/pre-commit:
#!/bin/bash
#
# A hook script called by "git commit" with no arguments. The hook should
# exit with non-zero status after issuing an appropriate message if it wants
# to stop the commit.
SELF_DIR=`git rev-parse --show-toplevel`
DATABASE=$SELF_DIR/.permissions
# Clear the permissions database file
> $DATABASE
echo -n "Backing-up permissions..."
IFS_OLD=$IFS; IFS=$'\n'
for FILE in `git ls-files --full-name`
do
# Save the permissions of all the files in the index
echo $FILE";"`stat -c "%a;%U;%G" $FILE` >> $DATABASE
done
for DIRECTORY in `git ls-files --full-name | xargs -n 1 dirname | uniq`
do
# Save the permissions of all the directories in the index
echo $DIRECTORY";"`stat -c "%a;%U;%G" $DIRECTORY` >> $DATABASE
done
IFS=$IFS_OLD
# Add the permissions database file to the index
git add $DATABASE -f
echo "OK"
.git/hooks/post-checkout:
#!/bin/bash
SELF_DIR=`git rev-parse --show-toplevel`
DATABASE=$SELF_DIR/.permissions
echo -n "Restoring permissions..."
IFS_OLD=$IFS; IFS=$'\n'
while read -r LINE || [[ -n "$LINE" ]];
do
ITEM=`echo $LINE | cut -d ";" -f 1`
PERMISSIONS=`echo $LINE | cut -d ";" -f 2`
USER=`echo $LINE | cut -d ";" -f 3`
GROUP=`echo $LINE | cut -d ";" -f 4`
# Set the file/directory permissions
chmod $PERMISSIONS $ITEM
# Set the file/directory owner and groups
chown $USER:$GROUP $ITEM
done < $DATABASE
IFS=$IFS_OLD
echo "OK"
exit 0
The first hook is called when you "commit": it reads the ownership and permissions of all files in the repository, stores them in a file called .permissions in the root of the repository, and then adds the .permissions file to the commit.
The second hook is called when you "checkout": it goes through the list of files in the .permissions file and restores their ownership and permissions.
You might need to do the commit and checkout using sudo.
Make sure the pre-commit and post-checkout scripts have execution permission.
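For example:
chmod +x .git/hooks/pre-commit .git/hooks/post-checkout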
We can improve on the other answers by changing the format of the .permissions file to be executable chmod statements, and to make use of the -printf parameter to find. Here is the simpler .git/hooks/pre-commit file:
#!/usr/bin/env bash
echo -n "Backing-up file permissions... "
cd "$(git rev-parse --show-toplevel)"
find . -printf 'chmod %m "%p"\n' > .permissions
git add .permissions
echo done.
...and here is the simplified .git/hooks/post-checkout file:
#!/usr/bin/env bash
echo -n "Restoring file permissions... "
cd "$(git rev-parse --show-toplevel)"
. .permissions
echo "done."
Remember that other tools might have already configured these scripts, so you may need to merge them together. For example, here's a post-checkout script that also includes the git-lfs commands:
#!/usr/bin/env bash
echo -n "Restoring file permissions... "
cd "$(git rev-parse --show-toplevel)"
. .permissions
echo "done."
command -v git-lfs >/dev/null 2>&1 || { echo >&2 "\nThis repository is configured for Git LFS but 'git-lfs' was not found on your path. If you no longer wish to use Git LFS, remove this hook by deleting .git/hooks/post-checkout.\n"; exit 2; }
git lfs post-checkout "$@"
In case you are coming into this right now, I've just been through it today and can summarize where this stands. If you did not try this yet, some details here might help.
I think @Omid Ariyan's approach is the best way. Add the pre-commit and post-checkout scripts. DON'T forget to name them exactly the way Omid does and DON'T forget to make them executable. If you forget either of those, they have no effect and you run "git commit" over and over wondering why nothing happens :) Also, if you cut and paste out of the web browser, be careful that the quotation marks and ticks are not altered.
If you run the pre-commit script once (by running a git commit), then the file .permissions will be created. You can add it to the repository and I think it is unnecessary to add it over and over at the end of the pre-commit script. But it does not hurt, I think (hope).
There are a few little issues about the directory name and the existence of spaces in the file names in Omid's scripts. The spaces were a problem here and I had some trouble with the IFS fix. For the record, this pre-commit script did work correctly for me:
#!/bin/bash
SELF_DIR=`git rev-parse --show-toplevel`
DATABASE=$SELF_DIR/.permissions
# Clear the permissions database file
> $DATABASE
echo -n "Backing-up file permissions..."
IFSold=$IFS
IFS=$'\n'
for FILE in `git ls-files`
do
# Save the permissions of all the files in the index
echo $FILE";"`stat -c "%a;%U;%G" $FILE` >> $DATABASE
done
IFS=${IFSold}
# Add the permissions database file to the index
git add $DATABASE
echo "OK"
Now, what do we get out of this?
The .permissions file is in the top level of the git repo. It has one line per file, here is the top of my example:
$ cat .permissions
.gitignore;660;pauljohn;pauljohn
05.WhatToReport/05.WhatToReport.doc;664;pauljohn;pauljohn
05.WhatToReport/05.WhatToReport.pdf;664;pauljohn;pauljohn
As you can see, we have
filepath;perms;owner;group
In the comments about this approach, one of the posters complains that it only works with the same username, and that is technically true, but it is very easy to fix. Note the post-checkout script has two action pieces:
# Set the file permissions
chmod $PERMISSIONS $FILE
# Set the file owner and groups
chown $USER:$GROUP $FILE
So I am only keeping the first one; that's all I need. My user name on the web server is indeed different, but more importantly you can't run chown unless you are root. You can run chgrp, however. It is plain enough how to put that to use.
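A minimal sketch of a restore loop that resets only the mode and group (so it works without root), assuming the same filepath;perms;owner;group format and the $DATABASE variable from the hook above:
while IFS=';' read -r FILE PERMS OWNER GROUP; do
  chmod "$PERMS" "$FILE"
  chgrp "$GROUP" "$FILE"   # works without root if you own the file and belong to the group
done < "$DATABASE"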
In the first answer in this post, the one that is most widely accepted, the suggestion is to use git-cache-meta, a script that does the same work the pre/post hook scripts here are doing (parsing output from git ls-files). These scripts are easier for me to understand; the git-cache-meta code is rather more elaborate. It is possible to keep git-cache-meta in the path and write pre-commit and post-checkout scripts that use it.
Spaces in file names are a problem with both of Omid's scripts. In the post-checkout script, you'll know you have spaces in file names if you see errors like this:
$ git checkout -- upload.sh
Restoring file permissions...chmod: cannot access '04.StartingValuesInLISREL/Open': No such file or directory
chmod: cannot access 'Notebook.onetoc2': No such file or directory
chown: cannot access '04.StartingValuesInLISREL/Open': No such file or directory
chown: cannot access 'Notebook.onetoc2': No such file or directory
I'm checking on solutions for that. Here's something that seems to work, but I've only tested it in one case:
#!/bin/bash
SELF_DIR=`git rev-parse --show-toplevel`
DATABASE=$SELF_DIR/.permissions
echo -n "Restoring file permissions..."
IFSold=${IFS}
IFS=$'\n'
while read -r LINE || [[ -n "$LINE" ]];
do
FILE=`echo $LINE | cut -d ";" -f 1`
PERMISSIONS=`echo $LINE | cut -d ";" -f 2`
USER=`echo $LINE | cut -d ";" -f 3`
GROUP=`echo $LINE | cut -d ";" -f 4`
# Set the file permissions
chmod $PERMISSIONS $FILE
# Set the file owner and groups
chown $USER:$GROUP $FILE
done < $DATABASE
IFS=${IFSold}
echo "OK"
exit 0
Since the permissions information is one line at a time, I set IFS to $'\n', so only line breaks are seen as separators.
I read that it is VERY IMPORTANT to set the IFS environment variable back the way it was! You can see why a shell session might go badly if you leave newline as the only separator.
In pre-commit/post-checkout, an option would be to use the "mtree" (FreeBSD) or "fmtree" (Ubuntu) utility, which "compares a file hierarchy against a specification, creates a specification for a file hierarchy, or modifies a specification."
The default keyword set is flags, gid, link, mode, nlink, size, time, type, and uid. This can be fitted to the specific purpose with the -k switch.
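A rough sketch of how the hooks could use it (the keyword list and the spec file name .permissions_spec are illustrative):
# pre-commit: record mode, uid and gid for the tree into a spec file
mtree -c -k mode,uid,gid -p . > .permissions_spec
git add .permissions_spec
# post-checkout / post-merge: reapply the recorded mode, uid and gid
mtree -U -p . -f .permissions_spec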
I am running on FreeBSD 11.1 (the FreeBSD jail virtualization concept makes the operating system optimal for me). The current version of Git I am using is 2.15.1, and I prefer to run everything in shell scripts. With that in mind, I modified the suggestions above as follows:
git push: .git/hooks/pre-commit
#! /bin/sh -
#
# A hook script called by "git commit" with no arguments. The hook should
# exit with non-zero status after issuing an appropriate message if it wants
# to stop the commit.
SELF_DIR=$(git rev-parse --show-toplevel);
DATABASE=$SELF_DIR/.permissions;
# Clear the permissions database file
> $DATABASE;
printf "Backing-up file permissions...\n";
OLDIFS=$IFS;
IFS=$'\n';
for FILE in $(git ls-files);
do
# Save the permissions of all the files in the index
printf "%s;%s\n" $FILE $(stat -f "%Lp;%u;%g" $FILE) >> $DATABASE;
done
IFS=$OLDIFS;
# Add the permissions database file to the index
git add $DATABASE;
printf "OK\n";
git pull: .git/hooks/post-merge
#! /bin/sh -
SELF_DIR=$(git rev-parse --show-toplevel);
DATABASE=$SELF_DIR/.permissions;
printf "Restoring file permissions...\n";
OLDIFS=$IFS;
IFS=$'\n';
while read -r LINE || [ -n "$LINE" ];
do
FILE=$(printf "%s" $LINE | cut -d ";" -f 1);
PERMISSIONS=$(printf "%s" $LINE | cut -d ";" -f 2);
USER=$(printf "%s" $LINE | cut -d ";" -f 3);
GROUP=$(printf "%s" $LINE | cut -d ";" -f 4);
# Set the file permissions
chmod $PERMISSIONS $FILE;
# Set the file owner and groups
chown $USER:$GROUP $FILE;
done < $DATABASE
IFS=$OLDIFS
printf "OK\n";
exit 0;
If for some reason you need to recreate the script, the .permissions file output should have the following format:
.gitignore;644;0;0
For a .gitignore file with 644 permissions given to root:wheel
Notice I had to make a few changes to the stat options.
Enjoy,
One addition to @Omid Ariyan's answer is permissions on directories. Add this after the for loop's done in his pre-commit script.
for DIR in $(find ./ -mindepth 1 -type d -not -path "./.git" -not -path "./.git/*" | sed 's#^\./##')
do
# Save the permissions of all the directories in the index
echo $DIR";"`stat -c "%a;%U;%G" $DIR` >> $DATABASE
done
This will save directory permissions as well.
Another option is git-store-meta. As the author described in this superuser answer:
git-store-meta is a perl script which integrates the nice features of git-cache-meta, metastore, setgitperms, and mtimestore.
An improved version of the answer by https://stackoverflow.com/users/9932792/tammer-saleh:
It only updates the permissions on changed files.
It handles symlinks.
It ignores empty directories (git cannot handle them).
.git/hooks/pre-commit:
#!/usr/bin/env bash
echo -n "Backing-up file permissions... "
cd "$(git rev-parse --show-toplevel)"
find . -type d ! -empty -printf 'X="%p"; chmod %m "$X"; chown %U:%G "$X"\n' > .permissions
find . -type f -printf 'X="%p"; chmod %m "$X"; chown %U:%G "$X"\n' >> .permissions
find . -type l -printf 'chown -h %U:%G "%p"\n' >> .permissions
git add .permissions
echo done.
.git/hooks/post-merge:
#!/usr/bin/env bash
echo -n "Restoring file permissions... "
cd "$(git rev-parse --show-toplevel)"
git diff -U0 .permissions | grep '^\+' | grep -Ev '^\+\+\+' | cut -c 2- | /usr/bin/bash
echo "done."
