zip or tar the modified files reported by "git status" on the Linux command line

Recently, I've been looking for a way to compress the files that git status reports as modified, from the Linux command line. This is the git status output before running the git add and git commit commands.
$ git status
.
.
modified: app/model/solicitud/Solicitud.js
modified: app/view/basura/Grilla.js
modified: app/view/excepcion/Grilla.js
modified: app/view/modulo/Contenedor.js
modified: app/view/modulo/Grilla.js
.
.
So I came up with these solutions, for .tar.gz and zip respectively:
$ git status | grep -i "modified:" | cut -d':' -f2 | tr -d " " | tar -T - -zcvf ~/myfolderdesttar/myfile.tar.gz
$ git status | grep -i "modified:" | cut -d':' -f2 | tr -d " " | zip ~/myfolderdestzip/myzipfile.zip -@
Do you have a shorter solution for this, or a better way using a git command?
Thanks.

To tar the modified files:
git status | grep modified | awk '{print $3}' | xargs tar cvf modified.tar
To tar the new files:
git status | grep new | awk '{print $3}' | xargs tar cvf new.tar
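For a shorter pipeline, git itself can list the modified paths without scraping the human-readable status output. A minimal sketch (assuming GNU tar for `--null -T -`; the demo builds a throwaway repo so it runs anywhere):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
echo one > file.txt
git add file.txt && git commit -qm init
echo two >> file.txt                       # file.txt is now "modified"
# --name-only -z prints modified paths NUL-separated;
# GNU tar's --null -T - reads that list safely (spaces in names, etc.)
git diff --name-only -z | tar -zcf modified.tar.gz --null -T -
tar -tzf modified.tar.gz                   # lists: file.txt
```

`git diff --name-only` (or `git ls-files -m`) only ever prints path names, so there is nothing to grep or cut.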

Related

rsync files from different path

I need to copy files from a remote server (different path) to a local path.
I get the file list in this working way:
ssh user#remote " ls -R /path/ \
" |grep "o1_" | awk -F '_' '{if ($4 > 55146) print $0}' >file_list.txt
or
ssh user#remote " find /path/ " \
|grep "o1_" | awk -F '_' '{if ($8 > 55152) print $0}' >files_full_path.txt
example files_full_path.txt
/path/path1/file1
/path/path1/file2
/path/path2/file3
/path/path2/file4
I've tried with both full and non-full paths without success; examples below:
rsync -aver --include-from=files_full_path.txt user@remote:/path/ /destination_path
rsync -ave --include-from=files_full_path.txt --include /path/ --exclude='/*/' /path/
Can you help me?
Thanks
Perhaps this will help you:
$ ssh user@remote touch 1 2
$ mkdir 12
$ echo -e "1\n2" | rsync --include-from=- user@remote:\* 12/
$ ls 12 | cat
1
2
I've found a solution:
cat to_apply.txt | xargs -i scp user@host:{} destination_path/

Git: speed up this command for searching Git blame for todos

I'm using this command:
git ls-tree -r --name-only HEAD -- . | grep -E '\.(ts|tsx|js|jsx|css|scss|html)$' | xargs -n1 git blame -c -e | sed -E 's/^.+\.com>\s+//' | LC_ALL=C grep -F 'todo: ' | sort
This gets all the todos in my codebase, sorted by date. This is mostly from "Use git to list TODOs in code sorted by date introduced"; I'm not very good with the command line.
However, the grep 'todo: ' part takes a long time. It takes 1min for ~400 files, without any particularly large files. Is it possible to speed this up somehow?
Edit: I realized it's the git blame that's slow, not grep, so I did a search before running git blame:
git ls-tree -r --name-only HEAD -- . | grep -E '\.(ts|tsx|js|jsx|css|scss|html)$' | LC_ALL=C xargs grep -F -l 'todo: ' | xargs -n1 git blame -c -e | sed -E 's/^.+\.com>\s+//' | LC_ALL=C grep -F 'todo: ' | sort
Now it's 6s.
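The win comes from `grep -F -l`, which prints only the names of files that contain the string, so the expensive per-file `git blame` runs on a much smaller list. A stand-in demo without git:

```shell
set -e
d=$(mktemp -d); cd "$d"
printf 'code\ntodo: fix this\n' > a.ts
printf 'clean file\n' > b.ts
# -l stops at the first match in each file and prints just the file name,
# so only a.ts would be passed on to git blame
printf 'a.ts\nb.ts\n' | xargs grep -F -l 'todo: '
```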

echo $variable in cron not working

I'm having trouble printing the result of the following when it is run by cron. I have a script named /usr/local/bin/test:
#!/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ARAW=`date +%y%m%d`
NAME=`hostname`
TODAY=`date '+%D %r'`
cd /directory/bar/foo/
VARR=$(ls -lrt /directory/bar/foo/ | tail -1 | awk {'print $8'} | ls -lrt `xargs` | grep something)
echo "Resolve2 Backup" > /home/user/result.txt
echo " " >> /home/user/result.txt
echo "$VARR" >> /home/user/result.txt
mail -s "Result $TODAY" email#email.com < /home/user/result.txt
I configured it in /etc/cron.d/test to run every day at 1 AM:
00 1 * * * root /usr/local/bin/test
When I run it manually on the command line:
# /usr/local/bin/test
I get the complete value. But when I let cron do the work, it never displays the part from echo "$VARR" >> /home/user/result.txt
Any ideas?
VARR=$(ls -lrt /directory/bar/foo/ | tail -1 | awk {'print $8'} | ls -lrt `xargs` | grep something)
ls -ltr /path/to/dir will not include the directory in the filename part of the output. Then, you call ls again with this output, and this will look in your current directory, not in /path/to/dir.
In cron, your current directory is likely to be /, and in your manual testing, I bet your current directory is /path/to/dir.
Here's another approach to finding the newest file in a directory that emits the full path name:
stat -c '%Y %n' /path/to/dir/* | sort -nr | head -1 | cut -d" " -f 2-
Requires GNU stat, check your man page for the correct invocation for your system.
I think your VARR invocation can be:
latest_dir=$(stat -c '%Y %n' /path/to/dir/* | sort -nr | head -1 | cut -d" " -f 2-)
interesting_files=$(ls -ltr "$latest_dir"/*something*)
Then, no need for a temp file:
{
echo "Resolve2 Backup"
echo
echo "$interesting_files"
} |
mail -s "Result $TODAY" email#email.com
Thanks for all your tips and responses. I solved my problem. The problem was the output of $8 and $9 in cron. I don't know what special field was being read while it ran under cron. I'm just a newbie at scripting, so sorry for my bad script =)

xargs (or something else) without space before parameter

I would like to execute something like this (git squash):
git rebase -i HEAD~3
extracting the 3 from git log:
git log | blabla | xargs git rebase -i HEAD~
This does not work because xargs inserts a space after HEAD~.
The problem is that I want to alias this command, so I cannot just use
git rebase -i HEAD~`git log | blabla`
because the number would be evaluated only once, when I define the alias.
I don't have to use xargs, I just need an alias (preferably not a function).
You can use the -I option of xargs:
git log | blabla | xargs -I% git rebase -i HEAD~%
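`-I` replaces the token wherever it appears in the command, with no space inserted. A stand-in demo (`echo 3` replaces the real `git log | blabla` pipeline, and the inner `echo` just prints the command that would run):

```shell
# "3" stands in for the commit count produced by `git log | blabla`;
# % is glued directly onto HEAD~ with no separating space
cmd=$(echo 3 | xargs -I% echo git rebase -i HEAD~%)
echo "$cmd"                 # git rebase -i HEAD~3
```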
Try this:
git log | blabla | xargs -i bash -c 'git rebase -i HEAD~{}'

linux shell stream redirection to run a list of commands directly

I have this svn project... to get a list of unadded files (in my case, hundreds):
svn status |grep "^?"
outputs
? somefile.txt
? somefile1.txt
? somefile2.txt
I was recently introduced to sed... so now I have a list of commands I want to run
svn status | grep "^?"|sed "s/^?/svn add/"
outputs
svn add somefile.txt
svn add somefile1.txt
svn add somefile2.txt
I realize I could just pipe it to a file
svn status | grep "^?"|sed "s/^?/svn add/" >out.sh && sh out.sh && rm out.sh
But I'd like to avoid writing to a temporary file. Is there a way to pipe it to some command like this:
svn status | grep "^?"|sed "s/^?/svn add/" |some_command_that_runs_each_line
What about bash/sh?
svn status | grep "^?"|sed "s/^?/svn add/" | bash
What you are looking for is the xargs command:
svn status | grep "^?" | sed "s/^..//" | xargs svn add
You can also use substitution:
svn add `svn status | grep "^?" | cut -c 3-`
How about:
svn status | grep "^?" | sed "s/^?/svn add/" | while read -r cmd
do
$cmd
done
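The pipe-to-a-shell answer can be tried out safely by generating echo commands instead of real svn add calls (`printf` stands in for `svn status`, so the demo has no svn dependency):

```shell
# each generated line is a complete command, executed one by one
# by the bash at the end of the pipe
out=$(printf '? somefile.txt\n? somefile1.txt\n' \
      | sed 's/^? */echo svn add /' | bash)
echo "$out"
```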
