Git: speed up this command for searching Git blame for todos - linux

I'm using this command:
git ls-tree -r --name-only HEAD -- . | grep -E '\.(ts|tsx|js|jsx|css|scss|html)$' | xargs -n1 git blame -c -e | sed -E 's/^.+\.com>\s+//' | LC_ALL=C grep -F 'todo: ' | sort
This gets all the todos in my codebase, sorted by the date they were introduced. It is mostly based on "Use git to list TODOs in code sorted by date introduced"; I'm not very good with the command line.
However, the grep 'todo: ' part takes a long time: about a minute for ~400 files, without any particularly large files. Is it possible to speed this up somehow?
Edit: I realized it's the git blame that's slow, not grep, so I did a search before running git blame:
git ls-tree -r --name-only HEAD -- . | grep -E '\.(ts|tsx|js|jsx|css|scss|html)$' | LC_ALL=C xargs grep -F -l 'todo: ' | xargs -n1 git blame -c -e | sed -E 's/^.+\.com>\s+//' | LC_ALL=C grep -F 'todo: ' | sort
Now it's 6s.
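If that still isn't fast enough, the remaining git blame step can be parallelized. A hedged sketch, assuming GNU xargs (for the -P flag); the output of parallel jobs may interleave between files, but the final sort reorders the lines anyway:

```shell
# Same pipeline as above, but blame up to 4 files in parallel (-P4).
git ls-tree -r --name-only HEAD -- . \
  | grep -E '\.(ts|tsx|js|jsx|css|scss|html)$' \
  | LC_ALL=C xargs grep -F -l 'todo: ' \
  | xargs -n1 -P4 git blame -c -e \
  | sed -E 's/^.+\.com>\s+//' \
  | LC_ALL=C grep -F 'todo: ' \
  | sort
```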

Related

remove file in Cron task

I have the following command line that I run on a Debian system:
ls -d mypath/filename* -tp | grep -v '/$' | tail -n +1 | xargs rm -f
Result: OK (the file is removed)
But I want to run this command periodically, and when I put it in a cron job, it doesn't work (the file is not removed).
I checked the logs, and there are no errors nor warnings
What I tried :
* * * * * bash -c "ls -d mypath/filename* -tp | grep -v '/$' | tail -n +1 | xargs rm -f"
Any idea?
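A likely cause (an assumption, since the question doesn't say where the files live): cron runs jobs from the user's home directory with a minimal PATH, so the relative path mypath/ may not resolve where expected, and an unmatched glob fails silently. A sketch of the crontab entry using an absolute path (/home/user is a placeholder for the real location):

```shell
* * * * * bash -c "ls -d /home/user/mypath/filename* -tp | grep -v '/$' | tail -n +1 | xargs rm -f"
```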

Difference between $(git ls-files -s | wc -l) and $(git ls-files -s >out && wc -l <out)

Are the two commands $(git ls-files -s | wc -l) and $(git ls-files -s >out && wc -l <out) the same or different? When I write the first in the second form, I end up getting errors.
When you pipe the output of one program into the input of another, as in:
$(git ls-files -s | wc -l)
...the programs run concurrently. wc will start counting lines as soon as it receives them. The pipe also directs the output of git to the input of wc without any intermediate file.
Note that in this case, wc will run even if the git command fails for some reason, so you'll get the wc output (in most cases, 0).
In your second example:
$(git ls-files -s >out && wc -l <out)
...the git command runs first, and stores its results in a file called out. Then, if that was successful, wc runs and counts the lines. Because of &&, if the git command fails, wc won't run at all. In either case, you'll have a file named out lying around with the results of the git command in it.
Piping is generally better; it'll run faster and if you don't need to keep the intermediate results, it won't have any side effects.
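The difference is easy to demonstrate with `false` standing in for a failing git command:

```shell
# Pipe: the second command runs regardless of the first's failure,
# so wc still reports a count.
false | wc -l                # prints 0

# && with a temp file: wc never runs when the first command fails,
# but the redirection has already created (and truncated) the file.
false > out && wc -l < out   # prints nothing; an empty file `out` remains
rm -f out
```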

zip or tar the files reported as modified by "git status" - Linux

Recently, I've been searching for a way to compress, on the Linux command line, the files that git status reports as modified. This is the output of git status before running git add and git commit:
$ git status
.
.
modified: app/model/solicitud/Solicitud.js
modified: app/view/basura/Grilla.js
modified: app/view/excepcion/Grilla.js
modified: app/view/modulo/Contenedor.js
modified: app/view/modulo/Grilla.js
.
.
So I came up with these solutions, for .tar.gz and zip respectively:
$ git status |grep -i "modified:" |cut -d':' -f2 |tee |tr -d " " | tar -T - -zcvf ~/myfolderdesttar/myfile.tar.gz
$ git status |grep -i "modified:" |cut -d':' -f2 |tee |tr -d " " | zip ~/myfolderdestzip/myzipfile.zip -#
Do you have a shorter solution, or a better way using a git command? Thanks.
to tar modified files:
git status | grep 'modified:' | awk '{print $2}' | xargs tar cvf modified.tar
to tar new files:
git status | grep 'new file:' | awk '{print $3}' | xargs tar cvf new.tar
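Parsing `git status` output is fragile (it changes across git versions and locales). A sketch of an alternative, assuming GNU tar for the -T - option: `git ls-files -m` prints modified tracked files one per line, and `git ls-files --others --exclude-standard` prints untracked files:

```shell
# Modified tracked files, read as a filename list on tar's stdin (-T -):
git ls-files -m | tar -czvf /tmp/modified.tar.gz -T -

# Untracked ("new") files, honoring .gitignore (archives are written
# outside the repo so they don't show up as untracked themselves):
git ls-files --others --exclude-standard | tar -czvf /tmp/new.tar.gz -T -
```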

xargs (or something else) without space before parameter

I would like to execute something like this (git squash):
git rebase -i HEAD~3
extracting the 3 from git log:
git log | blabla | xargs git rebase -i HEAD~
This does not work because xargs passes the number as a separate argument, with a space after HEAD~.
The problem is that I want to alias this command, so I cannot just use
git rebase -i HEAD~`git log | blabla`
because the command substitution would be evaluated when I define the alias, not when I run it.
I don't have to use xargs, I just need an alias (preferably not a function).
You can use the -I option of xargs:
git log | blabla | xargs -I% git rebase -i HEAD~%
Try this, using -i (a deprecated GNU xargs spelling of -I{}):
git log | blabla | xargs -i bash -c 'git rebase -i HEAD~{}'
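Either way, the replacement marker is substituted wherever it appears in the command, with no extra space. This can be checked with echo standing in for git rebase:

```shell
# Each input line replaces every occurrence of the -I marker (% here),
# so the number is glued onto HEAD~ with no space in between.
echo 3 | xargs -I% echo git rebase -i HEAD~%
# prints: git rebase -i HEAD~3
```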

linux shell stream redirection to run a list of commands directly

I have this svn project... to get a list of unadded files (in my case, hundreds):
svn status |grep "^?"
outputs
? somefile.txt
? somefile1.txt
? somefile2.txt
I was recently introduced to sed... so now I have a list of commands I want to run
svn status | grep "^?"|sed "s/^?/svn add/"
outputs
svn add somefile.txt
svn add somefile1.txt
svn add somefile2.txt
I realize I could just pipe it to a file
svn status | grep "^?"|sed "s/^?/svn add/" >out.sh && sh out.sh && rm out.sh
But I'd like to avoid writing to a temporary file. Is there a way I pipe it to some command like this:
svn status | grep "^?"|sed "s/^?/svn add/" |some_command_that_runs_each_line
What about bash/sh?
svn status | grep "^?"|sed "s/^?/svn add/" | bash
What you are looking for is the xargs command:
svn status | grep "^?" | sed "s/^..//" | xargs svn add
You can also use command substitution:
svn add `svn status | grep "^?" | cut -c 3-`
How about:
How about:
svn status | grep "^?" | sed "s/^?/svn add/" | while read -r line
do
  $line
done
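The `| bash` approach can be tried safely without touching an svn repo by letting printf stand in for `svn status` and prefixing the generated commands with echo as a dry run:

```shell
# printf simulates `svn status` output; echo turns the run into a dry run.
printf '? somefile.txt\n? somefile1.txt\n' \
  | grep "^?" \
  | sed "s/^?/echo svn add/" \
  | bash
# prints the commands that would run, e.g. "svn add somefile.txt"
```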
