parse grep output and run vim with result - vim

I'm currently using the command line to grep for a pattern in a source tree. A line of grep output has the form:
path/to/a/file.java:123: some text here
If I want to open the file at the location specified in the grep output, I would have to manually enter the vim command as:
$ vim +123 path/to/a/file.java
Is there an easier method that would let me use the raw grep output, parse out the relevant components, and run vim on the file at that line number?
I am interested in a command line solution. I am aware that I can do greps inside vim.
Thanks

The file-line plugin is exactly what you want. With that installed, you can just run
vim path/to/a/file.java:123

You could simply run grep from Vim itself and benefit from the quickfix list/window:
:grep -Rn foo **/*.h
:cw
(scroll around)
<CR>
Or you could pass your grep output to Vim for the same benefits:
$ vim -q <(grep -Rn foo **/*.h)
:cw
(scroll around)
<CR>
Or, if you are already in Vim, you could insert the output of your grep in a buffer and use gF to jump to the right line of the right file:
:r !grep -Rn foo **/*.h
(scroll around)
gF
Or, from your shell:
$ vim <(grep -Rn foo **/*.h)
(scroll around)
gF
Or, if you just ran your grep, you can reuse it like so:
$ vim <(!!)
(scroll around)
gF
Or, if you know its number in history:
$ vim <(!884)
(scroll around)
gF
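
If you use the -q form often, a small wrapper function is handy. A minimal sketch, assuming bash and GNU grep; vgrep is a made-up name:
vgrep() { vim -q <(grep -Rn "$@"); }  # populates the quickfix list from the grep results
$ vgrep foo include/
:cw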

> vim $(cat the.file | grep xxx)
The shell evaluates the $() first (finding xxx in the.file) and passes the matching lines to vim as arguments.
also possible with backticks ``:
> vim `cat the.file | grep xxx`
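Note that this passes the matching lines themselves to vim as arguments, which is rarely what you want. To open the files that contain the match instead, -l makes grep print filenames only (a sketch; the glob is just illustrative):
vim $(grep -l xxx *.java)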

Try this:
grep -nr --null pattern | { IFS= read -rd "" f; IFS=: read -d "" n match; vim +$n "$f" </dev/tty; }
grep does a recursive search for pattern. For the first file that it finds, vim is started with the +linenum parameter to put you on the line of interest.
This approach uses NUL-separated i/o. It should be safe for all file names, even ones that contain white space or other difficult characters.
This was tested on GNU tools (Linux). It may work on BSD/OSX as well.
Multiline version
For those who prefer their commands spread over multiple lines:
grep -nr --null pattern | {
    IFS= read -rd "" f
    IFS=: read -d "" n match
    vim +$n "$f" </dev/tty
}
Convenience function
Because the above command is long, one may want to put it in a shell function:
vigrep() { grep -nr --null "$1" | { IFS= read -rd "" f; IFS=: read -d "" n match; vim +$n "$f" </dev/tty; }; }
Once this has been defined, it can be used to search for a file containing any pattern. For example:
vigrep 'some text here'
To make the definition of vigrep permanent, put it in your ~/.bashrc file.
How it works
grep -nr --null pattern
-r tells grep to search recursively.
-n tells grep to return line number of the matches.
--null tells grep to use NUL-separated output.
pattern is the regex to search for.
IFS= read -rd "" f
This reads the first NUL-separated section of input (which will be a file name) and assigns it to the shell variable f.
IFS=: read -d "" n match
This reads the next NUL-separated section of input using : as the word separator. The first word (which is the line number) is assigned to shell variable n. The rest of this line will be ignored.
vim +$n "$f" </dev/tty
This starts vim on line number $n of file $f using the terminal, /dev/tty, for input.
Generally, when running vim, one really wants to have vim accept input from the keyboard. That is why, for this case, we hard-coded input from /dev/tty.
Using cut-and-paste to launch vim
Start the following and cut-and-paste a line of grep -n output to it:
IFS=: read f n rest; vim +$n "$f"
The read command will wait for a line on standard input. The type of input it expects looks like:
path/to/a/file.java:123: some text here
Because IFS=:, it divides up the line on colons and assigns the file name to shell variable f and the line number to shell variable n. When this is done, it launches the vim command.
This command could also, if desired, be saved as a shell function:
grvim() { IFS=: read f n rest; vim "+$n" "$f"; }
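For example, after defining grvim, run it, paste a grep output line, and press Enter (an illustrative session):
$ grvim
path/to/a/file.java:123: some text here
(vim opens path/to/a/file.java at line 123)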

I have this function in my .bashrc:
grep_edit(){
    grep "$@" | sed 's/:/ +/;s/:/ /'
}
So, the output is in the form:
path/to/a/file.java +123 some text here
Then I can directly use
$ vi path/to/a/file.java +123
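A variation (a sketch; vigrep_args is a made-up name, untested against unusual filenames) drops the matched text entirely, so the result can be fed straight to vi with command substitution:
vigrep_args() { grep -rn "$1" | sed 's/:\([0-9]*\):.*/ +\1/'; }
$ vi $(vigrep_args 'some text here' | head -n 1)
head -n 1 keeps only the first match; filenames containing spaces or colons will still break this.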
Note: I have also heard of the file-line plugin, but I was not sure how it would work with the netrw plugin.
e.g. vi can open remote files with this syntax:
vi scp://root@remote-system//var/log/daemon.log
But if that is not a concern, then you may prefer the file-line plugin.

Related

Linux cat command output with new lines to be read using vim

I am trying to open all the files listed in file a.lst:
symptom1.log
symptom2.log
symptom3.log
symptom4.log
But trying the following command:
cat a.lst | tr "\n" " " | vim -
opens only the stdin output
symptom1.log symptom2.log symptom3.log symptom4.log
It doesn't open symptom1.log, symptom2.log, symptom3.log & symptom4.log in vim.
How to open all the files listed in a.lst using vim?
You could use xargs to line up the arguments to vi:
vim $(cat a.lst | xargs)
or
cat a.lst | xargs vim
If you want them open in split view, use -o (horizontal) or -O (vertical):
cat a.lst | xargs vim -o
cat a.lst | xargs vim -O
while read f ; do cat "$f" ; done < a.lst | vim -
I like a variation on Qiau's xargs option:
xargs vim < a.lst
This works because the input redirection is applied to the xargs command rather than vim.
If your shell is bash, another option is this:
vim $(<a.lst)
This works because within the $(...), input redirection without a command simply prints the results of the input, hence expanding the file into a list of files for vim to open.
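If the list may contain filenames with spaces, a bash-specific sketch using readarray (also spelled mapfile, bash 4+) avoids the word splitting that $(<a.lst) performs:
readarray -t files < a.lst && vim "${files[@]}"
The -t strips the trailing newline from each line, and the quoted array expansion passes each filename as a single argument.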
UPDATE:
You mentioned in comments that you are using csh as your shell. So another option for you might be:
vim `cat a.lst`
This should work in POSIX shells as well, but I should point out that backquotes are deprecated in some other shells (notably bash) in favour of the $(...) alternative.
Note that redirection can happen in multiple places on your command line. This should also work in both csh and bash:
< a.lst xargs vim
vim may complain that its input is not coming from a terminal, but it appears to work for me anyway.

How can I provide stdin to ed, which needs a filename?

I need some basic Unix shell knowledge here.
For a command that has no "-" (read from stdin) target, say ed:
print '%-2p\nq' | ed -s FILE
Can I provide a stream from stdout of some cmd, rather than FILE name, as the data to be processed:
SomeCMD | ed -s SOMETHING_MAGICAL <<< 'print '%-2p\nq'
Is it possible?
ed reads its commands from stdin, so if your file is also on stdin, how can that work?
In fact, you can feed the file's contents over stdin if you precede them with a single line
i
at the beginning (to start entering the data), then append a single . to end the input, followed by any commands. You can even write the results to stdout. Do remember that it will break if there is a line in the file with nothing but a single . in it.
So if a file input.file contains this:
First line
Second line
Third line
And a file commands.list contains this:
.
1d
1,$w /dev/stdout
Then this command line...
echo i | cat - input.file commands.list | ed -s
Will output this:
Second line
Third line
Dare I say tadaaaaa! ?
Note: you can probably protect against the case of single . lines in the file by piping the file through a filter that escapes any such lines and then unescaping them again with ed commands. I leave that to your ingenuity.
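For instance, here is a sketch of that idea using a sentinel string (@DOT@ is an arbitrary marker assumed not to occur in the input):
{ echo i; sed 's/^\.$/@DOT@/' input.file; printf '%s\n' . 'g/^@DOT@$/s//./' '1,$w /dev/stdout'; } | ed -s
The sed call escapes lone-dot lines before they reach ed, and the g command turns them back into single dots once the insert has ended.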
Another note: you really should use sed for this, but I couldn't let the "it can't be done" comments go by.
You use r to read a command output into the text buffer. So, portable:
printf '%s\n' 'r !df -h' g/tmpfs/d ,p q | ed -s
or
ed -s << IN
r !df -h
g/tmpfs/d
,p
q
IN
The above reads in the output of df -h, deletes the lines matching tmpfs and prints the result.
If your shell supports process substitution:
printf '%s\n' g/tmpfs/d ,p q | ed -s <(df -h)
With GNU ed, that SOMETHING_MAGICAL is called !.
As per the man page:
Start edit by reading in 'file' if given. If 'file' begins with a
'!', read output of shell command.
printf '%s\n' g/tmpfs/d ,p q | ed -s '!df -h'
or, with herestring:
ed -s '!df -h' <<< $'g/tmpfs/d\n,p\nq\n'
Yes. Effectively, instead of 'piping' into ed, you can use 'process substitution' to pass the output of your command as the input to be edited, leaving the standard pipe free to take pre-scripted ed commands.
Example:
echo '#
,s/\/dev\/\(\w*\) .* \b\(.*\)%.*$/DEVICE \1 is \2% full!/
,p
Q' | ed -s <(df 2> /dev/null | tail -n +2 | egrep "^/dev/")
DEVICE sda6 is 90% full!
DEVICE sda2 is 88% full!
Explanation:
Process substitution (the <() part) turns the output of df 2> /dev/null | tail -n +2 | egrep "^/dev/" into the contents of a temporary file descriptor, which is then used as an input file to ed -s.
At the same time, ed commands are passed via echo into a pipe.
Echo here is used in multiline single-quote mode without interpretation of escape sequences; if you don't mind the commands spanning multiple lines, this is the most straightforward way to pass ed commands without descending into escape-sequence hell.
Specifically, we are passing four ed commands:
A comment (just to align the remaining commands on the console)
A substitution command
A 'print all' command
A 'quit unconditionally' command, to prevent any warning messages that could have been printed on the terminal.

How to insert text at the beginning of a file?

So far I've been able to find out how to add a line at the beginning of a file but that's not exactly what I want. I'll show it with an example:
File content
some text at the beginning
Result
<added text> some text at the beginning
It's similar but I don't want to create any new line with it...
I would like to do this with sed if possible.
sed can operate on an address:
$ sed -i '1s/^/<added text> /' file
What is this magical 1s you see in every answer here? Line addressing!
Want to add <added text> on the first 10 lines?
$ sed -i '1,10s/^/<added text> /' file
Or you can use Command Grouping:
$ { echo -n '<added text> '; cat file; } >file.new
$ mv file{.new,}
If you want to add a whole new line at the beginning of the file, you need to add \n at the end of the string in the solution above; without it, the string is prepended to the existing first line rather than getting a line of its own.
sed -i '1s/^/your text\n/' file
If the file is only one line, you can use:
sed 's/^/insert this /' oldfile > newfile
If it's more than one line, use one of:
sed '1s/^/insert this /' oldfile > newfile
sed '1,1s/^/insert this /' oldfile > newfile
I've included the latter so that you know how to do ranges of lines. Both of these "replace" the start line marker on their affected lines with the text you want to insert. You can also (assuming your sed is modern enough) use:
sed -i 'whatever command you choose' filename
to do in-place editing.
Use subshell:
echo "$(echo -n 'hello'; cat filename)" > filename
Unfortunately, command substitution will remove newlines at the end of file. So as to keep them one can use:
echo -n "hello" | cat - filename > /tmp/filename.tmp
mv /tmp/filename.tmp filename
Neither grouping nor command substitution is needed.
To insert just a newline:
sed '1i\\'
You can use cat -
printf '%s' "some text at the beginning" | cat - filename
To add a line to the top of the file:
sed -i '1iText to add' filename
my two cents:
sed -i '1i /path/of/file.sh' filename
This will work even if the string contains a forward slash "/".
Note that on OS X, sed -i <pattern> file fails. However, if you provide a backup extension, sed -i old <pattern> file, then file is modified in place while file.old is created. You can then delete file.old in your script.
There is a very easy way:
echo "your header" > headerFile.txt
cat yourFile >> headerFile.txt
PROBLEM: tag a file, at the top of the file, with the base name of the parent directory.
I.e., for
/mnt/Vancouver/Programming/file1
tag the top of file1 with Programming.
SOLUTION 1 -- non-empty files:
bn=${PWD##*/} ## bn: basename
sed -i '1s/^/'"$bn"'\n/' <file>
1s places the text at line 1 of the file.
SOLUTION 2 -- empty or non-empty files:
The sed command, above, fails on empty files. Here is a solution, based on https://superuser.com/questions/246837/how-do-i-add-text-to-the-beginning-of-a-file-in-bash/246841#246841
printf "${PWD##*/}\n" | cat - <file> > temp && mv -f temp <file>
Note that the - in the cat command is required (reads standard input: see man cat for more information). Here, I believe, it's needed to take the output of the printf statement (to STDIN), and cat that and the file to temp ... See also the explanation at the bottom of http://www.linfo.org/cat.html.
I also added -f to the mv command, to avoid being asked for confirmations when overwriting files.
To recurse over a directory:
for file in *; do printf "${PWD##*/}\n" | cat - $file > temp && mv -f temp $file; done
Note also that this will break on paths with spaces; there are solutions elsewhere (e.g. file globbing, or find . -type f ...-based solutions) for those.
ADDENDUM: Re: my last comment, this script will allow you to recurse over directories with spaces in the paths:
#!/bin/bash
## https://stackoverflow.com/questions/4638874/how-to-loop-through-a-directory-recursively-to-delete-files-with-certain-extensi
## To allow spaces in filenames,
## at the top of the script include: IFS=$'\n'; set -f
## at the end of the script include: unset IFS; set +f
IFS=$'\n'; set -f
# ----------------------------------------------------------------------------
# SET PATHS:
IN="/mnt/Vancouver/Programming/data/claws-test/corpus test/"
# https://superuser.com/questions/716001/how-can-i-get-files-with-numeric-names-using-ls-command
# FILES=$(find $IN -type f -regex ".*/[0-9]*") ## recursive; numeric filenames only
FILES=$(find $IN -type f -regex ".*/[0-9 ]*") ## recursive; numeric filenames only (may include spaces)
# echo '$FILES:' ## single-quoted, (literally) prints: $FILES:
# echo "$FILES" ## double-quoted, prints path/, filename (one per line)
# ----------------------------------------------------------------------------
# MAIN LOOP:
for f in $FILES
do
# Tag top of file with basename of current dir:
printf "[top] Tag: ${PWD##*/}\n\n" | cat - $f > temp && mv -f temp $f
# Tag bottom of file with basename of current dir:
printf "\n[bottom] Tag: ${PWD##*/}\n" >> $f
done
unset IFS; set +f
Just for fun, here is a solution using ed which does not have the problem of not working on an empty file. You can put it into a shell script just like any other answer to this question.
ed Test <<EOF
a

.
0i
<added text>
.
1,+1 j
$ g/^$/d
wq
EOF
The above script adds the text to insert to the first line, and then joins the first and second line. To avoid ed exiting on error with an invalid join, it first creates a blank line at the end of the file and removes it later if it still exists.
Limitations: This script does not work if <added text> is exactly equal to a single period.
( echo -n "text to insert " ; tac filename.txt | tac ) > newfilename.txt
The two tac calls cancel each other out (the first reverses the file, the second restores the original order), and the grouping sends both the echoed text and the file content into newfilename.txt, so the inserted text ends up at the beginning of the original file in its original order.
The simplest solution I found is:
echo -n "<text to add>" | cat - myFile.txt | tee myFile.txt
Notes:
Remove | tee myFile.txt if you don't want to change the file contents.
Remove the -n parameter if you want to append a full line.
Add &> /dev/null to the end if you don't want to see the output (the generated file).
This can be used to append a shebang to the file. Example:
# make it executable (use u+x to allow only current user)
chmod +x cropImage.ts
# append the shebang
echo '#''!'/usr/bin/env ts-node | cat - cropImage.ts | tee cropImage.ts &> /dev/null
# execute it
./cropImage.ts myImage.png
Another solution with shell functions. Add to your init rc / env file:
addtail () { find . -type f ! -path "./.git/*" -exec sh -c "echo $1 >> {}" \; ; }
addhead () { find . -type f ! -path "./.git/*" -exec sh -c "sed -i '1s/^/$1\n/' {}" \; ; }
Usage:
addhead "string to add at the beginning of file"
addtail "string to add at the end of file"
With the echo approach, if you are on macOS/BSD like me, lose the -n switch that other people suggest. And I like to define a variable for the text.
So it would be like this:
Header="my complex header that may have difficult chars \"like these quotes\" and line breaks \n\n "
{ echo "$Header"; cat "old.txt"; } > "new.txt"
mv new.txt old.txt
TL;DR:
Consider using ex. Since you want the front of a given line, the syntax is basically the same as what you might find for sed, but the option of "in place editing" is built in.
I cannot imagine an environment where you have sed but not ex/vi, unless it is a MS Windows box with some special "sed.exe", maybe.
sed & grep sort of evolved from ex / vi, so it might be better to say sed syntax is the same as ex.
You can change the line number to something besides #1 or search for a line and change that one.
source=myFile.txt
Front="This goes IN FRONT "
man true > $source
ex -s ${source} <<EOF
1s/^/$Front/
wq
EOF
$ head -n 3 $source
This goes IN FRONT TRUE(1) User Commands TRUE(1)
NAME
Long version: I recommend ex (or ed if you are one of the cool kids).
I like ex because it is portable, extremely powerful, allows me to write in-place, and/or make backups all without needing GNU (or even BSD) extensions.
Additionally, if you know the ex way, then you know how to do it in vi - and probably vim if that is your jam.
Notice that EOF is not quoted when we use "i"nsert with echo, so the backtick command substitution and ${str} inside the here-document are expanded by the shell:
str="+++ TOP +++" && ex -s <<EOF
r!man true
1i
`echo "$str"`
.
"0r!echo "${str}"
wq! true.txt
EOF
0r!echo "${str}" might also be used as shorthand for :0read! or :0r! that you have likely used in vi mode (it is literally the same thing) but the : is optional here and some implementations do not support "r"ead address of zero.
"r"eading directly to the special line #0 (or from line 1) would automatically push everything "down", and then you just :wq to save your changes.
$ head -n 3 true.txt | nl -ba
1 +++ TOP +++
2 TRUE(1) User Commands TRUE(1)
3
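Putting that together, a minimal sketch of the 0r! form (assuming your ex accepts address zero for read; true.txt is the file created above):
ex -s true.txt <<'EOF'
0r!echo +++ TOP +++
wq
EOF
Quoting the EOF delimiter prevents any shell expansion inside the here-document, which is fine here since nothing needs expanding.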
Also, most classic sed implementations do not have extensions (like \U&) that ex should have by default.
cat concatenates multiple files. <() sends output of a command as a file. Combining these two, we can insert lines at the beginning and end of a file by,
cat <(echo "line before the file") file.txt <(echo "line after the file")

Can you mass edit all files returned in a grep?

I want to mass-edit a ton of files that are returned in a grep. (I know, I should get better at sed).
So if I do:
grep -rnI 'xg_icon-*'
How do I pipe all of those files into vi?
The easiest way is to have grep return just the filenames (-l instead of -n) that match the pattern. Run that in a subshell and feed the results to Vim.
vim $(grep -rIl 'xg_icon-*' *)
A nice general solution to this is to use xargs to convert a stdout from a process like grep to an argument list.
A la:
grep -rIl 'xg_icon-*' | xargs vi
If you use Vim with the -p option, it will open each file in a tab, and you can switch between them using gt or gT, or even the mouse if you have mouse support in the terminal.
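For example, combined with grep -l and xargs (a sketch; vim may warn that its input is not from a terminal):
grep -rIl 'xg_icon-*' | xargs vim -p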
You can do it without any processing of the grep output! This will even enable you to go to the right line (using :help quickfix commands, e.g. :cn or :cw). So, if you are using bash or zsh:
vim -q <(grep foo *.c)
If what you want to edit is similar across all files, then there is no point using vi to do it manually (although vi can be scripted as well). Hypothetically, it looks something like this, since you never mention what you want to edit:
grep -rIl 'xg_icon-*' | while read -r FILE
do
    sed -i.bak 's/old/new/g' "$FILE" # (or other editing commands, e.g. awk...)
done
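The loop can also be collapsed into a single xargs pipeline (a sketch, assuming GNU sed for -i.bak):
grep -rIl 'xg_icon-*' | xargs sed -i.bak 's/old/new/g'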
vi `grep -l -i findthisword *`

How do you search for files containing DOS line endings (CRLF) with grep on Linux?

I want to search for files containing DOS line endings with grep on Linux. Something like this:
grep -IUr --color '\r\n' .
The above seems to match for literal rn which is not what is desired.
The output of this will be piped through xargs into fromdos to convert CRLF to LF, like this:
grep -IUrl --color '^M' . | xargs -ifile fromdos 'file'
grep probably isn't the tool you want for this. It will print a line for every matching line in every file. Unless you want to, say, run fromdos 10 times on a 10-line file, grep isn't the best way to go about it. Using find to run file on every file in the tree, then grepping through that for "CRLF", will get you one line of output for each file which has DOS-style line endings:
find . -not -type d -exec file "{}" ";" | grep CRLF
will get you something like:
./1/dos1.txt: ASCII text, with CRLF line terminators
./2/dos2.txt: ASCII text, with CRLF line terminators
./dos.txt: ASCII text, with CRLF line terminators
Use Ctrl+V, Ctrl+M to enter a literal Carriage Return character into your grep string. So:
grep -IUr --color "^M"
will work - if the ^M there is a literal CR that you input as I suggested.
If you want the list of files, you want to add the -l option as well.
Explanation
-I ignore binary files
-U prevents grep from stripping CR characters. By default it does this if it decides it's a text file.
-r read all files under each directory recursively.
Using RipGrep (depending on your shell, you might need to quote the last argument):
rg -l \r
-l, --files-with-matches
Only print the paths with at least one match.
https://github.com/BurntSushi/ripgrep
If your version of grep supports -P (--perl-regexp) option, then
grep -lUP '\r$'
could be used.
# list files containing dos line endings (CRLF)
cr="$(printf "\r")" # alternative to ctrl-V ctrl-M
grep -Ilsr "${cr}$" .
grep -Ilsr $'\r$' . # yet another & even shorter alternative
dos2unix has a file information option which can be used to show the files that would be converted:
dos2unix -ic /path/to/file
To do that recursively you can use bash’s globstar option, which for the current shell is enabled with shopt -s globstar:
dos2unix -ic ** # all files recursively
dos2unix -ic **/file # files called “file” recursively
Alternatively you can use find for that:
find -type f -exec dos2unix -ic {} + # all files recursively (ignoring directories)
find -name file -exec dos2unix -ic {} + # files called “file” recursively
You can use the file command in Unix. It gives you the character encoding of the file along with the line terminators.
$ file myfile
myfile: ISO-8859 text, with CRLF line terminators
$ file myfile | grep -ow CRLF
CRLF
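To list just the names of offending files rather than the full file output, the two can be combined (a sketch):
find . -type f -exec sh -c 'file "$1" | grep -q CRLF && echo "$1"' _ {} \;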
The query was search... I have a similar issue... somebody submitted mixed line endings into version control, so now we have a bunch of files with 0x0d 0x0d 0x0a line endings. Note that
grep -P '\x0d\x0a'
finds all lines, whereas
grep -P '\x0d\x0d\x0a'
and
grep -P '\x0d\x0d'
finds no lines, so there may be something "else" going on inside grep when it comes to line-ending patterns... unfortunately for me!
If, like me, your minimalist unix doesn't include niceties like the file command, and backslashes in your grep expressions just don't cooperate, try this:
$ for file in `find . -type f` ; do
> dump $file | cut -c9-50 | egrep -m1 -q ' 0d|0d '
> if [ $? -eq 0 ] ; then echo $file ; fi
> done
Modifications you may want to make to the above include:
tweak the find command to locate only the files you want to scan
change the dump command to od or whatever file dump utility you have
confirm that the cut range keeps both a leading and a trailing space around the hexadecimal output from the dump utility
limit the dump output to the first 1000 characters or so for efficiency
For example, something like this may work for you using od instead of dump:
od -t x2 -N 1000 $file | cut -c8- | egrep -m1 -q ' 0d|0d |0d$'
