Defining a custom Emacs find-grep shortcut - Linux

Emacs supports M-x find-grep, which searches for a string and opens two buffers: one with the search results, and another showing the file that contains the search string.
Currently M-x find-grep prompts with the following command: Run find (like this): find . -type f -exec grep -nH -e {} +.
How can I modify find-grep (or define a new shortcut?) so that it adds more options to the grep and find commands (e.g., ignore log files, or include only Java files with find . -iname '*.java')?

Do not modify find-grep. Write your own, similar, command. Start with a copy of its code, if you like. Instead of the part where it invokes the program find to implement find . -type f -exec grep -nH -e {} +, substitute your own preferred command line. Simplify and adjust to taste (e.g., find . -iname '*.java').
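For example (a sketch; 'somePattern' is just a placeholder), the substituted command line could restrict the search to Java files, or skip log files:
find . -type f -iname '*.java' -exec grep -nH -e 'somePattern' {} +
find . -type f ! -name '*.log' -exec grep -nH -e 'somePattern' {} +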
Both find and grep have their own languages (syntax) -- find, in particular. To use them, you need to know (1) what you are trying to do and (2) how to do that using their languages.
Unless you specify exactly what you are trying to do, the only help we can give you here is general guidance about invoking find and grep from Emacs. For that, the find-grep code is a good guide -- see above.

The quick and dirty way is to run find-grep with a prefix argument, i.e. C-u M-x find-grep. That lets you edit the command line before executing it.
If you want to permanently change it, you can define a wrapper. This is for rgrep, but find-grep should be similar.
(defvar grep-context-lines 2
  "Default number of context lines (non-matching lines before and
after the matching line) for `rgrep-context'.")

(defun rgrep-context (arg)
  "Like `rgrep', but adds a '-C' parameter to get context lines around matches.
Default number of context lines is `grep-context-lines', and can
be specified with a numeric prefix."
  (interactive "p")
  (setq arg (or arg grep-context-lines))
  (let ((grep-find-template
         (format "find <D> <X> -type f <F> -print0 | xargs -0 -e grep <C> -nH -C %d -e <R>" arg))
        grep-host-defaults-alist
        current-prefix-arg)
    (call-interactively 'rgrep)))
Note: your grep-find-template might be different; you're probably best off if you modify your default rather than just copying this one. The default is generated by grep-compute-defaults.

Related

Finding .php files with certain variable declaration (string search) on command line (Shell)

I'm attempting to find all .php files that are at a certain depth in a directory tree (at least 4 levels down, but not more than 5 levels in).
I'm logged into my CentOS server with root authority via shell.
The string I want to search for is:
$slides='';
Here's what I have in front of me; I would expect it to work. I tried to escape the $ with a \ (I thought perhaps it works like regex, needing special chars escaped). I tried without the ='' portion, tried adding \'\' to that part, and tried removing the ='' altogether to simplify. Nothing.
find . -maxdepth 5 -mindepth 4 -type f -name '*.php' -print | xargs grep "\$slides=''" *
I'm already running it under the directory under which I want to recursively search.
Also - I have the filter to look for *.php only, but I still get a bunch of directory names in the output, with a warning that says grep: [dir_name]: Is a directory.
Clearly I am missing something here as far as syntax of grep command goes, or how the filter works here. I use grep more in PHP so this is quite a transition for me!
So you were almost right. The problem looks to have been the grep part of the command:
grep "\$slides=''" *
Namely the * was the issue. From the bash manual
After word splitting, unless the -f option has been set (see The Set
Builtin), Bash scans each word for the characters ‘*’, ‘?’, and ‘[’.
If one of these characters appears, and is not quoted, then the word
is regarded as a pattern, and replaced with an alphabetically sorted
list of filenames matching the pattern
When you piped the files found by find into xargs and attempted to grep them with *, grep interpreted this as you wanting to find the string $slides='' in the list of filenames/directories returned by the glob *; you cannot grep directories without supplying the -r flag, so it returned an error.
Instead, what you wanted to do is pipe the found files with find into xargs so it can add the list of filenames to the grep command, as that's what xargs does. From the xargs manual
xargs reads items from the standard input, delimited by blanks (which
can be protected with double or single quotes or a backslash) or
newlines, and executes the command (default is /bin/echo) one or more
times with any initial-arguments followed by items read from
standard input. Blank lines on the standard input are ignored.
Making the correct command
find . -maxdepth 5 -mindepth 4 -type f -name '*.php' -print0 | xargs -0 grep "\$slides=''"
This uses the -print0 flag in find and the -0 flag in xargs so that NUL is used as the delimiter, in case any filenames contain newlines.
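As an aside: since the search string is literal text rather than a regular expression, grep's -F (fixed-strings) option sidesteps the regex-escaping question entirely; the \$ is still needed so the shell doesn't expand it:
find . -maxdepth 5 -mindepth 4 -type f -name '*.php' -print0 | xargs -0 grep -F "\$slides=''"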
If you want to use shell_exec from your PHP code, it is a program execution function which allows you to run a command like 'ls -al' in the operating system shell and get the result returned into a variable. Querystrings are not commands you can use in this way.
Do you mean running PHP from the command line so that it runs from the shell, not from the web server:
php -r 'echo "hello world\n";'
If you're running PHP 4.3 or above, you can use the PHP Command Line Interface (CLI), which can also execute scripts stored in files. Have a look at the syntax and examples at: http://php.net/features.commandline

Easy replace with/without regex in multiple files

A hundred times a day I need to search for patterns in files, and sometimes I have to replace those patterns with something else. Most of the time they are simple patterns, like a word or a short sentence, but sometimes I have to look for more complex regexps. I don't really like sed (at least the sed version I have, because it is not very compliant with the PCRE engine), so I'd rather use perl -pi -e.
However, Perl pie is not very attractive on Cygwin because of the mandatory -i.bak temp files. I need to find a way to automatically remove the .bak files after processing. Moreover, if I want to replace recursively in a project, I have to list all the files first:
find . | xargs -n1 perl -pi -e 's/foo/bar/'
This command is quite long to type, especially if you use it a thousand times a month. So I decided to write a more useful tool working in the same way as the great silver searcher ag.
ag 'foo\d{3}[^\w]' # Search for a pattern
# Oh yes this one should be renamed!
replace 's/(foo)\d{3}[^\w]/\U$1\E_bar/g'
I wrote this very primitive bash function:
function replace
{
    EXTENSION=.perlpie_tmp
    perl -p -i$EXTENSION -e $1 ${*:2}
    for file in ${*:2}; do
        rm "$file$EXTENSION";
    done;
}
But I am not satisfied at all, because it doesn't automatically search all files recursively when there is no more than one argument. I could either modify this function to add find . when the number of arguments is 1, or write a much more complex program in Perl that supports command line options, pretty output, smart-case search or even plain-text search.
What is the most suitable approach to this problem, and is there any advanced search/replace tool in the Linux world? If not, I may try to write my own rip tool, standing for replace-in-place, which would support all the options that I need.
Before that, I need some advice...
EDIT
Actually, I'm thinking of forking https://github.com/petdance/ack2 to add a replacement feature... This may or may not be a good idea...
Here's an alternative to your function (edited to use the suggestion provided by gniourf_gniourf, thanks):
find . -type f -exec sh -c 'perl -pi.bak -e "s/foo/bar/" "$0" && rm -f "$0".bak' {} \;
Using this approach, the backup file is removed as you go. (Note that find passes each file name to the inline sh script as $0.)
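If you'd rather not spawn one shell per file, a sketch of a batched variant of the same idea (using find's + terminator) is:
find . -type f -exec sh -c 'for f; do perl -pi.bak -e "s/foo/bar/" "$f" && rm -f "$f.bak"; done' sh {} +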
I think you can use
grep -Hrn -e "string" .
to find a pattern, and
find -type f -exec sed -i "s#string1#string2#g" {} \;
to replace a pattern
I would slightly modify your existing function:
function replace {
    local perl_code=$1 EXTENSION=.perlpie_tmp file
    shift
    for file; do
        perl -p -i$EXTENSION -e "$perl_code" "$file" && rm "$file$EXTENSION"
    done
}
This will slightly worsen the performance as you're now calling perl multiple times, but I suspect you won't notice.
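Usage stays the same as your original function, for example (the substitution and file list are placeholders):
replace 's/foo/bar/g' src/*.c include/*.h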

A quick way to search for certain lines of code through many files in a project [closed]

I am currently working on a C project that contains over 50 .h and .c files. I would like to know if there is a quick way to search for certain lines of code (like Ctrl+F in an editor window, for example) without having to actually search each file one by one.
Thank you in advance
On Linux/Unix there's a command line tool called grep; you can use it to search multiple files for a string. For example, if I wanted to search for strcpy in all files:
~/sandbox$ grep -rs "strcpy" *
test.c: strcpy(OSDMenu.name,"OSD MENU");
-r makes the search recursive, so you get all the files in all directories (from the current one) searched. -s suppresses error messages, in case you run into non-readable files.
Now if you wanted to search for something custom and you can't remember the case, there are options like -i to allow for case-insensitive searches.
~/sandbox$ grep -rsi "myint" *
test.c: int myInt = 5;
test.c: int MYINT = 10;
You can also use regular expressions, in case you forgot exactly what the thing you were looking for was called (indeed, the name 'grep' comes from the ed command g/re/p -- global/regular expression/print):
~/sandbox$ grep -rsi "my.*" *
test.c: int myInt = 5;
test.c: int MYINT = 10;
test.c: float myfloat = 10.9;
Install Cygwin if you aren't using *nix, and use find/grep, e.g.
find . -name '*.[ch]' | xargs grep -n 'myfuncname'
In fact, I made this a little script findinsrc that can be called with findinsrc path1 [path2 ...] pattern. The central line, after checking arguments etc., is
find "${#:1:$#-1}" -type f \( -iname '*.c' -o -iname '*.cpp' -o -iname '*.h' -o -iname '*.hpp' \) -print0 | xargs -0 grep -in "${#:$#}"
"${#:1:$#-1}" are the positional parameters 1 .. n-1, that is, the path(s), supplied as the starting points for find. "${#:$#}" is the last parameter, the pattern supplied to grep.
the -o "option" to find is a logical OR combining the search criteria; because the "default" combination of options is AND, all the ORs must be parenthesized for correct evaluation. Because parentheses have special meaning to the shell, they must be escaped so that they are passed through to find as command line arguments.
-print0 instructs find to separate its output items not with a newline or space but with a null character which cannot appear in path names; this way, there is a clear distinction between whitespace in a path ("My Pictures" nonsense) and separation between paths.
-iname is a case-insensitive match, in case files end in .CPP etc.
xargs -0 is there specifically to digest find -print0 output: xargs will separate arguments read from stdin at null bytes, not at whitespace.
grep -in: -i instructs grep to perform a case insensitive search (which suits my bad memory and is catered exactly to this "find the bloody function no matter the capitalization you know what I mean" use case). The -n prints the line number, in addition to the file name, where the match occurred.
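Putting those pieces together, a minimal sketch of the whole script might look like this (the usage check is an addition, not part of the original):
#!/bin/bash
# findinsrc: case-insensitive search for PATTERN in C/C++ sources under the given paths.
if [ "$#" -lt 2 ]; then
    echo "Usage: $(basename "$0") PATH... PATTERN" >&2
    exit 1
fi
find "${@:1:$#-1}" -type f \( -iname '*.c' -o -iname '*.cpp' -o -iname '*.h' -o -iname '*.hpp' \) -print0 |
    xargs -0 grep -in "${@:$#}"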
I have similar scripts findinmake, where the find pattern includes regular Makefiles, CMakeLists.txt and a proprietary file name, and findinscripts, which looks through bat, cmd and sh files. That seemed easier than introducing options to a generic script.
You can use grep to search through the files using the terminal/command line.
grep -R "string_to_search" .
-R to be recursive: search in all subdirectories too
Then the string you want
Then the location: . for the current directory
On Windows you can use findstr, which will find files that contain strings that either exactly match or regular-expression match the specified string/pattern.
findstr /?
from the command line will give you the usage. It can also recurse subdirectories (/s).
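For example (myfuncname is a placeholder; flags as documented by findstr /?), a recursive, case-insensitive search with line numbers:
findstr /s /n /i "myfuncname" *.c *.h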
If you're using a text editor and the shell, then you can use shell tools like grep.
grep -R "some pattern" directory
However you should consider using an IDE such as Eclipse (it's not just for Java), Netbeans (there is a C plugin) or KDevelop. IDEs have keyboard shortcuts for things like "find everywhere the highlighted function is called".
Or of course there's Emacs...

Fuzzy file search in Linux console

Does anybody know a way to perform a quick fuzzy search on the Linux console?
Quite often I come across situations where I need to find a file in a project but I don't remember the exact filename.
In the Sublime Text editor I would press Ctrl+P and type part of the name, which produces a list of files to select from. That's an amazing feature I'm quite happy with. The problem is that in most cases I have to browse code in a console on remote machines via ssh. So I'm wondering if there is a tool similar to Sublime's "Goto Anything" feature for the Linux console?
You may find fzf useful. It's a general purpose fuzzy finder written in Go that can be used with any list of things: files, processes, command history, Git branches, etc.
Its install script will set up a Ctrl+T keybinding for your shell. Pressing Ctrl+T lets you fuzzy-search for a file or directory and put its path on your console.
(The fzf project page includes a GIF showing example usage, including its Vim integration.)
Most of these answers won't do fuzzy searching like Sublime Text does it -- they may match part of the name, but they don't have the nice 'just find all the letters in this order' behavior.
I think this is a bit closer to what you want. I put together a special version of cd ('fcd') that uses fuzzy searching to find the target directory. Super simple -- just add this to your bashrc:
function joinstr { local IFS="$1"; shift; echo "$*"; }
function fcd { cd $(joinstr \* $(echo "$*" | fold -w1))*; }
This will add an * between each letter in the input, so if I want to go to, for instance,
/home/dave/results/sample/today
I can just type any of the following:
fcd /h/d/r/spl/t
fcd /h/d/r/s/t
fcd /h/d/r/sam/t
fcd /h/d/r/s/ty
Using the first as an example, this will execute cd /*h*/*d*/*r*/*s*p*l*/*t* and let the shell sort out what actually matches.
As long as the first character is correct, and one letter from each directory in the path is written, it will find what you're looking for. Perhaps you can adapt this for your needs? The important bit is:
$(joinstr \* $(echo "$*" | fold -w1))*
which creates the fuzzy search string.
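You can preview the pattern it builds; assuming the glob matches nothing in the current directory, echo prints it literally:
$ echo $(joinstr \* $(echo "/h/d/r/s/t" | fold -w1))*
/*h*/*d*/*r*/*s*/*t*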
I usually use:
ls -R | grep -i [whatever I can remember of the file name]
From a directory above where I expect the file to be - the higher up you go in the directory tree, the slower this is going to be.
When I find the exact file name, I use it in find:
find . -name [discovered file name]
This could be collapsed into one line:
for f in $(ls --color=never -R | grep --color=never -i partialName); do find -name $f; done
(I found a problem with ls and grep being aliased to "--color=auto")
The fasd shell script is probably worth taking a look at too.
fasd offers quick access to files and directories for POSIX shells. It is inspired by tools like autojump, z and v. Fasd keeps track of files and directories you have accessed, so that you can quickly reference them in the command line.
It differs a little from a complete find of all files, as it only searches recently opened files. However it is still very useful.
find . -iname '*foo*'
Case insensitive find of filenames containing foo.
I don't know how familiar you are with the terminal, but this could help you:
find | grep 'report'
find | grep 'report.*2008'
Sorry if you already know grep and were looking for something more advanced.
fd is a simple, fast and user-friendly alternative to find.
(There is an animated demo on the GitHub project page.)
You can do the following
grep -iR "text to search for" .
where "." being the starting point, so you could do something like
grep -iR "text to search" /home/
This will make grep search for the given text inside every file under /home/ and list the files which contain that text.
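If you only want the names of the matching files rather than the matching lines themselves, grep's -l option prints just the file names:
grep -ilR "text to search" /home/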
You can try c- (Cminus), a fuzzy directory-changing tool written as a bash script, which uses bash completion. It is somewhat limited in that it only matches previously visited paths, but it's really convenient and quite fast.
GitHub project: whitebob/cminus
Introduction on YouTube: https://youtu.be/b8Bem53Cz9A
You might want to try
AGREP or something else that uses the TRE Regular Expression library.
(From their site:)
TRE is a lightweight, robust, and efficient POSIX compliant regexp matching library with some exciting features such as approximate (fuzzy) matching.
At the core of TRE is a new algorithm for regular expression matching with submatch addressing. The algorithm uses linear worst-case time in the length of the text being searched, and quadratic worst-case time in the length of the used regular expression. In other words, the time complexity of the algorithm is O(M²N), where M is the length of the regular expression and N is the length of the text. The used space is also quadratic in the length of the regex, but does not depend on the searched string. This quadratic behaviour occurs only in pathological cases which are probably very rare in practice.
TRE is not just yet another regexp matcher. TRE has some features which are not there in most free POSIX compatible implementations. Most of these features are not present in non-free implementations either, for that matter.
Approximate pattern matching allows matches to be approximate, that is, allows the matches to be close to the searched pattern under some measure of closeness. TRE uses the edit-distance measure (also known as the Levenshtein distance) where characters can be inserted, deleted, or substituted in the searched text in order to get an exact match. Each insertion, deletion, or substitution adds the distance, or cost, of the match. TRE can report the matches which have a cost lower than some given threshold value. TRE can also be used to search for matches with the lowest cost.
You could use find like this for complex regex:
find . -type f -regextype posix-extended -iregex ".*YOUR_PARTIAL_NAME.*" -print
Or this for simpler glob-like matches:
find . -type f -name "*YOUR_PARTIAL_NAME*" -print
Or you could also use find2perl (which is often faster and more optimized than find), like this:
find2perl . -type f -name "*YOUR_PARTIAL_NAME*" -print | perl
If you just want to see how Perl does it, remove the | perl part and you'll see the code it generates. It's a very good way to learn by the way.
Alternatively, write a quick bash wrapper like this, and call it whenever you want:
#! /bin/bash
FIND_BASE="$1"
GLOB_PATTERN="$2"
if [ $# -ne 2 ]; then
    echo "Syntax: $(basename "$0") <FIND_BASE> <GLOB_PATTERN>"
else
    find2perl "$FIND_BASE" -type f -name "*$GLOB_PATTERN*" -print | perl
fi
Name this something like qsearch and then call it like this: qsearch . something
Search for a file or folder in the zsh terminal and open it or navigate to it with a combination of find, fzf, vim and cd.
Install fzf in zsh and add this script to ~/.zshrc, then reload the shell: source ~/.zshrc
fzf-file-search() {
    item="$(find '/' -type d \( -path '/proc/*' -o -path '/dev/*' \) -prune -false -o -iname '*' 2>/dev/null | FZF_DEFAULT_OPTS="--height ${FZF_TMUX_HEIGHT:-40%} --reverse --bind=ctrl-z:ignore $FZF_DEFAULT_OPTS $FZF_CTRL_T_OPTS" $(__fzfcmd) -m "$@")"
    if [[ -d ${item} ]]; then
        cd "${item}" || return 1
    elif [[ -f ${item} ]]; then
        (vi "${item}" < /dev/tty) || return 1
    else
        return 1
    fi
    zle accept-line
}
zle -N fzf-file-search
bindkey '^f' fzf-file-search
Press the keyboard shortcut Ctrl+F to run it; this can be changed in the bindkey '^f' line. It searches (find) through all files/folders (fzf) and, depending on the file type, navigates to the directory (cd) or opens the file in a text editor (vim).
Also quickly open recent files/folders with fasd:
fasd-fzf-cd-vi() {
    item="$(fasd -Rl "$1" | fzf -1 -0 --no-sort +m)"
    if [[ -d ${item} ]]; then
        cd "${item}" || return 1
    elif [[ -f ${item} ]]; then
        (vi "${item}" < /dev/tty) || return 1
    else
        return 1
    fi
    zle accept-line
}
zle -N fasd-fzf-cd-vi
bindkey '^e' fasd-fzf-cd-vi
zle -N fasd-fzf-cd-vi
bindkey '^e' fasd-fzf-cd-vi
Keyboard shortcut 'Ctrl+E'
Check out other useful tips and tricks for fast navigation inside the terminal: https://github.com/webdev4422/.dotfiles

How can I use xargs to copy files that have spaces and quotes in their names?

I'm trying to copy a bunch of files below a directory and a number of the files have spaces and single-quotes in their names. When I try to string together find and grep with xargs, I get the following error:
find .|grep "FooBar"|xargs -I{} cp "{}" ~/foo/bar
xargs: unterminated quote
Any suggestions for a more robust usage of xargs?
This is on Mac OS X 10.5.3 (Leopard) with BSD xargs.
You can combine all of that into a single find command:
find . -iname "*foobar*" -exec cp -- "{}" ~/foo/bar \;
This will handle filenames and directories with spaces in them. You can use -name instead of -iname to get case-sensitive results.
Note: The -- flag passed to cp prevents it from processing files starting with - as options.
find . -print0 | grep -z 'FooBar' | xargs -0 ...
I don't know whether grep supports -z (--null-data, which makes it read and write NUL-delimited records; the --null flag only NUL-terminates output file names), nor whether xargs supports -0, on Leopard, but on GNU it's all good.
The easiest way to do what the original poster wants is to change the delimiter from any whitespace to just the end-of-line character like this:
find whatever ... | xargs -d "\n" cp -t /var/tmp
This is more efficient as it does not run "cp" multiple times:
find -name '*FooBar*' -print0 | xargs -0 cp -t ~/foo/bar
I ran into the same problem. Here's how I solved it:
find . -name '*FooBar*' | sed 's/.*/"&"/' | xargs cp -t ~/foo/bar
I used sed to substitute each line of input with the same line, but surrounded by double quotes. From the sed man page, "...An ampersand (``&'') appearing in the replacement is replaced by the string matching the RE..." -- in this case, .*, the entire line.
This solves the xargs: unterminated quote error.
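To see what the sed stage produces, try it on a name with spaces:
$ echo "my file.txt" | sed 's/.*/"&"/'
"my file.txt"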
This method works on Mac OS X v10.7.5 (Lion):
find . | grep FooBar | xargs -I{} cp {} ~/foo/bar
I also tested the exact syntax you posted. That also worked fine on 10.7.5.
Just don't use xargs. It is a neat program, but it doesn't go well with find when faced with non-trivial cases.
Here is a portable (POSIX) solution, i.e. one that doesn't require find, xargs or cp GNU specific extensions:
find . -name "*FooBar*" -exec sh -c 'cp -- "$#" ~/foo/bar' sh {} +
Note the ending + instead of the more usual ;.
This solution:
correctly handles files and directories with embedded spaces, newlines or other exotic characters.
works on any Unix and Linux system, even those not providing the GNU toolkit.
doesn't use xargs which is a nice and useful program, but requires too much tweaking and non standard features to properly handle find output.
is also more efficient (read: faster) than the accepted answer and most, if not all, of the other answers.
Note also that despite what is stated in some other replies or comments quoting {} is useless (unless you are using the exotic fishshell).
Look into using the --null commandline option for xargs with the -print0 option in find.
For those who rely on commands other than find, e.g. ls:
find . | grep "FooBar" | tr \\n \\0 | xargs -0 -I{} cp "{}" ~/foo/bar
find | perl -lne 'print quotemeta' | xargs ls -d
I believe that this will work reliably for any character except line-feed (and I suspect that if you've got line-feeds in your filenames, then you've got worse problems than this). It doesn't require GNU findutils, just Perl, so it should work pretty-much anywhere.
I have found that the following syntax works well for me.
find /usr/pcapps/ -mount -type f -size +1000000c | perl -lpe ' s{ }{\\ }g ' | xargs ls -l | sort +4nr | head -200
In this example, I am looking for the largest 200 files over 1,000,000 bytes in the filesystem mounted at "/usr/pcapps".
The Perl one-liner between find and xargs escapes/quotes each blank, so xargs passes any filename with embedded blanks to ls as a single argument.
Frame challenge — you're asking how to use xargs. The answer is: you don't use xargs, because you don't need it.
The comment by user80168 describes a way to do this directly with cp, without calling cp for every file:
find . -name '*FooBar*' -exec cp -t /tmp -- {} +
This works because:
the cp -t flag allows you to give the target directory near the beginning of cp, rather than near the end. From man cp:
-t, --target-directory=DIRECTORY
copy all SOURCE arguments into DIRECTORY
The -- flag tells cp to interpret everything after as a filename, not a flag, so files starting with - or -- do not confuse cp; you still need this because the -/-- characters are interpreted by cp, whereas any other special characters are interpreted by the shell.
The find -exec command {} + variant essentially does the same as xargs. From man find:
-exec command {} +
This variant of the -exec action runs the specified command on
the selected files, but the command line is built by appending
each selected file name at the end; the total number of
invocations of the command will be much less than the number of
matched files. The command line is built in much the same way
that xargs builds its command lines. Only one instance of `{}'
is allowed within the command, and (when find is being invoked
from a shell) it should be quoted (for example, '{}') to protect
it from interpretation by shells. The command is executed in
the starting directory. If any invocation returns a non-zero
value as exit status, then find returns a non-zero exit status.
If find encounters an error, this can sometimes cause an
immediate exit, so some pending commands may not be run at all.
This variant of -exec always returns true.
By using this in find directly, this avoids the need of a pipe or a shell invocation, such that you don't need to worry about any nasty characters in filenames.
With Bash (not POSIX) you can use process substitution to get the current line inside a variable. This enables you to use quotes to escape special characters:
while IFS= read -r line ; do cp "$line" ~/bar ; done < <(find . | grep foo)
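This still breaks on filenames containing newlines; a sketch of a newline-safe variant (assuming GNU find for -print0, with the grep folded into find's -name test) is:
while IFS= read -r -d '' line ; do cp "$line" ~/bar ; done < <(find . -name '*foo*' -print0)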
Be aware that most of the options discussed in other answers are not standard on platforms that do not use the GNU utilities (Solaris, AIX, HP-UX, for instance). See the POSIX specification for 'standard' xargs behaviour.
I also find the behaviour of xargs whereby it runs the command at least once, even with no input, to be a nuisance.
I wrote my own private version of xargs (xargl) to deal with the problem of spaces in names (only newlines separate), though the 'find ... -print0' and 'xargs -0' combination is pretty neat, given that file names cannot contain ASCII NUL '\0' characters. My xargl isn't as complete as it would need to be to be worth publishing, especially since GNU has facilities that are at least as good.
For me, I was trying to do something a little different. I wanted to copy my .txt files into my tmp folder. The .txt filenames contain spaces and apostrophe characters. This worked on my Mac.
$ find . -type f -name '*.txt' | sed 's/'"'"'/\'"'"'/g' | sed 's/.*/"&"/' | xargs -I{} cp -v {} ./tmp/
If the find and xargs versions on your system don't support the -print0 and -0 switches (for example, AIX find and xargs), you can use this terrible-looking code:
find . -name "*foo*" | sed -e "s/'/\\\'/g" -e 's/"/\\"/g' -e 's/ /\\ /g' | xargs cp /your/dest
Here sed will take care of escaping the spaces and quotes for xargs.
Tested on AIX 5.3
I created a small portable wrapper script called "xargsL" around "xargs" which addresses most of the problems.
Contrary to xargs, xargsL accepts one pathname per line. The pathnames may contain any character except (obviously) newline or NUL bytes.
No quoting is allowed or supported in the file list - your file names may contain all sorts of whitespace, backslashes, backticks, shell wildcard characters and the like - xargsL will process them as literal characters, no harm done.
As an added bonus feature, xargsL will not run the command at all if there is no input!
Note the difference:
$ true | xargs echo no data
no data
$ true | xargsL echo no data # No output
Any arguments given to xargsL will be passed through to xargs.
Here is the "xargsL" POSIX shell script:
#! /bin/sh
# Line-based version of "xargs" (one pathname per line which may contain any
# amount of whitespace except for newlines) with the added bonus feature that
# it will not execute the command if the input file is empty.
#
# Version 2018.76.3
#
# Copyright (c) 2018 Guenther Brunthaler. All rights reserved.
#
# This script is free software.
# Distribution is permitted under the terms of the GPLv3.
set -e
trap 'test $? = 0 || echo "$0 failed!" >& 2' 0
if IFS= read -r first
then
    {
        printf '%s\n' "$first"
        cat
    } | sed 's/./\\&/g' | xargs ${1+"$@"}
fi
Put the script into some directory in your $PATH and don't forget to make it executable:
$ chmod +x xargsL
bill_starr's Perl version won't work well for embedded newlines (it only copes with spaces). For those on e.g. Solaris where you don't have the GNU tools, a more complete version might be (using sed)...
find -type f | sed 's/./\\&/g' | xargs grep string_to_find
adjust the find and grep arguments or other commands as you require, but the sed will fix your embedded newlines/spaces/tabs.
I used bill_starr's answer, slightly modified, on Solaris:
find . -mtime +2 | perl -pe 's{^}{\"};s{$}{\"}' > ~/output.file
This will put quotes around each line. I didn't use the '-l' option although it probably would help.
The file list I was going through might have '-', but not newlines. I haven't used the output file with any other commands, as I want to review what was found before I just start massively deleting files via xargs.
I played with this a little, started contemplating modifying xargs, and realised that for the kind of use case we're talking about here, a simple reimplementation in Python is a better idea.
For one thing, having ~80 lines of code for the whole thing means it is easy to figure out what is going on, and if different behaviour is required, you can just hack it into a new script in less time than it takes to get a reply on somewhere like Stack Overflow.
See https://github.com/johnallsup/jda-misc-scripts/blob/master/yargs and https://github.com/johnallsup/jda-misc-scripts/blob/master/zargs.py.
With yargs as written (and Python 3 installed) you can type:
find .|grep "FooBar"|yargs -l 203 cp --after ~/foo/bar
to do the copying 203 files at a time. (Here 203 is just a placeholder, of course, and using a strange number like 203 makes it clear that this number has no other significance.)
If you really want something faster and without the need for Python, take zargs and yargs as prototypes and rewrite in C++ or C.
You might need to grep for the FooBar directory, like:
find . -name "file.ext"| grep "FooBar" | xargs -i cp -p "{}" .
If you are using Bash, you can convert stdout to an array of lines by mapfile:
find . | grep "FooBar" | (mapfile -t; cp "${MAPFILE[#]}" ~/foobar)
The benefits are:
It's built-in, so it's faster.
It executes the command with all file names at once, so it's faster.
You can append other arguments to the file names. For cp, you can also:
find . -name '*FooBar*' -exec cp -t ~/foobar -- {} +
however, some commands don't have such a feature.
The disadvantages:
It may not scale well if there are too many file names. (The limit? I don't know, but I tested with a 10 MB list file which included 10000+ file names with no problem, under Debian.)
Well... who knows if Bash is available on OS X?
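As a sketch, assuming bash 4.4+ (where mapfile accepts a -d delimiter), a NUL-delimited variant avoids trouble with newlines in file names:
find . -name '*FooBar*' -print0 | (mapfile -t -d ''; cp -- "${MAPFILE[@]}" ~/foobar)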
