xargs bash -c unexpected token - linux

I'm experiencing an issue calling xargs inside a bash script to parallelize the launch of a function.
I have this line:
grep -Ev '^#|^$' "$listOfTables" | xargs -d '\n' -l1 -I args -P"$parallels" bash -c "doSqoop 'args'"
that launches the function doSqoop that I previously exported.
I am passing to xargs and then to bash -c a single, very long line, containing fields that I split and handle inside the function.
It is something like schema|tab|dest|desttab|query|splits|.... that I read from a file via the grep command above. I am fine with this approach; I know xargs can split the line on |, but I'm OK this way.
It used to work well until I had to add another field at the end, which contains this kind of value:
field1='varchar(12)',field2='varchar(4)',field3='timestamp',....
Now I have this error:
bash: -c: line 0: syntax error near unexpected token '('
I tried to escape the parentheses and the single quotes, without success.
It appears to me that bash -c is interpreting the arguments.

Use GNU parallel, which can call exported functions, has an easier syntax, and offers many more capabilities.
Your sample command could be replaced with:
grep -Ev '^#|^$' file | parallel doSqoop
Test with below script:
#!/bin/bash
doSqoop() {
    printf "%s\n" "$@"
}
export -f doSqoop
grep -Ev '^#|^$' file | parallel doSqoop
You can also set the number of processes with the -P option, otherwise it matches the number of cores in your system:
grep -Ev '^#|^$' file | parallel -P "$num" doSqoop
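Alternatively, if you want to stay with xargs, the usual fix for the quoting problem is to pass each line to bash -c as a positional parameter instead of interpolating it into the command string, so the parentheses and quotes in the data are never parsed as shell syntax. A minimal sketch, assuming doSqoop has been exported with export -f:
grep -Ev '^#|^$' "$listOfTables" |
    xargs -d '\n' -n1 -P"$parallels" bash -c 'doSqoop "$1"' _
Here _ fills the $0 slot, and each input line arrives in the function untouched as $1.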

Related

How to grep text patterns from remote crontabs using xargs through SSH?

I'm developing a script to search for patterns within scripts executed from CRON on a bunch of remote servers through SSH.
Script on client machine -- SSH --> Remote Servers CRON/Scripts
For now I can't get the correct output.
Script on client machine
#!/bin/bash
server_list=( '172.x.x.x' '172.x.x.y' '172.x.x.z' )
for s in ${server_list[@]}; do
    ssh -i /home/user/.ssh/my_key.rsa user@${s} crontab -l | grep -v '^#\|^[[:space:]]*$' | cut -d ' ' -f 6- | awk '{print $1}' | grep -v '^$\|^echo\|^find\|^PATH\|^/usr/bin\|^/bin/' | xargs -0 grep -in 'server.tld\|10.x.x.x'
done
This only gives me the paths of the scripts from the crontabs, not the matching lines and line numbers; also, the first line is prefixed with "grep:" (example below):
grep: /opt/directory/script1.sh
/opt/directory/script2.sh
/opt/directory/script3.sh
/opt/directory/script4.sh
How do I get the proper output, meaning the script path plus the line number plus the matching line?
Remote CRON examples
00 6 * * * /opt/directory/script1.sh foo
30 6 * * * /opt/directory/script2.sh bar
Remote script content examples
1) This will match the grep pattern
#!/bin/bash
ping -c 4 server.tld && echo "server.tld ($1)"
2) This won't match the grep pattern
#!/bin/bash
ping -c 4 8.x.x.x && echo "8.x.x.x ($1)"
Without example input, it's really hard to see what your script is attempting to do. But the cron parsing could almost certainly be simplified tremendously by refactoring all of it into a single Awk script. Here is a quick stab, with obviously no way to test.
#!/bin/sh
# No longer using an array for no good reason, so /bin/sh will work
for s in 172.x.x.x 172.x.x.y 172.x.x.z; do
    ssh -i /home/user/.ssh/my_key.rsa "user@${s}" crontab -l |
    awk '! /^#|^[[:space:]]*$/ && $6 !~ /^$|^(echo|find|PATH|\/usr\/bin|\/bin\/)/ { print $6 }' |
    # no -0; use grep -E and properly quote literal dot
    xargs grep -Ein 'server\.tld|10.x.x.x'
done
Your command would not output null-delimited data to xargs, so the immediate problem was probably that xargs -0 received all the file names as a single file name, which obviously does not exist; and you forgot to include the ": file not found" from the end of the error message.
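You can reproduce that symptom in isolation (hypothetical paths; the exact wording of the error depends on your grep):
$ printf '/opt/a.sh\n/opt/b.sh\n' | xargs -0 grep -in 'pattern'
grep: /opt/a.sh
/opt/b.sh
: No such file or directory
With -0 but no NUL bytes in the input, the whole stream, newlines included, becomes one "file name", so grep's single error message wraps across several lines, just like the output in the question.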
The use of grep -E is a minor hack to enable a more modern regex syntax which is more similar to that in Awk, where you don't have to backslash the "or" pipe etc.
This script, like your original, runs grep on the local system where you run the SSH script. If you want to run the commands on the remote server, you will need to refactor to put the entire pipeline in single quotes or a here document:
for s in 172.x.x.x 172.x.x.y 172.x.x.z; do
ssh -i /home/user/.ssh/my_key.rsa "user@${s}" <<\________HERE
crontab -l |
awk '! /^#|^[[:space:]]*$/ && $6 !~ /^$|^(echo|find|PATH|\/usr\/bin|\/bin\/)/ { print $6 }' |
xargs grep -Ein 'server\.tld|10.x.x.x'
________HERE
done
The refactored script contains enough quoting complexity that you probably don't want to pass it as an argument to ssh, which would require you to figure out how to quote strings both locally and remotely. It's easier, then, to pass it as standard input, which simply gets transmitted verbatim.
If you get "Pseudo-terminal will not be allocated because stdin is not a terminal.", try using ssh -t. Sometimes you need to add multiple -t options to completely get rid of this message.

xargs pipe non empty stdin lines to command while preserving double quotes

I'm trying to have a script listen to stdin (so that I can run it and it doesn't immediately exit), execute only when stdin is not empty, and then pipe each stdin line to another command.
Right now I'm using the command from the answer here:
xargs -I {} sh -c 'echo {} | foo'
I want to preserve double quotes from stdin; for that, people suggest using -d '\n', but this causes foo to run on empty lines.
I looked into possible GNU Parallel solutions but couldn't find anything.
Here is my stdout:
>xargs -I {} sh -c 'echo {} | foo'
bar
I have executed for 'bar'
"bar"
I have executed for 'bar' //notice the double quotes missing
^C
>xargs -I {} sh -c "echo '{}' | foo"
bar
I have executed for 'bar'
"bar"
I have executed for 'bar' //Same thing, double quotes missing
^C
>xargs -d '\n' -I {} sh -c "echo {} | foo"
i have executed for '' //doesn't ignore empty lines anymore
i have executed for ''
bar
i have executed for 'bar'
"bar"
i have executed for 'bar'
Desired output:
bar
I have executed for 'bar'
"bar"
I have executed for '"bar"'
Running
echo '"bar"' | foo
gets me
I have executed for '"bar"'
If, as your tags suggest, you are running on Linux, you have GNU xargs, which supports the -0 option. Then, you can pass in completely arbitrary text, including even newlines:
printf '%s\0' "foo" "'bar'" '"baz"' 'quux
with a newline' | xargs -0 foo
Removing empty lines could be accomplished with a simple grep in front. There is also xargs -r which says to not run the command if xargs receives empty input (this too is a GNU extension).
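The effect of -r is easy to see with empty input; GNU xargs normally runs the command once even when there is nothing to read:
$ printf '' | xargs echo ran
ran
$ printf '' | xargs -r echo ran
$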
Your attempts are slightly problematic, though; you should pass the arguments as command-line arguments rather than have xargs interpolate them into the sh -c '... {} ...' string literally.
Slightly depending on your requirements, this could even work portably on other platforms:
xargs sh -c 'if [ $# -gt 0 ]; then echo "$@" | foo; fi' _
The _ is just a placeholder; the arguments after sh -c '...' are used to populate $0, $1, $2, etc., and so we put in something, anything, to occupy the slot for $0.
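A quick way to see how those slots get populated (a throwaway demo, not part of the solution):
$ echo 'a b c' | xargs sh -c 'echo "zeroth:$0 first:$1 count:$#"' _
zeroth:_ first:a count:3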
GNU Parallel uses this internally:
perl -e 'if(sysread(STDIN,$buf,1)){open($fh,"|-",@ARGV)||die;syswrite($fh,$buf);if($read=sysread(STDIN,$buf,131071)){syswrite($fh,$buf);}while($read=sysread(STDIN,$buf,131072)){syswrite($fh,$buf);}close$fh;exit($?&127?128+($?&127):1+$?>>8)}' /usr/bin/bash -c 'wc -l'
If you only want a single line, try:
seq 3 | parallel --pipe -N1 wc -c
echo "'foo'" | parallel --pipe -N1 --rrs "echo -n i have executed for \"'\";cat;echo \"'\""
echo '"foo"' | parallel --pipe -N1 --rrs "echo -n i have executed for \"'\";cat;echo \"'\""
I want to preserve double quotes from stdin; for that, people suggest using -d '\n', but this causes foo to run on empty lines.
xargs performs quote processing by default unless you specify a delimiter via either -d/--delimiter or -0/--null. You must use one of these to avoid xargs removing the quotes you are trying to preserve.
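The default stripping is easy to demonstrate:
$ echo '"bar"' | xargs printf '%s\n'
bar
$ echo '"bar"' | xargs -d '\n' printf '%s\n'
"bar"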
What's more, supposing that you manage to pass the quoted input through xargs unchanged, the shell that xargs launches to run the command will perform its own quote removal, as well as parameter expansion, variable assignments, redirection processing, etc. You can observe the effects of that directly with this variation on your command:
$ xargs -d '\n' -I{} sh -c 'echo {} >>tmp.txt'
bar
'bar'
$ cat tmp.txt
bar
bar
$
Note that the quotes are removed despite specifying a delimiter to xargs.
It's a bit hard to parse your exact requirements, but it sounds like you just want to filter empty lines out of the standard input to some command. sed can do that pretty easily:
foo() {
    while IFS= read -r line; do
        echo "I have executed for '$line'"
    done
}
$ sed '/\S/!d' | foo
bar
"bar"
A whole line with "quotes" and 'quotes' and metacharacters > < !
I have executed for 'bar'
I have executed for '"bar"'
I have executed for 'A whole line with "quotes" and 'quotes' and metacharacters > < !'
$
Explanation of the sed command: the regex /\S/ matches any non-whitespace character, anywhere on the line. The ! negates the match, and the d deletes lines matching the (negated) pattern -- that is, any line that does not contain at least one non-whitespace character.
As you can see in the example run transcript, there is a difference in buffering between your example command and the effect of filtering with sed. It's unclear whether that's important to you.
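If you want both behaviors at once, preserved quotes and no runs on blank lines, while still invoking foo once per line, here is a hedged combination of the two ideas (assuming GNU xargs; if foo is a shell function rather than a command, it would additionally need export -f foo and bash -c in place of sh -c):
sed '/\S/!d' | xargs -d '\n' -n1 sh -c 'printf "%s\n" "$1" | foo' _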

Move a file list based upon grep pattern in command line [duplicate]

I want to pass each line of output from a command as an argument to a second command, e.g.:
grep "pattern" input
returns:
file1
file2
file3
and I want to copy these outputs, e.g:
cp file1 file1.bac
cp file2 file2.bac
cp file3 file3.bac
How can I do that in one go? Something like:
grep "pattern" input | cp $1 $1.bac
You can use xargs:
grep 'pattern' input | xargs -I% cp "%" "%.bac"
You can use $() to interpolate the output of a command. So, you could use kill -9 $(grep -hP '^\d+$' $(ls -lad /dir/*/pid | grep -P '/dir/\d+/pid' | awk '{ print $9 }')) if you wanted to.
In addition to Chris Jester-Young's good answer, I would say that xargs is also a good solution for these situations:
grep ... `ls -lad ... | awk '{ print $9 }'` | xargs kill -9
will do the job. All together:
grep -hP '^\d+$' `ls -lad /dir/*/pid | grep -P '/dir/\d+/pid' | awk '{ print $9 }'` | xargs kill -9
For completeness, I'll also mention command substitution and explain why this is not recommended:
cp $(grep -l "pattern" input) directory/
(The backtick syntax cp `grep -l "pattern" input` directory/ is roughly equivalent, but it is obsolete and unwieldy; don't use that.)
This will fail if the output from grep produces a file name which contains whitespace or a shell metacharacter.
Of course, it's fine to use this if you know exactly which file names the grep can produce, and have verified that none of them are problematic. But for a production script, don't use this.
Anyway, for the OP's scenario, where you need to refer to each match individually and add an extension to it, the xargs or while read alternatives are superior.
In the worst case (meaning problematic or unspecified file names), pass the matches to a subshell via xargs:
grep -l "pattern" input |
xargs -r sh -c 'for f; do cp "$f" "$f.bac"; done' _
... where obviously the script inside the for loop could be arbitrarily complex.
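The while read alternative mentioned above would look something like this (a sketch; it copes with spaces in file names, though still not with embedded newlines):
grep -l "pattern" input |
while IFS= read -r f; do
    cp "$f" "$f.bac"
done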
In the ideal case, the command you want to run is simple (or versatile) enough that you can simply pass it an arbitrarily long list of file names. For example, GNU cp has a -t option to facilitate this use of xargs (the -t option allows you to put the destination directory first on the command line, so you can put as many files as you like at the end of the command):
grep -l "pattern" input | xargs cp -t destdir
which will expand into
cp -t destdir file1 file2 file3 file4 ...
for as many matches as xargs can fit onto the command line of cp, repeated as many times as it takes to pass all the files to cp. (Unfortunately, this doesn't match the OP's scenario; if you need to rename every file while copying, you need to pass in just two arguments per cp invocation: the source file name and the destination file name to copy it to.)
So in other words, if you use the command substitution syntax and grep produces a really long list of matches, you risk bumping into ARG_MAX and "Argument list too long" errors; xargs specifically avoids this by passing only as many arguments as it safely can to cp at a time, and running cp multiple times if necessary.
The above will still work incorrectly if you have file names which contain newlines. Perhaps see also https://mywiki.wooledge.org/BashFAQ/020
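Incidentally, the batching is easy to watch. Each output line below corresponds to one echo invocation; the exact count depends on your system's limits, but on a typical Linux box it is greater than 1:
$ seq 200000 | xargs echo | wc -l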
#!/bin/bash
for f in files; do
    if grep -q PATTERN "$f"; then
        echo cp -v "$f" "${f}.bac"
    fi
done
Here files is a placeholder: replace it with a glob such as *.txt or *.text (i.e., files ending in .txt or .text), or with whatever you actually need, and replace PATTERN with your pattern. Remove the echo once you're satisfied with the output. For a recursive solution, take a look at the bash shell option globstar.
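A recursive sketch of the same idea using globstar (bash 4+; the *.txt pattern is just an example):
#!/bin/bash
shopt -s globstar nullglob
for f in **/*.txt; do
    if grep -q PATTERN "$f"; then
        echo cp -v "$f" "${f}.bac"
    fi
done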

Run multiple commands in shell script using `find` and `xargs`

I'm attempting to echo the filename and then run sqlcmd directly after; however, I simply cannot figure out how to combine the statements in a way that works.
I have tried:
Creating a new function (apparently this can't be done in plain sh, only in bash)
Separating with a semicolon (both escaped and without)
Using the double && as per this example
Any tips please.
find "./src/ALs/Database/SQL" -iname "*.sql" | sort -n | xargs -0 -I % sh -c 'echo % && sqlcmd -S $SQL_HOST -d Database -U $SQL_USER -P $SQL_PWD -i %'
I think that if you remove -0 from xargs it'll do what you expect. That flag is for null-byte-separated input, whereas you are passing it newline-separated input.
To work with null-byte-separated records throughout the pipeline use find ... -print0, sort -z and xargs -0. This is the most robust way to pass records through a pipeline (it won't break, no matter what your filenames are).
find "./src/AdviserLinks/Database/SQL" -iname "*.sql" -print0 |
sort -zn |
xargs -0 -n1 sh -c 'echo "$0" &&
sqlcmd -S "$SQL_HOST" -d WebSupportDatabase -U "$SQL_USER" -P "$SQL_PWD" -i "$0"'
This assumes that the $SQL variables are exported to the environment.
I have replaced -I % with -n1, which will process records one at a time. Each filename is passed to sh as $0, which can be used safely; there is no risk that the contents of the record are interpreted as shell syntax, as was the case with -I % in your attempt. Note that this means a separate child shell is invoked for every file; it would be more efficient to use a loop as in Charles' answer.
As for using separate statements vs &&, that depends on whether you want the execution of the second command to be conditional on the success of the first command.
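To illustrate the difference:
$ sh -c 'false ; echo runs regardless'
runs regardless
$ sh -c 'false && echo runs only on success'
$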
Aiming to combine both safety and performance (invoking sh only once per list of sql files that fit on a command line):
find "./src/AdviserLinks/Database/SQL" -iname "*.sql" -print0 |
sort -zn |
xargs -0 sh -c '
    for arg do
        echo "$arg"
        sqlcmd -S "$SQL_HOST" -d WebSupportDatabase -U "$SQL_USER" -P "$SQL_PWD" -i "$arg"
    done
' _
Note:
We aren't using the -I argument to xargs at all. Instead of using a sigil, we let xargs concatenate as many items as possible to the end of the argument list for sh.
Within the sh command, for arg do loops over "$@" by default; thus, it assigns $1, $2, etc. in turn to the variable named arg, so that just one copy of sh can process several SQL files.
We're letting all the expansions of values like SQL_HOST, SQL_USER and SQL_PWD be performed by the child shell, instead of attempting to do them in the parent (note that this does require that these values be exported to the environment, rather than merely set as process-local shell variables). This change means that a SQL password that might have characters meaningful to the shell doesn't risk being parsed as syntax.
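For instance, before running the pipeline (values elided here; set them to whatever your environment needs):
export SQL_HOST SQL_USER SQL_PWD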

Preserve '\n' newline in returned text over ssh

If I execute a find command, with grep and sort etc. in the local command line, I get returned lines like so:
# find ~/logs/ -iname 'status' | xargs grep 'last seen' | sort --field-separator=: -k 4 -g
0:0:line:1
0:0:line:2
0:0:line:3
If I execute the same command over ssh, the returned text prints without newlines, like so:
# VARcmdChk="$(ssh ${VARuser}@${VARserver} "find ~/logs/ -iname 'status' | xargs grep 'last seen' | sort --field-separator=: -k 4 -g")"
# echo ${VARcmdChk}
0:0:line:1 0:0:line:2 0:0:line:3
I'm trying to understand why ssh is sanitising the returned text so that newlines are converted to spaces. I have not yet tried outputting to a file and then pulling that back with scp. That seems a waste, since I just want to view the remote results locally.
When you echo the variable VARcmdChk, you should enclose it in double quotes:
$ VARcmdChk=$(ssh ${VARuser}@${VARserver} "find tmp/ -iname status -exec grep 'last seen' {} \; | sort --field-separator=: -k 4 -g")
$ echo "${VARcmdChk}"
last seen:11:22:33:44:55:66:77:88:99:00
last seen:00:99:88:77:66:55:44:33:22:11
Note that I've replaced your xargs for -exec.
OK, the question is partly a duplicate of Why does shell Command Substitution gobble up a trailing newline char?, so it is partly answered there.
However, I say partly because the answers there explain why this happens, but the only clue to a solution is a small answer right at the end.
The solution is to quote the echo argument, as the solution suggests:
# VARcmdChk="$(ssh ${VARuser}@${VARserver} "find ~/logs/ -iname 'status' | xargs grep 'last seen' | sort --field-separator=: -k 4 -g")"
# echo "${VARcmdChk}"
0:0:line:1
0:0:line:2
0:0:line:3
but there is no explanation of why this works, since the assumption is that the variable holds a string and so should print as expected. However, reading Expansion of variable inside single quotes in a command in Bash provides the clue to preserving newlines etc. in a string: placing the variable in double quotes when printing it with echo preserves its contents exactly, and you get the expected output.
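The mechanics are easy to reproduce locally:
$ v=$(printf 'a\nb')
$ echo $v
a b
$ echo "$v"
a
b
Unquoted, the shell word-splits the value and echo joins the words with spaces; quoted, the embedded newline survives.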
The unquoted echo of the variable is what puts everything onto one line. Running the following command will output the results as expected:
ssh ${VARuser}#${VARserver} "find ~/logs/ -iname 'status' | xargs grep 'last seen' | sort --field-separator=: -k 4 -g"
To get the command output to show each result on a new line, like it does when you run the command locally, you can use awk to split the results onto new lines:
awk '{print $1"\n"$2}'
This method can be appended to your command like this:
echo ${VARcmdChk} | awk '{print $1"\n"$2"\n"$3"\n"$4}'
Alternatively, you can put quotes around the variable as per your answer:
echo "${VARcmdChk}"
