echo $TMPLIST | xargs -I{} -n 1 -P $MAXJOBS curl -o {}_$DATESTRING.dump `get-temp-url --location {}`
$TMPLIST has a list of locations which I want processed.
I am trying to run something similar to the above, but the braces inside the backticks do not get expanded. What am I doing wrong?
In this command...
echo $TMPLIST |
xargs -I{} -n 1 -P $MAXJOBS curl -o {}_$DATESTRING.dump \
`get-temp-url --location {}`
...the backticks are interpreted by the shell; they are never seen by xargs. You could do something like this:
echo $TMPLIST |
xargs -I{} -n 1 -P $MAXJOBS \
sh -c 'curl -o {}_$DATESTRING.dump `get-temp-url --location {}`'
Note that for this to work, DATESTRING will need to be an environment variable, rather than a shell variable (e.g., you would need to export DATESTRING).
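If you'd rather not export DATESTRING, a variant that passes everything as positional arguments also avoids substituting {} into the shell string (a minimal sketch; it assumes get-temp-url prints the URL on stdout):
echo $TMPLIST |
xargs -n 1 -P $MAXJOBS sh -c \
'curl -o "${2}_${1}.dump" "$(get-temp-url --location "$2")"' _ "$DATESTRING"
Here $1 is the date string, handed over once by the outer shell, and $2 is the location appended by xargs, so neither an export nor a {} substitution is needed.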
cat domains.txt | xargs -P10 -I % ffuf -u %/FUZZ -w wordlist.txt -o output.json
ffuf is used for directory and file bruteforcing, while domains.txt contains valid HTTP and HTTPS URLs like http://example.com, http://example2.com. I used xargs to speed up the process by running 10 parallel instances. The problem is that I am unable to store the output of each instance separately: output.json gets overwritten by every running instance. Is there anything we can do to make output.json unique for every instance so that all the data gets saved separately? I tried ffuf/$(date '+%s').json instead, but it didn't work either.
Sure. Just name your output file using the domain. E.g.:
xargs -P10 -I % ffuf -u %/FUZZ -w wordlist.txt -o output-%.json < domains.txt
(I dropped cat because it was unnecessary.)
I missed the fact that your domains.txt file is actually a list of URLs rather than a list of domain names. I think the easiest fix is just to simplify domains.txt to be just domain names, but you could also try something like:
xargs -P10 -I % sh -c 'domain="%"; ffuf -u %/FUZZ -w wordlist.txt -o output-${domain##*/}.json' < domains.txt
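The ${domain##*/} expansion strips the longest prefix ending in a slash, so for bare URLs like the ones in domains.txt only the host part remains. For example:
d='http://example.com'
echo "${d##*/}"    # prints: example.com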
cat domains.txt | xargs -P10 -I % sh -c "ping % > output.json.%"
Like this, your "%" can be part of the file name. (I changed your command to ping for my testing.)
So maybe something more like this:
cat domains.txt | xargs -P10 -I % sh -c "ffuf -u %/FUZZ -w wordlist.txt -o output.json.%"
I would replace your ffuf command with the following script and call that from the xargs command. It strips out the characters that are invalid in a file name, replaces them with a dot, and then runs the command:
#!/usr/bin/bash
URL=$1
FILE=$(echo "$URL" | sed 's/:\/\//./g')
ffuf -u "${URL}/FUZZ" -w wordlist.txt -o "output-${FILE}.json"
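Assuming that script is saved as /path/to/ffuf-wrap.sh (a made-up name) and made executable, the xargs call becomes:
xargs -P10 -I % /path/to/ffuf-wrap.sh % < domains.txt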
I am trying to remove files via ssh using find and xargs. On the local machine the command works correctly. I am also using an exception list.
I have tried changing the quotes between ' ' and " ", but it does not work.
save_files=(test1 test2)
On the local machine:
find / -mindepth 1 | grep -vE "$(IFS=\| && echo "${save_files[*]}")" | xargs rm -rf
via ssh:
su - user -c "ssh host find / -mindepth 1 | grep -vE '$(IFS=\| && echo "${save_files[*]}")' | xargs rm -rf"
In the above ssh command, xargs runs locally, but I need it to run on the remote machine. I even tried putting the find command in single quotes, like below:
su - user -c "ssh host 'find / -mindepth 1 | grep -vE '$(IFS=\| && echo "${save_files[*]}")' | xargs rm -rf'"
Starting another answer, trying again for a one-liner that avoids the intermediate script.
The key problem is the three levels of quoting, whereas the shell supports only two (double quotes and single quotes). I believe the third level can be eliminated by backslash-escaping the '|' characters in the pattern, so that the pattern no longer needs to be quoted at all.
IFS=\| P="${save_files[*]}" su - owner -c "ssh localhost 'set -vx ; find /tmp/xyz -mindepth 1 | grep -vE ${P//|/\\|} | xargs rm -rf'"
The solution creates the pattern with escapes (see ${P//|/\\|}), eliminating the need to quote the pattern. It should work as long as save_files does not contain magic characters.
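To see what the escaped pattern looks like on its own (a quick sketch; assigning IFS first makes the [*] join use '|'):
save_files=(test1 test2)
IFS=\|
P="${save_files[*]}"
echo "${P//|/\\|}"    # prints: test1\|test2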
Notes:
Notice that I've replaced '/' with /tmp/xyz, to reduce the chance that an accidental copy/paste will remove important data (it already happened on my VM :-( ).
The 'set -vx' is for debugging and troubleshooting, and can be removed.
The code for executing the command on a remote host:
su - user -c "ssh host 'find / -mindepth 1 | grep -vE '$(IFS=\| && echo "${save_files[*]}")' | xargs rm -rf'"
uses double quotes to pass the command to 'su -c', and single quotes to pass the command to 'ssh'. However, quoting does not nest, so the single quote before '$(IFS=... ...)' actually terminates the single quote that opened around 'find'.
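A quick way to convince yourself that single quotes do not nest but merely concatenate:
echo 'a '\''b'\'' c'    # prints: a 'b' c  (three strings glued together, not nesting)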
You can execute the same command with a 'set -vx' prefix to see the expansion:
su - owner -c "set -vx ; ssh localhost ...
and you will see
++ IFS='|'
++ echo 'test1|test2'
+ su - owner -c 'set -vx ; ssh localhost '\''set -x ; find /foo -mindepth 1 | grep -vE '\''test1|test2'\'' | xargs rm -rf'\'''
Password:
+ ssh localhost 'set -x ; find /foo -mindepth 1 | grep -vE test1'
+ 'test2 | xargs rm -rf'
-su: test2 | xargs rm -rf: command not found
+ find /foo -mindepth 1
+ grep -vE test1
find: ‘/foo’: No such file or directory
Proposed Solution:
I was not able to find a way to get three-level quoting (very frustrating). A possible solution is to create a helper script h.sh, which performs the 'ssh' command, and then invoke the helper with su - user -c /path/to/helper.sh.
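A minimal sketch of such a helper (host and path are placeholders):
#!/bin/bash
# h.sh - builds the pattern locally, then runs the whole pipeline on the remote host
save_files=(test1 test2)
pattern=$(IFS='|'; echo "${save_files[*]}")
ssh host "find /tmp/xyz -mindepth 1 | grep -vE '$pattern' | xargs rm -rf"
Invoking it as su - user -c /path/to/h.sh leaves only two quoting levels: the double quotes around the remote command and the single quotes around the pattern.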
Can you try this?
su - user -c "ssh host 'save_files=test; find / -mindepth 1 | grep -vE "$save_files" | xargs rm -rf'"
Or, after switching to the root user, try the one below?
ssh host 'save_files=test; find / -mindepth 1 | grep -vE "$save_files" | xargs rm -rf'
First connect to your remote system and run whatever bash command you need, like find and xargs:
ssh root@MachineB 'bash command'
I would like to create a perl or bash script that will read keyboard input and assign it to a variable, perform a fixed-string grep recursively within the current directory (which is filled with Snort logs), then automatically run tcpdump on the matched files, grep its output, and print the matching lines to the terminal. Does anyone have a good idea of how this should work?
Here is an example of the methodology I want from the script:
step 1: Read keyboard input and assign it to a variable named string.
step 2 command: grep -Fr "$string"
step 2 output: snort.log.1470609906 matches
step 3 command: tcpdump -r snort.log.1470609906 | grep -F "$string" -C10
step 3 output:
Snort log
Here's some bash code that does that:
s="google.com"
grep -Frl "$s" . |
while IFS= read -r x; do
    tcpdump -r "$x" | grep -F "$s" -C10
done
idk about perl but you can do it easily enough just in shell:
str="google.com"
find . -type f -name 'snort.log.*' -exec grep -FlZ "$str" {} + |
xargs -0 -I {} sh -c 'tcpdump -r "{}" | grep -F '"$str"' -C10'
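If you want to avoid splicing {} into the sh -c string (which breaks on file names containing quotes), a variant that passes each file as a positional argument might look like this (a sketch):
export str="google.com"
find . -type f -name 'snort.log.*' -exec grep -FlZ "$str" {} + |
xargs -0 -n 1 sh -c 'tcpdump -r "$1" | grep -F "$str" -C10' _
Here each matched file becomes $1 inside the inner shell, and the exported str is expanded there rather than being pasted into the command string.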
I am trying to run a series of commands in parallel through xargs. I created a null-separated list of commands in a file cmd_list.txt and then attempted to run them in parallel with 6 threads as follows:
cat cmd_list.txt | xargs -0 -P 6 -I % bash -c %
However, I get the following error:
bash: line 0: fg: no job control
I've narrowed the problem down to the length of the individual commands in the command list. Here's an example artificially-long command to download an image:
mkdir a-very-long-folder-de090952623b4865c2c34bd6330f8a423ed05ed8de090952623b4865c2c34bd6330f8a423ed05ed8de090952623b4865c2c34bd6330f8a423ed05ed8
wget --no-check-certificate --no-verbose -O a-very-long-folder-de090952623b4865c2c34bd6330f8a423ed05ed8de090952623b4865c2c34bd6330f8a423ed05ed8de090952623b4865c2c34bd6330f8a423ed05ed8/blah.jpg http://d4u3lqifjlxra.cloudfront.net/uploads/example/file/48/accordion.jpg
Just running the wget command on its own, without the file list and without xargs, works fine. However, running this command at the bash command prompt (again, without the file list) fails with the no job control error:
echo "wget --no-check-certificate --no-verbose -O a-very-long-folder-de090952623b4865c2c34bd6330f8a423ed05ed8de090952623b4865c2c34bd6330f8a423ed05ed8de090952623b4865c2c34bd6330f8a423ed05ed8/blah.jpg http://d4u3lqifjlxra.cloudfront.net/uploads/example/file/48/accordion.jpg" | xargs -I % bash -c %
If I leave out the long folder name and therefore shorten the command, it works fine:
echo "wget --no-check-certificate --no-verbose -O /tmp/blah.jpg http://d4u3lqifjlxra.cloudfront.net/uploads/example/file/48/accordion.jpg" | xargs -I % bash -c %
xargs has a -s (size) parameter that can change the max size of the command line length, but I tried increasing it to preposterous sizes (e.g., 16000) without any effect. I thought that the problem may have been related to the length of the string passed in to bash -c, but the following command also works without trouble:
bash -c "wget --no-check-certificate --no-verbose -O a-very-long-folder-de090952623b4865c2c34bd6330f8a423ed05ed8de090952623b4865c2c34bd6330f8a423ed05ed8de090952623b4865c2c34bd6330f8a423ed05ed8/blah.jpg http://d4u3lqifjlxra.cloudfront.net/uploads/example/file/48/accordion.jpg"
I understand that there are other options to run commands in parallel, such as the parallel command (https://stackoverflow.com/a/6497852/1410871), but I'm still very interested in fixing my setup or at least figuring out where it's going wrong.
I'm on Mac OS X 10.10.1 (Yosemite).
It looks like the solution is to avoid the -I parameter for xargs, which, per the OS X xargs man page, has a 255-byte limit on the replacement string. Instead, the -J parameter is available, which does not have a 255-byte limit.
So my command would look like:
echo "wget --no-check-certificate --no-verbose -O a-very-long-folder-de090952623b4865c2c34bd6330f8a423ed05ed8de090952623b4865c2c34bd6330f8a423ed05ed8de090952623b4865c2c34bd6330f8a423ed05ed8/blah.jpg http://d4u3lqifjlxra.cloudfront.net/uploads/example/file/48/accordion.jpg" | xargs -J % bash -c %
However, in the above command, only the portion of the replacement string before the first whitespace is passed to bash, so bash tries to execute:
wget
which obviously results in an error. My solution is to ensure that xargs interprets the commands as null-delimited instead of whitespace-delimited using the -0 parameter, like so:
echo "wget --no-check-certificate --no-verbose -O a-very-long-folder-de090952623b4865c2c34bd6330f8a423ed05ed8de090952623b4865c2c34bd6330f8a423ed05ed8de090952623b4865c2c34bd6330f8a423ed05ed8/blah.jpg http://d4u3lqifjlxra.cloudfront.net/uploads/example/file/48/accordion.jpg" | xargs -0 -J % bash -c %
and finally, this works!
Thank you to @CharlesDuffy who provided most of this insight. And no thank you to my OS X version of xargs for its poor handling of replacement strings that exceed the 255-byte limit.
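For what it's worth, the replacement string can be sidestepped entirely by letting xargs append each null-delimited command directly as the argument to bash -c (a sketch; untested across xargs flavors):
xargs -0 -n 1 -P 6 bash -c < cmd_list.txt
Since nothing is substituted into a template, the 255-byte replacement limit never comes into play.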
I suspect it's the percent symbol, and your top shell complaining.
cat cmd_list.txt | xargs -0 -P 6 -I % bash -c %
Percent is a metacharacter for job control, e.g. "fg %2" or "kill %4".
Try escaping the percents with a backslash to signal to the top shell that it should not try to interpret the percent, and xargs should be handed a literal percent character.
cat cmd_list.txt | xargs -0 -P 6 -I \% bash -c \%
I'm trying to list some ftp directories. I can't work out how to make bash execute a command that contains pipes correctly.
Here's my script:
#!/bin/sh
declare -a dirs=("/dir1" "/dir2") # ... and lots more
for d in "${dirs[@]}"
do
cmd='echo "ls /mydir/'"$d"'/*.tar*" | sftp -b - -i ~/mykey user@example.com 2>&1 | tail -n1'
$cmd
done
This just outputs:
"ls /mydir/dir1/*.tar*" | sftp -b - -i ~/mykey user@example.com 2>&1 | tail -n1
"ls /mydir/dir2/*.tar*" | sftp -b - -i ~/mykey user@example.com 2>&1 | tail -n1
How can I make bash execute the whole string including the echo? I also need to be able to parse the output of the command.
I don't think that you need to be using the -b switch at all. It should be sufficient to specify the commands that you would like to execute as a string:
#!/bin/bash
dirs=("/dir1" "/dir2")
for d in "${dirs[@]}"
do
    printf -v d_str '%q' "$d"
    sftp -i ~/mykey user@example.com "ls /mydir/$d_str/*.tar*" 2>&1 | tail -n1
done
As suggested in the comments (thanks @Charles), I've used printf with the %q format specifier to protect against characters in the directory name that may be interpreted by the shell.
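To see what %q does to a name the shell would otherwise mangle:
printf -v d_str '%q' 'dir with spaces'
echo "$d_str"    # prints: dir\ with\ spaces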
First, you need to use /bin/bash as the shebang in order to use BASH arrays.
Then remove echo and use command substitution to capture the output:
#!/bin/bash
declare -a dirs=("/dir1" "/dir2") # ... and lots more
for d in "${dirs[@]}"
do
    output=$(ls /mydir/"$d"/*.tar* | sftp -b - -i ~/mykey user@example.com 2>&1 | tail -n1)
    echo "$output"
done
I would, however, advise you not to use ls's output in the sftp command. You can replace that with:
output=$(echo "/mydir/$d/"*.tar* | sftp -b - -i ~/mykey user@example.com 2>&1 | tail -n1)
Don't store the command in a string; just use it directly.
#!/bin/bash
declare -a dirs=("/dir1" "/dir2") # ... and lots more
for d in "${dirs[@]}"
do
    echo "ls /mydir/$d/*.tar*" | sftp -b - -i ~/mykey user@example.com 2>&1 | tail -n1
done
Usually, people store the command in a string so they can both execute it and log it, as a misguided form of factoring. (I'm of the opinion that it's not worth the trouble required to do correctly.)
Note that sftp reads from standard input by default, so you can just use
echo "ls ..." | sftp -i ~/mykey user@example.com 2>&1 | tail -n1
You can also use a here document instead of a pipeline.
sftp -i ~/mykey user@example.com 2>&1 <<EOF | tail -n1
ls /mydir/$d/*.tar*
EOF