I am trying to remove files via ssh using find and xargs. On the local machine the command works correctly. I am also using an exception list.
I have tried changing the quotes between ' ' and " ", but it does not work.
save_files=(test1 test2)
On the local machine:
find / -mindepth 1 | grep -vE "$(IFS=\| && echo "${save_files[*]}")" | xargs rm -rf
via ssh:
su - user -c "ssh host find / -mindepth 1 | grep -vE '$(IFS=\| && echo "${save_files[*]}")' | xargs rm -rf"
In the above ssh command, xargs runs locally, but I need xargs to run on the remote machine. I even put the find command in '' quotes like below:
su - user -c "ssh host 'find / -mindepth 1 | grep -vE '$(IFS=\| && echo "${save_files[*]}")' | xargs rm -rf'"
Starting another answer, trying again for a one-liner that avoids the intermediate script.
The key problem is the three levels of quoting, whereas the shell supports only two levels (double quotes and single quotes). I believe it is possible to eliminate the need for the third level of quoting by escaping the pipe characters in the pattern, which removes the need to quote the pattern at all.
IFS=\| P="${save_files[*]}" su - owner -c "ssh localhost 'set -vx ; find /tmp/xyz -mindepth 1 | grep -vE ${P//|/\\|} | xargs rm -rf'"
The solution creates the pattern with escapes - see '${P//|/\\|}' - eliminating the need to quote the pattern. This should work as long as save_files does not contain magic characters.
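For clarity, here is a small standalone illustration of what that expansion produces (test1 and test2 are just the sample entries from above; note that setting IFS like this changes it for the current shell, unlike the one-shot prefix assignment in the one-liner):
IFS=\|
P="${save_files[*]}"   # joins the array with | -> test1|test2
echo "${P//|/\\|}"     # prints test1\|test2 - each | is escaped, so the pattern needs no quotes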
Notes:
Notice that I've replaced '/' with /tmp/xyz, to reduce the chance that an accidental copy/paste will remove important data (it already happened on my VM :-( ).
The 'set -vx' is for debugging & troubleshooting, and can be removed
The code for executing the command on a remote host:
su - user -c "ssh host 'find / -mindepth 1 | grep -vE '$(IFS=\| && echo "${save_files[*]}")' | xargs rm -rf'"
uses double quotes to pass the command to 'su -c', and then single quotes to pass the command to 'ssh' (which contains double quotes). However, quoting does not support nesting, so the quote in '$(IFS=... ...)' actually terminates the quote opened before 'find'.
You can execute the same command with a 'set -vx' prefix to see the expansion:
su - owner -c "set -vx ; ssh localhost ...
and you will see
++ IFS='|'
++ echo 'test1|test2'
+ su - owner -c 'set -vx ; ssh localhost '\''set -x ; find /foo -mindepth 1 | grep -vE '\''test1|test2'\'' | xargs rm -rf'\'''
Password:
+ ssh localhost 'set -x ; find /foo -mindepth 1 | grep -vE test1'
+ 'test2 | xargs rm -rf'
-su: test2 | xargs rm -rf: command not found
+ find /foo -mindepth 1
+ grep -vE test1
find: ‘/foo’: No such file or directory
Proposed Solution:
I was not able to find a way to get three levels of quoting (very frustrating). A possible solution is to create a helper script h.sh which performs the 'ssh' command, and then to invoke the helper script with su - user -c '/path/to/helper.sh'.
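A minimal sketch of that helper-script idea (the path /path/to/helper.sh, the host name and the sample exception list are placeholders; adapt them to your setup):
#!/bin/bash
# /path/to/helper.sh - builds the pattern and runs the whole pipeline on the remote host
save_files=(test1 test2)
pattern="$(IFS=\| ; echo "${save_files[*]}")"
ssh host "find /tmp/xyz -mindepth 1 | grep -vE '$pattern' | xargs rm -rf"
Then invoke it with su - user -c '/path/to/helper.sh'. Only two quoting levels are needed, because su runs a script path instead of a nested command string.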
Can you try this?
su - user -c "ssh host 'save_files=test; find / -mindepth 1 | grep -vE "$save_files" | xargs rm -rf'"
Or, after switching to the root user, try the one below?
ssh host 'save_files=test; find / -mindepth 1 | grep -vE "$save_files" | xargs rm -rf'
First connect to your remote system and run whatever bash command you need, like find and xargs:
ssh root@MachineB 'bash command'
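For example, with a literal pattern (a sketch only - test1|test2 stands in for your real exception list and /tmp/xyz for the real directory), the single quotes keep the whole pipeline on the remote side:
ssh root@MachineB 'find /tmp/xyz -mindepth 1 | grep -vE "test1|test2" | xargs rm -rf'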
I need to find the reports (.docx files), read them with docx2txt, find the second match of "passed" (excluding "not passed") and save these filenames to a text file. Here is what I tried:
OIFS="$IFS"
IFS=$'\n'
for f in $(find . -wholename '*_done/(*Report*.docx' |grep -v appendix)
do
docx2txt "$f" - | (grep -q -m2 passed || grep -q -v "not passed") || echo $f >> failed
done
IFS="$OIFS"
But this script gives me an empty file. If I replace || with && before echo, all filenames are stored in the file. grep works fine if it is not in the script, as does docx2txt. What am I doing wrong here?
There are quite a lot of problems with the grep commands.
grep -q always exits successfully on the first match.
With -q the -m2 has no effect. If there is one match, grep exits successfully; it does not check whether there is a second match.
To check that there are (at least) two matches, count the matches and then use test/[ ] to check the number of found matches. If there is at most one passed per line, grep -c is sufficient. If there can be multiple matches per line, you need grep -o ... | wc -l.
-q and -v together means: Is there at least one line that does not contain the pattern? When grep finds such a line it exits successfully. The only way for this command to fail is an input in which every line contains not passed (this includes the empty file).
Matching passed but not not passed is trickier than one might suspect. If there can be at most one passed/not passed per line, you can use grep -v 'not passed' | grep passed. Otherwise you need a negative lookbehind, which is only available in Perl-compatible regular expressions (PCRE).
In addition to that, command | (grep ... || grep ...) might not do what you expect. command produces its output only once. After the first grep has read some of this output, that part is gone; the second grep will then continue reading where the first grep stopped (see the small demonstration after these notes).
BTW: for … in $(find … | grep -v …) can be turned into a single, safe find command using -not and -exec.
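Here is the small demonstration mentioned above; the exact split can vary with buffering, but the point is that whatever the first reader consumed is gone for the second one:
printf '1\n2\n3\n4\n' | (head -n1; cat)
# typically prints only "1" - head already read (and discarded) the rest of the pipe, so cat sees nothing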
Solution
If each line contains at most one passed/not passed, use
find . -wholename '*_done/(*Report*.docx' -not -wholename '*appendix*' \
-exec sh -c '[ $(docx2txt "$0" - | grep -v "not passed" | grep -cm2 passed) = 2 ]' {} \; -print
If there can be multiple passed/not passed per line, you need GNU grep or pcregrep:
find . -wholename '*_done/(*Report*.docx' -not -wholename '*appendix*' \
-exec sh -c '[ $(docx2txt "$0" - | grep -Pom2 "(?<!not )passed" | wc -l) = 2 ]' {} \; -print
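A quick sanity check of the lookbehind pattern (requires GNU grep built with PCRE support, or pcregrep):
printf 'passed not passed\npassed\n' | grep -Po '(?<!not )passed' | wc -l
# prints 2 - the "passed" inside "not passed" is not counted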
When you run into a problem like this, it's a good idea to remove as much code as possible. If we just take that one line with the multiple grep statements, we can first verify that the current expression doesn't work:
$ echo passed | (grep -q -m2 passed || grep -q -v "not passed") || echo failed
$ echo not passed | (grep -q -m2 passed || grep -q -v "not passed") || echo failed
We can see that neither of these commands produces any output.
Let's think carefully about the logic:
The || operator means "if the first command doesn't succeed, run the second command". So in both cases the first grep succeeds (because both passed and not passed contain the phrase passed). This means the second grep will never run, and since the first command was successful, the entire grep ... || grep ... compound succeeds, which in turn means the final echo $f will never run.
I was trying to think of a clever way to solve this, but it seems simplest if we make use of a temporary file:
OIFS="$IFS"
IFS=$'\n'
tmpfile=$(mktemp docXXXXXX)
trap "rm -f $tmpfile" EXIT
for f in $(find . -wholename '*_done/(*Report*.docx' |grep -v appendix)
do
docx2txt "$f" - | head -2 > $tmpfile
if grep -q passed $tmpfile && ! grep -q 'not passed' $tmpfile; then
echo $f >> failed
fi
done
IFS="$OIFS"
I am trying to create a script that deletes all the old files except the three most recent ones in my backup directory, using lftp.
I have tried to do this with ls -1tr, which returns all the files in ascending date order, followed by head -$NB_BACKUP_TO_RM ($NB_BACKUP_TO_RM is the number of files that I want to delete from my list); these two commands return the correct files.
After this I want to remove all of them, so I do an xargs rm --, but Bash reports that the files don't exist... I think this command is not running in the remote directory but in the local directory, and I don't know how to delete these files (from my returned list).
Here is the full code:
MAX_BACKUP=3
NB_BACKUP=$(lftp -e "ls -1tr $REMOTE_DIR/full_backup_ftp* | wc -l ; quit" -u $USER,$PASSWORD $HOST)
if (( $NB_BACKUP > $MAX_BACKUP ))
then
NB_BACKUP_TO_RM=$(($NB_BACKUP-$MAX_BACKUP))
REMOVE=$(lftp -e "ls -1tr $REMOTE_DIR/full_backup_ftp* | head -$NB_BACKUP_TO_RM | xargs rm -- ; quit" -u $USER,$PASSWORD $HOST)
echo $REMOVE
fi
Do you have an idea of the problem? How can I delete the files in my list (after ls -1tr $REMOTE_DIR/full_backup_ftp* and head -$NB_BACKUP_TO_RM)?
Thanks for your help
Starting an SFTP connection can be time consuming. Below is a slightly modified solution that avoids multiple lftp sessions. It will perform much better than the alternative solution, especially if a large number of files have to be purged.
Basically, it leverages lftp's flexibility to mix lftp commands with external commands. It creates a command file with a series of 'rm' commands (leveraging head, xargs, ...), and executes those commands INSIDE the same lftp session.
Also note that lftp's 'ls' does not allow wildcards; use 'cls' instead.
Make sure you test this carefully, because of potential removal of important files
lftp -u $USER,$PASSWORD $HOST <<__CMD__
cls -1tr $REMOTE_DIR/full_backup_ftp* | head -$NB_BACKUP_TO_RM | xargs -I{} echo rm {} > rm_list.txt
source rm_list.txt
__CMD__
Or as a one-liner, using lftp's ability to execute a dynamically generated command (source -e). It eliminates the temporary file.
lftp -u $USER,$PASSWORD $HOST <<__CMD__
source -e 'cls -1tr $REMOTE_DIR/full_backup_ftp* | head -$NB_BACKUP_TO_RM | xargs -I{} echo rm {}'
__CMD__
According to man lftp, xargs is an unknown command for lftp. And xargs rm run locally deletes local files, not remote files.
So please use xargs as below; it works for me.
lftp -e "ls -1tr $REMOTE_DIR/full_backup_ftp*; quit" -u $USER,$PASSWORD $HOST | head -$NB_BACKUP_TO_RM | xargs -I {} lftp -e 'rm '{}'; quit' -u $USER,$PASSWORD $HOST
I am trying to execute the following command:
$ ssh root@10.10.10.50 "tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )"
ls: cannot access /var/log/alert_ARCDB.log: No such file or directory
tail: cannot follow `-' by name
Notice the error returned. When I log in via ssh separately and then execute
tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )
I see the below:
# ls -t /var/log/alert_ARCDB.log | head -n1
/var/log/alert_ARCDB.log
Why is that happening, and how do I fix it? I am trying to do this in one line as I don't want to create a script file.
Thanks a lot
Shell parameter expansion happens before command execution.
Here's a simple example. If I type...
ls "$HOME"
...the shell replaces $HOME with the path to my home directory first, then runs something like ls /home/larsks. The ls command has no idea that the command line originally had $HOME.
If we look at your command...
$ ssh root@10.10.10.50 "tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )"
...we see that you're in exactly the same situation. The $(ls -t ...) expression is expanded before ssh is executed. In other words, that command is running on your local system.
You can inhibit the shell expansion on your local system by using single quotes. For example, running:
echo '$HOME'
Will produce:
$HOME
So you can run:
ssh root@10.10.10.50 'tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )'
But there's another problem here. If /var/log/alert_ARCDB.log is a file, your command makes no sense: calling ls -t on a single file gets you nothing.
If alert_ARCDB.log is a directory, you have a different problem. The result of ls /some/directory is a list of filenames without any directory prefix. If I run something like:
ls -t /tmp
I will get output like
file1
file2
If I do this:
tail $(ls -t /tmp | head -1)
I end up with a command that looks like:
tail file1
And that will fail, because there is no file1 in my current directory.
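One way around that, as a rough sketch (this assumes alert_ARCDB.log really is a directory of log files; adjust the path as needed), is to keep the directory prefix and let the remote shell do all of the expansion:
ssh root@10.10.10.50 'd=/var/log/alert_ARCDB.log; tail -F -n 1 "$d/$(ls -t "$d" | head -n1)"'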
One approach would be to pipe the commands you want to perform to ssh. One simple way to achieve that is to first create a function that echoes the commands you want executed:
remote_commands()
{
echo 'cd /var/log/alert_ARCDB.log'
echo 'tail -F -n 1 "$(ls -t | head -n1 )"'
}
The cd will allow you to use the relative path listed by ls. The single quotes make sure that everything will be sent as-is to the remote shell, with no local expansion occurring.
Then you can do
ssh root@10.10.10.50 bash < <(remote_commands)
This assumes alert_ARCDB.log is a directory (or else I am not sure why you would want to add head -n1 after that).
I would like to create a perl or bash script that will read keyboard input and assign it to a variable, perform a fixed-string grep recursively within the current directory filled with Snort logs, and then automatically tcpdump the matched files, grep the output, and print the specified lines to the terminal. Does anyone have a good idea of how this should work?
Here is an example of the methodology I want from the script:
step 1: Read keyboard input and assign it to variable named string.
step 2 command: grep -Fr "$string"
step 2 output: snort.log.1470609906 matches
step 3 command: tcpdump -r snort.log.1470609906 | grep -F "$string" -C10
step 3 output:
Snort log
Here's some bash code that does that:
s="google.com"
grep -Frl "$s" | \
while IFS= read -r x; do
tcpdump -r "$x" | grep -F "$s" -C10
done
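If you also want the keyboard-input step from the question, one option (just a sketch) is to replace the fixed assignment with a prompt:
read -r -p "Search string: " s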
idk about perl but you can do it easily enough just in shell:
str="google.com"
find . -type f -name 'snort.log.*' -exec grep -FlZ "$str" {} + |
xargs -0 -I {} sh -c 'tcpdump -r "{}" | grep -F '"$str"' -C10'
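A variant of the same idea that avoids splicing $str into the sh -c string (a sketch; it passes the file and the string as positional parameters instead):
find . -type f -name 'snort.log.*' -exec grep -FlZ "$str" {} + |
xargs -0 -I {} sh -c 'tcpdump -r "$1" | grep -F "$2" -C10' _ {} "$str"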
I'm building a little bash script to run another bash script that's found in multiple directories. Here's the code:
cd /home/mainuser/CaseStudies/
grep -R -o --include="Auto.sh" [\w] | wc -l
When I execute just that part, it finds the same file 5 times in each folder. So instead of getting 49 results, I get 245. I've written a recursive bash script before and I used it as a template for this problem:
grep -R -o --include=*.class [\w] | wc -l
This code has always worked perfectly, without any duplication. I've tried running the first command with and without the " ", and I've tried -r as well. I've read through the bash documentation and I can't seem to find a way to prevent this duplication, or even understand why I'm getting it. Any thoughts on how to get around this?
As a separate but related question: could I launch Auto.sh inside each directory so that the output of Auto.sh is dumped into that directory, without having to place Auto.sh in each folder? That would probably be much more efficient than what I'm currently doing, and it would probably also fix my current duplication problem.
This is the code for Auto.sh:
#!/bin/bash
index=1
cd /home/mainuser/CaseStudies/
grep -R -o --include=*.class [\w] | wc -l
grep -R -o --include=*.class [\w] |awk '{print $3}' > out.txt
while read LINE; do
echo 'Path '$LINE > 'Outputs/ClassOut'$index'.txt'
javap -c $LINE >> 'Outputs/ClassOut'$index'.txt'
index=$((index+1))
done <out.txt
Preferably I would like to make it dump only the javap outputs for the application it's currently looking at. Since those .class files could be in any number of sub-directories, I'm not sure how to make them all dump into the top folder without executing a modified Auto.sh in the top directory of each application.
Ok, so to fix the multiple matches:
grep -R -o --include="Auto.sh" [\w] | wc -l
Should be:
grep -R -l --include=Auto.sh '\w' | wc -l
The reason this was happening was that it was looking for instances of the letter w in Auto.sh, which occurred 5 times in the file.
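You can see that behaviour in isolation (a throwaway example): -o prints one line per match, which is what inflated the count, while -c (or -l across files) counts each matching line (or file) only once:
echo "www" | grep -o w | wc -l   # 3 - one line per matched character
echo "www" | grep -c w           # 1 - one matching line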
However, the overall fix, which doesn't require placing Auto.sh in every directory, is something like this:
MAIN_DIR=/home/mainuser/CaseStudies/
cd $MAIN_DIR
ls -d */ > DirectoryList.txt
while read LINE; do
cd $LINE
mkdir ProjectOutputs
bash /home/mainuser/Auto.sh
cd $MAIN_DIR
done <DirectoryList.txt
That calls this Auto.sh code:
index=1
grep -R -o --include=*.class '\w' | wc -l
grep -R -o --include=*.class '\w' | awk '{print $3}' > ProjectOutputs.txt
while read LINE; do
echo 'Path '$LINE > 'ProjectOutputs/ClassOut'$index'.txt'
javap -c $LINE >> 'ProjectOutputs/ClassOut'$index'.txt'
index=$((index+1))
done <ProjectOutputs.txt
Thanks again for everyone's help!