I am trying to execute the following command:
$ ssh root@10.10.10.50 "tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )"
ls: cannot access /var/log/alert_ARCDB.log: No such file or directory
tail: cannot follow `-' by name
Notice the error returned. When I log in over ssh separately and then execute
tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )
I see the below:
# ls -t /var/log/alert_ARCDB.log | head -n1
/var/log/alert_ARCDB.log
Why is that happening, and how can I fix it? I am trying to do this in one line because I don't want to create a script file.
Thanks a lot
Shell parameter expansion happens before command execution.
Here's a simple example. If I type...
ls "$HOME"
...the shell replaces $HOME with the path to my home directory first, then runs something like ls /home/larsks. The ls command has no idea that the command line originally had $HOME.
If we look at your command...
$ ssh root@10.10.10.50 "tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )"
...we see that you're in exactly the same situation. The $(ls -t ...) expression is expanded before ssh is executed. In other words, that command is running on your local system.
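A quick way to see this for yourself is to put echo in front of the whole thing, so nothing runs remotely. The command substitution still executes locally, and its (empty) result is already baked into the line that ssh would have received:

echo ssh root@10.10.10.50 "tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )"
# the local ls error appears, followed by: ssh root@10.10.10.50 tail -F -n 1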
You can inhibit the shell expansion on your local system by using single quotes. For example, running:
echo '$HOME'
Will produce:
$HOME
So you can run:
ssh root@10.10.10.50 'tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )'
But there's another problem here. If /var/log/alert_ARCDB.log is a single file, your command makes no sense: running ls -t on one file just prints that same name back, so the ls | head pipeline adds nothing.
If alert_ARCDB.log is a directory, you have a different problem. The result of ls /some/directory is a list of filenames without any directory prefix. If I run something like:
ls -t /tmp
I will get output like
file1
file2
If I do this:
tail $(ls -t /tmp | head -1)
I end up with a command that looks like:
tail file1
And that will fail, because there is no file1 in my current directory.
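One way around that, sticking with the same hypothetical /tmp listing: give ls a full-path glob instead of a bare directory, so every name it prints keeps its directory prefix:

tail "$(ls -t /tmp/* | head -1)"    # ls -t /tmp/* prints full paths like /tmp/file1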
One approach would be to pipe the commands you want to perform to ssh. One simple way to achieve that is to first create a function that will echo the commands you want executed:
remote_commands()
{
    echo 'cd /var/log/alert_ARCDB.log'
    echo 'tail -F -n 1 "$(ls -t | head -n1 )"'
}
The cd will allow you to use the relative path listed by ls. The single quotes make sure that everything will be sent as-is to the remote shell, with no local expansion occurring.
Then you can do
ssh root@10.10.10.50 bash < <(remote_commands)
This assumes alert_ARCDB.log is a directory (or else I am not sure why you would want to add head -n1 after that).
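If it is indeed a directory, a one-liner sketch that combines the single-quote fix with the cd (so the relative names printed by ls resolve on the remote side) would be:

ssh root@10.10.10.50 'cd /var/log/alert_ARCDB.log && tail -F -n 1 "$(ls -t | head -n1)"'

The && ensures tail only runs if the cd succeeded.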
Related
I am trying to create a script that deletes all the old files, except the three most recent ones, in my backup directory with lftp.
I have tried to do this with ls -1tr, which returns all the files in ascending date order, and then I do head -$NB_BACKUP_TO_RM ($NB_BACKUP_TO_RM is the number of files that I want to delete from my list); these two commands return the correct files.
After this I want to remove all of them, so I do xargs rm --, but Bash returns that the files don't exist... I think this command is not running in the remote directory, but in the local directory, and I don't know what I can do to delete these files (from my returned list).
Here is the full code:
MAX_BACKUP=3
NB_BACKUP=$(lftp -e "ls -1tr $REMOTE_DIR/full_backup_ftp* | wc -l ; quit" -u $USER,$PASSWORD $HOST)
if (( $NB_BACKUP > $MAX_BACKUP ))
then
NB_BACKUP_TO_RM=$(($NB_BACKUP-$MAX_BACKUP))
REMOVE=$(lftp -e "ls -1tr $REMOTE_DIR/full_backup_ftp* | head -$NB_BACKUP_TO_RM | xargs rm -- ; quit" -u $USER,$PASSWORD $HOST)
echo $REMOVE
fi
Do you have an idea what the problem is? How can I delete the files from my list (after ls -1tr $REMOTE_DIR/full_backup_ftp* and head -$NB_BACKUP_TO_RM)?
Thanks for your help
Starting an SFTP connection can be time-consuming. The slightly modified solution below avoids multiple lftp sessions. It will perform much better than the alternative solution, especially if a large number of files have to be purged.
Basically, it leverages lftp's flexibility to mix lftp commands with external commands. It creates a command file containing a series of rm commands (leveraging head, xargs, ...), and executes those commands INSIDE the same lftp session.
Also note that lftp's ls does not allow wildcards; use cls instead.
Make sure you test this carefully, because of the potential removal of important files.
lftp -u $USER,$PASSWORD $HOST <<__CMD__
cls -1tr $REMOTE_DIR/full_backup_ftp* | head -$NB_BACKUP_TO_RM | xargs -I{} echo rm {} > rm_list.txt
source rm_list.txt
__CMD__
Or as a one-liner, using lftp's ability to execute a dynamically generated command (source -e). It eliminates the temporary file.
lftp -u $USER,$PASSWORD $HOST <<__CMD__
source -e 'cls -1tr $REMOTE_DIR/full_backup_ftp* | head -$NB_BACKUP_TO_RM | xargs -I{} echo rm {}'
__CMD__
It looks like xargs is an unknown command to lftp, according to man lftp. And xargs rm is deleting local files, not remote files.
So please use xargs as below; it works for me.
lftp -e "ls -1tr $REMOTE_DIR/full_backup_ftp*; quit" -u $USER,$PASSWORD $HOST | head -$NB_BACKUP_TO_RM | xargs -I {} lftp -e 'rm '{}'; quit' -u $USER,$PASSWORD $HOST
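A variant sketch that avoids starting one lftp session per deleted file: collect the names first (reusing the listing command from above), then issue all the rm commands in a single second session. This assumes the backup filenames contain no whitespace, since the unquoted expansion relies on word splitting:

OLD_FILES=$(lftp -e "ls -1tr $REMOTE_DIR/full_backup_ftp*; quit" -u $USER,$PASSWORD $HOST | head -$NB_BACKUP_TO_RM)
lftp -e "$(printf 'rm %s; ' $OLD_FILES) quit" -u $USER,$PASSWORD $HOST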
I'm learning the command line from the book The Linux Command Line, and I have a question.
Should not
ls -l $(which cp)
and which cp | ls -l have the same output?
Because I'm taking the output of which cp and passing it to ls -l.
But that does not work as expected: which cp | ls -l instead displays the contents of the current working directory.
ls doesn't care what's on its standard input.
echo anything | ls -l
Since you haven't provided a directory to list, it will list the current working directory.
In the first case, ls is receiving the result as an argument; in the second, it is receiving it on the input stream (stdin), which is ignored in this case.
You can convert from the input stream to arguments using xargs:
which cp | xargs ls -l
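For a single path like this, plain command substitution does the same job without xargs; the quotes guard against any spaces in the path:

ls -l "$(which cp)"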
I'm building a little bash script to run another bash script that's found in multiple directories. Here's the code:
cd /home/mainuser/CaseStudies/
grep -R -o --include="Auto.sh" [\w] | wc -l
When I execute just that part, it finds the same file 5 times in each folder. So instead of getting 49 results, I get 245. I've written a recursive bash script before and I used it as a template for this problem:
grep -R -o --include=*.class [\w] | wc -l
This code has always worked perfectly, without any duplication. I've tried running the first command with and without the quotes, and I've tried -r as well. I've read through the bash documentation and I can't seem to find a way to prevent this duplication, or even why I'm getting it. Any thoughts on how to get around this?
As a separate but related question: I'd like to launch Auto.sh inside each directory so that the output of Auto.sh is dumped into that directory, without having to place Auto.sh in each folder. That would probably be much more efficient than what I'm currently doing, and it would also probably fix my current duplication problem.
This is the code for Auto.sh:
#!/bin/bash
index=1
cd /home/mainuser/CaseStudies/
grep -R -o --include=*.class [\w] | wc -l
grep -R -o --include=*.class [\w] |awk '{print $3}' > out.txt
while read LINE; do
echo 'Path '$LINE > 'Outputs/ClassOut'$index'.txt'
javap -c $LINE >> 'Outputs/ClassOut'$index'.txt'
index=$((index+1))
done <out.txt
Preferably I would like to make it dump only the javap outputs for the application it's currently looking at. Since those .class files could be in any number of subdirectories, I'm not sure how to make them all dump into the top folder without executing a modified Auto.sh in the top directory of each application.
OK, so to fix the multiple matches:
grep -R -o --include="Auto.sh" [\w] | wc -l
Should be:
grep -R -l --include=Auto.sh '\w' | wc -l
The reason this was happening was that with -o, grep prints one line per match, and it was finding instances of the letter w in Auto.sh, which occurred 5 times in the file. With -l, each matching file is listed only once.
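A minimal illustration of the difference, using a hypothetical demo.sh that contains just the letters www (this assumes GNU grep, where \w matches a word character):

printf 'www\n' > demo.sh
grep -R -o --include=demo.sh '\w' . | wc -l    # 3 -- one line per match
grep -R -l --include=demo.sh '\w' . | wc -l    # 1 -- one line per file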
However, the overall fix that doesn't require placing Auto.sh in every directory is something like this:
MAIN_DIR=/home/mainuser/CaseStudies/
cd "$MAIN_DIR"
ls -d */ > DirectoryList.txt
while read LINE; do
    cd "$LINE"
    mkdir ProjectOutputs
    bash /home/mainuser/Auto.sh
    cd "$MAIN_DIR"
done <DirectoryList.txt
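A sketch of an equivalent driver that skips the temporary file list entirely; running each iteration in a subshell means there is no need to cd back:

for dir in /home/mainuser/CaseStudies/*/; do
    ( cd "$dir" && mkdir -p ProjectOutputs && bash /home/mainuser/Auto.sh )
done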
That calls this Auto.sh code:
index=1
grep -R -o --include='*.class' '\w' | wc -l
grep -R -o --include='*.class' '\w' | awk '{print $3}' > ProjectOutputs.txt
while read LINE; do
    echo "Path $LINE" > "ProjectOutputs/ClassOut$index.txt"
    javap -c "$LINE" >> "ProjectOutputs/ClassOut$index.txt"
    index=$((index+1))
done <ProjectOutputs.txt
Thanks again for everyone's help!
For instance, if I'd like to reference the output of the previous command once, I can use the command below:
ls *.txt | xargs -I % ls -l %
But how to reference the output twice? Like how can I implement something like:
ls *.txt | xargs -I % 'some command' % > %
PS: I know how to do it in a shell script, but I just want a simpler way to do it.
You can pass this argument to bash -c:
ls *.txt | xargs -I % bash -c 'ls -l "$1" > "out.$1"' - %
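The trailing - is there because the first argument after the bash -c script becomes $0 rather than $1, so the % that xargs substitutes lands in $1. The same pattern lets you reference the argument as many times as you like; a small sketch (the .out name is just an example):

ls *.txt | xargs -I % bash -c 'head -n 1 "$1" > "$1.out"' - %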
You can look up 'tpipe' on SO; it will also lead you to 'pee' (which is not a good search term elsewhere on the internet). Basically, they're variants of the tee command which write to multiple processes instead of writing to files the way tee does.
However, with Bash, you can use Process Substitution:
ls *.txt | tee >(cmd1) >(cmd2)
This will write the input to tee to each of the commands cmd1 and cmd2.
You can arrange to lose standard output in at least two different ways:
ls *.txt | tee >(cmd1) >(cmd2) >/dev/null
ls *.txt | tee >(cmd1) | cmd2
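For instance, a sketch that feeds one listing to two consumers and discards the original stream (the output file names are hypothetical):

ls *.txt | tee >(wc -l > count.out) >(sort -r > reversed.out) > /dev/null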
I'm writing a script to read from an input file, which contains ~1000 lines of host info. The script ssh'es to each host, cds to the remote host's log directory, and cats the latest daily log file. Then I redirect the cat'ed log file locally to do some pattern matching and statistics.
The simplified structure of my program is a while loop that looks like this:
while read host
do
ssh -n name@$host "cd TO LOG DIR AND cat THE LATEST LOGFILE" | matchPattern
done << EOA
$(awk -F, '{print $7}' $FILEIN)
EOA
where matchPattern is a function that matches patterns and computes statistics.
Right now I have 2 questions about this:
1) How to find the latest daily log file remotely? The latest log file name matches xxxx2012-05-02.log and is the newest created. Is it possible to do ls remotely and find the file matching the xxxx2012-05-02.log name? (I can do this locally but get jammed when appending it to the ssh command.) Another way I could come up with is to do
cat `ls -t | head -1` or
cat $(ls -t | head -1)
However, if I append this to ssh, it will list my local newest created file name. Can we set this in a remote variable so that cat will find the correct file?
2) As there are nearly 1000 hosts, I'm wondering whether I can do this in parallel (say, 20 ssh sessions at a time, starting the next 20 after the first 20 finish). Appending & to each ssh does not seem sufficient to accomplish it.
Any ideas would be greatly appreciated!
Follow up:
Hi everyone, I finally found a crude way to solve the first problem, by doing this:
ssh -n name@$host "cd $logDir; cat *$logName" | matchPattern
where $logName is today's date plus .log (2012-05-02.log). The catch is that only local variables are expanded within the double quotes. Since my log file ends with 2012-05-02.log, and there are no other files ending with this suffix, I just blindly cat *2012-05-02.log on the remote machine and it cats the desired file for me.
For your first question,
ssh -n name@$host 'cat $(ls -t /path/to/log/dir/*.log | head -n 1)'
should work. Note single quotes around the remote command.
For your second question, wrap all the ssh | matchPattern | analyse stuff into its own function, then iterate over it by
outstanding=0
while read host
do
    sshMatchPatternStuff &
    outstanding=$((outstanding + 1))
    if [ $outstanding -ge 20 ] ; then
        # a bare "wait" blocks until ALL outstanding jobs finish,
        # so reset the counter to zero rather than decrementing it
        wait
        outstanding=0
    fi
done << EOA
$(awk -F, '{print $7}' $FILEIN)
EOA

# pick up whatever remains of the final batch
wait
(I assume you're using bash.)
It may be better to separate the ssh | matchPattern | analyse stuff into its own script, and then use a parallel variant of xargs to call it.
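A sketch of that xargs approach, assuming GNU xargs and a hypothetical helper script ./check_host.sh that wraps the ssh | matchPattern | analyse work for a single host passed as its argument:

awk -F, '{print $7}' "$FILEIN" | xargs -n 1 -P 20 ./check_host.sh

-n 1 hands each invocation one host, and -P 20 keeps up to 20 of them running at a time.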
For your second question, take a look at the parallel distributed shell:
http://sourceforge.net/projects/pdsh/
If you have GNU Parallel (http://www.gnu.org/software/parallel/) installed, you can do this:
parallel -j0 --nonall --slf <(awk -F, '{print $7}' servers.txt) 'cd logdir; cat `ls -t | head -1` | grep pattern'
This way you get the matching done on the remote server. If you prefer to transfer the full log file and do the matching locally, simply move the grep outside:
parallel -j0 --nonall --slf <(awk -F, '{print $7}' servers.txt) 'cd logdir; cat `ls -t | head -1`' | grep pattern
You can install GNU Parallel simply by:
wget http://git.savannah.gnu.org/cgit/parallel.git/plain/src/parallel
chmod 755 parallel
cp parallel sem
Watch the intro videos for GNU Parallel to learn more:
https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1