iterating through ls output is not occurring in bash - linux

I am trying to ls the directories and print them out, but nothing is being displayed. I am able to SSH and execute the first pwd; however, anything within the for loop produces no output. I know for sure there are directories starting with event-test- because I've checked manually. I've manually entered the directory (/data/kafka/tmp/kafka-logs/) and run this piece of code, and the correct output appeared, so I'm not sure why it fails over SSH.
Manually entered, this gives the correct output:
for i in `ls | grep "event-test"`; do echo $i; done;
script:
for h in ${hosts[*]}; do
ssh -i trinity-prod-keypair.pem bc2-user@$h << EOF
sudo bash
cd /data/kafka/tmp/kafka-logs/
pwd
for i in `ls | grep "event-test-"`; do
pwd
echo $i;
done;
exit;
exit;
EOF
done

It is because
`ls | grep "event-test-"`
is executing on your local host, not on the remote host: with an unquoted heredoc delimiter (<< EOF), command substitutions in the heredoc body are expanded by the local shell before the text is ever sent to ssh. Besides, parsing ls is error-prone and not even needed here. You can do:
for h in "${hosts[#]}"; do
ssh -t -t trinity-prod-keypair.pem bc2-user#$h <<'EOF'
sudo bash
cd /data/kafka/tmp/kafka-logs/
pwd
for i in *event-test-*; do
pwd
echo "$i"
done
exit
exit
EOF
done
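The quoted delimiter is doing the real work there: with <<'EOF', nothing in the heredoc body is expanded locally, so the command substitution and variables reach the remote shell intact. A minimal local demonstration of the difference (output shown for a user whose home is /home/me):
$ cat <<EOF
home is $HOME
EOF
home is /home/me

$ cat <<'EOF'
home is $HOME
EOF
home is $HOME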

If you do parse ls, ls -1 prints one entry per line (ls already does this when its output goes to a pipe, so it mainly matters interactively). Additionally, when trying to find files named "event-test-" I would recommend the find command. Since I am not completely sure what you are attempting to do other than list the locations of these "event-test" files, I'd recommend something more like the following:
for h in "${hosts[@]}"; do ssh -t -t -i trinity-prod-keypair.pem bc2-user@$h "find /data/kafka/tmp/kafka-logs/ -type f -name '*event-test-*'" ; done
This will print the full path and name of each matching file.
I hope this helps.

Related

Using the ls command to hide non-executable files

I'm trying to have a command that will print only the non-executable files sorted by modification time in the current directory.
What I have so far is:
$ ls -lt | grep -i "...x......"
This is printing all of the files in the dir. Just starting to learn code, any help would be much appreciated.
The way to go:
for file in *; do test -x "$file" || echo "$file"; done
Thanks to not parsing ls output, this works even with unusual filenames.
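The question also asked for the files sorted by modification time, which the glob loop above doesn't do. A sketch using GNU find (-executable and -printf are GNU extensions) that lists non-executable regular files in the current directory, newest first:
find . -maxdepth 1 -type f ! -executable -printf '%T@\t%p\n' | sort -rn | cut -f2-
%T@ prints the modification time in seconds since the epoch, so sort -rn puts the newest first and cut strips the timestamp back off.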

Remote to Local rolling backup script

I'm trying to create a bash script, run through crontab, that executes a remote-to-local backup. Everything works except the rolling-backup part, which is supposed to keep only the four newest backups.
#!/bin/bash
dateForm=`date +%m-%d-%Y`
fileName=[redacted]-"$dateForm"
echo backup started for [redacted] on: $dateForm >> /home/backups/backLog.log
ls -tQ /home/backups/[redacted] | tail -n+5 | xargs -r rm
ssh root@[redacted] "tar jcf - -C /home/[redacted]/[redacted] ." > "/home/backups/[redacted]/$fileName".tar.bz2
if [ ! -f "/home/backups/[redacted]/$fileName.tar.bz2" ]
then
echo "something went wrong with the backup for $fileName!" >> /home/backups/backLog.log
else
echo "Backup completed for $fileName" >> /home/backups/backLog.log
fi
The ls line works fine if executed in that directory, but crontab runs the script from elsewhere (and the script has to live outside the folder it targets), so the bare filenames piped out of ls don't resolve and the rm never hits the right directory.
I was able to come up with an interesting solution after studying the man page for ls a little more and utilizing find to grab the full paths.
ls -tQ $(find /home/backups/[redacted] -type f -name "*") | tail -n+5 | xargs -r rm
Just posting an answer for anyone who doesn't want a rolling-backup script that depends entirely on date formatting; this way there will always be at least four backups left in the targeted folder.
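Another way to fix the original script (a sketch, assuming the backup directory holds only the backup files): run the ls from inside the target directory in a subshell, so the relative names handed to rm resolve no matter where cron starts the script:
( cd /home/backups/[redacted] && ls -tQ | tail -n +5 | xargs -r rm )
The -Q flag double-quotes each name, which is exactly the quoting xargs expects, so names containing spaces survive the pipeline.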

ssh tail with nested ls and head cannot access

I am trying to execute the following command:
$ ssh root@10.10.10.50 "tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )"
ls: cannot access /var/log/alert_ARCDB.log: No such file or directory
tail: cannot follow `-' by name
Notice the error returned. When I log in with ssh separately and then execute
tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )
I see the below:
# ls -t /var/log/alert_ARCDB.log | head -n1
/var/log/alert_ARCDB.log
Why is that happening, and how do I fix it? I am trying to do this in one line, as I don't want to create a script file.
Thanks a lot
Shell parameter expansion happens before command execution.
Here's a simple example. If I type...
ls "$HOME"
...the shell replaces $HOME with the path to my home directory first, then runs something like ls /home/larsks. The ls command has no idea that the command line originally had $HOME.
If we look at your command...
$ ssh root@10.10.10.50 "tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )"
...we see that you're in exactly the same situation. The $(ls -t ...) expression is expanded before ssh is executed. In other words, that command substitution is running on your local system.
You can inhibit the shell expansion on your local system by using single quotes. For example, running:
echo '$HOME'
will produce:
$HOME
So you can run:
ssh root@10.10.10.50 'tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )'
But there's another problem here. If /var/log/alert_ARCDB.log is a file, your command makes no sense: calling ls -t on a single file gets you nothing.
If alert_ARCDB.log is a directory, you have a different problem. The result of ls /some/directory is a list of filenames without any directory prefix. If I run something like:
ls -t /tmp
I will get output like
file1
file2
If I do this:
tail $(ls -t /tmp | head -1)
I end up with a command that looks like:
tail file1
And that will fail, because there is no file1 in my current directory.
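If the logs do live in a directory, one way around that is to glob with the full path, so ls prints absolute names and tail can open the newest one from any working directory (a sketch; /var/log/arcdb/ is a hypothetical directory standing in for wherever the logs really are):
ssh root@10.10.10.50 'tail -F -n 1 "$(ls -t /var/log/arcdb/* | head -n1)"'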
One approach would be to pipe the commands you want to perform to ssh. One simple way to achieve that is to first create a function that echoes the commands you want executed:
remote_commands()
{
    echo 'cd /var/log/alert_ARCDB.log'
    echo 'tail -F -n 1 "$(ls -t | head -n1 )"'
}
The cd will allow you to use the relative path listed by ls. The single quotes make sure that everything will be sent as-is to the remote shell, with no local expansion occurring.
Then you can do
ssh root@10.10.10.50 bash < <(remote_commands)
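Equivalently, since the point is just to feed the commands to ssh's standard input, a plain pipe works too:
remote_commands | ssh root@10.10.10.50 bash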
This assumes alert_ARCDB.log is a directory (or else I am not sure why you would want to add head -n1 after that).

Need help using the pipe command in terminal (Linux / shell file)

Doing an assignment for class that needs to be done using commands in the terminal. I have a shell file (temp1.sh) created in the home directory, and a shell file (temp2.sh) created in a folder (randomFolder). When I run temp2.sh I need to display the amount of characters in temp1.sh. I need to use the pipe command to accomplish this.
So I figure I need to change directory to the home directory, then open the file temp1.sh and use the wc -c command to display the characters. I have been trying many different ways to execute this task and somehow can't get it to work. Any help would be appreciated. Without using a pipe I can get it to work, but I can't seem to write out this command line properly while using a pipe.
What I have done so far:
cd ~
touch temp1.sh
chmod 755 temp1.sh
echo 'This file has other commands that are not relevant and work' >> temp1.sh
mkdir randomFolder
cd randomFolder
touch temp2.sh
chmod 755 temp2.sh
echo cd ~ | wc -c temp1.sh >> temp2.sh
This last line tells me there is no such file "temp1.sh" after I run it. If I change to the home directory and type wc -c temp1.sh, I get the desired output. I want that output to happen when I run temp2.sh.
Example without using pipe command:
echo wc -c ~/temp1.sh >> temp2.sh
This gives me the desired output when I run temp2.sh. However I need to accomplish this while using the pipe command.
Your code is close to working. The first part is fine:
cd ~
touch temp1.sh
chmod 755 temp1.sh
echo 'This file has other commands that are not relevant and work' >> temp1.sh
mkdir randomFolder
cd randomFolder
touch temp2.sh
chmod 755 temp2.sh
All of that should work. Your problem is this part:
echo cd ~ | wc -c temp1.sh >> temp2.sh
You need to separate the cd ~ from something that runs some command and pipes the output to wc, and get the whole lot stored in temp2.sh. That could be something like:
echo "cd $HOME" > temp2.sh
echo "cat temp1.sh | wc -c" >> temp2.sh
The key point here is using separate lines for the cd command and the wc command. Using > for the first command ensures that you don't have stray garbage from previous failed attempts in temp2.sh. You can achieve the same result in multiple ways, including:
echo "cd; cat temp1.sh | wc -c" > temp2.sh
echo "cd ~; while read -r line; do echo "$line"; done < temp1.sh | wc -c" > temp2.sh
And then, finally, you need to execute temp2.sh. You might use any of these, though some (which?) depend on how your PATH is set:
./temp2.sh
temp2.sh
sh temp2.sh
sh -x temp2.sh
$HOME/randomFolder/temp2.sh
~/randomFolder/temp2.sh
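Putting it all together, a sketch that writes temp2.sh with a quoted heredoc (so nothing is expanded while the script is being created) and then runs it:
cat > ~/randomFolder/temp2.sh <<'EOF'
#!/bin/bash
cd ~
cat temp1.sh | wc -c
EOF
chmod 755 ~/randomFolder/temp2.sh
~/randomFolder/temp2.sh
The last line should print the byte count of temp1.sh, exactly as wc -c temp1.sh does from the home directory.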

How to pipe files one by one from list into script?

I have a list of files that I need to pipe into a shell script. I can list the files within a directory by using the following:
ls ~/data/2121/*SOMEFILE*
resulting in:
2121.SOMEFILEaa
2121.SOMEFILEab
2121.SOMEFILEac
and so on...
I have another script that performs some processing on a single file (2121.SOMEFILEaa) which I run by using the following command:
bash runscript ../data/2121/2121.SOMEFILEaa
However, I need to make this more efficient by feeding the individual files from that listing into the script. How can I pipe the results of the ls ~/data/2121/*SOMEFILE* command, file by file, into the runscript script?
Another option:
ls ~/data/2121/*SOMEFILE* | xargs -L1 bash runscript
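If any of those names could contain whitespace, a null-delimited variant avoids the word-splitting that parsing ls invites (a sketch using GNU find and xargs):
find ~/data/2121 -maxdepth 1 -name '*SOMEFILE*' -print0 | xargs -0 -n 1 bash runscript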
I think you are looking for this:
for file in ~/data/2121/*SOMEFILE*; do
bash runscript "$file"
done
In this way, you're calling bash runscript for each file.
$ cat pipe.sh
#!/bin/bash
## Store data from the pipe in the variable $PIPE
_read_pipe() {
    while read -t 10 pipe; do
        if [ -n "$pipe" ]; then
            PIPE="$PIPE $pipe"
        fi
    done
}
## your code
_read_pipe
for kung_foo in $PIPE; do
    echo "$kung_foo"
done
$ ls 2121.SOMEFILE* | ./pipe.sh
2121.SOMEFILEaa
2121.SOMEFILEab
2121.SOMEFILEac
and so on...
read -t 10 sets a ten-second timeout, so the loop ends once the pipe goes quiet. (Note that $PIPE is word-split by the for loop, so this approach breaks on filenames containing spaces.)
I hope this helps.
Cheers, Karim
