What is the use of running a shell script like "$ ls -la | script.sh"? - linux

To get the total number of lines read, why are we using ls -la | script.sh?
Why can't we execute it the normal way, like ./script.sh?
Note that script.sh is a shell script.

Breaking it down bit by bit:
ls -la
List all files (including dotfiles) in long format.
|
Sends the output of the command on the left to the command on the right
script.sh
Executes the script.
So the output of ls -la will be sent via stdin to script.sh.
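For illustration, here is a minimal sketch of what such a script.sh might look like (hypothetical; the original script is not shown), counting the lines that arrive on stdin:

```shell
#!/bin/sh
# Count the lines arriving on standard input.
count=0
while IFS= read -r line; do
    count=$((count + 1))
done
echo "Total lines read: $count"
```

Run as `ls -la | ./script.sh`, it reports one line per entry that ls printed.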

To get the total number of lines (that is, files, in your case), simply run:
ls | wc -l

Related

ssh tail with nested ls and head cannot access

I am trying to execute the following command:
$ ssh root@10.10.10.50 "tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )"
ls: cannot access /var/log/alert_ARCDB.log: No such file or directory
tail: cannot follow `-' by name
Notice the error returned. When I log in via ssh separately and then execute
tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )
it works. See below:
# ls -t /var/log/alert_ARCDB.log | head -n1
/var/log/alert_ARCDB.log
Why is that happening, and how can I fix it? I am trying to do this in one line because I don't want to create a script file.
Thanks a lot
Shell parameter expansion happens before command execution.
Here's a simple example. If I type...
ls "$HOME"
...the shell replaces $HOME with the path to my home directory first, then runs something like ls /home/larsks. The ls command has no idea that the command line originally had $HOME.
If we look at your command...
$ ssh root@10.10.10.50 "tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )"
...we see that you're in exactly the same situation. The $(ls -t ...) expression is expanded before ssh is executed. In other words, that command is running on your local system.
You can inhibit the shell expansion on your local system by using single quotes. For example, running:
echo '$HOME'
will produce:
$HOME
So you can run:
ssh root@10.10.10.50 'tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )'
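The same expansion timing can be demonstrated without ssh, using sh -c as a stand-in for the remote shell (a sketch; the variable name outer is made up):

```shell
outer=local
sh -c "echo outer is: $outer"   # double quotes: the calling shell expands $outer first, prints "outer is: local"
sh -c 'echo outer is: $outer'   # single quotes: the inner shell expands it; unset there, prints "outer is:"
```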
But there's another problem here. If /var/log/alert_ARCDB.log is a file, your command makes little sense: running ls -t on a single file just prints that same filename back.
If alert_ARCDB.log is a directory, you have a different problem. The result of ls /some/directory is a list of filenames without any directory prefix. If I run something like:
ls -t /tmp
I will get output like
file1
file2
If I do this:
tail $(ls -t /tmp | head -1)
I end up with a command that looks like:
tail file1
And that will fail, because there is no file1 in my current directory.
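One way around that (a sketch, still using /tmp as the directory being listed) is to glue the directory back onto the name before handing it to tail:

```shell
# Build a full path from the newest entry in /tmp, then tail it.
newest="/tmp/$(ls -t /tmp | head -n 1)"
tail -n 5 "$newest"
```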
One approach would be to pipe the commands you want to perform to ssh. One simple way to achieve that is to first create a function that echoes the commands you want executed:
remote_commands()
{
    echo 'cd /var/log/alert_ARCDB.log'
    echo 'tail -F -n 1 "$(ls -t | head -n1 )"'
}
The cd will allow you to use the relative path listed by ls. The single quotes make sure that everything will be sent as-is to the remote shell, with no local expansion occurring.
Then you can do
ssh root@10.10.10.50 bash < <(remote_commands)
This assumes alert_ARCDB.log is a directory (or else I am not sure why you would want to add head -n1 after that).

Bash: Running one command after another using string variable

I understand that running one command after another is done in bash using the following command
command1 && command2
or
command1; command2
or even
command1 & command2
I also understand that a command stored in a bash variable can be run by simply firing the variable as:
TestCommand="ls"
$TestCommand
Doing the above will list all the files in the directory and I have tested that it does.
But doing the same with multiple commands generates an error. Sample below:
TestCommand="ls && ls -l"
$TestCommand
ls: cannot access &&: No such file or directory
ls: cannot access ls: No such file or directory
My question is why is this happening and is there any workaround?
And before you bash me for doing something so stupid. The preceding is just to present the problem. I have a list of files in my directory and I am using sed to convert the list into a single executable string. Storing that string in a bash variable, I am trying to run it but failing.
When you put two commands in a single string variable, the variable is executed as a single command. So when you use "$TestCommand" to execute two ls commands, only the first ls actually runs; the shell treats && and the second ls as arguments to that first ls.
Since your current working directory contains no files named && or ls, it returns these errors:
ls: cannot access &&: No such file or directory
ls: cannot access ls: No such file or directory
So, basically, your command behaves like this:
ls file1 file2 -l
and it will give you output like this if file1 and file2 exists:
HuntM#~/scripts$ ls file1 file2 -l
-rw-r--r-- 1 girishp staff 0 Dec 8 12:44 file1
-rw-r--r-- 1 girishp staff 0 Dec 8 12:44 file2
Now for your solution:
You can create a function or one more script to execute the two commands, as below:
caller.sh
#!/bin/bash
myLs=`./myls.sh`
echo "$myLs"
myls.sh
#!/bin/bash
ls && ls -l
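If the command string really must live in a variable, another common workaround is eval, which makes the shell re-parse the string so that && is an operator again (a sketch; use it only on strings you trust, since eval executes whatever the string contains):

```shell
TestCommand="ls && ls -l"
eval "$TestCommand"   # the string is re-parsed, so both ls commands run
```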

Why doesn't the cd command work when trying to pipe it with another command

I'm trying to use a pipeline with cd and ls like this:
ls -l | cd /home/user/someDir
but nothing happens.
And I tried:
cd /home/user/someDir | ls -l
It seems that the cd command does nothing, while ls -l works on the current directory.
The directory that I'm trying to open is valid.
Why is that? Is it possible to pipe commands with cd / ls?
I didn't have any problem with other commands while using a pipe.
cd takes no input and produces no output; as such, it does not make sense to use it in pipes (which take output from the left command and pass it as input to the right command).
Are you looking for ls -l ; cd /somewhere?
Another option (if you need to list the target directory) is:
cd /somewhere && ls -l
The '&&' here will prevent executing the second command (ls -l) if the target directory does not exist.
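As a side note, even when cd does appear in a pipeline, it runs in a subshell created for that pipeline, so the directory change cannot reach your interactive shell anyway. A quick sketch:

```shell
cd /tmp | cat   # the cd runs in a subshell of the pipeline...
pwd             # ...so the working directory here is unchanged
```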

How is $() different from redirection?

I'm learning the command line from the book The Linux Command Line, and I have a question.
Shouldn't
ls -l $(which cp)
and which cp | ls -l have the same output?
After all, I'm taking the output of which cp and passing it to ls -l.
But that does not work as expected: which cp | ls -l instead displays the contents of the current working directory.
ls doesn't care what's in the standard input.
echo anything | ls -l
Since you haven't provided a directory to list, it lists the current working directory.
In the first case ls receives the result as an argument; in the second it receives it on the input stream (stdin), which is ignored in this case.
You can convert the input stream into arguments using xargs:
which cp | xargs ls -l

Why is "which cp | ls -l" not treated as "ls -l $(which cp)"?

According to the pipe methodology in Linux, the output of the first command should be treated as input to the second command. So when I run which cp | ls -l, it should be treated as ls -l $(which cp).
But the output shows something else.
Why is that?
ls does not take input from stdin. You can work around this if you need to by using xargs:
which cp | xargs ls -l
This will invoke ls -l with the (possibly multiple, if which were to return more than one) filenames as command line arguments, with no standard input.
