How to declare a variable which stores pipe command output in a Linux shell script

dt=`echo date --date "-15 min"|awk '{print $4}'`;
dts=`echo sar -P ALL -s $dt`;
echo $dts
What is wrong with this code? I want the sar output from 15 minutes ago, but all I get is "sar -P ALL -s min" as output.

Use $(...) (command substitution) rather than backticks:
#!/bin/bash
DT=$(date --date "-15 min" | awk '{print $4}')
DTS=$(sar -P ALL -s $DT)
echo "$DTS"
See: http://tldp.org/LDP/abs/html/subshells.html
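To make this concrete, here is a minimal sketch (assuming GNU date, as in the question) that skips the awk step entirely by asking date for just the time field:

```shell
#!/bin/bash
# $(...) captures the output of a whole pipeline and, unlike backticks, nests
# cleanly. GNU date is assumed; "+%H:%M:%S" prints only the time field, so
# parsing the default date output with awk is not needed.
dt=$(date --date "-15 min" "+%H:%M:%S")
echo "sar -P ALL -s $dt"
```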

Try this:
dt=`date --date "-15 min"|awk '{print $4}'`;
dts=`sar -P ALL -s $dt`;
echo $dts
E.g.:
user@host:/tmp/test$ ./test.sh
sar -P ALL -s 16:37:44

Related

How to get the directory name and file name of a bash script by bash?

The following points are known; perhaps they help:
Get the filename.extension incl. fullpath:
Script: /path1/path2/path3/path4/path5/bashfile.sh
#!/bin/bash
echo $0
read -r
Output:
/path1/path2/path3/path4/path5/bashfile.sh
Get filename.extension:
Script: /path/path/path/path/path/bashfile.sh
#!/bin/bash
echo ${0##*/}
read -r
Output:
bashfile.sh
Question:
How to get directory name and file name of a bash script by bash ?
Script: `/path1/path2/path3/path4/path5/bashfile.sh`
Wanted output:
/path5/bashfile.sh
Remark:
Perhaps it is possible, if you look from the right side, to remove everything left of the last "/*/".
A little bit shorter than the first fitting solution:
Script: /path1/path2/path3/path4/path5/bashfile.sh
#!/bin/bash
n=$(($(echo $0 | tr -dc "/" | wc -m)+1))
echo "/""$(echo "$0" | cut -d"/" -f$(($n-1)),$n)"
read -r
Output:
/path5/bashfile.sh
Perhaps there is a shorter solution.
readlink -f "$0" | awk -F"/" '{print "/"$(NF-1)"/"$NF}'
# or
awk -F"/" '{print "/"$(NF-1)"/"$NF}' <(readlink -f "$0")
# or
awk -F"/" '{print "/"$(NF-1)"/"$NF}' <<< "$(readlink -f "$0")"
# or
sed -E 's/^(.*)(\/\w+\/\w+\.\w+$)/\2/g' <(readlink -f "$0")
output
/path5/bashfile.sh
#!/bin/bash
echo "/$(basename "$(dirname "$0")")/$(basename "$0")"
echo
echo
read -r
Output:
/Dirname/Filename.Extension
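For comparison, the same result is possible with parameter expansion alone, with no external commands; this is only a sketch against the hypothetical sample path used above:

```shell
#!/bin/bash
# Strip an absolute path down to its last two components using parameter
# expansion. The sample path matches the one from the question.
p=/path1/path2/path3/path4/path5/bashfile.sh
parent=${p%/*}                  # /path1/path2/path3/path4/path5
echo "/${parent##*/}/${p##*/}"  # prints /path5/bashfile.sh
```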

Issue finding the process id from a shell script

mySample.sh
pid=$(ps -Af | grep $1 | grep -v grep | awk ' { print $2 } ');
echo $pid
The above command is printing (and later killing) the PID of the temporary process that was created for the grep itself.
Even though I do not have any process running with Abcd, it still prints PIDs.
Is there any way to ignore them? I am actually already ignoring the grep process using grep -v, but still:
./mySample.sh Abcd
6251 6378 6379
Is there any issue in fetching the process id? Basic command-line output is below, after running a process with the name Acc_Application_One:
[root@localhost Desktop]# ps -Af | grep Acc
root 6251 2758 0 16:16 pts/1 00:00:00 ./Acc_Application_One
root 7288 2758 0 16:57 pts/1 00:00:00 grep Acc
Changed mySample.sh
pgrep -fl "$1"
And the output is
[root@localhost Desktop]# mySample.sh Acc_Application_One
6251 7289
To kill a process with the pattern anywhere in command line use pkill -f:
pkill -f "$1"
As per man pkill:
-f Match the pattern anywhere in the full argument string of the process instead of just the executable name.
Similarly you can use pgrep -f "$1" to list the process id of the matching process.
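A self-contained way to see pgrep -f at work, using a throwaway background process (the sleep duration is arbitrary, and procps pgrep is assumed to be installed):

```shell
#!/bin/bash
# Start a disposable background process, locate it by its full command line
# with pgrep -f, confirm the PID matches, then clean up.
sleep 12345 &
bgpid=$!
pgrep -f "sleep 12345" | grep -qx "$bgpid" && echo "found"
kill "$bgpid"
```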
Try something much simpler:
pid=$(pgrep "$1")
And if you want to kill it:
pkill "$1"
The problem becomes clear when you remove the awk: the listing includes the mySample.sh process itself, because Abcd appears in its command line as well.
ps -Af | grep " $1" | grep -Ev "grep|$0" | awk ' { print $2 } '
I changed the mySample.sh script to the code below. It just fetches the process id using the parameter that was sent and kills it:
pid=$(pgrep -fl "$1" | grep -v '[k]ill_script' | awk '{ print $1 }')
echo $pid
if [[ -n ${pid} ]]; then
echo "Stopping Acc Application $1 with pid=${pid}"
kill -9 ${pid}
fi
Thanks

Why is echo showing the command itself and not the command output

Why is echo showing the command and not the output of the command once I start using it in a for loop? For example, this command works:
[root@linux1 tmp]# iscsiadm -m node |awk '{print $1}'
192.168.100.88:326
But not in a for loop:
[root@linux1 tmp]# for i in 'iscsiadm -m node | awk '{print $1}'';do echo $i;done
iscsiadm -m node | awk {print
}
I want the command to print the first field so that I can then add other functionality to the for loop. Thanks
EDIT -- Not sure why I got voted down on this question. Please advise.
You're not executing the iscsiadm and awk commands, because you quoted them; that makes the whole thing a literal string. To substitute the output of a command back into the command line, use $(...):
for i in $(iscsiadm -m node |awk '{print $1}'); do
echo $i
done
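If the lines may contain spaces or glob characters, a while read -r loop over the pipeline is the safer idiom. In this sketch printf stands in for iscsiadm, since that command is not assumed to be available, and the record is hypothetical:

```shell
#!/bin/bash
# Read the pipeline's output line by line instead of relying on word
# splitting. The sample record mimics iscsiadm -m node output.
printf '%s\n' '192.168.100.88:3260,1 iqn.example:target' |
awk '{print $1}' |
while read -r i; do
    echo "portal: $i"
done
```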

OR condition in Shell Scripting - Unix

I declare three variables.
$1=`ssh <server_1> cat /etc/passswd|cut -f -d:|grep -e $IID -e $EID`
$2=`ssh <server_2> cat /etc/shadow|cut -f -d:|grep -e $IID -e $EID`
$3=`ssh <server_3> cat /etc/passwd|cut -f -d:|grep -i $CID`
The above three variables are created by taking ssh to the servers and checking for the presence of the IDs which I give as input. If an ID doesn't exist already, then the variable is going to be null.
Now, how do I verify if all the three variables are null. I wanted to use the OR condition specified within an IF.
I tried,
if [ -s "$1" -o -s "$2" -o -s "$3"];then
echo -$1 $2 $3 "already exist(s)"
It didn't work. Please advise.
PS: I have just begun my career in Unix, so correct me if I am wrong anywhere.
Several points.
When you assign to a variable, don't use the dollar sign:
foo=xxx
Variables $1, $2 etc. are already used for your command line arguments. Pick other names. But not $4, please. :-)
When you specify a command for ssh, and it has arguments, it has to be quoted, because the command needs to be a single argument for ssh. In your case use double quotes, as you want variable expansion for $IID etc.
Most Unix utils are able to open input files themselves, so you don't need to start your pipeline with cat.
foo=`ssh <server_1> "cut -f -d: /etc/passwd | grep -e $IID -e $EID"`
Or something like that.
It was a typo in my question. I had actually declared it as:
1=`ssh <server_1> cat /etc/passswd|cut -f -d:|grep -e $IID -e $EID`
2=`ssh <server_2> cat /etc/shadow|cut -f -d:|grep -e $IID -e $EID` and so on.
And I tried it as:
if [ -s "$1" -o -s "$2" -o -s "$3"];then
echo -e $1 $2 $3 "already exist(s)"
Since I had to deliver my script today, I used the conventional method:
ssh <server_1> "cat /etc/passswd|cut -f -d:|grep -e $IID -e $EID" > file1
ssh <server_2> "cat /etc/shadow|cut -f -d:|grep -e $IID -e $EID" > file2
ssh <server_3> "cat /etc/passwd|cut -f -d:|grep -ix $CID" > file3
if [ -s file1 -o -s file2 -o -s file3]; then
for i in `cat file1 file2 file3`
do
echo $i "already exists"
done
else
And I have now learnt from my first post that -s checks that a file is not empty and -z checks that a string is empty.
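A minimal sketch of that test syntax, with placeholder strings standing in for the three ssh results (note the space before the closing ] and the -n test for a non-empty string):

```shell
#!/bin/bash
# a, b, c are hypothetical stand-ins for the ssh results; -n is true when a
# string is non-empty, and [ requires a space before the closing ].
a="" b="" c="id42"
if [ -n "$a" ] || [ -n "$b" ] || [ -n "$c" ]; then
    echo "$a $b $c already exist(s)"
fi
```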

Executing a string as a command in bash that contains pipes

I'm trying to list some ftp directories. I can't work out how to make bash execute a command that contains pipes correctly.
Here's my script:
#!/bin/sh
declare -a dirs=("/dir1" "/dir2") # ... and lots more
for d in "${dirs[@]}"
do
cmd='echo "ls /mydir/'"$d"'/*.tar*" | sftp -b - -i ~/mykey user@example.com 2>&1 | tail -n1'
$cmd
done
This just outputs:
"ls /mydir/dir1/*.tar*" | sftp -b - -i ~/mykey user@example.com 2>&1 | tail -n1
"ls /mydir/dir2/*.tar*" | sftp -b - -i ~/mykey user@example.com 2>&1 | tail -n1
How can I make bash execute the whole string including the echo? I also need to be able to parse the output of the command.
I don't think that you need to be using the -b switch at all. It should be sufficient to specify the commands that you would like to execute as a string:
#!/bin/bash
dirs=("/dir1" "/dir2")
for d in "${dirs[@]}"
do
printf -v d_str '%q' "$d"
sftp -i ~/mykey user@example.com "ls /mydir/$d_str/*.tar*" 2>&1 | tail -n1
done
As suggested in the comments (thanks #Charles), I've used printf with the %q format specifier to protect against characters in the directory name that may be interpreted by the shell.
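A quick way to see what %q does, with a made-up directory name containing spaces:

```shell
#!/bin/bash
# printf %q escapes shell metacharacters, so the remote shell treats the
# whole name as a single word. The directory name here is hypothetical.
printf -v d_str '%q' 'dir with spaces'
echo "$d_str"    # prints dir\ with\ spaces
```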
First, you need to use /bin/bash as the shebang to use BASH arrays.
Then remove echo and use command substitution to capture the output:
#!/bin/bash
declare -a dirs=("/dir1" "/dir2") # ... and lots more
for d in "${dirs[@]}"
do
output=$(ls /mydir/"$d"/*.tar* | sftp -b - -i ~/mykey user@example.com 2>&1 | tail -n1)
echo "$output"
done
I will however advise you not to use ls's output in the sftp command. You can replace that with:
output=$(echo "/mydir/$d/"*.tar* | sftp -b - -i ~/mykey user@example.com 2>&1 | tail -n1)
Don't store the command in a string; just use it directly.
#!/bin/bash
declare -a dirs=("/dir1" "/dir2") # ... and lots more
for d in "${dirs[@]}"
do
echo "ls /mydir/$d/*.tar*" | sftp -b - -i ~/mykey user@example.com 2>&1 | tail -n1
done
Usually, people store the command in a string so they can both execute it and log it, as a misguided form of factoring. (I'm of the opinion that it's not worth the trouble required to do correctly.)
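For completeness: when the command genuinely must live in a string, eval re-parses it through the shell, pipes and all. This sketch shows only the mechanism; the quoting pitfalls are exactly why running the pipeline directly is preferred:

```shell
#!/bin/bash
# eval re-parses the string through the shell, so the pipe is honored.
# Quoting inside such strings is fragile; prefer running pipelines directly.
cmd='echo hello | tr a-z A-Z'
eval "$cmd"    # prints HELLO
```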
Note that sftp reads from standard input by default, so you can just use
echo "ls ..." | sftp -i ~/mykey user@example.com 2>&1 | tail -n1
You can also use a here document instead of a pipeline.
sftp -i ~/mykey user@example.com 2>&1 <<EOF | tail -n1
ls /mydir/$d/*.tar.*
EOF
