Run a Bash command in a PowerShell script with PowerCLI - Linux

I am trying to run this script in PowerCLI v11, but I always get errors.
$vm= "server"
$adminGuest="root"
$adminGuestPwd="pass"
$command = " df -H | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5 " " $1 }'"
Invoke-VMScript -vm $vm -ScriptText $command -GuestUser $adminGuest -GuestPassword $adminGuestPwd -ScriptType Bash
I don't know how to integrate this command into my code: df -H | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5 " " $1 }'
Thanks.

The double quotes inside the awk command confuse PowerCLI's handling of the script text.
If you try the command without the double quotes in awk, you still won't get the result you're expecting, because awk no longer formats the output the way you want.
Maybe you can collect the info with Invoke-VMScript and then parse it with PowerShell.
$command = " df -H | grep -vE '^Filesystem|tmpfs|cdrom' "
$output = Invoke-VMScript -vm $vm -ScriptText $command -GuestUser $adminGuest -GuestPassword $adminGuestPwd -ScriptType Bash
$output.ScriptOutput
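A possible way to sidestep the quoting issue entirely (a sketch, not from the original answer): in awk, a comma in print inserts the output field separator, which defaults to a space, so the Bash command needs no embedded double quotes at all:
df -H | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5, $1 }'
With the double quotes gone, the string should be easier to embed in $command unchanged.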

Related

Reformat with awk and sed from STDIN and execute

This is just an example of what I run into a lot:
I would like to copy all .bash_histories to one directory.
grep "/bin/bash" /etc/passwd | awk -F: '{ print "cp " $6"/.bash_history /backup" $6 ".bash_history" }
Output:
cp /home/peter/.bash_history /backup/home/peter/.bash_history
cp /home/john/.bash_history /backup/home/john/.bash_history
What I would like is an output like this:
cp /home/peter/.bash_history /backup/_home_peter_.bash_history
cp /home/john/.bash_history /backup/_home_john_.bash_history
And that this output will be executed.
(It's not specifically about this issue, but just in general how to reformat with awk and sed and execute the new created command line, without really creating a script for it)
An awk script to obtain a similar output would be
grep "/bin/bash" /etc/passwd |head -2 | awk -F: '{ print "cp " $6 "/.bash_history backup/_home_"$1".bash_history" }'
giving an output like
cp /root/.bash_history backup/_home_root.bash_history
cp /home/xxx/.bash_history backup/_home_xxx.bash_history
Now, in order to execute the commands, the system() function within awk is helpful.
system(command) executes any command, and its return value is the exit status of that command.
The above script can be modified as
grep "/bin/bash" /etc/passwd |head -2 | awk -F: '{ system("cp " $6 "/.bash_history backup/_home_"$1".bash_history;") }'
Test run:
$ grep "/bin/bash" /etc/passwd |head -2 | awk -F: '{ system("cp " $6 "/.bash_history backup/_home_"$1".bash_history;") }'
$ ls backup/
_home_xxx.bash_history _home_root.bash_history
PS: It is not recommended to create directories in your root folder, so I intentionally changed /backup in your script to backup.
Also, for the script to succeed, the backup folder must be created beforehand.
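Another common pattern for "build the command line with awk or sed, then execute it" (a sketch under the same assumptions as above, not taken from either answer) is to print the generated commands and pipe them to a shell; dropping the final | sh lets you inspect the commands before running them:
grep "/bin/bash" /etc/passwd | awk -F: '{ print "cp " $6 "/.bash_history backup/_home_" $1 ".bash_history" }' | sh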
getent passwd | grep \/bin\/bash | cut -d ":" -f 6 | while read a; do eval "cp $a/.bash_history /backup/$(echo $a | sed 's#/#_#g')_.bash_history"; done
This uses getent to fetch the passwd database, and cut extracts the 6th field like your awk statement did; the while loop then reads each home directory line by line, builds the copy command, and executes it with eval.
Worked perfectly! Issue solved!
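For reference, a variant of the same loop without eval (a sketch assuming the same /backup layout; the command substitution already builds the target name, so nothing needs re-evaluation):
getent passwd | grep /bin/bash | cut -d ":" -f 6 | while read -r a; do cp "$a/.bash_history" "/backup/$(echo "$a" | sed 's#/#_#g')_.bash_history"; done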

awk - send sum to global variable

I have a line in a bash script that calculates the sum of unique IP requests to a certain page.
grep $YESTERDAY $ACCESSLOG | grep "$1" | awk -F" - " '{print $1}' | sort | uniq -c | awk '{sum += 1; print } END { print " ", sum, "total"}'
I am trying to get the value of sum to a variable outside the awk statement so I can compare pages to each other. So far I have tried various combinations of something like this:
unique_sum=0
grep $YESTERDAY $ACCESSLOG | grep "$1" | awk -F" - " '{print $1}' | sort | uniq -c | awk '{sum += 1; print ; $unique_sum=sum} END { print " ", sum, "total"}'
echo "${unique_sum}"
This results in an echo of "0". I've tried placing $unique_sum=sum in the END, various combinations of initializing the variable (awk -v unique_sum=0 ...), and placing the variable assignment outside of the quoted sections.
So far, my Google-fu is failing horribly as most people just send the whole of the output to a variable. In this example, many lines are printed (one for each IP) in addition to the total. Failing a way to capture the 'sum' variable, is there a way to capture that last line of output?
This is probably one of the most sophisticated things I've tried in awk so my confidence that I've done anything useful is pretty low. Any help will be greatly appreciated!
You can't assign a shell variable inside an awk program. In general, no child process can alter the environment of its parent. You have to have the awk program print out the calculated value, and then shell can grab that value and assign it to a variable:
output=$( grep $YESTERDAY $ACCESSLOG | grep "$1" | awk -F" - " '{print $1}' | sort | uniq -c | awk '{sum += 1; print } END {print sum}' )
unique_sum=$( sed -n '$p' <<< "$output" ) # grab the last line of the output
sed '$d' <<< "$output" # print the output except for the last line
echo " $unique_sum total"
That pipeline can be simplified quite a lot: awk can do what grep can do, so first
grep $YESTERDAY $ACCESSLOG | grep "$1" | awk -F" - " '{print $1}'
is (longer, but only one process)
awk -F" - " -v date="$YESTERDAY" -v patt="$1" '$0 ~ date && $0 ~ patt {print $1}' "$ACCESSLOG"
And the last awk program just counts how many lines and can be replaced with wc -l
All together:
unique_output=$(
awk -F" - " -v date="$YESTERDAY" -v patt="$1" '
$0 ~ date && $0 ~ patt {print $1}
' "$ACCESSLOG" | sort | uniq -c
)
echo "$unique_output"
unique_sum=$( wc -l <<< "$unique_output" )
echo " $unique_sum total"

What does this command do in the Linux shell

TMPFILE=/tmp/jboss_ps.$$
${PS} ${PS_OPTS} | \
grep ${JBOSS_HOME}/java | \
egrep -v " grep | \
tee | $0 " | ${AWK} '{print $NF " "}' | \
sort -u > ${TMPFILE} 2>/dev/null
I want to know what this precise line from the code above is doing:
egrep -v " grep | \
tee | $0 "
At first I thought that this line searches for everything that does not contain the exact string "grep | \ tee | $0", but it appears that egrep is processing the pipes. So what is the significance of the pipes here, do they mean OR? From my test it appears they don't, but if they mean output redirection, then what is the inner grep getting? And why is tee on its own?
AFAIK
egrep -v " grep | \
tee | $0 "
is nothing but
egrep -v " grep | tee | $0 "
where \ is the continuation character in bash.
egrep is the same as grep -E
-v inverts the selection
tee here is just another string to match
So egrep -v " grep | tee | $0 " takes the lines that contain the string {java path} and, within those results, keeps only the lines that do not match any of the patterns " grep ", " tee " or " $0 ", where
$0 is the script's filename, not a literal '$0', because the pattern uses DOUBLE QUOTES and not single quotes :)
In a double-quoted pattern like " commands | $variables ", the shell expands the variables before egrep ever sees the pattern.
The commands in the pipeline before the egrep command are probably something like ps -ef | grep .... The egrep -v (option) line you asked about is simply omitting lines you don't want in the results: in this case the initial grep command issued by the script, any tee commands, and lastly $0, which is the name of the script being executed. egrep lets you enter multiple patterns enclosed in double quotes and separated by the pipe symbol. Syntax: egrep [options] "pattern1|pattern2|pattern..."
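A quick way to see the alternation-plus-inversion behaviour (a hypothetical demonstration, not part of the original script):
printf '%s\n' "1234 grep java" "1234 tee java" "1234 /opt/jboss/java" | egrep -v " grep | tee "
Only the "1234 /opt/jboss/java" line survives, because inside the pattern the pipe means OR and -v drops every line that matches either alternative.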

unix - awk unexpected behaviour

I have the below code in a bash file called 'findError.sh':
#!/bin/bash
filename="$1"
formatindicator="\"|\""
echo "$formatindicator"
formatarg="\$1"
echo "$formatarg"
count=`awk -F$formatindicator '{print $formatarg}' $filename | perl -ane '{ if(m/ERROR/) { print } }' | wc -l `
command="awk -F$formatindicator '{print $formatarg}' $filename | perl -ane '{ if(m/ERROR/) { print } }' | wc -l"
echo $command
echo $count
I then run it at the command line like so:
sh findError.sh test.dat
But it gives me a different count than running the command that was echoed. How is this possible?
ie
The $command that is echoed back is:
awk -F"|" '{print $1}' test.dat | perl -ane '{ if(m/ERROR/) { print } }' | wc -l
But the $count that is echoed back is:
3
However if I just run this one line below at the command line (not through the script) - the result is 0:
awk -F"|" '{print $1}' test.dat | perl -ane '{ if(m/ERROR/) { print } }' | wc -l
Sample input file (test.dat):
sid|storeNo|latitude|longitude
2|1|-28.03720000
9|2
10
jgn352|1|-28.03ERROR720000
9|2|fdERRORkjhn422-405
0000543210|gfERRORdjk39
Notes: Using SunOS with bash version 4.0.17
You're being too careful with your quotes around the format delimiter.
When you type:
awk -F"|" ...
The program (awk) sees -F| as its first argument; the shell strips the double quotes.
When you have:
formatindicator="\"|\""
echo "$formatindicator"
formatarg="\$1"
echo "$formatarg"
count=`awk -F$formatindicator ...`
You have preserved the double quotes in $formatindicator, and therefore awk sees -F"|" as its argument and ends up using the double-quote character as the delimiter.
Use:
formatindicator="|"
echo "$formatindicator"
formatarg="\$1"
echo "$formatarg"
count=`awk -F"$formatindicator" ...`
The difference is that the shell strips the quotes off -F"$formatindicator" but doesn't do that when $formatindicator itself contains the double quotes.
(NB: edited to retain back-quotes instead of the $(...) notation that is (a) preferred and (b) was used in the first version of this answer. The $(...) notation was not recognized by the SunOS /bin/sh which was, I believe, being used to execute the script. Both bash and ksh recognize the $(...) notation, but the basic Bourne shell, /bin/sh, on Solaris 10 (SunOS 5.10) and earlier (I've not laid hands on Solaris 11) does not recognize $(...).)
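A small demonstration of the difference (hypothetical input, just to show what awk receives in each case):
fs='"|"'                                          # the variable literally contains "|"
printf 'one|two\n' | awk -F"$fs" '{ print NF }'   # prints 1: the separator is the double-quote character
printf 'one|two\n' | awk -F'|' '{ print NF }'     # prints 2: the separator is |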
I note that any of perl, awk or grep could be used to find the count of the error lines on its own, so the triplet of awk piped to perl piped to wc is not very efficient.
awk -F"|" '$1 ~ /ERROR/ { count++ } END { print count }' $filename
grep -c ERROR $filename # simple
grep -c '^[^|]*ERROR[^|]*|' $filename # accurate
perl -anF"|" -e '$count++ if $F[0] =~ m/ERROR/; END { print "$count\n"; }' $filename
It's Perl, so TMTOWTDI; take your pick...
Side discussion
In the comments, we have concerns over how various parts of the script are being interpreted.
formatindicator="|"
formatarg="\$1"
count=`awk -F$formatindicator '{print $formatarg}' $filename | perl -ane '{ if(m/ERROR/) { print } }' | wc -l `
Let's simplify this to (using part of my main answer):
count=`awk -F"$formatindicator" '{print $formatarg}' $filename`
The intention is to have the delimiter specified on the command line (which happens successfully) via the -F option. The issue, I expect, is "why does $formatarg get expanded inside single quotes?". And the answer is "Does it?". I think not. So, what is happening, is that awk is seeing the script {print $formatarg}. Since formatarg is not assigned any value, it is equivalent to 0, so the script prints $0, which is the entire input line. Perl is quite happy to echo the line if it matches ERROR anywhere on the line, and wc couldn't care less about what's in the lines, so the result is approximately what was expected. The only time there'd be a discrepancy is when the line from $filename contains ERROR in other than the first pipe-delimited field. That would be counted by the script where it should not.
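A quick way to confirm that behaviour (hypothetical input): an awk variable that was never assigned is empty, which evaluates to 0 in a field reference, so $formatarg is $0, the whole line.
echo "a b c" | awk '{ print $formatarg }'    # prints: a b c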
The problem is with using external variables in awk. If you wish to use an external shell variable in awk, define a variable in the awk one-liner with the -v option and assign your shell variable to it.
So the line
count=`awk -F$formatindicator '{print $formatarg}' $filename | perl -ane '{ if(m/ERROR/) { print } }' | wc -l `
should be -
count=`awk -v fi="$formatindicator" -v fa="$formatarg" 'BEGIN {FS=fi}{print fa}' "$1" | perl -ane '{ if(m/ERROR/) { print } }' | wc -l `
Update:
As stated in the comments, $formatarg contains the value $1. What you need to do is store just 1 in it and then pass it as
count=`awk -v fi="$formatindicator" -v fa="$formatarg" 'BEGIN {FS=fi}{print $fa}' "$1" | perl -ane '{ if(m/ERROR/) { print } }' | wc -l`
[jaypal:~/Temp] echo $formatindicator
|
[jaypal:~/Temp] echo $formatarg
1
[jaypal:~/Temp] awk -v fi="$formatindicator" -v fa="$formatarg" 'BEGIN {FS=fi}{print $fa}' data.file
sid
2
9
10
jgn352
9
0000543210
Script:
#!/bin/bash
filename="$1"
formatindicator="|"
echo "$formatindicator"
formatarg="1"
echo "$formatarg"
count=`awk -v fa="$formatarg" -v fi="$formatindicator" 'BEGIN{FS=fi}{print $fa}' $filename | perl -ane '{ if(m/ERROR/) { print } }' | wc -l `
command="awk -F$formatindicator '{print $formatarg}' $filename | perl -ane '{ if(m/ERROR/) { print } }' | wc -l"
echo $command
echo $count

Print the output of a shell command in Perl

I would like to turn the output of a shell command into a variable, e.g. $result, and then print it on screen, e.g. print $result:
df -H | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5 " " $1 }'
Just use backticks, and be careful with the quoting:
my $result = `df -H | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print \$5 " " \$1 }'`;
print $result;
I'm just a learner myself, but I found http://perldoc.perl.org/Shell.html useful... "This package is included as a show case, illustrating a few Perl features. It shouldn't be used for production programs".
