Awk ignores printing a single column inside sh -c (Linux)

I'm having trouble trying to wrap this command inside another command.
# Target Command:
/bin/df / | awk END' { gsub(/\%/, ""); print $5} '
# What I want:
/bin/sh -c " [command above goes here]"
I'm running into a problem with awk and all the quotes...
I've tried:
bin/sh -c "/bin/df / | awk END' { gsub(/\%/, "'"''"'"); print $5} '"
But the problem is that awk doesn't seem to print only column $5 in this instance.
How can I fix the above awk command to print only column 5 (of the last line)?
PS: what I'm trying to do is to get the percentage of disk used (excluding the % sign). Since I'm calling it from a program that doesn't support pipes in an easy way, I'm using sh -c.

You can use:
/bin/sh -c "/bin/df | awk 'END{gsub(/%/, \"\", \$5); print \$5}'"
$ also needs to be escaped.
As per @Ed's comment below, if $5 is not available in the END block in some versions of awk, then use:
/bin/sh -c "/bin/df | awk '{p=\$5} END{sub(/%/, \"\", p); print p}'"
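To sanity-check the quoting without depending on your system's actual df output, you can substitute a fixed df-style line (the device name and numbers below are made up):

```shell
# Made-up df-style line standing in for real df output;
# the escaping of \$5 and \" is exactly as in the answer above
/bin/sh -c "printf '/dev/root 10G 4G 6G 42%% /\n' | awk '{p=\$5} END{sub(/%/, \"\", p); print p}'"
# prints 42
```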

You need to escape the " and the $:
/bin/sh -c "/bin/df / | awk END' { gsub(/\%/, \"\"); print \$5} '"
The reason you did not need to escape the $ in your original line is that it was guarded by single quotes. But when you place the ticks inside double quotes, they lose that protective property.
The " need to be escaped because they would terminate your outermost double quotes too early.
But it is most likely easier to put everything into a shell script and invoke that. This also gives you room for a few more lines, like setting up PATH and IFS, or doing platform-specific detection of (g)awk, and so on.
BTW: I would also add -x nfs (Linux) or -t nonfs,nullfs (BSD) if possible. Monitoring scripts are known to bring a system down when run repeatedly while an NFS server is unavailable. Of course this assumes you don't want to monitor NFS.

You can let bash do the heavy lifting with exported functions:
myfunc() {
/bin/df / | awk 'END { gsub(/\%/, ""); print $5} '
}
export -f myfunc
bash -c "myfunc"
This way, you don't have to mess around with unreadable escaping.

bash - Diff a command with a file (specific)

It's pretty hard for me to describe what I want to do, but I'll try:
(Because of some private information I changed the names.)
I want to "diff" a command's output against a text file I created.
The command output looks like:
'Blabla1' '12.34.56.78' (24 objects + dependencies), STATUS: 'RUNNING'
'Blabla3' '12.34.56.89' (89 objects + dependencies), STATUS: 'RUNNING'
And the txtfile:
Blabla1
Blabla2
If it finds Blabla1 anywhere in the command output, that's fine. But as you can see, it will not find Blabla2 anywhere in the command output, and that difference is what I want as output.
I hope you understand what I mean and can possibly help me.
Greetings,
Can
UPDATE:
@hek2mgl
So my command is:
./factory.sh listapplications | grep -i running
This command shows this:
'ftp' '1' (7 objects + dependencies), STATUS: 'RUNNING' - 'XSD Da
'abc' '5.1.0' (14 objects + dependencies), STATUS: 'RUNNING' - '2017-10-13: Fix fuer Bug 2150'
'name' '1.0.2' (5 objects + dependencies), STATUS: 'RUNNING'
And I want to compare that output with my textfile:
ftp
abc
name
missing
alsomissing
So if I compare these two now, it should check whether it finds each word from my textfile ANYWHERE in the command output. If it does find it anywhere, it should not be output.
As you see, it will not find "missing" and "alsomissing". I want those two as the output at the end.
What you might be interested in is grep in combination with process substitution. If your file with patterns is file.txt and your command to execute is cmd, then you can use:
grep -o -F -f file.txt <(cmd) | grep -v -F -f - file.txt
This will output the patterns in file.txt which are not matched in the output of cmd.
In case of the Blabla example, the above line will output
Blabla2
It works as follows. The first part searches for all patterns listed in file.txt in the output of cmd and outputs only the matched parts. This means that
% grep -o -F -f file.txt <(cmd)
Blabla1
This output is now piped to another grep that finds all lines in file.txt which do not match any of the patterns coming from the pipe (-f -):
% grep -o -F -f file.txt <(cmd) | grep -v -F -f - file.txt
Blabla2
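For reference, the whole pipeline can be reproduced without the real command by faking its output (the file name and sample text are taken from the question; since the match is fixed-string, the surrounding single quotes in the output don't matter):

```shell
#!/bin/bash
# Rebuild the pattern file and fake the command from the question
printf 'Blabla1\nBlabla2\n' > file.txt
cmd() { echo "'Blabla1' '12.34.56.78' (24 objects + dependencies), STATUS: 'RUNNING'"; }

# Extract the patterns found in the output, then invert against file.txt
grep -o -F -f file.txt <(cmd) | grep -v -F -f - file.txt
# prints Blabla2
```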
So ... this seems to do it, using bash process substitution:
$ cat file1
'Blabla1' '12.34.56.78' (24 objects + dependencies), STATUS: 'RUNNING'
'Blabla3' '12.34.56.89' (89 objects + dependencies), STATUS: 'RUNNING'
$ cat file2
Blabla1
Blabla2
$ grep -vFf <(awk '{gsub(/[^[:alnum:]]/,"",$1);print $1}' file1) file2
Blabla2
The awk script takes the first field, strips non-alphanumeric characters from it (i.e. the single quotes) and outputs just that first field. The grep option -f uses the "virtual" file created by the aforementioned process substitution as a list of fixed strings to search for within the input file (file2), and the -v reverses the search, showing you only what was not found.
If the regex in the gsub() is too greedy, you might replace it with something like $1=substr($1,2,length($1)-2).
You could alternately do this in (POSIX) awk alone, without relying on bash process substitution:
$ awk 'NR==FNR{a[substr($1,2,length($1)-2)];next} $1 in a{next} 1' file1 file2
Blabla2
This reads the stripped first field of file1 into the keys of an array, then for each line of file2 checks for the existence of that key in the array, skipping lines that match and printing any left over. (The 1 at the end of the script is short-hand for "print this line".)
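Reproducing this end to end with the sample data from the question (file names file1/file2 as above):

```shell
# Rebuild the two input files from the question, then run the pure-awk version
printf "'Blabla1' '12.34.56.78' (24 objects + dependencies), STATUS: 'RUNNING'\n'Blabla3' '12.34.56.89' (89 objects + dependencies), STATUS: 'RUNNING'\n" > file1
printf 'Blabla1\nBlabla2\n' > file2
awk 'NR==FNR{a[substr($1,2,length($1)-2)];next} $1 in a{next} 1' file1 file2
# prints Blabla2
```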
You can also use awk only:
awk '
# Store the patterns of text.file in an array (p)atterns.
# Initialize their count of occurrence with 0.
NR==FNR{
    p[$0]=0
    next
}
# Strip the quotes around BlaBla... in the cmd output.
# Increase the count of occurrence of the pattern.
{
    gsub("'\''", "")
    p[$1]++
}
# At the end of the input, print those patterns which
# did not appear in the cmd output, meaning their count
# of occurrence is zero.
END{
    for(i in p){
        if(p[i]==0){
            print i
        }
    }
}' text.file cmd.txt
PS: Alternatively, use process substitution instead of storing the command output in a file: replace cmd.txt with <(cmd).

shell scripts variable passed to awk and double quotes needed to preserve

I have some logs called ts.log that look like
[957670][DEBUG:2016-11-30 16:49:17,968:com.ibatis.common.logging.log4j.Log4jImpl.debug(Log4jImpl.java:26)]{pstm-9805256} Parameters: []
[957670][DEBUG:2016-11-30 16:49:17,968:com.ibatis.common.logging.log4j.Log4jImpl.debug(Log4jImpl.java:26)]{pstm-9805256} Types: []
[957670][DEBUG:2016-11-30 16:50:17,969:com.ibatis.common.logging.log4j.Log4jImpl.debug(Log4jImpl.java:26)]{rset-9805257} ResultSet
[957670][DEBUG:2016-11-30 16:51:17,969:com.ibatis.common.logging.log4j.Log4jImpl.debug(Log4jImpl.java:26)]{rset-9805257} Header: [LAST_INSERT_ID()]
[957670][DEBUG:2016-11-30 16:52:17,969:com.ibatis.common.logging.log4j.Log4jImpl.debug(Log4jImpl.java:26)]{rset-9805257} Result: [731747]
[065417][DEBUG:2016-11-30 16:53:17,986:sdk.protocol.process.InitProcessor.process(InitProcessor.java:61)]query String=requestid=10547
I have a script with something like:
#!/bin/bash
begin=$1
cat ts.log | awk -F '[ ,]' '{if($2 ~/^[0-2][0-9]:[0-6][0-9]:[0-6][0-9]&& $2>="16:50:17"){print $0}}'
Instead of typing in the time like 16:50:17, I want to just pass the shell's $1 to awk, so that all I need to do is ./script hh:mm:ss. The script would look like:
#!/bin/bash
begin=$1
cat ts.log | awk -v var=$begin -F '[ ,]' '{if($2 ~/^[0-2][0-9]:[0-6][0-9]:[0-6][0-9]&& $2>="var"){print $0}}'
But the double quotes need to be there or it won't work.
I tried 2>"\""var"\""
but it doesn't work.
So, is there a way to keep the double quotes there?
Preferred result: run ./script,
then extract the log from the time specified as $1.
There's many ways to do what you want.
Option 1: Using double quotes to enclose the awk program
#!/bin/bash
begin=$1
awk -F '[ ,]' "\$2 ~ /^..:..:../ && \$2 >= \"${begin}\" " ts.log
Inside double-quoted strings, bash performs variable substitution. So $begin or ${begin} will be replaced with the shell variable's value (whatever the user passed).
Undesired effect: awk's field variables starting with $ must be escaped with '\', or bash will try to expand them before executing awk.
To get a double quote character (") inside a bash double-quoted string, it has to be escaped with '\', so in bash " \"16:50\" " will be replaced with "16:50". (This won't work with single-quoted strings, which perform no variable expansion or escape processing at all in bash.)
To see what variable substitutions are made when bash executes the script, you can execute it with debug option (it's very enlightening):
$ bash -x yourscript.sh 16:50
Option 2: Using awk variables
#!/bin/bash
begin=$1
awk -F '[ ,]' -v begin="$begin" '$2 ~ /^..:..:../ && $2 >= begin' ts.log
Here an awk variable begin is created with option -v varname=value.
Awk variables can be used anywhere in the awk program like any other awk variable (they need neither double quotes nor $).
There are other options, but I think you can work with these two.
In both options I've changed a bit your script:
It doesn't need cat to send data to awk, because awk can run your program over one or more data files given as arguments after the program.
Your awk program doesn't need to include print at all (as @fedorqui said), because a basic awk program is composed of pattern {code} pairs, where pattern is the same as what you used in the if statement, and the default code is {print $0}.
I've also changed the time pattern, primarily to clarify the script; in a log file there's almost no chance that some other 8-character string with two colons inside appears in that position (in the regexp, . matches any char).
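The option-2 pattern can be exercised on its own with two abbreviated log lines (timestamps taken from the question's log, the rest shortened):

```shell
# Two abbreviated log lines; only the one at or after 16:50:00 should survive
printf '[957670][DEBUG:2016-11-30 16:49:17,968:...] Parameters: []\n[957670][DEBUG:2016-11-30 16:50:17,969:...] ResultSet\n' |
awk -F '[ ,]' -v begin=16:50:00 '$2 ~ /^..:..:../ && $2 >= begin'
# prints only the 16:50:17 line
```

Splitting on space or comma (-F '[ ,]') makes $2 the hh:mm:ss part, so a plain string comparison against begin works.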

bash scripting: combine var=$(...) and var=${var%%...} lines?

Is it possible and, if yes, how to convert the following expression to one-liner?
DEV=$(lsblk -no KNAME,MODEL | grep 'ModelNAME')
DEV=${DEV%%'ModelNAME'}
A simple DEV=${(lsblk -no KNAME,MODEL | grep 'ModelNAME')%%'ModelNAME'} doesn't work.
zsh allows you to combine parameter expansions. Bash does not.
For either bash or POSIX sh (both of which support this particular parameter expansion), you'll need to do this as two separate commands.
That said, there are other options available. For instance:
# tell awk to print first field and exit on a match
dev=$(lsblk -no KNAME,MODEL | awk '/ModelNAME/ { print $1; exit }')
...or, even easier (but requiring bash or another modern ksh derivative):
# read first field of first line returned by grep; _ is a placeholder for other fields
read -r dev _ < <(lsblk -no KNAME,MODEL | grep -e ModelNAME)
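Both approaches can be tried without real block devices by faking the lsblk output (fake_lsblk and its device names are made up here):

```shell
#!/bin/bash
# fake_lsblk stands in for `lsblk -no KNAME,MODEL` (made-up devices)
fake_lsblk() { printf 'sda  ModelNAME\nsdb  OtherModel\n'; }

# awk variant: print first field of the first matching line
dev=$(fake_lsblk | awk '/ModelNAME/ { print $1; exit }')
echo "$dev"   # prints sda

# read variant (bash): first field of the first grep hit
read -r dev _ < <(fake_lsblk | grep -e ModelNAME)
echo "$dev"   # prints sda
```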

AWK with If condition

I am trying to replace the following string, for example:
from
['55',2,1,10,30,23],
to
['55',2,555,10,30,23],
OR
['55',2,1,10,30,23],
to
['55',2,1,10,9999,23],
I searched around and found this:
$ echo "[55,2,1,10,30,23]," | awk -F',' 'BEGIN{OFS=","}{if($1=="[55"){$2=10}{print}}'
[55,10,1,10,30,23],
but it's not working in my case, since there are single quotes (') around the value of $1 in my if condition:
$ echo "['55',2,1,10,30,23]," | awk -F',' 'BEGIN{OFS=","}{if($1=="['55'"){$2=10}{print}}'
['55',2,1,10,30,23],
The problem is not in the awk code, it's the shell expansion. You cannot have single quotes in a singly-quoted shell string. This is the same problem you run into when you try to put the input string into single quotes:
$ echo '['55',2,1,10,30,23],'
[55,2,1,10,30,23],
-- the single quotes are gone! And this makes sense, because they did their job of quoting the [ and the ,2,1,10,30,23], (the 55 is unquoted here), but it is not what we wanted.
A solution is to quote the sections between them individually and squeeze them in manually:
$ echo '['\''55'\'',2,1,10,30,23],'
['55',2,1,10,30,23],
Or, in this particular case, where nothing nefarious is between where the single quotes should be,
echo '['\'55\'',2,1,10,30,23],' # the 55 is now unquoted.
Applied to your awk code, that looks like this:
$ echo "['55',2,1,10,30,23]," | awk -F',' 'BEGIN{OFS=","}{if($1=="['\'55\''"){$2=10}{print}}'
['55',10,1,10,30,23],
Alternatively, since this doesn't look very nice if you have many single quotes in your code, you can write the awk code into a file, say foo.awk, and use
echo "['55',2,1,10,30,23]," | awk -F, -f foo.awk
Then you don't have to worry about shell quoting mishaps in the awk code because the awk code is not subject to shell expansion anymore.
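A sketch of that file-based variant (using the foo.awk name from above; the program body is the unescaped version of the one-liner):

```shell
# Write the awk program to a file; no shell quoting gymnastics needed inside
cat > foo.awk <<'EOF'
BEGIN { OFS = "," }
$1 == "['55'" { $2 = 10 }
{ print }
EOF

echo "['55',2,1,10,30,23]," | awk -F, -f foo.awk
# prints ['55',10,1,10,30,23],
```

Assigning to $2 makes awk rebuild the record with OFS, which is why OFS must be set to "," to keep the commas.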
I think how to match and replace is not the problem for you. The problem you were facing is how to match a single quote (') in a field.
To avoid escaping each ' in your code, and to make your code more readable, you can assign the quote to a variable and use that variable in your code, for example like this:
echo "['55' 1
['56' 1"|awk -v q="'" '$1=="["q"55"q{$2++}7'
['55' 2
['56' 1
In the above example, only in the line with ['55' did the 2nd field get incremented.

Shell scripts for Meld Nautilus context menu

Beyond Compare provides "Select for compare" and "Compare to Selected" by using two nautilus scripts (stored in /home/user/.gnome2/nautilus-scripts).
Script 1: Select for compare
#!/bin/sh
quoted=$(echo "$NAUTILUS_SCRIPT_SELECTED_FILE_PATHS" | awk 'BEGIN { FS = "\n" } { printf "\"%s\" ", $1 }' | sed -e s#\"\"##)
echo "$quoted" > $HOME/.beyondcompare/nautilus
Script 2: Compare to Selected
#!/bin/sh
arg2=$(cat $HOME/.beyondcompare/nautilus)
arg1=$(echo "$NAUTILUS_SCRIPT_SELECTED_FILE_PATHS" | awk 'BEGIN { FS = "\n" } { printf "\"%s\" ", $1 }' | sed -e s#\"\"##)
bcompare $arg1 $arg2
I am trying to write similar scripts for Meld, but it is not working.
I am not familiar with shell scripts. Can anyone help me understand this:
quoted=$(echo "$NAUTILUS_SCRIPT_SELECTED_FILE_PATHS" | awk 'BEGIN { FS = "\n" } { printf "\"%s\" ", $1 }' | sed -e s#\"\"##)
so that I can adapt it to Meld?
If you are not rolling your own solution for the sake of learning, I would suggest installing the diff-ext extension to nautilus. It is cross platform and if you are running Debian/Ubuntu installing it should be as simple as sudo apt-get install diff-ext.
Check out some screenshots here - http://diff-ext.sourceforge.net/screenshots.shtml
The quoted=$( ... ) part assigns whatever output there is to the variable named quoted, which can be used later in the script as $quoted, ${quoted}, "${quoted}", or "$quoted".
The '|' character is called a 'pipe' in unix/linux; it connects the output of the preceding command to the input of the following command.
So you just take the script apart 1 piece at a time and see what it does,
quoted=$(
# I would execute below by itself first
echo "$NAUTILUS_SCRIPT_SELECTED_FILE_PATHS"
# then add on this piped program to see how data gets transformed
| awk 'BEGIN { FS = "\n" } { printf "\"%s\" ", $1 }'
# then add this
| sed -e s#\"\"##
# the capturing of the output to the var 'quoted' is the final step of code
)
# you **cannot** copy paste this whole block of code and expect it to work ;-)
I don't know what is supposed to be in $NAUTILUS_SCRIPT_SELECTED_FILE_PATHS, so it is hard to show you here. Also, that variable is not defined in any of the code you show, so you may only get a blank line when you echo its value. Be prepared to do some research on how that value gets set AND what the correct values are.
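To see the pipeline work, you can set the variable by hand with a couple of made-up paths (Nautilus delivers the selection as newline-separated paths):

```shell
# Two made-up paths standing in for a real Nautilus selection
NAUTILUS_SCRIPT_SELECTED_FILE_PATHS='/tmp/a.txt
/tmp/b.txt'

# Same pipeline as the script: wrap each path in double quotes,
# then drop any empty "" pair left by a trailing blank line
quoted=$(echo "$NAUTILUS_SCRIPT_SELECTED_FILE_PATHS" | awk 'BEGIN { FS = "\n" } { printf "\"%s\" ", $1 }' | sed -e s#\"\"##)
echo "$quoted"   # prints "/tmp/a.txt" "/tmp/b.txt"
```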
Also I notice that your code starts with #!/bin/sh. If that were truly the old Bourne shell, command substitution like quoted=$(....) would not work and would generate an error message; presumably your system really uses bash (or another POSIX shell) for /bin/sh. You can eliminate any possible confusion in the future (when moving to a system where /bin/sh is not bash) by changing the 'shebang' to #!/bin/bash.
I hope this helps.
I just discovered diff-ext thanks to this post, excellent!
My first try failed: by default diff-ext does not handle backup files (*~ and *.bak). To enable this, run:
$ diff-ext-setup
and in the Mime types pane, check application/x-trash.
Now you can compare a file and its backup.
