Issues passing AWK output to BASH Variable - linux

I'm trying to parse lines from an error log in Bash and assign a certain part of each line to a variable for use later in the script, but I'm having issues once I try to pass it to a Bash variable.
What the log file looks like:
1446851818|1446851808.1795|12|NONE|DID|8001234
I need the number in the third field of the line (in this case, 12).
Here's an example of the command I'm running:
tail -n5 /var/log/asterisk/queue_log | grep 'CONNECT' | awk -F '[|]' '{print $3}'
The line of code is trying to accomplish this:
Grab the last lines of the log file
Search for a phrase (in this case CONNECT; I'm using the same command to trigger different items)
Separate the number in the third set of the line out so it can be used elsewhere
If I run the above full command, it runs successfully like so:
tail -n5 /var/log/asterisk/queue_log | grep 'CONNECT' | awk -F '[|]' '{print $3}'
12
Now if I try and assign it to a variable in the same line/command, I'm unable to have it echo back the variable.
My command when assigning to a variable looks like:
tail -n5 /var/log/asterisk/queue_log | grep 'CONNECT' | brand=$(awk -F '[|]' '{print $3}')
(It is being run in the same script as the echo command, so the variable should be in scope. The test script looks like this:)
#!/bin/bash
tail -n5 /var/log/asterisk/queue_log | grep 'CONNECT' | brand=$(awk -F '[|]' '{print $3}')
echo "$brand";
I'm aware this is most likely not the most efficient/elegant solution, so if there are other ideas/ways to accomplish this I'm open to them as well (my Bash skills are basic but improving).

You need to capture the output of the entire pipeline, not just the final section of it:
brand=$(tail -n5 /var/log/asterisk/queue_log | grep 'CONNECT' | awk -F '|' '{print $3}')
You may also want to consider what will happen if there is more than one line containing CONNECT in the final five lines of the file (or indeed, if there are none). That's going to cause brand to have multiple (or no) values.
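If multiple matching lines are a real possibility, one way to keep them separate (a sketch assuming bash 4+ for mapfile; the sample log lines here are made up) is to capture each match as an array element and pick the one you want:

```shell
#!/usr/bin/env bash
# Stand-in for the last few lines of queue_log (values are made up)
printf '%s\n' \
  '1446851818|1446851808.1795|12|NONE|CONNECT|8001234' \
  '1446851819|1446851808.1796|15|NONE|CONNECT|8001235' > sample_log

# mapfile stores one array element per line, so multiple CONNECT
# matches stay separate instead of running together in one string
mapfile -t brands < <(awk -F '|' '/CONNECT/ {print $3}' sample_log)

echo "matches: ${#brands[@]}"
echo "latest:  ${brands[${#brands[@]}-1]}"
```

An empty array (`${#brands[@]}` equal to 0) tells you no line matched, which a plain scalar assignment would silently hide.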
If your intent is to get the third field from the latest line in the file containing CONNECT, awk can pretty much handle the entire thing without needing tail or grep:
brand=$(awk -F '|' '/CONNECT/ {latest = $3} END {print latest}' /var/log/asterisk/queue_log)


Make this code read from file

Hello guys, I wrote this code as a Linux shell script, but it only reads from the keyboard. I want to change it to read from a file; for example, if I run ./car.sh lamborghini.txt it should give me the most expensive model of that make.
The code looks like this:
#!/bin/sh
echo "Choose one of them"
read manu
sort -t';' -nrk3 auto.dat > auto1.dat
grep $manu auto1.dat | head -n1 | cut -d';' -f2
and the auto.dat file contains:
Lamborghini;Aventador;700000
Lamborghini;Urus;200000
Tesla;ModelS;180000
Tesla;ModelX;140000
Ford;Mustang;300000
Ford;Focus;20000
The read command always reads from stdin. You can use redirection < to read the content of a file.
Reading $manu from a file's content
#!/bin/sh
read manu < "$1"
sort -t';' -nrk3 auto.dat | grep "$manu" | head -n1 | cut -d';' -f2
This version of your script expects a file name as a command line parameter. The first line of said file will be stored in $manu. Example:
./car.sh fileWithSelection.txt
The file should contain the text you would have entered in your old script.
Reading $manu from a command line parameter
In my opinion, it would make more sense to interpret the command line parameters directly, instead of using files and passing them to the script.
#!/bin/sh
manu="$1"
sort -t';' -nrk3 auto.dat | grep "$manu" | head -n1 | cut -d';' -f2
Example:
./car.sh "text you would have entered in your old script."
You can also try it this way, but the file (e.g. Tesla.txt) must contain the manufacturer name (Tesla):
#!/bin/sh
read manu < "$1"
awk -F';' -v mod="$manu" '
$1 == mod { if ($3+0 > max) { max = $3; model = $2 } }
END { if (model) print "The most expensive " mod " is " model " at " max }' auto.dat
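A quick self-contained check of that awk logic, recreating the sample auto.dat from the question (the file and variable names are just for the demo):

```shell
#!/bin/sh
# Recreate the sample data file from the question
cat > auto.dat <<'EOF'
Lamborghini;Aventador;700000
Lamborghini;Urus;200000
Tesla;ModelS;180000
Tesla;ModelX;140000
Ford;Mustang;300000
Ford;Focus;20000
EOF

manu=Tesla
# Track the highest price (field 3) seen for the chosen manufacturer
result=$(awk -F';' -v mod="$manu" '
  $1 == mod { if ($3+0 > max) { max = $3; model = $2 } }
  END { if (model) print "The most expensive " mod " is " model " at " max }' auto.dat)
echo "$result"
```

Forcing a numeric comparison with `$3+0` avoids any surprises from awk comparing prices as strings.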

Generating a bash script from a bash script

I need to generate a script from within a script, but I'm having problems because some of the commands going into the new script are being interpreted rather than written literally to the new file. For example, I want to create a file called start.sh that sets a variable to the current IP address:
echo "localip=$(ip addr | grep 'state UP' -A2 | tail -n1 | awk '{print $2}' | cut -f1 -d'/')" > /start.sh
what gets written to the file is:
localip=192.168.1.78
But what i wanted was the following text in the new file:
localip=$(ip addr | grep 'state UP' -A2 | tail -n1 | awk '{print $2}' | cut -f1 -d'/')
so that the IP is determined when the generated script is run.
What am I doing wrong?
You're making this unnecessarily hard. Use a heredoc with a quoted sigil to pass literal contents through without any kind of expansion:
cat >/start.sh <<'EOF'
localip=$(ip addr | grep 'state UP' -A2 | tail -n1 | awk '{print $2}' | cut -f1 -d'/')
EOF
Using <<'EOF' or <<\EOF, as opposed to just <<EOF, is essential; the latter will perform expansion just as your original code does.
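A small demonstration of the difference (the file names here are just illustrative):

```shell
# With an unquoted sigil, $(...) is expanded while the file is generated:
cat > expanded.sh <<EOF
now=$(date +%Y)
EOF

# With a quoted sigil, the text is written out literally:
cat > literal.sh <<'EOF'
now=$(date +%Y)
EOF

cat expanded.sh   # contains the already-expanded year, e.g. now=2025
cat literal.sh    # contains the literal text now=$(date +%Y)
```

Only the second file re-runs the command substitution when it is executed later.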
If anything you're writing to start.sh needs to be based on current variables, by the way, be sure to use printf %q to safely escape their contents. For instance, to set your current $1, $2, etc. to be active during start.sh execution:
# open start.sh for output on FD 3
exec 3>/start.sh
# build a shell-escaped version of your argument list
printf -v argv_str '%q ' "$@"
# add to the file we previously opened a command to set the current arguments to that list
printf 'set -- %s\n' "$argv_str" >&3
# pass another variable through safely, just to be sure we demonstrate how:
printf 'foo=%q\n' "$foo" >&3
# ...go ahead and add your other contents...
cat >&3 <<'EOF'
# ...put constant parts of start.sh here, which can use $1, $2, etc.
EOF
# close the file
exec 3>&-
This is far more efficient than using >>/start.sh on every line that needs to append: Using exec 3>file and then >&3 only opens the file once, rather than opening it once per command that generates output.

tail -f piped to awk and redirected to a file does not work

Having trouble wrapping my head around piping and a potential buffering issue. I am trying to perform a set of piped operations that seem to break at some level of piping. To simplify, I narrowed it down to three piped operations that do not work correctly:
tail -f | awk '{print $1}' > file
results in no data being redirected to the file, however
tail -f | awk '{print $1}'
results are output to stdout fine
also
tail -10 | awk '{print $1}' > file
works fine as well.
Thinking it might be a buffering issue, I tried
tail -f | unbuffer awk '{print $1}' > file
which produced no positive results.
(Note: in the original command I have more operations in between, using grep --line-buffered, but the problem was narrowed down to the three piped commands tail -f | awk > file.)
The following will tail -f a given file and, whenever new data is added, automatically execute the while loop:
tail -f file_to_watch | while read a; do echo "$a" |awk '{print $1}' >> file; done
Or, more simply, if you really only need to print the first field, you could read it directly into your variable like this:
tail -f file_to_watch | while read a b; do echo "$a" >> file; done
Here is how to handle log files:
tail --follow=name logfile | awk '{print $1 | "tee /var/log/file"}'
or for you this may be ok:
tail -f | awk '{print $1 | "tee /var/log/file"}'
--follow=name prevents the command from stopping when the log file is rotated.
| "tee /var/log/file" is what sends the output to the file.
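For what it's worth, the original tail -f | awk > file pipeline most likely fails because awk block-buffers its output once stdout is a file rather than a terminal, so nothing appears until the buffer fills. Calling awk's fflush() after each print is another fix; here it is demonstrated on static input, with the live-log form shown only as a comment since it never exits:

```shell
# fflush() pushes each line out of awk's buffer immediately,
# so the redirected file is written as lines arrive:
printf 'a 1\nb 2\n' | awk '{print $1; fflush()}' > out.txt
cat out.txt

# The same idea with a live log (blocks forever, for reference only):
#   tail -f logfile | awk '{print $1; fflush()}' > file
```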

Assigning output of a command to a variable(BASH)

I need to assign the output of a command to a variable. The command I tried is:
grep UUID fstab | awk '/ext4/ {print $1}' | awk '{print substr($0,6)}'
I try this code to assign a variable:
UUID=$(grep UUID fstab | awk '/ext4/ {print $1}' | awk '{print substr($0,6)}')
However, it gives a syntax error. I also need it to work in a bash script.
The error is:
./upload.sh: line 12: syntax error near unexpected token ENE=$( grep UUID fstab | awk '/ext4/ {print $1}' | awk '{print substr($0,6)}'
)'
./upload.sh: line 12: ENE=$( grep UUID fstab | awk '/ext4/ {print $1}' | awk '{print substr($0,6)}'
)'
Well, using the $() command substitution operator is a common way to get the output of a bash command. As it spawns a subshell, it is not the most efficient, but it works.
I tried :
UUID=$(grep UUID /etc/fstab|awk '/ext4/ {print $1}'|awk '{print substr($0,6)}')
echo $UUID # writes e577b87e-2fec-893b-c237-6a14aeb5b390
it works perfectly :)
EDIT:
Of course you can shorten your command :
# First step : Only one awk
UUID=$(grep UUID /etc/fstab|awk '/ext4/ {print substr($1,6)}')
One more time:
# Second step : awk has a powerful regular expression engine ^^
UUID=$(cat /etc/fstab|awk '/UUID.*ext4/ {print substr($1,6)}')
You can also use awk with a file argument:
# Third step : awk reads fstab directly
UUID=$(awk '/UUID.*ext4/ {print substr($1,6)}' /etc/fstab)
Just for troubleshooting purposes, and as something else to try, you could also use "backticks", e.g.,
cur_dir=`pwd`
would save the output of the pwd command in your variable cur_dir, though the $() approach is generally preferable.
To quote from a page given to me on http://unix.stackexchange.com:
The second form `COMMAND` (using backticks) is more or less obsolete for Bash, since it
has some trouble with nesting ("inner" backticks need to be escaped)
and escaping characters. Use $(COMMAND), it's also POSIX!
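A small illustration of the nesting problem the quote mentions (the commands are arbitrary):

```shell
#!/bin/sh
# $() nests without any escaping:
inner=$(basename "$(pwd)")

# The backtick equivalent needs the inner backticks escaped:
inner2=`basename \`pwd\``

echo "$inner"
echo "$inner2"
```

Both assignments produce the same value, but the second form gets unreadable fast as nesting deepens.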

What is this Bash (and/or other shell?) construct called?

What is the construct in bash called where you wrap a command that outputs to stdout, such that the output itself is treated like a stream? In case I'm not describing that well, maybe an example will do best, and this is what I typically use it for: applying diff to output that comes not from a file but from other commands, where
cmd
is wrapped as
<(cmd)
By wrapping a command in such a manner, in the example below I determine that there a difference of one between the two commands that I am running, and then I am able to determine that one precise difference. What is the construct/technique of wrapping a command as <(cmd) called? Thanks
[builder@george v6.5 html]$ git status | egrep modified | awk '{print $3}' | wc -l
51
[builder@george v6.5 html]$ git status | egrep modified | awk '{print $3}' | xargs grep -l 'Ext\.define' | wc -l
50
[builder@george v6.5 html]$ diff <(git status | egrep modified | awk '{print $3}') <(git status | egrep modified | awk '{print $3}' | xargs grep -l 'Ext\.define')
39d38
< javascript/reports/report_initiator.js
ADDENDUM
The revised command using git's ls-files, per the advice given, should be as follows (untested):
diff <(git ls-files -m) <(git ls-files -m | xargs grep -l 'Ext\.define')
It is called process substitution.
This is called Process Substitution
This is process substitution, as you have been told. I'd just like to point out that this also works in the other direction. Process substitution with >(cmd) allows you to take a command that writes to a file and instead have that output redirected to another command's stdin. It's very useful for inserting something into a pipeline that takes an output filename as an argument. You don't see it as much because pretty much every standard command will write to stdout already, but I have used it often with custom stuff. Here is a contrived example:
$ echo "hello world" | tee >(wc)
hello world
1 2 12
