tail not providing output in bash script - linux

I have written a bash script that filters 'tail' output. The entire command
tail -f /var/log/asterisk/messages | awk 'match($12, /[^0-9]91([0-9]{10})#default/, a) {print a[1]}'
works fine from the CLI but not when placed in the bash script:
#!/bin/bash
phonenumber=$(tail -f /var/log/asterisk/messages | awk 'match($12, /[^0-9]91([0-9]{10})#default/, a) {print a[1]}')
echo "$phonenumber >> test.log"
This doesn't output anything (2135551234 is the expected output string). I have tried writing to the log file and writing just to stdout, but neither works.
I have tried the script using 'cat' instead of 'tail', and that works fine, but I don't want to dump the output of the entire file, hence the use of 'tail'.
I have also tried using 'tee', but to no avail.
The end goal of this script will be to send the phone number, as it comes into the PBX, over a serial device to another system, where it will be used as the CID.
Thanks in advance for all your help.

Try this:
phonenumber=$(tail -f /var/log/asterisk/messages | awk 'match($12, /[^0-9]91([0-9]{10})#default/, a) {print a[1]; exit}')
Your version doesn't work because tail -f never exits, so the pipeline runs forever and the command substitution never completes. Adding exit to the awk script terminates the pipeline when the first phone number is found: awk exits immediately and its output is assigned to the variable, and tail -f receives a SIGPIPE the next time it tries to write to the pipe, which causes it to exit as well.
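If the end goal is to forward every number as it arrives rather than capture just the first, a while-read loop avoids the variable capture entirely. A minimal sketch, assuming the paths from the question; fflush() (a gawk call) is added because awk buffers output written to a pipe, and the serial-device write is left as a placeholder:
#!/bin/bash
tail -f /var/log/asterisk/messages \
  | awk 'match($12, /[^0-9]91([0-9]{10})#default/, a) {print a[1]; fflush()}' \
  | while read -r phonenumber; do
        echo "$phonenumber" >> test.log   # replace with the write to the serial device
    done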

Related

Assign command output to variable in Bash?

I know this seems fairly trivial, but I have no idea where I am going wrong. I have a shell script where I download a package based on the input argument and then extract the package name. This is how I do it:
wget $1
echo $1 | awk -F/ '{print $NF}'
I run it like this: bash script.sh http://apache.claz.org/phoenix/apache-phoenix-4.10.0-HBase-1.2/bin/apache-phoenix-4.10.0-HBase-1.2-bin.tar.gz
The package downloads, and the second line splits the input along the / delimiter, giving apache-phoenix-4.10.0-HBase-1.2-bin.tar.gz. Now I want to assign the result of that second line to a variable, so I change it to
dir=$($1 | awk -F/ '{print $NF}')
and add echo $dir to the script to see the result. However, I keep running into this error:
line 2: http://apache.claz.org/phoenix/apache-phoenix-4.10.0-HBase-1.2/bin/apache-phoenix-4.10.0-HBase-1.2-bin.tar.gz: No such file or directory
I tried wrapping the command in backticks, but the problem persists. I am not cd-ing into any directory, so I have no idea why this error keeps showing up.
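For reference, the error comes from $1 itself being run as a command inside the substitution; the URL has to be fed to awk as input instead. A minimal sketch of the fix:
dir=$(echo "$1" | awk -F/ '{print $NF}')
echo "$dir"
The same result is also available without awk via dir=$(basename "$1").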

How can I get past file length limit?

I am trying to parse 50+ files in a shell script in a single call, like the following:
for i in {0..49}
do
_file_list="$_file_list $_srcdir01/${_date_a[$i]}.gz"
done
eval zcat "$_file_list" | awk '{sum += 1} END {print sum;}'
But when I do this, I get a 'file name too long' error from zcat.
The reason I am trying to do this in a single call is that, to my knowledge, awk cannot retain information from a previous call, and I have to go through the entire list as a whole (e.g. to find a unique word anywhere in it).
I also don't want to combine the files, because each of them is already large.
Is there a clever way to solve this, or do I need to split the call and write out intermediate results along the way?
You can pipe directly from a loop:
for date in "${_date_a[@]}"
do
zcat "$_srcdir01/$date.gz"
done | awk '{sum += 1} END {print sum;}'
In any case, that code shouldn't give that error as posted.
Since your example is not complete or self-contained, I added some code to initialize datafiles to test:
$ cat testscript
_srcdir01="./././././././././././././././././././"
_date_a=(foo{0001..0050})
for file in "${_date_a[@]}"
do
echo "hello world" | gzip > "$file.gz"
done
for i in {0..49}
do
_file_list="$_file_list $_srcdir01/${_date_a[$i]}.gz"
done
eval zcat "$_file_list" | awk '{sum += 1} END {print sum;}'
Running it generates a bunch of test data and correctly sums the number of lines:
$ bash testscript
50
I can reproduce your issue if I e.g. remove the eval:
$ bash testscript
(...)/foo0045.gz ./././././././././././././././././././/foo0046.gz ././././././.
/././././././././././././/foo0047.gz ./././././././././././././././././././/foo0
048.gz ./././././././././././././././././././/foo0049.gz ./././././././././././.
/./././././././/foo0050.gz: file name too long
So please double check that the code you post is the code you run, and not one of several other attempts you made while trying to solve it.
$ awk '{sum += 1} END {print sum}' files...
will work, but perhaps you just need to use wc -l
Manually building the file list is unnecessary,
$ zcat path/to/files{1..49} | awk ...
will work as well.
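For the line-counting case specifically, a minimal equivalent (assuming the foo0001..foo0050 names from the test script above):
zcat foo{0001..0050}.gz | wc -l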

referencing stdout in a command that has been piped into

I want to make a simple dmenu command that reads a file of commands and names, displays the names using dmenu, then takes dmenu's output and runs the associated command by looking it up in the file again.
I got to the point where dmenu displays the names, but I don't really know where to go from there. Learning bash is a really daunting task to me and I don't really know where to start with this seemingly simple script/command.
here is the file:
Pushbullet
google-chrome-stable --app=https://www.pushbullet.com
Steam
steam
Chrome
google-chrome-stable
Libre Office
libreoffice
Transmission
transmission-qt
Audio Control Panel
sudo pavucontrol & bluberry
and here is what I have so far for my command:
awk 'NR % 2 != 0' /home/rocco/programlist | dmenu | ??(grep -l "stdout" /home/rocco/programlist....)
It was my thinking that I could somehow pipe into grep or awk with the name of the application, get the line number, add one, and pipe that line into sh.
Thanks
I have no experience with dmenu, but if I understand correctly how it works, this should do what you want. Wrapping a command in $(…) captures its output, which we can then pass on to another command.
#!/bin/bash
plist="/home/rocco/programlist"
# pipe every second line to dmenu
selected=$(awk 'NR % 2 != 0' "$plist" | dmenu)
# search for the selected item, get the command after it
cmd=$(grep -A1 "$selected" "$plist" | tail -n 1)
# run the command
$cmd
Worth mentioning a mistake in your question: dmenu writes to stdout (standard output), but the next program in the pipeline would be reading stdin (standard input). In any case, grep can't take its pattern on standard input, which is why I've saved it to a variable instead of trying to pipe it somewhere.
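One caveat with the final $cmd line: plain word splitting passes shell metacharacters through as literal arguments, so an entry such as sudo pavucontrol & bluberry would not run as intended. If the list is trusted, a hedged variant is to hand the line back to the shell:
# runs the selected line as shell code, so '&' works (only safe for a trusted list)
eval "$cmd"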
Assuming you have programlist.txt in the working directory you can use:
awk 'NR%2 !=0' programlist.txt |dmenu |awk '{system("grep --no-group-separator -A 1 '"'"'"$0"'"'"' programlist.txt");}' |awk '{if(NR==2){system($0);}}'
Note the quoting of the $0 in the first awk invocation. This is necessary to handle names with spaces in them, like "Libre Office".

Redirecting Linux stdout to a variable and the screen in a script

I am currently trying to make a script file that runs multiple other script files on a server. I would like to display the output of these scripts on the screen IN ADDITION to passing it into grep so I can do error testing. Currently I have written this:
status=$(SOMEPROCESS | grep -i "SOMEPROCESS started completed correctly")
I do further error handling below this using the variable status, so I would like to display SOMEPROCESS's output on the screen for error reference. This is a read-only server and I cannot save the output to a log file.
You need to use the tee command. It will be slightly fiddly, since tee outputs to a file handle; however, you could create a file descriptor using a pipe.
Or, simpler for your use case: start the script without grep and pipe it through tee:
SOMEPROCESS | tee /my/safely/generated/filename
Then run tail -f /my/safely/generated/filename | grep -i "my grep pattern" separately.
You can use process substitution together with tee:
SOMEPROCESS | tee >(grep ...)
This will use an anonymous pipe and pass /dev/fd/... as file name to tee (or a named pipe on platforms that don't support /dev/fd/...).
Because SOMEPROCESS is likely to buffer its output when not talking to a terminal, you might see significant lag in screen output.
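A variant that keeps the question's variable assignment while still echoing everything to the screen, assuming the script runs with a terminal attached (/dev/tty):
status=$(SOMEPROCESS | tee /dev/tty | grep -i "SOMEPROCESS started completed correctly")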
I'm not sure whether I understood your question exactly. I think you want to get the output of SOMEPROCESS, test it, and print it out when there are errors. If so, the code below may help you:
s=$(SOMEPROCESS)
grep -q 'SOMEPROCESS started completed correctly' <<< "$s"
if [[ $? -ne 0 ]]; then
    # specified string not found in the output, meaning SOMEPROCESS failed to start
    echo "$s"
fi
But this code stores all of the output in memory; if the output is big enough, there is an OOM risk.

How to pipe all the output of "ps" into a shell script for further processing?

When I run this command:
ps aux|awk {'print $1,$2,$3,$11'}
I get a listing of the user, PID, CPU% and the actual command.
I want to pipe all those listings into a shell script to calculate the CPU% and if greater than, say 5, then to kill the process via the PID.
I tried piping it to a simple shell script, i.e.
ps aux|awk {'print $1,$2,$3,$11'} | ./myscript
where the content of my script is:
#!/bin/bash
# testing using positional parameters
echo "$1 $2 $3 $4"
But I get a blank output. Any idea how to do this?
Many thanks!
If you use awk, you don't need an additional bash script. Also, it is a good idea to reduce the output of the ps command so you don't have to deal with extra information:
ps acxho user,pid,%cpu,cmd | awk '$3 > 5 {system("echo kill " $2)}'
Explanation
The extra ps flags I use:
c: command only, no extra arguments
h: no header, good for scripting
o: output format. In this case, only output the user, PID, %CPU, and command
The awk command compares the %CPU, which is the third column, against a threshold (5). If it is over the threshold, it issues a system command to kill that process.
Note the echo in the command. Once you are certain the scripts works the way you like, then remove the word echo from the command to execute it for real.
Your script needs to read its input
#!/bin/bash
while read a b c d; do
echo $a $b
done
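Fleshing that out for the stated goal, a sketch (the echo guard mirrors the dry-run advice above; drop it once the behaviour is verified):
#!/bin/bash
# reads "user pid cpu command" records from stdin, as produced by the ps|awk pipeline
while read -r user pid cpu cmd; do
    # %CPU is a decimal, so compare it in awk rather than with bash integer tests
    if awk -v c="$cpu" 'BEGIN {exit !(c > 5)}'; then
        echo kill "$pid"    # remove the echo to kill for real
    fi
done
Invoke it exactly as in the question: ps aux|awk {'print $1,$2,$3,$11'} | ./myscript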
I think you can get it using xargs command to pass the AWK output to your script as arguments:
ps aux|awk {'print $1,$2,$3,$11'} | xargs ./myscript
Some extra info about xargs: http://en.wikipedia.org/wiki/Xargs
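Note that by default xargs packs as many words as possible into a single invocation, so $1..$4 would only see the first record. To call the script once per process entry (four fields each), -n bounds the argument count:
ps aux|awk {'print $1,$2,$3,$11'} | xargs -n 4 ./myscript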
When piping input from one process to another on Linux (or any POSIX-compliant system), the output is not given as arguments to the receiving process. Instead, the standard output of the first process is connected to the standard input of the second.
Because of this, your script cannot work: $1...$n access the arguments that were passed to the script, and since there are none, nothing is displayed. Instead, you have to read standard input into variables with the read command (as pointed out by William).
The pipe '|' redirects the standard output of the command on the left to the standard input of the command on the right. In this case, the output of ps goes to the input of awk, and the output of awk then goes to the stdin of the script.
Therefore your script needs to read its STDIN.
#!/bin/bash
read var1 var2 var3 ...
Then you can do whatever you want with those variables.
For more info, type in bash: help read
If I understood your problem correctly, you want to kill every process that exceeds X% of CPU (using ps aux).
Here is the solution using AWK:
ps aux | grep -v "%CPU" | awk '{if ($3 > XXX) { print "Killing process with PID "$2", called "$11", consuming "$3"% and launched by "$1; system("kill -9 " $2); }}'
Where XXX is your threshold (% of CPU).
It also prints information about the killed process; if that is not desired, just remove the print statement.
You can add further filters, e.g. do not kill root's processes...
Try putting myscript in front like this:
./myscript `ps aux|awk {'print $1,$2,$3,$11'}`
