Erase a part of a string in shell script - Linux

I need to delete a part of a string via shell script.
E.g: I have this: VERBOSE [61622] [C-0000051f] And I want this:
C-0000051f
Without the brackets in the last portion
How can I do it?
Thank you!

Check this awk solution:
> data="VERBOSE [61622] [C-0000051f] "
> awk -F"[" ' { print $NF } ' <<< $data | awk -F"]" ' { print $1 } '
C-0000051f
>
EDIT1:
More compact version
> data="VERBOSE [61622] [C-0000051f] "
> awk -F"[[]|]" ' { print $4 } ' <<< $data
C-0000051f
>
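If you'd rather avoid external tools entirely, bash parameter expansion can do the same trim. A minimal sketch, assuming the ID is always in the last bracketed field:

```shell
data="VERBOSE [61622] [C-0000051f] "
tmp="${data##*[}"      # remove everything through the last '['
echo "${tmp%%]*}"      # remove everything from the first remaining ']'
```

This prints C-0000051f without spawning any subprocess.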

As I noticed from your answer here, you want to extract some information from a file and then perform a set of actions on the extracted line. The entire pipeline you provided there could be reduced to the following:
NUMTEL="$1"
IDCALL=$(awk -F "[][]" -v string="$NUMTEL" '/VERBOSE/ && ($0 ~ string){print $(NF-1);exit}' /var/log/asterisk/full)
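As a sanity check, the same one-liner can be run against a made-up log line fed in via a here-string (the line below is invented for illustration):

```shell
line='VERBOSE [61622] [C-0000051f] some trailing text'
awk -F '[][]' -v string="61622" '/VERBOSE/ && ($0 ~ string){print $(NF-1); exit}' <<< "$line"
```

With `[` and `]` both acting as field separators, the call ID is the next-to-last field, hence `$(NF-1)`.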

I was able to do what I needed with the following script.
#!/bin/bash
NUMTEL=$1
IDCALL=$(/usr/bin/grep "VERBOSE" /var/log/asterisk/full | /usr/bin/grep "$NUMTEL" | /usr/bin/head -n1 | /usr/bin/awk '{print $4}' | /usr/bin/cut -d] -f2 | /usr/bin/sed -e 's/\[//g')
echo "$IDCALL"
Thank you

Related

Script : check 8th field of /etc/shadow

I want to check if the 8th field of /etc/shadow of a username has no entry.
This is my code:
#!/bin/bash
for i in $(cat < "users.txt")
do
sudo grep -w $i /etc/shadow | awk -F: "$8 == ' '" | cut -d: -f1 ;
done
But this is the error that I get when I execute the script:
awk: line 1: syntax error at or near ==
awk syntax for this purpose would be:
awk -F 'delim' '$n1 == "text" {print $n2}'
sudo grep -w $i /etc/shadow | awk -F':' '$8 == " " {print $0}'
FYI: /etc/shadow does not contain spaces between the colons. So if cat shows
bin:*:1:1:::::::
you should run $8 == ""
note that there is no space.
If n2 is 0, $0 prints the entire row. Hope this helps!
Your approach can be greatly simplified; needing both grep and awk on one line usually indicates that you're overthinking things. Invoke one command instead of three:
#!/bin/bash
for i in $(cat < "users.txt")
do
sudo awk -F: -v user="$i" '$1==user && $8==""{print $1}' /etc/shadow
done
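To see the empty-eighth-field test in isolation, here it is run on a made-up shadow-style record (no sudo required):

```shell
# Made-up /etc/shadow-style record; field 8 (account expiry) is empty
line='bin:*:17110:0:99999:7:::'
awk -F: '$8 == "" {print $1}' <<< "$line"
```

This prints bin, confirming that the comparison must be against an empty string, not a space.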

Command "echo" has no effect in awk

I have a line of code that I need to run in a Linux terminal and it's not going very well.
What I'm trying to do is output some variables obtained from my postfix mail queue to a file. For now I just need this piece of code working, but when I try to execute it, nothing happens.
Code:
mailq | tail -n +2 | awk 'BEGIN { RS = "" } { echo $1 }' | tr -d '*!' >> myfile
Additional Notes:
If I change echo to print and remove >> myfile it works, but I need to output it to file.
awk doesn't have an echo command; it does have a print command. Making the replacement should be sufficient, without removing the >> myfile.
Tangentially, you can do away with the tail command by telling awk to ignore its first two lines of input and exiting immediately after the third.
mailq | awk ' NR == 3 { print $1; exit }' | tr -d '*!' >> myfile
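A quick way to convince yourself of the NR trick, with synthetic input standing in for mailq's two header lines (the queue ID and trailing '*' are made up):

```shell
printf 'postfix header line 1\npostfix header line 2\nA1B2C3* sender@example.com\n' |
  awk 'NR == 3 { print $1; exit }' | tr -d '*!'
```

awk skips the first two lines, prints the first field of the third, and exits; tr then strips the status markers, leaving A1B2C3.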

Using bash, I want to print a number followed by sizes of 2 paths on one line. i.e. output of 3 commands on one line

Using bash, I want to print a number followed by sizes of 2 paths on one line. i.e. output of 3 commands on one line.
All the 3 items should be separated by ":"
echo -n "10001:"; du -sch /abc/def/* | grep 'total' | awk '{ print $1 }'; du -sch /ghi/jkl/* | grep 'total' | awk '{ print $1 }'
I am getting the output as -
10001:61M
:101M
But I want the output as -
10001:61M:101M
This should work for you. The two key elements added are
tr -d '\n'
which strips the trailing newline character from the end of the output, and an extra echo ":" to supply the colon between the two sizes.
Hope this helps! Here's a link to the docs for tr command.
https://ss64.com/bash/tr.html
echo -n "10001:"; du -sch /abc/def/* | grep 'total' | awk '{ print $1 }' | tr -d '\n'; echo ":" | tr -d '\n'; du -sch /ghi/jkl/* | grep 'total' | awk '{ print $1 }'
Save your values to variables, and then use printf:
printf '%s:%s:%s\n' "$first" "$second" "$third"
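A sketch of that variable-based approach; mktemp directories stand in for the real /abc/def and /ghi/jkl paths, so the sizes printed are just whatever du reports for an empty directory:

```shell
# Temporary directories stand in for the real /abc/def and /ghi/jkl paths
dir1=$(mktemp -d); dir2=$(mktemp -d)
first=10001
second=$(du -sh "$dir1" | awk '{print $1}')   # size of the first path
third=$(du -sh "$dir2" | awk '{print $1}')    # size of the second path
printf '%s:%s:%s\n' "$first" "$second" "$third"
rm -rf "$dir1" "$dir2"
```

Because each command substitution already strips trailing newlines, no tr gymnastics are needed.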

awk - send sum to global variable

I have a line in a bash script that calculates the sum of unique IP requests to a certain page.
grep $YESTERDAY $ACCESSLOG | grep "$1" | awk -F" - " '{print $1}' | sort | uniq -c | awk '{sum += 1; print } END { print " ", sum, "total"}'
I am trying to get the value of sum to a variable outside the awk statement so I can compare pages to each other. So far I have tried various combinations of something like this:
unique_sum=0
grep $YESTERDAY $ACCESSLOG | grep "$1" | awk -F" - " '{print $1}' | sort | uniq -c | awk '{sum += 1; print ; $unique_sum=sum} END { print " ", sum, "total"}'
echo "${unique_sum}"
This results in an echo of "0". I've tried placing $unique_sum=sum in the END block, various combinations of initializing the variable (awk -v unique_sum=0 ...), and placing the variable assignment outside of the quoted sections.
So far, my Google-fu is failing horribly as most people just send the whole of the output to a variable. In this example, many lines are printed (one for each IP) in addition to the total. Failing a way to capture the 'sum' variable, is there a way to capture that last line of output?
This is probably one of the most sophisticated things I've tried in awk so my confidence that I've done anything useful is pretty low. Any help will be greatly appreciated!
You can't assign a shell variable inside an awk program. In general, no child process can alter the environment of its parent. You have to have the awk program print out the calculated value, and then the shell can grab that value and assign it to a variable:
output=$( grep $YESTERDAY $ACCESSLOG | grep "$1" | awk -F" - " '{print $1}' | sort | uniq -c | awk '{sum += 1; print } END {print sum}' )
unique_sum=$( sed -n '$p' <<< "$output" ) # grab the last line of the output
sed '$d' <<< "$output" # print the output except for the last line
echo " $unique_sum total"
That pipeline can be simplified quite a lot: awk can do what grep can do, so first
grep $YESTERDAY $ACCESSLOG | grep "$1" | awk -F" - " '{print $1}'
is (longer, but only one process)
awk -F" - " -v date="$YESTERDAY" -v patt="$1" '$0 ~ date && $0 ~ patt {print $1}' "$ACCESSLOG"
And the last awk program just counts lines, so it can be replaced with wc -l.
All together:
unique_output=$(
awk -F" - " -v date="$YESTERDAY" -v patt="$1" '
$0 ~ date && $0 ~ patt {print $1}
' "$ACCESSLOG" | sort | uniq -c
)
echo "$unique_output"
unique_sum=$( wc -l <<< "$unique_output" )
echo " $unique_sum total"

bash, extract string from text file with space delimiter

I have a text files with a line like this in them:
MC exp. sig-250-0 events & $0.98 \pm 0.15$ & $3.57 \pm 0.23$ \\
sig-250-0 is something that can change from file to file (but I always know what it is for each file). There are lines before and after this one, but the string "MC exp. sig-250-0 events" is unique in the file.
For a particular file, is there a good way to extract the second number 3.57 in the above example using bash?
use awk for this:
awk '/MC exp. sig-250-0/ {print $10}' your.txt
Note that this will print $3.57, with the leading $. If you don't like this, pipe the output to tr:
awk '/MC exp. sig-250-0/ {print $10}' your.txt | tr -d '$'
In comments you wrote that you need to call it in a script like this:
while read p ; do
echo $p,awk '/MC exp. sig-$p/ {print $10}' filename | tr -d '$'
done < grid.txt
Note that you need a command substitution $() for the awk pipe, like this:
echo "$p",$(awk '/MC exp. sig-$p/ {print $10}' filename | tr -d '$')
If you want to pass a shell variable to the awk pattern use the following syntax:
awk -v p="MC exp. sig-$p" '$0 ~ p {print $10}' a.txt | tr -d '$'
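A quick check of the -v approach against the example line from the question, with a here-string standing in for the file:

```shell
p="250-0"
line='MC exp. sig-250-0 events & $0.98 \pm 0.15$ & $3.57 \pm 0.23$ \\'
# Match dynamically via $0 ~ p; a literal /p/ would match the letter "p" instead
awk -v p="MC exp. sig-$p" '$0 ~ p {print $10}' <<< "$line" | tr -d '$'
```

This prints 3.57.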
More sample lines would've been nice, but I guess you'd like a simple awk usage.
awk '{print $N}' $file
If you don't tell awk which field separator to use, it defaults to whitespace. Now you just have to count the fields to find the one you want. In your case it is field 10.
awk '{print $10}' file.txt
$3.57
Don't want the $?
Pipe your awk result to cut:
awk '{print $10}' foo | cut -d'$' -f2
-d uses the $ as the field separator and -f selects the second field.
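Checked in isolation:

```shell
echo '$3.57' | cut -d'$' -f2
```

Field 1 is the empty text before the $, so field 2 is 3.57.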
If you know you always have the same number of fields, then
#!/bin/bash
file=$1
key=$2
while read -ra f; do
if [[ "${f[0]} ${f[1]} ${f[2]} ${f[3]}" == "MC exp. $key events" ]]; then
echo "${f[9]}"
fi
done < "$file"
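A quick run against a throwaway file built from the question's example line (assuming the fixed field count, as noted above):

```shell
# Throwaway input file; the line format matches the question's example
file=$(mktemp)
cat > "$file" <<'EOF'
MC exp. sig-250-0 events & $0.98 \pm 0.15$ & $3.57 \pm 0.23$ \\
EOF
key="sig-250-0"
while read -ra f; do
  if [[ "${f[0]} ${f[1]} ${f[2]} ${f[3]}" == "MC exp. $key events" ]]; then
    echo "${f[9]}"    # tenth whitespace-separated field
  fi
done < "$file"
rm -f "$file"
```

This prints $3.57 (leading $ included, matching the pure-bash answer's behavior).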
