How do I extract the values from the following text and store them in variables:
05:21-09:32, 14:21-19:30
Here, I want to store 05 in one variable, 21 in another, 09 in another, and so on. All the values must be stored in an array or in separate variables.
I have tried:
k="05:21-09:32, 14:21-19:30"
part1=($k | awk -F"-" '{print $1}' | awk -F":" '{print $1}')
part2=($k | awk -F"-" '{print $2}' | awk -F":" '{print $1}')
part3=($k | awk -F"," '{print $2}' | awk -F":" '{print $1}')
part4=($k | awk -F"-" '{print $3}' | awk -F":" '{print $1}')
I need a clearer or shorter solution.
You can use read with the -a (array) option:
IFS=':-, ' read -ra my_arr <<< "05:21-09:32, 14:21-19:30"
The above code will split the input string on colons, hyphens, commas, and spaces:
$ echo "${my_arr[0]}" "${my_arr[1]}" "${my_arr[2]}" "${my_arr[3]}"
05 21 09 32
Your code has a number of problems.
You can't pipe the value of k to standard output with just $k -- you want something like printf '%s\n' "$k" or perhaps the less portable echo "$k"
Notice also the quoting in the expression above; without it, the shell will perform wildcard expansion and whitespace tokenization on the value
Spawning two Awk processes for a simple string substitution is excessive
Spawning a separate pipeline for each value you want to extract is inefficient; if at all possible, extract everything in one go.
Something like IFS=':-, '; set -- $k will assign the parts to $1, $2, $3, and so on in one go.
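A minimal sketch of that approach (saving and restoring IFS is an extra precaution, not part of the original suggestion):

```shell
k="05:21-09:32, 14:21-19:30"
oldIFS=$IFS          # save IFS so later word splitting is unaffected
IFS=':-, '
set -- $k            # unquoted on purpose: word-splits on :, -, comma, space
IFS=$oldIFS
echo "$1 $2 $3 $4"   # -> 05 21 09 32
echo "$5 $6 $7 $8"   # -> 14 21 19 30
```

The comma and the following space collapse into a single delimiter, so no empty field appears between 32 and 14.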
I have a file containing the data below. I want to get the queue names (FID.MAGNET.ERROR.*) that have a depth of 100 or more. Please help me here.
File name: MQData -
Which command should I use to get the queue names whose depth is 100+ (three or more digits)?
Note that "three digits" and ">= 100" mean different things: 0000 has more than three digits but is not >= 100. Well, perhaps your data won't have those cases.
If the length is what matters, I would do awk 'length($1)>2{print $2}' file
If the value is what you are looking at, I would do awk '($1+0)>=100{print $2}' file
The $1+0 makes sure that even if your $1 has leading zeros, the comparison is done numerically. Take a look at this example:
kent$ awk 'BEGIN{if("01001"+0>100)print "OK";else print "NOK"}'
OK
kent$ awk 'BEGIN{if("01001">100)print "OK";else print "NOK"}'
NOK
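For instance, applied to made-up data in the shape described (depth first, queue name second; the real MQData contents aren't shown in the question, so this sample is invented):

```shell
# Hypothetical sample; column layout assumed from the question.
printf '%s\n' \
  '7    FID.MAGNET.ERROR.A' \
  '099  FID.MAGNET.ERROR.B' \
  '150  FID.MAGNET.ERROR.C' \
  '2500 FID.MAGNET.ERROR.D' > MQData.sample

# 099 is forced to the number 99 by $1+0, so it is correctly excluded.
awk '($1+0)>=100{print $2}' MQData.sample
# -> FID.MAGNET.ERROR.C
#    FID.MAGNET.ERROR.D
```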
awk '$1 >= 100 {print $2}' MQData
Does that work?
You can skip lines with grep -v. I use echo -e to create a multi-line stream.
echo -e "1 xx\n22 yy\n333 zz\n100 To be deleted" | grep -Ev "^. |^.. |^100 "
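The same stream can be built with printf, which is portable where echo -e is not; only the three-digit line other than 100 survives the filter:

```shell
printf '%s\n' '1 xx' '22 yy' '333 zz' '100 To be deleted' |
  grep -Ev "^. |^.. |^100 "
# -> 333 zz
```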
I have a file like this
file A :
min:353 max:685 avg:519
min:565 max:7984 avg:4274
min:278 max:5617 avg:2947
min:624 max:6768 avg:3639
min:27 max:809 avg:418
min:809 max:3685 avg:2247
min:958 max:2276 avg:1617
I'm trying to get the avg numbers from the last two lines and add them together,
like 2247+1617, then output the value 3864.
How can I achieve it?
So far my code is like this (sorry for my limited knowledge):
tail -n 2 file.A | awk -F '[: ]' '{print $6}'
Here is an awk-only solution (file.A{,} is brace expansion for file.A file.A, so the file is read twice):
awk -F: 'FNR==NR {n=NR;next} FNR>n-2 {sum+=$NF}END{print sum}' file.A{,}
3864
Or you can just keep track of the last two values and sum them:
awk -F: '{f=s;s=$NF}END{print s+f}' file.A
3864
You seem to want to add the last field. This would add the last field to a variable. The END block is executed after the input is exhausted, so sum would be printed at the end:
tail -2 file.A | awk -F: '{sum+=$NF}END{print sum}'
Similar approach to devnull's answer, but using tac.
tac file.A | awk -F: '{sum+=$NF}NR==2{print sum;exit}'
3864
tac reverses the file
Using awk we sum the last column ($NF).
When line number is 2 we print the sum and exit.
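As a quick check, recreating file A from the question and summing the last two avg values reproduces the expected total (the file name fileA here is just for the demo):

```shell
# Recreate file A from the question.
cat > fileA <<'EOF'
min:353 max:685 avg:519
min:565 max:7984 avg:4274
min:278 max:5617 avg:2947
min:624 max:6768 avg:3639
min:27 max:809 avg:418
min:809 max:3685 avg:2247
min:958 max:2276 avg:1617
EOF

# With -F: the last field of each line is the avg number.
tail -2 fileA | awk -F: '{sum+=$NF}END{print sum}'
# -> 3864
```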
I would like to execute two splits using AWK (I have two field separators). The string of data I'm working on looks something like this:
data;digit&int&string&int&digit;data;digit&int&string&int&digit
As you can see the outer field separator is a semicolon, and the nested one is an ampersand.
What I'm doing with awk is this (suppose the string is in a variable named test):
echo ${test} | awk '{FS=";"} {print $2}' | awk '{FS="&"} {print $3}'
This should catch the word "string", but for some reason it is not working.
It seems like the second pipe is not being applied, as I see only the result of the first awk command.
Any advice?
Use awk's split function and arrays:
echo "$test" | awk -F';' '{split($2, arr, "&"); print(arr[3])}'
The other answers give working solutions, but they don't really explain the problem.
The problem is that setting FS inside a regular { ... } block of the awk script won't cause $1, $2, etc. to be re-calculated for the current line; FS will be set for any later lines, but the very first line will already have been split on whitespace. To set FS before running the script, you can use a BEGIN block (which is run before the first line), or you can use the -F command-line option.
Making either of those changes will fix your command:
echo "$test" | awk 'BEGIN{FS=";"} {print $2}' | awk 'BEGIN{FS="&"} {print $3}'
echo "$test" | awk -F';' '{print $2}' | awk -F'&' '{print $3}'
(I also took the liberty of wrapping $test in double-quotes, since unquoted parameter-expansions are a recipe for trouble. With your value of $test it would have been fine, but I make it a habit to always use double-quotes, just in case.)
Try this:
echo "$test" | awk -F'[;&]' '{print $4}'
I specify multiple separators with -F'[;&]'.
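To sanity-check the field numbering with the sample string from the question:

```shell
test='data;digit&int&string&int&digit;data;digit&int&string&int&digit'
# With ; and & both treated as separators, the fields line up as:
# $1=data  $2=digit  $3=int  $4=string  $5=int  ...
echo "$test" | awk -F'[;&]' '{print $4}'
# -> string
```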
I'm trying to remove the first two columns (which I'm not interested in) from a DbgView log file. I can't seem to find an example that prints from column 3 onwards until the end of the line. Note that each line has a variable number of columns.
...or a simpler solution: cut -f 3- INPUTFILE. Just add the correct delimiter (-d) and you get the same effect.
awk '{for(i=3;i<=NF;++i)print $i}'
awk '{ print substr($0, index($0,$3)) }'
solution found here:
http://www.linuxquestions.org/questions/linux-newbie-8/awk-print-field-to-end-and-character-count-179078/
Jonathan Feinberg's answer prints each field on a separate line. You could use printf to rebuild the record for output on the same line, but you can also just shift the fields two places to the left:
awk '{for (i=1; i<=NF-2; i++) $i = $(i+2); NF-=2; print}' logfile
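For instance, on an invented five-field line the first two fields drop out (decrementing NF to discard trailing fields works in common awks such as gawk and mawk, though not every old awk supports it):

```shell
echo 'one two three four five' |
  awk '{for (i=1; i<=NF-2; i++) $i = $(i+2); NF-=2; print}'
# -> three four five
```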
awk '{$1=$2=$3=""}1' file
NB: this method will leave "blanks" in fields 1, 2, and 3, but that's not a problem if you just want to look at the output.
If you want to print the columns from the 3rd onward on the same line, for example, you can use:
awk '{for(i=3; i<=NF; ++i) printf "%s ", $i; print ""}'
For example:
Mar 09:39 20180301_123131.jpg
Mar 13:28 20180301_124304.jpg
Mar 13:35 20180301_124358.jpg
Feb 09:45 Cisco_WebEx_Add-On.dmg
Feb 12:49 Docker.dmg
Feb 09:04 Grammarly.dmg
Feb 09:20 Payslip 10459 %2828-02-2018%29.pdf
It will print:
20180301_123131.jpg
20180301_124304.jpg
20180301_124358.jpg
Cisco_WebEx_Add-On.dmg
Docker.dmg
Grammarly.dmg
Payslip 10459 %2828-02-2018%29.pdf
As we can see, the payslip entry, even with spaces in its name, shows up on the correct line.
What about the following line:
awk '{$1=$2=$3=""; print}' file
Based on @ghostdog74's suggestion. Mine should behave better when you filter lines, i.e.:
awk '/^exim4-config/ {$1=""; print }' file
awk -v m="\x0a" -v N="3" '{$N=m$N ;print substr($0, index($0,m)+1)}'
This chops what is before the given field nr. N and prints all the rest of the line, including field nr. N and maintaining the original spacing (it does not reformat). It doesn't matter if the string of the field also appears somewhere else in the line, which is the problem with daisaa's answer.
Define a function:
fromField () {
awk -v m="\x0a" -v N="$1" '{$N=m$N; print substr($0,index($0,m)+1)}'
}
And use it like this:
$ echo " bat bi iru lau bost " | fromField 3
iru lau bost
$ echo " bat bi iru lau bost " | fromField 2
bi iru lau bost
Output maintains everything, including trailing spaces
Works well for files where '\n' is the record separator, so you don't have that newline char inside the lines. If you want to use it with other record separators then use:
awk -v m="\x01" -v N="3" '{$N=m$N ;print substr($0, index($0,m)+1)}'
for example. Works well with almost all files as long as they don't use hexadecimal char nr. 1 inside the lines.
awk '{a=match($0, $3); print substr($0,a)}'
First you find the position where the third column starts (stored in a).
With substr you print the whole line ($0) starting at that position through the end of the line.
The following awk command prints the last N fields of each line and at the end of the line prints a new line character:
awk '{for( i=6; i<=NF; i++ ){printf( "%s ", $i )}; printf( "\n"); }'
Below is an example that lists the contents of the /usr/bin directory, keeps the last 3 lines, and then prints the last 4 columns of each line using awk:
$ ls -ltr /usr/bin/ | tail -3
-rwxr-xr-x 1 root root 14736 Jan 14 2014 bcomps
-rwxr-xr-x 1 root root 10480 Jan 14 2014 acyclic
-rwxr-xr-x 1 root root 35868448 May 22 2014 skype
$ ls -ltr /usr/bin/ | tail -3 | awk '{for( i=6; i<=NF; i++ ){printf( "%s ", $i )}; printf( "\n"); }'
Jan 14 2014 bcomps
Jan 14 2014 acyclic
May 22 2014 skype
Perl solution:
perl -lane 'splice @F,0,2; print join " ",@F' file
These command-line options are used:
-n loop around every line of the input file, do not automatically print every line
-l removes newlines before processing, and adds them back in afterwards
-a autosplit mode – split input lines into the @F array. Defaults to splitting on whitespace
-e execute the perl code
splice @F,0,2 cleanly removes columns 0 and 1 from the @F array
join " ",@F joins the elements of the @F array, using a space in-between each element
If your input file is comma-delimited, rather than space-delimited, use -F, -lane
Python solution:
python -c "import sys;[sys.stdout.write(' '.join(line.split()[2:]) + '\n') for line in sys.stdin]" < file
Well, you can easily accomplish the same effect using a regular expression. Assuming the separator is a space, it would look like:
awk '{ sub(/[^ ]+ +[^ ]+ +/, ""); print }'
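For instance, on an invented line, the regex strips the first two space-separated fields along with the spacing after them:

```shell
echo 'alpha beta gamma delta' | awk '{ sub(/[^ ]+ +[^ ]+ +/, ""); print }'
# -> gamma delta
```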
awk '{print ""}{for(i=3;i<=NF;++i)printf "%s ", $i}'
A bit late here, but none of the above seemed to work for me. Try this: using printf, it inserts a space after each field. I chose not to have a newline at the end.
awk '{for(i=3;i<=NF;++i) printf("%s ", $i) }'
awk '{for (i=4; i<=NF; i++) printf("%s ", $i); printf("\n");}'
prints fields from the 4th to the last, in the same order they were in the original file (note %s; %c would print only the first character of each field)
In Bash you can use the following syntax, reading each line into an array:
while read -r -a cols; do echo "${cols[@]:2}"; done < file.txt
Learn more: Handling positional parameters at Bash Hackers Wiki
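A quick demo of the array slicing (the input lines and file name are invented; -r keeps read from mangling backslashes; requires bash):

```shell
printf '%s\n' 'a b c d' 'x y z' > file.txt
while read -r -a cols; do
  echo "${cols[@]:2}"   # all elements from index 2 onward
done < file.txt
# -> c d
#    z
```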
If it's only about ignoring the first two fields, and if you don't want a space where those fields were masked (as some of the answers above leave):
awk '{gsub($1" "$2" ",""); print;}' file
awk '{$1=$2=""}1' FILENAME | sed 's/^\s\+//'
The first two columns are cleared, and sed removes the leading spaces (anchored with ^; without the anchor and with /g, sed would strip every space on the line).
In AWK, columns are called fields, hence NF is the key
all rows:
awk -F '<column separator>' '{print $(NF-2)}' <filename>
first row only:
awk -F '<column separator>' 'NR<=1{print $(NF-2)}' <filename>
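Note that $(NF-2) counts from the end of the line: it prints the third field from the last, whatever the total field count. For instance, with a comma separator on invented data:

```shell
echo 'a,b,c,d,e' | awk -F ',' '{print $(NF-2)}'
# -> c
```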