I have a file from which I'm trying to print only the lines with a timestamp greater than or equal to 22:01, but I can't seem to get it to work correctly. As can be seen below, it still prints the 8:05 timestamps as well. Probably a schoolboy error, but I'm struggling to get this working, so any pointers in the right direction would be appreciated.
cat /tmp/m1.out | awk '$1>="22:01"'
22:05:42:710
23:05:42:710
8:05:42:710
8:05:42:710
8:05:42:710
8:05:42:710
8:05:42:710
Thanks,
Matt
The problem has been correctly identified in the comments. You are comparing against a string, which triggers a string comparison. In string comparison, "8:05:42:710" is greater than "22:01" because the first character "8" is greater than "2".
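You can see the string comparison for yourself with a quick test (1 means true):
awk 'BEGIN { print ("8:05" >= "22:01") }'
1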
One option would be to split the time into its separate components and use numeric comparisons instead (the two-part test is needed so that, e.g., 23:00 still matches):
awk -F: '$1 > 22 || ($1 == 22 && $2 >= 1)' /tmp/m1.out
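Against the sample data this prints only the two late entries:
22:05:42:710
23:05:42:710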
If your logic is more complex, e.g. your file has more fields and you don't want to change the field separator, you can use split:
awk '{ split($1, pieces, /:/) } pieces[1] > 22 || (pieces[1] == 22 && pieces[2] >= 1)' file
Padding the field with a leading zero is a little trickier and isn't necessary in your example, as a time with only one digit in the hours will never be greater than 22.
The best thing to do if possible would be to use a timestamp that is compatible with string comparison, although that would require control of whatever is producing the file you're working with.
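If you did want to keep a string comparison, you could also normalize the hours on the fly; a minimal sketch:
awk -F: 'sprintf("%02d:%02d", $1, $2) >= "22:01"' /tmp/m1.out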
I have a log file named source.log with a time format like:
Fri, 09 Dec 2016 05:03:29 GMT 127.0.0.1
and I am using a script to get the logs from the last hour.
Script:
awk -vDate=`date -d'now-1 hour' +[%d/%b/%Y:%H:%M:%S` '$4 > Date {print Date, $0}' source.log > target.log
But this script gives the same result as the source file.
There is something wrong in the time format matching, due to which it is not giving the last hour's records.
I know I'm late to help the OP, but maybe this answer can help anyone else in this situation.
First, it's necessary to compare the whole date and not only the time part, because entries near midnight would otherwise compare incorrectly (23:59 from yesterday would look "later" than 00:05 today).
Note that awk can only compare strings and numbers. Some awk implementations have the mktime() function, which converts a specifically formatted string into a UNIX timestamp for datetime comparisons, but it only accepts one fixed input format, not arbitrary datetime strings, so we can't feed it our log format directly.
The best way, if possible, would be to change the datetime format of the log entries to 'YYYYMMDDhhmmss' or ISO format. That way, comparing two datetimes is as simple as comparing strings or numbers.
But let's assume that we can't change the log entries' date format, so we'll need to do the conversion ourselves inside awk:
awk -vDate="`date -d'now-1 hour' +'%Y%m%d%H%M%S'`" '
BEGIN {
    # Build a lookup table: English month abbreviation -> zero-padded month number.
    for (i = 0; i < 12; i++)
        MON[substr("JanFebMarAprMayJunJulAugSepOctNovDec", i*3+1, 3)] = sprintf("%02d", i+1);
}
# Print the log entry when its timestamp is newer than Date.
toDate() > Date
# Rebuild the line date fields as YYYYMMDDhhmmss so the string compares like Date.
function toDate() {
    time = $5; gsub(/:/, "", time);
    return $4 MON[$3] $2 time;
}' source.log
Explanation
-vDate=... sets the Date awk variable with the initial datetime (one hour ago).
The BEGIN section creates an array indexed by the month abbreviation (this is specific to English month names).
The toDate() function converts the line's fields into a string with the same format as the Date variable (YYYYMMDDhhmmss).
Finally when the condition toDate() > Date is true, awk prints the current line (log entry).
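As an aside, GNU awk's mktime() does accept one fixed format, "YYYY MM DD HH MM SS", so you could alternatively build that string and compare epoch seconds. A sketch assuming gawk, with TZ set to GMT to match the log's timezone:
TZ=GMT gawk '
BEGIN {
    for (i = 0; i < 12; i++)
        MON[substr("JanFebMarAprMayJunJulAugSepOctNovDec", i*3+1, 3)] = i+1
}
{
    # "Fri, 09 Dec 2016 05:03:29 GMT ..." -> "2016 12 09 05 03 29"
    split($5, t, ":")
    ts = mktime($4 " " MON[$3] " " $2 " " t[1] " " t[2] " " t[3])
    if (ts > systime() - 3600) print
}' source.log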
My simple gawk filter used in my program is not filtering out a value that is a digit longer than the rest.
Here is my text file:
172 East Fourth Street Toronto 4 1890 1500000 6
2213 Mt. Vernon Avenue Vaughn 2 890 500000 4
One Lincoln Plaza Toronto 2 980 900000 1
The columns are separated by tabs.
My gawk script:
echo "Enter max price"
read price
gawk -F "\t+" '$5 <= "'$price'"' file
The 1500000 value appears if I enter a value of 150001 or greater. I think it has to do with the gawk not reading the last digit correctly. I'm not permitted to change the original text file and I need to use the gawk command. Any help is appreciated!
Your awk command performs lexical comparison rather than numerical comparison, because the RHS - the price value - is enclosed in double-quotes.
Removing the double-quotes would help, but it's advisable to reformulate the command as follows:
gawk -F '\t+' -v price="$price" '$5 <= price' file
The shell variable $price is now passed to Awk with -v, as the Awk variable price, which is the safe way to pass values in. You can then use a single-quoted Awk script without having to splice in shell variables or worry about which parts the shell may expand up front.
Afterthought: As Ed Morton points out in a comment, to ensure that a field or variable is treated as a number, append +0 to it; e.g., $5 <= price+0 (conversely, append "" to force treatment as a string).
By default, Awk infers from the values involved and the context whether to interpret a given value as a string or a number - which may not always give the desired result.
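For instance, to make the numeric comparison explicit on both sides:
gawk -F '\t+' -v price="$price" '$5+0 <= price+0' file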
You're really calling a separate gawk for each column? One will do:
gawk -F "\t+" -v OFS="\t" \
-v city="$city" \
-v bedrooms="$bedrooms" \
-v space="$space" \
-v price="$price" \
-v weeks="$weeks" '
$2 == city && $3 >= bedrooms && $4 >= space && $5 <= price && $6 <= weeks {
$1 = $1; print
}
' listing |
sort -t $'\t' $sortby $ordering |
column -s $'\t' -t
(This is not an answer, just a comment that needs formatting)
The $1=$1 bit is an awk trick to make it rewrite the current record using the Output Field Separator, a single tab. It saves you a call to tr.
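A quick illustration of the trick, using a comma as OFS:
echo "a b c" | awk -v OFS="," '{ $1 = $1; print }'
a,b,c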
So my input file is:
1;a;b;2;c;d;3;e;f;4;g;h;5
1;a;b;2;c;d;9;e;f;101;g;h;9
3;a;b;1;c;d;3;e;f;10;g;h;5
I want to sum the numbers and then write the result to a file (so I need every 4th field).
I tried many sum examples on the net but I didn't find an answer to my problem.
My output file should look like:
159
Thanks!
Update:
a;b;**2**;c;d;g
3;e;**3**;s;g;k
h;5;**2**;d;d;l
The problem is the same.
I want to sum the numbers in the 3rd field of each line.
So 2+3+2.
Output: 7
Apparently you want to sum every 3rd field, not every 4th. The following code loops through the fields, summing each one at a 3k+1 position.
$ awk -F";" '{for (i=1; i<=NF; i+=3) sum+=$i} END{print sum}' file
159
The value is printed after processing the whole file, in the END {} block.
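For the updated input, where the wanted number is always the 3rd field (assuming the ** are only highlighting), the loop isn't even needed:
awk -F";" '{sum+=$3} END{print sum}' file
7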
I am using awk to create a .cue sheet for a long mp3 from a list of track start times, so the input may look like this:
01:01:00-Title-Artist
01:02:00:00-Title2-Artist2
Currently, I am using "-" as the field separator so that I can capture the start time, Artist and Title for manipulation.
The first time can be used as is in a cue sheet. The second time needs to be converted to 62:00:00 (the cue sheet cannot handle hours). What is the best way to do this? If necessary, I can force all of the times in the input file to have "00:" in the hours section, but I'd rather not do this if I don't have to.
Ultimately, I would like to have time, title and artist fields with the time field having a number of minutes greater than 60 rather than an hour field.
fedorqui's solution is valid: just pipe the output into another instance of awk. However, if you want to do it inside one awk process, you can do something like:
awk 'split($1,a,":")==4 { $1 = a[1] * 60 + a[2] ":" a[3] ":" a[4]}
1' FS=- OFS=- input
The split works on the first field only. If it yields 4 elements, the action rewrites the first field in the desired format.
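For the sample input this yields:
01:01:00-Title-Artist
62:00:00-Title2-Artist2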
Like this, for example:
$ awk -F: '{if (NF>3) $0=($1*60+$2)FS$3FS$4}1' file
01:01:00-Title-Artist
62:00:00-Title2-Artist2
If a line contains 4 or more fields based on the : split, it joins the 1st and 2nd with the rule 60*1st + 2nd. FS means field separator and is set to : at the start.
I have a file 1.blast with coordinate information like this
1 gnl|BL_ORD_ID|0 100.00 33 0 0 1 3
27620 gnl|BL_ORD_ID|0 95.65 46 2 0 1 46
35296 gnl|BL_ORD_ID|0 90.91 44 4 0 3 46
35973 gnl|BL_ORD_ID|0 100.00 45 0 0 1 45
41219 gnl|BL_ORD_ID|0 100.00 27 0 0 1 27
46914 gnl|BL_ORD_ID|0 100.00 45 0 0 1 45
and a file 1.fasta with sequence information like this
>1
TCGACTAGCTACGACTCGGACTGACGAGCTACGACTACGG
>2
GCATCTGGGCTACGGGATCAGCTAGGCGATGCGAC
...
>100000
TTTGCGAGCGCGAAGCGACGACGAGCAGCAGCGACTCTAGCTACTG
I am now looking for a script that takes the first column from 1.blast (the sequence IDs), extracts those sequences from 1.fasta, and then removes from each sequence the positions between $7 and $8, meaning from the first two matches the output would be
>1
ACTAGCTACGACTCGGACTGACGAGCTACGACTACGG
>27620
GTAGATAGAGATAGAGAGAGAGAGGGGGGAGA
...
(please notice that the first three bases of >1 are not in this sequence)
The IDs are consecutive, meaning I can extract the required information like this:
awk '{print 2*$1-1, 2*$1, $7, $8}' 1.blast
This then gives me a matrix containing, in the first column, the row number of the sequence identifier; in the second column, the row number of the sequence itself (one after the ID row); and then the two coordinates that should be excluded. So basically a matrix with all the information needed to determine which elements shall be extracted from 1.fasta.
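For the first two rows of the sample 1.blast this prints:
1 2 1 3
55239 55240 1 46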
Unfortunately I do not have much experience with scripting, hence I am now a bit lost: how do I feed these values into a suitable sed command?
I can get specific rows like this:
sed -n 3,4p 1.fasta
and the string that I want to remove e.g. via
sed -n 5p 1.fasta | awk '{print substr($0,2,5)}'
But my problem is now: how can I pipe the information from the first awk call into the other commands so that they extract the right rows and then remove the given coordinates from the sequence rows? Also, substr isn't the right command; I would need something like remstr(string,start,stop) that removes everything between two positions in a given string, but I think I could write that myself. It's especially the correct piping that is a problem for me.
If you do bioinformatics and work with DNA sequences (or even more complicated things like sequence annotations), I would recommend having a look at Bioperl. This obviously requires knowledge of Perl, but has quite a lot of functionality.
In your case you would want to generate Bio::Seq objects from your fasta-file using the Bio::SeqIO module.
Then, you would need to read the wanted fasta-entry numbers and positions into a hash, with the fasta name as the key and, as the value, an array of the two positions for each subsequence you want to extract. If there can be more than one such subsequence per fasta entry, the value would have to be an array of arrays.
With this data structure, you could then go ahead and extract the sequences using the subseq method from Bio::Seq.
I hope this is a way to go for you, although I'm sure that this is also feasible with pure bash.
This isn't an answer, it is an attempt to clarify your problem; please let me know if I have gotten the nature of your task correct.
foreach row in blast:
    get the proper (blast[$1]) sequence from fasta
    drop bases (blast[$7..$8]) from sequence
    print blast[$1], shortened_sequence
If I've got your task correct, you are being hobbled by your programming language (bash) and the peculiar format of your data (a record split across rows). Perl or Python would be far more suitable to the task; indeed Perl was written in part because multiple file access in awk of the time was really difficult if not impossible.
You've come pretty far with the tools you know, but it looks like you are hitting the limits of their convenient expressibility.
As both thunk and msw have pointed out, more suitable tools are available for this kind of task, but here you have a script that can teach you something about how to handle it with awk:
Content of script.awk:
## Process first file from arguments.
FNR == NR {
    ## Save ID and the range of characters to remove from sequence.
    blast[ $1 ] = $(NF-1) " " $NF
    next
}

## Process second file. For each FASTA id...
$1 ~ /^>/ {
    ## Get number.
    id = substr( $1, 2 )

    ## Read next line (the sequence).
    getline sequence

    ## If the ID is one found in the other file, get ranges and
    ## extract those characters from sequence.
    if ( id in blast ) {
        split( blast[id], ranges )
        sequence = substr( sequence, 1, ranges[1] - 1 ) substr( sequence, ranges[2] + 1 )

        ## Print both lines with the shortened sequence.
        printf "%s\n%s\n", $0, sequence
    }
}
Assuming the 1.blast of the question and a customized 1.fasta to test it:
>1
TCGACTAGCTACGACTCGGACTGACGAGCTACGACTACGG
>2
GCATCTGGGCTACGGGATCAGCTAGGCGATGCGAC
>27620
TTTGCGAGCGCGAAGCGACGACGAGCAGCAGCGACTCTAGCTACTGTTTGCGA
Run the script like:
awk -f script.awk 1.blast 1.fasta
That yields:
>1
ACTAGCTACGACTCGGACTGACGAGCTACGACTACGG
>27620
TTTGCGA
Of course I'm assuming some things, most importantly that the fasta sequences are not longer than one line.
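If they are longer, you could flatten the file first; a small sketch (wrapped.fasta is a placeholder name):
awk '/^>/ { if (seq) print seq; print; seq = ""; next }
     { seq = seq $0 }
     END { if (seq) print seq }' wrapped.fasta > 1.fasta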
Updated the answer:
awk '
## First file (1.fasta): remember each sequence by its ID.
NR==FNR && NF {
    id = substr($1, 2)
    getline seq
    a[id] = seq
    next
}
## Second file (1.blast): drop the bases between $7 and $8.
## Splicing with substr avoids treating the sequence as a regex,
## and note that substr takes a length, not an end position.
($1 in a) && NF {
    a[$1] = substr(a[$1], 1, $7-1) substr(a[$1], $8+1)
    print ">" $1 "\n" a[$1]
}' 1.fasta 1.blast