I'm trying to query my custom logs table (e.g. CustomData_CL) for a given time range; the result of the query is the time-filtered data, and I want to find out the data size of that output.
Query I have used to fetch the time-ranged output:
CustomData_CL
| where TimeGenerated between (datetime(2022–09–14 04:00:00) .. datetime(2020–09–14 05:00:00))
But it is giving the following error:
Can anyone please advise?
Note the characters with code point 8211.
These are not standard hyphens (-) 🙂.
let p_str = "(datetime(2022–09–14 04:00:00) .. datetime(2020–09–14 05:00:00))";
print str = p_str
| mv-expand str = extract_all("(.)", str) to typeof(string)
| extend dec = to_utf8(str)[0]
str      dec
(        40
d        100
a        97
t        116
e        101
t        116
i        105
m        109
e        101
(        40
2        50
0        48
2        50
2        50
–        8211
0        48
9        57
–        8211
1        49
4        52
(space)  32
0        48
4        52
:        58
0        48
0        48
:        58
0        48
0        48
)        41
(space)  32
.        46
.        46
(space)  32
d        100
a        97
t        116
e        101
t        116
i        105
m        109
e        101
(        40
2        50
0        48
2        50
0        48
–        8211
0        48
9        57
–        8211
1        49
4        52
(space)  32
0        48
5        53
:        58
0        48
0        48
:        58
0        48
0        48
)        41
)        41
Fiddle
Update, per OP request:
Please note that, in addition to the wrong character that caused the syntax error, the year in your second datetime was wrong (2020 instead of 2022).
// Generation of mock table. Not part of the solution
let CustomData_CL = datatable(TimeGenerated:datetime)[datetime(2022-09-14 04:30:00)];
// Solution starts here
CustomData_CL
| where TimeGenerated between (datetime(2022-09-14 04:00:00) .. datetime(2022-09-14 05:00:00))
TimeGenerated
2022-09-14T04:30:00Z
Fiddle
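As for the original goal of measuring the size of the filtered output, one option is a sketch like the following, assuming the built-in estimate_data_size() scalar function fits your needs (note it returns an estimated size in bytes, not an exact one):

```kusto
CustomData_CL
| where TimeGenerated between (datetime(2022-09-14 04:00:00) .. datetime(2022-09-14 05:00:00))
// Estimate the total payload size, in bytes, of all columns in the filtered rows
| summarize total_bytes = sum(estimate_data_size(*))
```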
I'm preparing the material for a KQL course, and I thought about creating a challenge, based on your question.
Check out what happened when I posted your code into Kusto Web Explorer... 🙂
How cool is that?!
Related
I have a routine that takes a string and turns it into an array of numbers. This is in VBA running in Excel, as part of Office Professional 2019.
The code below is a demo version that encapsulates the original code and illustrates the problem.
I need to display the numerical equivalent of each character in the string, so I am using CStr(by) elsewhere in the code.
Public Sub TestByteFromString()
    '### vars
    Dim ss As String, i As Integer
    Dim arrBytes() As Byte
    Dim by As Byte
    '###
    ss = ""
    For i = 0 To 127 Step 1
        ss = ss & Chr(Val(i + 126))
    Next i
    arrBytes = ss
    '###
    For i = LBound(arrBytes) To UBound(arrBytes) Step 2
        by = arrBytes(i)
        Debug.Print "Index " & CStr(i) & " Byte " & CStr(by) & " Original " & CStr((i / 2 + 126)) & " Difference = " & CStr(((i / 2 + 126)) - CInt(by))
    Next i
    '###
End Sub
It seems to work fine except for certain values greater than 126, some of which are shown by the demo above.
I am getting these results and cannot see an explanation or a consistent pattern. Can anyone see what is wrong?
Index 0 Byte 126 Original 126 Difference = 0
Index 2 Byte 127 Original 127 Difference = 0
Index 4 Byte 172 Original 128 Difference = -44
Index 6 Byte 129 Original 129 Difference = 0
Index 8 Byte 26 Original 130 Difference = 104
Index 10 Byte 146 Original 131 Difference = -15
Index 12 Byte 30 Original 132 Difference = 102
Index 14 Byte 38 Original 133 Difference = 95
Index 16 Byte 32 Original 134 Difference = 102
Index 18 Byte 33 Original 135 Difference = 102
Index 20 Byte 198 Original 136 Difference = -62
Index 22 Byte 48 Original 137 Difference = 89
Index 24 Byte 96 Original 138 Difference = 42
Index 26 Byte 57 Original 139 Difference = 82
Index 28 Byte 82 Original 140 Difference = 58
Index 30 Byte 141 Original 141 Difference = 0
Index 32 Byte 125 Original 142 Difference = 17
Index 34 Byte 143 Original 143 Difference = 0
Index 36 Byte 144 Original 144 Difference = 0
Index 38 Byte 24 Original 145 Difference = 121
Index 40 Byte 25 Original 146 Difference = 121
Index 42 Byte 28 Original 147 Difference = 119
Index 44 Byte 29 Original 148 Difference = 119
Index 46 Byte 34 Original 149 Difference = 115
Index 48 Byte 19 Original 150 Difference = 131
Index 50 Byte 20 Original 151 Difference = 131
Index 52 Byte 220 Original 152 Difference = -68
Index 54 Byte 34 Original 153 Difference = 119
Index 56 Byte 97 Original 154 Difference = 57
Index 58 Byte 58 Original 155 Difference = 97
Index 60 Byte 83 Original 156 Difference = 73
Index 62 Byte 157 Original 157 Difference = 0
Index 64 Byte 126 Original 158 Difference = 32
Index 66 Byte 120 Original 159 Difference = 39
Index 68 Byte 160 Original 160 Difference = 0
It seems fine for everything from 160 upward and below 126.
I don't think it is the CStr() function: if I multiply the byte value by 2 and use CStr(), I get this kind of result, suggesting the byte's numerical value itself is the problem.
Index 66 Byte 120 Original 159 Difference = 39
Index 66 2*Byte 240
Other causes I investigated without finding an explanation:
- two-byte storage of characters in strings;
- the ASCII character set;
- bytes being decoded as negative numbers if the MSB is set (unlikely, though, as 160 onwards is correct).
There may be much better ways to get the array, and they would be very useful, but if possible I would like to also know what has gone wrong so I, and anyone reading, would not make the same mistake again.
Thanks for any assistance, R.
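For what it's worth, the two-byte-storage hypothesis above can be checked outside VBA. On a Western-locale Windows system, Chr() for 128-159 goes through the Windows-1252 code page, so the String holds the mapped Unicode character, and assigning a String to a Byte array copies its UTF-16LE bytes; the "Byte" column is then the low byte of the Unicode code point. A sketch in Python (the cp1252 locale is an assumption about your system, not something VBA guarantees):

```python
# Windows-1252 maps byte 130 (0x82) to U+201A (8218); VBA's Chr(130) would
# return that character, and its UTF-16LE low byte is 8218 % 256 = 26 --
# exactly the "Index 8 Byte 26" row in the output above.
for original, reported in [(128, 172), (130, 26), (131, 146), (152, 220)]:
    ch = bytes([original]).decode("cp1252")   # what Chr() likely produced
    assert ord(ch) % 256 == reported          # low byte of the UTF-16 code unit
```

Code points where Windows-1252 agrees with Latin-1 (below 128 and from 160 up, plus the undefined slots like 129, 141, 143, 144, 157) round-trip unchanged, which matches the zero-difference rows in your output.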
I have a set of data that looks like this:
NK.Chr1:75500000-95000000:28960-29007 NG-unitig0655 97.872 47 1 0 1 47 121009 120963 2.90e-14 80.6
NK.Chr1:75500000-95000000:28960-29007 NG-1DRT-unitig0549 97.872 47 1 0 1 47 623680 623726 2.90e-14 80.6
NK.Chr1:75500000-95000000:28960-29007 NG-1DRT-unitig0278 97.872 47 1 0 1 47 1224581 1224627 2.90e-14 80.6
NK.Chr1:75500000-95000000:28960-29007 NG-1DRT-Chr4 97.872 47 1 0 1 47 8416368 8416414 2.90e-14 80.6
NK.Chr1:75500000-95000000:28960-29007 NG-1DRT-Chr4 97.872 47 1 0 1 47 20041035 20041081 2.90e-14 80.6
NK.Chr1:75500000-95000000:28960-29007 NG-1DRT-Chr4 97.872 47 1 0 1 47 35175472 35175426 2.90e-14 80.6
NK.Chr1:75500000-95000000:28960-29007 NG-1DRT-Chr4 97.872 47 1 0 1 47 56460095 56460049 2.90e-14 80.6
I need to filter the lines in the range of 0-3900000, considering only the numbers before NG.
grep 'NK.Chr1:75500000-95000000:[0-3900000]' NG.1DRT-blast.out > chr1-blast-NG.txt
I tried this command, but it returned all lines containing NK.Chr1:75500000-95000000 without applying the range; apparently a bracket expression like [0-3900000] matches single characters, not a numeric range.
Does anyone know how to build a proper command for it?
With your shown samples and attempt, please try the following awk code. Written and tested in GNU awk, whose match() function accepts a third array argument.
awk 'match($0, /NK\.Chr1:75500000-95000000:([0-9]+)-([0-9]+)[[:space:]]+NG/, arr) && arr[1]+0 >= 0 && arr[2]+0 <= 3900000' Input_file
Explanation: match() applies the regex NK\.Chr1:75500000-95000000:([0-9]+)-([0-9]+)[[:space:]]+NG, whose two capturing groups (the start and end of the sub-range just before NG) are stored in the array arr. Adding 0 (arr[1]+0) forces a numeric comparison, and the line is printed only when both endpoints fall within 0-3900000.
I'm writing a bingo game in python. So far I can generate a bingo card and print it.
My problem is after I've randomly generated a number to call out, I don't know how to 'cross out' that number on the card to note that it's been called out.
This is the output, a randomly generated card:
B 11 13 14 2 1
I 23 28 26 27 22
N 42 45 40 33 44
G 57 48 59 56 55
O 66 62 75 63 67
I was thinking of using pop() on a randomly shuffled list to generate the numbers to call out (in bingo the numbers go from 1 to 75):
random_draw_list = random.sample(range(1, 76), 75)
number_drawn = random_draw_list.pop()
How can I write a function that will 'cross out' a number on the card after it's been called?
So, for example, if number_drawn comes out as 11, it should replace 11 on the card with an X or a zero.
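One way to sketch that, assuming the card is stored as a dict of rows keyed by letter to match the printed layout (the structure and names here are hypothetical, since the card code isn't shown):

```python
import random

def cross_out(card, number_drawn, mark="X"):
    """Replace every occurrence of number_drawn on the card with mark."""
    for letter, row in card.items():
        card[letter] = [mark if n == number_drawn else n for n in row]

card = {
    "B": [11, 13, 14, 2, 1],
    "I": [23, 28, 26, 27, 22],
}
random_draw_list = random.sample(range(1, 76), 75)
number_drawn = 11  # in the real game: random_draw_list.pop()
cross_out(card, number_drawn)
# card["B"] is now ["X", 13, 14, 2, 1]
```

Searching every row also handles the (impossible in real bingo, but harmless) case of a number appearing twice.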
I am working on a shell script that will execute mongoexport and upload the result to an S3 bucket.
The goal is to extract data older than 45 days as readable JSON; the script will run every day as a crontab job.
So, basically, the purpose is to archive data older than 45 days.
Normal queries work as intended, but when I try to use variables, it results in an error.
The code regular format is as the following:
firstdate="$(date -v-46d +%Y-%m-%d)"
afterdate="$(date -v-45d +%Y-%m-%d)"
backup_name=gamebook
colname=test1
mongoexport --uri mongodb+srv://<user>:<pass>@gamebookserver.tvdmx.mongodb.net/$dbname
--collection $colname --query '{"gameDate": {"$gte": {"$date": "2020-09-04T00:00:00:000Z"}, "$lte": {"$date": "2020-09-05T00:00:00.000Z"}}}' --out $backup_name;
The previous code works but I want to make it more dynamic in the dates so I tried the code as shown below:
firstdate="$(date -v-46d +%Y-%m-%d)"
afterdate="$(date -v-45d +%Y-%m-%d)"
backup_name=gamebook
colname=test1
mongoexport --uri mongodb+srv://<user>:<pass>@gamebookserver.tvdmx.mongodb.net/$dbname
--collection $colname --query '{"gameDate": {"$gte": {"$date": "$firstdateT00:00:00:000Z"}, "$lte": {"$date": "$afterdateT00:00:00.000Z"}}}' --out $backup_name;
This results in the error:
2020-10-20T15:36:13.881+0700 query '[123 34 103 97 109 101 68 97 116 101 34 58 32 123 34 36 103 116 101 34 58 32 123 34 36 100 97 116 101 34 58 32 36 102 105 114 115 116 100 97 116 101 84 48 48 58 48 48 58 48 48 58 48 48 48 90 125 44 32 34 36 108 116 101 34 58 32 123 34 36 100 97 116 101 34 58 32 34 36 97 102 116 101 114 100 97 116 101 84 48 48 58 48 48 58 48 48 46 48 48 48 90 34 125 125 125]' is not valid JSON: invalid character '$' looking for beginning of value
2020-10-20T15:36:13.881+0700 try 'mongoexport --help' for more information
I've read in the documentation and it says:
You must enclose the query document in single quotes ('{ ... }') to ensure that it does not interact with your shell environment.
So my overall question is that is there a way to use values in the shell environment and parse them into the query section?
Or is there a better way that might get me the same result?
I'm still new to mongodb in general so any advise would be great.
You can always put together a string by combining interpolating and non-interpolating parts:
For instance,
--query '{"gameDate": {"$gte": {"'"$date"'": "'"$firstdate"'T00:00:00:000Z"}, "$lte": {"$date": "$afterdateT00:00:00.000Z"}}}'
would interpolate the first occurrence of date and the shell variable firstdate, but would pass the rest literally to mongoexport (I've picked two variables for demonstration, because it isn't clear from your question which ones you want expanded and which you don't). Basically, a
'$AAAA'"$BBBB"'$CCCCC'
is in effect a single string, but the $BBBB part would undergo parameter expansion. Hence, if
BBBB=foo
you would get the literal string $AAAAfoo$CCCCC out of this.
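A quick way to see this splicing behavior in action (a standalone sketch, not specific to mongoexport):

```shell
# The three quoted pieces below glue together into ONE argument;
# only the middle, double-quoted piece undergoes parameter expansion.
BBBB=foo
echo '$AAAA'"$BBBB"'$CCCCC'
# prints: $AAAAfoo$CCCCC
```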
Since this becomes tedious to work with, an alternative approach is to enclose everything in double quotes, which means all parameters are expanded, and to manually escape the parts you don't want expanded. You could write the last example also as
"\$AAAA$BBBB\$CCCCC"
I am trying to use awk to parse a tab delimited table -- there are several duplicate entries in the first column, and I need to remove the duplicate rows that have a smaller total sum of the other 4 columns in the table. I can remove the first or second row easily, and sum the columns, but I'm having trouble combining the two. For my purposes there will never be more than 2 duplicates.
Example file: http://pastebin.com/u2GBnm2D
Desired output in this case would be to remove the rows:
lmo0330 1 1 0 1
lmo0506 7 21 2 10
And keep the other two rows with the same gene id in the column. The final parsed file would look like this: http://pastebin.com/WgDkm5ui
Here's what I have tried (this doesn't do anything useful yet, but the first part removes the second duplicate and the second part sums the columns):
awk 'BEGIN {!a[$1]++} {for(i=1;i<=NF;i++) t+=$i; print t; t=0}'
I tried modifying the 2nd part of the script in the best answer of this question: Removing lines containing a unique first field with awk?
awk 'FNR==NR{a[$1]++;next}(a[$1] > 1)' ./infile ./infile
But unfortunately I don't really understand what's going on well enough to get it working. Can anyone help me out? I think I need to replace the a[$1] > 1 part with [remove the first or second duplicate's counts, depending on which is larger].
EDIT: I'm also using GNU Awk 3.1.7 if that matters.
You can use this awk command:
awk 'NR == 1 {
print;
next
} {
s = $2+$3+$4+$5
} s >= sum[$1] {
sum[$1] = s;
if (!($1 in rows))
a[++n] = $1;
rows[$1] = $0
} END {
for(i=1; i<=n; i++)
print rows[a[i]]
}' file | column -t
Output:
gene SRR034450.out.rpkm_0 SRR034451.out.rpkm_0 SRR034452.out.rpkm_0 SRR034453.out.rpkm_0
lmo0001 160 323 533 293
lmo0002 135 317 504 306
lmo0003 1 4 5 3
lmo0004 35 59 58 48
lmo0005 113 218 257 187
lmo0006 279 519 653 539
lmo0007 563 1053 1165 1069
lmo0008 34 84 203 107
lmo0009 13 45 90 49
lmo0010 57 210 237 169
lmo0011 65 224 247 179
lmo0012 65 226 250 215
lmo0013 342 500 738 682
lmo0014 662 1032 1283 1311
lmo0015 321 413 631 637
lmo0016 175 253 273 325
lmo0017 3 6 6 6
lmo0018 33 38 46 45
lmo0019 13 1 39 1
lmo0020 3 12 28 15
lmo0021 3 4 14 12
lmo0022 2 3 5 1
lmo0023 2 0 3 2
lmo0024 1 0 2 6
lmo0330 1 1 1 3
lmo0506 151 232 60 204
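For readers less at home in awk, the same keep-the-larger-sum idea can be sketched in Python (a hypothetical equivalent, not part of the awk solution; it assumes whitespace-delimited rows with a header line):

```python
def dedupe_keep_larger(lines):
    """For each gene id (column 1), keep the row whose columns 2-5 sum larger,
    preserving first-seen order -- mirroring the awk logic above."""
    header, *rows = lines
    best = {}    # gene id -> (sum, full row)
    order = []   # gene ids in first-seen order
    for row in rows:
        fields = row.split()
        gene = fields[0]
        s = sum(int(x) for x in fields[1:5])
        if gene not in best:
            order.append(gene)
        if gene not in best or s >= best[gene][0]:
            best[gene] = (s, row)
    return [header] + [best[g][1] for g in order]

rows = [
    "gene c1 c2 c3 c4",
    "lmo0330 1 1 0 1",   # sum 3 -> dropped
    "lmo0330 1 1 1 3",   # sum 6 -> kept
]
print(dedupe_keep_larger(rows)[1])  # lmo0330 1 1 1 3
```

Like the awk version, ties (s >= best sum) keep the later of the two rows.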