Azure Stream Analytics TIMESTAMP BY format

When running stream analytics, I get an error message:
"Dropping events due to improper timestamps. Stream Analytics only
supports ISO8601 format for DateTime values"
I have tried the following formats:
2017-09-19T13:17:29.0111070Z
2017-09-19T13:17:29.123456
2017-09-19 13:17:29.123456
2017-09-19T13:17:29.123
2017-09-19 13:17:29.123
However, when I use the Test button on the query in Stream Analytics, the output comes out fine. Also, when I comment out the TIMESTAMP BY clause, the query works, but System.Timestamp in the SELECT statement does not return the correct time.
Is this a formatting issue or something else?

First, as Vignesh Chandramohan mentioned, you can try using CAST to convert the expression to DateTime and check whether it returns a data conversion error, which would indicate that some input value cannot be cast to type 'datetime'.
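For example, a minimal diagnostic sketch along those lines (the field name eventTime and the input alias [your-input] are placeholders, so substitute your own; TRY_CAST returns NULL instead of raising an error when a value cannot be parsed):
SELECT
eventTime,
CAST(eventTime AS datetime) AS castedTime, -- raises a data conversion error on bad values
TRY_CAST(eventTime AS datetime) AS safeTime -- NULL when the value cannot be parsed
FROM
[your-input]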
Second, many factors can cause a no-output issue, for example: a WHERE clause in the query filters out events and prevents output from being generated, or the timestamp of the events is before the job start time and the events are therefore dropped.
For detailed steps to debug Azure Stream Analytics jobs, please check Diagnose and solve problems in the Azure portal or this article: Troubleshooting guide for Azure Stream Analytics.

Related

Data being overwritten when outputting data from Stream Analytics to Power BI

Lately I've been playing around with Stream Analytics queries with Power BI as an output sink. I made a simple query that retrieves the total count of HTTP response codes of our website requests over time and groups them by date and response code.
The input data is retrieved from a storage account that holds blob storage. This is my query:
SELECT
DATETIMEFROMPARTS(DATEPART(year,R.context.data.eventTime), DATEPART(month,R.context.data.eventTime),DATEPART(day,R.context.data.eventTime),0,0,0,0) as datum,
request.ArrayValue.responseCode,
count(request.ArrayValue.responseCode)
INTO
[requests-httpresponsecode]
FROM
[cvweu-internet-pr-sa-requests] R TIMESTAMP BY R.context.data.eventTime
OUTER APPLY GetArrayElements(R.request) as request
GROUP BY DATETIMEFROMPARTS(DATEPART(year,R.context.data.eventTime), DATEPART(month,R.context.data.eventTime),DATEPART(day,R.context.data.eventTime),0,0,0,0), request.ArrayValue.responseCode, System.TimeStamp
Since continuous export became active on 3 September 2018, I chose a job start time of 3 September 2018. Since I am interested in the statistics up to today, I did not include a date interval, so I am expecting to see data from 3 September 2018 until now (20 December 2018). The job is running fine without errors and I chose Power BI as the output sink. Immediately I saw the chart being populated starting from 3 September, grouped by day and counting. So far, so good. A few days later I noticed the output dataset didn't start from 3 September anymore but from 2 December until now. Apparently the data is being overwritten.
The following link says:
https://learn.microsoft.com/en-us/azure/stream-analytics/stream-analytics-power-bi-dashboard
"defaultRetentionPolicy: BasicFIFO: Data is FIFO, with a maximum of 200,000 rows."
But my output table does not have anywhere close to 200,000 rows:
datum,count,responsecode
2018-12-02 00:00:00,332348,527387
2018-12-03 00:00:00,3178250,3282791
2018-12-04 00:00:00,3170981,4236046
2018-12-05 00:00:00,2943513,3911390
2018-12-06 00:00:00,2966448,3914963
2018-12-07 00:00:00,2825741,3999027
2018-12-08 00:00:00,1621555,3353481
2018-12-09 00:00:00,2278784,3706966
2018-12-10 00:00:00,3160370,3911582
2018-12-11 00:00:00,3806272,3681742
2018-12-12 00:00:00,4402169,3751960
2018-12-13 00:00:00,2924212,3733805
2018-12-14 00:00:00,2815931,3618851
2018-12-15 00:00:00,1954330,3240276
2018-12-16 00:00:00,2327456,3375378
2018-12-17 00:00:00,3321780,3794147
2018-12-18 00:00:00,3229474,4335080
2018-12-19 00:00:00,3329212,4269236
2018-12-20 00:00:00,651642,1195501
EDIT: I have created the STREAM input source according to https://learn.microsoft.com/en-us/azure/stream-analytics/stream-analytics-quick-create-portal. I could create a REFERENCE input as well, but this invalidates my query since APPLY and GROUP BY are not supported, and I also think STREAM input is what I want according to https://learn.microsoft.com/en-us/azure/stream-analytics/stream-analytics-add-inputs.
What am I missing? Is it my query?
It looks like you are streaming to a Streaming dataset. Streaming datasets don't store the data in a database; they keep only the last hour of data. If you want to keep the data pushed to them, you must enable the Historic data analysis option when you create the dataset.
This will create a PushStreaming dataset (a.k.a. Hybrid) with the basicFIFO retention policy (i.e. about 200k-210k records kept).
You're correct that Azure Stream Analytics should be creating a "PushStreaming" or "Hybrid" dataset. Can you confirm that your dataset is correctly configured as "Hybrid" (you can check this attribute even after creation as shown here)?
If it is the correct type, can you please clarify the following:
Does the schema of your data change? If, for example, you send the datum {a: 1, b: 2} and then {c: 3, d: 4}, Azure Stream Analytics will attempt to change the schema of your table, which can invalidate older data.
How are you confirming the number of rows in the dataset?
Looks like my query was the problem. I had to use TUMBLINGWINDOW(day,1) instead of System.TimeStamp.
TUMBLINGWINDOW and System.TimeStamp produce exactly the same chart output on the frontend, but they seem to be processed differently in the backend. This was not reflected on the frontend in any way, which made it confusing. I suspect that without TUMBLINGWINDOW, grouping by System.TimeStamp emits one output row per distinct timestamp (per response code) instead of one aggregated row per day, so you hit the 200k-rows-per-dataset limit much sooner than expected. The query below is the one producing the expected result.
SELECT
request.ArrayValue.responseCode,
count(request.ArrayValue.responseCode),
DATETIMEFROMPARTS(DATEPART(year,R.context.data.eventTime), DATEPART(month,R.context.data.eventTime),DATEPART(day,R.context.data.eventTime),0,0,0,0) as date
INTO
[requests-httpstatuscode]
FROM
[cvweu-internet-pr-sa-requests] R TIMESTAMP BY R.context.data.eventTime
OUTER APPLY GetArrayElements(R.request) as request
GROUP BY DATETIMEFROMPARTS(DATEPART(year,R.context.data.eventTime), DATEPART(month,R.context.data.eventTime),DATEPART(day,R.context.data.eventTime),0,0,0,0),
TUMBLINGWINDOW(day,1),
request.ArrayValue.responseCode
As we speak, my Stream Analytics job is running smoothly and producing the expected output from 3 September until now without data being overwritten.

OData query for all rows from the last 10 minutes

I need to filter rows from an Azure Table Store that are less than 10 minutes old. I'm using an Azure Function App integration to query the table, so a coded solution is not viable in this case.
I'm aware of the datetime type, but for this I have to specify an explicit datetime, for example -
Timestamp gt datetime'2018-07-10T12:00:00.1234567Z'
However, this is insufficient as I need the query to run on a timer every 10 minutes.
According to the OData docs, there are built-in functions such as totaloffsetminutes() and now(), but using these causes the function to fail.
[Error] Exception while executing function: Functions.FailedEventsCount. Microsoft.WindowsAzure.Storage: The remote server returned an error: (400) Bad Request.
Is there a way to query a Table Store dynamically in this way?
Turns out that this was easier than expected.
I added the following query filter to the Azure Table Store input integration -
Timestamp gt datetime'{afterDateTime}'
In conjunction with a parameter in the Function trigger route, and Bob's your uncle -
FailedEventsCount/after/{afterDateTime}
I appreciate that for other use cases it may not be viable to pass in the datetime, but for me that is perfectly acceptable.

Stream Analytics Query working but no output to table

I've got a problem with my Stream Analytics job. I'm pulling events from an IoT Hub and grouping them in time windows based on their custom timestamps; I've already written a query that does this correctly. But the problem is that it just doesn't write anything into my output table (a NoSQL table on my Storage Account).
The query runs without problems in the query editor (when testing with a sample input file) and produces the correct output, but when running 'for real', it doesn't output anything (the output table remains empty). I've even tried renaming the table and outputting to a blob storage, but no dice. Here's the query:
SELECT
'general' AS partitionKey,
MIN(ID_frame) AS rowKey,
DATEADD(second, 1, DATEADD(hour, -3, System.TimeStamp)) AS window_start,
System.TimeStamp AS window_end,
COUNT(ID_frame) AS device_count
INTO
[IoT-Hub-output-table]
FROM
[IoT-Hub-input] TIMESTAMP BY custom_timestamp
GROUP BY TumblingWindow(Duration(hour, 3), Offset(second, -1))
The interesting part is that, if I omit any windowing in my query, then the table output works just fine.
I've been beating my head against the wall about this for a few days now, so I think I've already tried most of the obvious things.
Since you are using a TumblingWindow of 3 hours, you will get a single output every 3 hours, containing an aggregate of all the events within that period.
So did you already wait 3 hours for the first output to be generated?
I would try setting the window smaller and then check again whether the output works correctly.
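For instance, a minimal test sketch of the same query with a 5-minute window (field and output names are taken from your query; the one-second offset trick is dropped here for brevity):
SELECT
'general' AS partitionKey,
MIN(ID_frame) AS rowKey,
DATEADD(minute, -5, System.TimeStamp) AS window_start, -- window start = window end minus the window length
System.TimeStamp AS window_end,
COUNT(ID_frame) AS device_count
INTO
[IoT-Hub-output-table]
FROM
[IoT-Hub-input] TIMESTAMP BY custom_timestamp
GROUP BY TumblingWindow(minute, 5)
If output shows up within a few minutes here, the original query is fine and you simply need to wait out the full 3-hour window (plus any late-arrival tolerance).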
Turns out the query did output into my table, but with an amount of delay I didn't expect; I was waiting 20-30 minutes at most, while the first insertions began a little later than half an hour in. Thus I was cancelling the Analytics job before any output was produced and falsely assuming it just wouldn't output anything.
I found this to be the case after I noticed that 'sometimes' (when the job was running for long enough) there appeared to be some output. And in those output records I noticed the big delay between my custom timestamp field and the general timestamp field (which the engine uses to record when the entity was last updated).

Azure Stream Analytics input corrupts strings containing timezone info

I am using Azure Event Hub to collect time-based events and have connected Azure Stream Analytics (ASA) to it.
This results in the timezone info being lost at the ASA level.
What I ascertained is the following:
I have sent data in JSON format containing a string with a timestamp compatible with ISO 8601, e.g.:
"event_timestamp": "2016-09-02T19:51:38.657+02:00"
I checked by means of ServiceBus Explorer (thanks to the guys who wrote this tool) that this string arrived exactly as-is in Event Hub.
In Stream Analytics I added the event hub as an input. When I use the SAMPLE DATA option in the Azure portal, this results in data containing: "event_timestamp":"2016-09-02T17:51:38.6570000"
Why is Stream Analytics removing timezone info???
According to ISO 8601, not specifying a timezone in a timestamp means that the timestamp is interpreted as local time. Does that mean the timezone where the Azure resource is running? How can I use geo-replication in that case?
Does this mean that after consuming the data and presenting it in a dashboard, all times are relative to the time of the server where the Stream Analytics job runs?
Do I need to add the timezone information separately to the JSON payload and reconstruct it afterwards?
My conclusion is that ASA actually removes/destroys information from my data stream.
Imagine this ASA query: SELECT * INTO [myoutput] FROM [myinput]
This would change the content (*) of my data. All strings that appear to be a datetime with timezone info will be converted.
In my opinion this is very unwanted behaviour.
I am very interested in the opinions of others in this forum.
Everything in Azure runs in the UTC timezone, unless otherwise supported and explicitly configured (not many services support setting a timezone).
If you look closely at your quoted samples, you will notice that the timestamp is converted to UTC in ASA; that's why the timezone info is missing:
Sent to the event hub: "event_timestamp": "2016-09-02T19:51:38.657+02:00"
Received in ASA: "event_timestamp":"2016-09-02T17:51:38.6570000"
Note that your event was sent at 19:51:38.657 +02:00 and ASA reads 17:51:38.657 UTC, which is exactly the same point in time.
UPDATE
I am not an expert on the ISO standard, but here are some excerpts from the ASA documentation:
Azure Stream Analytics data types
datetime: Defines a date that is combined with a time of day with fractional seconds that is based on a 24-hour clock and relative to UTC (time zone offset 0).
Conversions:
datetime: a string is converted to datetime following the ISO 8601 standard.
It is documented that datetime is in UTC, hence there is no need to explicitly specify it. Whether this conforms to the ISO standard I cannot tell, first because Wikipedia is not an ISO document, and second because I am not an ISO expert.
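If the original offset matters downstream, one possible workaround (a sketch only, and it assumes all producers emit from a fixed +02:00 offset, which the thread does not state) is to derive a shifted copy in the query itself while keeping the UTC value; [myinput] and [myoutput] are the names from the example query above:
SELECT
CAST(event_timestamp AS datetime) AS event_time_utc, -- ASA normalizes the parsed value to UTC
DATEADD(minute, 120, CAST(event_timestamp AS datetime)) AS event_time_local -- shifted back to +02:00 for display
INTO
[myoutput]
FROM
[myinput]
A more robust option is to send the offset as its own numeric field in the JSON payload and apply it with DATEADD, since the offset cannot be recovered once the string has been normalized.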

Basic query with TIMESTAMP BY not producing output

I have a very basic setup, in which I never get any output if I use the TIMESTAMP BY statement.
I have a stream analytics job which is reading from Event Hub and writing to the table storage.
The query is the following:
SELECT
*
INTO
MyOutput
FROM
MyInput TIMESTAMP BY myDateTime;
If the query uses the TIMESTAMP BY statement, I never get any output events. I do see incoming events in the monitoring, and there are no errors either in monitoring or in the maintenance logs. I am pretty sure that the source data has the right column in the right format.
If I remove the TIMESTAMP BY statement, then everything works fine. The reason I need the TIMESTAMP BY statement in the first place is that I need to write a number of queries in the same job, writing various aggregations to different outputs, and if I use TIMESTAMP BY in one query, I am required to use it in all the other queries as well.
Am I doing something wrong? Perhaps SELECT * does not play well with TIMESTAMP BY? I just did not find any documentation explaining that...
{"myDateTime":"2015-08-02T10:59:02.0000000Z", "EventEnqueuedUtcTime":"2015-08-07T10:59:07.6980000Z"}
Late tolerance window: 00.00:00:05
All of your events are considered late arriving because myDateTime is 5 days before EventEnqueuedUtcTime. Can you try sending new events where myDateTime is in UTC and is "now" so it matches within a couple of seconds?
Also, when you started the job, what did you pick as the job start date time? Can you make sure you pick a date before the myDateTime values? You might try this first.
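As a quick check, a diagnostic sketch along these lines (using the names from your sample event and query) makes the gap visible; TIMESTAMP BY is deliberately omitted so the rows are not dropped by the late-arrival policy, and with a tolerance of 5 seconds any event whose lag exceeds that will be adjusted or dropped once TIMESTAMP BY is back in place:
SELECT
myDateTime,
EventEnqueuedUtcTime,
DATEDIFF(second, CAST(myDateTime AS datetime), EventEnqueuedUtcTime) AS lag_seconds -- how far the event timestamp trails the enqueue time
INTO
MyOutput
FROM
MyInput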
