Recently I have been trying to get data from IoT Hub into Data Explorer using the MXChip AZ3166. However, I am unable to map the EnqueuedTime variable onto the Data Explorer table, whereas the other variables are mapped just fine. I've included my code and screenshots below to help describe the issue. What might the problem be? I've tried using different types such as datetime and string for the EnqueuedTime variable, but it still does not show up in the table. Thank you.
.create table TelemetryIoTHub (EnqueuedTime: datetime, Temperature: real, Humidity: real, Pressure: real, GyroX: real, GyroY: real, GyroZ: real, AccelX: real, AccelY: real, AccelZ: real, MagX: real, MagY: real, MagZ: real)
.create table TelemetryIoTHub ingestion json mapping 'DataMapping' '[{"column":"EnqueuedTime","path":"$.enqueuedTime","datatype":"datetime"},{"column":"Humidity","path":"$.humidity","datatype":"real"},{"column":"Pressure","path":"$.pressure","datatype":"real"},{"column":"Temperature","path":"$.temperature","datatype":"real"},{"column":"AccelX","path":"$.accelX","datatype":"real"},{"column":"AccelY","path":"$.accelY","datatype":"real"},{"column":"AccelZ","path":"$.accelZ","datatype":"real"},{"column":"GyroX","path":"$.gyroX","datatype":"real"},{"column":"GyroY","path":"$.gyroY","datatype":"real"},{"column":"GyroZ","path":"$.gyroZ","datatype":"real"},{"column":"MagX","path":"$.magnetX","datatype":"real"},{"column":"MagY","path":"$.magnetY","datatype":"real"},{"column":"MagZ","path":"$.magnetZ","datatype":"real"}]'
[Screenshot: Table output]
[Screenshot: Telemetry output]
Update:
You should use the system name for this column; in this case the path should be $.iothub-enqueuedtime, and you also need to enable iothub-enqueuedtime under the Event system properties on the data connection. See the example in the ingest-from-IoT-Hub documentation:
{ "column" : "enqueuedtime", "Properties":{"Path":"$.iothub-enqueuedtime"}}'
Related
I'm trying to send a Timestamp field which is ISO 8601 with an offset ("2023-02-01T11:11:12.2220000+03:00").
Azure doesn't really handle offsets; I first encountered this when sending data to Event Hub.
I was hoping to resolve this by splitting the timestamp into two fields:
timestamp: 2023-02-01T11:11:12.2220000
offset: +03:00
and combining them in the SA query.
This seemed to work in the Query editor, where the test output shows the correct timestamp+offset;
however, when the data is sent to the output (in this case SQL, field type datetimeoffset), the value looks like this:
2023-02-01T08:11:12.2220000+00:00
I suspect this is because the timestamp field type in SA is datetime (as seen in the query editor test results window);
even if I cast to nvarchar, the field type is still datetime.
Is there a way to force SA to use specific types for fields (in this case, treat the field as a string rather than a datetime)?
Or, in general, how can I pass a value like "2023-02-01T11:11:12.2220000+03:00" through SA without altering it? Bonus points if it can be done in Event Hub as well.
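One possible approach is to rebuild the string in the SELECT itself, so the output column is a string rather than an inferred datetime. A minimal sketch, assuming the split timestamp and offset fields arrive in the payload (input and output names here are placeholders; CAST and CONCAT are standard Stream Analytics functions):

SELECT
    CONCAT(CAST([timestamp] AS nvarchar(max)), offset) AS original_ts
INTO
    [sqloutput]
FROM
    [eventhubinput]

One caveat: if SA has already parsed the incoming field as a datetime, casting back to nvarchar may not preserve the exact original formatting, so the safest route is to send the field from the device under a name and shape that SA treats as a plain string.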
I am currently trying to connect two different devices to the IoT Hub, and I need to separate the data from each device. To do so, I tried configuring my Stream Analytics query like this:
SELECT
deviceId, temperature, humidity, CAST(iothub.EnqueuedTime AS datetime) AS event_date
INTO
NodeMCUOutput
FROM
iothubevents
WHERE
deviceId = "NodeMCU1"
However, for some reason, no output is shown when the WHERE statement is in the query (outputs are shown without it, but then the data is not filtered). I need the WHERE statement to filter the data the way I want. Am I missing something? Are there any solutions to this? Thanks a lot. Cheers!
The device ID and other properties that are not in the message itself are included as metadata on the message. You can read that metadata using the GetMetadataPropertyValue() function. This should work for you:
SELECT
GetMetadataPropertyValue(iothubevents, 'IoTHub.ConnectionDeviceId') as deviceId,
temperature,
humidity,
CAST(GetMetadataPropertyValue(iothubevents, 'IoTHub.EnqueuedTime') AS datetime) AS event_date
INTO
NodeMCUOutput
FROM
iothubevents
WHERE
GetMetadataPropertyValue(iothubevents, 'IoTHub.ConnectionDeviceId') = 'NodeMCU1'
I noticed you used double quotes in the WHERE clause.
You need single quotes to get a match on strings. In this case it will be
WHERE deviceId = 'NodeMCU1'
If the deviceId is the one from the IoT Hub metadata, Matthijs' answer will help you retrieve it.
I am trying to get the count of all records present in Cosmos DB in a Lookup activity of Azure Data Factory. I need this value for a comparison with the outputs of other activities.
The query I used is SELECT VALUE count(1) from c
When I try to preview the data after entering this query, I get an error saying
One or more errors occurred. Unable to cast object of type
'Newtonsoft.Json.Linq.JValue' to type 'Newtonsoft.Json.Linq.JObject'
as shown in the image below:
[Screenshot: Azure Data Factory Lookup activity settings]
Could someone help me resolve this error? And if this is a limitation of Azure Data Factory, how else can I get the row count of the Cosmos DB container from within Data Factory?
I reproduced your issue on my side exactly.
I think the count result can't be mapped as a normal JSON object. As a workaround, I think you could use an Azure Function activity (inside the Azure Function you can use the SDK to execute any SQL you want) to output your desired result, e.g. {"number":10}, then chain the Azure Function activity with the other activities in ADF; a rough sketch is given after the ideas below.
Here is the contradiction right now:
The query outputs a scalar array, not something like a JSON object or even a JSON string.
However, the ADF Lookup activity only accepts a JObject, not a JValue. I can't use any built-in conversion function here because the query still has to be valid Cosmos DB SQL. I already submitted a ticket to the MS support team, but had no luck with this limitation.
I also tried SELECT count(1) AS num FROM c, which works in the Cosmos DB portal, but it still has a limitation because the query crosses partitions.
So all I can do here is explain the root cause of the issue; I can't change the product's behaviour.
Two rough ideas:
1. Try a non-partitioned collection and execute the SQL above to produce a JSON output.
2. If the count is not large, query the rows from the database and loop over the result with a ForEach activity.
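For illustration, a minimal sketch of the Azure Function workaround in Node.js, assuming an HTTP-triggered function and the @azure/cosmos SDK; the connection setting, database, and container names are placeholders:

// index.js - hypothetical HTTP-triggered Azure Function
const { CosmosClient } = require("@azure/cosmos");

module.exports = async function (context, req) {
    // Connection string, database, and container names are assumptions.
    const client = new CosmosClient(process.env.COSMOS_CONNECTION_STRING);
    const container = client.database("mydb").container("mycontainer");

    // SELECT VALUE COUNT(1) returns a scalar array such as [10].
    const { resources } = await container.items
        .query("SELECT VALUE COUNT(1) FROM c")
        .fetchAll();

    // Wrap the scalar in an object so downstream ADF activities receive a JObject.
    context.res = { body: { number: resources[0] } };
};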
You can use:
SELECT TOP 1 c.column FROM c ORDER BY c.column DESC
(where column is a placeholder for one of your fields); the result comes back as a JSON object rather than a bare scalar, so the Lookup activity can read it.
So, a quick overview of what I'm doing:
We're currently storing events in Azure Table storage from a Node.js cloud service using the "azure-storage" npm module. We're storing our own timestamps for these events (as opposed to using the Azure-defined one).
Now, we have coded a generic storage handler script that, for the moment, just stores all values as strings. To save refactoring this script, I was hoping there would be a way to tweak the query instead.
So, my question is, is it possible to query by datetime where the stored value is not actually a datetime field and instead a string?
My original query included the following:
.where( "_timestamp ge datetime'?'", timestamp );
In the above code I need to somehow have the query treat _timestamp as a datetime instead of a string...
Would something like the following work, or what's the best way to do it?
.where( "datetime _timestamp ge datetime'?'", timestamp );
AFAIK, if the attribute type is String in an Azure Table, you can't convert it to DateTime in a query. Thus you won't be able to use .where( "_timestamp ge datetime'?'", timestamp );
If you're storing your _timestamp in yyyy-MM-ddTHH:mm:ssZ format, then you could simply do a string-based comparison like
.where( "_timestamp ge ?", timestamp );
and that should work just fine, other than the fact that this query will do a full table scan rather than an optimized query. However, if you're storing the value in some other format, you may get different results.
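To make that concrete, here's a minimal sketch using the "azure-storage" module; the table name and connection setup are assumptions, and it relies on ISO 8601 strings sorting lexicographically in chronological order:

// Hypothetical example using the "azure-storage" npm module
var azure = require("azure-storage");

// Uses the AZURE_STORAGE_CONNECTION_STRING environment variable
var tableService = azure.createTableService();

// ISO 8601 timestamps compare correctly as plain strings
var timestamp = "2016-01-01T00:00:00Z";
var query = new azure.TableQuery().where("_timestamp ge ?", timestamp);

tableService.queryEntities("events", query, null, function (error, result) {
    if (!error) {
        console.log(result.entries.length + " entities matched");
    }
});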
In my Data Sync group there are two tables, say A and B, where some columns of A reference table B. B is the master table.
When a sync occurs, table A is processed ahead of table B, so I get a foreign key reference error.
Does Azure Data Sync process tables in alphabetical order?
Is there any solution for this?
Thanks in advance.
As @JuneT suggested, you might want to post this question in the SQL Azure Forum (http://aka.ms/sqlazureforum). When you do, they have asked specifically that you include the following information in Data Sync related questions:
Server id: Your server id
Region: Where your Data Sync Server is located/created
Time / Date: Date and time when you encountered the issue
Tracing Id: The tracing id you see in the message in the UI