How to get local time when Azure Data Factory Linked Service parses the timezone offset to UTC

I have a data factory pipeline that receives JSON files from an Azure Blob storage.
These files have the following structure:
{
  "Time": {
    "UTC": {
      "Sent": "2020-09-01T11:45:00.0Z"
    }
  },
  "LocalTime": {
    "Original": {
      "Sent": "2020-09-01T13:45:00+02:00"
    }
  }
}
When the Lookup activity gets the file from the blob, it converts the local time to UTC. I would like to ignore the offset and just grab the datetime as is.
How do I go about doing this?

According to your comment:
We decided to solve this by stripping the offset with regex while moving the folder to the blob using a logic app.
We are glad to hear that you found a solution. I am posting it as an answer so others can reference this approach; it may also be beneficial to other community members. Thank you!
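For reference, the same stripping can also be done with a workflow expression instead of a regex. A minimal sketch, assuming the local timestamp is held in a variable named localSent (a name used purely for illustration) and always starts with a fixed-length yyyy-MM-ddTHH:mm:ss prefix:
@substring(variables('localSent'), 0, 19)
For "2020-09-01T13:45:00+02:00" this returns "2020-09-01T13:45:00" with the offset dropped.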

Thanks for posting the question; I definitely never thought it would behave the way it did. For clarity for others: when we try
@activity('Lookup1').output.firstRow.LocalTime.Original.Sent it gives the output "2020-09-01T11:45Z" and not "2020-09-01T13:45:00+02:00".
This is what I tried while creating the dataset: create it as if the file is delimited and not JSON (this is what I think you are doing); the intent is to read the content of the whole file as a string. Please adjust the column and row delimiters as shown below.
Now you can use the expression (I hate to hard-code, but we can sure make it dynamic):
@substring(string(activity('Lookup1').output.firstRow),187,16)
Output
{
"name": "Datepart",
"value": "2020-09-01T13:45"
}
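If you prefer not to hard-code the 187, here is a hedged sketch of a more dynamic variant. It assumes the serialized first row contains the literal sequence "Sent":" immediately before the local timestamp; the +7 offset would need adjusting (for example to +9) if those quotes appear escaped in the serialized string:
@substring(string(activity('Lookup1').output.firstRow), add(lastIndexOf(string(activity('Lookup1').output.firstRow), 'Sent'), 7), 16)
lastIndexOf picks the second occurrence of "Sent" (the LocalTime one), since it appears after the UTC one in the file.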
HTH

Related

ADF: can't build simple Json file transformation (one field flattening)

I need help transforming a simple JSON file inside an Azure Data Flow. I need to flatten just one field, date_sk, in the example here:
{
  "date_sk": {"string": "2021-09-03"},
  "is_influencer": 0,
  "is_premium": -1,
  "doc_id": "234"
}
Desired transformation:
"date_sk": {"string":"2021-09-03"}
to become
"dateToGroupBy" : "2021-09-03"
I create the source stream; note the strange projection Azure picks: there is no "string" field anymore, but this is how the automatic Azure transformation works for some reason:
Data preview of the same source stream node:
And here is how it suggests I transform it in a separate "Derived Column" modifier. I played with the right-hand part, but this (date_sk.{}) is the only format I was able to pick that does not display any error:
But then the output dateToGroupBy field turns out to be empty:
Any ideas on what could have gone wrong and how I can build the expected transformation? Thank you.
Alright, it turned out to be a Microsoft bug in ADF.
ADF stumbles over "string" as a JSON field name and can't handle it, even though schema and data validation pass through OK and show no errors.
When I replace "date_sk": {"string":"2021-09-03"} with "date_sk": {"s1":"2021-09-03"}, or anything other than string, everything starts working just fine
and dateToGroupBy is filled with date values taken from date_sk.s1.
When I put string back, it shows NULL in the output values.
It is supposed to either show an error at the verification stage or handle this field name properly.
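For anyone hitting the same issue: after the rename, the Derived Column itself is just a dot reference into the struct. A minimal sketch, using the column names from the question (the toString wrapper is optional and shown only for illustration):
Column: dateToGroupBy
Expression: toString(date_sk.s1)
With the inner field still named string, the same expression returns NULL because of the bug described above.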

Get value from json in LogicApp

Rephrasing the question entirely, as the first attempt was unclear.
In my logic app I am reading a .json from blob which contains:
{
  "alpha": {
    "url": "https://linktoalpha.com",
    "meta": "This logic app does job aaaa"
  },
  "beta": {
    "url": "https://linktobeta.com",
    "meta": "This logic app does job beta"
  },
  "theta": {
    "url": "https://linktotheta.com",
    "meta": "This logic app does job theta"
  }
}
I'm triggering the logic app with a http post which contains in the body:
{ "logicappname": "beta" }
But the value for 'logicappname' could be alpha, beta or theta. I now need to set a variable which contains the url value for 'beta'. How can this be achieved without jsonpath support?
I am already json parsing the file contents from the blob and this IS giving me the tokens... but I cannot see how to select the value I need. Would appreciate any assistance, thank you.
For your requirement, I think you can just use the "Parse JSON" action. Please refer to the steps below:
1. I upload a file testJson.json to my blob storage, then get it and parse it in my logic app.
2. We can see there are three url values in the screenshot below. As you want the url value for beta, which is the second one, we can choose the second one.
If you want to get the url value by the logicappname parameter from the "When a HTTP request is received" trigger, you can use an expression when you create the result variable.
In my screenshot, the expression is:
body('Parse_JSON')?[triggerBody()?['logicappname']]?['url']
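With the sample file above and a trigger body of { "logicappname": "beta" }, the expression resolves like this (shown step by step for illustration):
body('Parse_JSON')?[triggerBody()?['logicappname']]?['url']
=> body('Parse_JSON')?['beta']?['url']
=> "https://linktobeta.com"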
The description of your question is a little unclear and I'm confused about the meaning of "I am already json parsing the file contents from the blob and this IS giving me the tokens": why are "tokens" involved? Also, in the original question it seems you wanted to do it with jsonpath, but in the latest description you said without jsonpath. So if I have misunderstood your question, please let me know. Thanks.
Not sure if I understand your question, but I believe you can use the Parse JSON action after the HTTP trigger.
With this you will get control over the incoming JSON message and you can choose the url value as dynamic content in the subsequent actions.
Let me know if my understanding of your question is wrong.

Is there any possibility for adding thousand separator inside an Azure Logic App Expression?

I'm creating an Azure Logic App which sends a JSON payload to a REST service. The JSON is built inside the Logic App with a Compose action. The data for the JSON comes from different REST services.
The services deliver numbers like "13251", "11231543.3", etc.
I need to transform and send the numbers with thousand separators, like "13.251", "11,231,543.3", etc.
My code looks like:
{
  "Item": {
    "nr": "@{body('current')?['nr']}",
    "amount": "@{body('current')?['amount']}"
  }
}
So I basically need something like: .ToString("#,##0.00")
"13251" => "13.251"
"11231543.3" => "11,231,543.3"
Thanks for your help!
You cannot send numbers with thousand separators in Json, since it will invalidate the Json.
Consider this Json:
{
"age": 123,456.0
}
This will be seen as:
{
"age": 123,
456.0
}
Which is invalid Json.
If you wish to format it as a string: there doesn't seem to be a conversion available to format numbers. There are several format-enabled conversions for DateTime.
More info: Reference guide to using functions in expressions for Azure Logic Apps and Microsoft Flow
You might want to try the Execute JavaScript Code action for this.
Hope this helps!
It can be achieved in a Logic App, but it's complicated. We can use the math functions (div and mod) together with the string functions, an If condition, an Until loop, and a few variables. I achieved it with the actions and methods mentioned above, but it's too complicated. I think it is easier to do it with additional code in an Azure Function.
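To give an idea of the div/mod approach, here is a minimal sketch for a single separator, assuming a non-negative integer between 1,000 and 999,999 held in a variable named n (the name and the range are assumptions for this illustration; larger numbers and decimals need an Until loop as described above):
concat(string(div(variables('n'), 1000)), '.', substring(concat('00', string(mod(variables('n'), 1000))), sub(length(concat('00', string(mod(variables('n'), 1000)))), 3), 3))
For n = 13251 this yields "13.251"; padding with '00' keeps the last group three digits wide, so 13005 becomes "13.005".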

U-SQL: How to skip files from analysis based on content

I have a lot of files each containing a set of json objects like this:
{ "Id": "1", "Timestamp":"2017-07-20T10:43:21.8841599+02:00", "Session": { "Origin": "WebClient" }}
{ "Id": "2", "Timestamp":"2017-07-20T10:43:21.8841599+02:00", "Session": { "Origin": "WebClient" }}
{ "Id": "3", "Timestamp":"2017-07-20T10:43:21.8841599+02:00", "Session": { "Origin": "WebClient" }}
etc.
Each file contains information about a specific type of session. In this case they are sessions from a web app, but they could also be sessions of a desktop app, in which case the value for Origin is "DesktopClient" instead of "WebClient".
For analysis purposes say I am only interested in DesktopClient sessions.
All files representing a session are stored in Azure Blob Storage like this:
container/2017/07/20/00399076-2b88-4dbc-ba56-c7afeeb9ef77.json
container/2017/07/20/00399076-2b88-4dbc-ba56-c7afeeb9ef78.json
container/2017/07/20/00399076-2b88-4dbc-ba56-c7afeeb9ef79.json
Is it possible to skip files whose first line already makes it clear that it is not a DesktopClient session file, as in my example? I think it would save a lot of query resources if files that I know do not contain the right session type could be skipped, since they can be quite big.
At the moment my query reads the data like this:
@RawExtract = EXTRACT [RawString] string
FROM @"wasb://plancare-events-blobs@centrallogging/2017/07/20/{*}.json"
USING Extractors.Text(delimiter:'\b', quoting : false);
@ParsedJSONLines = SELECT Microsoft.Analytics.Samples.Formats.Json.JsonFunctions.JsonTuple([RawString]) AS JSONLine
FROM @RawExtract;
...
Or should I create my own version of Extractors.Text, and if so, how should I do that?
To answer some questions that popped up in the comments to the question first:
At this point we do not provide access to the Blob Store meta data. That means that you need to express any meta data either as part of the data in the file or as part of the file name (or path).
Depending on the cost of extraction and sizes of files, you can either extract all the rows and then filter out the rows where the beginning of the row is not fitting your criteria. That will extract all files and all rows from all files, but does not need a custom extractor.
Alternatively, write a custom extractor that checks for only the files that are appropriate (that may be useful if the first solution does not give you the performance and you can determine the conditions efficiently inside the extractors). Several example extractors can be found at http://usql.io in the example directory (including an example JSON extractor).
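A minimal sketch of the first option, continuing from the @RawExtract rowset in the question (the Contains check and the exact literal are assumptions about how the Origin value appears in each line):
@FilteredRows =
    SELECT [RawString]
    FROM @RawExtract
    WHERE [RawString].Contains("\"Origin\": \"DesktopClient\"");
This still reads every file, but only the matching rows flow into the JSON parsing step.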

How to store a time value in MongoDB using SailsJS v0.11.3?

I'm working with SailsJS and MongoDB, and I have an API with two models. I want to store Date and Time separately, but as the official documentation says, there is no Time attribute type, just Date and Datetime. So I'm using Datetime to store time values.
I send the values in the request to be stored in the database. I have no problem storing dates; I just send a value like:
2015-12-16
and that's it, it is stored in the table with no problem. But when I want to store a time value like:
19:23:12
it doesn't work. The error is:
{
  "error": "E_VALIDATION",
  "status": 400,
  "summary": "1 attribute is invalid",
  "model": "ReTime",
  "invalidAttributes": {
    "value": [
      {
        "rule": "datetime",
        "message": "`undefined` should be a datetime (instead of \"19:23:12\", which is a string)"
      }
    ]
  }
}
So, any idea how to send the time value to be stored in a DateTime attribute?
I have also tried sending it in different formats, like:
0000-00-00T19:23:12.000Z
0000-00-00T19:23:12
T19:23:12.000Z
19:23:12.000Z
19:23:12
But none of them works.
I was also thinking of storing both values (Date and Time) as plain text, I mean as String attributes. But I need to run some queries and I don't know if it will affect performance with the Waterline ORM.
Any kind of help will come in handy.
Thanks a lot!
You have two options here. The best would be to simply store the date and time as a datetime and parse them into separate values when you need to use them. MongoDB will store this in BSON format and this will be the most efficient method.
Otherwise, you could use string and create a custom validation rule as described in the Sails documentation.
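If you go the string route, a minimal sketch of such a custom rule for Sails v0.11 / Waterline (the rule name, regex, and model layout are assumptions for illustration):
// api/models/ReTime.js
module.exports = {
  types: {
    // custom rule: accepts values like "19:23:12"
    timeString: function (value) {
      return /^([01]\d|2[0-3]):[0-5]\d:[0-5]\d$/.test(value);
    }
  },
  attributes: {
    value: {
      type: 'string',
      timeString: true
    }
  }
};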
