Serilog Expressions output template newline in Azure Function app

I have just added the Serilog Expressions package to my Azure Function App so that I could shorten the SourceContext down to just the class name (which works beautifully, btw). The function app's config parameters are stored in the Azure portal under Settings -> Configuration -> Application Settings. The setting for the log entry template is named SerilogSettings:OutputTemplate, and the value entered for it is exactly the same as in a different app where it works correctly:
{#t:yyyy-MM-dd HH:mm:ss.fff zzz}|{CorrelationId}|{#l:u3}|{Substring(SourceContext, LastIndexOf(SourceContext, '.') + 1)}|{#m}\n{#x}
The problem I'm seeing is with the newline character in there. The Application Settings section in Azure has an "Advanced Edit" view, where you can see that the setting values are translated behind the scenes into one big JSON string, and that JSON string is what the application actually reads at startup. Here's the key chunk from that JSON:
[
  ...
  {
    "name": "SerilogSettings:OutputTemplate",
    "value": "{#t:yyyy-MM-dd HH:mm:ss.fff zzz}|{CorrelationId}|{#l:u3}|{Substring(SourceContext, LastIndexOf(SourceContext, '.') + 1)}|{#m}\\n{#x}",
    "slotSetting": false
  },
  ...
]
Notice that the newline \n has been escaped and is now \\n. So at startup, that template string is handed to Serilog, which does not understand the \\n. The end result is that the log entries written have no newline characters in them, and the log file consists of one enormously long line.
What are my options to address this while still using the Expressions package?

When using Serilog, the Message property in the template should have the l (literal) format specifier to achieve the desired format:
"{Timestamp:yyyy-MM-dd HH:mm:ss.fff} [{Level:u3}] {Message:l}{NewLine}{Exception}"
Refer to this blog for more details on expression implementations.

Related

Why can't Azure Search import JSON blobs?

When importing data using the configuration found below, Azure Cognitive Search returns the following error:
Error detecting index schema from data source: ""
Is this configured incorrectly? The files are stored in the container "example1" and in the blob folder "json". When creating the same index with the same data in the past there were no errors, so I am not sure why it is different now.
Import data:
Data Source: Azure Blob Storage
Name: test-example
Data to extract: Content and metadata
Parsing mode: JSON
Connection string:
DefaultEndpointsProtocol=https;AccountName=EXAMPLESTORAGEACCOUNT;AccountKey=EXAMPLEACCOUNTKEY;
Container name: example1
Blob folder: json
The .json file structure:
{
  "string1": "value1",
  "string2": "value2",
  "string3": "value3",
  "string4": "value4",
  "string5": "value5",
  "string6": "value6",
  "string7": "value7",
  "string8": "value8",
  "list1": [
    {
      "nested1": "value1",
      "nested2": "value2",
      "nested3": "value3",
      "nested4": "value4"
    }
  ],
  "FileLocation": null
}
Here is an image of the screen with the error when clicking the "Next: Add cognitive skills (Optional)" button:
To clarify there are two problems:
1) There is a bug in the portal where the actual error message is not showing up for errors, hence we are observing the unhelpful empty string "" as an error message. A fix is on the way and should be rolled out early next week.
2) There is an error when the portal attempts to detect index schema from your data source. It's hard to say what the problem is when the error message is just "". I've tried your sample data and it works fine with importing.
I'll update the post once the fix for displaying the error message is out. In the meantime (again we're flying blind here without the specific error string) here are a few things to check:
1) Make sure your firewall rules allow the portal to read from your blob storage
2) Make sure there are no extra characters inside your JSON files. Check that the whitespace characters really are whitespace (you should be able to open the file in VSCode and check).
Update: The portal fix for the missing error messages has been deployed. You should be able to see a more specific error message should an error occur during import.
Seems to me that this is a problem related to the list1 data type. Make sure you're selecting "Collection(Edm.String)" for it during index creation.
For more info, please check step 5 of the following link: https://learn.microsoft.com/en-us/azure/search/search-howto-index-json-blobs
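For illustration only, a minimal sketch of how that field might be declared in the index definition JSON (the attribute values here are assumptions, not from the original posts):
{
  "name": "list1",
  "type": "Collection(Edm.String)",
  "searchable": true,
  "retrievable": true
}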
I have been in contact with Microsoft, and this is a bug in the Azure portal. The issue is that the connection string wizard does not append the endpoint suffix correctly. They have recommended manually pasting the connection string, but this still does not work for me. So this is a suggested answer by Microsoft, but I don't believe it is completely correct, because the portal outputs the same error message:
Error detecting index schema from data source: ""

Azure: How to write a path to get a file from a time-series-partitioned folder using Azure Logic Apps

I am trying to retrieve a .csv file from Azure Blob Storage using Logic Apps.
I set the Azure Storage Explorer path in the parameters, and in the Get blob content action I use that parameter.
In the Parameters I have set the value as:
concat('Directory1/','Year=',string(int(substring(utcNow(),0,4))),'/Month=',string(int(substring(utcnow(),5,2))),'/Day=',string(int(substring(utcnow(),8,2))),'/myfile.csv')
So at run time this path should resolve to:
Directory1/Year=2019/Month=12/Day=30/myfile.csv
but during execution the action fails with the following error message:
{
  "status": 400,
  "message": "The specifed resource name contains invalid characters.\r\nclientRequestId: 1e2791be-8efd-413d-831e-7e2cd89278ba",
  "error": {
    "message": "The specifed resource name contains invalid characters."
  },
  "source": "azureblob-we.azconn-we-01.p.azurewebsites.net"
}
So my question is: how do I write the path to get data from the time-series-partitioned folder?
Joy Wang's response was partially correct.
Parameters in Logic Apps treat values as plain Strings and will not evaluate functions such as concat().
The correct way to use the concat function is in an expression.
And my solution to the problem is:
concat('container1/','Directory1/','Year=',string(int(substring(utcNow(),0,4))),'/Month=',string(int(substring(utcnow(),5,2))),'/Day=',string(int(substring(utcnow(),8,2))),'/myfile.csv')
You should not use that in the Parameters. When you put the line concat('Directory1/','Year=',string(int(substring(utcNow(),0,4))),'/Month=',string(int(substring(utcnow(),5,2))),'/Day=',string(int(substring(utcnow(),8,2))),'/myfile.csv') in the Parameters, its type is String; it is treated as a literal string by the Logic App, and the functions will not take effect.
You also need to include the container name in the concat(), and there is no need to use string(int()), because utcNow() and substring() both return a String.
To fix the issue, use the line below directly in the Blob option, my container name is container1.
concat('container1/','Directory1/','Year=',substring(utcNow(),0,4),'/Month=',substring(utcnow(),5,2),'/Day=',substring(utcnow(),8,2),'/myfile.csv')
Update:
As mentioned in #Stark's answer, if you want to drop the leading 0 (so Month=09 becomes Month=9), you can convert the substring from string to int, then convert it back to string:
concat('container1/','Directory1/','Year=',string(int(substring(utcNow(),0,4))),'/Month=',string(int(substring(utcnow(),5,2))),'/Day=',string(int(substring(utcnow(),8,2))),'/myfile.csv')

Is there any possibility for adding thousand separator inside an Azure Logic App Expression?

I'm creating an Azure Logic App which sends JSON to a REST service. The JSON is built inside the Logic App with a Compose action. The data for the JSON comes from different REST services.
The Services deliver me numbers like "13251", "11231543.3" etc.
I need to transform and send the numbers with thousand separators, like "13.251", "11,231,543.3", etc.
My code looks like:
{
  "Item": {
    "nr": "#{body('current')?['nr']}",
    "amount": "#{body('current')?['amount']}"
  }
}
So I basically need something like: .ToString("#,##0.00")
"13251" => "13.251"
"11231543.3" => "11,231,543.3"
Thanks for your help!
You cannot send numbers with thousand separators in JSON, since that would invalidate the JSON.
Consider this JSON:
{
  "age": 123,456.0
}
This will be seen as:
{
  "age": 123,
  456.0
}
Which is invalid JSON.
If you wish to format it as a string: there doesn't seem to be a conversion available for formatting numbers. There are several format-enabled conversions for DateTime.
More info: Reference guide to using functions in expressions for Azure Logic Apps and Microsoft Flow
You might want to try the Execute JavaScript code action for this. Sample:
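A rough sketch of what the inline code could look like (the upstream action name 'current' and the payload shape are assumptions borrowed from the question; toLocaleString output depends on the runtime's locale support):
// The Execute JavaScript Code action exposes earlier actions via workflowContext.
// 'current' and its body shape are hypothetical, mirroring the question's Compose.
var raw = workflowContext.actions.current.outputs.body.amount;
// Number("11231543.3").toLocaleString('en-US') -> "11,231,543.3"
return Number(raw).toLocaleString('en-US');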
Hope this helps!
It can be achieved in a Logic App, but it's complicated. We can use the Math functions (div and mod) together with the String functions, an "if condition", an "until" loop, and some initialized variables. I achieved it with the actions and methods mentioned above, but it's too complicated. I think it is easier to do it by adding code in an Azure Function.

Azure Resource Manager - Convert value to 'lower'

I was recently using ARM templates to deploy multiple resources into Azure. While deploying storage accounts, I ran into an issue caused by some constraints Azure puts up, like:
The name of a storage account must not contain upper-case letters.
Its max length must be 24.
I want this name from the user and can handle the 2nd issue using the "maxLength" property on 'parameters'. But for lower case there is no such property in 'parameters', and I'm also unable to find any function which will convert the user-entered value to lower case.
What I expect:
A method to convert the user-entered value to lower case.
Any other method to suit my use case.
Thanks in advance.
You should look at the string function reference for ARM templates.
You need to create a variable, or just add those functions to the name input, like so:
"name": "[toLower(parameters('Name'))]"
or add a substring method, something like this:
"variables": {
"storageAccountName": "[tolower(concat('sawithsse', substring(parameters('storageAccountType'), 0, 2), uniqueString(subscription().id, resourceGroup().id)))]"
},
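Since the question also mentions the "maxLength" property, the corresponding parameter could be declared roughly like this (a sketch combining both constraints, using the 'Name' parameter from the snippet above):
"parameters": {
  "Name": {
    "type": "string",
    "maxLength": 24
  }
}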

JSON stored in AWS EB environment variables is retrieved without quotes

I'm running a node.js EB container and trying to store JSON inside an Environment Variable. The JSON is stored correctly, but when retrieving it via process.env.MYVARIABLE it is returned with all the double quotes stripped.
E.g. MYVARIABLE looks like this:
{ "prop": "value" }
when I retrieve it via process.env.MYVARIABLE its value is actually { prop: value }, which isn't valid JSON. I've tried to escape the quotes with '\', i.e. { \"prop\": \"value\" }; that just adds more weird behavior, where the string comes back as {\ \"prop\\":\ \"value\\" }. I've also tried wrapping the whole thing in single quotes, e.g. '{ "prop": "value" }', but it seems to strip those out too.
Anyone know how to store JSON in environment variables?
EDIT: Some more info. It would appear that certain characters are being doubly escaped when you set an environment variable. E.g., if I wrap the object in single quotes, the value when I fetch it using the SDK becomes:
\'{ "prop": "value"}\'
Also, if I leave the quotes out, backslashes get escaped, so if the object looks like {"url": "http://..."} the result when I query via the SDK is {"url": "http:\\/\\/..."}.
Not only is it mangling the text, it's also rearranging the JSON properties, so properties are appearing in a different order than what I set them to.
UPDATE
So I would say this seems to be a bug in AWS, based on the fact that it mangles the values that are submitted. This happens whether I use the node.js SDK or the web console. As a workaround I've taken to replacing double quotes with single quotes on the JSON object during deployment, and then back again in the application.
Use base64 encoding
An important string is being auto-magically mangled. We don't know the internals of EB, but we can guess it is parsing JSON. So don't store JSON, store the base64-encoded JSON:
a = `{ "public": { "s3path": "https://d2v4p3rms9rvi3.cloudfront.net" } }`
x = btoa(a) // store this as B_MYVAR
// "eyAicHVibGljIjogeyAiczNwYXRoIjogImh0dHBzOi8vZDJ2NHAzcm1zOXJ2aTMuY2xvdWRmcm9udC5uZXQiIH0gfQ=="
settings = JSON.parse(atob(process.env.B_MYVAR))
settings.public.s3path
// "https://d2v4p3rms9rvi3.cloudfront.net"
// Or even:
process.env.MYVAR = atob(process.env.B_MYVAR)
// Sets MYVAR at runtime, hopefully soon enough for your purposes
Since this is JS, there are caveats about UTF8 and node/browser support, but I think atob and btoa are common. Docs.
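If atob/btoa aren't available in your Node version, Buffer does the same job; a minimal sketch under that assumption (reusing the names from the snippet above):
// Encode once and store the result as B_MYVAR in the EB environment:
const json = JSON.stringify({ public: { s3path: "https://d2v4p3rms9rvi3.cloudfront.net" } });
const encoded = Buffer.from(json, "utf8").toString("base64");

// Decode at application startup:
const settings = JSON.parse(Buffer.from(process.env.B_MYVAR, "base64").toString("utf8"));
// settings.public.s3path -> "https://d2v4p3rms9rvi3.cloudfront.net"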
