How to format Azure WAD log messages pushed using a sink?

We have a bunch of Azure Service Fabric services. We are currently using WAD to push all the ETW logs to Event Hub. An event read from the Azure Event Hub looks like this:
{
"records": [{
"time": "2016-12-01T03:54:36.3117117Z",
"category": "-",
"level": "Informational",
"properties": {
"DeploymentId": "2c07d034-de51-4c7b-a733-7147124512ef",
"Role": "IaaS",
"RoleInstance": "-",
"Level": 0,
"ProviderGuid": "a26b2183-a5f0-5eeb-a02d-ea55c138fcb9",
"ProviderName": "-",
"EventId": 1,
"Pid": 5836,
"Tid": 2936,
"OpcodeName": "",
"KeywordName": "Session3;Session2;Session1;Session0",
"TaskName": "-",
"ChannelName": "",
"EventMessage": "Verbose",
"ActivityId": "00000000-0000-0000-0000-000000000000",
"RelatedActivityId": "00000000-0000-0000-0000-000000000000",
}
"Message": "traceId=\"c365ffeb-0c10-4b8b-bc12-12a79aab19bd\" correlationId=\"\" userId=\"System\"auditTypeId=\"1\" auditEvent=\"-\" auditSource=\"-" cloudDeploymentId=\"\" auditLevel=\"Verbose\" auditMessage=\"-" tags=\"\" dataCenter=\"**\""
}]
}
Is there a way I can format the message in Azure WAD? Basically, I want to format the Message to have a proper separator between the key-value pairs instead of just a space.
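In the meantime, if the Message stays in this key="value" form, one possible workaround is to parse it on the consumer side after reading from Event Hub. The snippet below is a minimal sketch (not WAD configuration); the field names are simply taken from the sample event above.

import json
import re

# Matches key="value" pairs; the value may be empty.
PAIR_RE = re.compile(r'(\w+)="([^"]*)"')

def parse_wad_message(message: str) -> dict:
    """Turn the space-separated key="value" string into a dict."""
    return {key: value for key, value in PAIR_RE.findall(message)}

sample = ('traceId="c365ffeb-0c10-4b8b-bc12-12a79aab19bd" correlationId="" '
          'userId="System" auditTypeId="1" auditLevel="Verbose" dataCenter="**"')

print(json.dumps(parse_wad_message(sample), indent=2))
# {"traceId": "c365ffeb-...", "correlationId": "", "userId": "System", ...}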

Related

Azure Function's log streams from Azure Event Hub can't be consumed by Filebeat

Our team wants to import the logs of an Azure Function into Elasticsearch via Azure Event Hub and Filebeat. We followed this reference to set up an Event Hub for the Azure Function, but we faced an issue with the format of the log stream.
First, let me show the format we expect. Take Azure PostgreSQL's log from Event Hub as an example:
[
{
"records": [
{
"time": "2023-01-04T03:45:31.1040000Z",
"properties": {
"timestamp": "2023-01-04 03:45:31.104 UTC",
"processId": 8909,
"errorLevel": "LOG",
"sqlerrcode": "00000",
"message": "2023-01-04 03:45:31 UTC-63b4f65b.22cd-LOG: connection received: host=<host> port=<port>"
},
"resourceId": "/SUBSCRIPTIONS/<my subscription id>/RESOURCEGROUPS/<my resource group>/PROVIDERS/MICROSOFT.DBFORPOSTGRESQL/FLEXIBLESERVERS/<postgres server>",
"category": "PostgreSQLLogs",
"operationName": "LogEvent"
}
]
}
]
Notice that properties is a flattened JSON object, so it can be consumed by Filebeat. We want this kind of properties. But the Azure Function's log looks like the following, where properties is a string rather than a flattened JSON object:
[
{
"records": [
{
"level": "Informational",
"resourceId": "/SUBSCRIPTIONS/<my subscription id>/RESOURCEGROUPS/<my resource group>/PROVIDERS/MICROSOFT.WEB/SITES/<my azure function>"
"operationName": "Microsoft.Web/sites/functions/log",
"category": "FunctionAppLogs",
"time": "01/04/2023 01:55:00",
"properties": "{'appName':'<my azure function>','roleInstance':'<id>','message':'Host Status: {\\n \\'id\\': \\'<function app id>\\',\\n \\'state\\': \\'Running\\',\\n \\'version\\': \\'4.13.0.0\\',\\n \\'versionDetails\\': \\'4.13.0+da9a765ed67be48c79440526f78fa1b5c6efdeea\\',\\n \\'platformVersion\\': \\'99.0.10.764\\',\\n \\'instanceId\\': \\'<instance id>\\',\\n \\'computerName\\': \\'<computer name>\\',\\n \\'processUptime\\': 69254486,\\n \\'functionAppContentEditingState\\': \\'Unknown\\'\\n}','category':'Host.Controllers.Host','hostVersion':'4.13.0.0','hostInstanceId':'<host id>','level':'Information','levelId':2,'processId':1}",
"EventStampType": "Stamp",
"EventPrimaryStampName": "waws-prod-ty1-081",
"EventStampName": "waws-prod-ty1-081",
"Host": "<host name>",
"EventIpAddress": "<ip address>"
},
The string value of properties can't be processed by Filebeat's decode_json_fields either, because the format is not valid JSON (it is 'key': value rather than "key": value). Is there any way to correct the format of properties before it is consumed by Filebeat? By the way, our Azure Function is deployed using a container.
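One possible workaround, assuming the single-quoted string is otherwise a well-formed Python-style literal, is to re-serialize properties somewhere between Event Hub and Filebeat (for example in a small forwarder function). This is only a minimal sketch, not a Filebeat feature:

import ast
import json

def normalize_properties(record: dict) -> dict:
    """If 'properties' arrived as a single-quoted pseudo-JSON string,
    parse it as a Python literal and replace it with a real object."""
    props = record.get("properties")
    if isinstance(props, str):
        try:
            record["properties"] = ast.literal_eval(props)
        except (ValueError, SyntaxError):
            pass  # leave the original string if it cannot be parsed
    return record

raw = {"category": "FunctionAppLogs",
       "properties": "{'appName':'my-func','level':'Information','levelId':2,'processId':1}"}

print(json.dumps(normalize_properties(raw), indent=2))
# "properties" is now a nested object, so decode_json_fields is no longer needed for it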

how to get multiple folder path on the azure logic app

I have a scenario: I want to build an Azure Logic App that gets documents from various folders in SharePoint and sends an email notification. My confusion is: how can I give multiple input folder paths?
I'm going to make an assumption about your architecture in my answer. I'm assuming you want to process multiple files in different sites within the same SharePoint tenant. So, not across tenants.
To achieve what you're asking for, I created a Parse JSON action which takes in the following structure (as an example; obviously the structure is the key point here, not the data) ...
Scenario 1 - Specific Files
[
{
"SiteName": "ExampleSolution",
"FileName": "/Shared Documents/General/Book.xlsx"
},
{
"SiteName": "TestSite",
"FileName": "/Shared Documents/Test Folder/Document.docx"
}
]
You need to authenticate to the SharePoint tenant with the appropriate user.
Then, in a For Each action, loop through each item and retrieve the contents of each document using the Get file content using path action.
Site Address = concat('https://yourtenant.sharepoint.com/sites/', items('For_each')?['SiteName'])
File Path = File Name (from Dynamic Content)
It will then retrieve the contents dynamically using those expressions.
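To illustrate what the For Each loop and the concat() expression end up producing, here is a small sketch (purely illustrative; "yourtenant" is the same placeholder as in the expression above):

# Mimics what the Logic App For Each + concat() expression compute for each item
files = [
    {"SiteName": "ExampleSolution", "FileName": "/Shared Documents/General/Book.xlsx"},
    {"SiteName": "TestSite", "FileName": "/Shared Documents/Test Folder/Document.docx"},
]

for item in files:
    site_address = "https://yourtenant.sharepoint.com/sites/" + item["SiteName"]
    file_path = item["FileName"]
    # The "Get file content using path" action is called with these two values
    print(site_address, file_path)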
File 1 (Excel Document)
File 2 (Word Document)
Scenario 2 - All Files
If you want to do it for all files, just change it up slightly ...
[
{
"FolderName": "/Shared Documents/General",
"SiteName": "ExampleSolution"
},
{
"FolderName": "/Shared Documents/Test Folder",
"SiteName": "TestSite"
}
]
Site Address = concat('https://yourtenant.sharepoint.com/sites/', items('For_each')?['SiteName'])
File Identifier = Folder Name (from Dynamic Content)
Output - Folder 1
[
{
"Id": "%252fShared%2bDocuments%252fGeneral%252fBook.xlsx",
"Name": "Book.xlsx",
"DisplayName": "Book.xlsx",
"Path": "/Shared Documents/General/Book.xlsx",
"LastModified": "2021-12-24T02:56:14Z",
"Size": 15330,
"MediaType": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
"IsFolder": false,
"ETag": "\"{23948609-0DA0-43E0-994C-2703FEEC8567},7\"",
"FileLocator": "dataset=aHR0cHM6Ly9icmFka2RpeG9uLnNoYXJlcG9pbnQuY29tL3NpdGVzL0V4YW1wbGVTb2x1dGlvbg==,id=JTI1MmZTaGFyZWQlMmJEb2N1bWVudHMlMjUyZkdlbmVyYWwlMjUyZkJvb2sueGxzeA==",
"LastModifiedBy": null
},
{
"Id": "%252fShared%2bDocuments%252fGeneral%252fTest%2bDocument.docx",
"Name": "Test Document.docx",
"DisplayName": "Test Document.docx",
"Path": "/Shared Documents/General/Test Document.docx",
"LastModified": "2021-12-30T11:49:28Z",
"Size": 17959,
"MediaType": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
"IsFolder": false,
"ETag": "\"{7A3C7133-02FC-4A63-9A58-E11A815AB351},8\"",
"FileLocator": "dataset=aHR0cHM6Ly9icmFka2RpeG9u etc",
"LastModifiedBy": null
},
{
"Id": "%252fShared%2bDocuments%252fGeneral%252fHierarchy.xlsx",
"Name": "Hierarchy.xlsx",
"DisplayName": "Hierarchy.xlsx",
"Path": "/Shared Documents/General/Hierarchy.xlsx",
"LastModified": "2022-01-07T02:49:38Z",
"Size": 41719,
"MediaType": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
"IsFolder": false,
"ETag": "\"{C919454C-48AB-4897-AD8C-E3F873B52E50},72\"",
"FileLocator": "dataset=aHR0cHM6Ly9icmFka2RpeG9uL etc",
"LastModifiedBy": null
}
]
Output - Folder 2
[
{
"Id": "%252fShared%2bDocuments%252fTest%2bFolder%252fTest.xlsx",
"Name": "Test.xlsx",
"DisplayName": "Test.xlsx",
"Path": "/Shared Documents/Test Folder/Test.xlsx",
"LastModified": "2022-01-09T11:08:31Z",
"Size": 17014,
"MediaType": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
"IsFolder": false,
"ETag": "\"{CCF71CE7-89E7-4F89-B5CB-0F078E22C951},163\"",
"FileLocator": "dataset=aHR0cHM6Ly9icmFka2RpeG9u etc",
"LastModifiedBy": null
},
{
"Id": "%252fShared%2bDocuments%252fTest%2bFolder%252fDocument.docx",
"Name": "Document.docx",
"DisplayName": "Document.docx",
"Path": "/Shared Documents/Test Folder/Document.docx",
"LastModified": "2022-01-09T11:08:16Z",
"Size": 17293,
"MediaType": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
"IsFolder": false,
"ETag": "\"{317C5767-04EC-4264-A58B-27A3FA8E4DF3},3\"",
"FileLocator": "dataset=aHR0cHM6Ly9icmFka2RpeG etc",
"LastModifiedBy": null
}
]
From here, just process each file individually using one of the files actions like in the first scenario above.
Note: You'll need to work through sub folders and recursion. There doesn't appear to be a way to do that easily.
You've provided very little information but it should be enough for you to adapt it accordingly.
Also, I strongly recommend you use a means other than a hardcoded JSON document in the action itself. There are far better means for housing that information which wouldn't require updating the action itself every time you want to add or delete a file.
The concept of the loop and the expressions is the most important part to grasp, as they will give you what you want.

How do I retrieve multiple callbacks on translation progress

I have created a webhook that uses the event extraction.updated, which should trigger while a job is in progress. I want to receive multiple callbacks on the progress of the translation so that I can show it in my progress bar. Unfortunately, I only receive a callback when the translation job is finished. When I create the job I set the misc.workflow parameter, and the same goes for the hook. Am I missing some parameters when creating the webhook or posting the job?
I was following this tutorial: https://forge.autodesk.com/en/docs/webhooks/v1/tutorials/create-a-hook-model-derivative/
The job payload takes the input, which is my urn; the output, which is the file type (svf2) and views (2d, 3d); and misc, which is the workflow (testworkflowname).
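For reference, a job request along those lines might look like the sketch below (Python with requests; the endpoint and body shape follow the Model Derivative POST job API from the linked tutorial, and the token/urn values are placeholders):

import requests

ACCESS_TOKEN = "<my-access-token>"   # placeholder
URN = "<my-urn>"                     # base64-encoded source file URN, placeholder

job = {
    "input": {"urn": URN},
    "output": {"formats": [{"type": "svf2", "views": ["2d", "3d"]}]},
    # Must match the "workflow" used in the webhook's scope for callbacks to fire
    "misc": {"workflow": "testworkflowname"},
}

resp = requests.post(
    "https://developer.api.autodesk.com/modelderivative/v2/designdata/job",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}",
             "Content-Type": "application/json"},
    json=job,
)
print(resp.status_code, resp.json())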
Callback result:
{
"version": "1.0",
"resourceUrn": "<my-resourceUrn>",
"hook": {
"hookId": "<my-hookId>",
"tenant": "testworkflowname",
"callbackUrl": "<my-callbackUrl>",
"createdBy": "<my-createdBy>",
"event": "extraction.updated",
"createdDate": "<my-createdDate>",
"lastUpdatedDate": "<my-lastUpdatedDate>",
"system": "derivative",
"creatorType": "Application",
"status": "active",
"scope": {
"workflow": "testworkflowname"
},
"hookAttribute": {
"progress": "test"
},
"autoReactivateHook": false,
"urn": "<my-urn>"
},
"payload": {
"TimeStamp": <my-timestamp>,
"Env": "production",
"URN": "<my-urn>",
"EventType": "UPDATED",
"Payload": {
"status": "success",
"bubble": {
"guid":"<my-guid>",
"owner": "<my-owner>",
"hasThumbnail": "true",
"startedAt": "my-startedAt>",
"type": "design",
"urn":"<my-urn>",
"success": "100%",
"progress": "complete",
"region": "US",
"status": "success",
"children": []
},
"scope": "<my-scope>",
"registerKey": []
},
"WorkflowAttributes": null
}
}
You've got your webhooks set up correctly. I'm afraid this is a limitation on the Model Derivative service side. The service can translate over 60 different file formats today, and as you can imagine, different formats must be converted using different libraries. While some of the converters support progress reporting, others may not, so being able to get notified of translation progress really depends on the file format you're processing.

Azure Advisor REST API Recommendation Extended Properties Do Not Populate?

I am pulling recommendations from the Azure Advisor REST API and am not able to retrieve the extendedProperties values.
Specifically, I am looking for savings data from Recommendations of the Cost category.
In the following video at 58 seconds there is an example of the expected response.
https://www.youtube.com/watch?v=hAxrdmOAB8s
Are there specific permissions necessary to give my account in order to pull the data, or is the API not capable of supplying the values?
I am able to see the data in the portal, but the extendedProperties property is always empty.
I'm supposing you're trying the Recommendations - List API.
Essentially, extended properties expose additional information about a recommendation from Azure Advisor.
AFAIK, they need not be present for every recommendation, and shouldn't need additional privileges to list. It could just be the case that the type of recommendations you are receiving do not have any to list.
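If it helps to rule out a permissions problem, here is a minimal sketch of calling the Recommendations - List endpoint directly and printing whatever extendedProperties come back for Cost recommendations. The subscription ID and bearer token are placeholders, and the api-version shown is an assumption on my part; adjust it to whatever you are using.

import requests

SUBSCRIPTION_ID = "<my-subscription-id>"  # placeholder
TOKEN = "<my-bearer-token>"               # placeholder

url = (f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
       "/providers/Microsoft.Advisor/recommendations")
resp = requests.get(url,
                    headers={"Authorization": f"Bearer {TOKEN}"},
                    params={"api-version": "2020-01-01"})
resp.raise_for_status()

# Print only Cost recommendations and whatever extended properties they carry
for rec in resp.json().get("value", []):
    props = rec.get("properties", {})
    if props.get("category") == "Cost":
        print(props.get("impactedValue"), props.get("extendedProperties", {}))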
Here is a sample response that I received that has a mix of both:
[
{
"properties": {
"category": "Cost",
"impact": "Medium",
"impactedField": "Microsoft.Network/publicIPAddresses",
"impactedValue": "foo",
"lastUpdated": "2020-03-20T14:10:24.6928024Z",
"recommendationTypeId": "1b4dd958-c202-47af-af97-99bfc98376a5",
"shortDescription": {
"problem": "Delete Public IP address not associated to a running Azure resource",
"solution": "Delete Public IP address not associated to a running Azure resource"
},
"extendedProperties": {}
},
"id": "xxx",
"type": "Microsoft.Advisor/recommendations",
"name": "xxx"
},
{
"properties": {
"category": "Cost",
"impact": "Medium",
"impactedField": "Microsoft.Sql/servers/databases",
"impactedValue": "bar",
"lastUpdated": "2020-03-20T13:27:35.8394386Z",
"recommendationTypeId": "b83241d3-47ba-4603-8d5a-a1b3331e74f4",
"shortDescription": {
"problem": "Right-size underutilized SQL Databases",
"solution": "Right-size underutilized SQL Databases"
},
"extendedProperties": {
"ServerName": "fooserver",
"DatabaseName": "fooDB",
"IsInReplication": "1",
"ResourceGroup": "xyz",
"DatabaseSize": "6",
"Region": "East US 2",
"ObservationPeriodStartDate": "03/04/2020 00:00:00",
"ObservationPeriodEndDate": "03/19/2020 00:00:00",
"Recommended_DTU": "10",
"Recommended_SKU": "S0",
"HasRecommendation": "true"
}
}
}
]

Flickr API Not Returning Actual Photos

Hi, I'm on a project and want to use Flickr for my image gallery. I'm using the photosets.* methods, but whenever I make a request I don't get images, I only get info.
Json Result:
{
"photoset": {
"id": "77846574839405047",
"primary": "88575847594",
"owner": "998850450#N03",
"ownername": "mr.barde",
"photo": [
{
"id": "16852316982",
"secret": "857fur848c",
"server": "8568",
"farm": 9,
"title": "wallpaper-lenovo-blue-pc-brand",
"isprimary": "1",
"ispublic": 1,
"isfriend": 0,
"isfamily": 0
},
{
"id": "16665875068",
"secret": "857fur848c",
"server": "7619",
"farm": 8,
"title": "white_horses-1280x720",
"isprimary": "0",
"ispublic": 1,
"isfriend": 0,
"isfamily": 0
}
],
"page": 1,
"per_page": "2",
"perpage": "2",
"pages": 3,
"total": "6",
"title": "My First Album"
},
"stat": "ok"
}
I would like to have actual image URLs returned; how can I do this?
Thanks to the comment by #CBroe, I found this in the Flickr API documentation.
You can construct the source URL to a photo once you know its ID, server ID, farm ID and secret, as returned by many API methods.
https://farm{farm-id}.staticflickr.com/{server-id}/{id}_{secret}.jpg
or
https://farm{farm-id}.staticflickr.com/{server-id}/{id}_{secret}_[mstzb].jpg
or
https://farm{farm-id}.staticflickr.com/{server-id}/{id}_{o-secret}_o.(jpg|gif|png)
The final result would then look something like this.
https://farm1.staticflickr.com/2/1418878_1e92283336_m.jpg
Reference: https://www.flickr.com/services/api/misc.urls.html
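Putting that together with the photoset response above, a small sketch like this builds the source URLs from the fields the API already returns (the "m" size suffix is just one of the options from the URL templates):

photoset = {
    "photo": [
        {"id": "16852316982", "secret": "857fur848c", "server": "8568", "farm": 9},
        {"id": "16665875068", "secret": "857fur848c", "server": "7619", "farm": 8},
    ]
}

def photo_url(p: dict, size: str = "m") -> str:
    """Build the static Flickr URL from the fields returned by photosets.getPhotos."""
    return (f"https://farm{p['farm']}.staticflickr.com/"
            f"{p['server']}/{p['id']}_{p['secret']}_{size}.jpg")

for p in photoset["photo"]:
    print(photo_url(p))
# e.g. https://farm9.staticflickr.com/8568/16852316982_857fur848c_m.jpg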
