Will the Logic App retry inserting the failed record or not? - azure

I have a Logic App which triggers whenever a record is created in Salesforce CRM; after that, I have a SQL Server insert action which inserts the Salesforce CRM record into an Azure SQL database.
My question is: if my Azure SQL database is down or the connection fails, what will happen to the record that failed? Will the Logic App retry inserting the failed record or not?

By default, no.
But you have Do-Until loops, where you define a condition for repeating an action. In your condition you can simply evaluate the result of the SQL insert.
For example, I use the following expression to make a reliable call to a REST API:
"GetBerlinDataReliable": {
"actions": {
"GetBerlinData": {
"inputs": {
"method": "GET",
"uri": "http://my.rest.api/path?query"
},
"runAfter": {},
"type": "Http"
}
},
"expression": "#and(equals(outputs('GetBerlinData').statusCode, 200),greaterOrEquals(body('GetBerlinData').query?.count, 1))",
"limit": {
"count": 100,
"timeout": "PT30M"
},
"runAfter": {},
"type": "Until"
},
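Adapted to the question's SQL insert, the same pattern might look like the sketch below. The connector path and table name are placeholders, and a statusCode of 200 is assumed to signal a successful insert; note that if the inner action fails outright (rather than returning an error code), the loop iteration fails with it, so you may also need to handle its failure explicitly:
"InsertRecordReliable": {
    "actions": {
        "Insert_row": {
            "inputs": {
                "body": "@triggerBody()",
                "host": { "connection": { "name": "@parameters('$connections')['sql']['connectionId']" } },
                "method": "post",
                "path": "/datasets/default/tables/@{encodeURIComponent(encodeURIComponent('[dbo].[MyTable]'))}/items"
            },
            "runAfter": {},
            "type": "ApiConnection"
        }
    },
    "expression": "@equals(outputs('Insert_row')['statusCode'], 200)",
    "limit": {
        "count": 10,
        "timeout": "PT1H"
    },
    "runAfter": {},
    "type": "Until"
},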

It depends on whether the HTTP code from such an API is retry-able or not. If it is, we will by default retry 4 times with 30 seconds in between (you can change that in the Settings of a given action as well). If it is not, then no retry will happen.
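Those defaults can also be made explicit (or changed) in code view by adding a retryPolicy to the action's inputs. A minimal sketch, with a placeholder SQL connector path and table name:
"Insert_row": {
    "inputs": {
        "body": "@triggerBody()",
        "host": { "connection": { "name": "@parameters('$connections')['sql']['connectionId']" } },
        "method": "post",
        "path": "/datasets/default/tables/@{encodeURIComponent(encodeURIComponent('[dbo].[MyTable]'))}/items",
        "retryPolicy": {
            "count": 4,
            "interval": "PT30S",
            "type": "fixed"
        }
    },
    "runAfter": {},
    "type": "ApiConnection"
}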
There are multiple ways to handle errors, depending on what error you expect and how it occurs: a do-until loop as mentioned above is one way, or you can consider a try(insert)-catch(save to blob) pattern and have another Logic App check the blob and retry the insert.
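A sketch of the catch side of that try/catch pattern: a blob action that runs only when the insert fails (the blob connector path, folder name, and action names here are illustrative, not taken from the question):
"Save_failed_record_to_blob": {
    "inputs": {
        "body": "@triggerBody()",
        "host": { "connection": { "name": "@parameters('$connections')['azureblob']['connectionId']" } },
        "method": "post",
        "path": "/datasets/default/files",
        "queries": {
            "folderPath": "/failed-inserts",
            "name": "@{guid()}.json"
        }
    },
    "runAfter": {
        "Insert_row": [ "Failed", "TimedOut" ]
    },
    "type": "ApiConnection"
}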

Related

How to monitor execution time per endpoint when using Express with Google Cloud Functions?

I have a Cloud Function (actually, a Firebase Function) running on the Node.js runtime that serves an API based on the Express framework. Looking at the function logs, I see entries like this (real data omitted):
{
    "insertId": "000000-00000000-0000-0000-0000-000000000000",
    "labels": {
        "execution_id": "000000000000"
    },
    "logName": "projects/project-00000/logs/cloudfunctions.googleapis.com%2Fcloud-functions",
    "receiveTimestamp": "2000-01-01T00:00:00.000000000Z",
    "resource": {
        "labels": {
            "function_name": "api",
            "project_id": "project-00000",
            "region": "us-central1"
        },
        "type": "cloud_function"
    },
    "severity": "DEBUG",
    "textPayload": "Function execution took 5000 ms, finished with status code: 200",
    "timestamp": "2000-01-01T00:00:00.000000000Z",
    "trace": "projects/project-00000/traces/00000000000000000000000000000000"
}
The relevant data I want to extract is the execution time and response code, in the textPayload attribute. However, I want to create a metric that breaks the data down by API endpoint, to identify which endpoints are slow. This is an HTTP function, but I don't have any request details in the logs.
I could probably achieve what I want by adding logging code to the function. However, I was wondering if I can extract the info directly from Google Cloud without touching the function code.
Is there a way to create a log-based metric that shows execution times split by endpoint?
References:
https://firebase.google.com/docs/functions/monitored-metrics
https://cloud.google.com/functions/docs/monitoring/metrics
I don't believe this will be possible without writing code. If you want to collect information about running code, folks typically turn to Stackdriver and use its APIs to collect specific information for analysis.
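That said, the execution time by itself (without the endpoint split) can be pulled into a logs-based distribution metric with a value extractor, since it sits in textPayload; what cannot be recovered without code is the request path, because the log entry simply does not carry it. A sketch of such a LogMetric body for the Cloud Logging projects.metrics.create API, assuming the function_name label api from the sample above (the metric name and bucket layout are illustrative):
{
    "name": "function_execution_time_ms",
    "description": "Execution time parsed from the function completion log line",
    "filter": "resource.type=\"cloud_function\" AND resource.labels.function_name=\"api\" AND textPayload:\"Function execution took\"",
    "valueExtractor": "REGEXP_EXTRACT(textPayload, \"took (\\d+) ms\")",
    "metricDescriptor": {
        "metricKind": "DELTA",
        "valueType": "DISTRIBUTION",
        "unit": "ms"
    },
    "bucketOptions": {
        "exponentialBuckets": {
            "numFiniteBuckets": 64,
            "growthFactor": 2,
            "scale": 1
        }
    }
}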

Cosmosdb Trigger for Azure Functions Application

We are developing applications using Azure Functions (Python). I have 2 questions regarding the Azure Functions application:
Can we monitor 2 collections using a single Cosmos DB trigger?
--- I have looked through the documentation and it seems this isn't supported. Did I miss anything?
If there are 2 functions monitoring the same collection, will only one of the functions be triggered?
-- I observed this behaviour today. I was running 2 instances of the functions app and the data from the Cosmos DB trigger was sent to only one of them. I am trying to find out the reason for it.
EDIT:
1 - I had never used multiple input bindings, but according to the official wiki it's possible; just add a function.json file like this:
{
    "bindings": [
        {
            "type": "queueTrigger",
            "direction": "in",
            "queueName": "image-resize"
        },
        {
            "type": "blob",
            "name": "original",
            "direction": "in",
            "path": "images-original/{name}"
        },
        {
            "type": "blob",
            "name": "resized",
            "direction": "out",
            "path": "images-resized/{name}"
        }
    ]
}
PS: I know you're using Cosmos DB; the sample above is just to illustrate.
2 - I assume it's due to the way it's implemented (e.g. topic vs queue): the first function locks the event/message, so the second one is not aware of it. At this moment, Durable Functions for Python is still under development and should be released next month (03/2020). It will allow you to chain the execution of functions just like it's available for C# / Node:
https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-overview?tabs=csharp#chaining
What you can do is output to a queue, which will trigger your second function after the first function completes (pretty much what Durable Functions offers).
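For illustration, that queue hand-off could look like the function.json sketch below (all of the names, i.e. the connection settings, database, collection, and queue, are placeholders; the binding property names follow the 3.x Cosmos DB extension):
{
    "bindings": [
        {
            "type": "cosmosDBTrigger",
            "name": "documents",
            "direction": "in",
            "connectionStringSetting": "CosmosDBConnection",
            "databaseName": "mydb",
            "collectionName": "collection-a",
            "leaseCollectionName": "leases",
            "createLeaseCollectionIfNotExists": true
        },
        {
            "type": "queue",
            "direction": "out",
            "name": "outputQueueItem",
            "queueName": "process-next",
            "connection": "AzureWebJobsStorage"
        }
    ]
}
The first function writes whatever the second one needs onto outputQueueItem, and the second function picks it up with a queueTrigger on process-next.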

Azure User/Group provisioning with SCIM: problem with boolean values

I have written an application compliant with the SCIM standard (https://www.rfc-editor.org/rfc/rfc7644), but integrating with Azure I can see that it fails to update a user when the user is disabled. The request that Azure sends is the following:
PATCH /Users/:id
{
    "schemas": [
        "urn:ietf:params:scim:api:messages:2.0:PatchOp"
    ],
    "Operations": [
        {
            "op": "Replace",
            "path": "active",
            "value": "False"
        }
    ]
}
The SCIM protocol says that the attribute active accepts boolean values (https://www.rfc-editor.org/rfc/rfc7643#section-4.1.1), so following the PATCH specification (https://www.rfc-editor.org/rfc/rfc6902#section-4.3) I expect a boolean value, not a string with a boolean written inside it. The expected request is the following:
PATCH /Users/:id
{
    "schemas": [
        "urn:ietf:params:scim:api:messages:2.0:PatchOp"
    ],
    "Operations": [
        {
            "op": "Replace",
            "path": "active",
            "value": false
        }
    ]
}
So the problem is that the given value "False" should be false.
Is this a bug in Azure, or am I missing something? If it is a bug, should I try to parse the string and extract a boolean from it? But if I do that, I'm deviating from the standard. How did you manage this problem?
I also spent a lot of time trying to figure out whether Azure was being compliant with the SCIM spec, and the answer is that it is not.
The default values it sends in PATCH requests are indeed strings, not booleans as the User JSON schema defines.
You can override the values that get sent/mapped into the SCIM schema:
Go into your provisioning app.
Mappings > Synchronize Azure Active Directory Users to customappsso (the name here might be different in your directory).
Find Switch([IsSoftDeleted], "False", "True", "True", "False").
Replace it with Switch([IsSoftDeleted], , false, true, true, false) (note the additional comma).
Hit OK and Save.
NOTE: after saving you will still see quotes around the booleans, but the PATCH request will be sent correctly.
The default Azure implementation of SCIM isn't fully compliant with the required SCIM schema.
I found I was able to keep the default NOT([IsSoftDeleted]) expression by using Microsoft's workaround, which does aim to be SCIM compliant for PATCH operations (it sends booleans rather than strings for the active attribute).
This is achieved by appending the URL parameter ?aadOptscim062020 after the tenant URL input (e.g. https://myapp.example.com/scim?aadOptscim062020, with a hypothetical tenant URL).

Azure Search, listAdminKeys, ARM output error (does not support http method 'POST')

I am using this bit of code as an output object in my ARM template:
"[listAdminKeys(variables('searchServiceId'), '2015-08-19').PrimaryKey]"
Full text sample of the output section:
"outputs": {
"SearchServiceAdminKey": {
"type": "string",
"value": "[listAdminKeys(variables('searchServiceId'), '2015-08-19').PrimaryKey]"
},
"SearchServiceQueryKey": {
"type": "string",
"value": "[listQueryKeys(variables('searchServiceId'), '2015-08-19')[0]]"
}
I receive the following error during deployment (unfortunately, any error means the template deployment skips the outputs section):
"The requested resource does not support http method 'POST'."
Checking the network behavior in the browser seems to confirm that the error is related to this function (and that it uses POST).
How might I avoid this error and retrieve the Azure Search admin key in the output?
Update: the goal of doing this is to gather all the relevant bits of information to plug into other scripts (.ps1) as parameters, since those resources are provisioned by this template. It would save someone from digging through the portal to copy/paste.
Thank you
Your error comes from listQueryKeys, not the admin keys.
https://learn.microsoft.com/en-us/rest/api/searchmanagement/adminkeys/get
https://learn.microsoft.com/en-us/rest/api/searchmanagement/querykeys/listbysearchservice
You won't be able to retrieve those in the ARM template; it can only "emulate" POST calls, not GET.
With the latest API version, it's possible to get the query key using this:
"SearchServiceQueryKey": {
"type": "string",
"value": "[listQueryKeys(variables('searchServiceId'), '2020-06-30').value[0].key]"
}
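If you also need the admin key in the outputs, the same newer API version appears to work with listAdminKeys (a sketch; note the REST response exposes primaryKey/secondaryKey in camelCase):
"SearchServiceAdminKey": {
    "type": "string",
    "value": "[listAdminKeys(variables('searchServiceId'), '2020-06-30').primaryKey]"
}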

Which action(s) can I use to create a folder in SharePoint Online via Azure Logic App?

As the question title states, I am looking for the proper action in Logic Apps to create a folder. This action will be executed several times -- once per directory, as per the business rule. No files will be created in these folders, because the intent of the Logic App is to prepare a template folder structure for the users' needs.
In the official documentation I see that there are create file, create item, and list folder actions. These suggest that there might be an action to create a folder too (which I can't find).
If such an action does not exist, I may need to use some SharePoint Online API, but that would be a last-resort solution.
I was able to create a directory by means of the SharePoint Create File action. Creating a directory via a side effect of the file-creation action is definitely a dirty hack (btw, inspired by a comment on the MS suggestion site). This bug/feature is not documented, so relying on it in a production environment is probably not a good idea.
More than that, if your problem requires creating a directory in SharePoint without any files in it whatsoever, an extra step in the Logic App is needed: make sure to delete the file using the Id provided by the Create File action.
Here's what your JSON might look like if you were trying to create a directory called folderCreatedAsSideEffect under the preexisting TestTarget document library:
"actions": {
"Create_file": {
"inputs": {
"body": "#triggerBody()?['Name']",
"host": { "connection": { "name": "#parameters('$connections')['sharepointonline']['connectionId']" } },
"method": "post",
"path": "/datasets/#{encodeURIComponent(encodeURIComponent('https://MY.sharepoint.com/LogicApps/'))}/files",
"queries": {
"folderPath": "/TestTarget/folderCreatedAsSideEffect",
"name": "placeholder"
}
},
"runAfter": {},
"type": "ApiConnection"
},
"Delete_file": {
"inputs": {
"host": { "connection": { "name": "#parameters('$connections')['sharepointonline']['connectionId']" } },
"method": "delete",
"path": "/datasets/#{encodeURIComponent(encodeURIComponent('https://MY.sharepoint/LogicApps/'))}/files/#{encodeURIComponent(body('Create_file')?['Id'])}"
},
"runAfter": {
"Create_file": [
"Succeeded"
]
},
"type": "ApiConnection"
}
},
Correct, so far the SharePoint connector does not support folder management tasks.
So your best option currently is to use the SharePoint API, or the client libraries, in an API App or Function App.
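For reference, here is a sketch of what that REST fallback could look like as a raw HTTP action in the Logic App (the site URL, folder path, and token variable are hypothetical placeholders; acquiring the Authorization token is not shown, and the _api/web/folders endpoint comes from the documented SharePoint REST API):
"Create_folder": {
    "inputs": {
        "method": "POST",
        "uri": "https://MY.sharepoint.com/LogicApps/_api/web/folders",
        "headers": {
            "Accept": "application/json;odata=verbose",
            "Content-Type": "application/json;odata=verbose",
            "Authorization": "Bearer @{variables('spAccessToken')}"
        },
        "body": {
            "__metadata": { "type": "SP.Folder" },
            "ServerRelativeUrl": "/TestTarget/NewFolder"
        }
    },
    "runAfter": {},
    "type": "Http"
}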
