I am trying to use the For Each action in Logic Apps and I am not sure if this is possible:
Step 1 - SQL Stored Procedure is executed returning a number of rows
Step 2 - For Each based on the number of rows in Step 1
Step 2a - Execute a SQL stored procedure based on values in the loop
The problem I am finding is that when I add the loop, it states that there is no dynamic content available even though Step 1 will return results.
I've googled and there seems to be no information on using the for each loop in Logic Apps.
Any suggestions?
Thanks
We're working on a fix for this. Before the fix is available, try one of these two workarounds:
If you're comfortable with code view, create a for-each loop in the designer, then specify ResultSets as the input for the foreach.
Alternatively, add a Compose card after the SQL action and choose ResultSets as its input. Then use the output of the Compose card as the input for the for-each loop.
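For reference, this is roughly what the first workaround looks like in code view. This is only a sketch: the action names and the exact path into ResultSets (Table1 here) are illustrative and depend on your SQL action and its response shape.
"For_each_row": {
    "type": "Foreach",
    "foreach": "@body('Execute_stored_procedure')?['ResultSets']?['Table1']",
    "runAfter": {
        "Execute_stored_procedure": [ "Succeeded" ]
    },
    "actions": {
        "Current_row": {
            "type": "Compose",
            "inputs": "@items('For_each_row')",
            "runAfter": {}
        }
    }
}
Inside the loop, @items('For_each_row') is the current row, so the per-row stored procedure call from Step 2a would take its parameter values from there.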
Unfortunately, it's by design:
https://learn.microsoft.com/en-us/azure/automation/automation-runbook-types#powershell-runbooks
I resolved it by calling the SPs in parallel via Logic Apps.
(Note: when you do so, don't forget to hit 'Publish' on the runbook so that the input variables passed in are picked up correctly.)
I'm looking into Azure Workbooks now, and I'm wondering if the below is actually even possible.
Scenario
List all function apps under Subscription / Resource Group scope.
This step is done; a simple Azure Resource Graph query with parameters does the job.
Execute an Azure Resource Manager action against each Function App that was returned by the query in Step 1.
Specifically, I'm interested in the detectors/functionExecutionErrors ARM API and in returning a parsed result from it. When doing that for a hardcoded resource, I can get the results I need. Using the JSONPath $.properties.dataset[0].table.rows[0][1] I get back the summary: All running functions in healthy state with execution failure rate less than 0.1%.
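For context, that JSONPath assumes the detector response is shaped roughly like this (heavily trimmed and illustrative, showing only the path down to rows[0][1]):
{
    "properties": {
        "dataset": [
            {
                "table": {
                    "rows": [
                        [ "<column 0 value>", "All running functions in healthy state with execution failure rate less than 0.1%." ]
                    ]
                }
            }
        ]
    }
}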
I realize this might either be undoable in Workbooks or something trivial that I missed; it would be easiest if I could just run 'calculated columns' when rendering outputs. So, the summarizing question is:
How, if at all, can I combine an Azure Resource Graph query with an Azure Resource Manager data source, so that the ARM query runs once per returned Graph resource and the results are displayed as a table in the form "Resource ID | ARM API results"?
The closest I have come is marking the Resource Graph query output as a parameter (id -> FunctionAppId) and referencing that in the ARM query as /{FunctionAppId}/detectors/functionExecutionErrors. This works fine as long as only one resource is selected, but there are two obstacles: I want to execute against all query results regardless of whether they are selected, and I need Azure Resource Manager to understand it should loop over the resources rather than concatenate them (as seen in the invoked HTTP call in the F12 dev tools, the resource names are simply joined together).
Hopefully there's someone out there who could help out with this. Thanks! :-)
I'm also new to Workbooks and I think creating a parameter first with the functionId is best. I do the same ;)
With multiple functions the parameter will have them all. You can use split() to get an array and then loop.
Will that work for you?
Can you share your solution if you managed to solve this differently?
cloudsma.com is a resource I use a lot to understand the queries and options better. Like this one: https://www.cloudsma.com/2022/03/azure-monitor-alert-reports
Workbooks doesn't currently have the ability to run the ARM data source against many resources, though it is on our backlog and we are actively investigating a way to run any data source for a set of values and merge the results together.
The general workaround is as stated: either use a parameter to select a resource and run that one query for the selected item, or do something similar with a query step with a grid, and have the grid selection output a parameter that is used as input to the ARM query step.
I am currently developing an Azure ML pipeline that, as one of its outputs, maintains a SQL table holding all of the unique items that are fed into it. There is no way to know in advance whether the data fed into the pipeline consists of new unique items or repeats of previous items, so before updating the table it maintains, the pipeline pulls the data already in that table and drops any of the new items that already appear.
However, due to this there are cases where this self-reference results in zero new items being found, and as such there is nothing to export to the SQL table. When this happens Azure ML throws an error, as it is considered an error for there to be zero lines of data to export. In my case, however, this is expected behaviour, and as such absolutely fine.
Is there any way for me to suppress this error, so that when it has zero lines of data to export it just skips the export module and moves on?
It sounds as if you are struggling to orchestrate a data pipeline because the orchestration is happening in two places. My advice would be to either move more of the orchestration into Azure ML, or make the separation between the two greater. One way to do this would be to have a regular export to blob of the table you want to use for training. Then you can use a Logic App to trigger a pipeline whenever a non-empty blob lands in that location.
This issue has been resolved by an update to Azure Machine Learning: you can now run pipelines with a "Continue on Failure Step" flag set, which means that steps following the failed data export will continue to run.
This does mean you will need to design your pipeline to handle upstream failures in its downstream modules; this must be done very carefully.
We have a Fulfillment script in 1.0 that pulls a serial number from the custom record based on SKU and other parameters. A search is created based on SKU and the first available record is used. One of the criteria for the search is that there is no end user associated with the key.
We are working on converting the script to 2.0. What I am unable to figure out is: if the script (say the above functionality is put into the Map function of an MR script) runs on multiple queues/instances, does that mean there is a chance that two instances might hit the same entry of the custom record? What is a workaround to ensure that X instances of the Map function don't end up using the same SN/key? The way this could happen in 2.0 is that two instances of Map make a search request on the custom record at the same time and get the same results, because the first Map has not yet completed processing and marked the key as used (by updating the end user information on the key).
Is there a better way to accomplish this in 2.0, or do I need to create another custom record that the script would have to read to be able to pull a key from? Also, is there a wait I can implement if the table is locked?
Thx
Probably the best thing to do here would be to break your assignment process into two parts, or restructure it so you end up with a Scheduled script that you give an explicit queue. That way your access to serial numbers will be serialized and no extra work would need to be done by you. If you need a hint on processing large batches with SS2, see https://github.com/BKnights/KotN-Netsuite-2 for a utility script that you can require for large batch processing.
If that's not possible then what I have done is the following:
Create another custom record called "Lock Table". It must have at least an id and a text field. Create one record and note its internal id. If you leave it with a name column then give it a name that reflects its purpose.
When you want to pull a serial number you:
Read from the Lock Table with a lookup field function. If it's not 0, then do a wait*.
If it's 0, then generate a random integer from 0 to MAX_SAFE_INTEGER.
Try to write that to the Lock Table with a submit field function, then read it back right away. If it contains your random number, then you have the lock. If it doesn't, then wait*.
If you have the lock then go ahead and assign the serial number. Release the lock by writing back a 0.
wait:
This is tough in NetSuite. Since I am not expecting the s/n assignment to take much time, I've sometimes implemented the wait by simply looping through what I hope is a CPU-intensive task with no governance cost until some time has elapsed.
I have a logic app with a sql trigger that gets multiple rows.
I need to split on the rows so that I have a better overview of the actions I do per row.
Now I would like that the logic app is only working on one row at a time.
What would be the best solution for that since
"operationOptions": "singleInstance", and
"runtimeConfiguration": {
"concurrency": {
"runs": 1
}
},
are not working with splitOn.
I was also thinking about calling another logic app and having that logic app use a runtimeConfiguration, but that just sounds like an ugly workaround.
Edit:
The rows are atomic and no sorting is needed. Each row can be worked on separately and independently of other data.
As far as I can tell, I wouldn't use a foreach for that, since then one failure within a row will lead to a failed logic app.
If one dataset (row) fails, the others should still be tried and the error should be easily visible.
Yes, you are seeing the expected behavior. Keep in mind, the split happens in the trigger, not the workflow. BizTalk works the same way except it's a bit more obvious there.
You don't want concurrent processing, you want ordered processing. Right now, the most direct way to handle this is by Foreach'ing over the collection. Though waiting ~3 weeks might be a better option.
One decision point will be whether the atomicity is at the collection or the item level. Also, you'll need to know whether overlapping batches are OK or not.
For instance, if you need to process all items in order, with batch level validation, Foreach with concurrency = 1 is what you need.
Today (as of 2018-03-06) concurrency control is not supported for split-on triggers.
Having said that, concurrency control should be enabled for all trigger types (including split-on triggers) within the next 2-3 weeks.
In the interim you could remove the splitOn property on your trigger and set its concurrency limit to 1. This will start a single run for the entire collection of items, but you can use a foreach loop in your definition to limit concurrency as well. The drawback here is that the trigger will wait until the run as a whole is completed (all items are processed), so the throughput will not be optimal.
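As a rough sketch of that interim approach (trigger and action names are illustrative, the connector-specific trigger inputs are omitted, and the path into the trigger body may differ for your SQL trigger), the definition would look something like this, with concurrency limited on the trigger and sequential processing enforced inside the loop:
"triggers": {
    "When_rows_are_available": {
        "type": "ApiConnection",
        "recurrence": { "frequency": "Minute", "interval": 3 },
        "runtimeConfiguration": {
            "concurrency": { "runs": 1 }
        }
    }
},
"actions": {
    "For_each_row": {
        "type": "Foreach",
        "foreach": "@triggerBody()?['value']",
        "runtimeConfiguration": {
            "concurrency": { "repetitions": 1 }
        },
        "actions": {},
        "runAfter": {}
    }
}
Note there is no splitOn on the trigger, so each run receives the whole collection and the foreach works through it one row at a time.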
I know it might be a bit of a confusing title, but I couldn't come up with anything better.
The problem ...
I have an ADF pipeline with 3 activities: first a Copy to a DB, then two stored procedures. All are triggered daily and use WindowEnd to read the right directory or pass a date to the SP.
There is no way I can get a import-date into the XML files that we are receiving.
So I'm trying to add it in the first SP.
The problem is that once the first activity in the pipeline is done, the two others are started.
The second activity, the SP that adds the dates, runs in the same slice, but when history is being loaded the same pipeline also starts a copy for another slice.
So I'm getting mixed-up data.
As you can see in the 'Last Attempt Start'.
Does anybody have an idea how to avoid this?
ADF Monitoring
In case somebody hits a similar problem:
I've solved it by working with daily named tables.
Each slice puts its data into a staging table with a _YYYYMMDD suffix, which can be set as "tableName": "$$Text.Format('[stg].[filesin_1_{0:yyyyMMdd}]', SliceEnd)".
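For anyone wondering where that expression lives, here is a trimmed sketch of an ADF (v1) dataset using it; the dataset and linked service names are just placeholders.
{
    "name": "StgFilesInDaily",
    "properties": {
        "type": "AzureSqlTable",
        "linkedServiceName": "AzureSqlLinkedService",
        "typeProperties": {
            "tableName": "$$Text.Format('[stg].[filesin_1_{0:yyyyMMdd}]', SliceEnd)"
        },
        "availability": {
            "frequency": "Day",
            "interval": 1
        }
    }
}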
So there is no longer any problem with parallelism.
The only disadvantage is that the SPs that come after this first step have to use dynamic SQL, since the table name they select from is variable.
But that wasn't a big coding problem.
Works like a charm!