I'm looking into Azure Workbooks now, and I'm wondering if the below is actually even possible.
Scenario
List all function apps under Subscription / Resource Group scope.
This step is done - simple Azure Resource Graph query with parameters does the job.
Execute an Azure Resource Manager action against each Function App returned by the query in Step 1.
Specifically, I'm interested in the detectors/functionExecutionErrors ARM API and in returning a parsed result from it. When doing that for a hardcoded resource, I get the results I need. Using the JSON Path $.properties.dataset[0].table.rows[0][1] I get back the summary: "All running functions in healthy state with execution failure rate less than 0.1%."
I realize this might either be undoable in Workbooks or something trivial that I missed - it would be easiest if I could just run 'calculated columns' when rendering outputs. So, the summarizing question is:
How, if possible, can I combine an Azure Resource Graph query with the Azure Resource Manager data source, so that the ARM query runs once per returned Graph resource and the results are displayed as a table in the form "Resource ID | ARM API results"?
The closest I have come is marking the Resource Graph query output as a parameter (id -> FunctionAppId) and referencing that in the ARM query as /{FunctionAppId}/detectors/functionExecutionErrors. This works fine as long as only one resource is selected, but there are two obstacles: I want to execute against all query results regardless of whether they are selected, and I need Azure Resource Manager to loop over the resources rather than concatenate them (as seen in the invoked HTTP call in the F12 dev tools, the resource names are simply joined together).
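To show what I'm after outside of Workbooks, here is a rough Python sketch of the same flow. The resource type filter, the api-version and the response handling are my assumptions and would need adjusting; the JSON path matches the one I quoted above.

    # Sketch: run the Graph query, then call the detector once per resource
    # and extract $.properties.dataset[0].table.rows[0][1].
    # Assumes azure-identity and azure-mgmt-resourcegraph are installed;
    # the api-version below is a guess.
    import requests
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.resourcegraph import ResourceGraphClient
    from azure.mgmt.resourcegraph.models import QueryRequest

    SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
    API_VERSION = "2021-02-01"              # assumed api-version for site detectors

    credential = DefaultAzureCredential()
    token = credential.get_token("https://management.azure.com/.default").token

    # Step 1: list function apps via Azure Resource Graph
    graph = ResourceGraphClient(credential)
    query = QueryRequest(
        subscriptions=[SUBSCRIPTION_ID],
        query="resources | where type == 'microsoft.web/sites' "
              "and kind contains 'functionapp' | project id",
    )
    # .data is a list of dicts with recent SDK versions; adjust if you get the table format
    function_app_ids = [row["id"] for row in graph.resources(query).data]

    # Step 2: call the detector per resource and keep "Resource ID | summary" rows
    rows = []
    for resource_id in function_app_ids:
        url = (f"https://management.azure.com{resource_id}"
               f"/detectors/functionExecutionErrors?api-version={API_VERSION}")
        resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
        resp.raise_for_status()
        body = resp.json()
        summary = body["properties"]["dataset"][0]["table"]["rows"][0][1]
        rows.append((resource_id, summary))

    for resource_id, summary in rows:
        print(f"{resource_id} | {summary}")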
Hopefully there's someone out there who could help out with this. Thanks! :-)
I'm also new to Workbooks and I think creating a parameter first with the functionId is best. I do the same ;)
With multiple functions the parameter will have them all. You can use split() to get an array and then loop.
Will that work for you?
Can you share your solution if you managed to solve this differently?
cloudsma.com is a resource I use a lot to understand the queries and options better. Like this one: https://www.cloudsma.com/2022/03/azure-monitor-alert-reports
Workbooks doesn't currently have the ability to run the ARM data source against many resources, though it is on our backlog and we are actively investigating a way to run any data source for a set of values and merge the results together.
The general workaround is as stated: either use a parameter to select a resource and run that one query for the selected item, or do something similar with a query step rendered as a grid, and have the grid selection output a parameter that is used as the input to the ARM query step.
Related
I am currently developing an Azure ML pipeline that, as one of its outputs, maintains a SQL table holding all of the unique items that are fed into it. There is no way to know in advance whether the data fed into the pipeline contains new unique items or repeats of previous items, so before updating the table it maintains, the pipeline pulls the data already in that table and drops any of the incoming items that already appear there.
However, there are cases where this self-reference results in zero new items being found, and as such there is nothing to export to the SQL table. When this happens Azure ML throws an error, as it considers zero lines of data to export an error. In my case, however, this is expected behaviour and absolutely fine.
Is there any way for me to suppress this error, so that when it has zero lines of data to export it just skips the export module and moves on?
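To make the step concrete, the logic that can legitimately produce zero rows looks roughly like this (a simplified pandas sketch; the column name and the sample data are made up, not the actual module code):

    # Illustrative dedupe-then-export logic; 'item_id' and the frames are placeholders.
    import pandas as pd

    def find_new_items(incoming: pd.DataFrame, existing: pd.DataFrame) -> pd.DataFrame:
        """Return only the incoming rows whose item_id is not already in the table."""
        return incoming[~incoming["item_id"].isin(existing["item_id"])]

    existing_df = pd.DataFrame({"item_id": [1, 2, 3]})
    incoming_df = pd.DataFrame({"item_id": [2, 3]})   # all repeats this run

    new_items = find_new_items(incoming_df, existing_df)

    if new_items.empty:
        # Expected case: nothing new, so ideally we would skip the export instead of
        # handing the export module zero rows (which Azure ML treats as an error).
        print("No new items found; skipping SQL export.")
    else:
        print(f"Exporting {len(new_items)} new rows to SQL.")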
It sounds as if you are struggling to orchestrate the data pipeline because orchestration is happening in two places. My advice would be to either move more of the orchestration into Azure ML, or make the separation between the two greater. One way to do this would be to have a regular export to blob of the table you want to use for training. Then you can use a Logic App to trigger a pipeline whenever a non-empty blob lands in that location.
This issue has been resolved by an update to Azure Machine Learning: you can now run pipelines with a "continue on step failure" flag set, which means that steps following the failed data export will continue to run.
This does mean you will need to design your pipeline so that its downstream modules can handle upstream failures; this must be done very carefully.
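For anyone doing this from code rather than the designer, a minimal sketch of what I mean, assuming the v1 azureml-pipeline SDK and its continue_on_step_failure flag (workspace config and the step list are placeholders):

    # Sketch: submit an Azure ML (SDK v1) pipeline so later steps keep running
    # even if the data-export step fails on zero rows.
    from azureml.core import Workspace, Experiment
    from azureml.pipeline.core import Pipeline

    ws = Workspace.from_config()          # assumes a local config.json
    steps = []                            # placeholder: your existing pipeline steps
    pipeline = Pipeline(workspace=ws, steps=steps)

    run = Experiment(ws, "dedupe-and-export").submit(
        pipeline,
        continue_on_step_failure=True,    # downstream steps run despite a failed step
    )
    run.wait_for_completion(show_output=True)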
Is there a way to bind a SharePoint list to Azure SQL DB tables to fetch updates dynamically? Example: I have a SharePoint list with 2 columns and an Azure SQL DB table with 2 columns. I would like to bind them together so that when an update happens in a DB column, the respective SharePoint list column data is updated.
I have tried writing a Spring Boot job to do this, but it is a lot of code to maintain. We also need to manage the real-time sync on our own.
I am expecting there might be some out-of-the-box connector in Microsoft Flow, Azure Logic Apps, or some other automation tool that will help me automate this.
I would suggest you check BCS (Business Connectivity Services) so your DB data can sync with a SharePoint external list.
https://learn.microsoft.com/en-us/sharepoint/make-external-list
Another thread with a demo:
https://www.c-sharpcorner.com/article/integrate-azure-sql-db-with-sharepoint-online-as-an-external-list-using-business/
There is a SQL Server connector; I suppose this is what you want. You could use the trigger "When an item is created" or "When an item is modified" to get the details of the SQL updates.
The output would look like the picture below shows.
For more information you could refer to this doc: SQL Server. Note that there are some known limitations when invoking triggers:
A ROWVERSION column is required for OnUpdatedItems
An IDENTITY column is required for OnNewItems
After the trigger, you could use the table details to update the SharePoint list.
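If you end up rolling your own sync instead of using Flow/Logic Apps, the ROWVERSION requirement above hints at the general pattern: keep a watermark of the last rowversion you processed and push anything newer to the list. A rough Python sketch (the connection string, table and column names are made up, and update_sharepoint_item is just a stub for whatever SharePoint API you use):

    # Rough ROWVERSION-based change poll, which is essentially what the connector
    # trigger does under the hood.
    import pyodbc

    CONN_STR = "Driver={ODBC Driver 18 for SQL Server};Server=...;Database=...;"  # placeholder

    def update_sharepoint_item(key, value):
        # Placeholder: push the change to the matching SharePoint list item here
        print(f"Would update SharePoint item {key} -> {value}")

    def poll_changes(last_rowversion: bytes) -> bytes:
        """Fetch rows changed since last_rowversion and push them to SharePoint."""
        with pyodbc.connect(CONN_STR) as conn:
            rows = conn.execute(
                "SELECT ItemKey, ItemValue, RowVer FROM dbo.MyTable "
                "WHERE RowVer > ? ORDER BY RowVer",
                last_rowversion,
            ).fetchall()
        for key, value, rowver in rows:
            update_sharepoint_item(key, value)
            last_rowversion = rowver        # advance the watermark
        return last_rowversion

    watermark = bytes(8)                    # start from 0x0000000000000000
    watermark = poll_changes(watermark)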
Hope this could help you.
I have the following setup:
An Azure Function in Python 3.6 is processing some entities, using the TableService class from the Python Table API (the new one for Cosmos DB) to check whether the currently processed entity is already in a storage table. This is done by invoking TableService.get_entity.
The get_entity method throws an exception each time it does not find an entity with the given row id (same as the entity id) and partition key.
If no entity with this id is found, I call insert_entity to add it to the table.
With this I am trying to process only entities that haven't been processed before (i.e. that are not yet logged in the table).
However, I consistently observe the function freezing after exactly 10 exceptions: it halts execution on the 10th invocation and does not continue processing for another minute or two!
I even changed the implementation so that, instead of doing a lookup first, I simply call insert_entity and let it fail when a duplicate row key is added. Surprisingly enough, the behaviour is exactly the same - on the 10th duplicate invocation the execution freezes and only continues after a while.
What is the cause of such behaviour? Is this some sort of protection mechanism on the storage account that kicks in? Or is it something to do with the Python client? To me it looks very much like something by design.
I wasn't able to find any documentation or settings page in the portal for influencing this behaviour.
I am wondering if it is possible to implement such logic using Table Storage at all? I can't justify spinning up an Azure SQL DB or Cosmos DB instance for such trivial functionality as checking whether an entity already exists in a table.
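For reference, the pattern in question boils down to this (simplified; I'm assuming the old azure-cosmosdb-table TableService and the not-found/conflict exceptions from azure.common; account, table and key names are placeholders):

    # Simplified check-then-insert logic, plus the insert-and-catch variant.
    from azure.common import AzureMissingResourceHttpError, AzureConflictHttpError
    from azure.cosmosdb.table.tableservice import TableService

    table = TableService(account_name="<account>", account_key="<key>")

    def process_if_new(entity_id: str) -> bool:
        """Return True only if this entity has not been logged in the table yet."""
        try:
            table.get_entity("ProcessedEntities",
                             partition_key="entities", row_key=entity_id)
            return False                          # already processed
        except AzureMissingResourceHttpError:
            pass                                  # not found -> this one is new

        # Variant 2 from above: just insert and treat a conflict as "seen before".
        try:
            table.insert_entity("ProcessedEntities",
                                {"PartitionKey": "entities", "RowKey": entity_id})
        except AzureConflictHttpError:
            return False
        return True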
Thanks
I know it might be a bit of a confusing title, but I couldn't come up with anything better.
The problem ...
I have an ADF pipeline with 3 activities: first a Copy to a DB, then two stored procedures. All are triggered daily and use WindowEnd to read the right directory or to pass a date to the SP.
There is no way I can get an import date into the XML files that we are receiving.
So I'm trying to add it in the first SP.
The problem is that once the first activity of the pipeline is done, the 2 other ones are started.
The 2nd activity in the same slice is the SP that adds the dates, but when history is being loaded the same pipeline also starts a copy for another slice.
So I'm getting mixed-up data.
As you can see in the 'Last Attempt Start'.
Does anybody have an idea on how to avoid this?
ADF Monitoring
In case somebody hits a similar problem..
I've solved the problem by working with daily named tables.
Each slice puts its data into a staging table with a _YYYYMMDD suffix, which can be set as "tableName": "$$Text.Format('[stg].[filesin_1_{0:yyyyMMdd}]', SliceEnd)".
So now there is no longer any problem with parallelism.
The only disadvantage is that the SPs that come after this first step have to work with dynamic SQL, as the table name they select from is variable.
But that wasn't a big coding problem.
Works like a charm !
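To illustrate the naming only (the target table, the dynamic statement and the example date are made up; the real SPs do this in T-SQL), this is how the $$Text.Format expression resolves per slice and the kind of statement the follow-up SPs end up building:

    # Illustration of the per-slice staging table name and the resulting dynamic SQL.
    from datetime import date

    slice_end = date(2017, 6, 1)                              # example slice
    staging_table = f"[stg].[filesin_1_{slice_end:%Y%m%d}]"   # -> [stg].[filesin_1_20170601]

    dynamic_sql = f"INSERT INTO dbo.FilesIn SELECT * FROM {staging_table};"
    print(staging_table)
    print(dynamic_sql)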
I am trying to utilise the For Each object in Logic Apps and I am not sure if this is possible:
Step 1 - SQL Stored Procedure is executed returning a number of rows
Step 2 - For Each based on the number of rows in Step 1
Step 2a - Execute a SQL Procedure based on values in loop
The problem I am finding is that when I add the loop, it states that there is no dynamic content available even though Step 1 will return results.
I've googled and there seems to be no information on using the for each loop in Logic Apps.
Any suggestions?
Thanks
We're working on a fix for this. Before the fix is available, try one of these two workarounds:
If you're comfortable with code view, create a for-each loop in the designer, then specify ResultSets as the input for the foreach.
Alternatively, add a Compose card after the SQL action and choose ResultSets as its input. Then use the output of the Compose card as the input for the for-each loop.
Unfortunately it's by design:
https://learn.microsoft.com/en-us/azure/automation/automation-runbook-types#powershell-runbooks
I resolved it by calling the SPs in parallel via Logic Apps.
(Note: when you do so, don't forget to hit 'Publish' on the runbook in order for the input variables passed in to be picked up correctly.)