So you can view past runs of Logic Apps (LA), but if a loop (with many steps within it) is present in your Logic App and you stop the LA run (because it seems to run forever or isn't doing what you expect), you can't see what happened inside the loop.
I want to be able to track the Logic App's progress. I thought about adding a Table Storage step between every step to log where it's at; this would work, but that's a daft amount of work just to see what your LA is doing.
I tried adding diagnostics/Log Analytics to the LA, but it just seems to give a broader view of the LA runs, not the detail I need. Can someone tell me if diagnostics can give me the detail I'm looking for, or if there is another way of doing this? There must be a way.
Thanks.
The past runs view should allow you to iterate through the iterations of the loop, showing the detail of the actions within.
If this doesn't suffice, you can also add tracked properties to log specific values from within the loop execution to Log Analytics, in an AzureDiagnostics table.
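For example, tracked properties are declared directly on an action in the workflow definition. A minimal sketch with hypothetical action and property names (tracked properties can reference the action's own inputs and outputs):

```json
"Insert_Entity": {
    "type": "ApiConnection",
    "inputs": { "...": "..." },
    "trackedProperties": {
        "fileName": "@action().inputs.body.fileName",
        "status": "@action().outputs.body.status"
    }
}
```

These values then show up per-iteration in Log Analytics, so you can see how far the loop got without adding extra logging steps.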
I have a OneDrive-synced local folder, and the files are synced to a SharePoint site when we add files to this folder. I also have a Flow that gets triggered for every file added.
A detailed article about what I am trying to achieve can be found here.
The problem is that it is not triggered every time. Let's say I added 100 files, and the Flow triggered only 78 times. Is there any limitation on Flow such that it can run only so many times in a given time frame? Has anyone else faced this issue? Any help is really appreciated. #sharepoint #sharepointonline #flow #onedrive
Finally, after spending a few hours, I got it working with 120 files at the same time. The flow runs smoothly and efficiently now. Here is what I did.
Click on the three dots on your trigger in the flow, and then click on Settings.
Now, in the new screen, enable Split On (without this my Flow was not getting triggered) and provide the array value; clicking on the array dropdown will give you the matching value. Then turn on Concurrency and set the Degree of Parallelism to the maximum (50 as of now). The equivalent settings in the underlying workflow definition are sketched below.
According to Microsoft:
Concurrency control limits the number of concurrent runs of the flow, or you can leave it off to run as many as possible at the same time. Concurrency control changes the way new runs are queued. It cannot be undone once enabled.
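For reference, these two settings correspond to properties on the trigger in the underlying workflow definition. A rough sketch, with the trigger name and inputs as placeholders:

```json
"triggers": {
    "When_a_file_is_created": {
        "type": "ApiConnection",
        "recurrence": { "frequency": "Minute", "interval": 1 },
        "splitOn": "@triggerBody()?['value']",
        "runtimeConfiguration": {
            "concurrency": { "runs": 50 }
        },
        "inputs": { "...": "..." }
    }
}
```

Split On fans each item of the trigger's array output into its own run, and the concurrency setting caps how many of those runs execute in parallel.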
I am testing Azure Logic Apps for a use case where I want to parse new tweets and write them to SQL. The flow works seamlessly.
But the problem is that although I have selected 1 second for the "How often do you want to check for items?" field, it seems the triggers are not fired automatically. I have to press Run Trigger to capture new tweets.
Does anyone have an idea how to overcome this problem?
Thank you
The "How often do you want to check for items" means the trigger will check if a new tweet was posted every 1 second but not run every 1 second. If the problem is not caused by misunderstanding, please check if any of the following limits are not met:
You can find more information on this tutorial.
I am trying to understand change feeds in Azure. I see I can trigger an event when something changes in Cosmos DB. This is useful. However, in some situations I expect a document to be changed after a while: a question should get a status change indicating it has been answered, an order should eventually get the status "confirmed", and a problem should get the status "resolved" or have its priority changed (to "low").
It is useful to trigger an event when such a change happens for a certain document. However, it is even more useful to trigger an event when such a change does not happen within a specified while (like 1 hour): a problem needs to be resolved after a while, an order needs to be confirmed after a while, etc. Can I use change feeds and Azure Functions for that too, or do I need something different? It is great that I can visualize changes (for example in Power BI) once they happen, but I am also interested in visualizing changes that do not occur when they are expected to.
Achieving that with the Change Feed doesn't sound possible, because, as you describe it, the Change Feed reacts to operations/events that actually happen.
In your case it sounds as if you need an agent that runs every X amount of time (maybe an Azure Function with a TimerTrigger?) and executes a query to find items in a given state that have not been modified within a pre-defined interval (possibly the time interval associated with the TimerTrigger). This could be done by checking the _ts field of the state documents or your own timestamp field; see https://stackoverflow.com/a/39214165/5641598.
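A minimal sketch of such an agent, assuming the Microsoft.Azure.Cosmos .NET SDK; the container name, the "pending" status, the one-hour window, and the 15-minute schedule are all placeholder assumptions:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class StaleItemChecker
{
    // Runs every 15 minutes; tune the schedule to your precision needs.
    [FunctionName("StaleItemChecker")]
    public static async Task Run([TimerTrigger("0 */15 * * * *")] TimerInfo timer, ILogger log)
    {
        // In production you'd cache the client in a static field instead of recreating it.
        var client = new CosmosClient(Environment.GetEnvironmentVariable("CosmosConnection"));
        Container container = client.GetContainer("mydb", "orders");

        // _ts is the epoch-seconds timestamp Cosmos DB maintains on every document.
        long cutoff = DateTimeOffset.UtcNow.AddHours(-1).ToUnixTimeSeconds();
        var query = new QueryDefinition(
                "SELECT c.id FROM c WHERE c.status = @status AND c._ts < @cutoff")
            .WithParameter("@status", "pending")
            .WithParameter("@cutoff", cutoff);

        using FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(query);
        while (iterator.HasMoreResults)
        {
            foreach (var item in await iterator.ReadNextAsync())
                log.LogWarning("Item {id} has not changed state within the last hour.", item.id);
        }
    }
}
```

Instead of just logging, the function could raise an event (queue message, email, etc.) for each stale item it finds.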
If your goal is just to display it on a dashboard, you could run the query from Power BI too.
As long as you don't need too much time precision for this task (Change Feed notifications are usually delayed by a few seconds), the Azure Cosmos DB Change Feed could easily be used as a solution, but it would require some extra work from the Microsoft team to also support capturing deletion/TTL-expiration events.
A potential solution, if the Change Feed were to capture such TTL expiration events, would be: whenever you insert (or in your use case: change priority of) a document for which you want to monitor lack of changes, you also insert another document (possibly in another collection) that acts as a timer, specifying a TTL of 1h.
You would delete the timer document manually or by consuming the Change Feed for changes, in case a change actually happened.
You could also easily consume from the Change Feed the TTL expiration event and assert that if the TTL expired then there were no changes in the specified time window.
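To make the pattern concrete: the timer-document half is possible today; only reacting to the TTL expiration is what's missing. A sketch with made-up names, assuming the Microsoft.Azure.Cosmos .NET SDK, TTL enabled on the timers container, and /orderId as its partition key:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class ChangeTimers
{
    // Whenever the monitored document is inserted or changed, also insert a
    // "timer" document that Cosmos DB will delete automatically after one hour.
    public static Task ArmAsync(Container timersContainer, string orderId) =>
        timersContainer.CreateItemAsync(
            new { id = $"timer-{orderId}", orderId, ttl = 3600 }, // TTL in seconds
            new PartitionKey(orderId));

    // Call this when a real change arrives in time, so the timer never "fires".
    public static Task DisarmAsync(Container timersContainer, string orderId) =>
        timersContainer.DeleteItemAsync<object>($"timer-{orderId}", new PartitionKey(orderId));
}
```

If the Change Feed ever surfaces the TTL deletion, its arrival would be your "no change happened within the window" signal.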
If you'd like this feature, you should consider voting for issues such as this one: https://github.com/Azure/azure-cosmos-dotnet-v2/issues/402 and feature requests such as this one: https://feedback.azure.com/forums/263030-azure-cosmos-db/suggestions/14603412-execute-a-procedure-when-ttl-expires, which would make the Change Feed a perfect fit for scenarios such as yours. Sadly, it is not available yet :(
TL;DR No, the Change Feed as it stands would not be the right fit for your use case. It would need some extra functionality that is planned but not yet implemented.
PS. In case you'd like to know more about the Change Feed and its main use cases anyway, you can check out this article of mine :)
I am trying to troubleshoot a problem where I run an Azure Function locally on my machine while it is disabled in the Portal. After sending some data through, I can see that it successfully hits my local Azure Function, but it never hits it again after that. Strangely enough, the data appears to still go through my channels of Queue - Function - Queue - Function, but it never hits the breakpoints on my local machine after the first successful run. Triple-checking the Portal, I can see that it is definitely disabled, which leads me to believe there might be another instance of the Azure Function running about. I've confirmed that no other devs are working on it, so I've also ruled that out...
Looking at https://[MY_FUNCTION_NAME].scm.azurewebsites.net/azurejobs/#/functions, I see that there seem to be duplicates of some of my functions, with varying statistics on the repeats. My guess is that Azure might be tracking my local instances when I start them, but I see the "Successful" green numbers go up on both versions of the function when I pass data through. In the screenshot I blocked out the function names but marked the matching ones with matching colors (the blacked-out bars are just single functions I was too lazy to color-code). The red circles indicate the functions of interest that have different success statistics.
Has anyone else run into this issue?
Turns out there were duplicate functions in a deployment slot... Someone put them there to set up deployment options, but they left the project and never documented it.
Hope this saves someone some frustrations at some point!
I'm relatively new to Azure and am trying to see if there's a way to create notifications in real time (or close to it) when only certain exceptions occur, using Application Insights.
Right now I'm able to track exceptions and to trigger metric alerts when a threshold of exceptions occurs over a certain amount of time, but I can't seem to figure out how to make these alerts sensitive to only certain kinds of exceptions. My first thought was to add properties to an exception while using a telemetry client to track it with the TrackException method, and then create an alert specific to that property, but I'm still unable to figure out how to do it.
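Roughly what I tried, with hypothetical names (attaching the property works fine; it's the alerting side I can't work out):

```csharp
using System;
using System.Collections.Generic;
using Microsoft.ApplicationInsights;

public class OrderProcessor
{
    private readonly TelemetryClient _telemetry = new TelemetryClient();

    public void Process()
    {
        try
        {
            // ... the operation that may throw ...
        }
        catch (Exception ex)
        {
            // Attach a custom property, hoping an alert can filter on it.
            _telemetry.TrackException(ex, new Dictionary<string, string>
            {
                ["AlertCategory"] = "Critical"
            });
            throw;
        }
    }
}
```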
Any help is appreciated.
A couple of years later, there's now a way to mostly do this with built-in functionality.
There isn't an easy way to do this on every exception as it occurs, though. Some apps have literally billions of exceptions per day, so evaluating your function every time an exception occurs would be pretty expensive.
Things like this are generally done with custom alerts that run a query and check whether anything meeting the criteria exists in the new time period.
You'd do this with "log alerts", documented here: https://learn.microsoft.com/en-us/azure/azure-monitor/platform/alerts-unified-log
Instead of getting an email every time a specific exception occurs, your query runs every N minutes; if any rows meet the criteria, you get a single email (or whatever you have the alert configured to do), and you keep getting emails every N minutes as long as rows that meet the criteria are found.
There are two options:
1. Call TrackMetric (providing some metric name) when an exception of a particular type happens, in addition to TrackException. Then configure an alert based on this metric; a sketch follows below.
2. Write a tool/service/Azure Function which every few minutes runs a query in Application Insights Analytics and posts the result as a metric (using TrackMetric). Then configure an alert from the portal.
Right now the Application Insights team is working on providing #2 out of the box.
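A minimal sketch of option 1, assuming the .NET SDK; the exception type and metric name are made up for illustration:

```csharp
using System;
using Microsoft.ApplicationInsights;

// Hypothetical exception type used for illustration.
public class PaymentGatewayException : Exception { }

public class PaymentService
{
    private readonly TelemetryClient _telemetry = new TelemetryClient();

    public void Charge()
    {
        try
        {
            // ... call the payment gateway ...
        }
        catch (PaymentGatewayException ex)
        {
            _telemetry.TrackException(ex);
            // Emit a named metric alongside the exception; a metric alert
            // configured on "PaymentGatewayFailures" can then fire on it.
            _telemetry.TrackMetric("PaymentGatewayFailures", 1);
            throw;
        }
    }
}
```

Because the metric only counts the exception type you care about, the alert threshold never sees unrelated exceptions.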