Microsoft Flow with File Created Action is not triggered all the time - sharepoint

I have a OneDrive-synced local folder, and files added to this folder are synced to a SharePoint site. I also have a Flow that is triggered for every file added.
The detailed article about what I am achieving here can be found here.
The problem is that it is not triggered every time. Say I add 100 files; the Flow triggers only 78 times. Is there a limit on how many times a Flow can run within a given timeframe? Has anyone else faced this issue? Any help is really appreciated. #sharepoint #sharepointonline #flow #onedrive

Finally, after spending a few hours, I got it working with 120 files at the same time. The flow runs smoothly and efficiently now. Here is what I did.
Click on the three dots on your trigger in the flow, and then click on settings.
Now, in the new screen, enable Split On (without this my Flow was not getting triggered) and provide the array value; clicking the array dropdown shows the matching value. Then turn on Concurrency Control and set the Degree of Parallelism to the maximum (50 as of now).
According to Microsoft:
Concurrency Control limits the number of concurrent runs of the flow; leave it off to run as many as possible at the same time. Concurrency control changes the way new runs are queued. It cannot be undone once enabled.

Related

How to apply rate limiting (restricting access) to logged-in users taking screenshots of my website?

I maintain a record of the email/IP of users taking screenshots (keypresses with keyCode 44, i.e. PrintScreen) of our website.
Currently, I am blocking them based on their weekly screenshot count.
However, I'm thinking of applying daily rate limiting so that such users lose access for some time; the lockout grows if they cross the daily threshold multiple times, the daily threshold itself starts to shrink, and at some point they are permanently restricted (see the sketch after this question).
Is this the best way to reduce unrestrained screenshots of my website?
Thank You
I tried restricting users based on their weekly print-screen count. However, there were some users who were crossing the weekly threshold in only a few hours. I would definitely like to restrict such users immediately.
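For illustration, here is a minimal Python sketch of the escalating policy described in the question, assuming a server-side per-user store; the limits, multipliers, and names are hypothetical, not recommendations.

```python
import time
from dataclasses import dataclass
from typing import Optional

# Illustrative policy values, not recommendations.
BASE_DAILY_LIMIT = 20      # screenshots allowed per day at first
BASE_BLOCK_SECONDS = 3600  # first lockout lasts one hour
MAX_VIOLATIONS = 5         # after this many violations, block permanently


@dataclass
class UserState:
    daily_count: int = 0        # reset to 0 by a daily job (not shown)
    violations: int = 0         # how many times the user crossed the threshold
    blocked_until: float = 0.0  # unix timestamp; inf means permanently blocked


def record_screenshot(state: UserState, now: Optional[float] = None) -> bool:
    """Record one screenshot; return False if the user is currently blocked."""
    now = time.time() if now is None else now
    if now < state.blocked_until:
        return False

    state.daily_count += 1
    # The daily threshold shrinks with every past violation.
    limit = max(1, BASE_DAILY_LIMIT - 2 * state.violations)
    if state.daily_count <= limit:
        return True

    # Threshold crossed: record a violation and escalate the lockout.
    state.violations += 1
    state.daily_count = 0
    if state.violations >= MAX_VIOLATIONS:
        state.blocked_until = float("inf")  # permanent restriction
    else:
        # Each lockout is twice as long as the previous one.
        state.blocked_until = now + BASE_BLOCK_SECONDS * 2 ** (state.violations - 1)
    return False
```

A daily job would reset daily_count; how aggressive the thresholds and multipliers should be is a policy decision.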
I think you will have a hard time restricting access based on a client-side action. Screenshots can always be taken with OS tools such as the Snipping Tool, or with web scrapers like EyeWitness. It may be worth going back to the drawing board to get some better answers:
Why do you want to block people for taking screenshots?
Is this temporary restriction going to actually stop this happening?
How long do you want users to be restricted for?
Have you researched any methods of preventing screenshots from being taken, rather than trying to detect who takes them?
Have you warned users that taking screenshots will result in them being blocked? This may stop them in the first place.

Azure Logic Apps Trigger not fired

I am testing Azure Logic Apps for a use case where I want to parse new tweets and write them to SQL. The flow works seamlessly.
But the problem is that although I have selected 1 second for the "How often do you want to check for items?" field, the trigger does not seem to fire automatically. I have to press Run Trigger to capture new tweets.
Is there any idea how to overcome this problem?
Thank you
The "How often do you want to check for items" means the trigger will check if a new tweet was posted every 1 second but not run every 1 second. If the problem is not caused by misunderstanding, please check if any of the following limits are not met:
You can find more information on this tutorial.

Can I track unexpected lack of changes using change feeds, cosmos db and azure functions?

I am trying to understand change feeds in Azure. I see I can trigger an event when something changes in Cosmos DB. This is useful. However, in some situations I expect a document to be changed after a while: a question should get a status change indicating it has been answered, an order should get the status "confirmed" after a while, and a problem should get the status "resolved" or a priority change (to "low"). It is useful to trigger an event when such a change happens for a certain document. However, it is even more useful to trigger an event when such a change does not happen within a specified period (like 1 hour): a problem needs to be resolved after a while, an order needs to be confirmed after a while, and so on.
Can I use change feeds and Azure Functions for that too, or do I need something different? It is great that I can visualize changes (for example in Power BI) once they happen, but I am also interested in visualizing changes that do not occur when they are expected to.
Achieving that with the Change Feed doesn't sound possible because, as you describe, the Change Feed reacts to operations/events that do happen.
In your case it sounds as if you need an agent that runs every X amount of time (maybe an Azure Function with a TimerTrigger?) and executes a query to find items in a given state that have not been modified in the past Y pre-defined interval (possibly the interval associated with the TimerTrigger). This could be done by checking the _ts field of the documents or your own timestamp field; see https://stackoverflow.com/a/39214165/5641598.
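As a rough sketch of such an agent's query, assuming the Python azure-cosmos SDK; the database/container names, the "open" status value, and the one-hour window are placeholders, and in practice this would run inside a TimerTrigger function:

```python
import os
import time

from azure.cosmos import CosmosClient

# Hypothetical connection details and names; adjust to your account.
client = CosmosClient(os.environ["COSMOS_ENDPOINT"], credential=os.environ["COSMOS_KEY"])
container = client.get_database_client("mydb").get_container_client("problems")

# Find documents still in a given state whose last write (_ts, in epoch seconds)
# is older than the expected window (here: 1 hour).
cutoff = int(time.time()) - 3600
stale_items = container.query_items(
    query="SELECT c.id FROM c WHERE c.status = @status AND c._ts < @cutoff",
    parameters=[
        {"name": "@status", "value": "open"},
        {"name": "@cutoff", "value": cutoff},
    ],
    enable_cross_partition_query=True,
)

for item in stale_items:
    # React to the *absence* of the expected change, e.g. raise an alert.
    print(f"No status change within the expected window: {item['id']}")
```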
If your goal is to just deploy it on a dashboard, you could query using Power BI too.
As long as you don't need too much time precision (Change Feed notifications are usually delayed by a few seconds), the Azure Cosmos DB Change Feed could be used as a solution for this task, but it would require some extra work from the Microsoft team to also support capturing TTL-expiration (deletion) events.
A potential solution, if the Change Feed were to capture such TTL expiration events, would be: whenever you insert (or in your use case: change priority of) a document for which you want to monitor lack of changes, you also insert another document (possibly in another collection) that acts as a timer, specifying a TTL of 1h.
You would delete the timer document, either manually or by consuming the Change Feed, whenever a change actually happens.
You could then consume the TTL expiration event from the Change Feed and conclude that, if the TTL expired, there were no changes in the specified time window.
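For reference, creating and cancelling such a timer document with the Python azure-cosmos SDK might look like the sketch below; the endpoint, key, container, and field names are made up, the container must have TTL enabled and (here) be partitioned on /id, and the missing piece remains reacting to the expiration itself through the Change Feed.

```python
from azure.cosmos import CosmosClient

# Hypothetical endpoint, key, and container names.
client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
timers = client.get_database_client("mydb").get_container_client("timers")

# When the monitored document is inserted or its priority changes,
# create a companion "timer" document that expires after one hour.
timers.upsert_item({
    "id": "problem-123-timer",
    "monitoredId": "problem-123",
    "ttl": 3600,  # per-item TTL in seconds; requires TTL enabled on the container
})

# If a real change arrives in time (e.g. observed on the Change Feed),
# cancel the timer by deleting the document before it expires.
# Assumes the container is partitioned on /id.
timers.delete_item(item="problem-123-timer", partition_key="problem-123-timer")
```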
If you'd like this feature, consider voting for issues such as https://github.com/Azure/azure-cosmos-dotnet-v2/issues/402 and feature requests such as https://feedback.azure.com/forums/263030-azure-cosmos-db/suggestions/14603412-execute-a-procedure-when-ttl-expires, which would make the Change Feed a perfect fit for scenarios such as yours. Sadly, it is not available yet :(
TL;DR: No, the Change Feed as it stands is not the right fit for your use case; it would need some extra functionality that is planned but not yet implemented.
PS. In case you'd like to know more about the Change Feed and its main use cases anyways, you can check out this article of mine :)

Spotfire idle screen deactivation

In regard to the Spotfire inactivity screen:
Can I disable that feature somehow? I need a screen that always displays a dashboard with no interaction, and I cannot do that because the server logs out after some inactivity time.
Will a simple mouse jiggler do the trick?
Besides changing the global inactivity timeout (probably not an ideal solution, but I will happily edit to include instructions), you (or your Spotfire admin) can put the analysis on a Scheduled Update. I am going to assume that your Spotfire instance is already configured to allow Scheduled Updates as per the admin manual.
I've done this before and I'm 95% sure this will sidestep the inactivity timeout, but it's been a while, so please try this and reply if it doesn't work for some reason.
To create a scheduled update for an analysis:
Log in to the Spotfire server as an administrator or a user with permissions to create Scheduled Updates.
Click Schedules and Routing.
Click the Create Rule button.
Leave the selector on FILE and click Next.
Give the schedule a name and browse to the file in the library.
Leaving the other settings at their defaults, click Create new schedule.
Tick the days you'd like this analysis to remain loaded and set the time range it should be loaded on those days. Set "Check for updates every ..." to your update interval (I suggest something like five minutes, but it's up to you), then click Save.
You're done! Click Save again.
Now, while you have the analysis open, it will reload itself every five minutes with the current data.
I would avoid setting the interval to something like one minute, as this could consume too many server resources depending on the amount of data being loaded, how many users you have, the complexity of the analysis, and so on. Keep in mind that Spotfire is not for "real time" analysis but for data at rest. We (TIBCO) do offer products for real-time data visualization, such as LiveView and StreamBase, but they are separate from Spotfire (from a licensing perspective; I believe they can be integrated in fun ways).

Creating alerts for specific exceptions with Application Insight (Microsoft Azure)

I'm relatively new to Azure and am trying to see if there's a way to create notifications in (or close to) real time whenever only certain exceptions occur, using Application Insights.
Right now I'm able to track exceptions and to trigger metric alerts when a threshold of exceptions occurs over a certain amount of time, but I can't figure out how to make these alerts sensitive to only certain kinds of exceptions. My first thought was to add properties to an exception when tracking it with the telemetry client's 'TrackException' method and then create an alert specific to that property, but I'm still unable to figure out how to do it.
Any help is appreciated.
A couple of years later, there's now a way to mostly do this with built-in functionality.
There isn't an easy way to do this on every exception as it occurs, though. Some apps have literally billions of exceptions per day, so evaluating your function every time an exception occurs would be pretty expensive.
Things like this are generally done with custom alerts that do a query and see if anything that meets the criteria exists in the new time period.
You'd do this with "log alerts", documented here: https://learn.microsoft.com/en-us/azure/azure-monitor/platform/alerts-unified-log
Instead of getting an email every time a specific exception occurs, your query runs every N minutes, and if any rows meet the criteria you get a single mail (or whatever action you have the alert configured to take); you keep getting mails for every N-minute window in which matching rows are found.
There are two options:
Call TrackMetric (providing some metric name) when an exception of a particular type happens, in addition to TrackException, and then configure an alert based on this metric (see the sketch below).
Write a tool/service/Azure Function that runs a query in Application Insights Analytics every few minutes and posts the result as a metric (using TrackMetric), then configure an alert from the portal.
Right now the Application Insights team is working on providing #2 out of the box.
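A minimal sketch of option #1, shown here with the older applicationinsights Python package (the same idea applies to TrackException/TrackMetric in the .NET SDK); the instrumentation key, metric name, and exception type are hypothetical.

```python
from applicationinsights import TelemetryClient

tc = TelemetryClient("<instrumentation key>")  # placeholder key


class PaymentDeclinedError(Exception):
    """Hypothetical exception type we specifically want to alert on."""


def process_order(order):
    try:
        raise PaymentDeclinedError(order)  # stand-in for real business logic
    except PaymentDeclinedError:
        tc.track_exception()                   # regular exception telemetry
        tc.track_metric("PaymentDeclined", 1)  # extra metric to alert on
        tc.flush()
        raise
```

You would then create a metric alert on the "PaymentDeclined" custom metric in the portal.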
