How to avoid the MS Flow trigger loop?

There is a trigger "Item created or modified", and just after the trigger we call an action that updates the same item. That update will certainly fire the trigger again, so the flow eventually goes into a loop. Please suggest a way to prevent this looping behavior.

This is known behavior for SharePoint lists, and the product request to change it was declined as well. Normally, an update trigger on a database table is combined with filtering attributes that decide whether the scenario applies, and in addition the trigger has to avoid updating those same attributes again within the transaction.
What you can do is keep a hidden column in the list and check it to stop the infinite loop, as discussed in the community thread
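If your environment supports it (this is an assumption about your setup, not something from the thread), Power Automate's trigger conditions can express the same check declaratively: a condition such as `@not(equals(triggerOutputs()?['body/FlowUpdated'], true))` on the "created or modified" trigger, where `FlowUpdated` is the hypothetical hidden column the flow sets before performing its own update, stops the flow from firing on its own writes.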

Related

Can I track unexpected lack of changes using change feeds, Cosmos DB and Azure Functions?

I am trying to understand change feeds in Azure. I see I can trigger an event when something changes in Cosmos DB. This is useful. However, in some situations, I expect a document to be changed after a while: a question should get a status change indicating it has been answered, an order should after a while get the status "confirmed", and a problem should get the status "resolved" or have its priority changed (to "low"). It is useful to trigger an event when such a change happens for a certain document. However, it is even more useful to trigger an event when such a change does not happen within a specified while (like 1 hour). A problem needs to be resolved after a while, an order needs to be confirmed after a while, etc. Can I use change feeds and Azure Functions for that too? Or do I need something different? It is great that I can visualize changes (for example in Power BI) once they happen, but I am also interested in visualizing changes that do not occur when they are expected to occur.
Achieving that with the Change Feed doesn't sound possible, because, as you describe it, the Change Feed reacts to operations/events that actually happen.
In your case it sounds as if you need an agent that runs every X amount of time (maybe an Azure Function with a TimerTrigger?) and executes a query to find items in state X that have not been modified within a pre-defined interval Y (possibly the time interval associated with the TimerTrigger). This could be done by checking the _ts field of the state documents or your own timestamp field; see https://stackoverflow.com/a/39214165/5641598.
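A minimal sketch of such an agent, using the Azure Functions v4 TypeScript programming model and the @azure/cosmos SDK; the database, container, status value, and schedule are illustrative assumptions, not details from the question:

```typescript
import { app, InvocationContext, Timer } from "@azure/functions";
import { CosmosClient } from "@azure/cosmos";

// Hypothetical account/database/container names; adjust to your setup.
const client = new CosmosClient(process.env.COSMOS_CONNECTION_STRING!);
const container = client.database("orders-db").container("orders");

app.timer("staleItemCheck", {
  schedule: "0 */15 * * * *", // run every 15 minutes
  handler: async (timer: Timer, context: InvocationContext): Promise<void> => {
    // _ts is the epoch-seconds timestamp Cosmos DB maintains on every item.
    const cutoff = Math.floor(Date.now() / 1000) - 60 * 60; // 1 hour ago
    const { resources: stale } = await container.items
      .query({
        query: "SELECT c.id FROM c WHERE c.status = @status AND c._ts < @cutoff",
        parameters: [
          { name: "@status", value: "pending" },
          { name: "@cutoff", value: cutoff },
        ],
      })
      .fetchAll();
    for (const item of stale) {
      context.log(`Item ${item.id} was not confirmed within 1 hour`);
    }
  },
});
```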
If your goal is just to show this on a dashboard, you could also query with Power BI.
As long as you don't need high time precision for this task (Change Feed notifications are usually delayed by a few seconds), the Azure Cosmos DB Change Feed could easily be used as a solution, but it would require some extra work from the Microsoft team to also support capturing TTL-expiration (deletion) events.
A potential solution, if the Change Feed were able to capture such TTL-expiration events, would be: whenever you insert (or, in your use case, change the priority of) a document whose lack of changes you want to monitor, you also insert another document (possibly in another collection) that acts as a timer, with a TTL of 1 hour.
You would delete the timer document either manually or by consuming the Change Feed, in case a change actually happened.
You could then just as easily consume the TTL-expiration event from the Change Feed and conclude that, since the TTL expired, there were no changes in the specified time window.
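To make the timer-document idea concrete, here is a minimal TypeScript sketch with the @azure/cosmos SDK, assuming a hypothetical timers container that has TTL enabled and is partitioned on id. Only arming and disarming the timer are shown, since the TTL-expiration events themselves are the missing feature described above:

```typescript
import { CosmosClient } from "@azure/cosmos";

const client = new CosmosClient(process.env.COSMOS_CONNECTION_STRING!);
// Hypothetical container with "time to live" enabled and /id as partition key.
const timers = client.database("monitoring-db").container("timers");

// Arm a 1-hour timer whenever the monitored document is inserted or
// its priority changes.
async function armTimer(problemId: string): Promise<void> {
  await timers.items.upsert({
    id: `timer-${problemId}`, // deterministic id so we can find it later
    problemId,
    ttl: 3600, // seconds; Cosmos DB deletes the item once this expires
  });
}

// Disarm the timer when a real change shows up on the Change Feed.
async function disarmTimer(problemId: string): Promise<void> {
  await timers.item(`timer-${problemId}`, `timer-${problemId}`).delete();
}
```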
If you'd like this feature, consider voting for issues such as this one: https://github.com/Azure/azure-cosmos-dotnet-v2/issues/402 and feature requests such as this one: https://feedback.azure.com/forums/263030-azure-cosmos-db/suggestions/14603412-execute-a-procedure-when-ttl-expires, which would make the Change Feed a perfect fit for scenarios like yours. Sadly it is not available yet :(
TL;DR No, the Change Feed as it stands is not the right fit for your use case. It would need some extra functionality that is planned but not implemented yet.
PS. In case you'd like to know more about the Change Feed and its main use cases anyway, you can check out this article of mine :)

Avoiding race condition for inserting model to DB on complex conditions

We are trying to create an algorithm/heuristic that schedules a delivery for a certain time period, but there is definitely a race condition here, whereby two conflicting scheduled items could be written to the DB, because the write is not really atomic.
The only way to truly prevent race conditions, as far as I know, is to make the insert operation atomic.
The server receives a request to schedule something for a certain time period, and the server has to check if that time period is still available before it writes the data to the DB. But in that time the server could get a similar request and end up writing conflicting data.
How to circumvent this? Is there some way to create some script in the DB itself that hooks into the write operation to make the whole thing atomic? By putting a locking mechanism on that script? What makes the whole thing non-atomic is the read and the wire time between the server and the DB.
Whenever I run into a race condition, one solution immediately comes to mind: a QUEUE.
Step 1) Instead of adding the data to the database directly, add it to a queue without checking anything.
Step 2) A separate reader reads from the queue, checks the DB for any conflict, and takes the necessary action.
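A minimal sketch of the two steps in TypeScript, assuming a single Node process and hypothetical hasOverlap/insertDelivery data-access helpers; a production system would use a durable queue (RabbitMQ, SQS, etc.) rather than an in-memory array:

```typescript
type ScheduleRequest = { start: Date; end: Date };

// Hypothetical data-access helpers; placeholders for the real DB layer.
declare function hasOverlap(req: ScheduleRequest): Promise<boolean>;
declare function insertDelivery(req: ScheduleRequest): Promise<void>;

const queue: ScheduleRequest[] = [];
let draining = false;

// Step 1: enqueue without checking anything.
function enqueue(req: ScheduleRequest): void {
  queue.push(req);
  void drain();
}

// Step 2: a single consumer drains the queue serially, so the conflict
// check and the insert can never interleave with another request's.
async function drain(): Promise<void> {
  if (draining) return;
  draining = true;
  while (queue.length > 0) {
    const req = queue.shift()!;
    if (!(await hasOverlap(req))) {
      await insertDelivery(req);
    } // else: reject the request and notify the caller
  }
  draining = false;
}
```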
This is one way to solve it. If you implement a better solution, please do share it.
Hope that helps

A synchronization issue between requests in express/node.js

I've run into a tricky synchronization issue in node.js for which I've not been able to find an elegant solution:
I set up an express/node.js web app that retrieves statistics from a one-row database table.
If the table is empty, populate it with a long calculation task
If the record in the table is older than 15 minutes, update it with a long calculation task
Otherwise, respond with a web page showing the record in DB.
The problem is that when multiple users issue requests simultaneously and the record is old, the long calculation task is executed once per request instead of just once.
Is there an elegant way to have only one request trigger the calculation task while all the others wait for the updated DB record?
Yes, there is: it's called a lock.
Add a column to your table, say lock, of timestamp type. When a process starts working with the record, put now + timeout into it (as a rule of thumb, I choose the timeout to be 2x the average processing time). When the process finishes, set the column back to NULL.
At the beginning of processing, check that column. If the lock > now condition is satisfied, return some status code to the client, such as 409 Conflict (don't force the client to wait; it's a bad user experience when he doesn't know what's going on, unless processing time is really short). Otherwise start processing; ideally the processing takes place in a separate thread/process so the user won't have to wait, and you respond with an appropriate status code like 202 Accepted.
The now + timeout value is needed in case the processing process crashes (so we avoid deadlocks). Also remember that you have to "check and set" this lock column in a transaction because of race conditions (which might be quite difficult if you are working with MongoDB-like databases).
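Here is a minimal sketch of the whole pattern in TypeScript, assuming Express, PostgreSQL via the pg driver, and a single-row stats table with a lock timestamp column (all names are illustrative). A single atomic UPDATE plays the role of the "check and set," so no explicit transaction is needed:

```typescript
import express from "express";
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from the PG* env vars
const server = express();

// Hypothetical placeholder for the long calculation task.
declare function recalculateStats(): Promise<void>;

// Atomic "check and set": the UPDATE only succeeds while no valid lock is
// held, so exactly one concurrent request can acquire it.
async function tryAcquireLock(): Promise<boolean> {
  const result = await pool.query(
    `UPDATE stats
        SET "lock" = now() + interval '2 minutes' -- ~2x average processing time
      WHERE "lock" IS NULL OR "lock" < now()` // "lock" quoted: keyword clash
  );
  return result.rowCount === 1; // we hold the lock iff we updated the row
}

server.get("/stats", async (_req, res) => {
  // The staleness check from the question is omitted to keep the sketch short.
  if (await tryAcquireLock()) {
    recalculateStats()
      .catch((err) => console.error("recalculation failed", err))
      .finally(() => pool.query(`UPDATE stats SET "lock" = NULL`)); // release
    res.status(202).send("Recalculation started");
  } else {
    res.status(409).send("Recalculation already in progress");
  }
});

server.listen(3000);
```

Kicking off recalculateStats() without awaiting it is what keeps the request handler fast: the 202 response tells the client the work was accepted, and the lock is released in finally() whether the task succeeds or fails.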

How do I stall until a SharePoint List Item is Deleted with SPLongOperation?

I have a workflow which creates a task and deletes it after the task is edited and its useful information is acquired. I created a custom edit form for the task, so I have an SPLongOperation that I can use to stall the page. This is necessary because, if I don't stall the page in some fashion, the person will see the task in the task list for the brief moment before the workflow gets to delete it, and that is bad. So some code to stall the page until the task is fully deleted is necessary.
I have currently implemented a solution for this, but I am unsatisfied with the approach. It basically boils down to a while loop that calls SPList.GetItemById until it throws an exception. Deliberately causing an exception doesn't sit well with me, but I cannot think of a faster method for this check. I'm looking for an alternative that is at least comparably fast and, preferably, doesn't rely on catching exceptions.
How about using an SPQuery to look up the ID, and if it doesn't find the item, continuing? This doesn't throw any exceptions.
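The answer above refers to the server-side object model (SPQuery in C#). As a rough client-side analogue of the same idea — query by ID and check for an empty result set instead of catching an exception — here is a TypeScript sketch, assuming SharePoint 2013+/Online REST is available, the code runs in an authenticated page context, and the list title is hypothetical:

```typescript
// Provided as a global on SharePoint pages.
declare const _spPageContextInfo: { webAbsoluteUrl: string };

// Query by ID and check for an empty result set; nothing is thrown when
// the item no longer exists.
async function taskStillExists(listTitle: string, itemId: number): Promise<boolean> {
  const url =
    `${_spPageContextInfo.webAbsoluteUrl}/_api/web/lists/` +
    `getByTitle('${listTitle}')/items?$filter=Id eq ${itemId}&$select=Id`;
  const response = await fetch(url, {
    headers: { Accept: "application/json;odata=nometadata" },
  });
  const data = await response.json();
  return data.value.length > 0; // empty array means the item was deleted
}

// Poll once a second until the workflow has deleted the task.
async function waitForDeletion(listTitle: string, itemId: number): Promise<void> {
  while (await taskStillExists(listTitle, itemId)) {
    await new Promise((resolve) => setTimeout(resolve, 1000));
  }
}
```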

SharePoint StateMachine: Handling multiple responses to multiple created tasks

I created a StateMachine workflow for SharePoint, and at one state I create multiple tasks using a replicator. The number of tasks created is variable.
I need to handle the OnTaskChanged event for all the tasks I created which seems impossible as one event handler can only be associated with one task.
I could restrict the number of tasks that can be created and handle them with a fixed number of handlers, or create a sequential workflow, but I consider both of those last resorts.
Please do let me know if this is even supported or if there are any workarounds.
Reference Link: http://social.msdn.microsoft.com/Forums/en-US/sharepointworkflow/thread/a174ac5f-03ed-4e27-998b-bbdb7d01d09b/
It won't work for the reasons you laid out. The workaround is to restructure your state machine workflow as a sequential workflow (which may not be possible) or to switch to item event receivers (which may not work for you). I've actually blogged about this topic: Workflow Nuttiness vol. 1
Hilariously, I just checked the MSDN forums link you provided, and sure enough, I'm in that thread, asking "so, uh, I guess we all rewrite to sequential workflows?" And there's no better answer in that thread either :)
