Concurrent editing or locking cells in Excel with REST APIs

I'm working on a web application as a front end for an Excel sheet. The REST APIs seem to be quite clear. But I am not sure how to handle concurrency correctly. I want to prevent two clients from accidentally overwriting each other's data. I need some kind of primary key which, in the worst case, could be edited by two users. What is the correct way to handle that with the Microsoft Graph?
Right now I have in mind to do some kind of double locking, so that I allocate a key and then check after a second whether it was overwritten. But that seems quite hacky, and I'm sure there is a way to lock cells so that two users cannot edit the same cells.

Normally you do this with an ETag and update only when the If-Match precondition is satisfied. When somebody changes the resource, the ETag changes and the old ETag no longer matches. There can still be a short window in which both requests see the old ETag, so there is no perfect solution.
In the case of the MS Graph API I see an "@odata.etag" property on the resource and its sub-resources, so I assume they use it for this, and maybe they send the ETag header for the actual resource too. At least it works this way for this MS Web API, so even though that is a different product, I think they use the same solution for the Graph API: https://learn.microsoft.com/en-us/power-apps/developer/data-platform/webapi/perform-conditional-operations-using-web-api#bkmk_DetectIfChanged
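For illustration, here is a minimal sketch of that pattern in Python with the requests library against the Graph workbook range endpoint. Whether this particular endpoint returns and honors the ETag is my assumption (the general read-then-conditionally-write pattern is standard HTTP), and the token, drive item id and range address are placeholders:

```python
# Minimal sketch of optimistic concurrency with ETag / If-Match against the
# Microsoft Graph workbook API. Whether the workbook range endpoint honors
# If-Match is an assumption; TOKEN, ITEM_ID and the range address are placeholders.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ITEM_ID = "<drive-item-id>"  # hypothetical workbook file id
URL = f"{GRAPH}/me/drive/items/{ITEM_ID}/workbook/worksheets/Sheet1/range(address='A1')"
HEADERS = {"Authorization": "Bearer <TOKEN>"}

# 1. Read the current state and remember its ETag.
resp = requests.get(URL, headers=HEADERS)
resp.raise_for_status()
etag = resp.json().get("@odata.etag")
if etag is None:
    raise RuntimeError("no @odata.etag on this resource, so the assumption does not hold")

# 2. Attempt a conditional update; the server should reject it with
#    412 Precondition Failed if someone changed the range in the meantime.
update = requests.patch(
    URL,
    headers={**HEADERS, "If-Match": etag},
    json={"values": [["new value"]]},
)
if update.status_code == 412:
    print("Conflict: somebody else edited the range, re-read and retry")
else:
    update.raise_for_status()
```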

Related

Can I track unexpected lack of changes using change feeds, Cosmos DB and Azure Functions?

I am trying to understand change feeds in Azure. I see I can trigger an event when something changes in Cosmos DB. This is useful. However, in some situations, I expect a document to be changed after a while. A question should get a status change indicating it has been answered. After a while an order should get the status change "confirmed", and a problem should get the status change "resolved" or should have a priority change (to "low"). It is useful to trigger an event when such a change happens for a certain document. However, it is even more useful to trigger an event when such a change does not happen within a (specified) while (like 1 hour). A problem needs to be resolved after a while, an order needs to be confirmed after a while, etc. Can I use change feeds and Azure Functions for that too? Or do I need something different? It is great that I can visualize changes (for example in Power BI) once they happen after a while, but I am also interested in visualizing changes that do not occur after a while when they are expected to occur.
Achieving that with the Change Feed doesn't sound possible, because, as you describe it, the Change Feed reacts to operations/events that do happen.
In your case it sounds as if you need an agent that runs every X amount of time (maybe an Azure Function with a TimerTrigger?) and executes a query to find items in state X that have not been modified in the past Y pre-defined interval (possibly the time interval associated with the TimerTrigger). This could be done by checking the _ts field of the state documents or your own timestamp field, see https://stackoverflow.com/a/39214165/5641598; a minimal sketch of this approach is shown below.
If your goal is just to show the results on a dashboard, you could run the same query from Power BI too.
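Here is the sketch, assuming a Python timer-triggered Azure Function (v1 programming model) and the azure-cosmos SDK; the connection settings, database/container names and the "status" field are placeholders:

```python
# Minimal sketch of the timer-based approach: a timer-triggered Azure Function
# that queries Cosmos DB for documents still in an "open" state that have not
# been modified within the last hour, using the system _ts field (epoch seconds).
import os
import time

import azure.functions as func
from azure.cosmos import CosmosClient


def main(mytimer: func.TimerRequest) -> None:
    client = CosmosClient(os.environ["COSMOS_URL"], os.environ["COSMOS_KEY"])
    container = client.get_database_client("mydb").get_container_client("problems")

    cutoff = int(time.time()) - 3600  # anything older than 1 hour counts as "stale"
    stale = container.query_items(
        query="SELECT * FROM c WHERE c.status = 'open' AND c._ts < @cutoff",
        parameters=[{"name": "@cutoff", "value": cutoff}],
        enable_cross_partition_query=True,
    )

    for doc in stale:
        # React to the *absence* of a change: raise an alert, write to a
        # reporting collection, feed a Power BI dataset, etc.
        print(f"Document {doc['id']} has not been resolved within 1 hour")
```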
As long as you don't need too much time precision for this task (Change Feed notifications are usually delayed by a few seconds), the Azure Cosmos DB Change Feed could easily be used as a solution, but it would require some extra work from the Microsoft team to also support capturing TTL expiration (deletion) events.
A potential solution, if the Change Feed were to capture such TTL expiration events, would be: whenever you insert (or, in your use case, change the priority of) a document for which you want to monitor the lack of changes, you also insert another document (possibly in another collection) that acts as a timer, with a TTL of 1h.
You would delete the timer document manually, or by consuming the Change Feed, whenever the expected change actually happened.
You could then consume the TTL expiration event from the Change Feed and conclude that, if the TTL expired, there were no changes in the specified time window.
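A minimal sketch of the bookkeeping part of that idea in Python, assuming the azure-cosmos SDK and a dedicated "timers" container partitioned on /id with TTL enabled; keep in mind that reacting to the TTL expiration itself from the Change Feed is exactly the part that is not supported yet:

```python
# Minimal sketch of the "timer document" idea. Only the insert/delete
# bookkeeping shown here works today; observing the TTL *expiration* in the
# Change Feed is the missing feature discussed above.
import os

from azure.cosmos import CosmosClient

client = CosmosClient(os.environ["COSMOS_URL"], os.environ["COSMOS_KEY"])
timers = client.get_database_client("mydb").get_container_client("timers")


def start_timer(doc_id: str) -> None:
    # Called whenever the watched document is created or its priority changes.
    # The per-item "ttl" makes Cosmos DB delete the timer automatically after 1h.
    timers.upsert_item({"id": doc_id, "ttl": 3600})


def cancel_timer(doc_id: str) -> None:
    # Called from the Change Feed consumer when the expected change did happen
    # (assumes the container is partitioned on /id).
    timers.delete_item(doc_id, partition_key=doc_id)
```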
If you'd like this feature, consider voting for issues such as this one: https://github.com/Azure/azure-cosmos-dotnet-v2/issues/402 and feature requests such as this one: https://feedback.azure.com/forums/263030-azure-cosmos-db/suggestions/14603412-execute-a-procedure-when-ttl-expires, which would make the Change Feed a perfect fit for scenarios like yours. Sadly it is not available yet :(
TL;DR No, the Change Feed as it stands is not the right fit for your use case. It would need some extra functionality that is planned but not implemented yet.
PS. In case you'd like to know more about the Change Feed and its main use cases anyways, you can check out this article of mine :)

Purge varnish cache based on request header values

I am caching multiple copies of an object based on certain request header values, using vcl_hash. How do I purge them all at once?
My answer is based on the assumption that you really want to purge as in PURGE and not BAN:
In case all the possible values of the header in question are known, you would use restarts coupled with setting a custom header. The logic is the following:
received PURGE request for object with req.http.X-Custom == foo
return(purge)
in vcl_purge, set req.http.X-Custom = bar, introduce / adjust a helper header holding the set of values already purged, and return (restart)
As a result, Varnish will recursively purge all the objects.
You can see an example of this approach in a complete Brotli VCL implementation.
But in case the values of the header are truly arbitrary, you can't really PURGE them all at once. If you need this, you have to make use of Vary: X-Custom so that Varnish considers all those objects as one object with many variations. With Vary in place, you don't have to hash on the header, and a PURGE on one variation will effectively clear out all the other variations.
I like the Vary approach much better.

How to manage concurrency for page blobs?

I want to have multiple clients writing to the same page, and if a race condition occurs then I want all but one to fail and then retry (sort of like ETags on the entire blob).
According to https://learn.microsoft.com/en-us/azure/storage/storage-concurrency#managing-concurrency-in-blob-storage, Put Page returns an ETag value, but is that only for the entire page blob? I assume it's not per page, right?
Also, https://learn.microsoft.com/en-us/rest/api/storageservices/fileservices/put-page has a section "Managing Concurrency Issues", which says that the ETag works well if the number of concurrent writes is relatively low - I assume this is because the ETag indeed does not apply to each page individually.
I am not sure which options I am left with; it seems all of them apply to the blob as a whole. I have a high number of concurrent writes to the same blob, and a low to moderate number to the same page.
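To make the approach I have in mind concrete, here is a minimal sketch of the ETag-and-retry loop using the raw Put Page REST call from Python; the SAS URL is a placeholder, and note that the If-Match precondition applies to the blob as a whole, not to an individual page:

```python
# Minimal sketch of the optimistic-concurrency loop described above, using the
# raw Put Page REST call with an If-Match precondition on the blob-level ETag.
import requests

BLOB_URL = "https://<account>.blob.core.windows.net/<container>/<pageblob>?<sas-token>"
HEADERS = {"x-ms-version": "2020-10-02"}


def write_page(data: bytes, offset: int, max_retries: int = 5) -> None:
    assert len(data) % 512 == 0, "page writes must be 512-byte aligned"
    for _ in range(max_retries):
        # Read the blob's current ETag (Get Blob Properties via HEAD).
        props = requests.head(BLOB_URL, headers=HEADERS)
        props.raise_for_status()
        etag = props.headers["ETag"]

        # Conditional Put Page: fails with 412 if anyone wrote to the blob
        # (any page of it) since we read the ETag.
        resp = requests.put(
            BLOB_URL + "&comp=page",
            headers={
                **HEADERS,
                "x-ms-page-write": "update",
                "x-ms-range": f"bytes={offset}-{offset + len(data) - 1}",
                "If-Match": etag,
            },
            data=data,
        )
        if resp.status_code == 412:
            continue  # lost the race, retry with a fresh ETag
        resp.raise_for_status()
        return
    raise RuntimeError("gave up after repeated write conflicts")
```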

Prevent certain optionset changes in CRM via plugin

Is it possible to have a plugin intervene when someone is editing an optionset?
I would have thought CRM would prevent the removal of optionset values if there are entities that refer to them, but apparently this is not the case (there are a number of orphaned fields that refer to options that no longer exist). Is there a message/entity pair that I could use to check whether there are entities using the value that is about to be deleted/modified, and stop it if there are?
Not sure if this is possible, but you could attempt to create a plugin on the Execute method and check the input parameters in the context to determine what request type is being processed. Pretty sure you'll want to look for either UpdateAttributeRequest for local OptionSets, or potentially UpdateOptionSetRequest for both. Then you could run additional logic to determine what values are changing and ensure the database values are correct.
The big caveat is that if you have even a moderate amount of data, I'm guessing you'll hit the 2-minute limit for plugin execution and it will fail.

What's the best way to keep count of the data set size information in Core Data?

Right now, whenever I need to access my data set size (and it can be quite frequently), I perform a countForFetchRequest on the managedObjectContext. Is this a bad thing to do? Should I manage the count locally instead? The reason I went this route is to ensure I am getting a 100% correct answer. With Core Data being accessed from more than one place (for example, through NSFetchedResultsController as well), it's hard to keep an accurate count locally.
-countForFetchRequest: is always evaluated in the persistent store. When using the SQLite store, this will result in IO being performed.
Suggested strategy:
Cache the count returned from -countForFetchRequest:.
Observe NSManagedObjectContextObjectsDidChangeNotification for your own context.
Observe NSManagedObjectContextDidSaveNotification for related contexts.
For the simple case (no fetch predicate) you can update the count from the information contained in the notification without additional IO.
Alternatively, you can invalidate your cached count and refresh it via -countForFetchRequest: as necessary.
