Non-Idempotent Actions & Blob Leases on Azure

I have a multi-instance worker role.
It needs to do two things:
Download emails from a POP inbox, save them, create a DB entry, and then delete the emails.
Download files from an FTP server, save them, create a DB entry, and then delete the files.
Both these operations are time-sensitive and in a multi-instance environment, it's possible the second instance could pull duplicate copies of files/emails before the first instance goes back and deletes them.
I'm planning to implement a sync-lock mechanism around the main download method, which acquires a lease on a blob file. The goal is that it would act as a lock, preventing another instance from interfering for the duration of the download-save-delete operation. If anything goes wrong with instance 1 (i.e. it crashes), then the lease will eventually expire, the second instance will pick up where it left off on the next loop, and I can maintain my SLA.
Just wondering if this is a viable solution, or if there are any gotchas I should be aware of?

Blob leases are a viable locking strategy across multiple servers.
However, I'd still be cautious and record the download of each individual email as a separate record, to minimize accidental double-downloading of the same email.
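As a rough sketch of that lease-as-lock pattern, assuming the Azure.Storage.Blobs package (the container/blob names and the commented-out work are placeholders for your own code):

using System;
using Azure;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

// "download-lock" is a dummy blob that exists only to be leased; it must be
// created once before it can be leased.
var blob = new BlobClient("<storage connection string>", "locks", "download-lock");
var leaseClient = blob.GetBlobLeaseClient();

try
{
    // Lease durations must be 15-60 seconds (or infinite). A real implementation
    // would renew the lease periodically if the work can outlive it.
    leaseClient.Acquire(TimeSpan.FromSeconds(60));

    // ... download the emails/files, save them, write DB entries, delete the originals ...

    leaseClient.Release();
}
catch (RequestFailedException ex) when (ex.Status == 409)
{
    // 409 Conflict: another instance already holds the lease, so skip this cycle.
}

If the instance crashes while holding the lease, the lease simply expires at the end of its duration and the next instance's Acquire call succeeds, which is exactly the failover behaviour described in the question.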

Related

DataLake locks on read and write for the same file

I have 2 different applications that handle data from Data Lake Storage Gen1.
The first application uploads files: if there are multiple uploads on the same day, the existing file is overwritten (there is always one file per day, saved using the YYYY-MM-dd format).
The second application reads the data from the files.
Is there an option to lock these operations: when a write operation is in progress, no read should take place, and likewise, when a read is in progress, a write should wait until the read operation is finished.
I did not find any option using the AdlsClient.
Thanks.
As far as I know, ADLS Gen1 is an Apache Hadoop file system compatible with the Hadoop Distributed File System (HDFS). So I searched some HDFS documentation, and I'm afraid you can't directly control mutual exclusion of reads and writes. Please see the documents below:
1. Link 1: https://www.raviprak.com/research/hadoop/leaseManagement.html
Writers must obtain an exclusive lock for a file before they'd be allowed to write / append / truncate data in those files. Notably, this exclusive lock does NOT prevent other clients from reading the file (so a client could be writing a file, and at the same time another could be reading the same file).
2. Link 2: https://blog.cloudera.com/understanding-hdfs-recovery-processes-part-1/
Before a client can write an HDFS file, it must obtain a lease, which is essentially a lock. This ensures the single-writer semantics. The lease must be renewed within a predefined period of time if the client wishes to keep writing. If a lease is not explicitly renewed or the client holding it dies, then it will expire. When this happens, HDFS will close the file and release the lease on behalf of the client so that other clients can write to the file. This process is called lease recovery.
I can offer a workaround here for your reference: add a Redis database in front of your writes and reads.
Whenever you do a read or write operation, first check whether a specific key exists in the Redis database. If it doesn't, write a key-value pair into Redis, then do your business-logic processing. Finally, don't forget to delete the key.
Although this may be a little cumbersome or affect performance, I think it can meet your needs. By the way, considering that the business logic may fail or crash so that the key is never released, you can add a TTL when creating the key to avoid that situation.
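A rough sketch of that idea with the StackExchange.Redis client (the connection string, lock key, and TTL are placeholders):

using System;
using StackExchange.Redis;

var redis = ConnectionMultiplexer.Connect("<your-redis-connection-string>");
var db = redis.GetDatabase();

// One lock key per daily file; the TTL guarantees a crashed client
// cannot hold the lock forever.
string lockKey = "adls-lock:2019-01-01";

bool acquired = db.StringSet(lockKey, Environment.MachineName,
                             expiry: TimeSpan.FromMinutes(5), when: When.NotExists);
if (acquired)
{
    try
    {
        // ... read from or write to the Data Lake file here ...
    }
    finally
    {
        db.KeyDelete(lockKey);   // release the lock
    }
}
else
{
    // someone else holds the lock: wait and retry, or skip this round
}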

Azure Storage Performance Queue vs Table

I've set up a nice logging system that writes to Azure Table Storage, and it has worked well for a long time. However, there are certain places in my code where I now need to write a lot of messages to the log (50-60 messages) instead of just a couple. It is also important enough that I can't start a new thread to finish writing to the log and return the MVC action before I know the log write succeeded, because theoretically that thread could die. I have to write to the log before I return data to the web user.
According to the Azure dashboard, Table Storage transactions take ~37ms to commit, end to end (E2E), while queues only take ~6ms E2E to commit.
I'm now considering not logging directly to table storage, and instead logging to an Azure queue, then having a batch job run that reads off the queue and puts the entries in their proper place in table storage. That way I can still index them properly via their partition and row keys. I can also write just a single queue message containing all of the log entries, so it should only take 6 ms instead of (37 * 50) ms.
I know that there are Table Storage batch operations. However, each of the log entries typically goes to a different partition, and batch operations need to stay within a single partition.
I know that queue messages only live for 7 days, so I'll make sure I store queue messages in a new mechanism if they're older than a day (if it doesn't work the first 50 times, it just isn't going to work).
My question, then, is: what am I not thinking about? How could this completely kick me in the balls 4 months down the road?
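For what it's worth, the single-message-per-batch idea might look roughly like this with the Azure.Storage.Queues package (the LogEntry shape and queue name are placeholders):

using System;
using System.Collections.Generic;
using System.Text.Json;
using Azure.Storage.Queues;

// Hypothetical log entry; in practice it would mirror your table entity.
public record LogEntry(string PartitionKey, string RowKey, string Message);

public static class LogBatcher
{
    // Sends a whole batch of log entries as a single queue message.
    public static void EnqueueBatch(string connectionString, IReadOnlyList<LogEntry> entries)
    {
        var queue = new QueueClient(connectionString, "log-entries");
        queue.CreateIfNotExists();

        // One SendMessage call (~6 ms) instead of ~50 individual table inserts (~37 ms each).
        // Keep an eye on the 64 KB queue-message size limit if batches grow large.
        queue.SendMessage(JsonSerializer.Serialize(entries));
    }
}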

Windows Azure - leader instance without single point of failure

I am looking for a way to have a "Singleton" module over multiple worker role instances.
I would like to have a parallel execution model with Queues and multiple worker roles in Azure.
The idea is that I would like to have a "master" instance that, let's say, checks for new data and schedules it by adding it to a queue, processes all messages from a special queue that no other instance processes, and has blob storage mounted as a virtual drive with read/write access.
I will always have only one "master" instance. When that master instance goes down for some reason, another of the already-running instances should very quickly (within a couple of seconds) be "elected" as the master instance. This should happen before the broken instance is replaced with a new one by the Azure environment (which takes about 15 minutes).
So it will be some kind of self-organizing, dynamic environment.
I was thinking of some locking based on blob or table storage data, with the option to set lock timeouts and some kind of "watchdog" timer, to borrow microprocessor terminology.
There is a general approach to what you seek to achieve.
First, your master instance. You could do your check based on instance ID; it is fairly easy. Use RoleEnvironment.CurrentRoleInstance to get the current instance, then compare its Id property with the Id of the first member of RoleEnvironment.CurrentRoleInstance.Role.Instances ordered by Id. Something like:
var instance = RoleEnvironment.CurrentRoleInstance;
// Every instance runs the same check; only the instance whose Id sorts first
// among all instances of the role considers itself the master.
if (instance.Id.Equals(instance.Role.Instances.OrderBy(ins => ins.Id).First().Id))
{
    // you are in the single master
}
Now you need to elect a master upon "healing"/recycling.
Hook the RoleEnvironment Changed event and check whether the change is a topology change (you only need to know that the topology changed, not what exactly changed). If it is a topology change, elect the next master based on the algorithm above. Check out this great blog post on how exactly to perform the event hooking and change detection.
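As a rough sketch (assuming the classic Microsoft.WindowsAzure.ServiceRuntime API), that hook might look something like this, reusing the instance-Id comparison above:

using System.Linq;
using Microsoft.WindowsAzure.ServiceRuntime;

// Typically wired up in RoleEntryPoint.OnStart().
RoleEnvironment.Changed += (sender, e) =>
{
    // We only care that the topology changed, not what exactly changed.
    if (e.Changes.OfType<RoleEnvironmentTopologyChange>().Any())
    {
        var instance = RoleEnvironment.CurrentRoleInstance;
        bool isMaster = instance.Id ==
            instance.Role.Instances.OrderBy(ins => ins.Id).First().Id;
        // isMaster is true on exactly one instance after the re-election
    }
};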
Forgot to add.
If you like locks, a blob lease is the best way to acquire/check locks. However, working with just the RoleEnvironment events and the simple master election based on instance ID, I don't think you'll need such a complicated locking mechanism. Besides, everything lives in the queue until it is successfully processed, so if the master dies before it processes something, the "next master" will process it.

What assumptions can I make about global time on Azure?

I want my Azure role to reprocess data in case of sudden failures. I consider the following option.
For every block of data to process I have a database table row, and I could add a column meaning "time of last ping from a processing node". When a node grabs a data block for processing, it sets the state to "processing" and that time to the current time, and it is then the node's responsibility to update the ping time, say, every minute. Periodically some node will ask for "all blocks that are in the processing state and whose ping time is more than ten minutes old", consider those blocks abandoned, and somehow queue them for reprocessing.
I have one very serious concern. The above approach requires that nodes have more or less the same time. Can I rely on all Azure nodes having the same time with some reasonable precision (say several seconds)?
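To make that concrete, the reclaim sweep I have in mind would be something along these lines (table and column names are made up; one way to sidestep the clock question for this particular query is to let the database server's GETUTCDATE() supply the time on both sides of the comparison):

using System;
using System.Data.SqlClient;

public static class BlockReclaimer
{
    public static int ReclaimAbandonedBlocks(string connectionString)
    {
        // Hypothetical schema: State, OwnerNode and LastPingUtc columns on DataBlocks.
        const string sql = @"
            UPDATE DataBlocks
            SET    State = 'pending', OwnerNode = NULL
            WHERE  State = 'processing'
              AND  LastPingUtc < DATEADD(MINUTE, -10, GETUTCDATE());";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            return cmd.ExecuteNonQuery();   // number of blocks put back for reprocessing
        }
    }
}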
For processing times under 2 hrs, you can usually rely on queue semantics (visibility timeout). If you have the data stored in blob storage, you can have a worker pop a queue message containing the name of the blob to work on and set a reasonable visibility timeout on the message (up to 2 hrs today). Once it completes the work, it can delete the queue message. If it fails, the delete is never called and after the visibility timeout, it will reappear on the queue for reprocessing. This is why you want your work to be idempotent, btw.
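That pattern, sketched with the Azure.Storage.Queues client (the connection string and queue name are placeholders):

using System;
using Azure.Storage.Queues;

var queue = new QueueClient("<storage connection string>", "work-items");

// The message stays invisible for the visibility timeout; if the worker
// crashes before DeleteMessage is called, the message simply reappears
// on the queue for another instance to pick up.
var msg = queue.ReceiveMessage(visibilityTimeout: TimeSpan.FromHours(2)).Value;
if (msg != null)
{
    // ... process the blob named in msg.MessageText here ...
    queue.DeleteMessage(msg.MessageId, msg.PopReceipt);
}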
For processing that lasts longer than two hours, I generally recommend a leasing strategy where the worker leases the underlying blob data (if possible, or a dummy blob otherwise) using the intrinsic lease functionality in Windows Azure blob storage. When a worker goes to retrieve a file, it tries to lease it. A file that is already leased is indicative of a worker role currently processing it. If a failure occurs, the lease will be broken and the file will become leasable by another instance. Leases must be renewed every minute or so, but they can be held indefinitely.
Of course, you are keeping the data to be processed in blob storage, right? :)
As already indicated, you should not rely on synchronized times between VM nodes. If you store datetimes for any reason - use UTC or you will be sorry later.
The answer here isn't to use time-based synchronization (if you do, however, make sure you use UtcNow); there is no guarantee anywhere that the clocks are synced, nor should there be.
For the problem you are describing, a queue-based system is the answer. I've referenced it a lot, and will do so again: I've explained some benefits of queue-based systems in my blog post.
The idea is the following:
You put a work item on the queue
Your worker role (one or many of them) peeks & locks the message
You try to process the message; if you succeed, you remove the message from the queue, if not, you let it stay where it is
With your approach I would use AppFabric Queues, because you can also have topics & subscriptions, which allow you to monitor the data items. The example in my blog post covers this exact scenario, with the only difference being that instead of having a worker role, I poll the queue from my web application. But the concept is the same.
I would try this a different way, using queue storage. Put your block of data on a queue with a timeout, then have your processing nodes (worker roles?) pull this data off the queue.
After the data is popped off the queue, if the processing node does not delete the entry from the queue, it will reappear on the queue for processing after the timeout period.
Remote desktop into a role instance and check (a) the time zone (UTC, I think), and (b) that Internet Time is enabled in Date and Time settings. If so then you can rely on them being no more than a few ms apart. (This is not to say that the suggestions to use a message queue instead won't work, but perhaps they do not suit your needs.)

Controlling Azure worker role concurrency with multiple instances

I have a simple worker role in Azure that does some data processing on a SQL Azure database.
The worker basically adds data from a 3rd-party data source to my database every 2 minutes. When I have two instances of the role, this obviously doubles up unnecessarily. I would like to have 2 instances for redundancy and the 99.95% uptime, but I do not want them both processing at the same time, as they will just duplicate the same job. Is there a standard pattern for this that I am missing?
I know I could set flags in the database, but am hoping there is another easier or better way to manage this.
Thanks
As Mark suggested, you can use an Azure queue to post a message. You can have the worker role instance post a follow-up message to the queue as the last thing it does when processing the current message. That should deal with the issue Mark brought up regarding the need for a semaphore. In your queue message, you can embed a timestamp marking when the message can be processed. When creating a new message, just add two minutes to the current time.
And... in case it's not obvious: in the event the worker role instance crashes before completing processing and fails to repost a new queue message, that's fine. In this case, the current queue message will simply reappear on the queue and another instance is then free to process it.
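A minimal sketch of that follow-up message, assuming the Azure.Storage.Queues client (queue name and message body are placeholders); this variant expresses the two-minute delay via the new message's visibility timeout rather than an embedded timestamp:

using System;
using Azure.Storage.Queues;

var queue = new QueueClient("<storage connection string>", "poll-ticks");

// Last step of processing the current message: schedule the next run.
// The visibility timeout keeps the new message hidden for two minutes,
// so no instance can start the next cycle early.
queue.SendMessage("run-data-import", visibilityTimeout: TimeSpan.FromMinutes(2));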
There is not a super easy way to do this, I don't think.
You can use a semaphore, as Mark has mentioned, to basically record the start and the stop of processing. Then you can have any number of instances running, each inspecting the semaphore record and only acting when the semaphore allows it.
However, the caveat here is: what happens if one of the instances crashes in the middle of processing and never releases the semaphore? You can implement a "timeout" value after which other instances will attempt to kick-start processing if there hasn't been an unlock for X amount of time.
Alternatively, you can use a third-party monitoring service like AzureWatch to watch for unresponsive instances in Azure and start a new instance if the number of "Ready" instances is under 1. This can save you some money by not having to keep 2 instances up and running all the time, but there is a slight lag between when an instance fails and when a new one is started.
A semaphore, as suggested, would be the way to go, although I'd probably go with a simple timestamp heartbeat in blob storage.
The other thought is, how necessary is it? If your loads can sustain being down for a few minutes, maybe just let the role recycle?
Small catch with David's solution. Re-posting the message to the queue would happen as the last thing in the current execution, so that if the machine crashes along the way, the current message would expire and re-surface on the queue. That assumes the message was originally peeked, and it requires a de-queue operation to remove it from the queue. The de-queue must happen before inserting the new message onto the queue. If the role crashes between these two operations, then there will be no tokens left in the system and it will come to a halt.
The ESB dup check sounds like a feasible approach, but it does not sound like it would be deterministic either since the bus can only check for identical messages currently existing in a queue. But if one of the messages comes in right after the previous one was de-queued, there is a chance to end up with 2 processes running in parallel.
An alternative solution, if you can afford it, would be to never de-queue and just lease the message via Peek operations. You would have to ensure that the invisibility timeout never goes beyond the processing time in your worker role. As far as creating the token in the first place, the same worker role startup strategy described before combined with ASB dup check should work (since messages would never move from the queue).
