Change Feed Processor: multi-region deployment units (Azure)

I have microservices (Kubernetes pods) running in multiple regions. I have one monitored collection. (all regions will be monitoring the same collection; each region has multiple pods)
I want each document (that was updated) to be processed independently by the deployment-units in each region. Here's how I plan to make this work:
Each region's deployment-unit will use a distinct processor-name (say CFP-US, CFP-Europe, CFP-Australia)
All pods running in a region will use the processor-name for that region, the same lease container, and distinct instance-names (say, the pod's unique id). (The lease container will actually be the same for all regions.)
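For illustration, here is a minimal sketch of that wiring with the .NET SDK v3 ChangeFeedProcessorBuilder; MyDocument, the database/container names, the connection setting, and the POD_NAME variable are all placeholders, not part of the question:

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Placeholder names throughout; each region would use its own processorName.
CosmosClient client = new CosmosClient(Environment.GetEnvironmentVariable("COSMOS_CONNECTION"));
Container monitored = client.GetContainer("app-db", "monitored");
Container leases = client.GetContainer("app-db", "leases"); // one lease container shared by all regions

ChangeFeedProcessor processor = monitored
    .GetChangeFeedProcessorBuilder<MyDocument>(
        processorName: "CFP-US",                  // CFP-Europe, CFP-Australia, ... in the other regions
        onChangesDelegate: HandleChangesAsync)
    .WithInstanceName(Environment.GetEnvironmentVariable("POD_NAME")) // distinct per pod
    .WithLeaseContainer(leases)
    .Build();

await processor.StartAsync();

static Task HandleChangesAsync(IReadOnlyCollection<MyDocument> changes, CancellationToken cancellationToken)
{
    // Process this region's copy of the changed documents here.
    return Task.CompletedTask;
}

// Placeholder document type.
public record MyDocument(string id);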
Questions:
Does the setup above achieve what I want (i.e. each region independently processing every document that was updated?)
My collection has only one physical partition. My understanding is that in this case, only one lease will exist for all pods/instances in a region (since a lease is for a physical partition); and each region will have its own lease (since the processor-name is different for each region). Is my understanding correct?
Say I have 3 pods/instances in a region. Since only one lease exists, only one pod receives CFP updates at a time while the other 2 sit idle. How can I configure the leases/intervals so that all 3 pods get updates (almost) equally? i.e. pod-A gets updates (and pod-B, pod-C sit idle), then after some time pod-B gets updates (and pod-A, pod-C sit idle), then pod-C gets updates (and pod-A, pod-B sit idle), and so on.
The doc says that:
the number of instances should not be greater than the number of leases.
Other than the fact that some pods could be sitting idle, does having more instances than leases, cause any other issues? (particularly issues in the lease state, or in acquiring/renewing leases, etc.)

Yes. As long as they are different Deployment Units (https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/change-feed-processor?tabs=dotnet#deployment-unit), it doesn't matter where they are physically located; each Deployment Unit will receive the changes.
Correct. Each Deployment Unit will create its own independent set of leases.
You cannot. The distribution of leases-per-machine is by design; there is no configuration to force it.
That is the ideal scenario. The reason is that all machines frequently check (at the LeaseAcquireInterval) whether they can acquire a new lease. If the machine currently owning a lease releases it (because of an unhandled error, for example), the lease could jump to any other machine. Depending on your telemetry, that might be undesirable.
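For reference, these intervals are exposed on the same builder. They control how often instances poll for leases and how quickly an expired lease can be picked up, but, as noted above, they cannot force a round-robin handover. A minimal sketch with purely illustrative values; monitored, leases, podName, MyDocument, and HandleChangesAsync are placeholder names as in the sketch in the question:

// Optional lease tuning on the ChangeFeedProcessorBuilder (values are illustrative, not recommendations).
ChangeFeedProcessor processor = monitored
    .GetChangeFeedProcessorBuilder<MyDocument>("CFP-US", HandleChangesAsync)
    .WithInstanceName(podName)
    .WithLeaseContainer(leases)
    .WithLeaseAcquireInterval(TimeSpan.FromSeconds(17))    // how often idle instances look for leases to acquire
    .WithLeaseRenewInterval(TimeSpan.FromSeconds(13))      // how often the current owner renews its leases
    .WithLeaseExpirationInterval(TimeSpan.FromSeconds(60)) // how long before an un-renewed lease is up for grabs
    .Build();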

Related

Azure Functions: Understanding Change Feed in the context of multiple apps

According to the below diagram on https://learn.microsoft.com/en-us/azure/cosmos-db/change-feed-processor, at least 4 partition key ranges are distributed between two hosts. What I'm struggling to understand in this diagram is the distinction between a host and a consumer. In the context of Azure Functions, would it be true to say that a host is a Function app whereas a consumer is an active/warm instance?
I'd like to create a setup with N many Function apps each with 0-200 active instances (depending on workload). At the same time, I'd like to read Change Feed. If I use a CosmosDBTrigger with the same connection string and lease container in each app, is this taken care of automatically or do I need a manual implementation?
The documentation you linked is mainly for the Change Feed Processor, but the Azure Functions binding actually runs the Change Feed Processor underneath.
When just using CFP, it's maybe easier to understand because you are mainly in control of the instances and distribution, but I'll try to map it to Functions.
The document mentions a deployment unit concept:
A single change feed processor deployment unit consists of one or more instances with the same processorName and lease container configuration. You can have many deployment units where each one has a different business flow for the changes and each deployment unit consisting of one or more instances.
For example, you might have one deployment unit that triggers an external API anytime there is a change in your container. Another deployment unit might move data, in real time, each time there is a change. When a change happens in your monitored container, all your deployment units will get notified.
The deployment unit in Functions is the Function App. One Function App can span many instances, so each instance/host within that Function App deployment will act as an available host/consumer.
Further down, the article talks about the dynamic scaling and what it says is basically that, within a Deployment Unit (Function App), the leases will get evenly distributed. So if you have 20 leases and 10 Function App instances, then each instance will own 2 leases and process them independently from the other instances.
One important note from that article: scaling gives you a larger CPU pool, but not necessarily higher parallelism.
As the documentation mentions, even on a single instance, CFP will read and process each lease it owns on an independent Task. The problem is that all of this parallel processing shares the same CPU, so adding more instances helps if the instance is currently hitting a CPU bottleneck.
Now, in your example, you want to have N Function Apps, I assume that each one, doing something different. Basically, microservice deployments which would trigger on any change, but do a different task or fire a different business flow.
This other article covers that. Basically you can either have each Function App use a separate lease collection (keeping the monitored collection the same), or share the lease collection but use a different LeaseCollectionPrefix for each Function App deployment. If the number of Function Apps sharing the lease collection is high, check the RU usage on the lease collection, as you might need to increase it (there is a note about this in the article).
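As an illustration of the prefix option, a rough sketch of a trigger binding, assuming the Functions Cosmos DB extension v3 property names (newer extension versions rename several of them, e.g. LeaseContainerPrefix); the database, collection, and app setting names are placeholders:

using System.Collections.Generic;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessChangesAppA
{
    // Hypothetical function in Function App "A"; a second Function App would use a different LeaseCollectionPrefix.
    [FunctionName("ProcessChangesAppA")]
    public static void Run(
        [CosmosDBTrigger(
            databaseName: "app-db",
            collectionName: "monitored",
            ConnectionStringSetting = "CosmosConnection",
            LeaseCollectionName = "leases",
            LeaseCollectionPrefix = "AppA",
            CreateLeaseCollectionIfNotExists = true)]
        IReadOnlyList<Document> changes,
        ILogger log)
    {
        log.LogInformation($"App A processing {changes.Count} changed documents");
    }
}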

Is It Possible To Run Nodes In Bursts?

NOTE: This question is framed in the context of a private network where the business network operator owns and manages all the nodes on the network as a service and only provides access via a REST API or a web gui.
Assuming that the application is mostly batch based and not real time, is it possible to run nodes in bursts so that they start once an hour, process any transactions and then shut down again when the processing is complete?
Or maybe have a trigger that starts up the node automatically when it is needed.
Azure has per second billing which has the potential to drastically reduce infrastructure costs.
Generally speaking this wouldn't be possible. You can think of nodes as being like email servers -- you never know when an email (i.e. a transaction, for a node) comes in, so they have to be online all the time.
However, if you control all the nodes on the network, you could build a queuing system outside of Corda, spin up all the nodes on the network once an hour, and then process your own queue by sending the transactions at that point.
This is likely to become tricky once there are other entities on the network that you don't control, though. You could also run the nodes on the smallest possible Azure instances and keep the cost down that way?

SLURM Highly Availability Head Node

According to https://slurm.schedmd.com/quickstart_admin.html#HA high availability of SLURM is achieved by deploying a second BackupController which takes over when the primary fails and retrieves the current state from a shared file system (probably NFS).
In my opinion this has a number of drawbacks. E.g. this limits the total number of servers to two, and the second server is probably barely used.
Is this the only way to get a highly available head node with SLURM?
What I would like to do is a classic 3-tiered setup: a load balancer in the first tier which spreads all requests evenly across the nodes in the second tier. This requires the head node(s) to be stateless. The third tier is the database tier where all information is stored or read. I don't know anything about the internals of SLURM and I'm not sure if this is even remotely possible.
In the current design, the controller internal state is in-memory, and Slurm saves it to a set of files in the directory pointed to by the StateSaveLocation configuration parameter regularly. Only one instance of slurmctld can write to that directory at a time.
One problem with storing the state in the database would be terrible latency in resource allocation, with a lot of synchronisation needed, because optimal resource allocation can only be done with full information. The infrastructure needed to support the same level of throughput as Slurm can handle now with in-memory state would be very costly compared with the current solution, which involves only bitwise operations on in-memory arrays.
Is this the only way to get a highly available head node with SLURM?
You can also have a single MasterController managed with Corosync. But indeed Slurm only has active/passive options available for HA.
In my opinion this has a number of drawbacks. E.g. this limits the total number of servers to two and the second server is probably barely used.
The load on the controller is often very reasonable with respect to the current processing power, and the resource allocation problem cannot be trivially parallelised (or made stateless). Often, the backup controller is co-located on a machine running another service. For instance, on small deployments, one machine runs the Slurm primary controller and other services (NFS, LDAP, etc.), while another is the user login node, which also acts as the secondary Slurm controller.

Azure changing hardware

I have a product which uses CPU ID, network MAC, and disk volume serial numbers for validation. Basically, when my product is first installed these values are recorded, and then when the app is loaded up, the current values are compared against the recorded ones.
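(For context, this kind of fingerprint is typically read via WMI on Windows; a rough sketch, not the poster's actual code, might look like this:)

using System;
using System.Management; // System.Management NuGet package on modern .NET

// Rough illustration of a hardware fingerprint; each value can change if the VM lands on different (virtual) hardware.
static string GetWmiValue(string wmiClass, string property)
{
    using var searcher = new ManagementObjectSearcher($"SELECT {property} FROM {wmiClass}");
    foreach (ManagementObject obj in searcher.Get())
    {
        var value = obj[property]?.ToString();
        if (!string.IsNullOrEmpty(value))
            return value;
    }
    return string.Empty;
}

string cpuId = GetWmiValue("Win32_Processor", "ProcessorId");
string mac = GetWmiValue("Win32_NetworkAdapterConfiguration", "MACAddress");
string diskSerial = GetWmiValue("Win32_LogicalDisk", "VolumeSerialNumber");
Console.WriteLine($"{cpuId} / {mac} / {diskSerial}");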
Something very mysterious happened recently. Inside of an Azure VM that had not been restarted in weeks, my app failed to load because some of these values were different. Unfortunately the person who caught the error deleted the VM before it was brought to my attention.
My question is, when an Azure VM is running, what hardware resources may change? Is that even possible?
Thanks!
Answering this requires a short rundown of how Azure works.
In each data centre there are thousands of individual machines. Each machine runs a hypervisor which allows a number of operating systems to share the same underlying hardware.
When you start a role, Azure looks for available resources (disk space, CPU, RAM, etc.) and boots up a copy of the appropriate OS VM on those available resources. I understand from your question that this is a VM role, so this VM is the one you uploaded or created.
As long as your VM is running, the underlying virtual resources provided by the hypervisor are not likely to change. (The caveat is that Windows Server 2012's hypervisor can move virtual machines around over the network even while they are running; whether Azure takes advantage of this, I don't know.)
Now, Azure keeps charging you even when your role has stopped, because it considers your role "deployed". So in theory, those underlying resources still "belong" to your role.
This is not guaranteed. Azure could decide to boot up your VM on a different set of virtualized hardware for any number of reasons - hardware failure being at the top of the list, with insufficient capacity being second.
It is even possible (though unlikely) for your resources to be provided by different hardware nodes.
An additional point of consideration is that it is Azure policy that disaster recovery (or other major event) may include transferring your roles to run in a separate data centre entirely.
My point is that the underlying hardware is virtual and treating it otherwise is most unwise. Roles are at the mercy of the Azure Management Routines, and we can't predict in advance what decisions they may make.
So the answer to your question is that ALL of the underlying resources may change. And it is very, very possible.

How to implement critical section in Azure

How do I implement critical section across multiple instances in Azure?
We are implementing a payment system on Azure.
Whenever the account balance is updated in SQL Azure, we need to make sure that the value is 100% correct.
But we have multiple web roles running, so they could service two requests concurrently, from different customers, that both update the current balance for a single product. Both instances may read the old amount from the database at the same time, both add the purchase to the old value, and both store the new amount in the database. Whoever saves first will have its change overwritten. :-(
Thus we need to implement a critical section around all updates to the account balance in the database. But how to do that in Azure? Guides suggest using Azure Storage queues for inter-process communication. :-)
They ensure that the message does not get deleted from the queue until it has been processed.
Even if a process crashes, we are sure that the message will be processed by the next process (as Azure guarantees to launch a new process if something hangs).
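For what it's worth, that "not deleted until processed" behaviour comes from the visibility timeout: a received message stays hidden from other consumers until it is explicitly deleted, and reappears if the consumer dies first. A minimal sketch with the Azure.Storage.Queues package; the queue name and connection setting are placeholders:

using System;
using Azure;
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;

// Placeholder names; the connection string would come from configuration.
var queue = new QueueClient(Environment.GetEnvironmentVariable("STORAGE_CONNECTION"), "balance-updates");

Response<QueueMessage[]> received = await queue.ReceiveMessagesAsync(
    maxMessages: 1,
    visibilityTimeout: TimeSpan.FromSeconds(30)); // hidden from other consumers while we work

foreach (QueueMessage message in received.Value)
{
    // ... apply the balance update here ...

    // Delete only after processing succeeded; if we crash first, the message becomes
    // visible again after the timeout and another instance will pick it up.
    await queue.DeleteMessageAsync(message.MessageId, message.PopReceipt);
}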
I thought about running a singleton worker role to service requests on the queue. But Azure does not guarantee good uptime when you don't run at least two instances in parallel. Also, when I deploy new versions to Azure, I would have to stop the running instance before I can start a new one. Our application cannot accept that the "critical section worker role" does not process messages on the queue within 2 seconds.
Thus we would need multiple worker roles to guarantee sufficient small down time. In which case we are back to the same problem of implementing critical sections across multiple instances in Azure.
Note: If the update transaction has not completed within 2 seconds, then we should roll it back and start over.
Any idea how to implement critical section across instances in Azure would be deeply appreciated.
Doing synchronisation across instances is a complicated task and it's best to try and think around the problem so you don't have to do it.
In this specific case, if it is as critical as it sounds, I would just leave this up to SQL Server (it's pretty good at dealing with data contention). Rather than have the instances say "the new total value is X", call a stored procedure in SQL where you simply pass in the value of this transaction and the account you want to update. Something basic like this:
UPDATE Account
SET
    AccountBalance = AccountBalance + @TransactionValue
WHERE
    AccountId = @AccountId
If you need to update more than just one table, do it all in the same stored procedure and wrap it in a SQL transaction. I know it doesn't use any sexy technologies or frameworks, but it's much less complicated than any alternative I can think of.
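To make the shape of this concrete from the web role side, a sketch using plain ADO.NET; the SQL is the statement above, while the connection setting and parameter values are purely illustrative:

using System;
using Microsoft.Data.SqlClient; // or System.Data.SqlClient on older projects

// Pass the delta, not a recomputed total, so SQL Server applies it atomically.
const string sql =
    "UPDATE Account " +
    "SET AccountBalance = AccountBalance + @TransactionValue " +
    "WHERE AccountId = @AccountId";

using var connection = new SqlConnection(Environment.GetEnvironmentVariable("SQL_CONNECTION"));
await connection.OpenAsync();

using var command = new SqlCommand(sql, connection);
command.Parameters.AddWithValue("@TransactionValue", 25.00m);
command.Parameters.AddWithValue("@AccountId", 42);
await command.ExecuteNonQueryAsync();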
