We have an API where we store the configuration in a container in Cosmos DB. We are considering using the Cosmos change feed, via a change feed processor, to subscribe to configuration changes so that we can remove configurations from the cache when they are changed. We have deployments in multiple Azure regions, thus our account is a multi-region write account. Now, I read in the documentation that
Starting the change feed processor at a specific date and time is not supported in multi-region write accounts.
What does this mean in practice? Will the processor read and handle all changes from the beginning every time the API process is restarted? Is there any way to work around this limitation?
Your Cosmos DB account either has 1 write region (with as many read-region replicas as you want) or has all regions acting as both write and read regions. Reference: https://learn.microsoft.com/azure/cosmos-db/sql/how-to-multi-master
You can start a change feed processor with 3 different starting points:
Now
The beginning of the collection lifetime
Some particular point in time
This note means that if your account has multiple write regions (instead of 1 write region), you can only start a change feed from Now or from the beginning; you cannot start it from a specific point in time.
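In practice, the limitation only affects the specific point-in-time start; starting from Now or from the beginning still works, and because the processor checkpoints its progress in the lease container, restarting your API resumes from the last checkpoint rather than re-reading all changes. A minimal sketch with the .NET SDK v3 (the database, container, and processor names are placeholders, not your actual setup):

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

CosmosClient client = new CosmosClient("<connection-string>");
Container monitored = client.GetContainer("configdb", "configuration");
Container leases = client.GetContainer("configdb", "leases");

ChangeFeedProcessor processor = monitored
    .GetChangeFeedProcessorBuilder<dynamic>("configCacheInvalidation",
        (changes, cancellationToken) =>
        {
            // Evict the changed configuration entries from the local cache here.
            return Task.CompletedTask;
        })
    .WithInstanceName(Environment.MachineName)
    .WithLeaseContainer(leases)
    // .WithStartTime(DateTime.UtcNow.AddHours(-1)) // not supported on multi-region write accounts
    // .WithStartFromBeginning()                    // reads the container from its beginning
    .Build();                                       // default: start from "Now"

await processor.StartAsync();

The commented-out builder calls show the other starting points; on a multi-region write account you simply leave WithStartTime out.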
I am trying to design a timer-triggered processor (all in Azure) which will process a set of records that are set out for it to consume. It will group them based on a column, create files out of them, and dump them in a blob container. The records it will consume are generated based on an event: when the event is raised it contains a key, which can be used to query the data for the record (the data/record being generated is pulled from different services).
This is what I am thinking currently:
Event is raised to event-grid-topic
Azure Function (ConsumerApp) is event triggered, reads the key, calls a service API to get all the data, and stores that record in a storage table with a flag marking it ready to be consumed (see the sketch after this list).
Azure Function (ProcessorApp) is timer triggered, will read from the storage table, group based on another column, and create and dump the files. It can then mark the records as processed, if not already updated by ConsumerApp.
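As a rough sketch of the ConsumerApp step (the function name, table name, connection setting, key extraction, and entity columns here are all hypothetical), an Event Grid-triggered function could look something like this:

using System;
using System.Threading.Tasks;
using Azure.Data.Tables;
using Azure.Messaging.EventGrid;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Extensions.Logging;

public static class ConsumerApp
{
    [FunctionName("ConsumerApp")]
    public static async Task Run(
        [EventGridTrigger] EventGridEvent evt,
        ILogger log)
    {
        // The event is assumed to carry the key used to query the source services.
        string key = evt.Subject;
        log.LogInformation("Received event for key {Key}", key);

        // Call the downstream service API for this key here (placeholder):
        // var record = await serviceClient.GetRecordAsync(key);

        var table = new TableClient(
            Environment.GetEnvironmentVariable("StorageConnection"), "PendingRecords");
        await table.CreateIfNotExistsAsync();

        // Flag the row as ready so the timer-triggered ProcessorApp can pick it up.
        var entity = new TableEntity(partitionKey: "Ready", rowKey: key)
        {
            ["GroupColumn"] = "<value used later for grouping>",
            ["Status"] = "Ready"
        };
        await table.UpsertEntityAsync(entity);
    }
}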
Some of my questions on this, apart from whether there is a better way to do it overall, are:
The table storage is going to fill up quickly, which will in turn decrease the speed of reading the 'ready' cases. Is there a better approach for storing this intermediate and temporary data? One thing I thought of was to regularly flush the table, or to delete the record from the consumer app instead of marking it as 'processed'.
The service API is called for each event, which might increase the strain on that service/its database. Should I group the records into a single API call, since the processor will run only after a set interval, or is there a better approach here?
Any feedback on this approach or a new design will be appreciated.
If you don't have to process the data in step 2 individually, you can also try saving it in a blob and adding a record with the blob path to Azure Table Storage, to keep the row count minimal.
Azure Table Storage has partitions that you can use to partition your data and keep your read operations fast; a partition scan is faster than a table scan. In addition, Azure Table Storage is cheap, but if you have pricing concerns you can write a clean-up function to periodically delete the processed rows. Keeping processed rows around for a reasonable time is usually a good idea, because you may need them for debugging issues.
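For example, a clean-up function along these lines (the schedule, table name, retention window, and Status column are assumptions) could delete rows that were processed more than a week ago:

using System;
using System.Threading.Tasks;
using Azure.Data.Tables;
using Microsoft.Azure.WebJobs;

public static class CleanupProcessedRows
{
    [FunctionName("CleanupProcessedRows")]
    public static async Task Run([TimerTrigger("0 0 3 * * *")] TimerInfo timer) // daily at 03:00
    {
        var table = new TableClient(
            Environment.GetEnvironmentVariable("StorageConnection"), "PendingRecords");

        // Keep processed rows around for 7 days for debugging, then delete them.
        DateTimeOffset cutoff = DateTimeOffset.UtcNow.AddDays(-7);
        string status = "Processed";
        string filter = TableClient.CreateQueryFilter($"Status eq {status} and Timestamp lt {cutoff}");

        await foreach (TableEntity entity in table.QueryAsync<TableEntity>(filter))
        {
            await table.DeleteEntityAsync(entity.PartitionKey, entity.RowKey);
        }
    }
}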
By batching multiple calls into a single call, you can decrease network I/O delay, but the resource contention will remain at the service level. You can try moving that API to a separate service, if possible, so it can be scaled separately.
Right now I have one Cosmos DB account that has three different containers, so I use three different functions that are listening for Change Feed events from this Cosmos DB.
In the future, the number of containers will grow from 3 to 100.
So, is it possible to have one function that listens for all changes in all containers and that can detect which container the changes came from?
The recommended pattern with Cosmos DB is to have a single or few data containers and partition data logically via property values, rather than segmenting into many containers. If at all possible, for change feed and other reasons, it would be worthwhile to review the proposed design to see if there is a way to consolidate containers and avoid this pain.
That said, if an unknown and growing number of containers must be supported, one way that might be achieved dynamically is with the Change Feed Processor via the SDK. When instantiating a processor instance using GetChangeFeedProcessorBuilder, you can provide the container name as a parameter. Given a configured or discovered list of all target containers, multiple change feed processor instances could be created and run in parallel.
This could be hosted in multiple ways. Consider using an ASP.NET Core app with an IHostedService, and avoiding Azure Functions in this case.
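A minimal sketch of that idea hosted in an IHostedService (the database, lease container, container list, and the dynamic document type are placeholders; in practice the list could come from configuration or from enumerating the database's containers):

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;
using Microsoft.Extensions.Hosting;

public class ChangeFeedHostedService : IHostedService
{
    private readonly CosmosClient _client;
    private readonly List<ChangeFeedProcessor> _processors = new List<ChangeFeedProcessor>();

    public ChangeFeedHostedService(CosmosClient client) => _client = client;

    public async Task StartAsync(CancellationToken cancellationToken)
    {
        Database database = _client.GetDatabase("mydb");
        Container leases = database.GetContainer("leases");

        // Configured or discovered list of monitored containers.
        string[] containerNames = { "container1", "container2", "container3" };

        foreach (string name in containerNames)
        {
            ChangeFeedProcessor processor = database.GetContainer(name)
                .GetChangeFeedProcessorBuilder<dynamic>(
                    processorName: $"processor-{name}",
                    onChangesDelegate: (changes, ct) => HandleChangesAsync(name, changes, ct))
                .WithInstanceName(Environment.MachineName)
                .WithLeaseContainer(leases)
                .Build();

            _processors.Add(processor);
            await processor.StartAsync();
        }
    }

    public async Task StopAsync(CancellationToken cancellationToken)
    {
        foreach (ChangeFeedProcessor processor in _processors)
        {
            await processor.StopAsync();
        }
    }

    private static Task HandleChangesAsync(
        string containerName, IReadOnlyCollection<dynamic> changes, CancellationToken ct)
    {
        // "containerName" tells you which container the changes came from.
        return Task.CompletedTask;
    }
}

Because each processor gets its own processorName, they can share a single lease container without stepping on each other.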
In short: no.
Change feed in Azure Cosmos DB is a persistent record of changes to a container in the order they occur. Change feed support in Azure Cosmos DB works by listening to an Azure Cosmos container for any changes.
and
Change feed is available for each logical partition key within the container, and it can be distributed across one or more consumers for parallel processing.
The documentation on Change feed in Azure Cosmos DB clearly states a Change Feed is for one specific container.
There are, however, probably different approaches you could take to solve your problem. The most important question is: what is the problem you are actually trying to solve?
If you need to process the changes in Cosmos DB using a Function, I can imagine the logic for processing the changes can (and will) be different for each type of data, and so for each container. If that is not the case, does the data even need to be in different containers?
One option could be to create a timer-triggered Function that reads the change feed with the pull model. This enables you to loop over the containers in that Function and prepare processing the changes per container (for instance by putting the information on a queue, or by using Durable Functions with the fan-out/fan-in pattern).
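A minimal sketch of the pull model for a single container (the dynamic item type is a placeholder, and depending on your SDK version ChangeFeedMode.Incremental may be named ChangeFeedMode.LatestVersion); the timer-triggered Function would loop this over its list of containers and persist the continuation token between runs:

using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class ChangeFeedPull
{
    public static async Task<string> DrainChangesAsync(Container container, string continuation)
    {
        FeedIterator<dynamic> iterator = container.GetChangeFeedIterator<dynamic>(
            continuation == null
                ? ChangeFeedStartFrom.Beginning()
                : ChangeFeedStartFrom.ContinuationToken(continuation),
            ChangeFeedMode.Incremental);

        while (iterator.HasMoreResults)
        {
            FeedResponse<dynamic> response = await iterator.ReadNextAsync();

            if (response.StatusCode == HttpStatusCode.NotModified)
            {
                // No new changes right now; store this token for the next timer run.
                return response.ContinuationToken;
            }

            foreach (var change in response)
            {
                // Put the change on a queue, or hand it to a Durable Functions fan-out, etc.
            }
        }

        return continuation;
    }
}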
According to the below diagram on https://learn.microsoft.com/en-us/azure/cosmos-db/change-feed-processor, at least 4 partition key ranges are distributed between two hosts. What I'm struggling to understand in this diagram is the distinction between a host and a consumer. In the context of Azure Functions, would it be true to say that a host is a Function app whereas a consumer is an active/warm instance?
I'd like to create a setup with N many Function apps each with 0-200 active instances (depending on workload). At the same time, I'd like to read Change Feed. If I use a CosmosDBTrigger with the same connection string and lease container in each app, is this taken care of automatically or do I need a manual implementation?
The documentation you linked is mainly for the Change Feed Processor, but the Azure Functions binding actually runs the Change Feed Processor underneath.
When just using CFP, it's maybe easier to understand because you are mainly in control of the instances and distribution, but I'll try to map it to Functions.
The document mentions a deployment unit concept:
A single change feed processor deployment unit consists of one or more instances with the same processorName and lease container configuration. You can have many deployment units where each one has a different business flow for the changes and each deployment unit consisting of one or more instances.
For example, you might have one deployment unit that triggers an external API anytime there is a change in your container. Another deployment unit might move data, in real time, each time there is a change. When a change happens in your monitored container, all your deployment units will get notified.
The deployment unit in Functions is the Function App. One Function App can span many instances, so each instance/host within that Function App deployment will act as an available host/consumer.
Further down, the article talks about the dynamic scaling and what it says is basically that, within a Deployment Unit (Function App), the leases will get evenly distributed. So if you have 20 leases and 10 Function App instances, then each instance will own 2 leases and process them independently from the other instances.
One important note in that article: scaling gives you a bigger CPU pool, but not necessarily higher parallelism.
As the documentation mentions, even on a single instance, the CFP will read and process each lease it owns on an independent Task. The problem is that all this parallel processing shares the same CPU, so adding more instances will help if you currently see the instance hitting a CPU bottleneck.
Now, in your example, you want to have N Function Apps, and I assume each one does something different: basically microservice deployments that all trigger on any change but perform a different task or fire a different business flow.
This other article covers that. Basically you can either have each Function App use a separate lease collection (with the monitored collection being the same), or you can share the lease collection but use a different LeaseCollectionPrefix for each Function App deployment. If the number of Function Apps that will share the lease collection is high, please check the RU usage on the lease collection, as you might need to increase it (there is a note about this in the article).
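For illustration, assuming the in-process model with the Cosmos DB Functions extension 3.x (the 4.x extension renames these settings, e.g. LeaseContainerPrefix), each Function App's trigger could look like this, with only the LeaseCollectionPrefix differing per deployment:

using System.Collections.Generic;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class OrdersProjection
{
    // Each Function App (deployment unit) uses its own LeaseCollectionPrefix,
    // so they can share one lease container while keeping independent state.
    [FunctionName("OrdersProjection")]
    public static void Run(
        [CosmosDBTrigger(
            databaseName: "mydb",
            collectionName: "monitored",
            ConnectionStringSetting = "CosmosConnection",
            LeaseCollectionName = "leases",
            LeaseCollectionPrefix = "orders-projection",
            CreateLeaseCollectionIfNotExists = true)] IReadOnlyList<Document> changes,
        ILogger log)
    {
        log.LogInformation("Processing {Count} changes", changes.Count);
    }
}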
In Azure Cosmos DB, is it possible to create multiple read replicas at a database / container / partition key level to increase read throughput? I have several containers that will need more than 10K RU/s per logical partition key, and re-designing my partition key logic is not an option right now. Thus, I'm thinking of replicating data (eventual consistency is fine) several times.
I know Azure offers global distribution with Cosmos DB, but what I'm looking for is replication within the same region and ideally not a full database replication but a container replication. A container-level replication will be more cost effective since I don't need to replicate most containers and I need to replicate the others up to 10 times.
A few options are available though:
Within the same region there is no replication option, but you could use the Change Feed to replicate to another database (with the re-design in mind) purely for serving read queries.
It might also be a better idea to use either the serverless option, which is in preview, or the autoscale option.
You can also look at provisioned throughput and reserve the provisioned RUs for 1 or 3 years, paying monthly just as you would in the PAYG model but with a large discount.
Another option would be to run a latency test from a VM in the region where your main DB and app are running, find the closest region with respect to latency (ms), and, if the latency is bearable, enable global replication to that region and start using it. I use this tool for latency tests, but run it from a VM within the region where your app/DB is running.
My guess is your queries are all cross-partition and each query is consuming a ton of RU/s. Unfortunately there is no feature in Cosmos DB to help in the way you're asking.
Your options are to create more containers and use change feed to replicate data to them in-region and then add some sort of routing mechanism to route requests in your app. You will of course only get eventual consistency with this so if you have high concurrency needs this won't work. Your only real option then is to address the scalability issues with your design today.
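As a rough sketch of that replication approach (container names and the dynamic item type are placeholders), a change feed processor could simply upsert every change into the in-region replica container:

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class InRegionReplication
{
    public static async Task<ChangeFeedProcessor> StartAsync(CosmosClient client)
    {
        Container source = client.GetContainer("mydb", "orders");
        Container replica = client.GetContainer("mydb", "orders-byCustomer"); // different partition key design
        Container leases = client.GetContainer("mydb", "leases");

        ChangeFeedProcessor processor = source
            .GetChangeFeedProcessorBuilder<dynamic>("replicate-orders",
                async (changes, cancellationToken) =>
                {
                    // Eventually consistent copy: every change is upserted into the replica.
                    foreach (var item in changes)
                    {
                        object copy = item;
                        await replica.UpsertItemAsync(copy, cancellationToken: cancellationToken);
                    }
                })
            .WithInstanceName(Environment.MachineName)
            .WithLeaseContainer(leases)
            .Build();

        await processor.StartAsync();
        return processor;
    }
}

Your app then routes reads for the hot access pattern to the replica container and sends writes (plus any strongly consistent reads) to the original.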
There is something that can help: the live data migrator. You can use it to keep a second container in sync with your original, which will allow you to eventually migrate off your first design onto one that scales better.
How do I implement critical section across multiple instances in Azure?
We are implementing a payment system on Azure.
Whenever an account balance is updated in SQL Azure, we need to make sure that the value is 100% correct.
But we have multiple web roles running, so they can service two requests concurrently from different customers that would potentially update the current balance for one single product. Thus both instances may read the old amount from the database at the same time, then both add the purchase to the old value, and then both store the new amount in the database. Whoever saves first will have its change overwritten. :-(
Thus we need to implement a critical section around all updates to the account balance in the database. But how do we do that in Azure? Guides suggest using Azure storage queues for inter-process communication. :-)
They ensure that the message does not get deleted from the queue until it has been processed.
Even if a process crashes, we are sure that the message will be processed by the next process (as Azure guarantees to launch a new process if something hangs).
I thought about running a singleton worker role to service requests on the queue, but Azure does not guarantee good uptime unless you run at least two instances in parallel. Also, when I deploy new versions to Azure, I would have to stop the running instance before I can start a new one. Our application cannot accept the "critical section worker role" failing to process messages on the queue within 2 seconds.
Thus we would need multiple worker roles to guarantee a sufficiently small downtime, in which case we are back to the same problem of implementing critical sections across multiple instances in Azure.
Note: if the update transaction has not completed within 2 seconds, we should roll it back and start over.
Any idea on how to implement a critical section across instances in Azure would be deeply appreciated.
Doing synchronisation across instances is a complicated task and it's best to try and think around the problem so you don't have to do it.
In this specific case, if it is as critical as it sounds, I would just leave this up to SQL Server (it's pretty good at dealing with data contention). Rather than have the instances say "the new total value is X", call a stored procedure in SQL where you simply pass in the value of this transaction and the account you want to update. Something basic like this:
UPDATE Account
SET
    AccountBalance = AccountBalance + @TransactionValue
WHERE
    AccountId = @AccountId
If you need to update more than just one table, do it all in the same stored procedure and wrap it in a SQL transaction. I know it doesn't use any sexy technologies or frameworks, but it's much less complicated than any alternative I can think of.
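For illustration, a web role could call such a stored procedure along these lines (the procedure name, parameters, and connection handling are assumptions):

using System.Data;
using System.Data.SqlClient;
using System.Threading.Tasks;

public static class AccountRepository
{
    // Assumes a stored procedure dbo.AddToAccountBalance that wraps the UPDATE(s) in a transaction.
    public static async Task AddToBalanceAsync(
        string connectionString, int accountId, decimal transactionValue)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("dbo.AddToAccountBalance", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@AccountId", accountId);
            command.Parameters.AddWithValue("@TransactionValue", transactionValue);

            await connection.OpenAsync();
            // SQL Server serializes concurrent updates to the same row, so two web role
            // instances cannot overwrite each other's balance change.
            await command.ExecuteNonQueryAsync();
        }
    }
}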