Multiple CosmosClients for a single Cosmos DB account - Azure

I read online that one should keep a single CosmosClient instance per Cosmos DB account per application.
In my case, my app and Cosmos DB account are deployed to multiple regions.
Normally the app reads from the Cosmos DB replica in the same region.
However, in some scenarios I want my app (whichever region it is running in) to read from a single Cosmos DB region, e.g. always East US.
The reason is that our Cosmos DB account uses bounded staleness consistency, so data might not be replicated to the other read regions instantaneously.
If I always write to and read from the same region, I am guaranteed to see the document there. So I am sacrificing latency for consistency in that scenario.
To achieve this, I have to specify which region I want to read from:
var clientOptions = new CosmosClientOptions
{
    ApplicationRegion = "East US"
};
return new CosmosClient(_cosmosDbDataConnectionOptions.CosmosDbUrl, new DefaultAzureCredential(), clientOptions);
I want to use this CosmosClient only for that specific scenario.
In the normal case, I will set
ApplicationRegion = <app deployed region>
This requires me to have two CosmosClients for the same Cosmos DB account. Does it make sense to have two CosmosClients, then? Or is there another recommended approach to this problem?
I searched and found https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/performance-tips-dotnet-sdk-v3?tabs=trace-net-core#sdk-usage, which recommends a single CosmosClient per app. But in my case, I have to set the read region differently per scenario.

If the concern is about consistency, then the SessionToken might help: pass the session token from the write to the subsequent read call, even if they happen on different client instances. If you have other scenarios where the logic is different and you, for whatever reason, want to change the read endpoint, then yes, there is no way to flip it on a running client, or to say for a particular request that it should go to another region.
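The session-token handoff described above could be sketched like this with the v3 .NET SDK. This is only a sketch: the database/container names, the `Order` type, and the two injected clients are illustrative assumptions, not from the question.

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Hypothetical document type; Cosmos DB expects a lowercase "id" property in JSON.
public class Order
{
    public string id { get; set; }
}

public class OrderReader
{
    private readonly Container _writeContainer;
    private readonly Container _readContainer; // client pinned to a different region

    // Both clients are long-lived singletons for the same account.
    public OrderReader(CosmosClient writeClient, CosmosClient readClient)
    {
        _writeContainer = writeClient.GetContainer("mydb", "items");
        _readContainer = readClient.GetContainer("mydb", "items");
    }

    public async Task<Order> WriteThenReadAsync(Order order)
    {
        ItemResponse<Order> write = await _writeContainer.UpsertItemAsync(order);

        // Carry the write's session token over to the read, even though it runs
        // on a different client instance, to get read-your-writes semantics.
        return await _readContainer.ReadItemAsync<Order>(
            order.id,
            new PartitionKey(order.id),
            new ItemRequestOptions { SessionToken = write.Headers.Session });
    }
}
```

Whether this suffices depends on the scenario; if the read must be pinned to a fixed region regardless of tokens, a second client instance remains the way to do it.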

Related

CosmosDB: Will a call go to the write replica if the provided read-replica region doesn't exist?

I have a Cosmos DB instance on Azure with one write replica and multiple read replicas. Normally we call SetCurrentLocation to direct calls to a read replica. My understanding is that this automatically creates PreferredLocations for us, but I'm not sure how the preferred locations work.
Now let's say the location passed to SetCurrentLocation is improper. That is, there's no replica in that single location we passed, but the location is a valid Azure region. In that case, will the call go to the write replica, or to a closer read replica?
SetCurrentLocation orders Azure regions by geographical distance from the indicated region, and the SDK client then maps that ordered list against your account's available regions. So you end up with your account's available regions ordered by distance from the region you indicated in SetCurrentLocation.
For an account with a single write region, all write operations always go to that region; the preferred locations only affect read operations. More information at: https://learn.microsoft.com/azure/cosmos-db/troubleshoot-sdk-availability
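In the v3 .NET SDK, both behaviors are exposed on CosmosClientOptions: one option derives the preference order by distance (the SetCurrentLocation behavior described above), the other pins an explicit order. A minimal sketch, assuming placeholder endpoint and credential values:

```csharp
using System.Collections.Generic;
using Azure.Identity;
using Microsoft.Azure.Cosmos;

string endpoint = "https://<your-account>.documents.azure.com"; // placeholder
var credential = new DefaultAzureCredential();

// The SDK orders the account's regions by distance from this region.
var byDistance = new CosmosClient(endpoint, credential, new CosmosClientOptions
{
    ApplicationRegion = Regions.WestUS
});

// Or pin an explicit preference order; unmatched entries are ignored and the
// client falls back down the list, eventually to the primary region.
var explicitOrder = new CosmosClient(endpoint, credential, new CosmosClientOptions
{
    ApplicationPreferredRegions = new List<string> { Regions.WestUS, Regions.EastUS }
});
```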
Further adding to Matias's answer, from https://learn.microsoft.com/en-us/azure/cosmos-db/sql/troubleshoot-sdk-availability:
Primary region refers to the first region in the Azure Cosmos account region list. If the values specified as regional preference do not match with any existing Azure regions, they will be ignored. If they match an existing region but the account is not replicated to it, then the client will connect to the next preferred region that matches or to the primary region.
So if the specified location is bad, or there's no read replica there, the client will try to connect to the next preferred location, eventually falling back to the primary region (in this case, the single write replica).

Cosmos Db Throughput

Is there any option to retrieve Cosmos DB (SQL API) throughput programmatically? I'm using the code below to get the list of databases:
DocumentClient client = new DocumentClient(ClientURL, ClientKey);
var databaseList = client.CreateDatabaseQuery().ToList();
Next, I want to know the throughput of each database.
Please let me know if this is feasible.
You could use the CreateOfferQuery method to get the throughput settings of a database or collection.
Also, please refer to this REST API: https://learn.microsoft.com/en-us/rest/api/cosmos-db/get-an-offer
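A sketch of the CreateOfferQuery approach with the v2 (DocumentClient) SDK; the endpoint and key are placeholders. Offers are looked up by the resource's SelfLink, and a database only has an offer of its own when it was provisioned with shared throughput:

```csharp
using System;
using System.Linq;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

var client = new DocumentClient(new Uri("https://<account>.documents.azure.com"), "<key>");

foreach (Database db in client.CreateDatabaseQuery())
{
    // Match the offer for this database by its self-link.
    Offer offer = client.CreateOfferQuery()
        .Where(o => o.ResourceLink == db.SelfLink)
        .AsEnumerable()
        .FirstOrDefault();

    if (offer is OfferV2 offerV2)
        Console.WriteLine($"{db.Id}: {offerV2.Content.OfferThroughput} RU/s");
    else
        Console.WriteLine($"{db.Id}: no database-level offer (per-container throughput)");
}
```

In the newer v3 SDK the equivalent is `database.ReadThroughputAsync()` (and likewise on containers), which avoids dealing with offers directly.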

CosmosDB How to read replicated data

I'm using Cosmos DB and replicating the data globally (one write region; multiple read regions). Using the portal's Data Explorer, I can see the data in the write region. How can I query data in the read regions? I'd like some assurance that replication is actually working, and I haven't been able to find any info, or even a URL, for the replicated DBs.
Note: I'm writing to the DB via the Cosmos DB "Create or update document" connector in a Logic App. Given that this is a codeless environment, I'd prefer to validate the replication without having to write code.
How can I query data in the Read regions?
If code is an option, you can connect from each region your application is deployed in and configure the corresponding preferred-locations list for that region via one of the supported SDKs.
The following is demo code for the Azure Cosmos DB SQL API. For more information, please refer to this tutorial.
ConnectionPolicy usConnectionPolicy = new ConnectionPolicy
{
    ConnectionMode = ConnectionMode.Direct,
    ConnectionProtocol = Protocol.Tcp
};
usConnectionPolicy.PreferredLocations.Add(LocationNames.WestUS); // first preference
usConnectionPolicy.PreferredLocations.Add(LocationNames.NorthEurope); // second preference

DocumentClient usClient = new DocumentClient(
    new Uri("https://contosodb.documents.azure.com"),
    "<Fill your Cosmos DB account's AuthorizationKey>",
    usConnectionPolicy);
Update:
We can enable Automatic Failover from the Azure portal, then drag and drop the read-region items to reorder the failover priorities.

Having a read/write region in Azure CosmosDB

Is it possible to make a given region both read and write in a Cosmos DB geo-redundant setup?
I am using Azure DocumentDB with the Java SDK and tried to override the location preference using the ConnectionPolicy as below:
ConnectionPolicy policy = new ConnectionPolicy();
policy.setEnableEndpointDiscovery(true);
List<String> locations = new ArrayList<>();
locations.add("West US");
policy.setPreferredLocations(locations);
But I could still see some requests going to East in the Metrics Explorer. Any help here would be greatly appreciated.
TIA
You also need to set enableEndpointDiscovery to true.
When the value of this property is true, the SDK will automatically discover the current write and read regions to ensure requests are sent to the correct region based on the regions specified in the PreferredLocations property. The default value is true, indicating endpoint discovery is enabled.

Azure Shared Access Signature for whole storage account

I am using an API (Node.js) to generate a read-only shared access signature for an iOS app using Azure Mobile Services. The API generates the SAS using the following code:
var azure = require('azure-storage');
var blobService = azure.createBlobService(accountName, accountKey);
var sas = blobService.generateSharedAccessSignature("containerName", null, sharedAccessPolicy);
This works great when I want a SAS for access to one container. But I really need access to all containers in the storage account. I could obviously do this with a separate API call for each container, but that would require hundreds of extra calls.
I have looked everywhere for a solution but I can't get anything to work. I would very much appreciate knowing if there is a way to generate a SAS for all containers in a storage account.
You can construct an account-level SAS, where you get to specify:
services to include (blob, table, queue, file)
resource access (e.g. container create & delete)
permissions (e.g. read, write, list)
protocol (e.g. https only, vs http+https)
Just like a service-specific SAS, you get to specify expiry date (and optionally start date).
Given your use case, you can tailor your account SAS to be just for blobs; there's no need to include unneeded services (in your case, tables/queues/files).
More specifics are documented here.
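An account SAS along those lines could be sketched as follows with the same (now legacy) azure-storage package the question uses. The account name/key are placeholders, and the policy string values follow the account-SAS conventions ('b' = blob service, 'co' = containers + objects, 'rl' = read + list); check them against the azure-storage docs before relying on this:

```javascript
var azure = require('azure-storage');

var startDate = new Date();
var expiryDate = new Date(startDate);
expiryDate.setHours(startDate.getHours() + 2);

// Account-level policy: scoped to the blob service only, read/list on
// containers and objects, HTTPS only.
var sharedAccessPolicy = {
  AccessPolicy: {
    Services: 'b',
    ResourceTypes: 'co',
    Permissions: 'rl',
    Protocols: 'https',
    Start: startDate,
    Expiry: expiryDate
  }
};

// One account-level SAS covers every container in the storage account,
// so no per-container calls are needed.
var sas = azure.generateAccountSharedAccessSignature(accountName, accountKey, sharedAccessPolicy);
```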
