I read online that one should keep a single CosmosClient instance per Cosmos DB account per application.
In my case, my app and Cosmos DB are deployed to multiple regions.
Normally the app reads from the Cosmos DB region it is running in.
However, in some scenarios I want my app (whichever region it is running in) to always read from a single Cosmos DB region, e.g. East US.
The reason is that our Cosmos DB account uses bounded staleness consistency, so data might not be replicated to the other read regions instantaneously.
If I always write to and read from the same region, I am guaranteed to see the document there. So I am sacrificing latency for consistency in that scenario.
In order to achieve this, I have to specify which region I want to read from:
var clientOptions = new CosmosClientOptions
{
ApplicationRegion = "East US"
};
return new CosmosClient(_cosmosDbDataConnectionOptions.CosmosDbUrl, new DefaultAzureCredential(), clientOptions);
I want to use this CosmosClient only for that specific scenario.
In the normal case, I will set
ApplicationRegion = <app deployed region>
This requires me to have two CosmosClient instances for the same Cosmos DB account. Does it make sense to have two CosmosClient instances, or is there another recommended approach to this problem?
I searched and found https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/performance-tips-dotnet-sdk-v3?tabs=trace-net-core#sdk-usage, which recommends having one CosmosClient per app. But in my case, I have to set the read region differently per scenario.
If the concern is about consistency, then the session token might help: pass the session token from the write to the read call, even if they are different client instances. If you have other scenarios where the logic is different and you, for whatever reason, want to change the read endpoint, then no, there is no way to flip it on a running client, or to say for a particular request that it should go to another region.
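As a rough illustration, here is a minimal sketch of handing the session token from a write on one client to a read on another. The Order type, the container variables, and the property names are assumptions for the example, not taken from the question:
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public class Order
{
    public string id { get; set; }
}

public static class SessionTokenExample
{
    // writeContainer and readContainer are assumed to come from two different
    // CosmosClient instances (for example, one pinned to East US and one to the
    // app's local region).
    public static async Task<Order> WriteThenReadAsync(
        Container writeContainer, Container readContainer, Order order)
    {
        // Write with one client and capture the session token from the response.
        ItemResponse<Order> writeResponse =
            await writeContainer.CreateItemAsync(order, new PartitionKey(order.id));
        string sessionToken = writeResponse.Headers.Session;

        // Forward that token on the read so the other client instance serves the
        // read at or after the point captured by the token, even though it is a
        // separate client.
        ItemResponse<Order> readResponse = await readContainer.ReadItemAsync<Order>(
            order.id, new PartitionKey(order.id),
            new ItemRequestOptions { SessionToken = sessionToken });

        return readResponse.Resource;
    }
}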
As mentioned in the Microsoft documentation, there is support for increasing/decreasing the provisioned RUs of Cosmos DB containers using the Cosmos DB Java SDK, but when I try to perform those steps I get the error below:
com.azure.cosmos.CosmosException: {"innerErrorMessage":"\"Operation 'PUT' on resource 'offers' is not allowed through Azure Cosmos DB endpoint. Please switch on such operations for your account, or perform this operation through Azure Resource Manager, Azure Portal, Azure CLI or Azure Powershell\"\r\nActivityId: 86fcecc8-5938-46b1-857f-9d57b7, Microsoft.Azure.Documents.Common/2.14.0, StatusCode: Forbidden","cosmosDiagnostics":{"userAgent":"azsdk-java-cosmos/4.28.0 MacOSX/10.16 JRE/1.8.0_301","activityId":"86fcecc8-5938-46b1-857f-9d57b74c6ffe","requestLatencyInMs":89,"requestStartTimeUTC":"2022-07-28T05:34:40.471Z","requestEndTimeUTC":"2022-07-28T05:34:40.560Z","responseStatisticsList":[],"supplementalResponseStatisticsList":[],"addressResolutionStatistics":{},"regionsContacted":[],"retryContext":{"statusAndSubStatusCodes":null,"retryCount":0,"retryLatency":0},"metadataDiagnosticsContext":{"metadataDiagnosticList":null},"serializationDiagnosticsContext":{"serializationDiagnosticsList":null},"gatewayStatistics":{"sessionToken":null,"operationType":"Replace","resourceType":"Offer","statusCode":403,"subStatusCode":0,"requestCharge":"0.0","requestTimeline":[{"eventName":"connectionAcquired","startTimeUTC":"2022-07-28T05:34:40.472Z","durationInMicroSec":1000},{"eventName":"connectionConfigured","startTimeUTC":"2022-07-28T05:34:40.473Z","durationInMicroSec":0},{"eventName":"requestSent","startTimeUTC":"2022-07-28T05:34:40.473Z","durationInMicroSec":5000},{"eventName":"transitTime","startTimeUTC":"2022-07-28T05:34:40.478Z","durationInMicroSec":60000},{"eventName":"received","startTimeUTC":"2022-07-28T05:34:40.538Z","durationInMicroSec":1000}],"partitionKeyRangeId":null},"systemInformation":{"usedMemory":"71913 KB","availableMemory":"3656471 KB","systemCpuLoad":"empty","availableProcessors":8},"clientCfgs":{"id":1,"machineId":"uuid:248bb21a-d1eb-46a5-a29e-1a2f503d1162","connectionMode":"DIRECT","numberOfClients":1,"connCfg":{"rntbd":"(cto:PT5S, nrto:PT5S, icto:PT0S, ieto:PT1H, mcpe:130, mrpc:30, cer:false)","gw":"(cps:1000, nrto:PT1M, icto:PT1M, p:false)","other":"(ed: true, cs: false)"},"consistencyCfg":"(consistency: Session, mm: true, prgns: [])"}}}
at com.azure.cosmos.BridgeInternal.createCosmosException(BridgeInternal.java:486)
at com.azure.cosmos.implementation.RxGatewayStoreModel.validateOrThrow(RxGatewayStoreModel.java:440)
at com.azure.cosmos.implementation.RxGatewayStoreModel.lambda$toDocumentServiceResponse$0(RxGatewayStoreModel.java:347)
at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:106)
at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:74)
at reactor.core.publisher.FluxPeek$PeekSubscriber.onNext(FluxPeek.java:200)
at reactor.core.publisher.FluxHandle$HandleSubscriber.onNext(FluxHandle.java:119)
The message says to switch on such operations for my account, but I could not find any page to do that. Can I use Azure Functions to do the same thing at a specific time?
Code snippet:
CosmosAsyncContainer container = client.getDatabase("DatabaseName").getContainer("ContainerName");
ThroughputProperties autoscaleContainerThroughput = container.readThroughput().block().getProperties();
container.replaceThroughput(ThroughputProperties.createAutoscaledThroughput(newAutoscaleMaxThroughput)).block();
This is because disableKeyBasedMetadataWriteAccess is set to true on the account. You will need to contact either your subscription owner or someone with the DocumentDB Account Contributor role to modify the throughput using PowerShell or the Azure CLI (see the linked samples). You can also do this by redeploying the ARM template or Bicep file used to create the account (be sure to do a GET on the resource first so you don't accidentally change something).
If you are looking for a way to automatically scale resources up and down on a schedule, refer to this sample: Scale Azure Cosmos DB throughput by using an Azure Functions Timer trigger.
To learn more about the disableKeyBasedMetadataWriteAccess property and its impact on control plane operations from the data plane SDKs, see Preventing changes from the Azure Cosmos DB SDKs.
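For reference, a scheduled scaler along the lines of that sample looks roughly like the sketch below. It is written with the .NET SDK rather than Java, it only works while the account still allows SDK-based throughput changes (disableKeyBasedMetadataWriteAccess = false), and the database name, container name, schedule, and 10,000 RU/s target are placeholder assumptions:
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ScaleCosmosThroughput
{
    // One shared client per function app, reused across invocations.
    private static readonly CosmosClient Client =
        new CosmosClient(Environment.GetEnvironmentVariable("CosmosConnectionString"));

    // Runs every day at 08:00 UTC; adjust the CRON expression to your schedule.
    [FunctionName("ScaleCosmosThroughput")]
    public static async Task Run([TimerTrigger("0 0 8 * * *")] TimerInfo timer, ILogger log)
    {
        Container container = Client.GetContainer("DatabaseName", "ContainerName");

        ThroughputResponse current = await container.ReadThroughputAsync(requestOptions: null);
        log.LogInformation($"Current autoscale max RU/s: {current.Resource.AutoscaleMaxThroughput}");

        // Raise the autoscale ceiling to 10,000 RU/s (an illustrative value).
        await container.ReplaceThroughputAsync(ThroughputProperties.CreateAutoscaleThroughput(10000));
    }
}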
I have a simple Azure Function that periodically writes some data into Azure Table Storage.
var storageAccount = new CloudStorageAccount(new Microsoft.WindowsAzure.Storage.Auth.StorageCredentials("mystorage","xxxxx"),true);
var tableClient = storageAccount.CreateCloudTableClient();
myTable = tableClient.GetTableReference("myData");
TableOperation insertOperation = TableOperation.Insert(data);
myTable.ExecuteAsync(insertOperation);
The code runs fine locally in Visual Studio, and all the data is written correctly into the Table Storage hosted in Azure.
But if I deploy this code 1:1 to Azure as an Azure Function, the code also runs without any exception, and the logging shows it goes through every line of code.
Yet no data is written to the Table Storage: same name, same credentials, same code.
Is Azure blocking this connection (Azure Function in Azure > Azure Table Storage) in some way, in contrast to local Azure Function > Azure Table Storage?
"Is Azure blocking this connection (Azure Function in Azure > Azure Table Storage) in some way, in contrast to local Azure Function > Azure Table Storage?"
No, it is not Azure that is blocking the connection or anything of that sort.
You have to await the table operation you start with ExecuteAsync; otherwise control leaves the method before that operation has completed. Change your last line of code to:
await myTable.ExecuteAsync(insertOperation);
See the guidance here: because this call is not awaited, the current method continues to run before the call is completed.
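For completeness, a minimal sketch of the corrected function; the function name, timer schedule, and entity keys are placeholder assumptions, and the important parts are the async Task signature and the await:
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Table;

public static class WriteToTableFunction
{
    [FunctionName("WriteToTable")]
    public static async Task Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer)
    {
        var storageAccount = new CloudStorageAccount(
            new StorageCredentials("mystorage", "xxxxx"), useHttps: true);
        var tableClient = storageAccount.CreateCloudTableClient();
        CloudTable myTable = tableClient.GetTableReference("myData");

        // Placeholder entity; in the question "data" is built elsewhere.
        var data = new TableEntity("partition1", "row1");

        TableOperation insertOperation = TableOperation.Insert(data);

        // Awaiting guarantees the insert has finished (and surfaced any error)
        // before the function instance is allowed to shut down.
        await myTable.ExecuteAsync(insertOperation);
    }
}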
The problem was the RowKey:
I used DateTime.Now for the RowKey (since auto-increment values are not provided by Table Storage).
My local format was "1.1.2019 18:19:20", while the server's format was "1/1/2019 ...",
and "/" is not allowed in the RowKey string.
After formatting the DateTime string correctly, everything works fine.
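A small sketch of one way to build a culture-independent, RowKey-safe timestamp (the format strings are just reasonable choices, not the only option):
using System;
using System.Globalization;

// Culture-invariant, sortable, and free of the characters Table Storage forbids
// in PartitionKey/RowKey values ('/', '\', '#', '?').
string rowKey = DateTime.UtcNow.ToString("yyyyMMddHHmmssfff", CultureInfo.InvariantCulture);

// If newest-first ordering is wanted, an inverted-ticks key is a common pattern.
string descendingRowKey =
    (DateTime.MaxValue.Ticks - DateTime.UtcNow.Ticks).ToString("D19", CultureInfo.InvariantCulture);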
Is there any option to retrieve Cosmos DB (SQL API) throughput programmatically? I'm using the code below to get the list of databases:
DocumentClient client = new DocumentClient(ClientURL, ClientKey);
var databaseList = client.CreateDatabaseQuery().ToList();
Next, I want to know the throughput of each database.
Please let me know if this is feasible.
You could use the CreateOfferQuery method to get the throughput settings of a database or collection.
Also, please refer to this REST API: https://learn.microsoft.com/en-us/rest/api/cosmos-db/get-an-offer
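A rough sketch of reading database-level offers with the same DocumentClient SDK; the method wrapper and console output are mine, and note that a database only has an offer if it was provisioned with shared (database-level) throughput:
using System;
using System.Linq;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

public static class ThroughputReader
{
    // clientUrl and clientKey correspond to the ClientURL and ClientKey in the question.
    public static void PrintDatabaseThroughput(Uri clientUrl, string clientKey)
    {
        using (var client = new DocumentClient(clientUrl, clientKey))
        {
            foreach (Database database in client.CreateDatabaseQuery().ToList())
            {
                // Offers are tied to resources by self-link. If the database has no
                // offer, query the offers of its collections the same way.
                Offer offer = client.CreateOfferQuery()
                    .Where(o => o.ResourceLink == database.SelfLink)
                    .AsEnumerable()
                    .FirstOrDefault();

                if (offer is OfferV2 offerV2)
                {
                    Console.WriteLine($"{database.Id}: {offerV2.Content.OfferThroughput} RU/s");
                }
            }
        }
    }
}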
Is it possible to make a given region both read and write in a Cosmos DB geo-redundant setup?
I am using Azure DocumentDB with the Java SDK and tried to override the location preference using the connection policy as below:
ConnectionPolicy policy = new ConnectionPolicy();
policy.setEnableEndpointDiscovery(true);
List<String> locations = new ArrayList<>();
locations.add("West US");
policy.setPreferredLocations(locations);
But I can still see some requests going to East in Metrics Explorer. Any help here would be greatly appreciated.
TIA
You also need to set enableEndpointDiscovery to true.
When the value of this property is true, the SDK will automatically discover the current write and read regions to ensure requests are sent to the correct region, based on the regions specified in the PreferredLocations property. The default value is true, indicating endpoint discovery is enabled.