Upscaling/downscaling provisioned RU for Cosmos containers at a specific time - Azure

As mentioned in the Microsoft documentation, there is support for increasing/decreasing the provisioned RU of Cosmos containers using the Cosmos DB Java SDK, but when I try to perform the steps I get the error below:
com.azure.cosmos.CosmosException: {"innerErrorMessage":"\"Operation 'PUT' on resource 'offers' is not allowed through Azure Cosmos DB endpoint. Please switch on such operations for your account, or perform this operation through Azure Resource Manager, Azure Portal, Azure CLI or Azure Powershell\"\r\nActivityId: 86fcecc8-5938-46b1-857f-9d57b7, Microsoft.Azure.Documents.Common/2.14.0, StatusCode: Forbidden","cosmosDiagnostics":{"userAgent":"azsdk-java-cosmos/4.28.0 MacOSX/10.16 JRE/1.8.0_301","activityId":"86fcecc8-5938-46b1-857f-9d57b74c6ffe","requestLatencyInMs":89,"requestStartTimeUTC":"2022-07-28T05:34:40.471Z","requestEndTimeUTC":"2022-07-28T05:34:40.560Z","responseStatisticsList":[],"supplementalResponseStatisticsList":[],"addressResolutionStatistics":{},"regionsContacted":[],"retryContext":{"statusAndSubStatusCodes":null,"retryCount":0,"retryLatency":0},"metadataDiagnosticsContext":{"metadataDiagnosticList":null},"serializationDiagnosticsContext":{"serializationDiagnosticsList":null},"gatewayStatistics":{"sessionToken":null,"operationType":"Replace","resourceType":"Offer","statusCode":403,"subStatusCode":0,"requestCharge":"0.0","requestTimeline":[{"eventName":"connectionAcquired","startTimeUTC":"2022-07-28T05:34:40.472Z","durationInMicroSec":1000},{"eventName":"connectionConfigured","startTimeUTC":"2022-07-28T05:34:40.473Z","durationInMicroSec":0},{"eventName":"requestSent","startTimeUTC":"2022-07-28T05:34:40.473Z","durationInMicroSec":5000},{"eventName":"transitTime","startTimeUTC":"2022-07-28T05:34:40.478Z","durationInMicroSec":60000},{"eventName":"received","startTimeUTC":"2022-07-28T05:34:40.538Z","durationInMicroSec":1000}],"partitionKeyRangeId":null},"systemInformation":{"usedMemory":"71913 KB","availableMemory":"3656471 KB","systemCpuLoad":"empty","availableProcessors":8},"clientCfgs":{"id":1,"machineId":"uuid:248bb21a-d1eb-46a5-a29e-1a2f503d1162","connectionMode":"DIRECT","numberOfClients":1,"connCfg":{"rntbd":"(cto:PT5S, nrto:PT5S, icto:PT0S, ieto:PT1H, mcpe:130, mrpc:30, cer:false)","gw":"(cps:1000, nrto:PT1M, icto:PT1M, p:false)","other":"(ed: true, cs: false)"},"consistencyCfg":"(consistency: Session, mm: true, prgns: [])"}}}
at com.azure.cosmos.BridgeInternal.createCosmosException(BridgeInternal.java:486)
at com.azure.cosmos.implementation.RxGatewayStoreModel.validateOrThrow(RxGatewayStoreModel.java:440)
at com.azure.cosmos.implementation.RxGatewayStoreModel.lambda$toDocumentServiceResponse$0(RxGatewayStoreModel.java:347)
at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:106)
at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:74)
at reactor.core.publisher.FluxPeek$PeekSubscriber.onNext(FluxPeek.java:200)
at reactor.core.publisher.FluxHandle$HandleSubscriber.onNext(FluxHandle.java:119)
The message says to switch on such operations for your account, but I could not find any page to do that. Can I use Azure Functions to do the same thing at a specific time?
Code snippet:
CosmosAsyncContainer container = client.getDatabase("DatabaseName").getContainer("ContainerName");
ThroughputProperties autoscaleContainerThroughput = container.readThroughput().block().getProperties();
container.replaceThroughput(ThroughputProperties.createAutoscaledThroughput(newAutoscaleMaxThroughput)).block();

This is because disableKeyBasedMetadataWriteAccess is set to true on the account. You will need to contact either your subscription owner or someone with the DocumentDB Account Contributor role to modify the throughput using PowerShell or the Azure CLI (links to samples). You can also do this by redeploying the ARM template or Bicep file used to create the account (be sure to do a GET on the resource first so you don't accidentally change something).
If you are looking for a way to automatically scale resources up and down on a schedule, please refer to this sample: Scale Azure Cosmos DB throughput by using Azure Functions Timer trigger.
To learn more about the disableKeyBasedMetadataWriteAccess property and its impact on control plane operations from the data plane SDKs, see Preventing changes from the Azure Cosmos DB SDKs.
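For reference, a minimal sketch of what that scheduled approach could look like with an Azure Functions timer trigger in Java (the linked sample is not necessarily Java; the schedule, database/container names and the 8000 RU/s target are placeholders, and the endpoint/key are assumed to come from app settings):
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.models.ThroughputProperties;
import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.TimerTrigger;

public class ScaleCosmosThroughput {

    // Runs every day at 08:00 UTC; adjust the CRON expression to your schedule.
    @FunctionName("ScaleUpContainer")
    public void run(
            @TimerTrigger(name = "timer", schedule = "0 0 8 * * *") String timerInfo,
            final ExecutionContext context) {

        // Endpoint and key come from application settings; names below are placeholders.
        CosmosClient client = new CosmosClientBuilder()
                .endpoint(System.getenv("COSMOS_ENDPOINT"))
                .key(System.getenv("COSMOS_KEY"))
                .buildClient();
        try {
            CosmosContainer container = client
                    .getDatabase("DatabaseName")
                    .getContainer("ContainerName");

            // Raise the autoscale max RU/s; use createManualThroughput for manual RU.
            container.replaceThroughput(
                    ThroughputProperties.createAutoscaledThroughput(8000));

            context.getLogger().info("Container throughput updated.");
        } finally {
            client.close();
        }
    }
}
Note that this goes through the data plane SDK, so it only works when key-based metadata writes are allowed on the account; with disableKeyBasedMetadataWriteAccess set to true, the function would need to go through ARM (for example via the management SDK, PowerShell or the CLI) instead.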

Related

Restore database on another server in Azure with Terraform

How do you restore an Azure SQL database on another server from a backup using Terraform?
Terraform docs talk about a create mode "RestoreExternalBackup". How could one use that?
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/mssql_database
I researched this issue and found it to be a discrepancy between the Terraform and Azure documentation. I also opened an issue on the GitHub repo with this info.
RestoreExternalBackup is listed as a possible value for CreateMode in the Azure API documentation for databases. However, the create mode documentation doesn't describe how to use it. This option should not actually be available for (non-managed) databases.
Looking at the managed database documentation, it clearly defines how to use the RestoreExternalBackup option. Oddly enough, the Terraform documentation doesn't list any create modes for managed databases. https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/mssql_managed_database
When trying to use RestoreExternalBackup to create a database, the error indicates that the option requires a pointer to a storage account, with the message "Missing storage container URI". Storage account information is not a valid property when creating a database resource, only for a managed database resource.
Database docs: https://learn.microsoft.com/en-us/rest/api/sql/2022-05-01-preview/databases/create-or-update?tabs=HTTP
Managed database docs: https://learn.microsoft.com/en-us/rest/api/sql/2022-05-01-preview/managed-databases/create-or-update?tabs=HTTP
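To make the difference concrete, here is a rough sketch of the managed-database call at the REST level, based on the managed database create-or-update documentation above (the subscription, resource group, instance, database, storage container, SAS token, backup file and bearer token are all placeholders):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestoreExternalBackupSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder ARM identifiers -- replace with real values.
        String url = "https://management.azure.com/subscriptions/<sub-id>"
                + "/resourceGroups/<rg>/providers/Microsoft.Sql"
                + "/managedInstances/<managed-instance>/databases/<db-name>"
                + "?api-version=2022-05-01-preview";

        // RestoreExternalBackup points at the storage container holding the backups,
        // which is why it is only valid for managed databases.
        String body = "{ \"location\": \"westeurope\", \"properties\": {"
                + " \"createMode\": \"RestoreExternalBackup\","
                + " \"storageContainerUri\": \"https://<account>.blob.core.windows.net/<container>\","
                + " \"storageContainerSasToken\": \"<sas-token>\","
                + " \"collation\": \"SQL_Latin1_General_CP1_CI_AS\","
                + " \"autoCompleteRestore\": true,"
                + " \"lastBackupName\": \"<last-backup-file>.bak\" } }";

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Bearer <arm-access-token>")
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
The storageContainerUri/storageContainerSasToken pair is the "pointer to a storage account" that the error message asks for, and it has no counterpart on the plain database resource.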

Azure Databricks move Log Analytics

Databricks VMs are pointing to the default Log Analytics workspace, but I want to point them to another one.
If I try to move the VMs to another workspace, it tells me that it's locked:
Error: cannot perform delete operation because following scope(s) are locked
Unfortunately, you are not allowed to move the Log Analytics workspace for the managed resource group created by Azure Databricks using the Azure portal.
Reason: By default, you cannot perform any write operation on the managed resource group created by Azure Databricks.
If you try to modify anything in the managed resource group, you will see this error message:
{"details":[{"code":"ScopeLocked","message":"The scope '/subscriptions/xxxxxxxxxxxxxxxx/resourceGroups/databricks-rg-chepra-d7ensl75cgiki' cannot perform write operation because following scope(s) are locked: '/subscriptions/xxxxxxxxxxxxxxxxxxxx/resourceGroups/databricks-rg-chepra-d7ensl75cgiki'. Please remove the lock and try again."}]}
Possible way: You can specify tags as key-value pairs when creating/modifying clusters, and Azure Databricks will apply these tags to the cloud resources.
Possible way: Configure your Azure Databricks cluster to use the monitoring library.
This article shows how to send application logs and metrics from Azure Databricks to a Log Analytics workspace. It uses the Azure Databricks Monitoring Library.
Hope this helps.

Accessing Azure Storage Blob from an AKS cluster

A little context: I'm having to migrate a project from AWS, where I'm currently using ECS, to Azure, where I'll be using AKS since their ACS (ECS equivalent) is deprecated.
This is a regular Django app, with its configuration variables being fetched from a server-config.json hosted on a private S3 bucket; the EC2 instance has the correct role with S3FullAccess.
I've been looking into reproducing that same behavior but with Azure Blob Storage instead, having achieved no success whatsoever :-(.
I tried using the Service Principal concept and adding it to the AKS cluster with the Storage Blob Data Owner role, but that doesn't seem to work. Overall it's been quite the frustrating experience - maybe I'm just having a hard time grasping the right way to use the permissions/scopes. The fact that the AKS cluster creates its own resource group is something unfathomable - but I've attempted attaching the policies to it as well, to no avail. I then moved on to a solution indicated by Microsoft.
I managed to bind my AKS pods with the correct User Managed Identity through their indicated solution aad-pod-identity, but I feel like I'm missing something. I assigned Storage Blob Data Owner/Contributor to the identity, but still, when I enter the pods and try to access a Blob (using the python sdk), I get a resource not found message.
Is what I'm trying to achieve possible at all? Or will I have to change to a solution using Azure Key Vault/something along those lines?
First of all, you can use AKS Engine, which is more or less ACS for Kubernetes now.
As for access to the blob storage, you don't have to use Managed Service Identity; you can just use the account name/key (which is a bit less secure, but a lot less error prone, and more examples exist). The fact that you are getting a resource not found error most likely means your auth part is fine and you just don't have access to the resource; according to this, Storage Blob Data Contributor should be fine if you assigned it at the proper scope. For this to work 100%, just give your identity Contributor access at the subscription level; that way it's guaranteed to work.
I've found an example of using Python with MSI (here). You should start with that (and grant your identity Contributor access) and verify you can list resource groups. When that works, getting blob reads working should be trivial.
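Once the role assignment is in place, reading the config blob from a pod is only a few lines. A minimal sketch with the Azure SDK for Java (the same idea applies to the Python SDK the question uses); the storage account, container and blob names are placeholders, and the pod identity is assumed to be wired up via aad-pod-identity:
import com.azure.identity.DefaultAzureCredential;
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobServiceClient;
import com.azure.storage.blob.BlobServiceClientBuilder;

public class ReadConfigBlob {
    public static void main(String[] args) {
        // DefaultAzureCredential picks up the pod's managed identity
        // when running inside the cluster.
        DefaultAzureCredential credential = new DefaultAzureCredentialBuilder().build();

        // Placeholder storage account endpoint, container and blob names.
        BlobServiceClient serviceClient = new BlobServiceClientBuilder()
                .endpoint("https://<storage-account>.blob.core.windows.net")
                .credential(credential)
                .buildClient();

        BlobClient blobClient = serviceClient
                .getBlobContainerClient("config")
                .getBlobClient("server-config.json");

        // Requires the identity to have Storage Blob Data Reader (or higher)
        // on the storage account, container, or a broader scope.
        String config = blobClient.downloadContent().toString();
        System.out.println(config);
    }
}
If listing resource groups works with the identity but blob reads still come back as not found, double-check that the role assignment is on the storage account (or a broader scope) rather than on the AKS-managed resource group.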

Unable to deploy the index and grammar file in KES

I'm using the Knowledge Exploration Service (KES) by Azure. I've prepared a grammar and an index file. Since the size was small, I was able to run it on my local machine and on an Azure VM.
But now I want to deploy this service. The issue is that when I run the command kes deploy_service, it is unable to download the blob from Azure Storage, even when I try to provide the file from my local machine.
I followed the same steps on an Azure VM and received the same errors.
>kes deploy_service Some.grammar Some.index kes-example
00:00:00 Index: Some.index
00:00:00 ERROR: Invalid value for index parameter: 'Some.index' is not a blob URI.
>kes deploy_service Some.grammar https://storagename.blob.core.windows.net/containername/Some.index kes-example
00:00:00 Index: https://storagename.blob.core.windows.net/containername/Bell.index
00:00:02 ERROR: ResourceNotFound: The storage account 'storagename' was not found.
The container has public access. I can download the file via the browser and even via Azure CLI.
What am I missing here?
EDIT: Adding a sample index file which I've uploaded on Azure Storage with public access. This index file was generated using the Academic example in the documentation.
>kes describe_index https://kesstorage.blob.core.windows.net/kess/Academic.index
ERROR: ResourceNotFound: The storage account 'kesstorage' was not found.
kes.exe is using the old Service Management API. It is querying the API for Storage Accounts in your subscription, but this API predates Azure Resource Manager (ARM), and therefore has no knowledge of ARM Storage Accounts. You will need to use a Classic Storage Account instead.
For a tutorial on how to create a classic storage account, refer to this link: https://learn.microsoft.com/en-us/azure/storage/common/storage-create-storage-account#create-a-storage-account

Cannot connect to DocumentDb directly after having deployed the DocumentDb account

I have an ARM template that I use to deploy a DocumentDB account as well as other Azure resources to a resource group. I want my ARM template to set up a Stream Analytics job that uses the DocumentDB as output. In order to do this, the DocumentDB account created by the ARM template needs to have a database and a collection set up as well. I cannot find a way to do this from an ARM template, so I have written a PowerShell cmdlet to create the database and collection for me.
The Stream Analytics job cannot be created by the first ARM template since it depends on having the database and collection created first. Instead, I have to divide the deployment into two ARM templates, the first setting up the DocDB account and the second setting up the SA job.
The problem is that I cannot create a database in the DocDB account directly after having deployed the account via the ARM template. I get an exception with the following message: "The remote name could not be resolved: 'test.documents.azure.com'" when I try to execute the CreateDatabaseAsync method with the DocDbEndpoint and AuthKey I get back from the ARM template deployment.
Are there any timing issues after having deployed Azure resources using an ARM template before you can access them programmatically? This does not seem to be a problem with other Azure resources created this way.
Any help on this matter is highly appreciated, as well as guidance on good practices for working with ARM templates with DocumentDB and Stream Analytics jobs.
Update 2016-03-23
Code for setting up the connection to the DocumentDB to create the database.
Uri endpointUri = new Uri(documentDbEndPoint);
DocumentClient client = new DocumentClient(endpointUri, authKey);
var db = await client.CreateDatabaseAsync(new Database { Id = databaseId });
return db;
Where the documentDbEndPoint is in the form of: https://name.documents.azure.com:443/ and name is the name of my DocDB account just created by the ARM template deployment.
I have the code in a library which I can call either from a console application or from a PowerShell script by loading the library with:
Add-Type -Path <path to library dll file>
Whether I use the PowerShell script or the console application, I get the same error if I try to create a database right after having created the DocDB account using the ARM template. If I wait an hour or so, both the PowerShell script and the console application work and can create a database in the account.
It seems like there is some kind of timing issue with Azure setting up the DNS records for the newly created DocDB account so that it can be accessed using the DocDB API.
Update 2 2016-03-23
I just tried to create a DocDB account directly from the portal, and doing this instead of creating it from an ARM template makes it possible to create a database in the account using my PowerShell script and console application immediately.
This timing issue has now been fixed, and you should be able to use the account from the ARM template deployment right away.
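If anyone still sees a propagation window, one defensive option is to poll DNS for the new endpoint before making the first SDK call. A minimal sketch in Java, plain JDK only (the host name is the placeholder from the error message above and the retry/backoff numbers are arbitrary):
import java.net.InetAddress;
import java.net.UnknownHostException;

public class WaitForDocDbDns {

    // Polls DNS until the account's host name resolves, or gives up.
    public static void waitForHost(String host, int maxAttempts) throws InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                InetAddress.getByName(host); // succeeds once the DNS record is visible
                return;
            } catch (UnknownHostException e) {
                Thread.sleep(30_000L * attempt); // back off and retry
            }
        }
        throw new IllegalStateException("DNS record for " + host + " never appeared");
    }

    public static void main(String[] args) throws InterruptedException {
        // Placeholder host name taken from the error message in the question.
        waitForHost("test.documents.azure.com", 20);
        // ...then create the database once the endpoint resolves.
    }
}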
