AzureSQL: What is the automation/script name for PremiumRS service tiers? - azure

In Azure SQL, what is the name for the PremiumRS service tiers?
My automation script upsizes my database from Standard S3 to Premium P1 each morning at 6am, but I wish to change this to upsize to PremiumRS PS1.
This is the script I'm using:
https://gallery.technet.microsoft.com/scriptcenter/Azure-SQL-Database-e957354f

The PremiumRS service objectives are PRS1, PRS2, PRS4 and PRS6. I would also recommend double-checking which regions they are available in.
For service tiers:
http://blog.nilayparikh.com/azure/sql/update-microsoft-azure-sql-database-increase-storage-up-to-4tb/
For regions:
https://azure.microsoft.com/en-us/blog/sql-database-4tb-premium-and-premium-rs-preview/

See the Create Database (Azure SQL Database) topic at https://msdn.microsoft.com/en-us/library/dn268335.aspx for the CREATE DATABASE syntax.
Similarly, see the Alter Database (Azure SQL Database) topic at https://msdn.microsoft.com/en-us/library/mt574871.aspx for the ALTER DATABASE syntax.
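For example, a minimal T-SQL sketch of the scale-up step (MyDb is a placeholder database name; the PremiumRS edition and objective names are per the links above):
-- Scale the database up to the Premium RS PRS1 service objective.
ALTER DATABASE MyDb
MODIFY ( EDITION = 'PremiumRS', SERVICE_OBJECTIVE = 'PRS1' );
GO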

Related

Upscaling/Downscaling provisioned RU for cosmos containers at specific time

As mentioned in the Microsoft documentation, there is support for increasing/decreasing the provisioned RUs of Cosmos containers using the Cosmos DB Java SDK, but when I try to perform the steps I get the error below:
com.azure.cosmos.CosmosException: {"innerErrorMessage":"\"Operation 'PUT' on resource 'offers' is not allowed through Azure Cosmos DB endpoint. Please switch on such operations for your account, or perform this operation through Azure Resource Manager, Azure Portal, Azure CLI or Azure Powershell\"\r\nActivityId: 86fcecc8-5938-46b1-857f-9d57b7, Microsoft.Azure.Documents.Common/2.14.0, StatusCode: Forbidden","cosmosDiagnostics":{"userAgent":"azsdk-java-cosmos/4.28.0 MacOSX/10.16 JRE/1.8.0_301","activityId":"86fcecc8-5938-46b1-857f-9d57b74c6ffe","requestLatencyInMs":89,"requestStartTimeUTC":"2022-07-28T05:34:40.471Z","requestEndTimeUTC":"2022-07-28T05:34:40.560Z","responseStatisticsList":[],"supplementalResponseStatisticsList":[],"addressResolutionStatistics":{},"regionsContacted":[],"retryContext":{"statusAndSubStatusCodes":null,"retryCount":0,"retryLatency":0},"metadataDiagnosticsContext":{"metadataDiagnosticList":null},"serializationDiagnosticsContext":{"serializationDiagnosticsList":null},"gatewayStatistics":{"sessionToken":null,"operationType":"Replace","resourceType":"Offer","statusCode":403,"subStatusCode":0,"requestCharge":"0.0","requestTimeline":[{"eventName":"connectionAcquired","startTimeUTC":"2022-07-28T05:34:40.472Z","durationInMicroSec":1000},{"eventName":"connectionConfigured","startTimeUTC":"2022-07-28T05:34:40.473Z","durationInMicroSec":0},{"eventName":"requestSent","startTimeUTC":"2022-07-28T05:34:40.473Z","durationInMicroSec":5000},{"eventName":"transitTime","startTimeUTC":"2022-07-28T05:34:40.478Z","durationInMicroSec":60000},{"eventName":"received","startTimeUTC":"2022-07-28T05:34:40.538Z","durationInMicroSec":1000}],"partitionKeyRangeId":null},"systemInformation":{"usedMemory":"71913 KB","availableMemory":"3656471 KB","systemCpuLoad":"empty","availableProcessors":8},"clientCfgs":{"id":1,"machineId":"uuid:248bb21a-d1eb-46a5-a29e-1a2f503d1162","connectionMode":"DIRECT","numberOfClients":1,"connCfg":{"rntbd":"(cto:PT5S, nrto:PT5S, icto:PT0S, ieto:PT1H, mcpe:130, mrpc:30, cer:false)","gw":"(cps:1000, nrto:PT1M, icto:PT1M, p:false)","other":"(ed: true, cs: false)"},"consistencyCfg":"(consistency: Session, mm: true, prgns: [])"}}}
at com.azure.cosmos.BridgeInternal.createCosmosException(BridgeInternal.java:486)
at com.azure.cosmos.implementation.RxGatewayStoreModel.validateOrThrow(RxGatewayStoreModel.java:440)
at com.azure.cosmos.implementation.RxGatewayStoreModel.lambda$toDocumentServiceResponse$0(RxGatewayStoreModel.java:347)
at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:106)
at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:74)
at reactor.core.publisher.FluxPeek$PeekSubscriber.onNext(FluxPeek.java:200)
at reactor.core.publisher.FluxHandle$HandleSubscriber.onNext(FluxHandle.java:119)
The message says to switch on such operations for your account, but I could not find any page to do that. Can I use Azure Functions to do the same thing at a specific time?
Code snippet:
CosmosAsyncContainer container = client.getDatabase("DatabaseName").getContainer("ContainerName");
ThroughputProperties autoscaleContainerThroughput = container.readThroughput().block().getProperties();
container.replaceThroughput(ThroughputProperties.createAutoscaledThroughput(newAutoscaleMaxThroughput)).block();
This is because disableKeyBasedMetadataWriteAccess is set to true on the account. You will need to contact either your subscription owner or someone with the DocumentDB Account Contributor role to modify the throughput using PowerShell or the Azure CLI (see the linked samples). You can also do this by redeploying the ARM template or Bicep file used to create the account (be sure to do a GET on the resource first so you don't accidentally change something).
If you are looking for a way to automatically scale resources up and down on a schedule, please refer to this sample: Scale Azure Cosmos DB throughput by using Azure Functions Timer trigger.
To learn more about the disableKeyBasedMetadataWriteAccess property and its impact on control-plane operations from the data-plane SDKs, see Preventing changes from the Azure Cosmos DB SDKs.
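For example, a minimal Azure CLI sketch of the control-plane equivalent of the SDK call above (the account, resource group, database and container names are placeholders):
# Update the autoscale max throughput on a SQL API container through ARM,
# which is still allowed when disableKeyBasedMetadataWriteAccess is enabled.
az cosmosdb sql container throughput update \
  --account-name MyCosmosAccount \
  --resource-group MyResourceGroup \
  --database-name DatabaseName \
  --name ContainerName \
  --max-throughput 8000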

How to create Azure databricks cluster using Service Principal

I have an Azure Databricks workspace and I added a service principal to that workspace using the Databricks CLI. I have been trying to create a cluster using the service principal but am not able to figure it out. Can anyone help me?
I am able to create a cluster using my account, but I want to create it using the service principal and want it to be the owner of the cluster, not me.
Also, is there a way I can transfer the ownership of my cluster to the service principal?
First, answering the second question - no, you can't change the owner of the cluster.
To create a cluster that will have the service principal as its owner, you need to execute the creation operation under its identity. To do this, perform the following steps:
Prepare a JSON file with the cluster definition, as described in the documentation (a minimal example is shown after these steps).
Set the DATABRICKS_HOST environment variable to the address of your workspace:
export DATABRICKS_HOST=https://adb-....azuredatabricks.net
Generate an AAD token for the service principal as described in the documentation and assign its value to the DATABRICKS_TOKEN or DATABRICKS_AAD_TOKEN environment variable (see docs).
Create the Databricks cluster using the databricks-cli, providing the name of the JSON file with the cluster specification (docs):
databricks clusters create --json-file create-cluster.json
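As a sketch, create-cluster.json could look something like this (the cluster name, node type and Spark version are placeholders; see the Clusters API documentation for the full set of fields):
{
  "cluster_name": "sp-owned-cluster",
  "spark_version": "10.4.x-scala2.12",
  "node_type_id": "Standard_DS3_v2",
  "autoscale": {
    "min_workers": 1,
    "max_workers": 3
  }
}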
P.S. Another approach (highly recommended) is to use the Databricks Terraform provider to script your Databricks infrastructure; it's used by a significant number of Databricks customers and is much easier to use than the command-line tools.

How to get the fully qualified instance ID from data stored in a storage account table in Azure?

I want to get the fully qualified instance ID (e.g. "/subscriptions/9xxxxxx5-6xxe-4xxc-8xx4-2xxxxxxxxx5/resourceGroups/test/providers/Microsoft.Compute/virtualMachines/vm-test") which is stored in a storage account table in Azure.
I have enabled guest-level monitoring on my virtual machine and exported the metrics to a storage account table. In that table, the instance ID column (PartitionKey) looks like this:
":002Fsubscriptions:002F9xxxxxx5:002D6xxe:002D4xxc:002D8xx4:002D2xxxxxxxxx5:002FresourceGroups:002Ftest:002Fproviders:002FMicrosoft:002ECompute:002FvirtualMachines:002Fvm:002Dtest"
I'm not sure how to convert the PartitionKey column value back into an instance ID like that.
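For the decoding itself, the PartitionKey appears to escape each special character as ':' followed by its four-digit hex code point (:002F for '/', :002D for '-', :002E for '.'). A minimal Java sketch, assuming that encoding, to turn the value back into a resource ID:
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PartitionKeyDecoder {
    // Assumes each ":XXXX" escape is the hex Unicode code point of the original character.
    static String decode(String encoded) {
        Matcher m = Pattern.compile(":([0-9A-Fa-f]{4})").matcher(encoded);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            char c = (char) Integer.parseInt(m.group(1), 16);
            m.appendReplacement(sb, Matcher.quoteReplacement(String.valueOf(c)));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        String pk = ":002Fsubscriptions:002F9xxxxxx5:002D6xxe:002D4xxc:002D8xx4:002D2xxxxxxxxx5"
                + ":002FresourceGroups:002Ftest:002Fproviders:002FMicrosoft:002ECompute"
                + ":002FvirtualMachines:002Fvm:002Dtest";
        // Prints /subscriptions/9xxxxxx5-6xxe-4xxc-8xx4-2xxxxxxxxx5/resourceGroups/test/...
        System.out.println(decode(pk));
    }
}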
However, for your purpose of getting VM memory-related metrics, it's recommended to use Log Analytics. Search for the Log Analytics workspace resource in the Azure portal, narrow down to your specific VM scope, and then run a query, for example:
Perf
| where ObjectName == "Memory"
Alternatively, you can execute an Analytics query using the Query - Get REST API.
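A rough sketch of that call (the workspace ID and bearer token are placeholders; the KQL query above is URL-encoded):
# Run the same Perf query against the Log Analytics Query API.
curl -H "Authorization: Bearer <access-token>" \
  "https://api.loganalytics.io/v1/workspaces/<workspace-id>/query?query=Perf%20%7C%20where%20ObjectName%20%3D%3D%20%22Memory%22"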
For more information, you could read these docs.
https://learn.microsoft.com/en-us/azure/azure-monitor/log-query/get-started-portal
https://learn.microsoft.com/en-us/azure/azure-monitor/log-query/log-query-overview
Hope this helps.

Azure Databricks move Log Analytics

The Databricks VMs are pointing to the default Log Analytics workspace, but I want to point them to another one.
If I try to move the VMs to another workspace, it tells me that it's locked:
Error: cannot perform delete operation because following scope(s) are locked
Unfortunately, you are not allowed to move Log Analytics for the managed resource group created by Azure Databricks using the Azure portal.
Reason: by default, you cannot perform any write operation on the managed resource group created by Azure Databricks.
If you try to modify anything in the managed resource group, you will see this error message:
{"details":[{"code":"ScopeLocked","message":"The scope '/subscriptions/xxxxxxxxxxxxxxxx/resourceGroups/databricks-rg-chepra-d7ensl75cgiki' cannot perform write operation because following scope(s) are locked: '/subscriptions/xxxxxxxxxxxxxxxxxxxx/resourceGroups/databricks-rg-chepra-d7ensl75cgiki'. Please remove the lock and try again."}]}
Possible way: You can specify tags as key-value pairs while creating/modifying clusters, and Azure Databricks will apply these tags to the cloud resources.
Possible way: Configure your Azure Databricks cluster to use the monitoring library.
This article shows how to send application logs and metrics from Azure Databricks to a Log Analytics workspace. It uses the Azure Databricks Monitoring Library.
Hope this helps.

No default service level objective found of edition "GeneralPurpose"

I am getting the error No default service level objective found of edition "GeneralPurpose" in SSMS when creating a database in Azure SQL.
Please download the latest SQL Server Management Studio version from here. Version 18.0 has many fixes related to Azure Managed Instances.
It is a limitation of the free subscription you are using at this time: "Free Trial subscriptions can provision Basic, Standard S0 through S3 databases, up to 100 eDTU Basic or Standard elastic pools and DW100 through DW400 data warehouses."
You can also try to create the database using T-SQL as shown below.
CREATE DATABASE Testdb
( EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3' );
GO
In my case it was because I had the wrong connection string in my app settings (.NET).
To find your connection string, go to your database in Azure and, in the Overview, find "connection string".
