Autoscaling Azure SQL Database

We have an application that uses Azure SQL for its database backend. Under normal conditions this database can successfully run on a Premium 1 plan. However, during the early morning hours we have jobs that run and increase database load, and during those few hours we need to move to a Premium 3 plan. Premium 3 costs about 8 times more, so obviously we do not want to pay for that plan 24/7.
Is it possible to autoscale the database up and down? Cloud services offer an easy way to scale the number of instances in the Azure Portal; however, nothing like this exists for Azure SQL databases. Can it be done programmatically with the Azure SDK? I have been unable to locate any documentation on the subject.

After digging through the articles in @ErikEJ's answer (thanks!), I was able to find the following, which appears to have been newly published with the release of the Elastic Scale preview:
Changing Database Service Tiers and Performance Levels
The following REST APIs are now newly available as well, which let you do pretty much whatever you want to your databases:
REST API Operations for Azure SQL Databases
And for my original question of scaling service tiers (e.g. P1 -> P3 -> P1):
Update Database REST API
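For illustration, here is a minimal sketch of that scaling call from PowerShell. It is hedged: it targets the current ARM REST endpoint rather than the service-management API this answer originally linked, and the subscription, resource group, server, and database names are placeholders.

# Hedged sketch: scale a database to P3 via the ARM "Databases - Update" REST call.
$token = (Get-AzAccessToken).Token
$uri = "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>" +
       "/providers/Microsoft.Sql/servers/<server>/databases/<db>?api-version=2021-11-01"
$body = @{ sku = @{ name = "P3"; tier = "Premium" } } | ConvertTo-Json
Invoke-RestMethod -Method Patch -Uri $uri -Body $body `
    -Headers @{ Authorization = "Bearer $token" } -ContentType "application/json"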
With these new developments I am going to assume it's only a matter of time before autoscaling is also available as a simple configuration in the Azure Portal, much like cloud services.

Another way to do it is using Azure Automation with the runbook below:

param
(
    # Desired Azure SQL Database edition {Basic, Standard, Premium}
    [parameter(Mandatory=$true)]
    [string] $Edition,

    # Desired performance level {Basic, S0, S1, S2, P1, P2, P3}
    [parameter(Mandatory=$true)]
    [string] $PerfLevel
)

inlinescript
{
    # I only care about one DB, so I keep its server and name in variable assets and read them here
    $SqlServerName = Get-AutomationVariable -Name 'SqlServerName'
    $DatabaseName = Get-AutomationVariable -Name 'DatabaseName'

    Write-Output "Begin vertical scaling script..."

    # Establish credentials for the Azure SQL Database server
    $ServerCredential = New-Object System.Management.Automation.PSCredential("yourDBadmin", ("YourPassword" | ConvertTo-SecureString -AsPlainText -Force))

    # Create a connection context for the Azure SQL Database server
    $CTX = New-AzureSqlDatabaseServerContext -ManageUrl "https://$SqlServerName.database.windows.net" -Credential $ServerCredential

    # Get the Azure SQL Database context
    $Db = Get-AzureSqlDatabase $CTX -DatabaseName $DatabaseName

    # Look up the requested performance level for the target $DatabaseName
    $ServiceObjective = Get-AzureSqlDatabaseServiceObjective $CTX -ServiceObjectiveName "$Using:PerfLevel"

    # Set the new edition/performance level
    Set-AzureSqlDatabase $CTX -Database $Db -ServiceObjective $ServiceObjective -Edition $Using:Edition -Force

    # Output final status message
    Write-Output "Scaled the performance level of $DatabaseName to $Using:Edition - $Using:PerfLevel"
    Write-Output "Completed vertical scale"
}
Ref:
Azure Vertically Scale Runbook
Set schedules for when you want to scale up and down.
For me, I used two schedules with input parameters: one for scaling up and another one for scaling down. A sketch of wiring this up follows.
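For illustration, a minimal sketch using the Az.Automation cmdlets; the resource group, automation account, and schedule names are placeholders, and the runbook is assumed to be the one above:

# Hedged sketch: run the scale-up at 05:00 daily with the given parameters.
$rg = "MyResourceGroup"
$account = "MyAutomationAccount"
New-AzAutomationSchedule -ResourceGroupName $rg -AutomationAccountName $account `
    -Name "ScaleUp-0500" -StartTime (Get-Date "05:00").AddDays(1) -DayInterval 1
Register-AzAutomationScheduledRunbook -ResourceGroupName $rg -AutomationAccountName $account `
    -RunbookName "Set-AzureSqlDatabaseEdition" -ScheduleName "ScaleUp-0500" `
    -Parameters @{ Edition = "Premium"; PerfLevel = "P3" }
# A second schedule passing @{ Edition = "Premium"; PerfLevel = "P1" } scales back down.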
Hope that helps.

Yes, that feature is available: Azure SQL Database Elastic Scale
https://learn.microsoft.com/en-gb/azure/sql-database/sql-database-elastic-scale-introduction

In some cases the easiest option might be to just run a SQL query, as described on MSDN.
For example:
ALTER DATABASE [database_name] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3', MAXSIZE = 250 GB);
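To confirm the change took effect afterwards, a minimal sketch using the Az.Sql module (resource names are placeholders):

# Hedged sketch: show the database's current edition and service objective.
Get-AzSqlDatabase -ResourceGroupName "rg" -ServerName "servername" -DatabaseName "database_name" |
    Select-Object DatabaseName, Edition, CurrentServiceObjectiveName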

Related

Find Azure SQL Database Geo-Replication Secondaries via PowerShell

I created an Azure SQL Database and configured geo-replication to a second server in a different region. In the Azure Portal, I can click on either of the databases and see details about the regions being replicated to.
I want to use PowerShell to find this same information, but cannot find a cmdlet or property that exposes this information:
# Get database object
$database = Get-AzSqlDatabase -ResourceGroupName test-rg -ServerName testsql-eastus -DatabaseName TestDB
# Find if geo-replication is enabled?
The goal is to be able to pull all SQL databases in a subscription, and take different action on them depending if they have geo-replication enabled.
Please see the documentation for Get-AzSqlDatabaseFailoverGroup:
Gets a specific Azure SQL Database Failover Group or lists the
Failover Groups on a server. Either server in the Failover Group may
be used to execute the command. The returned values will reflect the
state of the specified server with respect to the Failover Group.
Example:
You can run Get-AzSqlDatabaseFailoverGroup -ResourceGroupName 'rg' -ServerName 'servername' to see whether the databases on an Azure SQL server are configured for geo-replication. If no failover group name is returned, the database does not have geo-replication enabled.
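Building on that, a rough sketch that walks every server in the subscription and reports its failover groups. It assumes the Az.Sql module and that the returned group objects expose FailoverGroupName and DatabaseNames properties:

# Hedged sketch: list failover groups (and their databases) per server.
$servers = Get-AzSqlServer
foreach ($server in $servers) {
    $groups = Get-AzSqlDatabaseFailoverGroup `
        -ResourceGroupName $server.ResourceGroupName -ServerName $server.ServerName
    foreach ($g in $groups) {
        # DatabaseNames is an assumption; adjust to your Az.Sql version if needed.
        Write-Output "$($server.ServerName): group '$($g.FailoverGroupName)' -> $($g.DatabaseNames -join ', ')"
    }
}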

Alternative to New-AzSqlDatabaseCopy on Hyperscale Database

I am working in Microsoft Azure, where I have resource groups for a test environment and a production environment; each has an Azure SQL Database server with its respective database.
I am creating an Automation Account runbook in PowerShell in another Microsoft Azure account (important note) that is responsible for "copying" the production database to test. I know there is the New-AzSqlDatabaseCopy command; however, it does not work with Hyperscale databases.
Is there an alternative to this command for Hyperscale? Or, from this second account, is it possible to create a .bacpac remotely with Azure PowerShell commands? Everything I have seen works within a single account, but the database account is different from the automation account for billing reasons.
Thank you in advance for your help and comments.
I already tried the New-AzureRmSqlDatabaseExport command, but it seems to work only within the same Azure account, and I can't specify one Azure account for the backup and another Azure account for the storage. Am I right?
As Alberto Morillo says in his comment, New-AzSqlDatabaseCopy is currently not available for Azure SQL Hyperscale, at least at the moment of this answer.
So I tried to use New-AzureRmSqlDatabaseExport with two Azure accounts, and it's totally possible: you need to log in with the Azure account of the source database (Connect-AzureRmAccount), then call the New-AzureRmSqlDatabaseExport command with the following parameters.
# The resource group, server, database, and admin credentials refer to the
# source database; the storage key type, key, and URI refer to the
# destination storage account (the one in the other Azure account).
New-AzureRmSqlDatabaseExport `
    -ResourceGroupName $RGName `
    -ServerName $Server `
    -DatabaseName $Database `
    -AdministratorLogin $User `
    -AdministratorLoginPassword $Pwd `
    -StorageKeytype "StorageAccessKey" `
    -StorageKey $StorageKey `
    -StorageUri $StorageFileFullURI
# The format of the storage file URI is the following:
# https://contosostorageaccount.blob.core.windows.net/backupscontainer/backupdatabasefile.bacpac
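The export runs asynchronously. As a hedged sketch (assuming the same AzureRM-era cmdlets), capturing the request object lets you poll until the .bacpac is written:

# Hedged sketch: poll the export until it leaves the "InProgress" state.
$exportRequest = New-AzureRmSqlDatabaseExport `
    -ResourceGroupName $RGName -ServerName $Server -DatabaseName $Database `
    -AdministratorLogin $User -AdministratorLoginPassword $Pwd `
    -StorageKeytype "StorageAccessKey" -StorageKey $StorageKey `
    -StorageUri $StorageFileFullURI
do {
    Start-Sleep -Seconds 30
    $status = Get-AzureRmSqlDatabaseImportExportStatus `
        -OperationStatusLink $exportRequest.OperationStatusLink
    Write-Output $status.Status
} while ($status.Status -eq "InProgress")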
Unfortunately, this command is not enabled for Hyperscale, so I get the following error message:
New-AzureRmSqlDatabaseExport : 40822: This feature is not available for the selected database's edition (Hyperscale).
I used the same command with a database that was not Hyperscale and it worked perfectly.
Finally, I think I will have to perform the manual process for at least a few months, until Microsoft releases the update for Hyperscale.
Database copy is currently not available for Azure SQL Hyperscale, but you may see it in public preview in a few months.

Find who created table constraint in azure sql database

I have an Azure SQL database. Is it possible to find out who created a constraint on a table? Or at least when it was added? If yes, how can I do that? Are there any scripts/tools for that purpose?
Thanks in advance.
Azure SQL has a feature named Auditing. If it is enabled on the server and/or database, you can define a storage account to send the server audit and database audit logs to. In Azure Storage, audit logs are saved as a collection of blob files within a container named sqldbauditlogs. Using Power BI, for example, you can view the audit log data.
If this feature is not enabled, I think you will struggle to find your user, unless the database is accessed using Azure AD identities.
Please note that Advanced Threat Detection will alert you on unusual access patterns. A least-privilege approach to access is recommended.
Ref:
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-auditing
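If auditing is already writing to the sqldbauditlogs container mentioned above, a minimal sketch for locating the most recent audit blobs (the storage account name and key are placeholders):

# Hedged sketch: list the ten newest audit log blobs.
$ctx = New-AzStorageContext -StorageAccountName "storageaccountname" -StorageAccountKey "<key>"
Get-AzStorageBlob -Container "sqldbauditlogs" -Context $ctx |
    Sort-Object LastModified -Descending |
    Select-Object -First 10 Name, LastModified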
Maybe you can use the query below to find out when the constraint was created, from the cached SQL execution records:
SELECT TOP 1000
    QS.creation_time,
    SUBSTRING(ST.text, (QS.statement_start_offset/2) + 1,
        ((CASE QS.statement_end_offset WHEN -1 THEN DATALENGTH(ST.text)
          ELSE QS.statement_end_offset END - QS.statement_start_offset)/2) + 1
    ) AS statement_text,
    ST.text,
    QS.total_worker_time,
    QS.last_worker_time,
    QS.max_worker_time,
    QS.min_worker_time
FROM
    sys.dm_exec_query_stats QS
CROSS APPLY
    sys.dm_exec_sql_text(QS.sql_handle) ST
WHERE ST.text LIKE '%constraint_name%'  -- replace with the actual constraint name
ORDER BY
    QS.creation_time DESC
This query may take some time to run.
Hope this helps.
If you enable Azure SQL Auditing, you can try the following using PowerShell.
Set-AzureRmSqlDatabaseAuditing `
    -State Enabled `
    -ResourceGroupName "resourcegroupname" `
    -ServerName "sqlinstancename" `
    -StorageAccountName "storageaccountname" `
    -DatabaseName "dbname" `
    -AuditActionGroup 'SCHEMA_OBJECT_CHANGE_GROUP' `
    -RetentionInDays 8 `
    -AuditAction "CREATE ON schema::dbo BY [public]"
# -ServerName takes the bare instance name, e.g. "sqlinstancename" for sqlinstancename.database.windows.net

Delete old Windows Azure Diagnostics data from table storage (performance counters, etc.)

I have several Windows VMs running on Azure that are configured to collect performance counters and event logs.
All of this is configured in the "Diagnostic settings..." on the VM resource inside Azure Portal. There's a Windows Azure Diagnostics agent that collects this data on the VM and stores it into a storage account (inside Table Storage).
All of this collected data (performance counters, metrics, logs, etc.) doesn't have any retention policy, and there doesn't seem to be any way of setting one up. So it just accumulates in the storage account's table storage forever.
This is where my problem is -- there's now too much data in these tables (several terabytes in my case) and it's costing a lot of money just to keep it. And it's only going to keep increasing over time.
The relevant storage account tables are tables like:
WADMetrics* (Windows Azure Diagnostics Metrics Table)
WADPerformanceCountersTable (Windows Azure Diagnostics Performance Counters Table)
WADWindowsEventLogsTable (Windows Azure Diagnostics Windows Event Logs Table)
Is there some way to delete old data in these tables so it won't break anything? Or even better, is there some way to configure a retention policy, or set things up so the data doesn't keep accumulating forever?
Is there some way to delete old data in these tables so it won't break anything?
You would need to do this manually. The way this works is that you first query the data that needs to be deleted, and then delete it. The PartitionKey attribute of the entities stored in these tables actually encodes a date/time value: the tick count, prepended with zeroes to make it a fixed-length string. So you take the from and to date/time values, convert them to ticks, make each a 19-character string (by prepending the appropriate number of zeroes), and query the data with that range. Once you get the data on the client side, you send delete requests back to table storage (see the sketch after the list below).
To speed up the whole process, there are a few things you could do:
When you query the data, use query projection to return only the PartitionKey and RowKey attributes, as only these two are needed for deletion.
For deletion, use entity batch transactions. This can speed up the delete operation considerably.
For faster deletes, you can spin up a VM in the same region as your storage account. That way you are not paying data egress charges.
I wrote a blog post some time ago that you may find helpful: https://gauravmantri.com/2012/02/17/effective-way-of-fetching-diagnostics-data-from-windows-azure-diagnostics-table-hint-use-partitionkey/.
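Putting the PartitionKey trick above into code, a minimal sketch assuming the AzTable module; the storage account details, table name, and 90-day retention window are placeholders:

# Hedged sketch: delete WAD entities older than 90 days.
Import-Module AzTable
$storageAccount = Get-AzStorageAccount -ResourceGroupName "myRG" -Name "myData"
$ctx = $storageAccount.Context
# WAD PartitionKeys are UTC tick counts zero-padded to 19 characters.
$cutoff = (Get-Date).AddDays(-90).ToUniversalTime().Ticks.ToString("d19")
$cloudTable = (Get-AzStorageTable -Context $ctx -Name "WADPerformanceCountersTable").CloudTable
Get-AzTableRow -Table $cloudTable -CustomFilter "PartitionKey lt '$cutoff'" |
    Remove-AzTableRow -Table $cloudTable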
Or even better, is there some way to configure a retention policy, or set things up so the data doesn't keep accumulating forever?
Unfortunately there isn't, at least as of today. There's a retention setting, but that's only for blobs.
Just came across this issue as I was tracking down what costs the most in the subscription.
One useful tool is the Azure Storage Explorer. You can browse to a table, inspect its contents, use the Table Statistics button to count table rows, and multi-select and delete rows.
For one small VM that has been running since 2016, I found that the WADMetrics tables seem to roll every 10 days, but the others do not. A sample WADMetrics table contained 5,724 entries. The WADWindowsEventLogsTable contained 10,022 entries. I cancelled the WADPerformanceCountersTable count when it reached 5 million entries. It costs more to store the statistics than the VM's VHD.
This article summarizes useful information about PowerShell commands for manipulating tables. Unfortunately, the Azure Cloud Shell doesn't yet support commands for working inside a table, e.g. Get-AzTableRow (see this report). I assume those would work if you set up the latest Az PowerShell commands locally; then you could select rows with a filter and use Remove-AzTableRow to delete some of them. In my case, the machine has been decommissioned, so I just needed a way to delete lots of tables without having to click on each one in the dashboard. Here are some sample commands to start from:
$location = "uswest"
$resourceGroup = "myRG"
$storageAccountName = "myData"
$storageAccount = Get-AzStorageAccount -ResourceGroupName $resourceGroup -Name $storageAccountName
$ctx = $storageAccount.Context
# List all tables in storage account
Get-AzStorageTable -Context $ctx
# Count the WADMetrics tables
(Get-AzStorageTable -Context $ctx -Name "WADMetrics*").count
# Count the WADMetrics tables with "2018" in their name
(Get-AzStorageTable -Context $ctx -Name "WADMetrics*2018*").count
# Remove all WADMetrics tables with "2018" in their name without confirmation, then re-count
# Only Get- supports wildcards, so pipe to the Remove-AzStorageTable command
Get-AzStorageTable -Context $ctx -Name "WADMetrics*2018*" | Remove-AzStorageTable -Force
(Get-AzStorageTable -Context $ctx -Name "WADMetrics*2018*").count
# Remove the big tables. Confirmation takes a long time, so suppress it.
Remove-AzStorageTable -Context $ctx -Name "WADWindowsEventLogsTable" -Force
Remove-AzStorageTable -Context $ctx -Name "WADPerformanceCountersTable" -Force
# The following do NOT work in Azure Cloud Shell as of 07/16/2019. See
# https://github.com/MicrosoftDocs/azure-docs/issues/28608
# Count the rows in WADWindowsEventLogsTable
$tableName = "WADWindowsEventLogsTable"
$cloudTable = (Get-AzStorageTable -Context $ctx -Name $tableName).CloudTable
$cloudTableResults = Get-AzTableRow -table $cloudTable -columnName "RowKey"
$cloudTableResults.count
Another solution is to write a small C# program to erase the Windows Azure Diagnostic (WAD) data.
The following article gives you a more or less out-of-the-box solution, with methods that can erase both the WADMetrics* tables and the rows contained in WADDiagnosticInfrastructureLogsTable, WADPerformanceCountersTable and WADWindowsEventLogsTable.
Using the two methods DeleteOldTables() and DeleteOldData(), it is easy to write a small program that can be executed monthly to clean up the WAD data. Note that the code uses the WindowsAzure.Storage NuGet package, which would need to be installed in your project.
https://mysharepointlearnings.wordpress.com/2019/08/20/managing-azure-vm-diagnostics-data-in-table-storage/

Puzzled about Credential Details for Vertically Scaling SQL Azure using Azure Automation?

I am trying to scale my Azure SQL instances up and down using Azure Automation. I am using a gallery runbook called "Set-AzureSqlDatabaseEdition.ps1", which was created by Joseph Idziorek.
The link is: SQL Azure vertical scale Runbook
The parameter examples are:
.EXAMPLE for Set-AzureSqlDatabaseEdition
    -SqlServerName bzb98er9bp
    -DatabaseName myDatabase
    -Edition Premium
    -PerfLevel P1
    -Credential myCredential
However, I am confused about what should go into "Credential". Is this the SQL Server admin username, or something else? Is it something I create in Azure Automation assets?
Thanks.
You need to create a credential asset in Azure Automation holding the SQL Server admin credentials (not Azure AD credentials), and then use the name of that asset for the Credential parameter value.
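For illustration, a minimal sketch of creating such a credential asset with Az.Automation; the resource group, automation account, and asset names are placeholders:

# Hedged sketch: store the SQL Server admin login as an Automation credential asset.
$password = ConvertTo-SecureString "YourPassword" -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential("yourDBadmin", $password)
New-AzAutomationCredential -ResourceGroupName "MyResourceGroup" `
    -AutomationAccountName "MyAutomationAccount" -Name "myCredential" -Value $cred
# Then pass "myCredential" as the -Credential parameter when starting the runbook.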
