SQL Azure Database scale operation from P3 Premium to S2 Standard failed - azure

I'm trying to scale down a SQL Azure Database from P3 to S2, but I'm getting the error below.
Database scale operation from P3 Premium to S2 Standard failed for xxDB.
ErrorCode: undefined
ErrorMessage: The edition 'Standard' does not support the database max size '500 GB'.
What's the best way to scale down?

It seems you have to change the database max size first, and then downgrade the tier. Below is the full ALTER DATABASE syntax for making the change (I don't have an Azure DB to play with):
ALTER DATABASE database_name
{
    MODIFY NAME = new_database_name
  | MODIFY ( <edition_options> [, ... n] )
  | COLLATE collation_name
  | SET { <db_update_options> }
  | ADD SECONDARY ON SERVER <partner_server_name>
        [ WITH ( <add-secondary-option> ::= [, ... n] ) ]
  | REMOVE SECONDARY ON SERVER <partner_server_name>
  | FAILOVER
  | FORCE_FAILOVER_ALLOW_DATA_LOSS
}

<edition_options> ::=
{
    MAXSIZE = { 100 MB | 500 MB | 1 | 5 | 10 | 20 | 30 … 150 … 500 } GB
  | EDITION = { 'web' | 'business' | 'basic' | 'standard' | 'premium' }
  | SERVICE_OBJECTIVE =
        { 'S0' | 'S1' | 'S2' | 'S3' | 'P1' | 'P2' | 'P3' | 'P4' | 'P6' | 'P11'
        | { ELASTIC_POOL ( name = <elastic_pool_name> ) } }
}
Then try downgrading the tier.
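For example, a minimal sketch of the two steps (untested, since as noted I don't have an Azure DB to try it on; 'xxDB' is the database name from the question, and 250 GB is an assumed max size that Standard supports):

-- Step 1: lower the max size to a value the Standard edition supports (assumed here: 250 GB)
ALTER DATABASE xxDB
MODIFY ( MAXSIZE = 250 GB );

-- Step 2: once that completes, downgrade the edition and service objective
ALTER DATABASE xxDB
MODIFY ( EDITION = 'standard', SERVICE_OBJECTIVE = 'S2' );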
References:
https://azure.microsoft.com/en-us/documentation/articles/sql-database-scale-up/
In the above link, look for this section:
NOTE:
Changing your database pricing tier does not change the max database size. To change your database max size use Transact-SQL (T-SQL) or PowerShell.

You should be able to scale down a database and change its edition (e.g. from Premium to Standard) without first changing the MaxSize property. Can you share how you did this (e.g. T-SQL, Azure portal, PowerShell, REST API call, C# client SDK)? Is it possible that your code read the existing database MaxSize property and submitted it in the database update request? That would cause the error you saw, since Standard doesn't support 500 GB. The answer saying you need to scale and change MaxSize separately is currently correct when you scale up from Standard to Premium and want the benefit of Premium's greater storage allocation, but that apparently isn't what you were doing.
We are looking to make the behavior more intelligent here, but in the meantime it's possible you hit a bug.

Related

How to get Azure VM last reboot using Azure Resource Graph

I'm using Azure Resource Graph to create a dashboard and need the VM's last reboot or power-off date.
I need your help, please.
Thank you.
I tried to reproduce the same in my environment:
Graph query:
Resources
| where type == 'microsoft.compute/virtualmachines'
| summarize count() by PowerState = tostring(properties.extended.instanceView.powerState.code)
I checked the power state, and then tried the query below:
resources
| where type has 'microsoft.compute/virtualmachines/extensions'
| where name has 'MicrosoftMonitoringAgent' or name has 'AzureMonitorWindowsAgent'
| extend AzureVM = extract('virtualMachines/(.*)/extensions', 1, id), ArcVM = extract('machines/(.*)/extensions', 1, id)
| summarize count() by name = tolower(AzureVM), ArcVM = tolower(ArcVM), subscriptionId, resourceGroup, AgentType = name
| extend hasBoth = iff(count_ > 0, 'Yes', 'No')
| join (
    resources
    | where type =~ 'Microsoft.Compute/virtualMachines'
    | project name, properties.extended.instanceView.powerState.displayStatus,
        properties.extended.instanceView.powerState.code,
        created_ = properties.timeCreated
    | order by name desc
) on name
With this I got the creation time of each Azure VM along with its running or deallocated state.
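If you only need the power state and creation time per VM, here is a trimmed-down sketch of the same idea (property paths taken from the queries above; note that Azure Resource Graph exposes the current power state, not a historical reboot timestamp):

Resources
| where type == 'microsoft.compute/virtualmachines'
| project name,
    powerState = tostring(properties.extended.instanceView.powerState.code),
    timeCreated = todatetime(properties.timeCreated)
| order by name asc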
If you want an alert when the VM is stopped, you can check this: azureportal - Azure alert to notify when a vm is stopped - Stack Overflow
Reference: resource-graph-samples | Microsoft Learn

KQL Load Balancer Bytes

Hi, I've been trying to convert a query from bytes to GB, but I'm getting some strange results. I also get the same component showing up repeatedly with incrementally increasing size, which makes sense since we are getting the bytes used, which we would expect to increase, but I would only like the latest set of data (from when the query was run). Can anyone see where I'm going wrong?
AzureMetrics
| where TimeGenerated >=ago(1d)
| where Resource contains "LB"
| where MetricName contains "ByteCount"
| extend TotalGB = Total/1024
| summarize by Resource, TimeGenerated, TotalGB, MetricName, UnitName
| sort by TotalGB desc
| render piechart
In the table below, loadbal-1 reports several times in a short window, and the same goes for loadbal2 and 13. I'd like to capture each of these on a single line. I also think I might have messed up the "TotalGB" part of the query (converting bytes to GB).
I would only like the latest set of data (when the query was run)
You can use the arg_max() aggregation function.
For example:
AzureMetrics
| where TimeGenerated >= ago(1d)
| where Resource has "LoadBal"
| where MetricName == "ByteCount"
| summarize arg_max(TimeGenerated, *) by Resource
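arg_max(TimeGenerated, *) keeps, for each Resource, the whole row that has the latest TimeGenerated, which is exactly the "latest set of data" you're after.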
I've been trying to convert a query from bytes to GB, however I'm getting some strange results... I think I might have messed up the query for "TotalGB" (converting bytes to GB)
If the raw data is in bytes, then you need to divide it by exp2(30) (i.e. 1024*1024*1024) to get the value in GB.
Or you can use the format_bytes() function instead.
For example:
print bytes = 18027051483.0
| extend gb_1 = format_bytes(bytes, 2),
gb_2 = bytes/exp2(30),
gb_3 = bytes/1024/1024/1024
bytes        gb_1      gb_2              gb_3
18027051483  16.79 GB  16.7889999998733  16.7889999998733
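Putting the two fixes together, a hedged rewrite of the original query could look like this (filters and column names are taken from the question; adjust the Resource/MetricName matching to your environment):

AzureMetrics
| where TimeGenerated >= ago(1d)
| where Resource contains "LB"
| where MetricName == "ByteCount"
// keep only the most recent record per load balancer
| summarize arg_max(TimeGenerated, *) by Resource
// bytes -> GB; dividing by 1024 once only gets you to KB
| extend TotalGB = Total / exp2(30)
| project Resource, TimeGenerated, TotalGB, MetricName, UnitName
| sort by TotalGB desc
| render piechart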

Grafana azure log analytics transfer query from logs

I have this query that works in Azure logs when I set the scope to the specific Application Insights resource I want to use:
let usg_events = dynamic(["*"]);
let mainTable = union pageViews, customEvents, requests
| where timestamp > ago(1d)
| where isempty(operation_SyntheticSource)
| extend name = replace("\n", "", name)
| where '*' in (usg_events) or name in (usg_events)
;
let queryTable = mainTable;
let cohortedTable = queryTable
| extend dimension =tostring(client_CountryOrRegion)
| extend dimension = iif(isempty(dimension), "<undefined>", dimension)
| summarize hll = hll(user_Id) by tostring(dimension)
| extend Users = dcount_hll(hll)
| order by Users desc
| serialize rank = row_number()
| extend dimension = iff(rank > 5, 'Other', dimension)
| summarize merged = hll_merge(hll) by tostring(dimension)
| project ["Country or region"] = dimension, Counts = dcount_hll(merged);
cohortedTable
but trying to use the same query in Grafana just gives an error:
"'union' operator: Failed to resolve table expression named 'pageViews'"
This is the same error I get in Azure logs if I don't set the scope to the specific Application Insights resource. So my question is: how do I make Grafana target this specific scope inside the logs? The query just gets the countries of the users that log in.
As far as I know, there is currently no option/feature to set a scope in Grafana.
The scope is available only in the Azure Log Analytics workspace.
If you want this feature, please raise a ticket in the Grafana community, where all such issues are officially addressed.
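One possible workaround (an assumption on my part, not verified in Grafana): if your Application Insights resource is workspace-based, you can point Grafana's Azure Monitor data source at the underlying Log Analytics workspace and rewrite the query against the workspace tables, whose table and column names differ from the classic Application Insights ones. A minimal sketch of the first part of the query:

// assumed mapping for workspace-based Application Insights:
// pageViews -> AppPageViews, customEvents -> AppEvents, requests -> AppRequests,
// timestamp -> TimeGenerated, operation_SyntheticSource -> SyntheticSource,
// user_Id -> UserId, client_CountryOrRegion -> ClientCountryOrRegion
let mainTable = union AppPageViews, AppEvents, AppRequests
    | where TimeGenerated > ago(1d)
    | where isempty(SyntheticSource);
mainTable
| extend dimension = iif(isempty(ClientCountryOrRegion), "<undefined>", ClientCountryOrRegion)
| summarize Users = dcount(UserId) by dimension
| order by Users desc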

Azure Resource Graph Explorer - Query Azure VM descriptions, OS, sku - I need to join two columns (OS and sku in one)

I have an issue: I want to know how I can join two columns into one.
I want to join the "OS" and "sku" columns into a single column named "OS".
This is my KQL (a Kusto query on Azure Resource Graph):
Resources
| where type == "microsoft.compute/virtualmachines"
| extend OS = properties.storageProfile.imageReference.offer
| extend sku = properties.storageProfile.imageReference.sku
| project OS, sku, name, nic = (properties.networkProfile.networkInterfaces)
| mvexpand nic
| project OS, sku, name, nic_id = tostring(nic.id)
| join (
    Resources
    | where type == "microsoft.network/networkinterfaces"
    | project nic_id = tostring(id), properties
) on nic_id
| mvexpand ipconfig = (properties.ipConfigurations)
| extend subnet_resource_id = split(tostring(ipconfig.properties.subnet.id), '/'), ipAddress = ipconfig.properties.privateIPAddress
| order by name desc
| project vmName = (name), OS, sku, vnetName = subnet_resource_id[8], subnetName = subnet_resource_id[10], ipAddress
This is my result:
I need it like this:
Can anyone help me? Thanks so much.
I've tried to use the "union" operator, but I can't make it work.
I have used these reference links:
Azure Docs Link 1
Azure Docs Link 2
Azure Docs Link 3
If you want to combine two strings, you can use the strcat() function:
Resources
| where type == "microsoft.compute/virtualmachines"
| extend OS = properties.storageProfile.imageReference.offer
| extend sku = properties.storageProfile.imageReference.sku
| project OS, sku, name, nic = (properties.networkProfile.networkInterfaces)
| mvexpand nic
| project OS, sku, name, nic_id = tostring(nic.id)
| join (
    Resources
    | where type == "microsoft.network/networkinterfaces"
    | project nic_id = tostring(id), properties
) on nic_id
| mvexpand ipconfig = (properties.ipConfigurations)
| extend subnet_resource_id = split(tostring(ipconfig.properties.subnet.id), '/'), ipAddress = ipconfig.properties.privateIPAddress
| order by name desc
| project vmName=(name), OS = strcat(OS, ' ', sku), vnetName=subnet_resource_id[8], subnetName=subnet_resource_id[10], ipAddress

Excluding data in KQL SLA charts

We show SLA charts for URLs, VPNs and VMs. If there is planned scheduled maintenance, we want to exclude those timings from the KQL SLA charts, since it is known downtime.
We disable alerts via PowerShell during this window, and we pass the columns below to a Log Analytics custom table:
"resourcename": "$resourcename",
"Alertstate": "Enabled",
"Scheduledmaintenance" : "stop",
"Environment" : "UAT",
"timestamp": "$TimeStampField",
Now we want to join the SLA chart queries with the custom table data and exclude the scheduled maintenance time range from the SLA charts.
Adding the query as requested:
url_json_CL
| where Uri_s contains "xxxx"
| extend Availablity = iff(StatusCode_d == 200, 1.000, 0.000)
| extend urlhit = 1.000
| summarize PassCount = sum(Availablity), TestCount = sum(urlhit) by Uri_s, ClientName_s
| extend AVLPERCENTAGE = ((PassCount / TestCount) * 100)
| join kind=leftouter (
    scheduledmaintenance2_CL
    | where ResourceName_s == "VMname"
    | where ScheduledMaintenance_s == "start"
    | extend starttime = timestamp_t
) on ClientName_s
| join kind=leftouter (
    scheduledmaintenance2_CL
    | where ResourceName_s == "VMname"
    | where ScheduledMaintenance_s == "stop"
    | extend stoptime = timestamp_t
) on ClientName_s
| extend excludedtime = stoptime - starttime
| project ClientName_s, ResourceName_s, excludedtime, AVLPERCENTAGE, Uri_s
| top 3 by ClientName_s desc
You can perform cross-resource log queries in Azure Monitor.
Using the Application Insights explorer, we can query Log Analytics workspace custom tables as well:
workspace("/subscriptions/xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx/resourcegroups/rgname/providers/Microsoft.OperationalInsights/workspaces/workspacename").Event | count
Using the Log Analytics logs explorer, you can query the Application Insights availability results:
app("applicationinsightsinstancename").availabilityResults
You can use either of the above options to query the required tables and join them. Please refer to this documentation on joins.
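For illustration, a hedged sketch of how the maintenance window could be excluded once the tables are joined (column names are taken from the question; a client with multiple maintenance windows would need extra handling):

let maintenance = scheduledmaintenance2_CL
    | where ScheduledMaintenance_s == "start"
    | project ClientName_s, starttime = timestamp_t
    | join kind=inner (
        scheduledmaintenance2_CL
        | where ScheduledMaintenance_s == "stop"
        | project ClientName_s, stoptime = timestamp_t
    ) on ClientName_s
    | project ClientName_s, starttime, stoptime;
url_json_CL
| where Uri_s contains "xxxx"
| join kind=leftouter maintenance on ClientName_s
// drop raw records that fall inside a maintenance window
| where isnull(starttime) or TimeGenerated !between (starttime .. stoptime)
| extend Availablity = iff(StatusCode_d == 200, 1.000, 0.000)
| summarize PassCount = sum(Availablity), TestCount = count() by Uri_s, ClientName_s
| extend AVLPERCENTAGE = (PassCount / TestCount) * 100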
Additional documentation reference.
Hope this helps.
