Disk space unaccounted for on Azure managed MariaDB

I have a managed MariaDB database (10.3.23) on Azure.
Azure tells me that 2.4 TB of disk space is used, yet when I check the databases I don't reach that figure.
SELECT SUM(data_length/1024/1024/1024) AS data_length_gb,
       SUM(data_free/1024/1024/1024) AS data_free_gb
FROM information_schema.tables;
returns 840.374844471924 and 257.385742187500 respectively.
What am I missing?
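For comparison, a broader accounting query may help narrow the gap, since data_length does not include secondary indexes (and information_schema.tables says nothing about binary logs, the InnoDB system tablespace, or temporary files). A minimal sketch:

SELECT SUM(data_length)/1024/1024/1024  AS data_gb,
       SUM(index_length)/1024/1024/1024 AS index_gb,
       SUM(data_free)/1024/1024/1024    AS free_gb
FROM information_schema.tables;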

Related

Upscaling/downscaling provisioned RUs for Cosmos containers at a specific time

As mentioned in the Microsoft documentation, there is support for increasing/decreasing the provisioned RUs of Cosmos containers using the Cosmos DB Java SDK, but when I try to perform the steps I get the error below:
com.azure.cosmos.CosmosException: {"innerErrorMessage":"\"Operation 'PUT' on resource 'offers' is not allowed through Azure Cosmos DB endpoint. Please switch on such operations for your account, or perform this operation through Azure Resource Manager, Azure Portal, Azure CLI or Azure Powershell\"\r\nActivityId: 86fcecc8-5938-46b1-857f-9d57b7, Microsoft.Azure.Documents.Common/2.14.0, StatusCode: Forbidden","cosmosDiagnostics":{"userAgent":"azsdk-java-cosmos/4.28.0 MacOSX/10.16 JRE/1.8.0_301","activityId":"86fcecc8-5938-46b1-857f-9d57b74c6ffe","requestLatencyInMs":89,"requestStartTimeUTC":"2022-07-28T05:34:40.471Z","requestEndTimeUTC":"2022-07-28T05:34:40.560Z","responseStatisticsList":[],"supplementalResponseStatisticsList":[],"addressResolutionStatistics":{},"regionsContacted":[],"retryContext":{"statusAndSubStatusCodes":null,"retryCount":0,"retryLatency":0},"metadataDiagnosticsContext":{"metadataDiagnosticList":null},"serializationDiagnosticsContext":{"serializationDiagnosticsList":null},"gatewayStatistics":{"sessionToken":null,"operationType":"Replace","resourceType":"Offer","statusCode":403,"subStatusCode":0,"requestCharge":"0.0","requestTimeline":[{"eventName":"connectionAcquired","startTimeUTC":"2022-07-28T05:34:40.472Z","durationInMicroSec":1000},{"eventName":"connectionConfigured","startTimeUTC":"2022-07-28T05:34:40.473Z","durationInMicroSec":0},{"eventName":"requestSent","startTimeUTC":"2022-07-28T05:34:40.473Z","durationInMicroSec":5000},{"eventName":"transitTime","startTimeUTC":"2022-07-28T05:34:40.478Z","durationInMicroSec":60000},{"eventName":"received","startTimeUTC":"2022-07-28T05:34:40.538Z","durationInMicroSec":1000}],"partitionKeyRangeId":null},"systemInformation":{"usedMemory":"71913 KB","availableMemory":"3656471 KB","systemCpuLoad":"empty","availableProcessors":8},"clientCfgs":{"id":1,"machineId":"uuid:248bb21a-d1eb-46a5-a29e-1a2f503d1162","connectionMode":"DIRECT","numberOfClients":1,"connCfg":{"rntbd":"(cto:PT5S, nrto:PT5S, icto:PT0S, ieto:PT1H, mcpe:130, mrpc:30, cer:false)","gw":"(cps:1000, nrto:PT1M, icto:PT1M, p:false)","other":"(ed: true, cs: false)"},"consistencyCfg":"(consistency: Session, mm: true, prgns: [])"}}}
at com.azure.cosmos.BridgeInternal.createCosmosException(BridgeInternal.java:486)
at com.azure.cosmos.implementation.RxGatewayStoreModel.validateOrThrow(RxGatewayStoreModel.java:440)
at com.azure.cosmos.implementation.RxGatewayStoreModel.lambda$toDocumentServiceResponse$0(RxGatewayStoreModel.java:347)
at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:106)
at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:74)
at reactor.core.publisher.FluxPeek$PeekSubscriber.onNext(FluxPeek.java:200)
at reactor.core.publisher.FluxHandle$HandleSubscriber.onNext(FluxHandle.java:119)
The message says to switch on such operations for my account, but I could not find any page to do that. Can I use Azure Functions to do the same thing at a specific time?
Code snippet:
// Obtain a handle to the container whose throughput should be changed
CosmosAsyncContainer container = client.getDatabase("DatabaseName").getContainer("ContainerName");
// Read the current throughput properties (not used further below)
ThroughputProperties autoscaleContainerThroughput = container.readThroughput().block().getProperties();
// Replace the container's throughput with a new autoscale maximum RU/s
container.replaceThroughput(ThroughputProperties.createAutoscaledThroughput(newAutoscaleMaxThroughput)).block();
This is because disableKeyBasedMetadataWriteAccess is set to true on the account. You will need to contact either your subscription owner or someone with the DocumentDB Account Contributor role to modify the throughput using PowerShell or the Azure CLI (see the linked samples). You can also do this by redeploying the ARM template or Bicep file used to create the account (be sure to do a GET on the resource first so you don't accidentally change something).
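For illustration, a hedged Azure CLI sketch of the same throughput change done through the control plane (resource group, account, database, and container names below are placeholders, and the command requires a reasonably recent az version):

# Raise the autoscale maximum RU/s on the container via Azure Resource Manager
az cosmosdb sql container throughput update \
    --resource-group MyResourceGroup \
    --account-name MyCosmosAccount \
    --database-name DatabaseName \
    --name ContainerName \
    --max-throughput 8000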
If you are looking for a way to automatically scale resources up and down on a schedule, refer to this sample: Scale Azure Cosmos DB throughput by using an Azure Functions Timer trigger.
To learn more about the disableKeyBasedMetadataWriteAccess property and its impact on control-plane operations from the data-plane SDKs, see Preventing changes from the Azure Cosmos DB SDKs.

Unable to create storage pool on Azure VM (Windows Server 2016)

I have created 6 disks of 256 GB each on 2 Windows Server 2016 VMs. I need to implement an active-active SQL failover cluster on these 2 VMs using S2D.
I am getting an error while creating a storage pool for 3 of the disks; the error is below:
Cluster resource 'Cluster Pool 1' of type 'Storage Pool' in clustered role xxxxxx failed. The error code was '0x16' ('The device does not recognize the command.').
Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it. Check the resource and group state using Failover Cluster Manager or the Get-ClusterResource Windows PowerShell cmdlet
S2D is new in Windows Server 2016. You can check the prerequisites before you proceed with building your failover cluster. It is strongly recommended to validate the cluster first and then enable S2D, following Configure the Windows Failover Cluster with S2D.
This error appeared because I tried to create the storage pool again. Basically, Enable-ClusterStorageSpacesDirect had already created the pool for me; I didn't notice that and was trying to create the pool using Failover Cluster Manager.
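For reference, a minimal PowerShell sketch of the validate-then-enable flow described above (node and cluster names are placeholders):

# Validate the nodes, including the Storage Spaces Direct tests, before enabling S2D
Test-Cluster -Node "Node1", "Node2" -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"
# Enabling S2D creates the storage pool automatically; do not create another one in Failover Cluster Manager afterwards
Enable-ClusterStorageSpacesDirect -CimSession "Cluster01"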
To achieve an active-active solution, you should configure a host/VM per location. S2D does not work between two locations on Azure: it requires RDMA support for performance, which cannot be configured in Azure. So, to get HA for a SQL FCI, check StarWind vSAN Free, which can be configured between sites, replicating/mirroring storage: https://www.starwindsoftware.com/resource-library/installing-and-configuring-a-sql-server-failover-clustered-instance-on-microsoft-azure-virtual-machines
I see the following configuration: Storage Spaces provides disk redundancy by configuring mirror or parity for each VM, and StarWind distributes HA storage on top of the underlying Storage Spaces.

Unable to start Azure VM after size change

I have an Azure VM (Windows Server 2012 R2 with SQL Server).
Since I changed the size I cannot start the VM. When I try to start it I get the following error:
Provisioning state Provisioning failed. One or more errors occurred while preparing VM disks. See disk instance view for details.. DiskProcessingError
DISKS
MyVM_OsDisk_1_47aaea403b8948fb8d0e3ba0e81e2fas Provisioning failed. Requested operation cannot be performed because storage account type 'Premium_LRS' is not supported for VM size 'Standard_D2_v3'.. VMSizeDoesntSupportPremiumStorage
MyVM_disk2_ccc04be996a5471688d357bf6f955fab Provisioning failed. Requested operation cannot be performed because storage account type 'Premium_LRS' is not supported for VM size 'Standard_D2_v3'.. VMSizeDoesntSupportPremiumStorage
What is the problem and how can I solve it, please?
Thanks!
As the error details show, this is because Premium disks are not supported for the D2_v3 VM size.
Solution:
If you want to use Premium SSD disks for your VM, resize the VM to a DS-series, DSv2-series, GS-series, Ls-series, or Fs-series size.
If you don't mind using Standard HDD disks but want to keep the D2_v3 VM size, you can change the disk type to Standard (if your disks are managed):
Deallocate your VM > Disks > choose the disk > change the Account type to Standard > Save
Additionally, I assume that your disks are managed. If not, you'd better resize your VM rather than switch back to Standard disks.
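If the disks are managed, the same change can also be scripted; a hedged Azure CLI sketch (the resource group name is a placeholder, the disk names are taken from the error above):

# The VM must be deallocated before a disk's storage type can be changed
az vm deallocate --resource-group MyResourceGroup --name MyVM
# Switch each attached managed disk to Standard HDD storage
az disk update --resource-group MyResourceGroup --name MyVM_OsDisk_1_47aaea403b8948fb8d0e3ba0e81e2fas --sku Standard_LRS
az disk update --resource-group MyResourceGroup --name MyVM_disk2_ccc04be996a5471688d357bf6f955fab --sku Standard_LRS
# Start the VM again on the same D2_v3 size
az vm start --resource-group MyResourceGroup --name MyVM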

Dynamically created volumes from Kubernetes not being auto-deleted on Azure

I have a question about Kubernetes and the default reclaim behavior of dynamically provisioned volumes. The reclaim policy is "Delete" for dynamically created volumes in Azure, but after the persistent volume claim and persistent volume have been deleted using kubectl, the page blob for the VHD still exists and does not go away.
This is an issue because every time I restart the cluster I get a new 1 GiB page blob that I now have to pay for, and the old, unused one does not go away. The blobs show up as unleased in the portal and I am able to delete them manually in the storage account, but they will not delete themselves. According to "kubectl get pv" and "kubectl get pvc", they do not exist.
According to all the documentation I can find, they should go away upon deletion using kubectl:
http://blog.kubernetes.io/2016/10/dynamic-provisioning-and-storage-in-kubernetes.html
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming
Any help on this issue would be much appreciated.
EDIT: I have found that this issue appears only when you delete the persistent volume before you delete the persistent volume claim. I know that is not the intended behavior, but it should be fixed or at least throw an error.
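For what it's worth, a minimal sketch of the deletion order that lets the Delete reclaim policy clean up the backing VHD (the claim name is a placeholder):

# Delete the claim first; the dynamic provisioner then removes the bound PV and its page blob
kubectl delete pvc my-claim
# Confirm the PV is gone; only delete it by hand if it is stuck in a Released state
kubectl get pv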

Azure Disk Management

I just started using Azure virtual machines and I must admit I still have a few questions regarding disk management.
I manage my machines via the Node.js API in the following way:
azure vm create INSTANCE b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-12_10-amd64-server-20130227-en-us-30GB azureuser XXXXXX --ssh --location "West US" -t ./azure.pem
azure vm start INSTANCE
//do whatever
azure vm shutdown INSTANCE
azure vm delete INSTANCE
After deleting the instance I still have a bunch of disks left, which are not deleted but for which I am charged (i.e. deducted from my free trial). Are they not deleted by default?
Is there an API call to delete them? (I only found the corresponding REST calls, but I am somewhat unwilling to mix Node.js and REST API calls.)
Can I specify one of those existing disks when starting a new instance?
Thanks for your answers!
Jörg
After deleting the instance I still have a bunch of disks left, which are not deleted but for which I am charged (i.e. deducted from my free trial). Are they not deleted by default? Is there an API call to delete them? (I only found the corresponding REST calls, but I am somewhat unwilling to mix Node.js and REST API calls.)
Yes, the disks are not deleted by default. I believe the reason is so that those disks can be reused to spin up new VMs. To delete a disk (which is a page blob stored in Windows Azure Blob Storage) you could possibly use the Azure SDK for Node: https://github.com/WindowsAzure/azure-sdk-for-node.
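Alternatively, the same classic cross-platform CLI used above exposes disk commands; a hedged sketch, assuming an ASM-mode azure CLI (exact subcommands and flags vary by version, and the underlying page blob may need to be removed separately in the storage account):

azure vm disk list            # list the disks left behind after 'azure vm delete'
azure vm disk delete DISKNAME # remove the disk entry; check your CLI version for a blob-delete option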
Can I specify one of those existing disks when starting a new instance?
Yes, you can. For that you would need to find the disk image and then use the following command:
azure vm create myVM myImage myusername --location "West US"
Where "myImage" is the name of the image. For more details, please visit: http://www.windowsazure.com/en-us/develop/nodejs/how-to-guides/command-line-tools/#VMs
Yes, when a VM is deleted the disk is left behind. Within the portal you can apply this disk image to a new VM instance on creation. There's some specific guidance on creating VMs from the API with existing disk images here:
http://www.windowsazure.com/en-us/develop/nodejs/how-to-guides/command-line-tools/#VMs
