We have been on the Azure IaaS model for the last 3 years and are currently planning to spin up a VM in our production subscription to host SQL Server 2016 Enterprise Edition. We are comparing the E20ds_v4 and E20s_v3 sizes within the E series. The only differences I see are the temp storage and its throughput, plus a price difference of about $79/month; the rest of the specs are the same. Can someone please share your thoughts on the major difference between the "E20ds_v4" and "E20s_v3" VMs? What does "ds" stand for? For a production-scale OLTP workload, which would be the better choice?
The Es, Eas, Ds, and Das series offer the optimal memory-to-vCPU ratio required for OLTP workload performance, as you can read here. The Dsv3 and Esv3 series are hosted on general-purpose hardware with Intel Haswell or Broadwell processors. As for the naming, the "d" in a size such as E20ds_v4 indicates that the size includes a local temp disk, and the "s" indicates support for premium storage.
As a best practice for SQL Server VMs on Azure, use VM sizes with 4 or more vCPUs, such as E4s_v3 or higher, or DS12_v2 or higher.
The M series offers the highest memory-to-vCPU ratio required for mission-critical performance and is ideal for data warehouse workloads. It offers the highest vCPU count and memory for the largest SQL Server workloads and is hosted on memory-optimized hardware with the Skylake processor family.
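Whichever series you choose, once the VM is up you can sanity-check the vCPU count and memory-to-vCPU ratio that SQL Server actually sees. A minimal sketch using the standard sys.dm_os_sys_info DMV:

```sql
-- Report the vCPUs and memory visible to this SQL Server instance,
-- plus the resulting memory-to-vCPU ratio (GB per vCPU).
SELECT
    cpu_count                                           AS visible_vcpus,
    physical_memory_kb / 1024 / 1024                    AS physical_memory_gb,
    (physical_memory_kb / 1024.0 / 1024.0) / cpu_count  AS gb_per_vcpu
FROM sys.dm_os_sys_info;
```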
Use HammerDB to measure performance and scalability of each SQL VM option.
Use premium SSDs for the best price/performance. Configure ReadOnly cache for data files and no cache for the log file. Use Ultra Disks if the workload requires storage latencies below 1 ms. Premium file shares are recommended as shared storage for a SQL Server failover cluster instance; they do not support caching and offer limited performance compared to premium SSD disks. Standard storage is only recommended for development and test purposes or for backup files and should not be used for production workloads.
- Use a minimum of 2 premium SSD disks (1 for the log file and 1 for the data files).
- Enable read-only caching on the disk(s) hosting the data files.
- Stripe multiple Azure data disks to get increased storage throughput.
- Place TempDB on the local SSD D:\ drive for mission-critical SQL Server workloads (see the sketch below).
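For the TempDB recommendation above, the move to the local D:\ drive is done with a plain ALTER DATABASE and a service restart. A minimal sketch, assuming the default logical file names tempdev and templog and a hypothetical D:\TempDb folder; check sys.master_files for the actual file list on your instance:

```sql
-- Point tempdb's data and log files at the local (ephemeral) SSD drive.
-- Logical names below are the defaults; adjust to match sys.master_files.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'D:\TempDb\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'D:\TempDb\templog.ldf');
-- The new paths take effect after the SQL Server service restarts.
-- Because the D: drive is wiped on redeploy, make sure the D:\TempDb folder is
-- recreated before SQL Server starts (for example, via a startup task or the
-- SQL IaaS agent extension).
```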
Related
I am confused about the Azure SQL Database backup plan (short-term backup retention).
As far as I understand:
In the DTU purchasing model, there is no extra charge for backup storage; you only pay for the redundancy type (such as LRS or ZRS).
In the vCore purchasing model, you have to pay for backup storage.
Am I right?
Does that mean I will not have any backups if I do not subscribe to backup storage in vCore?
Further, in the Azure pricing calculator, under the vCore General Purpose option, there are two redundancy drop-down options (I am not talking about the long-term retention plan). What is the difference between them?
Thanks.
I will not have any backups if I do not subscribe to backup storage in vCore?
Yes, in vCore, if you do not allocate a storage account for backups, you will not be able to perform backup operations, either manually or automatically. If you believe you do not need backups, think again ;) Azure will maintain access to your database according to the standard SLAs, but the infrastructure will not give you a way to restore the state of your database to a point in time; only backups can do that for you. The backup storage cost is usually a very small component of your overall spend. Once a backup operation is complete you can download the backup for local storage and then clear the blob, making this aspect virtually cost free, but you will need a storage account to complete the backup process at all.
in the Azure pricing calculator, under the vCore General Purpose option, there are two redundancy drop-down options
Are you referring to the Compute Redundancy:
Zone redundancy for Azure SQL Database general purpose tier
The zone-redundant configuration utilizes Azure Availability Zones to replicate databases across multiple physical locations within an Azure region. By selecting zone redundancy, you can make your serverless and provisioned General Purpose single databases and elastic pools resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes to the application logic. This configuration offers a 99.995% availability SLA and RPO = 0. For more information see general purpose service tier zone redundant availability.
In the other tiers, these redundancy modes are referred to as LRS (Locally Redundant) and ZRS (Zone Redundant). Think of this as your choice of what happens when your data centre is affected by some sort of geological or political event that takes the server cluster, pod, or whole data centre offline.
Locally Redundant storage offers redundancy only within a geographically local area (often the same physical site). In general this protects against local hardware failures, but not usually against scenarios that take the whole data centre offline. This is the minimum level of redundancy that Azure requires for its hardware management and maintenance plans.
Zone Redundant offers redundancy across multiple geographically independent zones but still within the same Azure Region. Each Azure availability zone is an individual physical location with its own independent networking, power, and cooling. ZRS provides a minimum of 99.9999999999% durability for objects during a given year.
There is a third type of redundancy offered in higher tiers: Geo-Redundant Storage (GRS). This has the same Zone level redundancy but configures additional replicas in other Azure regions around the world.
In the case of Azure SQL DB, these terms for compute (so the actual server and CPU) have almost identical implications to those for storage redundancy. With regard to the available options, the pricing calculator is pretty well documented for everything else; use the info tips for quick information and go to the reference pages for the extended detail.
The specifics are listed here: Azure Storage redundancy. Redundancy in Azure is achieved via replication, which means that an entire workable and usable version of your database is maintained so that in the event of a failure, the replica takes the load.
A special feature of replication is that you can actively utilise the replicated instance for read-only workloads, which gives us as developers and architects some interesting performance opportunities: complex reporting and analytic workloads can be moved away from the transactional data manipulation out of the box, something that traditionally required non-trivial configuration (see the connection sketch below).
The RA prefix on redundancy options is an acronym for Read Access.
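On tiers where a readable secondary replica is available, routing a session to it is just a connection-string choice, and you can confirm where you landed from T-SQL. A minimal sketch with placeholder server and database names:

```sql
-- Connection string side (placeholder names):
--   Server=tcp:myserver.database.windows.net;Database=mydb;ApplicationIntent=ReadOnly;...
-- Once connected, confirm whether the session is on a read-only replica:
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability') AS updateability;  -- READ_ONLY on the replica
```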
Recently we upgraded our SSAS resources. Currently our SSAS is on an Azure VM, costing us based on the VM type 'Standard E32-8s_v3'.
I am looking for a way to save more cost by selecting a better option.
What would be a good option to save cost and at the same time have better efficiency?
What factors/differences should be considered if we go to Azure Analysis Services instead of SSAS on an Azure VM?
Our SQL Server is also on an Azure VM.
We have our reports on Power BI Report Server and SSRS.
Data comes from different sources such as SAP and external parties, loaded using SSIS.
Can you please advise/suggest a better option for our data architecture?
Thank you.
Your VM has 8 cores and 256 GB of RAM.
One factor in pricing you haven’t mentioned is SQL licensing. You didn’t specify whether you are renting the SQL license with the VM or bringing your own license and what that costs. I’m going to assume you are renting it. With Azure Analysis Services the license is included in the price.
In Azure Analysis Services, 100 QPU is roughly equivalent to 5 cores, so your 8 cores translate to roughly 160 QPU. An S2 (200 QPU) would therefore be an equivalent amount of CPU at a similar price, but it only has 50 GB RAM.
To get an equivalent amount of RAM, the S8 would get you close (200 GB RAM), but at substantially more cost.
If you have one large model which (at least during peak usage or processing) uses most of the 256GB RAM then it may be tough to move to Azure Analysis Services for a similar price. If you have several models on that one server then you could split them across several smaller Azure Analysis Services servers and it may be a reasonable price for you. Or you could scale up for processing when RAM is needed most and scale down for the rest of the day to save cost.
I'm trying to get my Azure VM resource group to use only 100 GiB of outgoing bandwidth in a given time frame. Is there any way inside the Azure portal to set these limits?
Thanks!
There is no way to set these limits in the Azure portal, but you can select VM sizes. Basically, larger virtual machines are allocated relatively more bandwidth than smaller virtual machines. The Azure portal offers VM type options such as Compute optimized, Memory optimized, Storage optimized, GPU, and High performance compute.
Expected outbound throughput and the number of network interfaces supported by each VM size are detailed in Azure Windows and Linux VM sizes. Select a type, such as General purpose, then select a size series on the resulting page, such as the Dv2-series. Each series has a table with networking specifications in the last column, titled Max NICs / Expected network performance (Mbps).
The throughput limit applies to the virtual machine. Throughput is unaffected by the number of network interfaces.
Refer to the document: Virtual machine network bandwidth
We have a VM set up to run SQL Server in Azure. We are seeing disk writes of only about 0.6 MB/s with write-through during testing. We have tried numerous things, from changing Azure VM types (D series, L series, etc.) to creating different RAID-based disks. Is there a limit in Azure so that non-cached disks can only reach a certain rate, below the advertised 500 MB/s? Any help to improve the write-through rate?
In order to prevent blocking issues and to improve IO performance, we need to:
1. Prevent VM-level throttling at all costs.
2. Prevent disk-level throttling if the application has dependent blocking issues due to its software design. Adding more disks to create a storage pool may help.
For more information about Azure VM storage performance and throttling, refer to:
Azure VM Storage Performance and Throttling
Since you are using SQL Server, here is an article about how to optimize SQL Server performance in an Azure virtual machine: Performance best practices for SQL Server in Azure Virtual Machines
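Before changing VM or disk types again, it is worth confirming what SQL Server itself sees for write volume and latency on each file. A minimal sketch using the standard sys.dm_io_virtual_file_stats DMV (figures are cumulative since the last instance restart):

```sql
-- Per-file write volume and average write latency since instance startup.
SELECT
    DB_NAME(vfs.database_id)                                  AS database_name,
    mf.physical_name,
    vfs.num_of_writes,
    vfs.num_of_bytes_written / 1024 / 1024                    AS mb_written,
    CASE WHEN vfs.num_of_writes = 0 THEN 0
         ELSE vfs.io_stall_write_ms / vfs.num_of_writes END   AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY avg_write_latency_ms DESC;
```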
The footnotes for the Standard_D15_v2, Standard_G5, and Standard_L32s Azure instance types in the official documentation say: "Instance is isolated to hardware dedicated to a single customer".
Can these be considered the equivalent of AWS Dedicated Instances?
Yes, as per this link:
Announcing: New Dv2-series virtual machine size
A larger virtual machine size has joined the Dv2-series. This new size is the Standard_D15_v2 with the following specifications: 20 CPU cores, 140 GB of memory, 1,000 GB of temporary solid-state drive (SSD), up to 8 virtual network interface cards (NICs), up to 40 data disks, and very high network bandwidth.
Each Standard_D15_v2 instance is isolated to hardware dedicated to a single customer, to provide a high degree of isolation from other customers. This addition and the Standard_G5 are the two available sizes that are on hardware dedicated to a single customer. The Standard_D15_v2 is available in all locations that support the Dv2-series, as described on the Azure services by region page. This size is available for virtual machines that use the Azure Resource Manager deployment model and custom OS images published by Canonical, CoreOS, OpenLogic, Oracle, Puppet Labs, Red Hat, SUSE, and Microsoft.