We have a VM set up to run SQL Server in Azure. During testing we are seeing disk writes of roughly 0.6 MBps with WRITE THROUGH. We have tried numerous things, from changing the Azure VM type (D series, L series, etc.) to creating different RAID-based disks. Is there a limit in Azure where non-cached disks can only reach a certain rate, lower than the advertised 500 MBps? Any help to improve the WRITE THROUGH rate?
In order to prevent blocking issues and improve I/O performance, we need to:
1. Prevent VM-level throttling at all costs.
2. Prevent disk-level throttling if the application has dependent blocking issues due to its software design. Adding more disks to create a storage pool may help.
For more information about Azure VM storage performance and throttling, we can refer to:
Azure VM Storage Performance and Throttling
Since you are using SQL Server, here is an article about how to optimize SQL Server performance in a Microsoft Azure virtual machine: Performance best practices for SQL Server in Azure Virtual Machines
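If it helps to separate what the application is doing from what the disk itself can sustain, below is a rough, illustrative probe (not from the linked articles) that times synchronous writes against the data disk. The path, block size and data volume are placeholders, and calling fsync() after every write only approximates SQL Server's write-through behaviour, so treat the output as a ballpark for comparing VM sizes and cache settings rather than an absolute number.

```python
"""Rough write-through throughput probe (illustrative sketch only)."""
import os
import time

TEST_FILE = r"F:\iotest\writethrough.bin"   # placeholder path on the uncached data disk
BLOCK_SIZE = 64 * 1024                      # 64 KiB writes, similar in size to a log flush
TOTAL_MB = 256                              # total amount of data to write

block = os.urandom(BLOCK_SIZE)
blocks = (TOTAL_MB * 1024 * 1024) // BLOCK_SIZE

# O_BINARY is needed on Windows so the bytes are written unmodified; it is 0 elsewhere.
flags = os.O_WRONLY | os.O_CREAT | os.O_TRUNC | getattr(os, "O_BINARY", 0)
fd = os.open(TEST_FILE, flags)
start = time.perf_counter()
try:
    for _ in range(blocks):
        os.write(fd, block)
        os.fsync(fd)          # force each write down to stable storage
finally:
    os.close(fd)

elapsed = time.perf_counter() - start
print(f"{TOTAL_MB} MiB in {elapsed:.1f}s -> {TOTAL_MB / elapsed:.1f} MiB/s")
```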
Related
We recently upgraded our SSAS resources. Currently our SSAS runs on an Azure VM, and we are billed based on the VM type 'Standard E32-8s_v3'.
I am looking for a way to save more cost by selecting a better option.
What would be a good option to save cost and at the same time gain better efficiency?
What factors/differences should be considered if we move to Azure Analysis Services instead of SSAS on an Azure VM?
Our SQL Server is also on an Azure VM.
We have our reports on Power BI Report Server and SSRS.
Data comes from different sources such as SAP and external parties, using SSIS.
Can you please advise/suggest better options for our data architecture?
Thank you.
Your VM has 8 cores and 256 GB RAM.
One factor in pricing you haven’t mentioned is SQL licensing. You didn’t specify whether you are renting the SQL license with the VM or bringing your own license and what that costs. I’m going to assume you are renting it. With Azure Analysis Services the license is included in the price.
In Azure Analysis Services, 100 QPU is roughly equivalent to 5 cores. So 200 QPU (an S2) would be an equivalent amount of CPU at a similar price, but it only has 50 GB RAM.
To get an equivalent amount of RAM, the S8 would get you close (200 GB RAM), but at substantially more cost.
If you have one large model which (at least during peak usage or processing) uses most of the 256 GB RAM, then it may be tough to move to Azure Analysis Services at a similar price. If you have several models on that one server, then you could split them across several smaller Azure Analysis Services servers, and that may work out to a reasonable price for you. Or you could scale up for processing when RAM is needed most and scale down for the rest of the day to save cost.
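As a back-of-the-envelope check, the arithmetic above can be put into a few lines. The 100 QPU to 5 cores ratio and the tier figures are only the rough numbers quoted in this answer, so verify them against the current Azure Analysis Services pricing page before deciding.

```python
"""Back-of-the-envelope Azure AS sizing check using the figures quoted in this answer."""
VM_CORES = 8          # Standard E32-8s_v3
VM_RAM_GB = 256

QPU_PER_CORE = 100 / 5   # rule of thumb from the answer: 100 QPU ~ 5 cores

needed_qpu = VM_CORES * QPU_PER_CORE
print(f"CPU-equivalent target: ~{needed_qpu:.0f} QPU")

# Tiers mentioned above (RAM is usually the deciding factor for large models).
tiers = {
    "S2": {"qpu": 200, "ram_gb": 50},
    "S8": {"qpu": None, "ram_gb": 200},   # QPU not quoted in the answer; RAM ~200 GB
}
for name, spec in tiers.items():
    cpu_ok = spec["qpu"] is None or spec["qpu"] >= needed_qpu
    ram_ok = spec["ram_gb"] >= VM_RAM_GB
    print(f"{name}: CPU ok={cpu_ok}, RAM ok={ram_ok} "
          f"({spec['ram_gb']} GB vs {VM_RAM_GB} GB on the VM)")
```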
I am trying to accomplish the task described below.
What I am doing:
All my users are on premises.
The application is hosted on an Azure VM (IaaS).
Question =>
The Azure cloud application talks to the Internet, downloads huge packages and shares them with clients that are on-premises. So I am trying to understand the risk and latency matrix between on-premises users and the Azure cloud application.
Has anyone done this sort of thing and encountered latency issues, and what would the possible fixes be?
Note => I can't migrate users to the Azure cloud as of now.
To address latency issues, please try the following:
1. To reduce the latency between the on-premises clients and the Azure cloud application, make use of Azure HPC Cache. Azure HPC Cache reduces latency for applications where data may be tethered to existing infrastructure because of dataset sizes and operational scale, and it automatically caches active data that is present both on-premises and in Azure.
2. Make use of Accelerated Networking, which reduces latency and jitter on the VM's network path.
3. Try to eliminate network congestion.
4. Try to reduce the number of network nodes that traffic has to traverse from one stage to another.
5. Make use of Azure ExpressRoute and Azure Analysis Services to reduce network latency. Azure ExpressRoute creates a private connection between on-premises sources and Azure, while Azure Analysis Services avoids the need for an on-premises data gateway and generally eliminates network latency concerns.
For more detail, please refer to the links below:
https://azure.microsoft.com/en-us/blog/azure-hpc-cache-reducing-latency-between-azure-and-on-premises-storage/
https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-intro/
https://viniciusdeschamps.com.br/3-ways-to-reduce-network-latency-in-azure/#how-can-I-measure-network-latency
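Whichever of these you try, it is worth measuring the actual on-premises-to-Azure round trip before and after the change. Below is a minimal, illustrative probe (the host and port are placeholders, not from the question) that uses TCP connect time as a rough proxy for network latency to the application:

```python
"""Quick latency probe from an on-premises client to the Azure-hosted application."""
import socket
import statistics
import time

HOST = "myapp.example.azure.com"   # placeholder endpoint of the Azure application
PORT = 443
SAMPLES = 20

rtts = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with socket.create_connection((HOST, PORT), timeout=5):
        pass                                   # connect only, then close
    rtts.append((time.perf_counter() - start) * 1000)
    time.sleep(0.5)

print(f"min {min(rtts):.1f} ms, median {statistics.median(rtts):.1f} ms, "
      f"max {max(rtts):.1f} ms over {SAMPLES} connects")
```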
I'm currently looking into a high-availability approach for a file server within Azure, for which I will need to deploy VMs. The data on the file server will be constantly changing. From what I have read so far, I will need at least 2 VMs grouped together into an availability set, along with creating a cloud service. Although this addresses the application and server aspect, what about the storage and the data on it?
I understand that I can't attach a single disk to multiple VMs, so I'm a bit lost on how to proceed. Any thoughts or ideas on how to move forward with this?
In short, I have a VM with a data disk directly attached to it, and I'm looking to provide high availability in the event that the VM goes offline, whether through an outage, host patching, hardware maintenance, etc.
Have a look into Azure Blob Storage - don't worry about disks etc, just let the Azure fabric handle the data redundancy and scalability for you!
Here's an "all you need" introduction to Windows Azure Storage:
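As a rough illustration of that approach, here is a minimal sketch using the current azure-storage-blob Python SDK (newer than the introduction linked above). The connection string, container and blob names are placeholders, and the redundancy level (LRS/ZRS/GRS) is configured on the storage account rather than in code:

```python
"""Minimal sketch: store file data in Azure Blob Storage instead of a VM-attached disk."""
from azure.storage.blob import BlobServiceClient

CONN_STR = "<storage-account-connection-string>"   # placeholder
service = BlobServiceClient.from_connection_string(CONN_STR)
blob = service.get_blob_client(container="fileshare", blob="reports/latest.bin")

# Upload the file (overwriting any existing copy)...
with open("latest.bin", "rb") as f:
    blob.upload_blob(f, overwrite=True)

# ...and download it again from any VM, with no shared disk required.
data = blob.download_blob().readall()
print(f"downloaded {len(data)} bytes")
```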
I need to make sure the availability of my database is high. Working with SQL Azure does not make that clear.
Is there a way to run multiple servers under SQL Azure (where one will take over if another server fails)? Beyond that, is there something equivalent to increasing memory on the DB server to speed up database processing?
Read High Availability in the Intro to Azure SQL, and then read Business Continuity in Windows Azure SQL Database. To summarize:
Data durability and fault tolerance is enhanced by maintaining multiple copies of all data in different physical nodes located across fully independent physical sub-systems such as server racks and network routers. At any one time, Windows Azure SQL Database keeps three replicas of data running: one primary replica and two secondary replicas.
Right now there is no way to specify the hardware configuration for SQL Azure databases. It's totally out of your control, and from a SaaS perspective that makes sense. The backend management services are responsible for making sure you get the best performance possible.
If you need dedicated and reserved hardware for your SQL deployment, you may take a look at the IaaS offerings in Azure and start a VM with SQL installed; however, you need to make sure you know the main differences between an IaaS and a PaaS offering.
I do not know what your high availability requirements are, but you should look at the SLAs provided by Microsoft. SQL Database offers 99.9% monthly availability.
How do I see if an SQL Azure database is being throttled?
I want to see data like: what percentage of time it was throttled, the count of throttles, the top reasons of throttles.
See https://stackoverflow.com/questions/2711868/azure-performance/13091125#13091125
Throttling is the least of your troubles. If you need performance, you would be best served by building your own DB servers using VM roles. I found that the performance of these is vastly improved over SQL Azure. For fault tolerance you can provision a primary and a failover in a different VM, in a different region if necessary. Make sure that the DB resides on the local drive.
I don't believe that information is currently available. However, the team does share reasons why you could be throttled and how to handle it (see here).
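For newer Azure SQL databases (this DMV did not exist when the thread was written), one way to see how close you are to the limits that trigger throttling is to sample sys.dm_db_resource_stats, which reports CPU, data I/O, log write and memory utilisation as a percentage of the service tier's limit in 15-second intervals for roughly the last hour. Sustained values near 100% in any column are the usual precursor to throttling. A minimal sketch with a placeholder connection string, assuming the pyodbc package and an ODBC driver for SQL Server are installed:

```python
"""Sketch: sample Azure SQL resource utilisation via sys.dm_db_resource_stats."""
import pyodbc

CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;"   # placeholder server
    "Database=mydb;Uid=myuser;Pwd=<password>;Encrypt=yes;"
)

query = """
SELECT end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent,
       avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
"""

with pyodbc.connect(CONN_STR) as conn:
    # Most recent ten 15-second samples (~2.5 minutes of history).
    for row in conn.execute(query).fetchmany(10):
        print(row.end_time, row.avg_cpu_percent, row.avg_data_io_percent,
              row.avg_log_write_percent, row.avg_memory_usage_percent)
```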