AWS Dedicated instance equivalent in Azure

The footnotes for the Standard_D15_v2, Standard_G5, and Standard_L32s Azure instance types in the official documentation say: "Instance is isolated to hardware dedicated to a single customer".
Can these be considered the equivalent of AWS Dedicated Instances?

Yes, as per this link:
Announcing: New Dv2-series virtual machine size
A larger virtual machine size has joined the Dv2-series. This new size is the Standard_D15_v2 with the following specifications: 20 CPU cores, 140 GB of memory, 1,000 GB of temporary solid-state drive (SSD), up to 8 virtual network interface cards (NICs), up to 40 data disks, and very high network bandwidth.
Each Standard_D15_v2 instance is isolated to hardware dedicated to a single customer, to provide a high degree of isolation from other customers. This addition and the Standard_G5 are the two available sizes that are on hardware dedicated to a single customer. The Standard_D15_v2 is available in all locations that support the Dv2-series, as described on the Azure services by region page. This size is available for virtual machines that use the Azure Resource Manager deployment model and custom OS images published by Canonical, CoreOS, OpenLogic, Oracle, Puppet Labs, Red Hat, SUSE, and Microsoft.
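Not part of the original answer, but as a rough illustration: a minimal Python sketch, assuming the azure-identity and azure-mgmt-compute packages and placeholder subscription/region values, that checks whether the isolated sizes named in the documentation are offered in a given region.

```python
# Minimal sketch: check availability of isolated (single-tenant) VM sizes in a region.
# Assumes azure-identity and azure-mgmt-compute are installed and credentials are configured.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"   # placeholder
REGION = "westus2"                           # placeholder region

# Sizes documented as "isolated to hardware dedicated to a single customer"
# (per the documentation footnotes quoted above).
ISOLATED_SIZES = {"Standard_D15_v2", "Standard_G5", "Standard_L32s"}

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
available = {size.name for size in client.virtual_machine_sizes.list(location=REGION)}

for size in sorted(ISOLATED_SIZES):
    status = "available" if size in available else "not available"
    print(f"{size}: {status} in {REGION}")
```

The hard-coded size list simply mirrors the documentation quoted above; it is not something the API reports.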

Related

Azure E series (E20S_V3 and E20DS_V4)

We have been on the Azure IaaS model for the last 3 years and are currently planning to spin up a VM in our production subscription to host SQL Server 2016 Enterprise Edition. We are comparing the E20ds_v4 and E20s_v3 sizes within the E series. The only differences I see are the temp storage and its throughput, plus a $79/month price difference; all other specs are the same. Can someone please share your thoughts on the major difference between the "E20ds_v4" and "E20s_v3" VMs? What does "ds" stand for? For a production-scale OLTP workload, which would be the better choice?
The Es, Eas, Ds, and Das series offer the optimum memory-to-vCPU ratio required for OLTP workload performance, as you can read here. The Dsv3- and Esv3-series are hosted on general purpose hardware with Intel Haswell or Broadwell processors. In the v4 naming convention, the "d" indicates the size includes a local temp (ephemeral) disk and the "s" indicates it supports premium storage.
Use VM sizes with 4 or more vCPUs, such as E4s_v3 or higher, or DS12_v2 or higher, as a best practice for SQL Server VMs on Azure.
The M series offers the highest memory-to-vCPU ratio required for mission-critical performance and is ideal for data warehouse workloads. The M-series offers the highest vCPU count and memory for the largest SQL Server workloads and is hosted on memory-optimized hardware with the Skylake processor family.
Use HammerDB to measure performance and scalability of each SQL VM option.
Use premium SSDs for the best price/performance advantages. Configure ReadOnly cache for data files and no cache for the log file. Use Ultra Disks if the workload requires storage latencies of less than 1 ms. Premium file shares are recommended as shared storage for a SQL Server failover cluster instance; they do not support caching and offer limited performance compared to premium SSD disks. Standard storage is only recommended for development and test purposes or for backup files and should not be used for production workloads. Use a minimum of 2 premium SSD disks (1 for the log file and 1 for data files). Enable read-only caching on the disk(s) hosting the data files. Stripe multiple Azure data disks to get increased storage throughput. Place tempdb on the local SSD D:\ drive for mission-critical SQL Server workloads.
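To make the disk layout concrete, here is a hedged Python sketch (my addition, not from the original answer) that attaches two premium SSD data disks to an existing SQL Server VM, with ReadOnly caching on the data-file disk and no caching on the log-file disk. It assumes azure-identity, azure-mgmt-compute, and two pre-created managed disks; all names, LUNs, and IDs are placeholders.

```python
# Sketch: attach two premium SSD data disks to an existing VM with the caching
# settings recommended above (ReadOnly for data files, None for the log file).
# Assumes azure-identity / azure-mgmt-compute and two pre-created managed disks.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import DataDisk, ManagedDiskParameters

SUBSCRIPTION_ID = "<your-subscription-id>"   # placeholder
RESOURCE_GROUP = "sql-prod-rg"               # placeholder
VM_NAME = "sqlvm01"                          # placeholder

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
vm = client.virtual_machines.get(RESOURCE_GROUP, VM_NAME)

def disk_id(name):
    # Builds the resource ID of a pre-created managed disk (placeholder names).
    return (f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}"
            f"/providers/Microsoft.Compute/disks/{name}")

vm.storage_profile.data_disks.append(DataDisk(
    lun=0, create_option="Attach", caching="ReadOnly",   # data files: ReadOnly cache
    managed_disk=ManagedDiskParameters(id=disk_id("sqlvm01-data"))))
vm.storage_profile.data_disks.append(DataDisk(
    lun=1, create_option="Attach", caching="None",       # log file: no cache
    managed_disk=ManagedDiskParameters(id=disk_id("sqlvm01-log"))))

client.virtual_machines.begin_create_or_update(RESOURCE_GROUP, VM_NAME, vm).result()
```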

Are you able to set limits on the network bandwidth with Azure VMs?

I'm trying to get my Azure VM resource group to only use 100 GiB of outgoing bandwidth in a given time frame. Is there any way in the Azure portal to set these limits?
Thanks!
There is no way to set these limits in the Azure portal, but you can select VM sizes. Basically, larger virtual machines are allocated relatively more bandwidth than smaller virtual machines. The Azure portal offers VM types such as Compute optimized, Memory optimized, Storage optimized, GPU, and High performance compute.
Expected outbound throughput and the number of network interfaces supported by each VM size is detailed in Azure Windows and Linux VM sizes. Select a type, such as General purpose, then select a size-series on the resulting page, such as the Dv2-series. Each series has a table with networking specifications in the last column, titled Max NICs / Expected network performance (Mbps).
The throughput limit applies to the virtual machine. Throughput is unaffected by the number of network interfaces.
Refer to the document: Virtual machine network bandwidth
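Since the portal cannot enforce an egress cap, one workaround (my suggestion, not from the original answer) is to track the VM's outbound bytes and react when they approach the 100 GiB budget. A minimal sketch, assuming the azure-monitor-query and azure-identity packages and a placeholder resource ID:

```python
# Sketch: sum a VM's outbound network bytes over the last 7 days and compare to a budget.
# Assumes azure-monitor-query / azure-identity; the resource ID is a placeholder.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

VM_RESOURCE_ID = ("/subscriptions/<sub-id>/resourceGroups/<rg>"
                  "/providers/Microsoft.Compute/virtualMachines/<vm-name>")  # placeholder
BUDGET_BYTES = 100 * 1024 ** 3  # 100 GiB

client = MetricsQueryClient(DefaultAzureCredential())
result = client.query_resource(
    VM_RESOURCE_ID,
    metric_names=["Network Out Total"],
    timespan=timedelta(days=7),
    granularity=timedelta(hours=1),
    aggregations=[MetricAggregationType.TOTAL],
)

total_out = sum(point.total or 0
                for metric in result.metrics
                for ts in metric.timeseries
                for point in ts.data)
print(f"Outbound last 7 days: {total_out / 1024 ** 3:.1f} GiB "
      f"({'over' if total_out > BUDGET_BYTES else 'within'} budget)")
```

This only observes usage; it does not throttle traffic, which Azure does not expose as a per-VM setting.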

attaching more than two virtual disks to a virtual machine in Azure

I'm installing OSIsoft on a single Windows Server 2008 VM in Azure, and part of the instructions recommends having 4 drives for each application. However, Azure will only allow 2 disks to be attached to a VM. What alternatives do I have?
You need to use data disks and choose an appropriate VM size; the VM size determines the number of data disks that can be attached.
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-windows-sizes/
Tutorial:
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-windows-classic-attach-disk/
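As a small, hedged illustration of the point above (my addition), this Python sketch lists VM sizes in a region that support at least four data disks, using the max_data_disk_count field returned by the compute SDK; subscription ID and region are placeholders.

```python
# Sketch: list VM sizes in a region that can attach at least 4 data disks.
# Assumes azure-identity and azure-mgmt-compute are installed.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder
REGION = "eastus"                           # placeholder

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
for size in client.virtual_machine_sizes.list(location=REGION):
    if size.max_data_disk_count >= 4:
        print(f"{size.name}: {size.number_of_cores} cores, "
              f"{size.memory_in_mb} MB RAM, {size.max_data_disk_count} data disks")
```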

Azure: How to add >1TB disks to a virtual machine without changing the size of VM

I see there are some limitations on Azure:
1. On the number of disks that can be attached to a VM;
2. The size of each disk/storage blob is limited to 1 TB.
Is there any hack or workaround to attach larger disks, or several disks, to the same VM without increasing the processing power of the VM? My application doesn't need high computing capacity, but it needs plenty of space.
Maybe it's possible by contacting their billing department?
Currently I'm using an A1 Standard VM instance with 2 disks (2 TB in total) attached to it already. The goal is to attach 5 TB of total disk space to the same VM without upgrading the VM size to a larger instance.
You will need to change your VM size to attach more disks. One option is to look at the Basic tier instead of the Standard tier A-series VMs to optimize your cost. Since you do not need a lot of computing power, Basic tier VMs may work fine for you. You will want to look at Basic A3, which allows you to attach a maximum of 8 data disks of 1 TB each. See more information here (https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-size-specs/).
Thanks,
Aung
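For completeness, here is a hedged sketch (mine, using the current Python SDK rather than the classic-portal flow of the time) of resizing an existing VM to a size with more data-disk slots; names and the target size are placeholders, and the VM may need to be deallocated first if the target size is not available on its current hardware cluster.

```python
# Sketch: resize an existing VM to a size that supports more data disks.
# Assumes azure-identity / azure-mgmt-compute; names and target size are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder
RESOURCE_GROUP = "my-rg"                    # placeholder
VM_NAME = "myvm"                            # placeholder
TARGET_SIZE = "Basic_A3"                    # placeholder: pick a size with enough data-disk slots

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
vm = client.virtual_machines.get(RESOURCE_GROUP, VM_NAME)
vm.hardware_profile.vm_size = TARGET_SIZE   # change only the size, keep everything else
client.virtual_machines.begin_create_or_update(RESOURCE_GROUP, VM_NAME, vm).result()
```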
I found a solution: attach 5 TB of storage using the Azure Files service.
It's possible by creating file shares through the Azure Portal, then mounting them under Linux via CIFS (SMB 3.0).
For those who are interested, there is an issue with mounting SMB file shares within CentOS 6.x under Azure; it works only with CentOS 7.x (keep that in mind).
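To illustrate the mount step, here is a small Python sketch (my addition) that builds and runs the standard CIFS mount command for an Azure file share on a Linux VM. It assumes cifs-utils is installed and root privileges; the storage account, share name, and key are placeholders.

```python
# Sketch: mount an Azure file share over SMB 3.0 (CIFS) on a Linux VM.
# Assumes cifs-utils is installed and the script runs as root; all names/keys are placeholders.
import subprocess

STORAGE_ACCOUNT = "<storage-account>"   # placeholder
SHARE_NAME = "<share-name>"             # placeholder
ACCOUNT_KEY = "<storage-account-key>"   # placeholder
MOUNT_POINT = "/mnt/azurefiles"

options = (f"vers=3.0,username={STORAGE_ACCOUNT},password={ACCOUNT_KEY},"
           "dir_mode=0777,file_mode=0777,serverino")
subprocess.run(
    ["mount", "-t", "cifs",
     f"//{STORAGE_ACCOUNT}.file.core.windows.net/{SHARE_NAME}",
     MOUNT_POINT, "-o", options],
    check=True)
```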
You can use Storage Spaces in Azure to increase capacity and performance. The limit is 1 TB per VHD; using Storage Spaces you can get past this limitation. Keep in mind that there is a limit on the number of disks that can be attached to the VM, based on the size you choose.
Sample explanation on:
https://blogs.msdn.microsoft.com/dfurman/2014/04/27/using-storage-spaces-on-an-azure-vm-cluster-for-sql-server-storage/

How to create a Windows virtual machine with 16 GB of RAM

I am totally new to cloud services, and using Windows Azure, I need a web server and a database server, each with 16 GB of RAM. However, the Extra Large Windows virtual machines only have 14 GB of RAM. How would I go about adding 2 GB of RAM to each of these servers, or do I need to do something else, such as incorporating a SQL database? I don't need to know the specifics of installation; all I need to know right now is what needs to be paid for, as I am just trying to figure out the price for everything. Thank you.
The Extra Large (XL) VM size provides 14 GB of available RAM. This applies to both Virtual Machines (IaaS) and web/worker roles (PaaS). There are no other VM sizes that provide more RAM than that. There's nothing you can do to add 2 extra GB.
UPDATE April 16, 2013: There are now two new sizes, 28 GB/4-core and 56 GB/8-core, available to Virtual Machines (not for Cloud Services, e.g. web and worker roles). Announcement here. There's also a new SharePoint template in the Virtual Machine image gallery (since you mentioned using SharePoint), as well as a SQL Server template.
UPDATE April 30, 2013: The new 28 GB/56 GB sizes are now available with Cloud Services, coincident with the release of Azure SDK 2.0 for .NET. Details here.
Just to add a bit regarding web servers: unlike on-premises servers, where it's typically economical to buy the largest machine possible, in Windows Azure it's better to go with smaller VMs and have more of them. So, for a web server, go with the smallest VM size that will still run your software. Then, to handle additional traffic, scale out to more web instances. As traffic ebbs, reduce the instance count. Load will be distributed amongst all of the web servers (which are stateless, with no user affinity to a specific instance).
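In today's Azure the scale-out pattern described above is usually implemented with a virtual machine scale set rather than classic web roles; as a hedged sketch (my addition, not the original answer's method), adjusting the instance count with the Python SDK looks roughly like this:

```python
# Sketch: scale a VM scale set out or in by changing its capacity.
# Assumes azure-identity / azure-mgmt-compute; names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder
RESOURCE_GROUP = "web-rg"                   # placeholder
VMSS_NAME = "web-vmss"                      # placeholder

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
vmss = client.virtual_machine_scale_sets.get(RESOURCE_GROUP, VMSS_NAME)
vmss.sku.capacity = 4   # scale out to 4 small web instances (lower this to scale in)
client.virtual_machine_scale_sets.begin_create_or_update(
    RESOURCE_GROUP, VMSS_NAME, vmss).result()
```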
