Does Azure Machine Learning charge for Compute Instances even when they are stopped?

I created a new Azure Machine Learning Workspace and Compute Instance to work through some Python ML tutorials. I was stuck on this issue for a few days. While I was waiting for assistance, I stopped the compute instance.
Looking through the Cost Analysis for this Resource Group, it looks like I'm being charged even though the Compute Instance has been stopped for a few days.
Is there a pay-as-you-go version of AML Compute Instance so I don't get charged when the instance is turned off?
EDIT:
Hm. It looks like the bulk of the cost is coming from a Load Balancer and Storage, not the Compute Instance (assuming this is the "VM" shown). The Compute Instance was stopped in the AML Studio.
It's unclear to me which Azure resource the Load Balancer represents.
Also the only Storage account in this Resource Group has 3 empty Containers...
Maybe these costs were associated with setting up the AML Workspace?

The answer is yes. As seen in the docs here, there is a Load Balancer resource that is provisioned as part of the AML Workspace. This resource is not visible in the Resource Group (which was throwing me off). It appears to accumulate cost even with the Compute Resource turned off.
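If you want the billing to stop entirely rather than just pausing the VM, one option is to delete the compute instance while you don't need it and recreate it later. A minimal sketch with the azureml-core SDK (assuming a workspace config.json is present; the instance name is hypothetical):

    from azureml.core import Workspace
    from azureml.core.compute import ComputeInstance

    ws = Workspace.from_config()                       # reads config.json for the workspace
    ci = ComputeInstance(workspace=ws, name="my-ci")   # hypothetical instance name

    # Stopping only deallocates the VM, so the VM compute charges stop,
    # but other resources tied to the workspace can keep accruing cost.
    ci.stop()

    # If the instance is not needed at all, deleting it removes the VM entirely;
    # check Cost Analysis afterwards to confirm which line items disappear.
    ci.delete()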

Related

Is there any way to run AKS in Azure Dev/Test Labs?

I am looking for a way to run an AKS or Kubernetes cluster in Dev/Test Labs, but I couldn't find an official way. I believe Azure now allows using production services in Dev/Test Labs, but they haven't yet published documentation on how to achieve this. I need high-memory VMs (128/256 GB), though AKS doesn't seem to support those VM sizes in a cluster. The Auto-Shutdown option would also save cost for these VMs, so I want to build this in a Dev/Test Lab. Any suggestion would be helpful. Thanks!
AKS is a managed service and you can't run it on your own VMs or on the ones from Dev/Test Labs. Why are you saying that you can't use 128/256 GB RAM VMs? When selecting your VM size in the portal, make sure to select the Memory Optimized family.
If I understand correctly, your goal is to save money running these high-cost VMs. One possible way to achieve this is to create your cluster with a single instance of a smaller VM and add a second node pool with the larger VMs. You can then create and destroy that second pool on demand.
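As a rough sketch of that create/destroy cycle with the azure-mgmt-containerservice SDK (the resource group, cluster, and pool names here are placeholders; the equivalent az aks nodepool commands would work just as well):

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.containerservice import ContainerServiceClient
    from azure.mgmt.containerservice.models import AgentPool

    client = ContainerServiceClient(DefaultAzureCredential(), "<subscription-id>")

    # Morning: add the large-memory pool so heavy workloads have somewhere to run.
    client.agent_pools.begin_create_or_update(
        "my-rg", "my-aks-cluster", "bigmem",
        AgentPool(count=2, vm_size="Standard_E16s_v3", mode="User"),
    ).result()

    # Night: delete the pool so the expensive VMs stop billing.
    client.agent_pools.begin_delete("my-rg", "my-aks-cluster", "bigmem").result()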

Cannot create Compute Instance Microsoft Azure ML

I am new to working with Microsoft Azure and I am trying to open a Notebook from the Azure Machine Learning studio.
Every time I try to create a new compute it says "Creation failed", so I cannot work. My region is francecentral and I have tried different Virtual Machine sizes.
Your reason might be explained here:
As demand continues to grow, if we are faced with any capacity constraints in any region during this time, we have established clear criteria for the priority of new cloud capacity. Top priority will be going to first responders, health and emergency management services, critical government infrastructure organizational use, and ensuring remote workers stay up and running with the core functionality of Teams.
If you qualify for this category, you should reach out to Azure Support or your Microsoft representative. If not, you need to keep retrying (might work better at night) or try a different region.
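If you end up retrying, it can be scripted. A rough sketch with the azureml-core SDK that walks through a few VM sizes until one provisions (the sizes and instance name are placeholders):

    import time
    from azureml.core import Workspace
    from azureml.core.compute import ComputeInstance, ComputeTarget
    from azureml.exceptions import ComputeTargetException

    ws = Workspace.from_config()

    for vm_size in ["STANDARD_DS3_V2", "STANDARD_DS11_V2", "STANDARD_D4S_V3"]:
        try:
            config = ComputeInstance.provisioning_configuration(vm_size=vm_size)
            ci = ComputeTarget.create(ws, "my-compute-instance", config)
            ci.wait_for_completion(show_output=True)
            break  # stop once a size provisions successfully
        except ComputeTargetException as err:
            print(f"{vm_size} failed: {err}")
            time.sleep(60)  # wait a bit before trying the next size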

Contradictory information between Azure Portal and az command getting available SKUs for VMs

Since yesterday, I've been trying to create a new VM and I can't, because the majority of the sizes are not available in my region, West Europe. Using the Azure Portal, all D-series sizes show as greyed out.
I tried Azure Reservations with the same result.
I suppose there are issues in this and other regions.
But then I tried to check availability using the CLI tool az, following this reference. Executing the referenced command, I get a list of available sizes that seems contradictory, because it includes some D-series VMs.
Could it be that they are listed as available in general, without taking current capacity into account?
Is there any az command to get actual available sizes in my region?
Da/Das-series VMs use AMD CPUs. However, based on my test, they are not available in the West Europe region.
The normal D-series VMs should be available in West Europe. There are several reasons that may prevent you from choosing them:
You have reached your CPU resource limitation in that region. To solve this, you may request an increase in the CPU quota limits for that Azure VM series.
Some Azure subscriptions (trial, MSDN developer, student trial and so on) can only create limited resources in limited locations. To solve this, you may update your subscription to a pay-as-you-go one.
Other reasons. You may directly contact the Azure Support team by submitting a support request on the Azure portal.
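To see which sizes are actually restricted for your own subscription in a given region, you can query the resource SKUs API. A small sketch with the azure-mgmt-compute SDK (the subscription ID is a placeholder); unlike a plain size listing, it shows the restriction reason, such as NotAvailableForSubscription:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # List VM SKUs in West Europe along with any restrictions that apply to this
    # subscription (quota, zone, or not-available-for-subscription).
    for sku in client.resource_skus.list(filter="location eq 'westeurope'"):
        if sku.resource_type != "virtualMachines":
            continue
        reasons = [str(r.reason_code) for r in (sku.restrictions or [])]
        print(sku.name, "restricted:" if reasons else "available", ", ".join(reasons))

The az vm list-skus command surfaces the same restriction information from the CLI.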

Turning off ServiceFabric clusters overnight

We are working on an application that processes Excel files and produces output. Availability is not a big requirement.
Can we turn the VM scale sets off at night and turn them on again in the morning? Will this kind of setup work with Service Fabric? If so, is there a way to schedule it?
Thank you all for replying. I got a chance to talk to a Microsoft Azure rep and have documented the conversation here for the community's sake.
Response for initial question
A Service Fabric cluster must maintain a minimum number of Primary node types in order for the system services to maintain a quorum and ensure health of the cluster. You can see more about the reliability level and instance count at https://azure.microsoft.com/en-gb/documentation/articles/service-fabric-cluster-capacity/. As such, stopping all of the VMs will cause the Service Fabric cluster to go into quorum loss. Frequently it is possible to bring the nodes back up and Service Fabric will automatically recover from this quorum loss, however this is not guaranteed and the cluster may never be able to recover.
However, if you do not need to save state in your cluster then it may be easier to just delete and recreate the entire cluster (the entire Azure resource group) every day. Creating a new cluster from scratch by deploying a new resource group generally takes less than a half hour, and this can be automated by using Powershell to deploy an ARM template. https://azure.microsoft.com/en-us/documentation/articles/service-fabric-cluster-creation-via-arm/ shows how to setup the ARM template and deploy using Powershell. You can additionally use a fixed domain name or static IP address so that clients don’t have to be reconfigured to connect to the cluster. If you have need to maintain other resources such as the storage account then you could also configure the ARM template to only delete the VM Scale Set and the SF Cluster resource while keeping the network, load balancer, storage accounts, etc.
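The nightly delete/recreate can also be driven from Python instead of PowerShell. A hedged sketch with azure-mgmt-resource (the resource group name, region, template file, and deployment name are all hypothetical):

    import json
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.resource import ResourceManagementClient
    from azure.mgmt.resource.resources.models import Deployment, DeploymentProperties

    client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Night: tear down the whole resource group holding the cluster.
    client.resource_groups.begin_delete("sf-cluster-rg").result()

    # Morning: recreate the group and redeploy the ARM template for the cluster.
    client.resource_groups.create_or_update("sf-cluster-rg", {"location": "westeurope"})
    with open("sf-cluster-template.json") as f:
        template = json.load(f)
    client.deployments.begin_create_or_update(
        "sf-cluster-rg", "nightly-redeploy",
        Deployment(properties=DeploymentProperties(mode="Incremental", template=template)),
    ).result()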
Q) Is there a better way to stop/start the VMs rather than directly from the scale set?
If you want to stop the VMs in order to save cost, then starting/stopping the VMs directly from the scale set is the only option.
Q) Can we do a primary set with the cheapest VMs we can find and add a secondary set with powerful VMs that we can turn on and off?
Yes, it is definitely possible to create two node types – a Primary that is small/cheap, and a ‘Worker’ that is a larger size – and set placement constraints on your application to only deploy to those larger size VMs. However, if your Service Fabric service is storing state then you will still run into a similar problem that once you lose quorum (below 3 replicas/nodes) of your worker VM then there is no guarantee that your SF service itself will come back with all of the state maintained. In this case your cluster itself would still be fine since the Primary nodes are running, but your service’s state may be in an unknown replication state.
I think you have a few options:
Instead of storing state within Service Fabric's reliable collections, store your state externally in something like Azure Storage or SQL Azure. You can optionally use something like Redis cache or Service Fabric's reliable collections to maintain a faster read cache; just make sure all writes are persisted to an external store. This way you can freely delete and recreate your cluster at any time you want.
Use the Service Fabric backup/restore in order to maintain your state, and delete the entire resource group or cluster overnight and then recreate it and restore state in the morning. The backup/restore duration will depend entirely on how much data you are storing and where you export the backup.
Utilize something such as Azure Batch. Service Fabric is not really designed to be a temporary high capacity compute platform that can be started and stopped regularly, so if this is your goal you may want to look at an HPC platform such as Azure Batch which offers native capabilities to quickly burst up compute capacity.
No. You would have to delete the cluster, then recreate it and redeploy the application in the morning.
Turning off the cluster is, as Todd said, not an option. However, you can scale down the number of VMs in the cluster.
During the day you would run the number of VMs required. At night you can scale down to the minimum of 5. Check this page on how to scale VM scale sets: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-cluster-scale-up-down/
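A rough sketch of that schedule with azure-mgmt-compute (the resource group, scale set name, and counts are placeholders), which changes the scale set's instance count rather than deallocating it:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

    def set_capacity(resource_group: str, vmss_name: str, capacity: int) -> None:
        # Read the current scale set, change only its instance count, and push it back.
        vmss = client.virtual_machine_scale_sets.get(resource_group, vmss_name)
        vmss.sku.capacity = capacity
        client.virtual_machine_scale_sets.begin_create_or_update(
            resource_group, vmss_name, vmss).result()

    set_capacity("sf-cluster-rg", "nt1vm", 5)   # overnight minimum for the node type
    set_capacity("sf-cluster-rg", "nt1vm", 10)  # daytime capacity

Either call can be put on a schedule with something like Azure Automation or a cron job.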
For development purposes, you can create a Dev/Test Lab Service Fabric cluster which you can start and stop at will.
I have also been able to start and stop SF clusters on Azure by starting and stopping the VM scale sets associated with these clusters. But upon restart all your applications (and with them their state) are gone and must be redeployed.
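For completeness, a minimal sketch of that stop/start approach with azure-mgmt-compute (resource names are hypothetical); the quorum and state caveats above still apply:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Night: deallocate every instance in the scale set so compute billing stops.
    client.virtual_machine_scale_sets.begin_deallocate("sf-cluster-rg", "nt1vm").result()

    # Morning: start the instances again (applications must be redeployed as noted above).
    client.virtual_machine_scale_sets.begin_start("sf-cluster-rg", "nt1vm").result()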

Unknown "Azure Premium Storage" item in bill

I've just checked my current Azure usage, and I'm seeing an item called...
Premium Storage - Page Blob/P10 (Units) - US West
This item is quite expensive, but I don't know what it is, or how it's been allocated, or how to remove it.
I've checked my storage accounts, VMs, and databases, and confirmed that they're all in Australia.
This is the first time I've noticed this "US West" storage item.
The only new thing I've added to Azure recently is an "S0 Standard (10 DTUs)" database account. However this says it's located in Australia, and it currently has nothing in it. I've checked the resource cost, and it's currently showing $0.
How do I go about figuring out what this mystery storage resource is?
I think I've traced it. It's the disk for one of my VMs. I don't know why the location for the storage is "US West" when the location for the VM is Australia. The VM was created as an experiment and isn't currently being used; I was testing out the new ARM deployment type. My fault for not fully understanding the pricing before creating the resource. I assumed storage cost would be negligible and that the VM wouldn't cost much if it was deallocated most of the time. Anyway, I'll delete the VM and do more reading about pricing before trying it again.
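For anyone trying to do the same tracing, a hedged sketch with azure-mgmt-compute (the subscription ID is a placeholder) that walks every VM and prints where its OS disk actually lives, which helps tie a "Premium Storage - Page Blob/P10" line item back to a specific VM disk:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

    for vm in client.virtual_machines.list_all():
        os_disk = vm.storage_profile.os_disk
        if os_disk.vhd:                    # unmanaged disk: a page blob in a storage account
            location = os_disk.vhd.uri
        elif os_disk.managed_disk:         # managed disk: its own ARM resource
            location = os_disk.managed_disk.id
        else:
            location = "unknown"
        print(vm.name, vm.location, "->", location)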
