Why is my managed instance taking so long to create?
It has been almost two days since the managed instance creation started, and it is still showing the deployment as in progress. This is the first time I'm creating a MI. Does anyone know how long it will take to create?
Very basic specification: Gen4, 8 cores, 256 memory; location: South Central US.
I don't see any error yet.
Creating the first instance within a subnet takes a few hours, as Managed Instance is a customer VNet-injected service and it takes time to provision the whole dedicated cluster - it's much more work than taking a random pre-provisioned VM and spinning up a few processes on it.
That said, anything that takes more than 6 hours (at the moment, subject to improvement) indicates some sort of issue.
I'd suggest opening a support ticket, or you can contact me via private message with more details about the specific instance.
Two days is excessive and likely indicates a bug in Azure's back-end scripts. When provisioning large numbers of objects in your Azure subscription, you can run out of resources within your 'Ring', which is a pre-allocated set of resources you have to work with. If you've exhausted these resources, it does take some time for Azure to provision more before the managed instance can be created.
I've deployed 10 or so Managed Instances and have seen deployment times range anywhere from 1 to 8 hours for a healthy deploy, and over a day for a deploy that ran into a bug.
Related
I'm using the docker compose command to spin up 2 containers in Azure Container Instances, using an ACI docker context.
Sometimes it takes only a short while (below 1 min) to get the containers up and running. However, it often takes much longer (up to 5 minutes, I would say). Does anybody have an idea why ACI creation and container startup can be slow? Can it be improved, for example, by running the containers in a resource group belonging to a different Azure "location"?
Thank you very much for any ideas.
Nobody here will be able to tell you exactly why there is a difference; it could be anything, from the time it takes to find a slot in an underlying compute cluster to the time it takes to pull the image from your container registry. As far as I know, there is no SLA on the startup time.
So yes, you could try different Azure regions and maybe get lucky in finding a region that is less busy on the ACI side, but this may or may not help. (The resource group has nothing to do with it, as it is just a logical container.)
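If you do want to compare regions empirically, a rough timing harness like the sketch below (Python wrapping the same docker CLI the question already uses) can help. The context names are made up, it assumes a docker-compose.yml in the working directory, and it assumes `docker compose up` returns once the container group is deployed.

```python
import subprocess
import time

def time_compose_up(context_name: str) -> float:
    """Deploy the compose file in the given docker context and return elapsed seconds."""
    subprocess.run(["docker", "context", "use", context_name], check=True)
    start = time.monotonic()
    subprocess.run(["docker", "compose", "up"], check=True)
    elapsed = time.monotonic() - start
    # Tear the container group down again so repeated runs measure a cold start.
    subprocess.run(["docker", "compose", "down"], check=True)
    return elapsed

# Context names are made up; each would point at a resource group in a different region.
for ctx in ["aci-westeurope", "aci-northeurope"]:
    print(ctx, round(time_compose_up(ctx), 1), "seconds")
```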
I'm using the Azure REST API to create, deploy and start a Cloud Service (classic) (cspkg hosted in Azure Storage) with hundreds of instances. I'm noticing that the time Azure takes to provision and start the requested instances is really heterogeneous. The first instances might start in 6-7 minutes, but the last ones might take 15-20 minutes, about 10 minutes longer than the first ones. So my questions are:
Is this the expected behaviour? If so, what's the logic behind this? Could I do anything to speed things up?
How is Azure billing this? Is it counting the total number of instances from the very moment the Cloud Service is deployed, or is it taking into account the specific timing of each individual instance?
UPDATE: I've been testing more scenarios and found a puzzling surprise. If I replace all the processes that my Cloud Service instances should run with a simple wait of a few minutes (a .bat file running the timeout command), then all the instances start almost at the same time (about 15 seconds between the fastest and slowest instance). It was not just luck or random behaviour; I've confirmed that this behavior is repeatable, and I can't even begin to explain the root cause.
I also checked this a few weeks ago. The startup time depends on the size of the machine: if it is larger it has more resources, so the boot time is faster. Also, if there is any error or exception on startup, the VM will recycle until it can start successfully. I googled it but did not find any way to speed this up, so I don't think there is anything you can do about the startup time. In the background, every time you deploy something, Azure creates a Windows Server VM, boots it up, deploys your package on it, and puts your web roles behind a load balancer - this is why it takes so long, because a lot of things are happening.
The billing part is also not great for classic cloud services: you have to pay even during startup and recycling, and even when the instances are turned off. So once you are done with your update, delete the VMs from your staging slot or scale down, because you will be billed even while they are turned off.
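To make that billing point concrete, here is a back-of-the-envelope sketch (Python). The instance count, readiness delay, and hourly rate are all made-up numbers, and it assumes, per the answer above, that billing starts at deployment rather than at readiness.

```python
# Classic Cloud Services bill every deployed instance from deployment time,
# not from the moment each instance becomes ready (and keep billing while
# stopped, until the deployment is deleted). All numbers are illustrative.
instances = 200
slowest_ready_minutes = 20        # last instance becomes ready ~20 min after deploy
hourly_rate = 0.10                # hypothetical per-instance-hour price

warmup_instance_hours = instances * slowest_ready_minutes / 60
print(f"Instance-hours billed before every instance is ready: {warmup_instance_hours:.0f}")
print(f"Approximate cost of that warm-up window: ${warmup_instance_hours * hourly_rate:.2f}")
```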
Hi, I need to buy an Azure subscription with two DBs and 1 Basic App.
Anyway, I don't understand what "instance * hours" stands for.
I wrote a B2B site in ASP.NET that has to run in this subscription.
How can I calculate or know how many instances I need?
And how do you calculate the hours per instance?
Is it possible to make it scalable?
For starters, there are 4 questions in this question.
Well, instance * hours means you will have to pay X of some currency for every hour of every instance working. So 1 instance working for 20 hours = 20 hours billed, and 5 instances working for 20 hours = 100 hours billed.
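As a trivial sketch of that arithmetic (the numbers are the same ones used in the examples above):

```python
def billed_instance_hours(instances: int, hours: float) -> float:
    """Instance * hours billing: the bill scales with both instance count and time."""
    return instances * hours

print(billed_instance_hours(1, 20))   # 1 instance for 20 hours  -> 20 hours billed
print(billed_instance_hours(5, 20))   # 5 instances for 20 hours -> 100 hours billed
```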
Hours are calculated pretty straightforwardly: once you've created an App Service Plan, you are billed continuously until you delete it.
Yes, WebApps are scalable. As for "How can I calculate or know how many instances I need?" - we can't help you there. It depends on your load; you would need to do some performance testing emulating actual load.
Also, check out the pricing calculator.
Edit, on instances: when you create a WebApp it has 1 instance initially. An instance is a VM hosting IIS, which in turn hosts your WebApp. When you scale out, you create additional instances (VMs) that host additional IIS instances serving copies of your WebApp.
I am testing autoscaling features on Azure with Service Bus queue messages and a worker role.
The simple autoscale scenario is: when there are more than 10 messages in the queue per instance, scale out. However, during testing I noticed that even though I had pushed more than 200 messages into the queue, even after half an hour:
1) Only one instance was scaled up (I started with 1, it became 2).
2) Neither of the two instances was stable, i.e. in the "Running" state.
This has me confused. Are the following possible reasons for the inconsistent behaviour?
1) My subscription is a company MSDN subscription with a capped monthly limit (which is of course only meant for dev work).
2) I had pushed 200 messages within the space of a few seconds. Obviously this can be a production scenario, but does it interfere?
What else could the possibilities be?
Azure's auto-scaling works on 60-minute aggregation periods. Once it kicks in, it usually adds 1 instance at a time, and it takes 10-12 minutes to add an instance to a cloud service (which is what I'm assuming you have).
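Roughly, each autoscale evaluation behaves like the sketch below; the 10-messages-per-instance threshold mirrors the rule from the question, and the fixed step of 1 reflects the behaviour described above (the aggregation window itself isn't modelled here, so treat it as a simplified illustration, not the actual algorithm).

```python
def evaluate_scale_rule(avg_queue_length: float, instance_count: int,
                        threshold_per_instance: int = 10, step: int = 1) -> int:
    """Return the instance count after one autoscale evaluation (simplified model)."""
    if avg_queue_length / instance_count > threshold_per_instance:
        return instance_count + step
    return instance_count

# A burst of 200 messages triggers one scale-out step...
print(evaluate_scale_rule(avg_queue_length=200, instance_count=1))  # 2
# ...but if the worker drains the queue before the next evaluation, no further step happens.
print(evaluate_scale_rule(avg_queue_length=0, instance_count=2))    # 2
```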
If you want much more control and more options when it comes to auto-scaling, consider 3rd-party products that specialize in this, like CloudMonix, which is a successor to AzureWatch - both of which I'm associated with.
Special note as to why your instances were both non-Ready during the scaling period:
It is because you started with 1 instance and went to 2. If you were to start with 2 instances and go to 3+, your first two instances would be fine. This is a special issue with Azure's load balancer; I forget the explanation Microsoft gave for it, but it's somewhere on the forums if you look.
Service Bus will eat 200 messages in a few seconds. Try sending more like 20,000.
Here is a sample; it uses F#, but the concept is the same.
http://indiedevspot.com/2015/03/14/mocking-iot-telemetry-data-with-azure/
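If you'd rather not use F#, here is a minimal Python sketch of the same idea with the azure-servicebus package; the queue name and the environment variable holding the connection string are placeholders.

```python
import os
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = os.environ["SERVICEBUS_CONNECTION_STRING"]  # placeholder env var
QUEUE_NAME = "test-queue"                              # placeholder queue name

# Push enough messages, in batches, that the queue stays deep long enough
# for the autoscale rule to notice it.
with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_sender(queue_name=QUEUE_NAME) as sender:
        batch = sender.create_message_batch()
        for i in range(20_000):
            message = ServiceBusMessage(f"test message {i}")
            try:
                batch.add_message(message)
            except ValueError:
                # Batch is full: send it and start a new one with the current message.
                sender.send_messages(batch)
                batch = sender.create_message_batch()
                batch.add_message(message)
        sender.send_messages(batch)  # send the final, partially filled batch
```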
I subscribed to the free 90-day Azure trial offered by MS. I was excited and talked about it everywhere (including my blog http://techibee.com/windows-2012/free-try-windows-server-2012-in-azure-for-90-days/1876) about the free service offered by MS and how to make use of it. Well, my excitement lasted only 7-8 days. Today I got a message from the Azure team that my subscription was disabled because my compute hours exceeded the monthly limit.
I am just wondering how these compute hours are calculated in my case. I configured 2 VMs (both medium) and have been using them to explore things. I have never shut them down since creation. Does anyone have an idea how these two VMs ran up against the limit?
Another question I have: since the subscription is disabled for this month, I am considering purchasing a few more compute hours (pay-as-you-go). If I do that, should I shut down the VMs when I am not actively using them? Will that stop the compute hours from increasing, or will they continue to charge me even for the hours the VMs are shut down? All I want is to be billed only when I am actively using them; when I am not connected to the host, I shouldn't be. That doesn't seem to be how the trial worked, and their calculations seem different. Can anyone give me some clarity?
From http://www.windowsazure.com/en-us/pricing/details/#header-3
Compute hours are charged whenever the Virtual Machine is deployed, irrespective of whether it is running or not.
That's where all your hours went. You need to delete your VMs to stop them from consuming compute hours.
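As a rough sanity check of where the hours went: the 750-small-instance-hour monthly quota and the medium = 2 x small multiplier below are assumptions about the trial terms of that era, not something stated in this thread, but they line up with the 7-8 days you observed.

```python
# Rough reconstruction of the trial burn rate; quota and multiplier are assumptions.
monthly_quota_small_hours = 750
medium_multiplier = 2            # one medium VM bills like 2 small instances per hour
vm_count = 2

burn_per_clock_hour = vm_count * medium_multiplier              # 4 small-instance hours/hour
days_until_exhausted = monthly_quota_small_hours / (burn_per_clock_hour * 24)
print(f"Quota exhausted after about {days_until_exhausted:.1f} days")  # ~7.8 days
```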
With the free trial account you can only run the equivalent of 1 medium VM. Your offer probably ran out early because you configured two.
Be aware that if you create a VM and turn it off, you will still be charged, as indicated above.