There is probably an easy solution to this, but my searching has been unable to find it.
In my Azure solution, I have a worker role with two instances that are pulling messages off a queue for processing. For debugging purposes, I want to temporarily stop those instances.
If I click on Cloud Services, and then click Instances, I see my two instances which are running, but there doesn't appear to be any way to pause/stop/turn them off. Any ideas as to how I can do this?
there doesn't appear to be any way to pause/stop/turn them off
Your observation is correct: you can't pause/stop/turn off a specific instance. You can stop or turn off an entire cloud service, but not an individual instance. You could, however, delete a particular instance, but if I understand correctly that's not what you have in mind.
Any ideas as to how I can do this?
Do take a look at this blog post: http://alexandrebrisebois.wordpress.com/2013/09/29/temporarily-taking-a-cloud-service-role-instances-off-of-the-load-balancer/. Basically, the trick is to make an instance report itself as Busy so that the Azure load balancer does not send requests to that instance.
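For illustration, here's a minimal sketch of that trick using the RoleEnvironment.StatusCheck event, inside your RoleEntryPoint subclass (Microsoft.WindowsAzure.ServiceRuntime). The shouldPause flag is hypothetical; you'd flip it yourself via a configuration setting, a marker blob, or similar. Note that Busy only takes the instance off the load balancer, so a queue-reading worker also has to check the flag itself:

// Hypothetical pause flag, toggled by you (config setting, marker blob, etc.).
private volatile bool shouldPause;

public override void Run()
{
    // While shouldPause is set, report this instance as Busy so the load
    // balancer stops routing traffic to it; it reports Ready again once cleared.
    RoleEnvironment.StatusCheck += (sender, e) =>
    {
        if (shouldPause)
        {
            e.SetBusy(); // applies to the current status-check interval only
        }
    };

    while (true)
    {
        if (!shouldPause)
        {
            // pull and process queue messages here
        }
        Thread.Sleep(1000);
    }
}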
I am learning Azure cloud services.
I deployed the Contoso cloud service (https://azure.microsoft.com/en-us/documentation/articles/cloud-services-dotnet-get-started/) to a staging slot.
The Ads worker role instance is having problems (busy status).
Any tips on troubleshooting? Clicking on the instance just shows high-level info.
In a non-Azure application, looking at the event log would be useful. Should I follow these instructions: https://www.opsgility.com/blog/2011/09/20/windows-event-logs-with-windows-azure-diagnostics-and-powershell/
Thanks, Peter
You should first check this link to understand the various lifecycles a cloud service goes through. My understanding is that something written in the OnStart() method might be the cause, but without any code in the question I can't be sure. Then you can enable diagnostics (logs) using this and this link, so that you can see clearly which line of code has executed and what is keeping the cloud service busy.
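As a rough sketch of enabling diagnostics with the classic SDK (Microsoft.WindowsAzure.Diagnostics; the connection-string name below is the Diagnostics plugin default), you could ship trace logs to storage from OnStart() so they survive a recycle:

public override bool OnStart()
{
    // Transfer basic (trace) logs to the storage account every minute.
    DiagnosticMonitorConfiguration config =
        DiagnosticMonitor.GetDefaultInitialConfiguration();
    config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
    config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
    DiagnosticMonitor.Start(
        "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

    Trace.TraceInformation("OnStart reached"); // ends up in the WADLogsTable table
    return base.OnStart();
}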
I am seeking to improve the autoscaling behavior of an existing CPU-intensive app implemented in .NET and hosted as an Azure Worker Role. Currently, auto-scaling is controlled by the app itself through the Management API by adjusting the number of role instances. However, I do not control which instance gets de-allocated. In order to be more efficient, I would like to specify which instance is intended to be de-allocated, instead of just killing one instance at random.
Does anyone know how to achieve this?
An answer that was wrong has been removed; see the correct links in the comments from @Joannes below. Not deleting this, as I don't want to remove his comment.
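For context only, since the comments mentioned above aren't reproduced here: the classic Service Management REST API gained a Delete Role Instances operation (x-ms-version 2013-08-01) that removes instances by name, which is one way to control exactly which instance goes away. A rough sketch, with every identifier a placeholder:

// Sketch: delete a specific named instance via the Service Management REST API.
// SUBSCRIPTION_ID, SERVICE_NAME, DEPLOYMENT_NAME, the certificate file, and the
// instance name are all placeholders.
var handler = new WebRequestHandler();
handler.ClientCertificates.Add(
    new X509Certificate2("management.pfx", "certificate-password"));

using (var client = new HttpClient(handler))
{
    client.DefaultRequestHeaders.Add("x-ms-version", "2013-08-01");
    string url = "https://management.core.windows.net/SUBSCRIPTION_ID"
        + "/services/hostedservices/SERVICE_NAME/deployments/DEPLOYMENT_NAME"
        + "/roleinstances/?comp=delete";
    string body = "<RoleInstances xmlns=\"http://schemas.microsoft.com/windowsazure\">"
        + "<Name>MyWorkerRole_IN_1</Name></RoleInstances>";

    HttpResponseMessage response = client.PostAsync(
        url, new StringContent(body, Encoding.UTF8, "application/xml")).Result;
    Console.WriteLine(response.StatusCode); // expect Accepted (202) on success
}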
I have a cloud service with multiple instances. I'm trying to find a way to notify instance A if instance B goes down. Is there a way to hook into Azure notifications to do this, rather than have each instance poll the others?
Polling is the simplest approach. Do be aware that it takes some time for Azure to detect that an instance has gone down (it depends on how the outage occurs and how it manifests). In fact, if your app crashes, Azure may not even detect an outage at all.
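For what it's worth, here's a minimal sketch of the polling approach, assuming each instance declares an internal endpoint (named "HealthCheck" here purely for illustration) in ServiceDefinition.csdef that peers can probe with a plain TCP connect (using System.Net, System.Net.Sockets, and Microsoft.WindowsAzure.ServiceRuntime):

// Probe every other instance of this role over the hypothetical
// "HealthCheck" internal endpoint.
foreach (RoleInstance instance in RoleEnvironment.CurrentRoleInstance.Role.Instances)
{
    if (instance.Id == RoleEnvironment.CurrentRoleInstance.Id)
        continue; // skip ourselves

    IPEndPoint endpoint = instance.InstanceEndpoints["HealthCheck"].IPEndpoint;
    using (var client = new TcpClient())
    {
        try
        {
            client.Connect(endpoint); // crude liveness check: can we connect?
            Console.WriteLine("{0} is reachable", instance.Id);
        }
        catch (SocketException)
        {
            Console.WriteLine("{0} appears to be down", instance.Id);
        }
    }
}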
I'm involved with a product called CloudMonix (http://cloudmonix.com). It provides sophisticated monitoring/automation for Azure and has a set of features whereby it can call your APIs when it detects issues in an environment. Perhaps it could be used to detect an issue with your instance and post to an API method that you implement?
HTH
I have a web service hosted on azure as a web role.
In this web role I override the Run() method and perform some DB operations in the following order, to basically act as a worker role:
1) Go to blob storage to pull a small set of data.
2) Cache this data for future use.
3) Kill some blobs that are too old.
4) Thread.Sleep(900000);
Basically the operation repeats every 15 minutes in the background.
It runs fine when I run on DevFabric, but when deployed, the Azure portal gets stuck in a loop of "stabilizing role" and then "preparing node", never actually starting up either instance.
I have enabled diagnostics and it isn't showing me anything to suggest there is a problem.
I'm at a loss for why this could be happening.
Sounds like an error is being thrown in OnStart(). Do you have any way of putting a try/catch around the whole function and dumping the error into the Event Viewer? From there you would be able to remote into the instance and investigate the error.
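Something like this sketch, for instance (the "MyWorkerRole" event source is made up, and registering a new source requires the role to run elevated):

public override bool OnStart()
{
    try
    {
        // ... your real startup work here ...
        return base.OnStart();
    }
    catch (Exception ex)
    {
        // Registering an event source needs elevation
        // (<Runtime executionContext="elevated" /> in the service definition).
        if (!EventLog.SourceExists("MyWorkerRole"))
            EventLog.CreateEventSource("MyWorkerRole", "Application");
        EventLog.WriteEntry("MyWorkerRole", ex.ToString(), EventLogEntryType.Error);
        throw; // rethrow so the failure stays visible rather than silently continuing
    }
}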
Most likely the configuration deployed to the cloud is different from the one running in the emulator (SQL Azure firewall permissions, pointers to local dev storage instead of ATS, etc.). Also, make sure that your diagnostics is pointing to a real Azure storage account instead of local dev storage.
I would suggest moving this code to Run(). In OnStart(), your role instance isn't yet visible to the load balancer, and since you're introducing a very long (OK, infinite) delay into OnStart(), this is likely why you're seeing the messages about the role trying to stabilize (more info on these messages is here).
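As a sketch, the background work would then live in an endless loop inside Run(), which should never return (the steps and interval mirror the question):

public override void Run()
{
    // Run() should not return; returning causes the instance to recycle.
    while (true)
    {
        try
        {
            // 1) pull the small data set from blob storage
            // 2) refresh the cache
            // 3) delete blobs that are too old
        }
        catch (Exception ex)
        {
            // Don't let one bad iteration take the whole instance down.
            Trace.TraceError(ex.ToString());
        }

        Thread.Sleep(TimeSpan.FromMinutes(15)); // same 900000 ms interval
    }
}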
I don't typically like to answer my own question when someone else has made an effort to help me however I feel that the approach I used to solve this should be documented.
I enabled IntelliTrace when deploying to Azure, and I was able to see all the exceptions being thrown and investigate their cause.
IntelliTrace was critical in solving my deployment issues. I would recommend it to anyone seeing an inconsistency between running on DevFabric and deploying to Azure.
I already have a hosted service deployment with 2 worker role instances. I want to add another 3 worker role instances to the same hosted service, and at the same time I don't want the existing 2 instances to be restarted.
While I could create another hosted service and put the new 3 instances into it, I have heard that Azure supports only 6 hosted services per account. Is that true?
As my application will use the cloud drive function, I will only create 1 instance per role.
@Igor and @Mike have already given you great answers. Let me add a bit of detail to deal with your issue of Azure drives.
You seem to be designing a single-instance-per-role configuration, just so you can have one writeable drive per instance. If that's indeed the case, this really doesn't scale well: it requires you to modify your project (and deploy a new package) every time you want to scale out or in. As an alternative, just create a unique drive per instance of the same role. Come up with a naming scheme based on instance ID (for a simple example: /Drives/Instance0.vhd). Upon instance startup, have the instance glean its ID and create a drive (or mount an existing one). The ID is available via RoleEnvironment.CurrentRoleInstance.Id.
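If it's useful, here's a rough sketch of that per-instance-drive pattern against the classic CloudDrive API (the "DriveCache" local resource, the connection-string name, and the blob naming below are all made up for illustration, and the "drives" container is assumed to already exist):

// Mount (creating if needed) a VHD named after this instance's ID, so each
// instance of the same role gets its own writeable drive.
CloudStorageAccount account = CloudStorageAccount.Parse(
    RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"));

// "DriveCache" is a hypothetical <LocalStorage> resource in the service definition.
LocalResource cache = RoleEnvironment.GetLocalResource("DriveCache");
CloudDrive.InitializeCache(cache.RootPath, cache.MaximumSizeInMegabytes);

string instanceId = RoleEnvironment.CurrentRoleInstance.Id;
CloudDrive drive = account.CreateCloudDrive(
    string.Format("drives/{0}.vhd", instanceId));

try
{
    drive.Create(1024); // size in MB; throws if the drive already exists
}
catch (CloudDriveException)
{
    // Drive was created on an earlier startup; just mount it below.
}

string drivePath = drive.Mount(cache.MaximumSizeInMegabytes, DriveMountOptions.None);
Trace.TraceInformation("Instance {0} mounted its drive at {1}", instanceId, drivePath);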
I'm not sure I fully understand your question here, but here are some ideas. 1) As has been noted, if you want to add instances of existing worker roles, that can be done, and you should (I believe even in Java) be able to keep the existing instances from restarting.
2) If you do want to create new roles (new definitions), it is possible to do this, and as suggested, it is possible to go past the 'limit' of 6 services. And if you want to add new roles without restarting everything, that is also possible - see http://blogs.msdn.com/b/windowsazure/archive/2011/10/19/announcing-improved-in-place-updates.aspx for more information.
3) You also mention a desire to run a single instance of each role because of the use of the 'cloud drive'. I'm not sure I understand the issue here. If you are mounting the same drive, it does not matter whether it is mounted from a single instance of a role or from many, or mounted by different roles; in any case, it is only possible to mount it from one instance for writing, but you can mount it from multiple instances for reading.
I hope this is helpful. Please let me know if I have misunderstood your issues/questions and we'll see if we can find the right answer!
Thanks and good luck!
Instead of trying to add new roles into the production slot, consider deploying the new set of roles into the staging slot, testing, and performing a VIP swap.
Windows Azure allows you to add instances on the fly without rebooting your other instances. You will need to let Azure know about this when handling the RoleEnvironment.Changing event.
It is also true that, by default, Windows Azure caps you at 6 hosted services per account. I believe you can have that limit raised by calling support and having the restriction lifted after they do a credit check... However, the proper pattern for scaling out is NOT to add additional services, but to add additional instances.
Here is an example that makes sure that your instances are not rebooted after new ones have been added:
public override bool OnStart()
{
    RoleEnvironment.Changing += RoleEnvironmentChanging;
    return base.OnStart();
}

private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
{
    // Topology changes (instances added/removed) and configuration changes
    // both raise this event.
    if (e.Changes.Any(change => change is RoleEnvironmentTopologyChange ||
                                change is RoleEnvironmentConfigurationSettingChange))
    {
        // Setting e.Cancel to true would recycle (reboot) this instance;
        // leaving it false applies the change without a restart.
        e.Cancel = false;
    }
}
Coincidentally, if you're looking to set up automatic scaling, adding or removing instances of your service as demand increases or decreases, you can look into a 3rd-party service called AzureWatch at http://www.paraleap.com