I am new to the Azure cloud and have deployed my first Worker Role.
A process that takes 30 minutes to complete on my local system takes more than an hour in the Azure worker role.
To find the issue, I connected to the worker role via Remote Desktop. In Task Manager I found that the worker role process (WaWorkerHost.exe) was using just 12% of the CPU, even though no other process was running. When I run the same code on my local system, it uses 24-25% of the CPU.
I think that is why the worker role is taking twice as long as my local system.
The VM size of my worker role is Extra Large (8 cores, 14,336 MB).
Since there is no other process on the worker role, I think my process should get more CPU. But I cannot find a way to increase the CPU usage of the worker role process (WaWorkerHost.exe) from the Azure portal.
Please help with this.
Thanks.
CPU consumption alone is not enough to determine whether a machine is working hard or hardly working. Your assumption "CPU percentage is low, so the machine isn't busy (enough)" is far too simplistic. You should take more resources into consideration, like disk access, memory usage and network access.
As you can imagine, it's relatively simple to create an application that doesn't use up all your CPU while it completely freezes your machine. Just have it read and write files all over the disk, for instance.
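To make that concrete, here is a minimal C# sketch of such an application (the paths and sizes are arbitrary). It will barely register on the CPU graph while keeping the disk completely busy:

using System.IO;

class DiskThrasher
{
    static void Main()
    {
        var buffer = new byte[4 * 1024 * 1024]; // 4 MB of zeroes
        for (int i = 0; ; i = (i + 1) % 100)
        {
            // Synchronous writes and reads: the thread spends its time
            // waiting on the disk, so CPU usage stays low while I/O is saturated.
            string path = Path.Combine(Path.GetTempPath(), "thrash_" + i + ".tmp");
            File.WriteAllBytes(path, buffer);
            File.ReadAllBytes(path);
        }
    }
}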
EDIT:
Expanding on the first paragraph: what does the Worker Role actually do? Does it connect to some type of storage? Or maybe an internally hosted database or files? Is it putting messages on a queue or calling external services?
All of these things might be the reason the Worker Role takes longer to complete the task than your local machine does, for instance because of network latency. And while it may not seem too busy if you only look at the CPU, it might be very busy waiting for an answer from an external resource.
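One cheap way to test that hypothesis is to compare wall-clock time with the processor time the process actually consumed. A sketch, where DoTheWork() is a placeholder for the real workload:

using System;
using System.Diagnostics;

class Program
{
    static void Main()
    {
        var wall = Stopwatch.StartNew();
        DoTheWork(); // placeholder for whatever the Worker Role actually does
        wall.Stop();

        TimeSpan cpu = Process.GetCurrentProcess().TotalProcessorTime;
        Console.WriteLine("Wall clock: {0}, CPU time: {1}", wall.Elapsed, cpu);
        // If CPU time is far below wall-clock time, the process spent most of
        // its life waiting (network, disk, locks) rather than computing.
    }

    static void DoTheWork() { /* the actual workload */ }
}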
Related
We have a simple worker that picks up messages from a queue and runs a few queries. We never write to the disk ourselves, but we do have diagnostics turned on in the role settings.
Once in a while the disk read/write spikes and the worker becomes unresponsive. What is the role trying to write to the disk? On the surface it doesn't appear to be a crash dump, because those tables and blobs are still empty. Are our diagnostics configured improperly?
Here's an example of a spike we saw recently. It was writing for over an hour!
Try enabling remote desktop support in the role configuration in the Azure portal.
Once the problem resurfaces, log in via RDP and start Resource Monitor. The Disk tab should be able to pinpoint disk IO usage by process and by file.
Enabling storage logs should tell you exactly what those reads and writes on the disk are.
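If you prefer to turn those logs on from code rather than the portal, a sketch along these lines should work with the classic storage client library (the connection string is a placeholder; the logs end up in the $logs container of the account):

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Shared.Protocol;

class EnableStorageLogging
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse("<your-storage-connection-string>");
        var blobClient = account.CreateCloudBlobClient();

        ServiceProperties props = blobClient.GetServiceProperties();
        props.Logging.Version = "1.0";
        props.Logging.LoggingOperations = LoggingOperations.All; // reads, writes and deletes
        props.Logging.RetentionDays = 7;                         // keep a week of logs
        blobClient.SetServiceProperties(props);
    }
}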
So, this is a very open-ended question, and the cause is hard to predict. Your Cloud Services are ultimately Windows machines, and what's happening inside Windows can (usually) only be monitored by something inside Windows.
It is very possible that a Windows Update related task was running. Those may cause spikes in disk R/W.
We typically advise users who use CloudMonix and want to know what causes CPU/memory/other issues to install the CloudMonix agent on their machines, as it captures running processes with their memory and CPU utilization and can show the process that caused a spike. Usually spikes in disk R/W are correlated with spikes in CPU usage.
Note, if the spike was caused by your own code, you'll need to use a profiler such as Red Gate's ANTS Performance Profiler or JetBrains dotTrace to determine the ultimate root cause.
HTH
I have created a Windows service with multiple threads (approx. 4-5). In this service, threads are created at specific intervals and then aborted. Once a thread is created, it performs some I/O and database operations.
I have a GUI for this service that provides the configuration it requires. In this GUI I want to add one more feature: showing the performance of the Windows service with respect to all its threads. I want to show CPU utilization (across all processors, if a multicore processor is available) along with memory utilization.
Windows Task Manager shows CPU (on a per-core basis) plus memory utilization; I want to build the same thing, but only for the threads run by my Windows service.
Can anybody help me out with how to get CPU % and memory utilization per thread?
I think you cannot get the CPU and memory utilization of individual threads. Instead, you can get the same for your service as a whole.
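For the per-process numbers, something like this sketch works; the process name is a placeholder, and CPU % is sampled over an interval and normalized by core count:

using System;
using System.Diagnostics;
using System.Threading;

class ServiceMonitor
{
    static void Main()
    {
        // "MyWindowsService" is a placeholder for the service's process name.
        Process proc = Process.GetProcessesByName("MyWindowsService")[0];

        TimeSpan cpuBefore = proc.TotalProcessorTime;
        Stopwatch wall = Stopwatch.StartNew();
        Thread.Sleep(1000); // sample over one second

        proc.Refresh();     // re-read the process counters
        double cpuPercent = (proc.TotalProcessorTime - cpuBefore).TotalMilliseconds
                            / (wall.ElapsedMilliseconds * Environment.ProcessorCount) * 100;

        Console.WriteLine("CPU: {0:F1}%  Working set: {1} MB",
            cpuPercent, proc.WorkingSet64 / (1024 * 1024));
    }
}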
My question is: why would you need to build your own functionality when Sysinternals Process Explorer gives you more detail? Any specific needs?
If you need to monitor the thread activities, you would be better off logging some information using log4net or another logging tool. This will give you an idea of the threads and what they are doing.
To be more specific, you could publish the logs using the TelnetAppender, which can be received by your application. This will help you look into the process in real time.
I'm using Worker Role machines (Medium -> 2 cores with 3.5 GB of RAM) to do massive work, and I'm able to use 100% of the CPU (on both cores) and 85% of the RAM.
During this work (each run takes around 20-40 minutes), Azure decides the machine is unhealthy and stops all my work.
In the Portal I see my worker instances getting the message "Waiting for the status (Role cannot be reached by the host system)".
Does anyone know a workaround that doesn't involve:
1) Using a more powerful role with cores that I will not use
2) Reducing the CPU usage of my application (100% CPU usage is what we want)
Thanks in advance
Rui
Try this:
' Lower the busy thread's priority so host processes (like the agent that reports role health) still get CPU time
Thread.CurrentThread.Priority = ThreadPriority.BelowNormal
Maybe some other things (processes, threads) need lower priorities as well, but this should keep the CPU utilization at 100%.
For (external) processes, start them with the following code (this is VB, but you should be able to convert it to your language):
Dim myprocess As New System.Diagnostics.Process()
myprocess.StartInfo.FileName = "C:\the\path\to\the\process.exe"
myprocess.Start()
' The priority class can only be set once the process has started
myprocess.PriorityClass = ProcessPriorityClass.BelowNormal
You could set the priority of the worker role's current process, but that might affect other processes that depend on it, so watch out; it's better to lower the priority of the demanding process. This won't slow it down unless there is other work to be performed:
' Adjust the priority of the worker role process itself (use with care)
Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.AboveNormal
This is affecting a service I'm running in Windows Azure as well.
I have just tried manually setting the Priority of WaAppAgent to High. Hopefully that helps.
But really, this shouldn't be my problem. Sometimes my database is running at 100% CPU, and that is the WORST possible time for a restart.
I really don't want to over-provision resources just so some heartbeat will be happy. Do the VM instances have a heartbeat event as well? Maybe the solution is to switch to a VM instead of a PaaS role?
Is there a way for the worker process to stop itself? I have already coded a console application that uses the REST API to start and stop the worker process and delete the cloud service deployment. According to the latest announcement, stopping worker processes will not cost anything; it is free now.
Can I make the worker process stop itself? Is there an event in the worker process for stopping itself? Please let me know.
So I think you're referring to Worker Roles, right? A worker process would simply be something you run in your app (like a thread, a method, something). Azure Worker Roles are full VMs.
Assuming that's what you meant: the new announcement about stopping VMs does not apply to Web/Worker Role instances; it applies to Virtual Machines. And those can be stopped easily via a REST call (or more easily via the PowerShell cmdlets that wrap the REST call). You could make that call from a Virtual Machine, which would effectively shut itself down, but I'm not so sure that's a sound idea. If you take that approach, it will be very hard for you to track the role-stop progress, since you would have just stopped the VM that made the call.
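For reference, the classic Service Management call that suspends a Cloud Service deployment (presumably what the console application in the question does) looks roughly like this sketch; the subscription id, service name, certificate and API version are all placeholders:

using System;
using System.IO;
using System.Net;
using System.Security.Cryptography.X509Certificates;
using System.Text;

class StopDeployment
{
    static void Main()
    {
        string uri = "https://management.core.windows.net/<subscription-id>" +
                     "/services/hostedservices/<service-name>/deploymentslots/production/?comp=status";

        var request = (HttpWebRequest)WebRequest.Create(uri);
        request.Method = "POST";
        request.Headers.Add("x-ms-version", "2012-03-01");      // Service Management API version
        request.ContentType = "application/xml";
        request.ClientCertificates.Add(new X509Certificate2("management.pfx", "<password>"));

        string body = "<UpdateDeploymentStatus xmlns=\"http://schemas.microsoft.com/windowsazure\">" +
                      "<Status>Suspended</Status></UpdateDeploymentStatus>";
        byte[] bytes = Encoding.UTF8.GetBytes(body);
        using (Stream s = request.GetRequestStream())
            s.Write(bytes, 0, bytes.Length);

        using (var response = (HttpWebResponse)request.GetResponse())
            Console.WriteLine("Status: {0}", response.StatusCode); // 202 Accepted if the request was queued
    }
}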
I would like to create an application that holds a large amount of volatile data in memory. Only a small part of this data needs to be persisted when the host machine shuts down or in case of maintenance. Outages should be rare; the in-memory data needs to be accessible most of the time, but rare restarts of the service are bearable.
If I were developing for a server, I would create a Windows Service, which runs reliably while the machine is up, and I would persist a fraction of the data in the OnStop() method.
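Something like this pattern, where SaveCriticalState() stands in for my persistence code:

using System.ServiceProcess;

class VolatileDataService : ServiceBase
{
    protected override void OnStop()
    {
        // Persist only the small slice of state that must survive a shutdown.
        SaveCriticalState();
    }

    void SaveCriticalState() { /* write the critical fraction of the data */ }
}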
I'm thinking of moving this whole thing to the cloud. The question is whether a Worker Role is similar to a Windows Service from this point of view. Does it run most of the time with rare outages, or is it recycled/restarted from time to time or when it is idle?
Like a Windows Service, a Worker Role is meant for processing background tasks. However, one thing you need to keep in mind is that your worker role can go down at any time, whether because of hardware failure or software updates. Thus you can't assume it is always highly available. That's why Windows Azure recommends deploying multiple instances of your application.
What you could do is have multiple instances of your worker role running, all of them sharing a common cache where you put the volatile data. Take a look at Windows Azure Caching (http://msdn.microsoft.com/en-us/library/windowsazure/gg278356.aspx), where you can either dedicate some memory of a VM (i.e. an instance) for caching or dedicate a full VM to caching. That way your volatile data lives somewhere outside of your worker roles, making it available to all instances.
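A sketch of what that could look like with the caching client, assuming the Caching NuGet package is installed and a cache named "default" is configured for the role (all names are illustrative):

using Microsoft.ApplicationServer.Caching;

class VolatileStore
{
    static void Main()
    {
        // DataCacheFactory reads the cache client settings from the config file.
        DataCacheFactory factory = new DataCacheFactory();
        DataCache cache = factory.GetDefaultCache();

        cache.Put("volatile-state", "large in-memory payload"); // visible to all role instances
        string state = (string)cache.Get("volatile-state");
    }
}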