Role cannot be reached by the host system - Azure Worker Role

I'm using Worker Role machines (Medium: 2 cores with 3.5 GB of RAM) to do heavy work, and I'm able to use 100% of the CPU (both cores) and 85% of the RAM.
During this work, which takes around 20 to 40 minutes per job, Azure decides the machine is unhealthy and stops all my work.
In the Portal I see my worker instances getting the message "Waiting for the status (Role cannot be reached by the host system)".
Does anyone know a workaround that doesn't involve:
1) Using a more powerful Role with cores that I will not use
2) Reducing the CPU usage of my application (100% CPU usage is exactly what we want)
Thanks in advance
Rui

Try this:
Thread.CurrentThread.Priority = ThreadPriority.BelowNormal
Maybe some other things (processes, threads) need lower priorities as well, but this should keep the CPU utilization at 100%.
For (external) processes, start them with the following code (this is VB, but you should be able to convert it to your language):

Imports System.Diagnostics

' Start the demanding work in a separate process, then lower its priority
Dim myprocess As New Process()
myprocess.StartInfo.FileName = "C:\the\path\to\the\process.exe"
myprocess.Start()
' The priority class can only be set once the process is running
myprocess.PriorityClass = ProcessPriorityClass.BelowNormal
You could set the priority of the worker role's current process, but other processes might depend on it, so watch out; it's better to lower the priority of the demanding process. That won't slow it down unless there is other work to be performed.
Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.AboveNormal
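If you prefer C#, here is a minimal, self-contained sketch of the same idea (the executable path is a placeholder):

using System.Diagnostics;
using System.Threading;

class PriorityDemo
{
    static void Main()
    {
        // Keep the role host itself responsive to the Azure agent
        Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.AboveNormal;

        // Run the CPU-heavy loop on a background thread at lower priority
        var worker = new Thread(DoHeavyWork) { Priority = ThreadPriority.BelowNormal };
        worker.Start();

        // Or start the demanding external process below normal priority
        var p = Process.Start(@"C:\path\to\process.exe");
        p.PriorityClass = ProcessPriorityClass.BelowNormal;

        worker.Join();
    }

    static void DoHeavyWork()
    {
        // CPU-bound work goes here; BelowNormal still gets 100% of the CPU
        // when nothing else needs the cores
    }
}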

This is affecting a service I'm running in Windows Azure as well.
I have just tried manually setting the priority of WaAppAgent to High. Hopefully that helps.
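If you want to script that instead of doing it by hand in Task Manager, a rough sketch (this assumes the guest agent process is named WaAppAgent and that the code runs elevated):

using System.Diagnostics;

class BumpAgentPriority
{
    static void Main()
    {
        // Raise the Azure guest agent's priority so its heartbeat still
        // gets CPU time while the role is pegged at 100%
        foreach (var agent in Process.GetProcessesByName("WaAppAgent"))
        {
            agent.PriorityClass = ProcessPriorityClass.High;
        }
    }
}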
But really, this shouldn't be my problem. Sometimes my database is running at 100% CPU, and that is the WORST possible time for a restart.
I really don't want to over-provision resources just to keep some heartbeat happy. Do the VM instances have a heartbeat event as well? Maybe the solution is to switch to a VM instead of a PaaS role?

Related

How do I reduce CPU percentage in the Azure portal?

Right now my website is slow, and when I look at xxxx-cd-hp it looks like the picture below. CPU: 90.36%. Is this still normal?
Apparently the CPU percentage increases at certain times, maybe because many users are accessing the site.
How can I solve this problem?
CPU time (or process time) indicates how much processing time on the CPU a process has used since it started, and CPU Percentage = (Process Time / Total CPU Time) * 100.
For example, if a process has been running for 5 hours, its CPU time is 5 hours, and it is a single-core machine, then the process has been utilizing 100% of the CPU's resources. This may be either a good or a bad thing, depending on whether you want to keep resource consumption low or utilize the entire power of the system.
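As an illustration, a minimal C# sketch of that calculation using the .NET Process API (the one-second sampling interval is an arbitrary choice):

using System;
using System.Diagnostics;
using System.Threading;

class CpuPercentage
{
    static void Main()
    {
        var proc = Process.GetCurrentProcess();
        TimeSpan cpuStart = proc.TotalProcessorTime;
        DateTime wallStart = DateTime.UtcNow;

        Thread.Sleep(1000); // sample over one second

        proc.Refresh();
        double cpuUsedMs = (proc.TotalProcessorTime - cpuStart).TotalMilliseconds;
        double availableMs =
            (DateTime.UtcNow - wallStart).TotalMilliseconds * Environment.ProcessorCount;

        // CPU Percentage = Process Time / Total CPU Time * 100
        Console.WriteLine($"CPU: {cpuUsedMs / availableMs * 100:F2}%");
    }
}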
App Service Diagnostics is an intelligent and interactive experience to help you troubleshoot your app with no configuration required. When you run into issues with your app, App Service Diagnostics points out what’s wrong to guide you to the right information to more easily troubleshoot and resolve issues. To access App Service diagnostics, navigate to your App Service web app in the Azure portal. In the left navigation, click on Diagnose and solve problems.

How to find the reason for 100% CPU utilization on AWS EC2?

I have a fleet of EC2 instances, A and B (both in the same AWS account, with the same Linux OS version and region, but in different AZs and behind different load balancers).
When I send the same load to fleets A and B, they behave differently.
Fleet A works normally with average CPU utilization up to 60%; fleet B shows CPU utilization spiking up to 100%, then dropping back to 0, and the same pattern appears on the other instances in the fleet.
Has anyone experienced this in the past?
SSH to host B, watch the system activity via top, and look for the process consuming the most CPU.
You can also inspect the process with the lsof command, or with:
ps -fp <PID of the process>
After analysis it was found that a couple of security patches were being executed, which was causing these spikes.
This has happened to me twice now with an MS Server instance running on EC2. In both cases it was the MS Update Service, which had taken 100% of the CPU and burnt off all my CPU credits.
The only way to get back in and fix it was to set "Instance T2/T3 Unlimited" and stop/disable the MS Update Service.

Azure Cloud Worker Role not getting enough CPU usage

I am new to the Azure cloud and I have deployed my first Worker Role.
A process that takes 30 minutes to complete on my local system takes more than 1 hour on the Azure worker role.
To find the issue, I connected to the worker role via Remote Desktop. Task Manager showed that the worker role process (WaWorkerHost.exe) was using just 12% of the CPU even though no other process was running. If I run the same code on my local system, it takes 24-25% of the CPU.
I think that is why the worker role takes double the time it does on my local system.
The VM size of my worker role is Extra Large (8 cores, 14,336 MB).
Since there is no other process on the worker role, I think my process should get more CPU. But I cannot find a way to increase the CPU usage of the worker role process (WaWorkerHost.exe) from the Azure portal.
Please help with this.
Thanks.
CPU consumption alone is not enough to determine whether a machine is working hard or hardly working. Your assumption "CPU percentage is low, so the machine isn't busy (enough)" is far too simplistic. You should take more resources into consideration, like disk access, memory usage and network access.
As you can imagine, it's relatively simple to create an application that doesn't use up all your CPU while it completely freezes your machine. Just have it read and write files all over the disk, for instance.
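To check those other resources programmatically, a quick C# sketch using Windows performance counters (the counter names assume an English Windows installation):

using System;
using System.Diagnostics;
using System.Threading;

class ResourceCheck
{
    static void Main()
    {
        var cpu  = new PerformanceCounter("Processor", "% Processor Time", "_Total");
        var disk = new PerformanceCounter("PhysicalDisk", "% Disk Time", "_Total");
        var mem  = new PerformanceCounter("Memory", "Available MBytes");

        // The first sample of a rate counter is always 0, so prime and wait
        cpu.NextValue();
        disk.NextValue();
        Thread.Sleep(1000);

        Console.WriteLine($"CPU:         {cpu.NextValue():F1}%");
        Console.WriteLine($"Disk:        {disk.NextValue():F1}%");
        Console.WriteLine($"Free memory: {mem.NextValue():F0} MB");
    }
}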
EDIT:
Expanding on the first paragraph: what does the Worker Role actually do? Does it connect to some type of storage? Or maybe an internally hosted database or files? Is it putting messages on a queue or calling external services?
All of these things might be the reason the Worker Role takes longer to complete the task than your local machine does, for instance because of network latency. And while it may not seem too busy if you only look at the CPU, it might be very busy waiting for an answer from an external resource.

Azure VM pricing - Is it better to have 80 single core machines or 10 8-core machines?

I am limited by a piece of software that utilizes a single core per instance of the program run. It will run off an SQL server work queue and deposit results to the server. So the more instances I have running the faster the overall project is done. I have played with Azure VMs a bit and can speed up the process in two ways.
1) I can run the app on a single core VM, clone that VM and run it on as many as I feel necessary to speed up the job sufficiently.
OR
2) I can run the app 8 times on an 8-core VM, ...again clone that VM and run it on as many as I feel necessary to speed up the job sufficiently.
I have noticed in testing that the speed-up is roughly the same for adding 8 single-core VMs as for 1 eight-core VM. Assuming this is true, would it be better price-wise to have single-core machines?
The pricing is a bit of a mystery: is it based on real CPU usage time, or something else? The 1 eight-core approach is a bit easier, since spinning machines up and taking them down takes time, but I guess that could be automated.
It does seem from some pricing pages that the multiple single-core VM approach would cost less?
Side question: could I use some PowerShell scripts to just keep adding VMs of a certain image and running the app, and then start shutting them down once I get close to finishing? After generating the VMs, would there be some way to kick off the app without having to remote into each one and run it?
I would argue that all else being equal, and this code truly being CPU-bound and not benefitting from any memory sharing that running multiple processes on the same machine would provide, you should opt for the single core machines rather than multi-core machines.
Reasons:
Isolate fault domains
Scaling out rather than up is better to do when possible because it naturally isolates faults. If one of your small nodes crashes, that only affects one process. If a large node crashes, multiple processes go down.
Load balancing
Windows Azure, like any multi-tenant system, is a shared resource. This means you will likely be competing for CPU cycles with other workloads. Having small VMs gives you a better chance of having them distributed across physical servers in the datacenter that have the best resource situation at the time the machines are provisioned (you would want to make sure to stop and deallocate the VMs before starting them again to allow the Azure fabric placement algorithms to select the best hosts). If you used large VMs, it would be less likely to find a suitable host with optimal contention to accommodate many virtual cores.
Virtual processor scheduling
It's not widely understood how scheduling a virtual CPU differs from scheduling a physical one, but it is something worth reading up on. The main thing to remember is that hypervisors like VMware ESXi and Hyper-V (which runs Azure) schedule multiple virtual cores together rather than separately. So if you have an 8-core VM, the physical host must have 8 physical cores free simultaneously before it can allow the virtual CPU to run. The more virtual cores, the less likely the host will have sufficient physical cores available at any given time (even if 7 physical cores are free, the VM cannot run). This can have the paradoxical effect of making the VM perform worse as more virtual CPU cores are added to it. http://www.perfdynamics.com/Classes/Materials/BradyVirtual.pdf
In short, a single vCPU machine is more likely to get a share of the physical processor than an 8 vCPU machine, all else equal.
And I agree that the pricing is basically the same, except for a little more storage cost to store many small VMs versus fewer large ones. But storage in Azure is far less expensive than the compute, so likely doesn't tip any economic scale.
Hope that helps.
Billing
According to the Windows Azure Virtual Machines Pricing Details, Virtual Machines are charged by the minute (of wall clock time). Prices are listed as hourly rates (60 minutes) and are billed based on the total number of minutes when the VMs run for a partial hour.
In July 2013, 1 Small VM (1 virtual core) costs $0.09/hr; 8 Small VMs (8 virtual cores) cost $0.72/hr; 1 Extra Large VM (8 virtual cores) costs $0.72/hr (the same as 8 Small VMs).
VM Sizes and Performance
The VM sizes differ not only in the number of cores and RAM, but also in network I/O performance, ranging from 100 Mbps for Small to 800 Mbps for Extra Large.
Extra Small VMs are rather limited in CPU and I/O power and are inadequate for workloads such as the one you described.
For single-threaded, I/O-bound applications such as the one described in the question, an Extra Large VM could have an edge because of faster response times for each request.
It's also advisable to benchmark workloads running 2, 4 or more processes per core: for instance, 2 or 4 processes in a Small VM and 16, 32 or more processes in an Extra Large VM, to find the right balance between CPU and I/O load (provided you don't use more RAM than is available).
Auto-scaling
Auto-scaling Virtual Machines is built into Windows Azure directly. It can be based either on CPU load or on Windows Azure Queue length.
Another alternative is to use specialized tools or services to monitor load across the servers and run PowerShell scripts to add or remove virtual machines as needed.
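For the queue-based variant, a rough C# sketch of the scale-out decision using the classic WindowsAzure.Storage SDK (the connection string, queue name and threshold below are placeholders):

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class ScaleCheck
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse("<storage connection string>");
        var queue = account.CreateCloudQueueClient().GetQueueReference("work-items");

        // ApproximateMessageCount is only populated after FetchAttributes
        queue.FetchAttributes();
        int depth = queue.ApproximateMessageCount ?? 0;

        if (depth > 1000)
        {
            // Deep backlog: add an instance here via the management API,
            // or let the built-in auto-scale rule react to this metric
        }
    }
}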
Auto-run
You can use the Windows Task Scheduler to automatically run tasks when Windows starts.
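For example, with the built-in schtasks tool (the task name and executable path below are placeholders):

schtasks /create /tn "StartWorkerApp" /tr "C:\apps\worker.exe" /sc onstart /ru SYSTEM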
The pricing is "uptime of the machine in hours * rate of the VM size per hour * number of instances".
E.g. if you have one 8-core VM (Extra Large) running for a month (30 days):
(30 * 24) * $0.72 * 1 = $518.40
For 8 single cores it will be:
(30 * 24) * $0.09 * 8 = $518.40
So I doubt there will be any price difference. One advantage of using smaller machines and "scaling out" is that you get more granular control over scalability: an Extra Large machine will eat more idle dollars than 2-3 small machines.
Yes, you can definitely script this. Assuming they are IaaS machines, you could add the script to Windows startup; if on PaaS, you could use a "Startup Task".

IIS, multiple CPU cores, application pools and worker processes - best configuration for a single site?

We use Kentico CMS and I've exchanged emails with them about a web garden deployment.
We have a single site running on a server with 8 CPU cores. In line with Kentico's advice, we have not altered the application pool's web garden setting from the default, i.e. it is set to a maximum of 1 worker process.
Our experience is that the site only uses one of the CPU cores; the others are idling. When I emailed them about this, their response was that the OS/IIS would handle this and use other cores as necessary, even though the application pool only has a single worker process.
Now, I've a lot of respect for the guys at Kentico, but this doesn't seem right to me.
Surely, if we want to use all the cores, we need to permit eight worker processes (and implement session state storage in SQL Server)?
Many thanks
Tony
I would suggest running perfmon for 24 hours and seeing if you can determine what resources are being used. Indeed, the site might already be running on all cores. Also, if the web app is a heavily threaded system, it will take full advantage of multiple cores (at least ours does). Threads, not worker processes, are what actually count for processor utilization.
Not sure if you got an answer on ServerFault; at any rate, ASP.NET is multi-threaded, and within a single worker process there are several threads, each serving a single request.
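As a quick illustration that a single process can light up every core, a small C# sketch:

using System;
using System.Threading.Tasks;

class AllCores
{
    static void Main()
    {
        // One process, many threads: each spins on its own core for 5 seconds,
        // which should show close to 100% utilization on every core in Task Manager
        Parallel.For(0, Environment.ProcessorCount, i =>
        {
            var until = DateTime.UtcNow.AddSeconds(5);
            while (DateTime.UtcNow < until) { }
        });
    }
}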
