Single-threaded app with multiple vCPUs? - node.js

As I learned from various websites like this one, a vCPU represents a share of time on the physical CPU core(s) that is assigned to a VM where some process is running. And also:
As threads execute (vCPUs are used) they are cycled around the
physical CPUs.
Does that mean that if I assign more vCPUs to my VM, then my app's (node.js) thread will be able to use all of these vCPUs and run faster, even though node.js is single-threaded?
P.S. If I picked the wrong place (forum) to ask this question, please tell me which forum I can get an answer on.

Related

How to ensure node.js process will run on different thread?

So, for example, there would be a service 1 running on http://127.0.0.1:5000 that runs on thread 1.
And I would like to run a service 2 on http://127.0.0.1:5001 that would run on any thread but thread 1.
Is it possible to do something like that?
First off, I think you meant to say "CPU core" instead of "thread". Code runs in a thread and a thread runs on a CPU core when it is running. A process may contain one or more threads. In fact, a nodejs process contains several threads, one thread for running your Javascript, but other threads are involved in running the overall nodejs process.
Which CPU core a given thread runs on is up to the operating system.
Normally with a multi-core CPU, two processes that are trying to run at the same time will be assigned to different CPU cores. This is a dynamic thing inside the OS and can change from time to time as different threads/processes are time sliced. Processes of any kind (including nodejs processes) are not hard bound to a particular core and threads within those processes are not hard bound to a particular core either.
The operating system decides how to allocate CPU cores to each thread based on which threads in which processes are vying for time to run, and it is a dynamically changing assignment that depends upon demand. If more threads are trying to run than there are cores, then each thread will get slices of time on a CPU core; they will all share the CPU cores, each making progress, but none getting to hog a core all to itself.
If your two services, one running on port 5000 and one running on port 5001, are both nodejs apps, then the operating system will dynamically allocate CPU cores to each of them upon demand. Neither of those two service processes is bound to a specific core. If they are both heavily busy at the same time, you have a multi-core CPU, and there's not a lot else in the computer also contending for CPU time, then each service's main thread (the one that runs your Javascript) will have a different CPU core to run on.
But, keep in mind that this is a dynamic assignment. If you have a four core CPU and all of a sudden several other things start up on your computer and are also contending for CPU resources, then the CPU cores will be shared across all the threads/processes contending for CPU resources. The sharing is done via rotation in small time slices and can incorporate a priority system too. The specific details of how that works vary by operating system, but the principle of "time-sharing" the available CPU cores among all those threads requesting CPU resources is the same.

Tuning of Server Thread pool

I was just playing with threads to see how much CPU they consume. I checked two scenarios.
In the first scenario I created four threads and started them with infinite loops. Those threads soon consumed all 4 of my CPU cores; checking the performance monitor in Task Manager, I found CPU consumption at 100%.
In the second scenario I tried it with a web application, running an infinite loop in a REST controller (using Tomcat 8.5). So if I request the URL 4 times from a browser (in different tabs, obviously), my CPU consumption should be 100%. But I couldn't see 100% CPU consumption.
Why is there a difference?
My second question is: how should I tune the server thread pool? I have to use more than 4 threads, because it is possible that a few of them are waiting on IO operations. I am using Hibernate as the ORM, which maintains connection pooling. So how many threads should I use in the thread pool, as well as in the connection pool? How do I decide?
We can't answer the first part of your question without seeing your code. But I suspect the problem is in the way that you have implemented the threads in the webapp case. (Because what you report shouldn't happen ...)
The answer to the second part is "trial and error". More specifically:
1) Make the pool sizes tunable parameters.
2) Develop a benchmark that is representative of your expected system load.
3) Run the benchmark with different settings, measure performance, and graph the results.
4) Based on the graph (and other criteria), pick the settings that are the best compromise between performance and resource (e.g. memory) utilization.
Thread pools and connection pools are different, and have different resource implications. The first is (largely) about memory; i.e. thread stacks and temporary objects used by the threads while they are active. The second is (largely) about resources associated with connections (active or idle).

Role cannot be reached by the host system Azure- WorkerRole

I'm using Worker Role machines (Medium -> 2 cores with 3.5 GB of RAM) to do massive work, and I'm able to use 100% of the CPU (on both cores) and 85% of RAM.
During this work, which takes around 20 to 40 minutes per run, Azure decides the machine is unhealthy and stops all my work.
In the Portal I see my worker instances getting the message "Waiting for the status (Role cannot be reached by the host system)".
Does anyone know a workaround that doesn't involve:
1) Using a more powerful Role with cores that I will not use
2) Reducing the CPU usage of my application (100% CPU usage is what we want)
Thanks in advance
Rui
Try this:
Thread.CurrentThread.Priority = ThreadPriority.BelowNormal
Maybe some other processes or threads need lower priorities as well, but this should keep the CPU utilization at 100%.
For (external) processes, start them with the following code (this is VB, but you should be able to convert it to your language):
Dim myprocess As New System.Diagnostics.Process()
myprocess.StartInfo.FileName = "C:\the\path\to\the\the\process.exe"
myprocess.Start()
' the priority class can only be set once the process has started
myprocess.PriorityClass = ProcessPriorityClass.BelowNormal
You could set the priority of the worker role's current process, but this may interact with other processes, so watch out; it's better to lower the priority of the demanding process. That won't slow it down unless there is other work to be performed.
Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.AboveNormal
This is something that is affecting a service I'm running in a Windows Azure as well.
I have just tried manually setting the Priority of WaAppAgent to High. Hopefully that helps.
But really this shouldn't be my problem. Sometimes my database is running at 100% CPU, and that is really the WORST possible time for a restart.
I really don't want to over-provision resources just so some heartbeat will be happy. Do the VM instances have a heartbeat event as well? Maybe the solution is to switch to using a VM instead of a PaaS role?

How to achieve maximum concurrency on a Azure Worker role

Is it possible to find out how many concurrent threads we can start on a worker role, considering the current environment we are running on? It seems that we can't really rely on the number of cores, as resource sharing is not directly tied to one physical core.
Taking a Small instance as an example, how many concurrent background workers can I have running simultaneously (CPU-bound, of course!)?
Is there any way we can determine that dynamically as well (if we choose to scale up to another type of instance)?
Thanks

IIS, multiple CPU cores, application pools and worker processes - best configuration for a single site?

We use Kentico CMS and I've exchanged emails with them about a web garden deployment.
We have a single site running on a server with 8 CPU cores. In line with Kentico's advice, we have not altered the application pool's web garden setting from the default, i.e. it is set to a maximum of 1 worker process.
Our experience is that the site only uses one of the CPU cores - the others are idling. When I emailed them about this, their response was that the OS/IIS would handle this and use other cores as necessary, even though the application pool only has a single worker process.
Now, I've a lot of respect for the guys at Kentico, but this doesn't seem right to me?
Surely, if we want to use all cores, we need to permit eight worker processes (and implement session state storage in SQL Server)?
Many thanks
Tony
I would suggest running perfmon for 24 hours and seeing if you can determine what resources are being used. Indeed, the site might already be running on all cores. Also, if their web app is a heavily threaded system, then it will take full advantage of multiple cores (at least ours does). Threads, not worker processes, are what actually count for processor utilization.
Not sure if you got an answer on ServerFault; at any rate, ASP.NET is multi-threaded, and in a single worker process there are several threads, each serving a single request.
