Threading in .NET Core Worker Services

What is the intended usage of Worker Services in .NET Core 3 with respect to threading? Is it expected that the need for multiple threads be handled as background threads within a single Worker, or is there a pattern for adding multiple Workers within CreateHostBuilder() that is respected by all of its extension methods?

Related

When is it better to use clustering or worker_threads?

I have been reading about multi-processing in Node.js to understand it better and to get good performance from my code under heavy load.
Although I understand the basic purpose of, and concepts behind, the different ways to make the most of the machine's resources, some questions arise as I go deeper, and I can't find the specific answers in the documentation.
Node.js on a single thread:
Node.js runs a single thread that we call the event loop, although in the background the OS and libuv maintain the default worker pool for asynchronous I/O tasks.
The event loop is supposed to use a single core, while the workers may end up on different cores. I guess the OS scheduler sorts that out in the end.
Node.js as multi-threaded:
When using the worker_threads module, separate instances of V8/libuv run for each thread within the same single process. The threads therefore live in the same process and communicate with each other via a MessagePort and the rest of the API.
Each worker thread runs its own event loop. The threads are supposed to be wisely balanced among CPU cores, improving performance. I guess the OS scheduler sorts that out in the end.
Question 1: When a worker uses the default I/O worker pool, is the very same pool somehow shared with the other workers, or does each worker have its own default worker pool?
Node.js with multi-processing:
When using the cluster module, we split the work among different processes. Each process is set on a different core to balance the load... well, it is really each process's main event loop that ends up on a different core, so it doesn't share a core with another heavy event loop. It sounds smart to do it that way.
Here I would communicate between the processes with some IPC mechanism.
Question 2: And what about the default worker pool of each of these Node.js processes? Where are those threads? Balanced among the rest of the cores, as in the first case? Then they might be on the same cores as the other worker pools of the cluster, I guess. Wouldn't it be more accurate to say that we are balancing main threads (event loops) rather than "the processes"?
With all this said, the main question:
Question 3: Is it better to use clustering or worker_threads? If both are used in the same code, how can the two libraries cooperate for the best performance? Or can they simply get in conflict? Or, in the end, is it the OS that takes control?
Each worker thread has its own main loop (libuv, etc.). So does each cloned Node.js process when you use clustering.
Clustering is a way to load-balance incoming requests to your Node.js server over several copies of that server.
Worker threads are a way for a single Node.js process to offload long-running functions to a separate thread, to avoid blocking its own main loop.
Which is better? It depends on the problem you're solving. Worker threads are for long-running functions. Clustering makes a server able to handle more requests, by handling them in parallel. You can use both if you need to: have each Node.js cluster process use a worker thread for long-running functions.
As a first approximation for your decision-making: only use worker threads when you know you have long-running functions.
The Node.js processes (whether from clustering or worker threads) don't get tied to specific cores (or Intel processor threads) on the host machine; the host's OS scheduler assigns cores as needed, and it minimizes context-switch overhead when assigning cores to runnable processes. If you have too many active JavaScript instances (cluster instances + worker threads), the host OS will give them timeslices according to its scheduling algorithms. Other than avoiding too many JavaScript instances, there's very little point in trying to second-guess the OS scheduler.
Edit: Each Node.js process uses a single libuv thread pool, shared between the main thread and all of its worker threads. If your Node.js program uses many worker threads, you may, or may not, need to set the UV_THREADPOOL_SIZE environment variable to a value greater than the default of 4.
Node.js's cluster functionality uses the underlying OS's fork/exec scheme to create a new OS process for each cluster instance. So, each cluster instance has its own libuv pool.
If you're running things at scale, let's say with more than ten host machines running your Node.js server, then it can be worth spending time optimizing your JavaScript instances.
Don't forget nginx if you use it as a reverse proxy to handle your HTTPS work. It needs some processor time too, but it uses fine-grained multithreading, so you won't have to worry about it unless you have huge traffic.

Do web workers work properly if the client only has a one core CPU?

Pure curiosity: I'm just wondering if there is any case where a web worker would manage to execute on a separate thread if only one thread is available on the CPU, maybe with some virtualization, or by using the GPU?
Thanks!
There seem to be two premises behind your question: firstly, that web workers use threads; and secondly that multiple threads require multiple cores. But neither is really true.
On the first: there's no actual requirement that web workers be implemented with threads. User agents are free to use processes, threads or any "equivalent construct" [see the web worker specification]. They could use multitasking within a single thread if they wanted to. Web worker scripts run concurrently with browser JavaScript, but not necessarily in parallel with it.
On the second: it's quite possible for multiple threads to run on a single CPU. It works a lot like concurrent async functions do in single-threaded JavaScript.
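That single-core interleaving can be sketched with plain async functions, which all share one JavaScript thread (the task bodies are illustrative):

```javascript
// Two "tasks" take turns on the single JavaScript thread, much as an
// OS time-slices multiple threads onto a single core.
const log = [];

async function task(name) {
  for (let i = 0; i < 3; i++) {
    log.push(`${name}:${i}`);
    await Promise.resolve(); // yield control back to the event loop
  }
}

Promise.all([task('A'), task('B')]).then(() => {
  console.log(log.join(' ')); // → A:0 B:0 A:1 B:1 A:2 B:2
});
```

Neither task ever runs in parallel with the other, yet both make steady progress; that is the behavior a web worker falls back to on a one-core machine.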
So yes, in answer to your question: web workers do run properly on a single-core client. You will lose some of the performance benefits, but the code will still behave as it would on a multi-core system.

Migrating a running process/thread to different core

Is there any way to migrate a currently running process to a different CPU core by triggering the migration from another process?
Here is what I am trying to do in more detail.
I am working on a heterogeneous processor system. I have a multi-threaded application which runs on the system. I want to migrate one of the threads to a different core (with different capabilities) whenever my manager process decides.
Can my manager process trigger the thread migration for a particular tid of the target application's pid?
If so, can it be done instantaneously, i.e., can the running thread be immediately migrated to another core (say from core 0 to core 1) when triggered by my manager process?
I guess this should be possible (if you are using the POSIX threads API) using pthread_setaffinity_np(3):
The pthread_setaffinity_np() function sets the CPU affinity mask of the thread thread to the CPU set pointed to by cpuset. If the call is successful, and the thread is not currently running on one of the CPUs in cpuset, then it is migrated to one of those CPUs.

Info about thread count in OSGi

Is there a command in OSGi to get information about the thread pool? E.g. minimum number of threads, current number of threads ... etc.
An OSGi framework does not know anything about thread pools. A framework implementation has some threads for asynchronous tasks like event dispatching, but otherwise does not create threads/thread pools for the bundles. Any threads/thread pools created by bundles are unknown to the framework.

Using asynchbeans instead of native JDK threads

Are there any performance limitations to using IBM's asynchbeans?
My app's JVM core dumps are showing numerous occurrences of orphaned threads. I'm currently using native, unmanaged JDK threads. Is it worth changing over to managed threads?
From my perspective, asynchbeans are a workaround to create threads inside the WebSphere J2EE server. So far so good: WebSphere lets you create a pool of "worker" threads, thereby controlling the maximum number of threads, a typical J2EE scalability concern.
I had some problems using asynchbeans inside WebSphere on "unmanaged" threads (hacked callbacks from a JMS listener via the "outlawed" setMessageListener). I was "asking for it" by not using MDBs in the first place, but I have requirements that do not fit the MDB model.
