I am looking into Node.js's internal architecture and was curious about the event loop, so I cloned the node repo, modified the libuv code a little, and added printf statements to understand the loop better. Then I built the code and ran an empty JS file.
I added a printf statement before the uv__io_poll function call:
printf("polling io with thread %d\n", tid);
where tid is the thread id.
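For context, the tid can be obtained on Linux roughly like this (a sketch, not necessarily the exact code used here; syscall(SYS_gettid) avoids relying on a glibc gettid() wrapper):

/* Sketch: obtaining the calling thread's kernel id on Linux. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

static void log_poll_thread(void)
{
    pid_t tid = (pid_t) syscall(SYS_gettid);   /* kernel thread id */
    printf("polling io with thread %d\n", (int) tid);
}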
After running the program, I got the following output:
polling io with thread 2242953
polling io with thread 2242949
polling io with thread 2242949
polling io with thread 2242949
Why are there 2 different thread IDs?
server coroutine and task_loop coroutine in same process
I want to create a program that contains a server receiving messages from other processes through a socket or Unix socket; the program also contains a main loop that keeps doing tasks.
I am thinking about two implementations:
The first is:
1. a thread for the server
2. a thread for the main loop
3. communication between the server and the main task loop via a queue
The second is:
1. a coroutine for the server
2. a coroutine for the main loop
The first would be more complicated because the threads have to be coordinated to avoid deadlock.
With the second, the server would fail to receive messages if the main loop keeps running. Is there any way to let the server coroutine run when a message arrives on the socket?
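One direction I am considering (a rough sketch with placeholder names; server_fd is assumed to be an already-connected socket): poll() the socket with a zero timeout on each iteration of the main loop, so the server part only runs when a message has actually arrived.

#include <poll.h>
#include <stdio.h>
#include <unistd.h>

/* Placeholder work and message handler (illustrative only). */
static void do_one_task(void) { /* keep doing tasks */ }

static void handle_message(int fd)
{
    char buf[256];
    ssize_t n = read(fd, buf, sizeof buf);
    if (n > 0)
        printf("got %zd bytes from the socket\n", n);
}

static void main_loop(int server_fd)
{
    struct pollfd pfd = { .fd = server_fd, .events = POLLIN };

    for (;;) {
        /* Zero timeout: just peek at the socket, never block the loop. */
        if (poll(&pfd, 1, 0) > 0 && (pfd.revents & POLLIN))
            handle_message(server_fd);   /* the "server" part */
        do_one_task();                   /* the "main loop" part */
    }
}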
What actually happens behind the scenes with asynchronous functions?
Does it open a new thread and let the OS start and run it?
If so, can it cause deadlocks or other thread problems?
Here's an example of an async method:
var fs = require('fs')
var file = process.argv[2]
fs.readFile(file, function (err, contents) {
  var lines = contents.toString().split('\n').length - 1
  console.log(lines)
})
fs.readFile(file, callback) is a non-blocking call, which means:
Node's main thread stores the callback in an event table and associates it with an event that will be emitted whenever the file-reading process is done.
At the same time, Node has several internal threads (a thread pool), and Node's main thread assigns the file-reading task to one of them.
After this assignment, control returns to the main thread, which continues with its other tasks while the file reading is done in the background by the other thread (not the main thread).
Whenever the file-reading process completes, the event associated with the callback is emitted along with the data from the file, and the callback is pushed into the task queue, from which the event loop tries to push each task onto the main thread's stack.
When the main thread's stack becomes available and there is no task queued ahead of the callback's task, the event loop pushes this callback onto the main thread's stack.
Please read about the event loop for more info.
So the thread responsible for reading the file doesn't cause a deadlock with the other threads. It simply emits an exception or a success, which is later handled by the callback.
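To make the flow above concrete, here is a minimal sketch in C of the same pattern (this is not Node's actual code, and every name below is illustrative): a worker thread does the blocking work, marks a completion, and the main loop runs the callback only once it sees that completion.

#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* One "completed operation": the result plus the callback to run for it. */
typedef struct {
    char data[256];
    void (*cb)(const char *);
    int ready;
} completion_t;

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void on_read(const char *contents)
{
    printf("callback on the main thread: %s\n", contents);
}

/* The "thread pool" part: does the blocking work off the main thread. */
static void *worker(void *arg)
{
    completion_t *c = arg;
    sleep(1);                          /* pretend this is a slow file read */
    pthread_mutex_lock(&lock);
    strcpy(c->data, "file contents");
    c->ready = 1;                      /* roughly: "emit the completion event" */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    completion_t c = { .cb = on_read, .ready = 0 };
    pthread_t t;
    pthread_create(&t, NULL, worker, &c);

    /* The "event loop" part: the main thread keeps going and only runs
     * the callback once the completion shows up. */
    for (;;) {
        pthread_mutex_lock(&lock);
        int done = c.ready;
        pthread_mutex_unlock(&lock);
        if (done) {
            c.cb(c.data);              /* the callback runs on the main thread */
            break;
        }
        /* ...the main thread is free to do other tasks here... */
        usleep(10 * 1000);
    }

    pthread_join(t, NULL);
    return 0;
}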
1) As I understand it, a Node.js server continues to listen on a port for any incoming requests, which means the thread is continuously busy? When does it break out of this continuous, never-ending loop and check whether there are any events to be processed from the callback queue?
2) When Node.js starts executing the code of callback functions, is the server essentially stopped? Is it not listening for further requests? I mean, since only a single thread is going to do both tasks, only one can be done at a time?
Is this understanding correct, or is there more to it?
Yes, you are right. Everything in Node.js runs on the main thread except I/O calls and fs file calls, which go to the OS kernel and the thread pool, respectively, for execution. All synchronous code runs on the main thread, and while that code is running, Node.js cannot process any further requests. That is why it is not advisable to run a long for loop: it may block the main thread for a long time, and you need to offload that work to child threads.
The Node thread keeps an event loop, and whenever any task completes, it fires the corresponding event, which signals the event-listener function to execute. The event loop simply iterates over the event queue, which is basically a list of events and callbacks of completed operations. In general, there is a main loop that listens for events and then triggers a callback function when one of those events is detected.
(event loop diagram; source: abdelraoof.com)
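To illustrate the "main loop that listens for events and triggers a callback" idea at the OS level, here is a minimal sketch (Linux-only, using epoll directly; Node itself goes through libuv, and the names below are illustrative):

#include <stdio.h>
#include <sys/epoll.h>
#include <unistd.h>

/* The "callback" for one registered event source. */
static int on_readable(int fd)
{
    char buf[256];
    ssize_t n = read(fd, buf, sizeof buf);
    if (n > 0)
        printf("callback fired, read %zd bytes\n", n);
    return n > 0;                            /* 0 or error: stop the loop */
}

int main(void)
{
    int ep = epoll_create1(0);

    struct epoll_event ev;
    ev.events = EPOLLIN;
    ev.data.fd = 0;                          /* watch stdin for readability */
    epoll_ctl(ep, EPOLL_CTL_ADD, 0, &ev);

    /* The event loop: block until something is ready, then run its callback. */
    for (;;) {
        struct epoll_event out;
        if (epoll_wait(ep, &out, 1, -1) > 0)
            if (!on_readable(out.data.fd))   /* dispatch to the "listener" */
                break;
    }

    close(ep);
    return 0;
}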
Similar event loop questions are here:
Node.js Event loop
Understanding the Event Loop
Source:
http://abdelraoof.com/blog/2015/10/28/understanding-nodejs-event-loop/
http://www.tutorialspoint.com/nodejs/nodejs_event_loop.htm
http://chimera.labs.oreilly.com/books/1234000001808/ch03.html#chap3_id35941348
I have a 3rd-party library which is non-blocking and has its own event loop; it accepts a pointer to a callback function and executes it on the same thread. What I want is to post an event from that thread to the nginx main thread, something like ngx_add_timer but without the time option, to safely add an event to the nginx main event loop.
So, very late to the party here, but I found this thread in my research and wanted to post the solution I came up with.
Nginx has a mechanism for posting from a worker thread - one that is perhaps running another event loop - to the main thread of the nginx worker process. That mechanism is ngx_post_event, which lets you post an event handler that will be invoked at some point in the future by the main thread.
You have to choose an event queue to post it on, but whatever you're doing, the answer is certainly &ngx_posted_events.
Here we come to the problem (and a solution): if you do this, your event handler will not be invoked in a timely manner, because the main nginx worker-process thread is waiting on I/O. It won't even deign to look at the posted-events queue until it has some 'real' work to do from I/O.
The solution that's working for me currently (and bear in mind this is Linux-only) is to send the main thread a signal, which wakes it up from its epoll_wait reverie so it can get to work on the pipeline coming from the other thread.
So here's what worked:
First grab the id of the worker process main thread and hold it in some process-global state:
// In your C source:
static pthread_t nginx_thread;
// In some code that runs once at nginx startup (in my case the module's preconfiguration step)
nginx_thread = pthread_self();
Now, when you want to post your callback, use the ngx_post_event call I mentioned earlier, then send a SIGIO signal to the main thread to wake up the epoll_wait operation:
// Post the event and wake up the Nginx epoll event loop
ngx_post_event( event_ptr, &ngx_posted_events );
pthread_kill( nginx_thread, SIGIO );
The SIGIO signal is handled in the main Nginx signal handler and is ignored (well, that's what the log says), but crucially it causes the posted events to be processed immediately.
That's it - and it seems to be working so far... please do point out anything stupid I've done.
To complete the story, you'll need the following #includes:
#include <pthread.h>
#include <signal.h>
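Pulling those snippets together, the posted event itself can be set up along these lines - a sketch rather than my exact module code, with my_handler, my_event, and wake_nginx as placeholder names (the ngx_event_t fields used are handler, data, and log; check them against your nginx version):

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_event.h>
#include <pthread.h>
#include <signal.h>

static pthread_t   nginx_thread;   /* captured with pthread_self() at startup */
static ngx_event_t my_event;       /* the event posted from the other thread  */

/* Runs later on the nginx worker's main thread. */
static void my_handler(ngx_event_t *ev)
{
    /* safe to touch nginx structures here */
}

/* Called from the 3rd-party library's thread. */
static void wake_nginx(void *ctx, ngx_log_t *log)
{
    my_event.handler = my_handler;
    my_event.data    = ctx;
    my_event.log     = log;

    ngx_post_event(&my_event, &ngx_posted_events);  /* queue it for the main thread   */
    pthread_kill(nginx_thread, SIGIO);              /* break epoll_wait so it runs soon */
}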
I have a thread with a TCP socket that connects to a server and waits for data in a while loop, so the thread never ends. When the socket receives data, it is parsed, and based on the opcode of the packet it should call function x. What's the fastest/best way to go about that?
I have read that some kind of task/message-queue system is one way of doing it, but I'm not sure whether there are better options.
I should mention that I cannot use Boost.
Edit: Sorry, half asleep haha.
Here is the loop from thread x:
while (Running)
{
    if (client.IsConnected())
    {
        Recieve();
    }
    FPlatformProcess::Sleep(0.01);
}
In the Receive function the data is parsed, and based on the packet opcode I need to be able to call a function on the main thread (the GUI thread), because a lot of the packets spawn GUI objects, and I can't create GUI objects from any thread other than the main one.
So basically: I have a main thread that spawns a new thread, which enters a loop and listens for data, and I need to be able to trigger, from that second thread, a function that runs on the main thread.
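The task/message-queue idea I mentioned would look roughly like this sketch (generic C with pthreads, placeholder names; a real implementation would use whatever "run on main thread" facility the GUI framework provides): the socket thread pushes small task records into a mutex-protected queue, and the main (GUI) thread drains that queue once per loop iteration.

#include <pthread.h>
#include <stdlib.h>

/* A queued task: which function to run on the main thread, plus its opcode. */
typedef struct task {
    void (*fn)(int opcode);
    int opcode;
    struct task *next;
} task_t;

static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static task_t *q_head = NULL, *q_tail = NULL;

/* Socket thread: after parsing a packet, queue the function to run later. */
static void post_task(void (*fn)(int), int opcode)
{
    task_t *t = malloc(sizeof *t);
    t->fn = fn;
    t->opcode = opcode;
    t->next = NULL;

    pthread_mutex_lock(&q_lock);
    if (q_tail) q_tail->next = t; else q_head = t;
    q_tail = t;
    pthread_mutex_unlock(&q_lock);
}

/* Main (GUI) thread: drain the queue once per loop iteration / frame. */
static void drain_tasks(void)
{
    pthread_mutex_lock(&q_lock);
    task_t *t = q_head;
    q_head = q_tail = NULL;
    pthread_mutex_unlock(&q_lock);

    while (t) {
        task_t *next = t->next;
        t->fn(t->opcode);    /* safe to create GUI objects here */
        free(t);
        t = next;
    }
}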