I'm writing a network test program.
The idea is to have two threads: the client and the server.
I want to add some signaling between the two threads.
Basically:
The main thread runs the server and creates a new thread for the client.
The server thread waits for the client to connect.
The client thread sends some data and notifies the server thread.
The client thread waits.
The server reads the data, checks that it is intact, and notifies the client thread to send more data.
This repeats in a loop a number of times, but the server does not know how many.
After all the data has been sent, the client thread notifies the server thread to exit its loop, and the server thread (the main thread) joins the client thread.
I have implemented this using a global std::condition_variable and global variables, and it works. I'm writing several of these test functions; each one does what I described above, but with different data.
Here are some questions that I have:
I found std::promise and std::future. I like the fact that a future waits for a value to be set in another thread. Could I use this instead of std::condition_variable? In general, what are the use cases for one method over the other when waiting for a variable to be set? What are the differences and the advantages/disadvantages?
Would it be better to declare the std::condition_variable and the variables locally in each test function and pass references to the thread, instead of using globals? For some reason I don't like global variables. What would be better practice? (A sketch of this follows these questions.)
Do you need to join a thread if you are certain it will end before the main thread? My client thread notifies the server thread (the main thread) when it is done sending and will exit, so really the server thread is waiting for the client thread to exit. Do I still need to join it in the main thread?
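For reference, here is a stripped-down sketch of the signaling I described, with everything declared locally and passed into the thread by reference (the socket I/O is elided, and the fixed round count is only a stand-in, since in my real tests only the client knows when it is done). From what I can tell, a std::promise/std::future pair can deliver only one value, so a repeated back-and-forth like this would need a fresh pair per round, which is why the condition_variable seems the better fit:

#include <condition_variable>
#include <mutex>
#include <thread>

void test_function() {
    // All state is local; the client lambda captures it by reference.
    std::mutex m;
    std::condition_variable cv;
    bool data_ready = false;  // client -> server: a chunk was sent
    bool done = false;        // client -> server: no more data

    std::thread client([&] {
        for (int round = 0; round < 5; ++round) {  // server never knows the count
            // ... send a chunk over the socket ...
            std::unique_lock<std::mutex> lk(m);
            data_ready = true;
            cv.notify_one();
            cv.wait(lk, [&] { return !data_ready; });  // wait for the server's ack
        }
        std::lock_guard<std::mutex> lk(m);
        done = true;
        cv.notify_one();
    });

    // Server loop, running on the main thread.
    for (;;) {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return data_ready || done; });
        if (done) break;
        // ... read the chunk and check that it is intact ...
        data_ready = false;  // the ack: tell the client to send more
        cv.notify_one();
    }
    client.join();  // still join: a std::thread must not be destroyed
                    // while joinable, even if it has already finished
}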
Related
Node.js is single-threaded, but from my understanding it uses libuv for some types of asynchronous tasks. By default there are only 4 libuv threads.
In the scenario below, I would like to know what gets executed on the main event-loop thread and what gets executed on libuv threads.
When I call my API, it in turn calls Elasticsearch, an RDS database, and another API. Once data is received from all these resources, it combines the information into a JSON response.
I know that DNS resolution uses libuv.
For sending out requests, does it use libuv, or is that done from the main event-loop thread?
When Node.js calls another API and it takes time to respond, is there any thread involved in keeping the connection open while it waits for the response?
When doing the compute to combine all this data, does it run on the main event-loop thread or on a libuv thread?
That Node.js is single-threaded means it has one main thread that takes care of our operations, and for the rest of the asynchronous work it gives the task to another thread.
So, as per my understanding, a callback results in offloading the current task to a separate thread.
app.post("/get", (req, res) => {
  res.status(200).json({ message: "Success" });
});
Does the above statement execute in another thread?
Everything in a Javascript program runs in one thread (unless you use workers, which your example does not).
Javascript runs with a main loop. The main loop reads a queue of events and processes them one by one. All sorts of things turn into events in that queue: incoming web requests, timeouts, intervals, promises awaiting resolution, messages, you name it. It's more complex than this, of course.
It's good that it's all one thread: you don't have to worry about thread concurrency.
There's a downside: if you write a long-running method, its execution will block the main loop until it's finished. This makes web pages janky and Node.js programs slow to respond. Use workers (separate threads) for that kind of work.
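A caricature of the model (sketched in C++ for concreteness; this is the idea, not Node's actual implementation):

#include <functional>
#include <queue>

int main() {
    // The "queue of events": each entry is a handler to run.
    std::queue<std::function<void()>> events;
    events.push([] { /* incoming web request -> run its .post() handler */ });
    events.push([] { /* timer fired -> run its callback */ });
    while (!events.empty()) {  // the main loop
        events.front()();      // one long-running handler stalls here,
        events.pop();          // delaying every event queued behind it
    }
}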
The method in a .post() like yours is run in response to an event on the queue announcing an incoming web request.
Read about the main loop.
I am learning Node.js, and I understand the heart of Node.js is the reactor pattern, which is based on the event loop.
When any event occurs, it goes to the event queue and then gets picked up by the stack once its running tasks are over. That happens if the event is non-blocking; if it is a blocking request, the event loop passes it to a thread from libuv's thread pool.
Now my doubts are:
1) Once the execution is over, does the libuv thread pass the request back to the event queue or to the event loop? Different tutorials describe different scenarios.
2) The thread pool in libuv has only three or so threads. Now suppose 10 users try to log in, everyone at the same time (in some app like Facebook), and the threads are blocked because they want to connect to the DB. How will only three or so threads handle so much load?
I am really confused and have not found a good explanation of these doubts anywhere; any help will be appreciated.
When any event occurs it goes to the event queue and then gets picked up by the stack
The event does not get picked up by the stack; it is pushed onto the call stack by the event loop.
if it is a blocking request then event loop passes it to a thread from a thread pool of libuv.
There are ONLY FOUR things that use the thread pool - DNS lookup, fs, crypto and zlib. Everything else executes on the main thread, blocking or not.
So logging in is a network request, and the thread pool does not handle it. Neither libuv nor Node has any code for the low-level operations involved in a network request. Instead, libuv delegates the request to the underlying operating system and then just waits for the OS to signal that a response has come back. The OS keeps track of connections in its network stack, but the network I/O itself is handled by your network hardware and your ISP.
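To make that delegation concrete, here is a minimal sketch against libuv's own public API: the TCP connect below is driven entirely by the event loop and the operating system, with no worker-pool thread involved (the address is illustrative):

#include <uv.h>
#include <stdio.h>

static void on_connect(uv_connect_t* req, int status) {
    // Runs on the loop thread when the OS reports the result.
    printf("connect finished, status = %d\n", status);
    uv_close((uv_handle_t*)req->handle, NULL);
}

int main() {
    uv_loop_t* loop = uv_default_loop();
    uv_tcp_t sock;
    uv_tcp_init(loop, &sock);

    struct sockaddr_in dest;
    uv_ip4_addr("127.0.0.1", 80, &dest);
    uv_connect_t req;
    uv_tcp_connect(&req, &sock, (const struct sockaddr*)&dest, on_connect);

    // Single-threaded: the loop waits on the OS; no thread pool is used.
    return uv_run(loop, UV_RUN_DEFAULT);
}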
Once the execution is over, does the libuv thread pass the request back to the event queue or to the event loop?
From the Node.js point of view, does it make a difference?
2) The thread pool in libuv has only three or so threads. Now suppose 10 users try to log in, everyone at the same time (in some app like Facebook), and the threads are blocked because they want to connect to the DB. How will only three or so threads handle so much load?
libuv uses a thread pool, but not in a "naive" way. Most asynchronous requests are filesystem/TCP interactions, which are handled via select(). You only have to worry about the thread pool when you create a custom C++ module and manually dispatch a CPU- or I/O-blocking task.
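As an illustration of that last point, here is a minimal sketch of manually dispatching a blocking task via libuv's public uv_queue_work API (the callback names are made up for the example):

#include <uv.h>
#include <stdio.h>

static void slow_work(uv_work_t* req) {
    // Runs on a libuv thread-pool thread; it is safe to block here.
    // ... pretend this is an expensive computation or blocking I/O ...
}

static void after_slow_work(uv_work_t* req, int status) {
    // Runs back on the event-loop thread once the work is finished.
    printf("work done, status = %d\n", status);
    delete req;
}

int main() {
    uv_loop_t* loop = uv_default_loop();
    uv_work_t* req = new uv_work_t;
    uv_queue_work(loop, req, slow_work, after_slow_work);
    return uv_run(loop, UV_RUN_DEFAULT);
}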
I'm implementing a socket server.
All clients (up to 10k) are supposed to stay connected.
Here's my current design:
The main thread creates an event loop (using epoll by default) and a watcher for accepting clients.
The accept callback:
Accept the fd and set it to non-blocking mode.
Add a watcher for the fd to monitor read events.
The read callback:
Read the data and add a task to the thread pool to send the response.
Is it OK to move the read part to the thread pool, or is there a better idea?
Thanks.
Hard to say. You don't want 10k threads running in the background, so keep the read part in the main thread. This way, if all clients suddenly start asking for things, you pile up that work only in the thread-pool queue (you don't end up with 10k threads running at the same time). You might also get better performance this way, because you avoid some unnecessary context switches between your own threads. A sketch of this layout follows below.
On the other hand, if your clients are unlikely to send requests at the same time, or if the replies are very simple, it might be simpler to have one thread per client and avoid the context switches between the main thread and the thread pool.
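Roughly, the layout suggested above could look like this sketch: accept and read stay on the main epoll loop, and completed reads are handed off for the response. Error handling and the worker pool itself are trimmed, and the port number is arbitrary:

#include <fcntl.h>
#include <netinet/in.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

static void set_nonblocking(int fd) {
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
}

int main() {
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(9000);
    bind(listen_fd, (sockaddr*)&addr, sizeof(addr));
    listen(listen_fd, SOMAXCONN);
    set_nonblocking(listen_fd);

    int epfd = epoll_create1(0);
    epoll_event ev{};
    ev.events = EPOLLIN;
    ev.data.fd = listen_fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    epoll_event events[64];
    for (;;) {
        int n = epoll_wait(epfd, events, 64, -1);
        for (int i = 0; i < n; ++i) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {
                // Accept callback: accept, set non-blocking, watch for reads.
                int client = accept(listen_fd, nullptr, nullptr);
                set_nonblocking(client);
                epoll_event cev{};
                cev.events = EPOLLIN;
                cev.data.fd = client;
                epoll_ctl(epfd, EPOLL_CTL_ADD, client, &cev);
            } else {
                // Read callback: read on the main thread, respond via the pool.
                char buf[4096];
                ssize_t r = read(fd, buf, sizeof(buf));
                if (r <= 0) { close(fd); continue; }
                // Here you would enqueue {fd, data} to the thread-pool
                // queue, where a worker builds and sends the response.
            }
        }
    }
}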
Situation:
I am using named pipes on Windows for IPC, in C++.
The server creates a named pipe instance via CreateNamedPipe, and waits for clients to connect via ConnectNamedPipe.
Every time a client calls CreateFile to access the named pipe, the server creates a thread using CreateThread to service that client. After that, the server reiterates the loop: it creates a pipe instance via CreateNamedPipe, listens for the next client via ConnectNamedPipe, and so on.
Problem:
Every client request triggers a CreateThread on the server. If clients come fast and furious, there will be many calls to CreateThread.
Questions:
Q1: Is it possible to reuse already created threads to service future client requests?
If this is possible, how should I do this?
Q2: Would a thread pool help in this situation?
I wrote a named pipe server today using I/O completion ports, just to see how it's done.
The basic logic flow was:
I created the first named pipe via CreateNamedPipe.
I created the main I/O completion port object using that handle: CreateIoCompletionPort.
I created a pool of worker threads - as a rough rule of thumb, 2x the number of CPUs. Each worker thread calls GetQueuedCompletionStatus in a loop.
Then I called ConnectNamedPipe, passing in an OVERLAPPED structure. When this pipe connects, one of the GetQueuedCompletionStatus calls will return.
My main thread then joins the pool of workers by also calling GetQueuedCompletionStatus.
That's about it, really.
Each time a thread returns from GetQueuedCompletionStatus, it's because the associated pipe has been connected, has read data, or has been closed.
Each time a pipe is connected, I immediately create an unconnected pipe to accept the next client (there should probably be more than one waiting at a time) and call ReadFile on the current pipe, passing an overlapped structure - ensuring that GetQueuedCompletionStatus will tell me as data arrives.
There are a couple of irritating edge cases where functions return a failure code but GetLastError() reports success. Because the function "failed", you have to handle the success immediately, as no queued completion status was posted. Conversely (and I believe Vista adds an API to "fix" this), if data is available immediately, the overlapped functions can return success, but a queued completion status is ALSO posted, so be careful not to double-handle data in that case.
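A stripped-down skeleton of that flow, carrying per-operation state in a struct that wraps the OVERLAPPED; error paths, including the edge cases above, are omitted, and the pipe name is just for the example:

#include <windows.h>

struct PipeIo {                  // one of these per outstanding operation
    OVERLAPPED ov{};             // must be first: we cast back from LPOVERLAPPED
    HANDLE pipe{};
    enum Op { Connect, Read } op{};
    char buf[4096]{};
};

static HANDLE g_iocp;

static void queue_accept() {
    PipeIo* io = new PipeIo{};
    io->pipe = CreateNamedPipeA("\\\\.\\pipe\\demo",
        PIPE_ACCESS_DUPLEX | FILE_FLAG_OVERLAPPED,
        PIPE_TYPE_BYTE | PIPE_READMODE_BYTE,
        PIPE_UNLIMITED_INSTANCES, 4096, 4096, 0, nullptr);
    CreateIoCompletionPort(io->pipe, g_iocp, 0, 0);  // associate with the port
    io->op = PipeIo::Connect;
    ConnectNamedPipe(io->pipe, &io->ov);             // completes via the port
}

static DWORD WINAPI worker(LPVOID) {
    for (;;) {
        DWORD bytes = 0;
        ULONG_PTR key = 0;
        OVERLAPPED* ov = nullptr;
        if (!GetQueuedCompletionStatus(g_iocp, &bytes, &key, &ov, INFINITE))
            continue;                    // failed op or closed pipe: clean up here
        PipeIo* io = (PipeIo*)ov;
        if (io->op == PipeIo::Connect) {
            queue_accept();              // immediately post the next unconnected pipe
            io->op = PipeIo::Read;
            io->ov = OVERLAPPED{};
            ReadFile(io->pipe, io->buf, sizeof(io->buf), nullptr, &io->ov);
        } else {
            // 'bytes' bytes arrived in io->buf: service the request,
            // then post the next overlapped ReadFile on io->pipe.
        }
    }
}

int main() {
    g_iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, nullptr, 0, 0);
    for (int i = 0; i < 3; ++i)          // workers; the main thread joins below
        CreateThread(nullptr, 0, worker, nullptr, 0, nullptr);
    queue_accept();
    worker(nullptr);                     // main thread also services the port
}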
On Windows, the most efficient way to build a concurrent server is to use an asynchronous model with completion ports. But yes, you can use a thread pool with blocking I/O too, as that is a simpler programming abstraction.
Vista/Windows 2008 provide a thread pool abstraction.
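As a minimal sketch, that abstraction boils down to a few calls (the callback body here is just a placeholder for servicing a client):

#include <windows.h>
#include <cstdio>

static VOID CALLBACK service_client(PTP_CALLBACK_INSTANCE, PVOID context, PTP_WORK) {
    // Runs on a system pool thread; 'context' would carry the client's pipe handle.
    std::printf("servicing client, context = %p\n", context);
}

int main() {
    PTP_WORK work = CreateThreadpoolWork(service_client, nullptr, nullptr);
    SubmitThreadpoolWork(work);                   // one submit per client request
    WaitForThreadpoolWorkCallbacks(work, FALSE);  // FALSE: don't cancel pending calls
    CloseThreadpoolWork(work);
}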