Can libuv run in a (mobile) game client's main loop?

I'm looking for a networking library for the client side of a mobile game.
I'm currently using cocos2d-x with Lua for client-side programming.
I'm considering using libuv or luv (a Lua binding for libuv) in my client-side networking library (TCP).
The plan is to run the libuv loop in UV_RUN_ONCE or UV_RUN_NOWAIT mode each frame of the game's main loop (on the main thread). When libuv receives network data, it calls my Lua callback functions on the game main thread (where the Lua VM runs), and my Lua code decrypts, decodes, and processes the message.
My questions are:
Will libuv slow down my game main loop and drop my FPS when network traffic goes up (within a reasonable message size and volume, of course)?
Do I need to run the libuv loop in another thread and wrap the callbacks so that my Lua functions are invoked later on the main thread?

Yes, you can, with UV_RUN_NOWAIT: it processes every event already queued in the event loop, and if there are none it simply returns.
UV_RUN_ONCE, by contrast, will block until there is at least one event in the queue.
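For illustration, here is a minimal sketch (C++, using the plain libuv C API; the game_tick function and the loop ownership are assumptions, not part of the question) of pumping the loop once per frame:

#include <uv.h>

// Called once per frame from the game's main loop (main thread).
// UV_RUN_NOWAIT polls for ready events and returns immediately, so a frame
// never blocks waiting on network traffic.
void game_tick(uv_loop_t* loop)
{
    uv_run(loop, UV_RUN_NOWAIT);
    // ... run queued Lua callbacks, update game state, render ...
}

With this setup the callbacks fire on the main thread, where the Lua VM already lives, so no locking is needed around the VM.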

Related

In Node.js, what runs on the main thread and what runs on the libuv threads?

Node.js is single-threaded, but from my understanding it uses libuv for some types of asynchronous tasks. By default there are only 4 libuv threads.
In the scenario below I would like to know what gets executed on the main event-loop thread and what gets executed on the libuv threads.
When I call an API, it in turn calls Elasticsearch, an RDS database, and another API. Once data is received from all these resources, it combines the information into a JSON response.
I know that for DNS resolution it uses libuv.
For sending out requests, does it use libuv, or is it done from the main event-loop thread?
When Node.js calls another API and it takes time to respond, is there any thread involved in keeping the connection open while it waits for the response?
When doing the computation to combine all this data, does it run on the main event-loop thread or on a libuv thread?

Node.js: how does the libuv thread pool work?

I am learning Node.js. I understand that the heart of Node.js is the reactor pattern, which is based on the event loop.
When any event occurs, it goes to the event queue and then gets picked up by the stack once its running tasks are finished. This happens if the event is a non-blocking one, but if it is a blocking request, the event loop passes it to a thread from libuv's thread pool.
Now my doubts are:
Once the execution is over, does the libuv thread pass the request back to the event queue or to the event loop? Different tutorials describe different scenarios.
The thread pool in libuv has only 3 threads. Now suppose 10 users try to log in, all at the same time (some app like Facebook or so), and the threads are blocked because they want to connect to the DB. How will only three threads handle so much load?
I am really confused and did not find a good explanation of these doubts anywhere; any help would be appreciated.
When any event occurs it goes to the event queue and then gets picked up by the stack
The event does not get picked up by the stack; it is passed to the call stack by the event loop.
if it is a blocking request then event loop passes it to a thread from a thread pool of libuv.
There are ONLY FOUR things that use the thread pool: DNS lookups, fs, crypto and zlib. Everything else executes in the main thread, blocking or not.
So logging in involves a network request, and the thread pool does not handle it. Neither libuv nor Node has any code to handle the low-level operations involved in a network request. Instead, libuv delegates making the request to the underlying operating system, then simply waits for the operating system to signal that some response has come back. The OS keeps track of connections in its network stack, but the network I/O itself is handled by your network hardware and your ISP.
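To make the split concrete, here is a small illustrative libuv sketch (C++; the host name and port are made up): the DNS lookup goes through the thread pool because getaddrinfo() blocks, while socket I/O itself is driven by the OS poller on the single loop thread.

#include <uv.h>
#include <cstdio>

// getaddrinfo() has no portable non-blocking form, so libuv runs it on a
// thread-pool worker and invokes this callback back on the loop thread.
void on_resolved(uv_getaddrinfo_t* req, int status, struct addrinfo* res)
{
    std::printf("DNS lookup finished, status = %d\n", status);
    uv_freeaddrinfo(res);
}

int main()
{
    uv_loop_t* loop = uv_default_loop();

    uv_getaddrinfo_t dns_req;
    // Thread pool: the blocking lookup happens off the loop thread.
    uv_getaddrinfo(loop, &dns_req, on_resolved, "example.com", "80", nullptr);

    // TCP connects/reads, by contrast, never touch the thread pool; the
    // kernel poller (epoll/kqueue/IOCP) notifies the single loop thread.
    return uv_run(loop, UV_RUN_DEFAULT);
}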
Once the execution is over does libuv thread passes the request back to event queue or event loop?
From the Node.js point of view, does it make a difference?
2) The thread pool in libuv has only 3 threads. Now suppose 10 users try to log in, all at the same time (some app like Facebook or so), and the threads are blocked because they want to connect to the DB. How will only three threads handle so much load?
libuv uses a thread pool, but not in a "naive" way. Most asynchronous requests are filesystem/TCP interactions, which are handled through non-blocking sockets and the OS's readiness notification (select()/epoll and friends). You have to worry about the thread pool only when you create a custom C++ module and manually dispatch a CPU- or I/O-blocking task.
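As a rough sketch of that last case (C++; the function names here are invented for illustration), a custom native module would hand a blocking task to the libuv thread pool with uv_queue_work:

#include <uv.h>
#include <cstdio>

// Runs on a thread-pool worker; the only safe place for blocking or CPU-heavy work.
void do_heavy_work(uv_work_t* req)
{
    // ... blocking computation or I/O ...
}

// Runs back on the event-loop thread once the worker has finished.
void after_heavy_work(uv_work_t* req, int status)
{
    std::printf("work finished, status = %d\n", status);
    delete req;
}

void schedule_heavy_work(uv_loop_t* loop)
{
    uv_work_t* req = new uv_work_t;
    uv_queue_work(loop, req, do_heavy_work, after_heavy_work);
}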

What role does the V8 engine play in Node.js?

Over the past few days I've been researching how the Node.js event-based style can handle many more concurrent requests than the classic multithreading approach. In the end it comes down to a smaller memory footprint and fewer context switches, because Node.js only uses a handful of threads (the single V8 thread, a bunch of C++ worker threads, plus libuv's main thread).
But how can it handle a huge number of requests with so few threads, given that in the end some thread must block waiting on, for example, a database read operation?
I think the idea is: instead of having both the client thread and the database thread blocked, only the database thread is blocked, and it alerts the client thread when it finishes.
This is how I understand Node.js to work.
I've been wondering what gives Node.js the capability to handle HTTP requests.
Based on what I've read so far, I understand that libuv is what does that work:
Handles represent long-lived objects capable of performing certain operations while active. Some examples: a prepare handle gets its callback called once every loop iteration when active, and a TCP server handle gets its connection callback called every time there is a new connection.
So, the thread that is waiting for incoming HTTP requests is libuv's main thread, the one that executes the libuv event loop.
So when we write
const http = require('http');
const hostname = '127.0.0.1';
const port = 1337;

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello World\n');
}).listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});
... am I registering with libuv a callback that will be executed by the V8 engine when a request comes in?
The order of events would then be:
A TCP packet arrives
The OS creates an event and sends it to the event loop
The event loop handles the event and creates a V8 event
If I execute blocking code inside the anonymous function that handles the request, I will be blocking the V8 thread.
To avoid this I need to execute non-blocking code that will run on another thread. I suppose this "other thread" is libuv's main thread, where
network I/O is always performed in a single thread, each loop’s thread
This thread will not block because it uses OS facilities that are asynchronous:
epoll on Linux, kqueue on OSX and other BSDs, event ports on SunOS and IOCP on Windows
I also suppose that http.request uses libuv to achieve this.
Similarly, if I need to do some file I/O without blocking the V8 thread,
I will use Node's fs module. This time libuv's main thread can't handle it in a non-blocking way, because the OS doesn't offer that feature.
Unlike network I/O, there are no platform-specific file I/O primitives libuv could rely on, so the current approach is to run blocking file I/O operations in a thread pool.
In this case a classic thread pool is needed in order not to block the libuv event loop.
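As a hedged sketch of what that looks like at the libuv level (C++; the file name is made up), the blocking open() runs on a thread-pool worker and the callback fires back on the loop thread:

#include <uv.h>
#include <fcntl.h>
#include <cstdio>

// Called on the event-loop thread after a thread-pool worker has done the open().
void on_open(uv_fs_t* req)
{
    std::printf("open completed, result = %ld\n", (long)req->result);
    uv_fs_req_cleanup(req);
}

int main()
{
    uv_loop_t* loop = uv_default_loop();

    uv_fs_t open_req;
    // The blocking open() happens on a thread-pool worker, not on the loop thread.
    uv_fs_open(loop, &open_req, "example.txt", O_RDONLY, 0, on_open);

    return uv_run(loop, UV_RUN_DEFAULT);
}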
Now, if I need to query a database, all the responsibility for not blocking either the V8 thread or the libuv thread is in the hands of the driver developer.
If the driver doesn't use libuv, it will block the V8 engine.
If instead it uses libuv but the underlying database doesn't have async capabilities, then it will block a worker thread.
Finally, if the database offers async capabilities, it will only block the database's own thread. (In this case I could avoid libuv entirely and call the driver directly from the V8 thread.)
If these conclusions describe correctly, even if in a simplistic way, how libuv and V8 work together in Node.js, then I can't see the benefit of using V8, because we could do all the work in libuv directly (unless the objective is to give the developer a language that lets them write event-based code in a simpler way).
From what I know, the difference is mainly asynchronous I/O. In traditional process-per-request or thread-per-request servers, I/O, most notably network I/O, is synchronous. Node.js uses fewer threads than Apache or similar servers, and it can handle the traffic mostly because it uses asynchronous network I/O.
Node.js needs V8 to actually interpret the JS code and turn it into machine code. Libuv is needed to do real I/O. I do not know much more than that :)
There is an excellent post about the V8 engine in Node.js: "How JavaScript works: inside the V8 engine + 5 tips on how to write optimized code". It explains many deep, detailed aspects of the engine and gives some great recommendations for using it.
Simply put, what the V8 engine (and every other JavaScript engine) does is execute JavaScript code. However, V8 achieves higher execution performance than most of the others.
V8 translates JavaScript code into more efficient machine code instead of using an interpreter. It compiles JavaScript code into machine code at execution by implementing a JIT (Just-In-Time) compiler ...
The I/O is non-blocking and asynchronous via libuv, which under the hood uses OS primitives like epoll (or the platform's equivalent) to make I/O non-blocking. The Node.js event loop gets an event queued back when something happens on an fd (a TCP socket, for example).
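To make that concrete, here is a minimal libuv TCP-server sketch (C++; the address and port simply mirror the Node example above) showing the connection callback that the OS poller drives on the single loop thread:

#include <uv.h>
#include <cstdio>

// Invoked on the loop thread whenever the kernel poller (epoll/kqueue/IOCP)
// reports a new incoming connection on the listening socket.
void on_new_connection(uv_stream_t* server, int status)
{
    if (status < 0) return;
    std::printf("new connection\n");
    // ... uv_accept() and uv_read_start() would follow here ...
}

int main()
{
    uv_loop_t* loop = uv_default_loop();

    uv_tcp_t server;
    uv_tcp_init(loop, &server);

    struct sockaddr_in addr;
    uv_ip4_addr("127.0.0.1", 1337, &addr);
    uv_tcp_bind(&server, (const struct sockaddr*)&addr, 0);
    uv_listen((uv_stream_t*)&server, 128, on_new_connection);

    // One thread multiplexes every socket via the OS poller.
    return uv_run(loop, UV_RUN_DEFAULT);
}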

Qt 4 GUI application GUI thread slows with QNetworkRequests

I'm developing an application to track online Xbox Live, PSN, and Steam friends. The application performs a series of QNetworkRequests using a QNetworkAccessManager. The Xbox Live and PSN code use a QWebView to simulate a browser environment. The requests are performed correctly, but they slow down the main GUI thread until each request is finished.
Here's some example code:
void Steam::fetchFriends(QString username)
{
    // get() is asynchronous: it returns immediately and the reply is
    // delivered later through the manager's signals on this same thread.
    QNetworkRequest userXml("http://steamcommunity.com/id/" + username + "/friends/?xml=1");
    m_nam->get(userXml);
}
I've created a signal to tell the GUI that friends have been downloaded and processed. Then the friends list is updated in the GUI. Some of my other code is more complex and it's possible that I need to move the processing code to another thread.
Can someone confirm whether QNetworkAccessManager and QNetworkRequest are multithreaded, or whether they should be moved into separate threads?
QNetworkAccessManager is not threaded in its implementation. It is asynchronous and uses the event loop.
Those who claim they have needed to use it in a threaded manner for performance reasons have noted that they need to create the instance in that thread, rather than trying to move it to a thread afterwards. Here is a link to someone posting an example of such.
Before creating it in a separate thread, be sure first that you aren't doing any blocking operations that might cause the main thread to slow down.
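For illustration, a sketch of the usual single-threaded pattern (assuming Steam is a QObject with the existing m_nam QNetworkAccessManager member; the onFriendsReply slot name is made up): the reply is delivered asynchronously through the event loop, so the GUI thread only suffers if the slot itself does heavy work.

#include <QNetworkAccessManager>
#include <QNetworkReply>

// Somewhere in the class setup, e.g. the constructor (Qt 4 style connect):
Steam::Steam(QObject* parent)
    : QObject(parent), m_nam(new QNetworkAccessManager(this))
{
    connect(m_nam, SIGNAL(finished(QNetworkReply*)),
            this, SLOT(onFriendsReply(QNetworkReply*)));
}

// The slot runs on the GUI thread once the reply has fully arrived.
void Steam::onFriendsReply(QNetworkReply* reply)
{
    QByteArray xml = reply->readAll();   // cheap: the data is already buffered in memory
    // ... parse the friends XML; move parsing off the GUI thread only if it proves slow ...
    reply->deleteLater();
}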

How does Node.js work?

I don't understand several things about Node.js. Every information source says that Node.js is more scalable than standard threaded web servers due to the lack of thread locking and context switching, but I wonder: if Node.js doesn't use threads, how does it handle concurrent requests in parallel? What does the evented I/O model mean?
Your help is much appreciated.
Thanks
Node is completely event-driven. Basically the server consists of one thread processing one event after another.
A new request coming in is one kind of event. The server starts processing it and when there is a blocking IO operation, it does not wait until it completes and instead registers a callback function. The server then immediately starts to process another event (maybe another request). When the IO operation is finished, that is another kind of event, and the server will process it (i.e. continue working on the request) by executing the callback as soon as it has time.
So the server never needs to create additional threads or switch between threads, which means it has very little overhead. If you want to make full use of multiple hardware cores, you just start multiple instances of node.js
Update
At the lowest level (C++ code, not Javascript), there actually are multiple threads in node.js: there is a pool of IO workers whose job it is to receive the IO interrupts and put the corresponding events into the queue to be processed by the main thread. This prevents the main thread from being interrupted.
Although this question was answered a long time ago, I'm adding my thoughts on it.
Node.js is a single-threaded JavaScript runtime environment. Basically, its creator Ryan Dahl's concern was that parallel processing using multiple threads is not the right way, or is too complicated.
if Node.js doesn't use threads how does it handle concurrent requests in parallel
Ans: It's wrong to say that it doesn't use threads. Node.js uses threads, but in a smart way: it uses a single thread to serve all the HTTP requests and multiple threads in a thread pool (in libuv) for handling blocking operations.
Libuv: A library to handle asynchronous I/O.
What does event I/O model means?
Ans: The right term is non-blocking I/O. As the official Node.js site says, it almost never blocks. When a request reaches the Node server, the server doesn't queue it; it takes the request and starts executing it. If it is a blocking operation, it is sent to the worker-thread area and a callback is registered for it. As soon as that code finishes executing, the callback is triggered, placed in the event queue, and processed by the event loop; after that, the response is created and sent to the respective client.
Node.js is a JavaScript runtime environment. Both the browser and Node.js run on the V8 JavaScript engine. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. Node.js applications use a single-threaded event-loop architecture to handle concurrent clients. Its main event loop is single-threaded, but most of the I/O work runs on separate threads, because the I/O APIs in Node.js are asynchronous/non-blocking by design in order to accommodate the main event loop.
Consider a scenario where we ask a backend database for the details of user1 and user2 and then print them on the screen/console. The response to this request takes time, but both user data requests can be carried out independently and at the same time.
When 100 people connect at once, rather than spawning different threads, Node will loop over those connections and fire off any events your code should know about. If a connection is new, it will tell you. If a connection has sent you data, it will tell you. If the connection isn't doing anything, it will skip over it rather than spending precious CPU time on it. Everything in Node is based on responding to these events. So the CPU stays focused on that one process and doesn't have a bunch of threads competing for attention. There is no buffering in a Node.js application; it simply outputs the data in chunks.
Though it's been answered, I would like to share my understanding in simple terms.
Node.js uses a library called libuv. Libuv is written in C and uses the concept of threads. These threads are called workers, and they take care of multiple requests from clients.
Parallel processing in Node.js is achieved with the help of 2 concepts:
Asynchronous operations
Non-blocking I/O
