I need to use epoll on top of the GLFW event polling. My first try was to add the X11 socket descriptor to the epoll set and wait on events. When the descriptor became readable, I called glfwPollEvents() to drain the X11 events.
But, to my surprise, the X11 file descriptor is readable all the time, which creates a busy loop.
The question is: how can GLFW event polling be combined with an outer event polling interface?
Since glfwPollEvents, poll, and epoll_wait are non-blocking (for poll and epoll_wait, when the timeout is set to zero), why not call them in sequence? GLFW will handle the X11 events, while poll or epoll_wait will handle the I/O events, e.g.:
while (active) {
    glfwPollEvents();                                 // handle queued display events only
    int n = epoll_wait(epfd, events, MAX_EVENTS, 0);  // handle pending I/O events, timeout 0
    // ... dispatch the n returned events ...
    nanosleep(&interval, NULL);                       // instead of sleeping, you can also use the
                                                      // timeout functionality of the functions above
                                                      // (e.g. glfwWaitEventsTimeout)
}
If you have to populate an event queue with your own event structures, build them in the event handlers and push them onto the queue, as in the sketch below. Since glfwPollEvents and epoll_wait are called in sequence, there is no danger of any asynchronicity.
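A minimal sketch of such a callback; AppEvent, g_eventQueue, and keyCallback are illustrative names and not part of GLFW:

#include <GLFW/glfw3.h>
#include <deque>

struct AppEvent { int type; int key; };        // illustrative event structure
static std::deque<AppEvent> g_eventQueue;       // drained by the main loop after glfwPollEvents()

static void keyCallback(GLFWwindow *window, int key, int scancode, int action, int mods)
{
    AppEvent ev = { action, key };
    g_eventQueue.push_back(ev);   // runs inside glfwPollEvents(), same thread: no locking needed
}

// during setup: glfwSetKeyCallback(window, keyCallback);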
Related
I have an issue concerning a thread lacking a message queue at the beginning of its life cycle. MSDN explains:
The thread to which the message is posted must have created a message queue, or else the call to PostThreadMessage fails. Use one of the following methods to handle this situation:
(1) Call PostThreadMessage. If it fails, call the Sleep function and call PostThreadMessage again. Repeat until PostThreadMessage succeeds.
(2) Create an event object, then create the thread. Use the WaitForSingleObject function to wait for the event to be set to the signaled state before calling PostThreadMessage. In the thread to which the message will be posted, call PeekMessage(&msg, NULL, WM_USER, WM_USER, PM_NOREMOVE) to force the system to create the message queue. Set the event, to indicate that the thread is ready to receive posted messages.
Method (1) solves my issue; the second call to PostThreadMessage() always succeeds in my application.
However, I would like to understand the second method and simply don't understand "event object" (certainly not a normal Delphi event?), "to the signaled state", and "set the event to indicate".
QUESTION: Can someone please be so kind as to translate paragraph (2) into a short Delphi code example?
These event objects are synchronization objects, described in MSDN here: Event Objects.
At the bottom of that topic is a link to Using Event Objects which gives example code showing how to create events, set them, wait for them, etc.
In short you use the following functions:
CreateEvent to create the event objects.
CloseHandle to destroy it.
SetEvent and ResetEvent to set and reset the event object.
WaitForSingleObject to wait for it to be signaled.
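For illustration, here is a minimal C-style sketch of method (2) against the raw API (error handling omitted; WM_USER is just the example message from the MSDN text):

#include <windows.h>

static HANDLE queueReady;   // event object, signaled once the worker's message queue exists

static DWORD WINAPI WorkerThread(LPVOID param)
{
    MSG msg;
    // Force the system to create this thread's message queue.
    PeekMessage(&msg, NULL, WM_USER, WM_USER, PM_NOREMOVE);
    SetEvent(queueReady);   // tell the poster it is now safe to call PostThreadMessage
    while (GetMessage(&msg, NULL, 0, 0) > 0) {
        // handle posted messages here
    }
    return 0;
}

// In the posting thread:
//   queueReady = CreateEvent(NULL, FALSE, FALSE, NULL);
//   DWORD tid;
//   HANDLE worker = CreateThread(NULL, 0, WorkerThread, NULL, 0, &tid);
//   WaitForSingleObject(queueReady, INFINITE);
//   PostThreadMessage(tid, WM_USER, 0, 0);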
You can use the TEvent class from the System.SyncObjs unit to wrap all of these low-level API calls. Then the process would become like so:
Create a TEvent object, Event say, in the reset state.
Create your worker thread, passing in Event.
Call Event.WaitFor in the manager thread to wait for the worker thread to signal that its message queue exists.
When the worker thread starts executing (i.e. at the start of its Execute method), have it create its message queue, and then set the event by calling Event.SetEvent.
I have a high-performance server receiving incoming connections. The master process binds and listens on the TCP port, then forks itself into several workers.
The workers then use epoll to watch for incoming connection events and try to accept a connection when an event happens.
It works well, but when I count the connections each worker handled (or the CPU each worker consumed), I found it is not balanced at all.
For example:
One busy worker: handling 10k connections and consuming 20% CPU;
One idle worker: handling 300 connections and consuming 4% CPU;
My server is running on RHEL 6.5 (2.6.32 kernel).
Can anyone help me with this issue?
EDITED:
Why
After digging into the kernel code (2.6.32.x), I found why the imbalance occurs.
1 * MasterProcess: creates and binds the listen socket;
n * WorkerProcess: creates its own epfd and monitors the listen socket inherited from the master.
When a WorkerProcess calls epoll_ctl(..., listen_sock, ...), the kernel adds the watched file to an rbtree of the epoll struct (see fs/eventpoll.c, ep_insert) and adds the epoll struct to the wait queue of listen_sock via a callback (ep_ptable_queue_proc):
static void ep_ptable_queue_proc(struct file *file, wait_queue_head_t *whead,
                                 poll_table *pt)
{
    ...
    add_wait_queue(whead, &pwq->wait);
    ...
}
// whead is the wait queue of the listen socket,
// and *pt is a container for the epfd's related resources.
When a new connection comes in (SYN_RECV), the listen socket's state changes and the kernel iterates over the wait queue to notify every epoll instance monitoring the socket, via a callback provided by epoll. The callback is ep_poll_callback (see fs/eventpoll.c), and it wakes up the process (or thread) waiting in the epoll_wait system call.
The order of the listen socket's wait queue does not change after the notification, so the processes waiting on the events get notified in a fixed order. A process that is woken up early will have more connections to handle than the last process notified. That causes the imbalance.
FIX
1 * MasterProcess: creates one epfd for all WorkerProcesses;
2 * WorkerProcess: waits on the same epfd via epoll_wait;
In this case we have only one epoll struct at the kernel level. When an event occurs, only that one epoll struct's wake-up callback will be called.
The epoll struct's wake-up callback is:
static int ep_poll_callback(wait_queue_t *wait, unsigned mode, int sync, void *key)
Now every WorkerProcess's boss thread waits in epoll_wait, and ep_poll_callback will only wake up one of them.
When a WorkerProcess is woken up by epoll, it removes itself from the wait queue and re-adds itself to the tail of the wait queue when it calls epoll_wait again. So the WorkerProcesses are woken up one by one, as sketched below.
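A minimal sketch of this layout; the port 8080, the worker count of four, and the omitted error handling are all placeholders:

#include <sys/epoll.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

int main()
{
    // MasterProcess: create and bind the listen socket, then one shared epfd
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, SOMAXCONN);

    int epfd = epoll_create(1);
    struct epoll_event ev = {};
    ev.events = EPOLLIN;
    ev.data.fd = listen_fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    // Fork the workers; they all inherit and wait on the same epfd
    for (int i = 0; i < 4; i++) {
        if (fork() == 0) {
            struct epoll_event events[16];
            for (;;) {
                int n = epoll_wait(epfd, events, 16, -1);
                for (int j = 0; j < n; j++) {
                    if (events[j].data.fd == listen_fd) {
                        // in practice listen_fd would be non-blocking so a
                        // spurious wakeup cannot block here
                        int conn = accept(listen_fd, NULL, NULL);
                        // ... register conn with a per-worker epoll and handle it ...
                        (void)conn;
                    }
                }
            }
        }
    }
    // the master would normally supervise its children here
    pause();
    return 0;
}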
There is no problem.
The purpose of having multiple workers is for at least one to be available, even if the others are busy.
But when multiple workers are waiting, it does not matter which one gets the event. A process/thread does not wear out simply because it's running for a longer time.
There is no balance because the kernel does not care. Neither should you.
I have a third-party library which is non-blocking and has its own event loop; it accepts a pointer to a callback function and executes it in the same thread. What I want is to post an event from this thread to the nginx main thread, something like ngx_add_timer but without the time option, to safely add an event to the nginx main event loop.
So, very late to the party here, but I found this thread in my research and wanted to post the solution I came up with.
Nginx has a mechanism to post from a worker thread - one that is perhaps running another event loop - to the main thread of the nginx worker process. That is ngx_post_event, which lets you post an event handler that will be invoked at some point in the future by the main thread.
You have to choose an event queue to post it on, but whatever you're doing, the answer is certainly &ngx_posted_events.
Here we come to the problem (and a solution): if you do this, your event handler will not get invoked in a timely manner because the main nginx worker process thread is waiting on i/o. It won't even deign to look at the posted events queue until it has some 'real' work to do from i/o.
The solution that's working for me currently (and bear in mind this is only on Linux), is to send the main thread a signal which will wake it up from its epoll_wait reverie so it can get to work on the pipeline coming from the other thread.
So here's what worked:
First grab the id of the worker process main thread and hold it in some process-global state:
// In your C source:
static pthread_t nginx_thread;
// In some code that runs once at nginx startup (in my case the module's preconfiguration step)
nginx_thread = pthread_self();
Now, to post your callback, you use the ngx_post_event call I mentioned earlier, then send a SIGIO signal to the main thread to wake up the epoll_wait operation:
// Post the event and wake up the Nginx epoll event loop
ngx_post_event( event_ptr, &ngx_posted_events );
pthread_kill( nginx_thread, SIGIO );
The SIGIO signal is handled in the main Nginx signal handler and is ignored (well, that's what the log says), but, crucially, it causes the posted events to be processed immediately.
That's it - and it seems to be working so far... please do point out anything stupid I've done.
To complete the story, you'll need the following #includes:
#include <pthread.h>
#include <signal.h>
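For completeness, a rough sketch of how the posted event itself might be set up; my_handler, my_event, and some_context are placeholders, and the handler/data/log fields are the ones an ngx_event_t normally carries:

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_event.h>

// Illustrative only: the handler that the nginx main thread will eventually invoke
static void my_handler(ngx_event_t *ev)
{
    // drain whatever queue/pipeline the library thread has filled
}

static ngx_event_t my_event;   // must stay alive until the handler has run

// In the library callback thread, before waking nginx up:
//   my_event.handler = my_handler;
//   my_event.data = some_context;        // whatever my_handler needs
//   my_event.log = ngx_cycle->log;
//   ngx_post_event(&my_event, &ngx_posted_events);
//   pthread_kill(nginx_thread, SIGIO);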
I want to ask a question about application architecture:
1. There will be the main GUI thread for providing user interaction.
2. A receive thread based on a UDP socket that will receive UDP packets as they arrive (I want this to be blocking).
3. Another thread for sending event-based as well as periodic UDP packets.
How do I implement this architecture in Qt? Basically I have the following questions:
1. For the receive thread, how do I make it blocking? I know about the readyRead() signal, and I can connect it to some slot that will process the datagram, but how do I loop this so that the thread does this forever?
2. In the send thread I can generate a signal from the GUI thread which will be received by the sending thread, and a slot there will write some data on the socket, but again how will this thread survive when it has nothing to send? I mean, loop, poll over something, what?
Use event loops in the secondary threads.
QThread::exec() starts the thread's event loop which will run until QThread::quit() is called. That should solve your "how to wait until something happens" problem. The default implementation of QThread::run() just calls exec(), so I'd go with that. You could set everything up in your main() method, e.g. for the sender thread:
//Create UI
MainWindow mainWindow;
mainWindow.show();
//set up the sender thread and the QObject doing the actual work (Sender)
QThread senderThread;
Sender sender; //the object doing the actual sending
sender.moveToThread(&senderThread); //move sender to its thread
senderThread.start(); //starts the thread which will then enter the event loop
//connect UI to sender thread
QObject::connect(&mainWindow, SIGNAL(sendMessage(QString)), &sender, SLOT(sendMessage(QString)), Qt::QueuedConnection);
...
const int ret = app.exec(); // enter main event loop
senderThread.quit(); //tell sender thread to quit its event loop
senderThread.wait(); //wait until senderThread is done
return ret; // leave main
Sender would just be a QObject with a sendMessage() slot doing the sending, a QTimer plus another slot for the periodic UDP packets, etc., for example as sketched below.
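A rough sketch of such a Sender; the address 127.0.0.1, port 5000, and the 1-second interval are placeholders, and the socket and timer are parented to the Sender so that moveToThread() moves them along with it:

#include <QObject>
#include <QUdpSocket>
#include <QTimer>
#include <QHostAddress>

class Sender : public QObject
{
    Q_OBJECT
public:
    explicit Sender(QObject *parent = 0) : QObject(parent)
    {
        socket = new QUdpSocket(this);   // parented: moves with the Sender to its thread
        timer = new QTimer(this);
        connect(timer, SIGNAL(timeout()), this, SLOT(sendPeriodicPacket()));
        timer->start(1000);              // restarted in the worker thread after moveToThread()
    }

public slots:
    void sendMessage(QString message)    // invoked via the queued connection from the GUI
    {
        socket->writeDatagram(message.toUtf8(), QHostAddress("127.0.0.1"), 5000);
    }

    void sendPeriodicPacket()            // driven by the QTimer in the sender thread
    {
        socket->writeDatagram(QByteArray("ping"), QHostAddress("127.0.0.1"), 5000);
    }

private:
    QUdpSocket *socket;
    QTimer *timer;
};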
My application is going to send a huge amount of data over the network, so I decided (because I'm using Linux) to use epoll and splice. Here's how I see it (pseudocode):
epoll_ctl (file_fd, EPOLL_CTL_ADD); // waiting for EPOLLIN event
while(1)
{
epoll_wait (tmp_structure);
if (tmp_structure->fd == file_descriptor)
{
epoll_ctl (file_fd, EPOLL_CTL_DEL);
epoll_ctl (tcp_socket_fd, EPOLL_CTL_ADD); // wait for EPOLLOUT event
}
if (tmp_structure->fd == tcp_socket_descriptor)
{
splice (file_fd, tcp_socket_fd);
epoll_ctl (tcp_socket_fd, EPOLL_CTL_DEL);
epoll_ctl (file_fd, EPOLL_CTL_ADD); // waiting for EPOLLIN event
}
}
I assume that my application will open up to 2000 TCP sockets. I want to ask you about two things:
There will be quite a lot of epoll_ctl calls; won't it be slow when I have so many sockets?
The file descriptor has to become readable first, and there will be some interval before the socket becomes writable. Can I be sure that at the moment the socket becomes writable the file descriptor is still readable (to avoid a blocking call)?
1st question
You can use edge-triggered rather than level-triggered polling, so you do not have to delete the socket each time.
You can use EPOLLONESHOT so you don't have to remove the socket: after an event fires, the descriptor stays registered but disarmed until you re-arm it, as in the sketch below.
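A minimal sketch of that one-shot pattern; epfd and sock_fd are assumed to exist already, and the EPOLLOUT interest is just an example:

#include <sys/epoll.h>

// Register a socket once with EPOLLONESHOT and re-arm it after each event,
// instead of deleting and re-adding it.
static void arm_for_write(int epfd, int sock_fd, bool first_time)
{
    struct epoll_event ev = {};
    ev.events = EPOLLOUT | EPOLLONESHOT;   // fires once, then the fd stays registered but disarmed
    ev.data.fd = sock_fd;
    epoll_ctl(epfd, first_time ? EPOLL_CTL_ADD : EPOLL_CTL_MOD, sock_fd, &ev);
}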
The file descriptor has to become readable first and there will be some interval before the socket becomes writable.
What kind of file descriptor? If it is a file on a file system, you can't use select/poll or similar tools for this purpose; the file will always be reported readable or writable regardless of the state of the disk and cache. If you need to do things asynchronously you may use the aio_* API, but generally you just read from and write to the file and assume it is non-blocking.
If it is a TCP socket then it will be writable most of the time. It is better to use non-blocking calls and to add the socket to epoll only when you get EWOULDBLOCK, as in the sketch below.
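A rough sketch of that pattern, assuming sock_fd has already been made non-blocking:

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/epoll.h>
#include <errno.h>

// Try to send; if the kernel buffer is full, wait for EPOLLOUT instead of blocking.
static void send_or_wait(int epfd, int sock_fd, const char *buf, size_t len)
{
    ssize_t n = send(sock_fd, buf, len, 0);
    if (n < 0 && (errno == EWOULDBLOCK || errno == EAGAIN)) {
        struct epoll_event ev = {};
        ev.events = EPOLLOUT;   // notify when the socket becomes writable again
        ev.data.fd = sock_fd;
        epoll_ctl(epfd, EPOLL_CTL_ADD, sock_fd, &ev);   // use EPOLL_CTL_MOD if already registered
    }
    // on success, n bytes were written; a partial write would be retried the same way
}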
Consider using the EPOLLET flag. It is definitely meant for this case. With this flag you can run the event loop properly without deregistering (or modifying the mode of) file descriptors after their first registration in epoll. :) Enjoy!