I'm writing a Haskell binding to a C library that has a function void foo(), which calls select() internally. When I call this function from Haskell, that select() call starts failing constantly with EINTR. This confuses the library code, which ends up looping forever.
In the #haskell IRC channel I was told to run foo() from a bound thread. I used runInBoundThread for this, and now everything seems to work. But in some rare cases I get an "Alarm clock" message in the console (I've since found out it means the application received a SIGALRM).
I'm not sure this is the proper way to handle the problem, and I don't want to depend on Control.Concurrent. What should I do?
The cause of the SIGALRM was GHC's runtime using an old codepath for its timer management. That old codepath was being enabled because GHC's configure script had a sort of Linux-ism in its check for the timer_create() function. Fixing that check made GHC use the same mechanism that is used on all other platforms and eliminated the error in question.
Relevant commit: https://gitlab.haskell.org/ghc/ghc/-/commit/edc059a425068f9bf4a60520e8d8906bc764e2b5
n.m.'s comment is correct: the code in withRTSSignalsBlocked will hide the RTS's signals from your FFI'd code: http://hackage.haskell.org/packages/archive/HDBC-mysql/0.6.6.1/doc/html/Database-HDBC-MySQL.html#v:withRTSSignalsBlocked
This should also eliminate the need for runInBoundThread, I think.
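For a rough idea of what such a wrapper does at the C level: it blocks the RTS timer signal in the calling OS thread for the duration of the foreign call and restores the old mask afterwards. The sketch below is only an illustration of that pattern, not the actual HDBC-mysql code; which signal the RTS actually uses (SIGALRM or SIGVTALRM) depends on how it was configured.

    #include <signal.h>
    #include <pthread.h>

    void foo(void);   /* the library function whose select() keeps getting EINTR */

    /* Hypothetical wrapper: mask the timer signals in this thread while the
     * blocking call runs, so its select() is not interrupted. */
    void foo_with_rts_signals_blocked(void)
    {
        sigset_t blocked, saved;
        sigemptyset(&blocked);
        sigaddset(&blocked, SIGALRM);
        sigaddset(&blocked, SIGVTALRM);

        pthread_sigmask(SIG_BLOCK, &blocked, &saved);   /* block timer signals */
        foo();                                          /* select() runs undisturbed */
        pthread_sigmask(SIG_SETMASK, &saved, NULL);     /* restore the previous mask */
    }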
So I'm writing a script/application that uses Python's multiprocessing BaseManager class. For the most part it works great; the only issue I have is that I use serve_forever() as a blocking call and then continue onwards. However, when I want to terminate or exit out of serve_forever(), it exits and terminates the whole application, but as I mentioned, I have some more things I want to take care of before I completely exit.
I can exit out of serve_forever() by setting a stop event with stop_event.set(). This is all well and dandy; however, according to the source (https://github.com/python/cpython/blob/3.6/Lib/multiprocessing/managers.py#L147), serve_forever explicitly calls sys.exit(0), and it is part of the Server class that BaseManager uses within its definition. Essentially, I would like to remove that sys.exit(0) line. How would I accomplish this?
When I search, I come up with results such as monkey patching. Can I just subclass the Server class, define serve_forever to be the exact same code but without the sys.exit(0) line, and call it a day? Something tells me that is not going to work. Do I subclass Server AND BaseManager?
Thanks!
Attempting to monkey-patch or subclass internal classes will result in code that is not compatible across Python releases, not even patch releases.
On top of that, such solutions are unnecessarily complex and convoluted, and are generally frowned upon.
I highly suggest re-implementing serve_forever() by using the start() method together with an event. Waiting for the event to be set or, if that is impossible, looping while checking whether the manager is still alive, will be much simpler and a better solution in almost every aspect I can think of.
After discussing this in chat, we realised the easiest approach is to just suppress the SystemExit raised by sys.exit(). I'm opening a report on the CPython bug tracker accordingly about that sys.exit() call. Do keep in mind that the server will not actually shut down, as it runs on a different thread. The whole recommendation in the stdlib of using get_server().serve_forever() looks dubious at best.
If you wish to shut the server down immediately, call Server.listener.close() after catching the exception.
This is what the ncurses FAQ says at http://invisible-island.net/ncurses/ncurses.faq.html#multithread:
If you have a program which uses curses in more than one thread, you will almost certainly see odd behavior. That is because curses relies upon static variables for both input and output. Using one thread for input and other(s) for output cannot solve the problem, nor can extra screen updates help. This FAQ is not a tutorial on threaded programming.
Specifically, it says it is not safe even if input and output are done on separate threads. Would it be safe if we additionally used a mutex around the whole ncurses library, so that at most one thread can be calling any ncurses function at a time? If not, what other cheap workarounds are there for using ncurses safely in a multi-threaded application?
I'm asking because I notice a real application often has its own event loop but relies on the ncurses getch function for keyboard input. But if the main thread is blocked waiting in its own event loop, it has no chance to call getch. A seemingly applicable solution is to call getch in a different thread, which hasn't caused me a problem yet, but as the text above says, it is actually not safe, and that was verified by another user here. So I'm wondering what the best way is to merge getch into an application's own event loop.
I'm considering making getch non-blocking and waking up the main thread regularly (every 10-100 ms) to check whether there is something to read. But this adds extra delay between a key event and its handling, making the application less responsive. Also, I'm not sure whether that would interact badly with some ncurses internal delay such as ESCDELAY.
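A minimal sketch of what I mean (the 50 ms timeout is an arbitrary choice):

    #include <curses.h>

    int main(void)
    {
        initscr();
        cbreak();
        noecho();
        keypad(stdscr, TRUE);
        timeout(50);                  /* getch() now waits at most 50 ms */

        for (;;) {
            int ch = getch();         /* ERR means "no key arrived in time" */
            if (ch == ERR) {
                /* run one iteration of the application's own event loop here */
                continue;
            }
            if (ch == 'q')
                break;
            /* otherwise handle the key */
        }

        endwin();
        return 0;
    }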
Another solution I'm considering is to poll stdin directly. But ncurses is presumably doing something like that already, and reading the same stream from two different places looks bad.
The text also mentions the "ncursest" or "ncursestw" libraries, but they seem to be less widely available, for example if you are using a different language binding of curses. It would be great if there were a viable solution with the standard ncurses library.
Without the thread support, you're out of luck for using curses functions in more than one thread. That's because most of the curses calls use static or global data. The getch function, for instance, calls refresh, which can update the whole screen using the global pointers curscr and stdscr. The difference in the thread-support configuration is that the global values are converted to functions and mutexes are added.
If you want to read stdin from a different thread and run curses in one thread, you probably can make that work by checking the file descriptor (i.e., 0) for pending activity and alerting the thread which runs curses to tell it to read data.
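One possible wiring of that idea, assuming a pair of pipes for the notification and the acknowledgement (the names and details below are an illustrative sketch, not a complete program):

    #include <curses.h>
    #include <poll.h>
    #include <pthread.h>
    #include <unistd.h>

    /* Setup (not shown): pipe(notify_pipe); pipe(ack_pipe);
     * pthread_create(&tid, NULL, stdin_watcher, NULL); initscr(); ... */
    static int notify_pipe[2];   /* watcher thread -> curses thread */
    static int ack_pipe[2];      /* curses thread  -> watcher thread */

    /* Watcher thread: never reads stdin itself, only reports that fd 0
     * has pending input, then waits until the curses thread has drained it. */
    static void *stdin_watcher(void *arg)
    {
        (void)arg;
        for (;;) {
            struct pollfd pfd;
            pfd.fd = 0;
            pfd.events = POLLIN;
            pfd.revents = 0;
            if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN)) {
                char c = 'k';
                write(notify_pipe[1], &c, 1);   /* "input is pending" */
                read(ack_pipe[0], &c, 1);       /* wait for the drain ack */
            }
        }
        return NULL;
    }

    /* Curses thread: its own event loop polls notify_pipe[0] along with the
     * application's other descriptors; when it becomes readable, call this. */
    static void drain_keyboard(void)
    {
        char c;
        int ch;
        read(notify_pipe[0], &c, 1);
        nodelay(stdscr, TRUE);                  /* getch() must not block here */
        while ((ch = getch()) != ERR) {
            /* dispatch the key inside the application's event loop */
        }
        write(ack_pipe[1], &c, 1);              /* let the watcher poll again */
    }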
On Linux one can read from a file asynchronously by calling aio_read(3) from C. A sigevent structure is part of the request, and there are different options one can specify for being notified when the operation completes. Let me summarize:
SIGEV_NONE: no notification.
The status can be checked with aio_error(3). The operation is asynchronous, but completion must be busy-waited in some loop, which is not what I want.
SIGEV_SIGNAL: a signal is raised to the process.
In theory, this can be caught in Haskell by installing a signal handler via System.Posix.Signals. There is a problem, though: the SignalInfo API doesn't include the crucial si_value that lets one communicate some specifics about the read request, like a StaticPtr. This is unfortunate.
SIGEV_THREAD: this would start a new thread, according to the documentation.
I don't know how to represent this in Haskell. My best guess would be an IO () action. I'm not sure how to write the accompanying native code.
How can I use aio_read or something of that sort in Haskell? I will probably not get around using the FFI for this (or a library that wraps it).
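For reference, a minimal C-level sketch of the SIGEV_THREAD variant, pieced together from the man pages (the file name and callback are illustrative; a Haskell binding would presumably pass a FunPtr created with a foreign import "wrapper" in place of on_complete, or have the callback report back through some other channel):

    #include <aio.h>
    #include <fcntl.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static char buffer[4096];

    /* Runs on a thread created by the implementation when the read completes;
     * sigev_value carries whatever context pointer we stored in the request. */
    static void on_complete(union sigval sv)
    {
        struct aiocb *req = (struct aiocb *) sv.sival_ptr;
        if (aio_error(req) == 0)
            printf("read %zd bytes\n", aio_return(req));
    }

    int main(void)
    {
        struct aiocb req;
        memset(&req, 0, sizeof req);
        req.aio_fildes = open("/etc/hostname", O_RDONLY);
        req.aio_buf    = buffer;
        req.aio_nbytes = sizeof buffer;
        req.aio_offset = 0;

        req.aio_sigevent.sigev_notify          = SIGEV_THREAD;
        req.aio_sigevent.sigev_notify_function = on_complete;
        req.aio_sigevent.sigev_value.sival_ptr = &req;

        aio_read(&req);      /* returns immediately; link with -lrt on glibc */
        pause();             /* keep the process alive for this demo */
        return 0;
    }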
I'm using Lua as the scripting language for handling events in my application, and I don't want to restrict users to writing short handlers; for example, someone might want one handler to run an infinite loop while another handler interrupts the first one. Obviously, Lua doesn't directly support such behavior, so I'm looking for workarounds.
First of all, I'd like to avoid modifying the engine. Is it possible to set up a debug hook that would yield once the state has reached its quota? Judging by the documentation, it shouldn't be hard at all, but I don't know if there are any caveats to this.
And second, can I use lua_close to terminate a thread as I would in actual multithreading?
I've done something similar in the past. It's completely possible to multi-thread on separate Lua states. Be sure to take a look at the lua_lock()/lua_unlock() macros (plus the associated setup/cleanup), as you will no doubt need that setup (a simple mutex should do the trick).
After that, it should be a fairly simple matter of creating a lock/wait/interrupt API for your handlers.
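As for the debug-hook idea from the question: a count hook that yields does work for cooperative preemption, assuming Lua 5.2 or later (where a hook is allowed to yield, with no results) and that each handler runs in its own coroutine. A rough sketch, with an arbitrary quota and illustrative helper names:

    #include <lua.hpp>

    /* Installed with LUA_MASKCOUNT: fires every `count` VM instructions and
     * suspends the running handler so another one can be resumed. */
    static void quota_hook(lua_State *L, lua_Debug *ar)
    {
        (void)ar;
        lua_yield(L, 0);   /* a hook may only yield with zero results */
    }

    /* Create a coroutine for one handler and arm the quota hook on it.
     * The new thread stays anchored on main_state's stack. */
    static lua_State *make_handler(lua_State *main_state, const char *chunk)
    {
        lua_State *co = lua_newthread(main_state);
        lua_sethook(co, quota_hook, LUA_MASKCOUNT, 10000);
        luaL_loadstring(co, chunk);   /* the handler body, as Lua source */
        return co;
    }

    /* The host then round-robins its handlers with lua_resume(): LUA_YIELD
     * means "quota exhausted, resume later", LUA_OK means the handler is
     * done, and anything else is an error to report and drop. */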
In my application I call wglGetCurrentDC() and wglGetCurrentContext() from the onThread function
(this handler is hooked up as declared here: EVT_THREAD(wxID_ANY, MyCanvas::onThread)),
and I get NULL in both cases. When I call them outside of onThread, they work fine.
What is the workaround for this problem? (I have to run them when I get an event from the thread!)
As Alex suggested, I switched to wxPostEvent to redirect the event to the main thread, which catches the event in its onThread function. In this onThread function I have the wglGetCurrentDC() and wglGetCurrentContext() calls, and they still return NULL. Please explain what I am doing wrong and how to solve the problem.
Maybe I'm misunderstanding, but shouldn't you be using wxGLCanvas and wxGLContext rather than the Windows-specific code? At the very least it's probably more compatible with other wxWidgets code.
Anyway, per the wglGetCurrentDC documentation, the function returns NULL when the calling thread has no current rendering context. This suggests that either the context was destroyed somehow, or you're not calling it from the window you think you're calling it from (perhaps because of your threading?). I would reiterate what Alex said: don't call UI code from any thread besides the main one.
If you could post some code showing how you're returning from the thread it might help identify the problem. It seems likely that you're doing UI stuff from the thread and just not realizing it. (Hard to tell without seeing any code, though.)
Don't touch any UI-related stuff from a worker thread. This is a general requirement for all UI frameworks. Use wxPostEvent to redirect the work to the main application thread.
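A minimal sketch of that pattern (MyCanvas, m_glContext and Render are illustrative names, not code from the question): the worker thread only queues an event, and everything that touches the DC or GL context happens in the handler, which wxWidgets delivers on the main thread. It uses Bind(); the EVT_THREAD event-table entry from the question works the same way.

    #include <wx/wx.h>
    #include <wx/glcanvas.h>

    class MyCanvas : public wxGLCanvas
    {
    public:
        MyCanvas(wxWindow *parent, wxGLContext *ctx)
            : wxGLCanvas(parent, wxID_ANY, NULL), m_glContext(ctx)
        {
            Bind(wxEVT_THREAD, &MyCanvas::onThread, this);
        }

        // Called from the worker thread; thread-safe and touches no UI state.
        void NotifyFromWorker()
        {
            wxQueueEvent(this, new wxThreadEvent(wxEVT_THREAD));
        }

    private:
        void onThread(wxThreadEvent &)
        {
            SetCurrent(*m_glContext);   // make the GL context current on the GUI thread
            Render();                   // hypothetical drawing routine
            SwapBuffers();
        }

        void Render() { /* GL calls go here */ }

        wxGLContext *m_glContext;
    };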