How to exit a program when using blocking calls - multithreading

I need to do a project where the application monitors incoming connections and applies rules defined in an XML document. The rules either filter (block or permit) connections or redirect traffic on a certain port. To do this, I use functions such as accept and recv (from Winsock). All of those functions are used on different threads. I'm wondering, though, how I'm supposed to clean up the program before exiting, given all those blocking calls. Normally I'd either wait until the user closes the console through the X button or wait for the user to input a certain character in the main thread. The thing is, I'm not sure what happens if the application exits while there are still active threads, while memory is still allocated, or while sockets are in use. Are all destructors called? Are handles and sockets correctly closed? Or do I need to somehow do it myself?
Thanks

In general, I would say no. Do not try to explicitly clean up resources like sockets, fd's, handles, threads unless you are absolutely forced to.
Exact behaviour depends on OS and how you terminate your app.
All the common desktop OS will release resources allocated to a process by the OS when a process terminates. This includes sockets, file descriptors, memory.
On Windows/Linux, if you return from your C/C++ main() without any explicit cleanup, static dtors will get called by the crt code. Dtors for dynamically allocated objects in non-main threads are not run.
Executables written in other languages may behave differently.
If, instead of returning from main(), you call a 'ProcessExit()' API directly, static destructors will not get called because the OS has no concept of dtors - it has no idea, or interest, in what language was used to generate the executable.
In either case, the OS will be called to terminate your process. The OS does this, (simple 'Dummies' version:), by first changing the state of all process threads that are not running so that they never run again. Threads that are running on other cores are then stopped. Then OS resources like fd, sockets are closed, then released, then all process memory is freed, then OS kernel process/thread objects freed, then your process no longer exists.
If you absolutely need some, or all, C++/whatever dtors called when some thread needs to stop the app, you will have to explicitly signal the other threads to stop so that the dtors can be run. I tend to use a globally-accessible 'CloseRequested' bool that relevant blocking calls check immediately after returning. There remains the issue of persuading the blocking calls to return.
Some blocking calls can be coded up to wait on more than one signal, so allowing the call to return by a simple event/sema/condvar/whatever signal.
Some calls, like recv() and accept(), can be persuaded to return early by closing the fd/socket they are waiting on (see the sketch a few paragraphs below).
Some calls can be made to return by 'artificially' satisfying their wait condition - eg. creating a temp file just to make a folder-monitor call return so that the 'CloseRequested' bool can be checked.
If a blocking call is so annoyingly stubborn that it cannot be persuaded to return, you could redesign your app so that whatever critical resource is released in the dtors can be released by another thread instead - maybe create the thing in another thread and pass it to the blocking thread as a ctor parameter, something like that.
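As a concrete illustration of the 'CloseRequested' flag plus socket-closing trick, here is a rough Winsock sketch; the names g_closeRequested and g_listenSocket are made up for the example, and WSAStartup/bind/listen are assumed to happen elsewhere:

#include <winsock2.h>
#include <atomic>

std::atomic<bool> g_closeRequested{false};
SOCKET g_listenSocket = INVALID_SOCKET;      // created/bound/listening elsewhere

void AcceptLoop()
{
    for (;;)
    {
        SOCKET client = accept(g_listenSocket, nullptr, nullptr);
        if (g_closeRequested)                // check the flag as soon as the call returns
        {
            if (client != INVALID_SOCKET) closesocket(client);
            break;                           // fall off the end so dtors/cleanup can run
        }
        if (client == INVALID_SOCKET) continue;   // transient error - keep listening
        // ... hand 'client' off to a worker thread ...
    }
}

void RequestShutdown()                       // called by whichever thread decides to exit
{
    g_closeRequested = true;
    closesocket(g_listenSocket);             // forces the blocked accept() to return
}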
NOTE WELL: Thread shutdown code bodges, as listed above, are extra code that does not add to the normal functionality of your app. You should restrict explicit thread shutdown to those threads that hold resources that absolutely must be released by explicit user code - DB connections, say. If the OS can release the resource, it should be allowed to do so. The OS is very good at stopping all process threads before releasing resources they are using, user code is not.

Where possible, use blocking calls that take a timeout value, and have your threads loop. That gives you a place to check for a shutdown condition and exit the thread gracefully. Handles will generally be cleaned up by the system when the process exits. It is polite to shut down sockets gracefully, but not absolutely mandatory. The downside of not doing so is it can take a while for the kernel to clean up exclusive resources. For example, if you just kill a thread waiting to accept(), and then your app re-launches, it won't be able to successfully accept() on the same port until the kernel cleans up the old socket.
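For example, a rough sketch of that looping/timeout idea with Winsock, using select() in front of accept(); the one-second timeout and the g_shutdown flag are illustrative choices, not requirements:

#include <winsock2.h>
#include <atomic>

std::atomic<bool> g_shutdown{false};

void AcceptWithTimeout(SOCKET listenSock)
{
    while (!g_shutdown)
    {
        fd_set readable;
        FD_ZERO(&readable);
        FD_SET(listenSock, &readable);
        timeval timeout{1, 0};                   // wake up at least once per second
        int rc = select(0, &readable, nullptr, nullptr, &timeout);
        if (rc == SOCKET_ERROR) break;           // real error - give up
        if (rc == 0) continue;                   // timeout - loop and re-check g_shutdown
        SOCKET client = accept(listenSock, nullptr, nullptr);
        if (client != INVALID_SOCKET)
        {
            // ... handle or hand off the connection ...
        }
    }
}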


How to detect if a linux thread is crashed

I have this problem: I need to understand whether a Linux thread has stopped running because of a crash rather than a normal exit. The reason is to try to restart the thread without resetting/restarting the whole system.
pthread_join() doesn't seem like a good option because I have several threads to monitor and the function only returns for one specific thread; it doesn't work "in parallel". At the moment I have a keep-alive signal from each thread to the main one, but I'm looking for some system call or thread attribute to read the state.
Any suggestions?
Thread "crashes"
How to detect if a linux thread is crashed
The only way that a pthreads thread can terminate abnormally while other threads in the process continue to run is via thread cancellation,* which is not well described as a "crash". In particular, if a signal is received whose effect is abnormal termination, then the whole process terminates, not just the thread that handled the signal. Other kinds of errors do not cause threads to terminate.
On the other hand, if by "crash" you mean normal termination in response to the thread detecting an error condition, then you have no limitation on what the thread can do prior to terminating to communicate about its state. For example,
it could update a shared object that tracks information about your threads
it could write to a pipe designated for the purpose
it could raise a signal
If you like, you can use pthread_cleanup_push() to register thread cleanup handlers to help with that.
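A minimal sketch of that idea, assuming a shared status array and a per-thread slot index (both names invented for the example):

#include <pthread.h>
#include <atomic>

enum ThreadState { T_RUNNING, T_EXITED, T_FAILED };
static std::atomic<int> thread_status[16];     // one slot per monitored thread

static void mark_failed(void *arg)             // cleanup handler
{
    int slot = *static_cast<int *>(arg);
    thread_status[slot] = T_FAILED;
}

static void *worker(void *arg)
{
    int slot = *static_cast<int *>(arg);
    thread_status[slot] = T_RUNNING;
    pthread_cleanup_push(mark_failed, &slot);  // runs on cancellation or pthread_exit()
    // ... real work; on a detected error the thread can record T_FAILED itself ...
    pthread_cleanup_pop(0);                    // 0 = don't run the handler on normal completion
    thread_status[slot] = T_EXITED;
    return nullptr;
}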
On the third hand, if you're asking about detecting live threads that are failing to make progress -- because they are deadlocked, for example -- then your best bet is probably to implement some form of heartbeat monitor. That would involve each thread you want to monitor periodically updating a shared object that tracks the time of each thread's last update. If a thread goes too long between beats then you can guess that it may be stalled. This requires you to instrument all the threads you want to monitor.
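Here is one way such a heartbeat might look; the 16-slot array, the 5-second threshold and the function names are all just assumptions for the sketch:

#include <pthread.h>
#include <unistd.h>
#include <atomic>
#include <ctime>

static std::atomic<time_t> last_beat[16];      // one timestamp per monitored thread

void beat(int slot)                            // called periodically by each worker
{
    last_beat[slot] = time(nullptr);
}

void *monitor(void *)
{
    for (;;)
    {
        time_t now = time(nullptr);
        for (int i = 0; i < 16; ++i)
            if (last_beat[i] != 0 && now - last_beat[i] > 5)
            {
                // thread i has missed its beats: log it, restart it, etc.
            }
        sleep(1);
    }
    return nullptr;
}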
Thread cancellation
You should not use thread cancellation. But if you did, and if you include termination because of cancellation in your definition of "crash", then you still have all the options above available to you, but you must engage them by registering one or more cleanup handlers.
GNU-specific options
The main issues with using pthread_join() to check thread state are
it doesn't work for detached threads, and
pthread_join() blocks until the specified thread terminates.
For detached threads, you need one of the approaches already discussed, but for ordinary joinable threads on GNU/Linux, Glibc provides the non-standard pthread_tryjoin_np(), which performs a non-blocking attempt to join a thread, and also pthread_timedjoin_np(), which performs a join attempt with a timeout. If you are willing to rely on Glibc-specific functions, then one of these might serve your purpose.
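A sketch of what a non-blocking check might look like with that Glibc extension (non-portable, and it requires _GNU_SOURCE):

#define _GNU_SOURCE
#include <pthread.h>
#include <errno.h>

bool has_terminated(pthread_t t)
{
    void *result = nullptr;
    int rc = pthread_tryjoin_np(t, &result);   // returns immediately
    if (rc == 0) return true;                  // joined: the thread has exited
    if (rc == EBUSY) return false;             // still running
    return false;                              // anything else: not joinable (already joined, detached, ...)
}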
Linux-specific options
The Linux kernel makes per-process thread status information available via the /proc filesystem. See How to check the state of Linux threads?, for example. Do be aware, however, that the details vary a bit from one kernel version to another. And if you're planning to do this a lot, then also be aware that even though /proc is a virtual filesystem (so no physical disk is involved), you still access it via slow-ish I/O interfaces.
Any of the other alternatives is probably better than reading files in /proc. I mention it only for completeness.
Overall
I'm looking for some system call or thread attribute to understand the state
The pthreads API does not provide a "have you terminated?" function or any other such state-inquiry function, unless you count pthread_join(). If you want that then you need to roll your own, which you can do by means of some of the facilities already discussed.
*Do not use thread cancellation.

C# When thread switching will most probably occur?

I was wondering when .NET would most likely switch from one thread to another.
I understand we can't predict exactly when this will happen, but is there any intelligence in it? For example, when a thread is executing, will it try to wait for a method to return or a loop to finish before switching?
I'm not an expert on .NET, but in general scheduling is handled by the kernel.
Either your thread's timeslice has expired (threads/processes only get a certain amount of CPU time)
Your thread has blocked for IO.
Some other obscure reason, like waiting for an IPC message, a network packet or something.
Threads can be preempted at any point along their execution path, be it in a loop or returning from a function. This in general isn't handled by the underlying VM (.NET or JVM) but is controlled by the OS.
Of course there is 'intelligence', of a sort:). The set of running threads can only change upon an interrupt, either:
An actual hardware interrupt from a peripheral device, eg. disk, NIC, KB, mouse, timer.
A software interrupt, (ie. a system call), that can change the state of thread/s. This encompasses sleep calls and calls to wait/signal on inter-thread synchro objects, as well as I/O calls that request data that is not immediately available.
If there is no interrupt, the OS cannot change the set of running threads because it is not entered. The OS does not know or care about loops, function/methods calls, (except those that make system calls as above), gotos or any other user-level flow-control mechanisms.
I read your question only now, so it may not be relevant anymore, but after reading the above answers I just want to make sure:
Threads are managed (as far as I know) by the process they belong to. It has nothing to do with the operating system (and that is the main reason why working with multiple threads is faster than working with multiple processes: data is shared between threads, and switching between them happens faster than the context switch that occurs between processes via the short-term scheduler).
(NOTE: There are two types of threads, USER_MODE threads and KERNEL_MODE threads, and each OS can have both of them or just one of them. In any case, a thread working in a user application environment is considered a USER_MODE thread and is managed by the process it belongs to.)
Am I right?
Thanks!!!

If I "get back to the main thread" then what exactly happens, and how do interrupts work with threads?

Background: I was using Beej's guide and he mentioned forking and ensuring you "get the zombies". An Operating Systems book I grabbed explained how the OS creates "threads" (I always thought they were a more fundamental piece), and by putting that in quotes I mean the OS decides nearly everything. Basically they share all external resources, but they split the register and stack spaces (and I think a third thing).
So I get to the waitpid function which http://www.qnx.com's developer docs explain very well. In fact, I read the entire section on threads, minus all the types of conditions after a Processes and Threads google.
The fact that I can split code up and put it back together doesn't confuse me. HOW I can do this is confusing.
In C and C++, your program is a Main() function, which goes forward, calls other functions, maybe loops forever (waiting for input or rendering), and then eventually quits or returns. In this model I see NO reason for it to stop beyond a "I'm waiting for something", in which case it just loops.
Well, it seems it can loop by setting certain things, like "I'm waiting for a semaphore" or "a response" or "an interrupt". Or maybe it gets interrupted without waiting for one. This is what confuses me.
The processor time-slices processes and threads. That's all fine and dandy, but how does it decide when to stop one? I understand that you get to the polling function and say "Hey, I'm waiting for input, a clock tick, or the user to do something." Somehow it tells this to the OS? I'm not sure. But more so:
It seems to be able to completely randomly interrupt or interject, even on a single-threaded application. So you're running one thread and suddenly waitpid() says "Hey, I finished a process, let me interrupt this, we both hate zombies, I gotta do this." and you're still looping on some calculation. So, what just happens??? I have no idea, somehow they both run and your computation isn't messed with, 'cause it's single threaded, but that somehow doesn't mean that it won't stop what it's doing to run waitpid() inside the same thread WHILE you're still doing your other app things.
Also confusing, is how you can be notified, like iOSes notifications, and say "Hey, I got some UI changes, get me off of 16 and put me back on 1 so I can change this thing". But same question as last paragraph, how does it interrupt a thread that's running?
I think I understand the splitting, but this joining is utterly confusing. It's like the textbooks have this "rabbit from hat" step I'm supposed to accept. Other SO posts told me they don't share the same stack, but that didn't help, now I'm imagining a slinky (stack) leaning over to another slinky, but unsure how it recombines to change the data.
Thanks for any help, I apologize that this is long, but I know someone's going to misinterpret this and give me the "they are different stacks" answer if I'm too concise here.
Thanks,
OK, I'll have a go, though it's gonna be 'economical with the truth':)
It's sorta like this:
The OS kernel scheduler/dispatcher is a state-machine for managing threads. A thread comprises a stack, (allocated at the time of thread creation), and a Thread Control Block, (TCB), struct in the kernel that holds thread state and can store thread context, (including user registers, especially the stack-pointer). A thread must have code to run, but the code is not dedicated to the thread - many threads can run the same code. Threads have states, eg. blocked on I/O, blocked on an inter-thread signal, sleeping for a timer period, ready, running on a core.
Threads belong to processes - a process must have at least one thread to run its code and has one created for it by the OS loader when the process starts up. The 'main thread' may then create others that will also belong to that process.
The state-machine inputs are software interrupts - system calls from those threads that are already running on cores, and hardware interrupts from peripheral devices/controllers, (disk, network, mouse, KB etc), that use processor hardware features to stop the processor/s running instructions from the threads and 'immediately' run driver code instead.
The output of the state-machine is a set of threads running on cores. If there are fewer ready threads than cores, the OS will halt the unusable cores. If there are more ready threads than cores, (ie. the machine is overloaded), the 'scheduling algorithm' that decides which threads to run takes into account several factors - thread and process priority, priority boosts for threads that have just become ready on I/O completion or inter-thread signal, foreground-process boosts and others.
The OS has the ability to stop any running thread on any core. It has an interprocessor hardware-interrupt channel and drivers that can force any thread to enter the OS and be blocked/stopped, (maybe because another thread has just become ready and the OS scheduling algorithm has decided that a running thread must be immediately preempted).
The software interrupts from running threads can change the set of running threads by requesting I/O, or by signaling other threads, (the events, mutexes, condition-variables and semaphores). The hardware interrupts from peripheral devices can change the set of running threads by signaling I/O completion.
When the OS gets these inputs, it uses that input, and internal state in containers of Thread Control Block and Process Control Block structs, to decide which set of ready threads to run next. It can block a thread from running by saving its context, (including registers, especially stack pointer), in its TCB and not returning from the interrupt. It can run a thread that was blocked by restoring its context from its TCB to a core and performing an interrupt-return, so allowing the thread to resume from where it left off.
The gain is that no thread that is waiting for I/O gets to run at all, and so uses no CPU, and, when I/O becomes available, a waiting thread is made ready 'immediately' and, if there is a core available, set running.
This combination of OS state data and hardware/software interrupts efficiently matches up threads that can make forward progress with cores available to run them, and no CPU is wasted on polling I/O or inter-thread comms flags.
All this complexity, both in the OS and for the developer who has to design multithreaded apps and so put up with locks, synchronization, mutexes etc, has just one vital goal - high performance I/O. Without it, you can forget video streaming, BitTorrent and browsers - they would all be too piss-slow to be useable.
Statements and phrases like 'CPU quantum', 'give up the remainder of their time-slice' and 'round-robin' make me want to throw up.
It's a state-machine. Hardware and software interrupts go in, a set of running threads comes out. The hardware timer interrupt, (the one that can time-out system calls, allow threads to sleep and share out CPU on a box that is overloaded), though valuable, is just one of many.
So I'm on thread 16, and I need to get to thread 1 to modify UI. I randomly stop it anywhere, "move the stack over to thread 1" then "take its context and modify it"?
No, time for 'economical with truth' #2...
Thread 1 is running the GUI. To do this, it needs inputs from mouse and keyboard. The classic way for this to happen is that thread 1 waits, blocked, on a GUI input queue - a thread-safe producer-consumer queue - for KB/mouse messages. It's using no CPU - the cores are off running services and BitTorrent downloads. You hit a key on the keyboard, and the keyboard-controller hardware raises an interrupt line on the interrupt controller, causing a core to jump to the keyboard driver code as soon as it has finished its current instruction. The driver reads the KB controller, assembles a KeyPressed message and pushes it onto the input queue of the GUI thread with focus - your thread 1. The driver exits by calling the scheduler interrupt entry point so that a scheduling run can be performed, and your GUI thread is assigned a core and run on it. To thread 1, all it has done is make a blocking 'pop' call on a queue and, eventually, it returns with a message to process.
So, thread 1 is performing:
void* HandleGui(void*){
    while(true){
        GUImessage message = thread1InputQueue.pop();   // blocks until a message arrives
        switch(message.type){
            // .. lots of case statements to handle all the possible GUI messages
            // ..
            // ..
        }
    }
}
If thread 16 wants to interact with the GUI, it cannot do it directly. All it can do is to queue a message to thread 1, in a similar way to the KB/mouse drivers, to instruct it to do stuff.
This may seem a bit restrictive, but the message from thread 16 can contain more than POD. It could have a 'RunMyCode' message type and contain a function pointer to code that thread 16 wants to be run in the context of thread 1. When thread 1 gets around to handling the message, its 'RunMyCode' case statement calls the function pointer in the message. Note that this 'simple' mechanism is asynchronous - thread 16 has issued the message and runs on - it has no idea when thread 1 will get around to running the function it passed. This can be a problem if the function accesses any data in thread 16 - thread 16 may also be accessing it. If this is an issue, (and it may not be - all the data required by the function may be in the message, which can be passed into the function as a parameter when thread 1 calls it), it is possible to make the function call synchronous by making thread 16 wait until thread 1 has run the function. One way would be for the function to signal an OS synchronization object as its last line - an object upon which thread 16 will wait immediately after queueing its 'RunMyCode' message:
void* runOnGUI(GUImessage message){
    // do stuff with GUI controls
    message.notifyCompletion->signal();            // tell thread 16 to run again
    return nullptr;
}

void* thread16run(void*){
    // ..
    // ..
    GUImessage message;
    OSkernelWaitObject* waitEvent = new OSkernelWaitObject();  // event/sema supplied by the OS
    message.type = RunMyCode;
    message.function = runOnGUI;
    message.notifyCompletion = waitEvent;
    thread1InputQueue.push(message);               // ask thread 1 to run my function
    waitEvent->wait();                             // wait, blocked, until the function is done
    // ..
    // ..
    return nullptr;
}
So, getting a function to run in the context of another thread requires cooperation. Threads cannot call into other threads - only signal them, usually via the OS. Any thread that is expected to run such 'externally signaled' code must have an accessible entry point where the function can be placed, and must execute code to retrieve the function address and call it.

What's the best way to signal threads that sleep or block to stop?

I've got a service that I need to shut down and update. I'm having difficulties with this in two different cases:
I have some threads that sleep for large amounts of time. Obviously I can't wait for them to wake up to finish shutting down the service. I had a thought to use an AutoResetEvent that gets set by some controller thread when the sleep interval is up (by just checking every two seconds or something), and triggering it immediately at OnClose time. Is there a better way to facilitate that?
I have one thread that makes a call to a blocking method call (one which I cannot modify). How do you signal such a thread to stop?
I'm not sure if I understood your first question correctly, but have you looked at using WaitForSingleObject as an alternative to Sleep? You can specify a timeout as well as an object to wait on, so if you want it to wake up earlier, just signal the object.
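Something along these lines should work for that (a sketch only - the event name, the 30-second interval and the worker shape are assumptions, and the event would be created once with CreateEvent during startup):

#include <windows.h>

HANDLE g_shutdownEvent;     // e.g. CreateEvent(nullptr, TRUE, FALSE, nullptr) at startup

DWORD WINAPI Worker(LPVOID)
{
    for (;;)
    {
        // "Sleep" for up to 30 seconds, but wake immediately if shutdown is signalled.
        DWORD rc = WaitForSingleObject(g_shutdownEvent, 30 * 1000);
        if (rc == WAIT_OBJECT_0)
            break;                   // shutdown requested: clean up and exit the thread
        // rc == WAIT_TIMEOUT: do the periodic work, then loop and wait again
    }
    return 0;
}

// At OnClose time, the controlling code calls: SetEvent(g_shutdownEvent);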
What exactly do you mean by "call to a blocking thread"? Or did you just mean a blocking call? In general, there isn't a way to interrupt a thread without forcefully terminating it. However, if the call is a system call, there might be ways to return control by making the call fail, eg. cancelling I/O or closing an associated handle.
For 1. you can get your threads into an interruptible Sleep by using SleepEx rather than Sleep. Once they get this shutdown kick (initiated from your termination logic using QueueUserAPC), you can detect that it happened using the return code from SleepEx and terminate those threads accordingly. This is similar to the suggestion to use WaitForSingleObject, but you don't need another per-thread handle that's just used to terminate the associated thread.
The return value is zero if the specified time interval expired. The return value is WAIT_IO_COMPLETION if the function returned due to one or more I/O completion callback functions. This can happen only if bAlertable is TRUE, and if the thread that called the SleepEx function is the same thread that called the extended I/O function.
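Put together, the SleepEx/QueueUserAPC approach might look roughly like this; the ShutdownApc function and g_stop flag are names made up for the sketch:

#include <windows.h>

volatile LONG g_stop = 0;

void CALLBACK ShutdownApc(ULONG_PTR)           // runs in the target thread's context
{
    InterlockedExchange(&g_stop, 1);
}

DWORD WINAPI Worker(LPVOID)
{
    while (!g_stop)
    {
        DWORD rc = SleepEx(60 * 1000, TRUE);   // alertable sleep
        if (rc == WAIT_IO_COMPLETION && g_stop)
            break;                             // woken by the shutdown APC
        // rc == 0: the interval expired normally, do the periodic work
    }
    return 0;
}

// Termination logic, for each worker thread handle hThread:
//     QueueUserAPC(ShutdownApc, hThread, 0);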
For 2., that's a tough one unless you have access to some resource used in that thread that can cause the blocking call to abort in such a way that the calling thread can handle it cleanly. You may just have to implement code to kill that thread with extreme prejudice using TerminateThread (probably this should be the last thing you do before exiting the process) and see what happens under test.
An easy and reliable solution is to kill the service process. A process is the memory-safe abstraction of the OS, after all, so you can safely terminate one without regard for process-internal state - of course, if your process is communicating or fiddling with external state, all bets are off...
Additionally, you could implement the solution which OS's themselves commonly do: one warning signal asking the process to clean up as best possible (which sets a flag and gracefully exits what can be gracefully stopped), and then forceful termination if the process doesn't exit by itself (which ends pesky things like blocking I/O).
All services should be built such that forceful termination isn't harmful, since these processes are system managed and may be terminated by things such as a reboot - i.e., your service ideally should permit this without corrupting storage anyhow.
Oh, and one final warning: Windows services may share a process (I presume for efficiency, though it strikes me as an avoidable optimization), so if you go this route, you want to make sure your service is not sharing a process with other services. You can ensure this by passing the option SERVICE_WIN32_OWN_PROCESS to ChangeServiceConfig.
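For reference, the Win32 call looks roughly like this in C++ (in a .NET service you would more likely arrange this at install time; "MyService" is a placeholder name and error handling is minimal):

#include <windows.h>

bool MakeOwnProcess()
{
    SC_HANDLE scm = OpenSCManagerW(nullptr, nullptr, SC_MANAGER_CONNECT);
    if (!scm) return false;
    SC_HANDLE svc = OpenServiceW(scm, L"MyService", SERVICE_CHANGE_CONFIG);
    bool ok = false;
    if (svc)
    {
        ok = ChangeServiceConfigW(svc,
                                  SERVICE_WIN32_OWN_PROCESS,  // don't share a process
                                  SERVICE_NO_CHANGE,          // leave start type alone
                                  SERVICE_NO_CHANGE,          // leave error control alone
                                  nullptr, nullptr, nullptr,  // binary path, load group, tag
                                  nullptr, nullptr, nullptr,  // dependencies, account, password
                                  nullptr) != FALSE;          // display name
        CloseServiceHandle(svc);
    }
    CloseServiceHandle(scm);
    return ok;
}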

How to stop long executing threads gracefully?

I have a threading problem with Delphi. I guess this is common in other languages too. I have a long process which I run in a thread that fills a list in the main window. But if some parameters change in the meantime, then I should stop the currently executing thread and start from the beginning. Delphi suggests terminating a thread by setting Terminated := True and checking this variable's value in the thread. However, my problem is that the long-executing part is buried in a library call, and inside this call I cannot check the Terminated variable. Therefore I have to wait for the library call to finish, which affects the whole program.
What is the preferred thing to do in this case? Can I kill the thread immediately?
The preferred way is to modify the code so that it doesn't block without checking for cancellation.
Since you can't modify the code, you can't do that; you either have to live with the background operation (but you can disassociate it from any UI, so that its completion will be ignored), or you can try terminating it (the TerminateThread API will rudely terminate any thread given its handle). Termination isn't clean, though: as Rob says, any locks held by the thread will be abandoned, and any cross-thread state protected by such locks may be left corrupted.
Can you consider calling the function in a separate executable? Perhaps using RPC (pipes or TCP, rather than shared memory, owing to the same lock problem), so that you can terminate a process rather than a thread? Process isolation will give you a good deal more protection. So long as you aren't relying on cross-process named things like mutexes, it should be far safer than killing a thread.
The threads need to co-operate to achieve a graceful shutdown. I am not sure whether Delphi offers a mechanism to abort another thread; such mechanisms are available in .NET and Java, but they should be considered an option of last resort, and the state of the application is indeterminate after they have been used.
If you can kill a thread at an arbitrary point, then you may kill it while it is holding a lock in the memory allocator (for example). This will leave your program open to hanging when your main thread next needs to access that lock.
If you can't modify the code to check for termination, then just set its priority really low, and ignore it when it returns.
I wrote this in reply to a similar question:
I use an exception-based technique that's worked pretty well for me in a number of Win32 applications. To terminate a thread, I use QueueUserAPC to queue a call to a function which throws an exception. However, the exception that's thrown isn't derived from the type "Exception", so it will only be caught by my thread's wrapper procedure.
I've used this with C++Builder apps very successfully. I'm not aware of all the subtleties of Delphi vs C++ exception handling, but I'd expect it could easily be modified to work.
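In case it helps, the shape of that technique in plain Win32 C++ is roughly the following. This is a sketch, not the answerer's actual code: the ThreadAbort type and function names are invented, the target thread has to be in an alertable wait for the APC to run, and whether an exception can safely propagate out of an APC depends on compiler settings and platform.

#include <windows.h>

struct ThreadAbort {};                   // deliberately not derived from std::exception

void CALLBACK AbortApc(ULONG_PTR)
{
    throw ThreadAbort();                 // unwinds the target thread's stack
}

DWORD WINAPI ThreadWrapper(LPVOID)
{
    try
    {
        // ... thread body: use alertable waits (SleepEx, WaitForSingleObjectEx, ...)
        //     so queued APCs get a chance to run ...
        SleepEx(INFINITE, TRUE);
    }
    catch (ThreadAbort&)
    {
        // destructors of the thread's locals have run during unwinding
    }
    return 0;
}

// To terminate, given the target thread's handle:
//     QueueUserAPC(AbortApc, hThread, 0);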
