I am writing a Linux kernel module that exchanges data with a user-space process.
The system may crash if the module tries to send data to a user process that has already terminated. To avoid the crash we could use the kill() function to check whether the PID is still valid, but that does not work reliably: the user process may exit right after kill() returns successfully. I wanted to know: is there any signal or signal-handling mechanism that can handle this situation?
If yes, then please explain.
Thanks..
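For reference, the usual race-free pattern on the kernel side is not to check-then-send with a raw pid_t, but to hold a reference on the target's struct pid, which is never recycled while the reference is held. A minimal sketch, assuming the module knows the target's numeric PID and notifies it with a signal (the function name send_to_user is illustrative; headers as in recent mainline kernels):

    /* Look up the task by PID and signal it; the struct pid reference
     * guarantees we never hit a recycled PID. */
    #include <linux/errno.h>
    #include <linux/pid.h>
    #include <linux/sched/signal.h>
    #include <linux/sched/task.h>

    static int send_to_user(pid_t nr, int sig)
    {
            struct pid *pid;
            struct task_struct *task;
            int ret;

            pid = find_get_pid(nr);                /* takes a reference */
            if (!pid)
                    return -ESRCH;                 /* process already gone */

            task = get_pid_task(pid, PIDTYPE_PID); /* reference on the task */
            put_pid(pid);
            if (!task)
                    return -ESRCH;

            ret = send_sig(sig, task, 0);
            put_task_struct(task);
            return ret;
    }

If the lookup fails, the module simply skips the send instead of crashing.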
I work on an application with a Redis data store. To maintain data integrity, I would like to clean up the store on process shutdown.
I created a handler function and bound it (with process.on()) to the signal events uncaughtException, SIGINT, SIGTERM, and SIGQUIT, and it works well (Ctrl+C, process kill, exceptions, ...).
But I've read that under certain conditions the process can exit directly without triggering any of these signal events.
The problem in that case is that the process.on('exit') handler can only run synchronous tasks.
I ran various tests, trying to kill the process in different ways.
Except for SIGTERM on Windows, I wasn't able to find a case where process.on('exit') is triggered directly without SIGINT, SIGTERM, or another event firing first.
So my question is: on a Linux system, under what conditions can the process exit directly without firing one of these events: http://nodejs.org/api/all.html#all_signal_events ?
As of now, after reading the documentation and doing some research, it seems there are only four ways a Node.js app can exit:
process.exit(), which is handled by process.on('exit')
Receiving a *nix signal, which can be handled by process.on('signal_name')
An exception propagating back to the event loop, handled by process.on('uncaughtException')
The computer being unplugged or destroyed, the Node.js binary blowing up, or a SIGKILL/kill -9; there is no handling for those.
It usually happens that someone doesn't understand the error message from an uncaughtException and mistakenly believes it is "something else" that killed Node.js.
Indeed. I just meant to point out that Node programs can exit as a result of a signal without having executed a handler (if none was registered). Rereading your post, I see that may be what you meant as well.
We have one system and an external Baseboard Management Controller (BMC) to monitor it. When a critical error occurs in the system, the error should be logged and sent to the external BMC. Sending the error message to the BMC may take a long time, since we need to compose the log entry and send the event out via the I2C bus. The error is captured inside the interrupt handler, which must process the event very quickly and in a non-blocking manner. On the other hand, if the error is non-recoverable, the system may reboot immediately.
Could you please recommend a good way to handle the error reporting inside the interrupt handler, or is there a standard way to do this? Any suggestions are appreciated. Thanks in advance.
There is no good way.
If your BMC communications sleep, you cannot do them from inside the interrupt handler and must move them to a workqueue.
If your system reboots immediately after the interrupt handler, you cannot communicate with the BMC.
If your interrupt handler actually knows that the system will reboot, then you could change the I²C driver to add some method to send data from inside an interrupt handler, by busy-polling instead of sleeping.
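A minimal sketch of that workqueue pattern: the interrupt handler only captures the error code and queues the work; bmc_send_log() and read_error_status() are hypothetical stand-ins for the driver-specific parts.

    #include <linux/interrupt.h>
    #include <linux/slab.h>
    #include <linux/workqueue.h>

    struct bmc_event {
            struct work_struct work;
            u32 error_code;                 /* captured in the IRQ handler */
    };

    static void bmc_report_fn(struct work_struct *work)
    {
            struct bmc_event *ev = container_of(work, struct bmc_event, work);

            /* Process context: safe to sleep while composing the log
             * entry and sending it over the I2C bus. */
            bmc_send_log(ev->error_code);   /* hypothetical BMC call */
            kfree(ev);
    }

    static irqreturn_t error_irq_handler(int irq, void *dev_id)
    {
            struct bmc_event *ev = kzalloc(sizeof(*ev), GFP_ATOMIC);

            if (ev) {
                    ev->error_code = read_error_status(); /* hypothetical */
                    INIT_WORK(&ev->work, bmc_report_fn);
                    schedule_work(&ev->work);  /* deferred, non-blocking */
            }
            return IRQ_HANDLED;
    }

This keeps the handler short and non-blocking; it does nothing for the immediate-reboot case, which, as noted above, has no good answer.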
I am having a bit of a problem with my CGI web application. I use ithreads to do some parallel processing, where all the threads have a common 'goal'. Thus I detach all of them, and once I find my answer, I call exit.
However, the problem is that the script will actually continue processing even after the user has closed the connection and left, which of course is a problem resource-wise.
Is there any way to force exit on the parent process if the user has disconnected?
If you're running under Apache and the client closes the connection prematurely, Apache sends a SIGTERM to the CGI process. In my simple testing, that kills the script and its threads by default.
However, if there is a proxy between the server and the client, it's possible that Apache will not be able to detect the closed connection (as the connection from the server to the proxy may remain open) - in that case, you're out of luck.
AFAIK, creating and destroying threads isn't (at least for now) good Perl practice, because it will constantly increase memory usage!
You should find another way to get the job done. The usual solution is to create a pool of threads and pass arguments to them via a shared array or Thread::Queue.
I would personally suggest changing your approach: when creating these workers for a client connection, save the PID of each one and associate it with that connection. I personally like to use daemon processes instead of threads, e.g. Proc::Daemon. When a client disconnects prematurely (before the workers finish), send SIGTERM to each process ID associated with that client.
To exit gracefully, override the termination handler in the worker process with a stop condition, something like:
$SIG{TERM} = sub { $continue = 0; };
where $continue is the condition of the worker's processing loop. You would still have to watch out for code errors, because even though you can try overriding $SIG{__DIE__}, die() usually doesn't respect that and dies instantly without grace ;) (at least in my experience).
I'm not sure how you go about detecting if the user has disconnected, but, if they have, you'll have to make the threads stop yourself, since they're obviously not being killed automatically.
Destroying threads is a dangerous operation, so there isn't a good way to do it.
The standard way, as far as I know, is to have a shared variable that the threads check periodically to determine if they should keep working. Set it to some value before you exit, and check for that value inside your threads.
You can also send a signal to the threads to kill them. The docs know more about this than I do.
Situation:
I am using named pipes on Windows for IPC, in C++.
The server creates a named pipe instance via CreateNamedPipe, and waits for clients to connect via ConnectNamedPipe.
Every time a client calls CreateFile to access the named pipe, the server creates a thread using CreateThread to service that client. After that, the server reiterates the loop: creating a pipe instance via CreateNamedPipe, listening for the next client via ConnectNamedPipe, etc.
Problem:
Every client request triggers a CreateThread on the server. If clients come fast and furious, there would be many calls to CreateThread.
Questions:
Q1: Is it possible to reuse already created threads to service future client requests?
If this is possible, how should I do this?
Q2: Would Thread Pool help in this situation?
I wrote a named pipe server today using I/O completion ports, just to see how it's done.
The basic logic flow was:
I created the first named pipe via CreateNamedPipe
I created the main I/O completion port object using that handle: CreateIoCompletionPort
I created a pool of worker threads - as a rule of thumb, CPUs x2. Each worker thread calls GetQueuedCompletionStatus in a loop.
Then I called ConnectNamedPipe, passing in an OVERLAPPED structure. When the pipe connects, one of the GetQueuedCompletionStatus calls will return.
My main thread then joins the pool of workers by also calling GetQueuedCompletionStatus.
That's about it, really.
Each time a thread returns from GetQueuedCompletionStatus, it's because the associated pipe has been connected, has read data, or has been closed.
Each time a pipe is connected, I immediately create an unconnected pipe to accept the next client (you should probably have more than one waiting at a time) and call ReadFile on the current pipe, passing an overlapped structure - ensuring that as data arrives, GetQueuedCompletionStatus will tell me about it.
There are a couple of irritating edge cases where functions return a failure code but GetLastError() indicates success. Because the function "failed", you have to handle the success immediately, as no queued completion status was posted. Conversely (and I believe Vista adds an API, SetFileCompletionNotificationModes, to "fix" this), if data is available immediately, the overlapped functions can return success, but a queued completion status is ALSO posted, so be careful not to double-handle the data in that case.
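A stripped-down sketch of that flow, with error handling and the ERROR_IO_PENDING / ERROR_PIPE_CONNECTED edge cases omitted (the pipe name and worker count are arbitrary):

    #include <windows.h>

    #define PIPE_NAME   L"\\\\.\\pipe\\demo"
    #define NUM_WORKERS 4                   /* rule of thumb: CPUs x2 */

    static DWORD WINAPI worker(LPVOID iocp)
    {
        DWORD bytes;
        ULONG_PTR key;                      /* identifies the pipe */
        OVERLAPPED *ov;

        /* Blocks until some pipe connects, completes a read, or closes. */
        while (GetQueuedCompletionStatus((HANDLE)iocp, &bytes, &key, &ov, INFINITE)) {
            /* dispatch on the event: on connect, create the next
               unconnected pipe and issue an overlapped ReadFile */
        }
        return 0;
    }

    int main(void)
    {
        HANDLE pipe = CreateNamedPipeW(PIPE_NAME,
                                       PIPE_ACCESS_DUPLEX | FILE_FLAG_OVERLAPPED,
                                       PIPE_TYPE_BYTE, PIPE_UNLIMITED_INSTANCES,
                                       4096, 4096, 0, NULL);

        /* Associate the pipe with a new completion port. */
        HANDLE iocp = CreateIoCompletionPort(pipe, NULL, (ULONG_PTR)pipe, 0);

        for (int i = 0; i < NUM_WORKERS; i++)
            CreateThread(NULL, 0, worker, iocp, 0, NULL);

        OVERLAPPED ov = {0};
        ConnectNamedPipe(pipe, &ov);        /* completion posted to the port */

        worker(iocp);                       /* main thread joins the pool */
        return 0;
    }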
On Windows, the most efficient way to build a concurrent server is to use an asynchronous model with completion ports. But yes, you can use a thread pool with blocking I/O too, as that is a simpler programming abstraction.
Vista/Windows 2008 provide a thread pool abstraction.
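For example, with the Vista+ thread pool API the worker management disappears entirely; a sketch, where the body of work_cb and the request pointer are placeholders:

    #include <windows.h>

    static VOID CALLBACK work_cb(PTP_CALLBACK_INSTANCE inst, PVOID ctx, PTP_WORK work)
    {
        /* runs on a pool thread; ctx carries the per-request data */
    }

    void submit_request(PVOID request)
    {
        PTP_WORK work = CreateThreadpoolWork(work_cb, request, NULL);
        SubmitThreadpoolWork(work);         /* queued to the default pool */
        /* in real code, CloseThreadpoolWork(work) once it has completed,
           e.g. after WaitForThreadpoolWorkCallbacks(work, FALSE) */
    }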
In Linux, is it possible for me to open a socket and pass the socket to another process?
If yes, can you please tell me where I can find an example?
Thank you.
Yes, you can, using sendmsg() with SCM_RIGHTS to pass it from one process to another:
SCM_RIGHTS - Send or receive a set of open file descriptors from another process. The data portion contains an integer array of the file descriptors. The passed file descriptors behave as though they have been created with dup(2).
http://linux.die.net/man/7/unix
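A minimal sketch of the sending side, following the cmsg(3) layout; sock must be a connected AF_UNIX socket and fd_to_send any open descriptor (error handling trimmed):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    int send_fd(int sock, int fd_to_send)
    {
        char dummy = '*';                   /* must send at least one byte */
        struct iovec iov = { &dummy, 1 };
        char ctrl[CMSG_SPACE(sizeof(int))];
        struct msghdr msg;
        struct cmsghdr *cmsg;

        memset(&msg, 0, sizeof(msg));
        memset(ctrl, 0, sizeof(ctrl));
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctrl;
        msg.msg_controllen = sizeof(ctrl);

        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type  = SCM_RIGHTS;      /* we are passing descriptors */
        cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd_to_send, sizeof(int));

        return sendmsg(sock, &msg, 0);      /* receiver gets a dup'd fd */
    }

The receiver does the mirror-image recvmsg() and pulls the new descriptor out of CMSG_DATA.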
Passing descriptors explicitly like this is not the typical usage, though. More common is for a process to inherit sockets from its parent (after a fork()): any file handles (including sockets) not closed will be available to the child process, so the child inherits the parent's sockets.
A server process that listens for connections is called a daemon. It usually forks on each new connection, spawning a child process to handle each request. An example of a typical daemon is here:
http://www.steve.org.uk/Reference/Unix/faq_8.html#SEC88
Scroll down to void process().
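The core of that pattern looks roughly like this; handle_request() is a hypothetical per-client routine and the listening-socket setup is omitted:

    #include <signal.h>
    #include <sys/socket.h>
    #include <unistd.h>

    void handle_request(int client);        /* hypothetical */

    void serve(int listen_fd)
    {
        signal(SIGCHLD, SIG_IGN);           /* auto-reap exited children */

        for (;;) {
            int client = accept(listen_fd, NULL, NULL);
            if (client < 0)
                continue;
            if (fork() == 0) {              /* child inherits both sockets */
                close(listen_fd);           /* child needs only the client */
                handle_request(client);
                _exit(0);
            }
            close(client);                  /* parent keeps only listen_fd */
        }
    }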