RAII is a good solution for resource cleanup. However, RAII relies on stack unwinding: if a process terminates abnormally, the stack is not unwound, so RAII won't run. For resources with process lifetime there is nothing to worry about, but for resources with filesystem or kernel lifetime, such as files, message queues, semaphores, and shared memory, it is a problem.
How can I clean up system (filesystem and kernel) resources in a reliable way?
Example:
A shared file is created by a "master" process and used by a "slave" process. The plan is for the "master" process to delete the shared file. Is there a way to do that reliably?
Obviously, the shared file can't be unlinked immediately after it is created; if it were, other processes couldn't "see" the file.
There is no perfect answer to your question. You mostly just mitigate the effects as best you can, so that "abnormal termination leaving junk behind" is rare.
First, write your programs to be as robust as possible against abnormal process termination.
Second, rely on kernel mechanisms where possible. In your shared file example, if you are just talking about one "master" and one "slave" process using a file to communicate, then you can unlink the file as soon as both processes have it open. The file will continue to exist and be readable and writable by both processes until both of them close it, at which point the kernel will automatically reclaim the storage. (Even if both of them terminate abnormally.)
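A rough sketch of that idea in C, assuming a hypothetical path and leaving out the master/slave handshake:

/* The "master" creates the shared file, waits until the "slave" has opened
 * it, then unlinks it. Both processes keep valid descriptors; the kernel
 * frees the storage only when the last descriptor goes away, even if a
 * process dies abnormally. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/shared_scratch", O_RDWR | O_CREAT, 0600); /* hypothetical path */
    if (fd < 0) { perror("open"); return 1; }

    /* ... tell the slave the file exists and wait for it to open the file ... */

    unlink("/tmp/shared_scratch");  /* the name is gone, but the inode lives on */

    /* Both processes can still read and write through their descriptors.
     * When the last one closes (or dies), the kernel reclaims the space. */
    write(fd, "hello", 5);
    close(fd);
    return 0;
}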
And of course, the next time your server starts, it can clean up any junk left by the previous run, assuming only one is supposed to exist at a time.
The usual last-chance mechanism is to have "cleanup" processes that run periodically to (e.g.) sweep out /tmp.
But what you are asking is fundamentally hard. Any process responsible for handling another's abnormal termination might itself terminate abnormally. "Who watches the watchers?"
Related
I'm making a client/server system where clients can download and upload files to the server (one client can do several such operations at once). In case of a client crash it has to resume its interrupted operations on restart.
Obviously, I need some metadata file that keeps track of current operations. It would be accessed by a thread every time a chunk of data is downloaded/uploaded. Also, the client application should be able to print all files' download/upload progress in %.
I don't mind locking the whole meta-file to update a single entry (one entry corresponds to one download/upload operation), but reading it should at least allow thread concurrency (unless finding and reading a line in a file is fast anyway).
This article says that inter-process file locking sucks in POSIX. What options do I have?
EDIT: it might be clear already but concurrency in my system must be based on pthreads.
This article says that inter-process file locking sucks in POSIX. What options do I have?
The article correctly describes the state of inter-process file locking. In your case, however, you have a single process, so no inter-process locking takes place.
[...] concurrency in my system must be based on pthreads.
As you say, one of the possibilities, is to use a global mutex to synchronize accesses to the meta file.
One way to minimize the locking is to make the per-thread/per-file entry in the meta-file a predictable size (for best results, aligned to the file-system block size). That would allow the following (a minimal sketch is shown after this list):
while holding the global mutex, allocate the per-file/per-thread entry in the meta-file by appending it to the end of the meta-file, and save its file offset;
with that saved offset, use the pread()/pwrite() functions to update the information about the file operations, without using or affecting the descriptor's shared file offset.
Since every thread would be writing into its own area of the meta-file, there should be no race conditions.
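A minimal sketch of that scheme, with an assumed entry size and hypothetical function names:

#include <pthread.h>
#include <sys/types.h>
#include <unistd.h>

#define ENTRY_SIZE 512   /* assumed; ideally a multiple of the file-system block size */

static pthread_mutex_t meta_lock = PTHREAD_MUTEX_INITIALIZER;

/* Reserve a fixed-size slot at the end of the meta-file and return its offset.
 * The global mutex is held only for this short append. */
off_t alloc_entry(int meta_fd)
{
    char blank[ENTRY_SIZE] = {0};
    pthread_mutex_lock(&meta_lock);
    off_t off = lseek(meta_fd, 0, SEEK_END);
    write(meta_fd, blank, sizeof blank);
    pthread_mutex_unlock(&meta_lock);
    return off;
}

/* Afterwards each thread updates only its own slot, without any lock and
 * without touching the descriptor's shared file offset. */
void update_entry(int meta_fd, off_t off, const void *entry, size_t len)
{
    pwrite(meta_fd, entry, len < ENTRY_SIZE ? len : ENTRY_SIZE, off);
}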
(If the number of entries has an upper limit, then it is also possible to preallocate the whole file and mmap() it, and then use it as if it were plain memory. If the application crashes, the most recent state (possibly with some corruption; the application has crashed, after all) would be present in the file. Some applications, to speed up restart after a software crash, go as far as keeping the whole state of the application in an mmap()ed file.)
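A sketch of the mmap() variant, assuming a fixed maximum number of entries and a hypothetical path argument:

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAX_ENTRIES 1024   /* assumed upper limit */
#define ENTRY_SIZE  512

/* Preallocate the whole meta-file and map it; the returned pointer can then
 * be used as plain memory. With MAP_SHARED, the kernel writes changes back
 * to the file, so the latest state survives a crash of the application. */
void *map_meta(const char *path)
{
    int fd = open(path, O_RDWR | O_CREAT, 0600);
    if (fd < 0)
        return NULL;
    if (ftruncate(fd, (off_t)MAX_ENTRIES * ENTRY_SIZE) < 0) {
        close(fd);
        return NULL;
    }
    void *p = mmap(NULL, (size_t)MAX_ENTRIES * ENTRY_SIZE,
                   PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);   /* the mapping remains valid after close() */
    return p == MAP_FAILED ? NULL : p;
}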
Another alternative is to use the meta-file as a "journal": open the meta-file in append mode and write an entry for every status change of the pending file operations, including the start and end of each operation. After a crash, you recover the state of the file transfers by "replaying" the journal: read the entries from the file and update the state of the file operations in memory. Once you reach the end of the journal, the in-memory state is up to date and the transfers are ready to be resumed. The typical complication of this approach, since the journal file is only ever appended to, is that it has to be cleaned up periodically, purging old finished operations. (The simplest method is to do the clean-up during recovery (after recovery, write a new journal and delete the old one) and then periodically restart the application gracefully.)
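A sketch of the journal approach; the record format and function names are made up for illustration:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Open the journal in append mode so every write lands at the end. */
int open_journal(const char *path)
{
    return open(path, O_WRONLY | O_CREAT | O_APPEND, 0600);
}

/* Append one record per status change of a pending file operation. */
void journal_event(int jfd, const char *op_id, const char *state)
{
    char line[256];
    int n = snprintf(line, sizeof line, "%s %s\n", op_id, state);
    if (n > 0) {
        write(jfd, line, (size_t)n);
        fdatasync(jfd);   /* make sure the record survives a crash */
    }
}

/* On restart: read the journal line by line, apply each record to the
 * in-memory table of transfers, then resume whatever is still pending. */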
I'm working on linux and I'm using a pthread_rwlock, which is stored in shared memory and shared over multiple processes. This mostly works fine, but when I kill a process (SIGKILL) while it is holding a lock, it appears that the lock is still held (regardless of whether it's a read- or write-lock).
Is there any way to recognize such a state, and possibly even repair it?
The real answer is to find a decent way to stop a process. Killing it with SIGKILL is not a decent way to do it.
This feature is specified for mutexes, where it is called robustness (PTHREAD_MUTEX_ROBUST), but not for rwlocks. The standard doesn't provide it and kernel.org doesn't even have a page on rwlocks. So, like I said:
Find another way to stop the process (perhaps another signal that can be handled ?)
Release the lock when you exit
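If a mutex (rather than an rwlock) fits your design, a robust, process-shared mutex placed in shared memory can be set up roughly like this; a sketch, not a drop-in replacement for your rwlock:

#include <errno.h>
#include <pthread.h>

/* m must live in the shared memory segment. */
void init_robust_mutex(pthread_mutex_t *m)
{
    pthread_mutexattr_t a;
    pthread_mutexattr_init(&a);
    pthread_mutexattr_setpshared(&a, PTHREAD_PROCESS_SHARED);
    pthread_mutexattr_setrobust(&a, PTHREAD_MUTEX_ROBUST);
    pthread_mutex_init(m, &a);
    pthread_mutexattr_destroy(&a);
}

int lock_robust(pthread_mutex_t *m)
{
    int rc = pthread_mutex_lock(m);
    if (rc == EOWNERDEAD) {
        /* The previous owner died while holding the lock: repair the
         * protected data here, then mark the mutex usable again. */
        pthread_mutex_consistent(m);
        rc = 0;
    }
    return rc;
}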
@cnicutar - that "real answer" is pretty dubious. It's the kernel's job to handle cross-process responsibilities such as freeing resources and marking things consistent; user space can't effectively do that job when stuff goes wrong.
Granted, if everybody plays nice the robust features will not be needed, but for a robust system you want to make sure it doesn't go down because of some buggy client process.
I have a Linux process that is being called numerous times, and I need to make this process as fast as possible.
The problem is that I must maintain a state between calls (load data from previous call and store it for the next one), without running another process / daemon.
Can you suggest fast ways to do so? I know I can use files for I/O, and would like to avoid it, for obvious performance reasons. Should (can?) I create a named pipe to read/write from and by that avoid real disk I/O?
Pipes aren't appropriate for this. Use POSIX shared memory or a POSIX message queue if you are absolutely sure files are too slow, which you should test first.
In the shared memory case your program creates the segment with shm_open() if it doesn't exist or opens it if it does. You mmap() the memory and make whatever changes and exit. You only shm_unlink() when you know your program won't be called anymore and no longer needs the shared memory.
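A sketch of that flow, with an assumed segment name and size:

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

#define STATE_SIZE 4096   /* assumed size of the persistent state */

/* Create the segment on the first run, open it on later runs, and map it.
 * The contents persist between runs until shm_unlink() or a reboot. */
void *open_state(void)
{
    int fd = shm_open("/myprog_state", O_RDWR | O_CREAT, 0600); /* hypothetical name */
    if (fd < 0)
        return NULL;
    if (ftruncate(fd, STATE_SIZE) < 0) {
        close(fd);
        return NULL;
    }
    void *p = mmap(NULL, STATE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);   /* the mapping stays valid after close() */
    return p == MAP_FAILED ? NULL : p;
}

(Link with -lrt on older glibc.)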
With message queues, just set up the queue. Your program reads the queue, makes whatever changes, writes the queue, and exits. Call mq_unlink() when you no longer need the queue.
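And a sketch of the message-queue variant, again with an assumed name and sizes:

#include <fcntl.h>
#include <mqueue.h>

#define STATE_MSG_SIZE 1024   /* assumed maximum size of the serialized state */

/* The queue outlives the process, so the previous run's state message is
 * still waiting when the next run starts. */
mqd_t open_state_queue(void)
{
    struct mq_attr attr = { .mq_maxmsg = 1, .mq_msgsize = STATE_MSG_SIZE };
    return mq_open("/myprog_state", O_RDWR | O_CREAT | O_NONBLOCK, 0600, &attr); /* hypothetical name */
}

/* A run would mq_receive() the previous state (EAGAIN if there is none yet),
 * update it, and mq_send() it back before exiting. Link with -lrt. */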
Both methods have kernel persistence so you lose the shared memory and the queue on a reboot.
It sounds like you have a process that is continuously executed by something.
Why not create a factory that spawns the worker threads?
The factory could provide the workers with any information needed.
... I can use files for I/O, and would like to avoid it, for obvious performance reasons.
I wonder what these reasons are...
Linux caches files in kernel memory in the page cache. Writes go to the page cache first; in other words, a write() syscall only copies the data from user space into the page cache (it is a bit more complicated when the system is under stress). Some time later, pdflush writes the data to disk asynchronously.
File read() first checks the page cache to see if the data is already available in memory to avoid a disk read. What it means is that if one program writes data to files and another program reads it, these two programs are effectively communicating via kernel memory as long as the page cache keeps those files.
If you want to avoid disk writes entirely, that is, the state does not need to be persisted across OS reboots, those files can be put in /dev/shm or in /tmp, which are normally the mount points of in-memory filesystems.
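For example, a state file kept under /dev/shm (a tmpfs mount on most distributions) never causes disk I/O; the path is just an illustration:

#include <fcntl.h>

/* Reads and writes on this descriptor stay in memory; the file disappears
 * on reboot, like the POSIX shared memory and message queue options. */
int open_state_file(void)
{
    return open("/dev/shm/myprog.state", O_RDWR | O_CREAT, 0600);
}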
If one of my processes opens a file, let's say for reading only, does the OS guarantee that no other process will write to it while I'm reading, possibly leaving the reading process with the first part of the old file version and the second part of the newer version, making data integrity questionable?
I am not talking about pipes, which cannot seek, but about regular files with seek support (at least when opened by only one process).
No, other processes can change the file contents as you are reading it. Try running "man fcntl" and ignore the section on "advisory" locks; those are "optional" locks that processes only have to pay attention to if they want. Instead, look for the (alas, non-POSIX) "mandatory" locks. Those are the ones that will protect you from other programs. Try a read lock.
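For reference, a read (shared) lock taken with fcntl() looks roughly like this. On its own it is advisory; for Linux mandatory locking the filesystem must also be mounted with the mand option and the file given the set-group-ID bit with group execute cleared:

#include <fcntl.h>
#include <unistd.h>

/* Block until a shared read lock on the whole file is granted. */
int lock_for_reading(int fd)
{
    struct flock fl = {
        .l_type   = F_RDLCK,   /* shared (read) lock */
        .l_whence = SEEK_SET,
        .l_start  = 0,
        .l_len    = 0,         /* 0 means "to the end of the file" */
    };
    return fcntl(fd, F_SETLKW, &fl);
}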
No, if you open a file, other processes can write to it, unless you use a lock.
On Linux, you can add an advisory lock on a file with:
#include <sys/file.h>
...
flock(file_descriptor, LOCK_EX); // apply an advisory exclusive lock (blocks until the lock is granted)
Any process which can open the file for writing, may write to it. Writes can happen concurrently with your own writes, resulting in (potentially) indeterminate states.
It is your responsibility as an application writer to ensure that Bad Things don't happen. In my opinion mandatory locking is not a good idea.
A better idea is not to grant write access to processes which you don't want to write to the file.
If several processes open a file, they will have independent file pointers, so they can lseek() without affecting one another.
If a file is opened by a threaded program (or, more generally, by tasks that share their file descriptors), the file pointer is also shared, so you need another way to access the file to avoid race conditions causing chaos: normally pread(), pwrite(), or the scatter/gather functions readv() and writev().
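A tiny sketch of the pread()/pwrite() style; each call carries its own offset, so the shared file position is neither read nor changed:

#include <unistd.h>

ssize_t read_record(int fd, void *buf, size_t len, off_t off)
{
    return pread(fd, buf, len, off);    /* offset is per call, not per descriptor */
}

ssize_t write_record(int fd, const void *buf, size_t len, off_t off)
{
    return pwrite(fd, buf, len, off);
}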
In a multithreaded Linux application, the application quits without deleting the thread. Will this cause any thread resource leakage? If this application is launched many times during the course of the day, will the system crash?
For the most part, all resources used by a program are cleaned up when the program exits. There are a few exceptions (partial list here, no doubt):
files created (duh!)
TCP sockets may take several minutes after program exit to fully clean up (e.g., TIME_WAIT sockets)
SysV shared memory, semaphores, and message queues (clean up manually using ipcs/ipcrm)
Other than that, pretty much everything is cleaned up. Including threads.
Naturally, you should test this.
The kernel generally cleans up a process's resources (open files, threads, allocated memory, etc.) when it exits, so I don't think you need to worry. It could still be cleaner to shut the thread down explicitly, depending on your preferred coding style.