When does a created file become visible to other processes in ext4?

Consider this pseudo-code and an ext4 file system:
f = open("/tmp/new_file", "w")
write(f, "Test")
close(f)
In another process, I try to open /tmp/new_file immediately afterwards:
Questions
Can the other process open the file?
What content does the other process see? Is it Test?
Expectations
I expect (1) to be true (the metadata is probably synchronized between processes) but (2) to be false (data might be buffered)
More questions
How can I ensure that my file changes are visible to other processes? flush seems to work but it is bad for performance because it forces a write-to-disk. Is there something like soft-flush that makes the changes visible to other processes without flushing it to disk?
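To make the distinction concrete, here is a rough C sketch (reusing the example path from above) of the two levels of "flushing" the question contrasts; it is illustrative only, and whether it matches the asker's actual runtime is an assumption:
#include <stdio.h>
#include <unistd.h>

FILE *f = fopen("/tmp/new_file", "w");
fputs("Test", f);
fflush(f);              /* pushes stdio's user-space buffer into the kernel page cache,
                           which is what other processes read from */
/* fsync(fileno(f)); */ /* would additionally force the data onto the disk -- the expensive part */
fclose(f);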

Is it guaranteed that the other process can see the file?
No, it's not guaranteed.
A third process can delete the file, even while you have it open.

Related

File append race condition (single thread!)

My program does something like:
Open a file (append mode)
Write some stuff
Close
[repeat]
The file is different most of the time, but on certain occasions (not uncommon, really) the same file is used again (either consecutively or within a few iterations).
Is there any chance the kernel can play tricks on me, so that opening the file does not point to the end of the file? Say the write has not yet completed (it is buffered somewhere in the kernel) and opening the file again gives a descriptor whose position is not the real end of the file. That could potentially result in overlapping writes.
As I said, my program is single-threaded; I see no reason why this would happen, but I do not fully understand the kernel's guarantees when it comes to this.
Thanks!
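For reference, one iteration of the pattern described above might look like this in C (the log path is just a placeholder):
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* one iteration: open in append mode, write some stuff, close */
int fd = open("/var/log/example.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
if (fd != -1) {
    const char *line = "some stuff\n";
    write(fd, line, strlen(line)); /* with O_APPEND the kernel positions each write
                                      at the file's current end before writing */
    close(fd);
}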

fs.createWriteStream over several processes

How can I implement a system where multiple Node.js processes write to the same file with fs.createWriteStream, such that they don't overwrite data? It looks like the default setup for fs.createWriteStream is that the file is cleared out when that method is called. My goal is to clear out the file once, and then have all other subsequent writers only append data.
Should I use fs.createWriteStream and then fs.appendFile? Or is there a way to open up a stream for each process, not just for the first process to open the file?
Should I use fs.createWriteStream and then fs.appendFile?
You can use either.
With fs.createWriteStream you have to change the flags like this:
fs.createWriteStream('your_file', {
  flags: 'a+', // default is 'w' (just 'a' might be enough here, I'm not sure)
})
This should create the file if it doesn't exist, or open it with write access if it does, and set the position to the end of the file (append mode).
How to use fs.appendFile should be clear and it does pretty much the same.
Now to the problem of multiple processes accessing the same file. To keep the writes from interleaving or clobbering each other, only one process should have write access to the file at a time.
Therefore you need to wait for the file to be released if another process currently has write access. You will probably need a library for that.
this one for example: https://www.npmjs.com/package/lockup
or this one: https://github.com/Perennials/mutex-node
You can also find a lot more here: https://www.npmjs.com/browse/keyword/lock
or here: https://www.npmjs.com/browse/keyword/mutex
I have not tried any of those libraries but the one I posted and several others on the list should do exactly what you need.
Writing to a single file from multiple processes while ensuring data integrity is a fairly complex operation that you can orchestrate using file locking.
However, you have two simpler approaches:
Writing to a temporary file for each process, and then concatenating the files at the end of the operations.
Transmitting what you need to write to a single dedicated process and delegating the writing to it. Keep in mind that sending messages between processes can be expensive.

cross-process locking in linux

I am looking to make an application in Linux, where only one instance of the application can run at a time. I want to make it robust, such that if an instance of the app crashes, that it won't block all the other instances indefinitely. I would really appreciate some example code on how to do this (as there's lots of discussion on this topic on the web, but I couldn't find anything which worked when I tried it).
You can use the file locking facilities that Linux provides. You haven't specified the language; however, you might find this capability pretty much everywhere in some form or another.
Here is a simple idea of how to do that in a C program. When the program starts, you can take an exclusive non-blocking lock on the whole file using the fcntl system call. When another instance of the application tries to start, it will get an error when trying to lock the file, which means the application is already running.
Here is a small example of how to take a full-file lock using fcntl (the call provides byte-range locks, but when the length is 0, the whole file is locked).
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* fd is a descriptor obtained earlier from open() on the lock file */
struct flock lock_struct;
memset(&lock_struct, 0, sizeof(lock_struct));
lock_struct.l_type = F_WRLCK;    /* exclusive (write) lock */
lock_struct.l_whence = SEEK_SET; /* l_start and l_len stay 0, so the whole file is locked */
lock_struct.l_pid = getpid();
ret = fcntl(fd, F_SETLK, &lock_struct); /* non-blocking: returns -1 if another process holds the lock */
Please note that you need to open a file first to put a lock on it. This means you need to have a file around to use for locking. It might be useful to put it somewhere where it won't cause any distraction/confusion for other applications.
When the process terminates, all locks that it has taken will be released, so nothing will be blocked.
This is just one of the ideas. I'm pretty sure there are other ways around.
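For illustration, here is a minimal self-contained sketch of that idea; the lock-file path /tmp/myapp.lock is only a placeholder:
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* open (or create) a dedicated lock file; the path is an arbitrary example */
    int fd = open("/tmp/myapp.lock", O_RDWR | O_CREAT, 0644);
    if (fd == -1) {
        perror("open");
        return EXIT_FAILURE;
    }

    struct flock lock;
    memset(&lock, 0, sizeof(lock));
    lock.l_type = F_WRLCK;    /* exclusive lock */
    lock.l_whence = SEEK_SET; /* l_start/l_len of 0 lock the whole file */

    if (fcntl(fd, F_SETLK, &lock) == -1) {
        fprintf(stderr, "another instance is already running\n");
        return EXIT_FAILURE;
    }

    sleep(60); /* stand-in for the application's real work */
    return EXIT_SUCCESS; /* the lock is released automatically when the process exits */
}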
The conventional UNIX way of doing this is with PID files.
Before a process starts, it checks to see if a pre-determined file, usually /var/run/<process_name>.pid, exists. If it is found, that is an indication that an instance is already running, and this process quits.
If the file does not exist, this is the first process to run. It creates the file /var/run/<process_name>.pid and writes its PID into it. The process unlinks the file on exit.
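A rough sketch of that convention (using O_EXCL so the existence check and the creation are a single atomic step; "myapp", the error handling, and the privileges needed to write under /var/run are all assumptions here):
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* try to create the PID file; O_EXCL makes this fail if it already exists */
int fd = open("/var/run/myapp.pid", O_WRONLY | O_CREAT | O_EXCL, 0644);
if (fd == -1) {
    /* an instance is already running, or a stale PID file was left behind */
} else {
    dprintf(fd, "%d\n", getpid()); /* record our PID */
    close(fd);
    /* ... run, then unlink("/var/run/myapp.pid") on exit ... */
}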
Update:
To handle cases where a daemon has crashed & left behind the pid file, additional checks can be made during startup if a pid file was found:
Do a ps and ensure that a process with that PID doesn't exist
If it exists, ensure that it's a different process:
from the said ps output
from /proc/$PID/stat

How to create temporary files on linux that will automatically clean up after themselves no matter what?

I want to create a temporary file on Linux while making sure that the file will disappear after my program has terminated, even if it got killed or someone performs a hard reboot at the wrong moment. Does tmpfile() handle all this for me?
You seem preoccupied with the idea that files might somehow get left behind because of some race condition, but I don't see an explanation of why this is a concern.
"A race condition occurs when a program doesn't work as it's supposed to because of an unexpected ordering of events that produces contention over the same resource."
I was assuming from your comments on other answers that your concern was specifically a deadlock, which is the result of trying to remediate a race condition (contention for the shared resource). It is still not clear what your concern is; calling tmpfile() and having the program exit abnormally before that function gets to call unlink() is the least of your worries if your application is really that fragile.
Given that there isn't any mention of concurrency, threading or other processes sharing the file descriptor to this temp file, I still don't see the possibility of a race condition; maybe the concept of an incomplete logical transaction, but that can be detected and cleaned up.
The correct way to make absolutely sure that any allocated file system resources are cleaned up is not solely on exit of an application but also on start-up. All my server code makes sure that everything is cleaned up from a previous run before it starts and makes itself available.
Put your temp files in a sub-directory in /tmp and make sure your application cleans this sub-directory on startup and on normal shutdown. You can wrap your app's start-up with a shell script that detects abnormal (kill -9) shutdown based on PID existence and also does clean-up activities.
If you don't want to use tmpfile(), you can unlink() your file immediately after creating it. It will stay open and present and allocated until it is closed.
But on a hard reboot, an fsck might be needed in order to recover the space. As this is always the case anyway, it is no special drawback of this approach.
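A minimal sketch of that unlink-after-create trick (the file name template is just an example):
#include <stdlib.h>
#include <unistd.h>

char path[] = "/tmp/myapp-XXXXXX";  /* template; mkstemp() fills in the X's */
int fd = mkstemp(path);             /* create and open a unique temporary file */
if (fd != -1)
    unlink(path);                   /* the name disappears, but fd stays usable until it is closed */
/* ... read()/write() through fd; the space is reclaimed on close() or when the process dies ... */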
According to the tmpfile() man page:
The file will be automatically deleted when it is closed or the
program terminates.
I have not tested, but it seems it should do what you want.
Moreover:
The default location, if TMPDIR is not set, is /tmp.
Then, when a reboot happens, /tmp will be empty.
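For reference, a minimal tmpfile() usage sketch:
#include <stdio.h>

FILE *tmp = tmpfile();  /* opened for update; glibc creates and immediately unlinks the file */
if (tmp != NULL) {
    fputs("scratch data\n", tmp);
    rewind(tmp);
    /* ... read it back ... */
    fclose(tmp);        /* the storage is released here, or at the latest on process exit */
}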
EDIT: Yes
I checked the tmpfile source, and it does indeed use glglgl's trick and instantly unlinks the file.
Original:
I would say no. Getting killed should work, but I would assume it can happen that after a hard reboot (e.g. due to a power outage) the file is still there. That depends on your Linux distribution and the settings used, though.
If the temp file is created on a ramdisk, it is gone (there are Unix distributions out there that, for example, use a RAM-based tmpfs for temporary files).
Or, if you use an environment that has a certain policy regarding /tmp, it could also be gone (maybe not instantly, but there are often policies such as removing all files in /tmp that have not been accessed within one month). It could also be on a standard file system where no such rules are enforced; in that case the file would stay.
The customary approach is to set up a signal handler to clean up if the program is interrupted. This will not handle kill -9 or a physical reboot, which can't be trapped. Create temporary files in /tmp, which is normally cleaned out when the system boots. All that remains then is to teach people not to use kill -9 when they don't need to, but that appears to be an uphill battle.
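A rough sketch of that customary approach (the temp-file template and the placeholder work loop are assumptions):
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static char tmp_path[] = "/tmp/myapp-XXXXXX"; /* placeholder temp-file template */

static void cleanup(int sig)
{
    unlink(tmp_path);   /* unlink() is async-signal-safe */
    _exit(128 + sig);
}

int main(void)
{
    int fd = mkstemp(tmp_path);
    if (fd == -1)
        return EXIT_FAILURE;

    signal(SIGINT, cleanup);  /* Ctrl-C */
    signal(SIGTERM, cleanup); /* plain kill; SIGKILL and a hard reboot cannot be caught */

    sleep(10);          /* stand-in for the real work */

    unlink(tmp_path);   /* normal-path cleanup */
    close(fd);
    return EXIT_SUCCESS;
}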
On Linux, the mktemp command works.

Move file in Linux only when it's not in use by another process

Using the lsof command, I can determine whether a file is in use by some process, but I need to atomically check a file for use and move it only if unused. These files are in use by various other programs over which I have no control, so I can't use advisory locks. The purpose is to stop other processes from modifying that file, so just moving the file while a process has it open is not OK. Any solution?
UPDATE: a solution just occurred to me that I think suits my purposes. The end goal is to process these files in their final state when other programs finish modifying them. If I move the file to another directory, I can then use lsof to check whether it is still in use via its old path; if so, I just check again later until it's no longer in use and then process the file. Moving the file to another directory hides it from users and the program. I don't want users and programs seeing the file in the old directory, because that gives them the opportunity to open the file between the time I use lsof and the time I process it, which means I'd be processing the file in a modified state.
How about using fuser? Run it on a file and it will display the PIDs of processes using the file. If there are no PIDs, there is nothing using the file. It will also return a non-zero exit code if there is no process using the file.
However, note that you could still have a race condition, because a process could open the file after the fuser command returns and before you mv it.
Some sample code to move a file if not in use:
if ! fuser /my/file
then
mv /my/file /somewhere/else
fi
You can move a file which something is accessing, assuming you move it on the same file system, using the rename(2) syscall (which mv would use if source and target are on the same file system). You could even remove it using the unlink(2) system call. And moving such a file would indeed prevent other future processes from accessing it by that same path.
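For example (the paths mirror the ones used above, and this assumes source and destination are on the same file system):
#include <stdio.h>

/* rename(2) succeeds even if another process still has /my/file open; that process
   keeps its open descriptor, but the old path no longer resolves for anyone else */
if (rename("/my/file", "/somewhere/else/file") == -1)
    perror("rename");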
You could also use the inotify(7) API to be notified when something accesses it.
You might also consider mandatory locking, at least with some file systems.
But rumor has it that mandatory locking does not work well and can sometimes be buggy.
