I've been thinking about a concurrency issue (on Solaris): what happens if someone tries to delete a file while it is being read? I have a question about file existence on Solaris/Linux. Suppose I have a file test.txt open in a vi editor; then I open a second session and remove the file. Even after deleting it, I can still read it. So here are my questions:
Do I need to think about a locking mechanism while reading, so that no one can delete the file while it is being read?
Why is the behavior different from Windows (where, if a file is open in an editor, it cannot be deleted)?
After removing the file, how am I still able to read it, given that I haven't closed it in the vi editor?
I am asking about files in general, but yes, platform-specific, i.e. unix. What happens if I am reading a file with a Java program (BufferedReader) and the file is deleted mid-read? Will the BufferedReader still be able to read the next chunk or not?
You have basically 2 or 3 unrelated questions there. Text editors like to read the whole file into memory at the start of the editing session. Imagine every character you type being saved to disk immediately, with all characters after it in the file being rewritten one place further along to make room. That would be awful. Much better that the thing you're actually editing is a memory representation of the file (array of pointers to lines, probably with some metadata attached) which only gets converted back into a linear stream when you explicitly save.
Any relatively recent version of vim will notify you if the file you are editing is deleted from its original location with the message
E211: File "filename" no longer available
This warning is not just for unix. gvim on Windows will give it to you if you delete the file being edited. It serves as a reminder that you need to save the version you're working on before you exit, if you don't want the file to be gone.
(Note: the warning doesn't appear instantly - vim only checks for the original file's existence when you bring it back into the foreground after having switched away from it.)
So that's question 1, the behavior of text editors - there's no reason for them to keep the file open for the whole session because they aren't actually using it except at startup and during a save operation.
Question 2, why do some Windows editors keep the file open and locked - I don't know, Windows people are nuts.
Question 3, the one that's actually about unix, why do open files stay accessible after they're deleted - this is the most interesting one. The answer, guaranteed to shock you when presented directly:
There is no command, function, syscall, or any other method which actually requests deletion of a file.
Underlying rm and any other command that may appear to delete a file there is the system call unlink. And it's called unlink, not remove or deletefile or anything similar, because it doesn't remove a file. It removes a link (a.k.a. directory entry) which is an association between a file and a name in a directory. (Note: ANSI C added remove as a more generic function to appease non-unix people who had no intention of implementing unix filesystem semantics, but on unix, remove is just a rmdir if the target is a directory, and unlink for everything else.)
A file can have multiple links (see the ln command for how they are created), which means that the same file is known by multiple names. If you rm one of them, the others stick around and the file is not deleted. What happens when you remove the last link? Well, now you have a file with no name. But names are only one kind of reference to a file. There are at least 2 others: file descriptors and mmap regions. When the last reference to a file goes away, that's when the file is deleted.
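The link-count behavior is easy to see from a script. Here is a minimal Python sketch (the names file1/file2 are just examples):

```python
import os
import tempfile

d = tempfile.mkdtemp()
a = os.path.join(d, "file1")
b = os.path.join(d, "file2")

with open(a, "w") as f:
    f.write("hello\n")

os.link(a, b)                  # second directory entry for the same file
print(os.stat(a).st_nlink)     # 2: the file now has two names

os.unlink(a)                   # remove one name ...
data = open(b).read()          # ... the data is still reachable via the other
print(data)
```

After the unlink, the link count drops back to 1, but nothing about the file's contents changes.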
Since references come in several forms, there are many kinds of events that can cause a file to be deleted. Here are some examples:
unlink (rm, etc.)
close file descriptor
dup2 (implicitly closes a file descriptor before replacing it with a copy of a different file descriptor)
exec (can cause file descriptors to be closed via close-on-exec flag)
munmap (unmap memory region)
mmap (if you create a new memory map at an address that's already mapped, the old mapping is unmapped)
process death (which closes all file descriptors and unmaps all memory mappings of the process)
normal exit
fatal signal generated by the kernel (^C, segfault)
fatal signal sent from another process (kill)
I won't call that a complete list. And I don't encourage anyone to try to build a complete list. Just know that rm is "remove name", not "remove file", and files go away as soon as they're not in use.
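The "remove name, not file" behavior is straightforward to demonstrate; a small Python sketch (the path name is made up):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "w") as f:
    f.write("still here\n")

f = open(path)                     # hold an open file descriptor
os.unlink(path)                    # "rm" the *name*; the file itself lives on

name_gone = os.path.exists(path)   # False: no directory entry any more
data = f.read()                    # the open descriptor still works
print(name_gone, data)
f.close()                          # last reference dropped -> file really deleted
```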
If you want to destroy the contents of a file immediately, truncate it. All processes already using it will find that its size has suddenly become 0. (This is destruction as far as the normal file access methods are concerned. To destroy it more thoroughly so that even someone with raw disk access can't read what used to be there, you need to overwrite it. There's a tool called shred for that.)
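A quick Python illustration of the truncate behavior (file name assumed):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.txt")
with open(path, "w") as f:
    f.write("secret\n")

reader = open(path)           # some process already using the file
os.truncate(path, 0)          # destroy the contents immediately

size = os.fstat(reader.fileno()).st_size   # 0, even via the open descriptor
data = reader.read()                       # nothing left to read
print(size, repr(data))
reader.close()
```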
I think your question has nothing to do with the difference between Windows and Linux; it's about how vi works.
When you use vi to edit a file, vi creates a .swp file, and the .swp file is what holds your editing state. Because of that, another user deleting the original file will not affect your editing session.
When you type :w in vi, vi writes your buffer out over the original filename.
Related
I'm considering making my application create a file in /tmp. All I'm doing is making a temporary file that I'll copy to a new location. On a good day this looks like:
Write the temp file
Move the temp file to a new location
However on my system (RHEL 7) there is a file at /usr/lib/tmpfiles.d/tmp.conf which indicates that /tmp gets cleaned up every 10 days. From what I can tell this is the default installation. So, what I'm concerned about is that I have the following situation:
Write the temp file
/tmp gets cleaned up
Move the temp file to a new location (explodes)
Is my concern founded? If so, how is this problem solved in sophisticated programs? If there are no sophisticated tricks, then it's a bit puzzling to me as I don't have a concrete picture of what the utility of /tmp is if it can be blown away completely at any moment.
This should not be a problem as long as you keep a file descriptor open during your operation. As long as a file descriptor is open, the filesystem keeps the file on disk; it just doesn't appear in ls. So if you create another name for this file, it will "resurrect" in a way. Keeping an open fd on a deleted file is a common way to create temporary files on linux.
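That idiom can be sketched in Python (mkstemp here stands in for whatever creates your temp file):

```python
import os
import tempfile

# Create a temp file, grab the descriptor, and unlink the name right away.
# The descriptor keeps the storage alive, and there is no name left in /tmp
# for a periodic cleaner to remove out from under you.
fd, path = tempfile.mkstemp(dir="/tmp")
os.unlink(path)

os.write(fd, b"scratch data\n")
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 100)
print(data)
os.close(fd)        # storage reclaimed only here
```

(As far as I know, fully re-linking an unlinked file back into the filesystem on Linux is only supported for files opened with O_TMPFILE; for a plain unlinked file you keep using the descriptor, or copy its contents out before closing.)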
see man 3 unlink:

The unlink() function shall remove a link to a file. [..] unlink() shall remove the link named by the pathname pointed to by path and shall decrement the link count of the file referenced by the link.

When the file's link count becomes 0 and no process has the file open, the space occupied by the file shall be freed and the file shall no longer be accessible. If one or more processes have the file open when the last link is removed, the link shall be removed before unlink() returns, but the removal of the file contents shall be postponed until all references to the file are closed.
External updates made to the file's content between open() and the first read() are not returned by read().
How can I get the latest file content from the read()?
I've tried flush() and seek(0) but didn't help.
https://repl.it/repls/RealGreedyTransfer#main.py
import time

def myfoo(handle):
    print("myfoo started", flush=True)
    time.sleep(50)
    # External updates that happen during this time don't show up in read()
    # handle.flush()
    # handle.seek(0)
    # can't close and re-open the file handle
    print(handle.read())  # <-- does not see updates made after the file was opened

# Upstream code base passes a file handle held under an exclusive fcntl.lockf() lock
handle = open('temp.txt', 'r+')
myfoo(handle)
The issue is with the way files are written. Many text editors don’t just write to the file, they use a different method: they write to a temporary file, and then rename it to the original filename. Since renames are atomic in POSIX, in the event of a system crash during saving, the old version of the file will be available, and the new version might or might not be available in the temporary file.
For most purposes, this works as desired. The only exception is in this case, where you’re holding onto a file handle. Renames/moves/deletions do not affect the file handles, they are still open with the file they were opened with, even if that file is no longer accessible from the filesystem. You can experiment with this by opening a file, then removing it with rm, and then reading from the file — it will still show you the file contents from before you deleted it. You can also access the file in Linux inside /proc/XX/fd.
Your file handle won’t see changes, unless they are actually written (and flushed) to the same file (without the rename dance). If you’re working with something that writes by renaming, you would need to reopen the file to see the new contents.
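The rename dance and its effect on an old handle can be reproduced with a few lines of Python (the temp.txt name is taken from the question):

```python
import os
import tempfile

d = tempfile.mkdtemp()
path = os.path.join(d, "temp.txt")
with open(path, "w") as f:
    f.write("version 1\n")

handle = open(path)              # bound to the current file, not the name

# Editor-style save: write a new file, then rename it over the old name.
tmp = path + ".new"
with open(tmp, "w") as f:
    f.write("version 2\n")
os.replace(tmp, path)            # atomic rename on POSIX

old = handle.read()              # still the original contents
handle.close()
new = open(path).read()          # reopening the name gives the new contents
print(old, new)
```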
I have a problem working with the link command in Linux Mint.
I made file1 and added a new hard link to it:
link file1 file2
I know that when I change the contents of file1, file2 should change too, and when I edit file1 with vim or append text to it with redirection, it works fine. But when I edit file1 with the Kate editor, it's as if the editor breaks the link to file2: after that, when I change the contents of file1 with Kate, vim, etc., file2 never changes any more.
What's the problem?
I'm one of the Kate developers. The issue is as follows: Whenever Kate saves, it does so by saving into a temporary file in the same folder, and on success just does a move to the desired location.
This move operation is exactly what destroys your hard link: first, the hard link is removed, then the temporary file is renamed.
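The effect on a hard link can be shown with plain Python calls (a sketch of the mechanism, not Kate's actual code path):

```python
import os
import tempfile

d = tempfile.mkdtemp()
f1 = os.path.join(d, "file1")
f2 = os.path.join(d, "file2")

with open(f1, "w") as f:
    f.write("original\n")
os.link(f1, f2)                  # file1 and file2: one file, two names

# Save-by-rename, as QSaveFile-style saving does:
tmp = os.path.join(d, "file1.tmp")
with open(tmp, "w") as f:
    f.write("edited\n")
os.replace(tmp, f1)              # replaces the *name* file1 only

c1 = open(f1).read()             # the edited contents
c2 = open(f2).read()             # still the original: the link is broken
n = os.stat(f1).st_nlink         # 1: file1 now points at a brand-new file
print(c1, c2, n)
```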
While this avoids data loss, it also has its own problems as you experience. We are tracking this bug here:
https://bugs.kde.org/show_bug.cgi?id=358457 - QSaveFile: Kate removes a hard link to file when opening a file with several hard links
And in addition, QSaveFile also has two further issues, tracked here:
https://bugs.kde.org/show_bug.cgi?id=333577 - QSaveFile: kate changes file owner
https://bugs.kde.org/show_bug.cgi?id=354405 - QSaveFile: Files are unlinked upon save
The solution would be to write directly to the file in all these corner cases; then we could avoid this trouble at the expense of losing data in case of a system crash, so it's a tradeoff. To fix this, we would need to patch Qt, which no one has done so far.
Different programs save files in different ways. At least two come to mind:
open existing file and overwrite its content
create a temporary file, write the new content there, then somehow replace the original file with the new one (remove the old file and rename the new one; or rename the old file, rename the new file, then remove the old file; or use a system function to swap the files' contents (in fact, swap the names of the files), then remove the old file; or ...)
Judging from its current source code, Kate is using the latter approach (using QSaveFile in the end, with direct write fallback though). Why? Usually, to make file saving more or less atomic, and also to make sure that saving won't result in errors caused by e.g. lack of free space (although this also means that it uses the space of sum of old and new file sizes when saving).
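For reference, the save-to-temp-then-rename strategy can be sketched in Python; `atomic_save` is a made-up helper, not Kate's or Qt's implementation:

```python
import os
import tempfile

def atomic_save(path, data):
    """Write data to path so that a crash leaves either the old or the
    new contents under path, never a half-written file."""
    d = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=d)      # temp file on the same filesystem
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())           # data on disk before the rename
        os.replace(tmp, path)              # atomic swap of directory entries
    except BaseException:
        os.unlink(tmp)
        raise

d = tempfile.mkdtemp()
p = os.path.join(d, "doc.txt")
atomic_save(p, "v1\n")
atomic_save(p, "v2\n")
print(open(p).read())
```

Note that it needs scratch space for both the old and the new file at once, which matches the free-space remark above.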
I don't have Kate on Linux Mint but I have noticed issues which lead me to suspect you may have come across a "bug".
Here are two 'similar' issues with hard links that have been reported.
Bug 316240 - KSaveFile: Kate/kwrite makes a copy of a file while editing a hard link
"Hard link not supported" with NTFS on Linux #3241
My system crashed yesterday. The problem is that syntax highlighting is disabled and set autoindent has no effect.
Even if I remove this file and touch it again, it's still not right!
Swap files are not meant to edit directly.
Swap files are special files that store pieces of Vim's state, and pieces of the unsaved file, in a Vim-specific format. These are not backup files.
You may be able to use the swap file to recover any edits-in-progress. To do that, simply edit the file you were editing when your system crashed. Vim will detect the swap file and prompt you to recover the file if it is able to do so.
That is, unless you've corrupted the swap file by opening it directly in a misguided attempt to recover your file from it by hand.
Now, Vim does have a separate ability to make real backup files that are copies of your file, whenever it saves. But that doesn't help you from a system crash, that helps you when you mess up your file yourself while you edit, and then save it.
There is also a proposed new feature (in the todo list) for adding a command to recover an entire file from an undo file, if the file itself got deleted somehow, but that's not included in any released Vim yet.
I'm using CentOS 5.8.
I wonder why syslog stops writing logs after a certain log file is edited.
For example, after cutting lines from /var/log/messages, syslog doesn't write any new logs; only the old logs remain.
But if I delete the messages file and reboot the system, syslog works fine.
Is there any way to make syslogd keep writing new logs continuously after a log file has been edited?
It depends how exactly a file is edited.
Remember that syslogd keeps the file open for continuous writing. If your editor writes a new file with the old name after unlink()ing or rename()ing the old file, that old file remains in use by syslogd; the new file remains untouched.
syslogd can be told to reopen its files by sending it a HUP signal.
Well, that would depend on how syslogd works. I should mention that it's probably not a good idea to edit system log files, by the way :-)
You may have been caught out by one peculiarity of the way UNIX file systems can work.
If process A has a file open and is writing to it, process B can go and just delete the file (either with something like rm or, as seems likely here, in an edit session which deletes the old file and rewrites a new one).
Deletion of that original file does not destroy the data, it simply breaks the link between the file name (the directory entry) and that data.
A process that has the original file open can continue to write to it as long as it wants and, since there's no entry in the file system for it, you can't (easily) see it.
A new file with that file name may be brought into existence, but process A will not be writing to it; it will be writing to the original. There will be no connection between the old and the new file.
You'll sometimes see this effect when you're running low on disk space and decide to delete a lot of files to clear it up somewhat. If processes have those files open, deleting them will not help you at all since the space is still being used (and quite possibly the usage is increasing).
You have to both delete the files and somehow convince those processes holding them open to relinquish them.
If you edited /var/log/syslog you need to restart the syslog service afterwards, because syslogd needs to open the file handle again.