How does logrotate(8) tell the process that the inode has changed? - linux

Say the log file is biz.log.
I read that logrotate(8) by default (1) renames the old log file with a date suffix, biz.log.20220129, then (2) creates a new file with the original name, and finally
(3) switches the process's log output to the new file's inode.
My question: the fact that the biz process writes through an inode rather than a file name makes steps (1) and (2) reasonable, but how does step (3) work? I don't see how logrotate can tell the process that the log file should be reopened, and do I have to add some code to handle this in my project?

It requires cooperation between logrotate and the application. Logrotate must be configured, through a postrotate script, to send a signal (e.g. SIGUSR1) that the application handles by reopening its log files by name. Other interprocess communication methods would work too, but I believe signals are the common convention. See, for example, this discussion of a logrotate configuration for httpd: https://serverfault.com/questions/262868/logrotate-configuration-for-httpd-centos
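For illustration, a minimal sketch of both halves. The log path, the PID file, and the choice of SIGUSR1 are assumptions for the example, not anything standard:

/var/log/biz.log {
    daily
    rotate 7
    postrotate
        /bin/kill -USR1 "$(cat /var/run/biz.pid)" 2>/dev/null || true
    endscript
}

And the application side, in C, where the handler only sets a flag and the main loop does the actual reopen:

#include <signal.h>
#include <stdio.h>

static volatile sig_atomic_t reopen_requested = 0;
static FILE *logfp = NULL;

static void on_sigusr1(int sig) {
    (void)sig;
    reopen_requested = 1;   /* only set a flag: fopen() is not async-signal-safe */
}

/* Call this from the main loop between writes. */
static void maybe_reopen(void) {
    if (!reopen_requested)
        return;
    reopen_requested = 0;
    if (logfp)
        fclose(logfp);
    logfp = fopen("/var/log/biz.log", "a");  /* reopen by name: picks up the new inode */
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = on_sigusr1;
    sigaction(SIGUSR1, &sa, NULL);
    logfp = fopen("/var/log/biz.log", "a");
    /* ... main loop: write to logfp and call maybe_reopen() periodically ... */
    return 0;
}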
If there is no option to modify the application to handle a signal on rotation, you can configure logrotate with the copytruncate option instead: logrotate copies the current file to the backup file and then truncates the current file to 0 bytes. If the application opened the file in append mode, this is seamless (apart from a small window in which log lines can be lost). See the copytruncate section of https://man7.org/linux/man-pages/man8/logrotate.8.html.
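A minimal sketch of that configuration (the path and sizes are placeholders):

/var/log/biz.log {
    size 100M
    rotate 5
    copytruncate
}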

Related

linux logrotate, rotate file with underscore

I have a logrotate configuration which is given below.
/var/log/Test.log {
size 50k
missingok
rotate 10
notifempty
create 0640 root root
}
The file rotates successfully once it reaches 50 kB, producing files in the format below:
Test.log.1
Test.log.2
Test.log.3
What can I do if I want to rotate files in the format below?
Test_1.log
Test_2.log
Test_3.log
Try this one; it should help you.
/var/log/Test.log {
...
postrotate
/usr/bin/mv /var/log/Test.log.1 /var/log/Test_1.log
...
endscript
}
OK, so the problem here is that logrotate uses these filename extensions to keep track of its rotations. So if you have x.log, it inherently needs a predictable, sequential extension to keep track of what is what; i.e. x.log.1 is the first rotation, x.log.2 is the second, etc.
By renaming the file like that, you're moving the extension to the middle of the file name and subsequently leaving every file with the same extension: .log. So as far as logrotate is concerned, those are all unrelated files, not rotations of a previous log.
E.g., if you rename Test.log.1 to Test_1.log and leave that file in the same folder, logrotate won't know that it's a rotation of Test.log and will create a new Test.log.1 upon the next rotation. And so on, and so on.
So the short answer is no, there is no logrotate directive that will rename a file the way you want. You will have to create this functionality yourself using a postrotate script, and you will also have to (at some point) move these files out of the /var/log/ directory once they're renamed, using either said postrotate script or the olddir directive.
The dateext directive might also be useful, as in my opinion date extensions are typically more user-friendly than numerical ones.
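For example (a sketch based on the config above; note that dateext appends the date after the full file name, e.g. Test.log-20220129, so it still won't produce the Test_1.log form):

/var/log/Test.log {
    size 50k
    missingok
    rotate 10
    notifempty
    dateext
    create 0640 root root
}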

Why is a file still accessible after deleting it in unix?

I was thinking about a concurrency issue (on Solaris): what happens if someone tries to delete a file while it is being read? I have a question about file existence on Solaris/Linux. Suppose I have a file test.txt. I opened it in the vi editor, then opened a second session and removed the file, but even after deleting it I am still able to read it. So here are my questions:
Do I need to think about a locking mechanism while reading, so that no one can delete the file during the read?
Why is the behavior different from Windows (where, if a file is open in some editor, we cannot delete it)?
After removing the file, how am I still able to read it if I haven't closed it in the vi editor?
I am asking about files in general, but yes, platform-specific, i.e. unix. What happens if I am using a Java program (a BufferedReader) to read a file and the file is deleted mid-read? Is the BufferedReader still able to read the next chunk or not?
You have basically 2 or 3 unrelated questions there. Text editors like to read the whole file into memory at the start of the editing session. Imagine every character you type being saved to disk immediately, with all characters after it in the file being rewritten one place further along to make room. That would be awful. Much better that the thing you're actually editing is a memory representation of the file (array of pointers to lines, probably with some metadata attached) which only gets converted back into a linear stream when you explicitly save.
Any relatively recent version of vim will notify you if the file you are editing is deleted from its original location with the message
E211: File "filename" no longer available
This warning is not just for unix. gvim on Windows will give it to you if you delete the file being edited. It serves as a reminder that you need to save the version you're working on before you exit, if you don't want the file to be gone.
(Note: the warning doesn't appear instantly - vim only checks for the original file's existence when you bring it back into the foreground after having switched away from it.)
So that's question 1, the behavior of text editors - there's no reason for them to keep the file open for the whole session because they aren't actually using it except at startup and during a save operation.
Question 2, why do some Windows editors keep the file open and locked - I don't know, Windows people are nuts.
Question 3, the one that's actually about unix, why do open files stay accessible after they're deleted - this is the most interesting one. The answer, guaranteed to shock you when presented directly:
There is no command, function, syscall, or any other method which actually requests deletion of a file.
Underlying rm and any other command that may appear to delete a file there is the system call unlink. And it's called unlink, not remove or deletefile or anything similar, because it doesn't remove a file. It removes a link (a.k.a. directory entry) which is an association between a file and a name in a directory. (Note: ANSI C added remove as a more generic function to appease non-unix people who had no intention of implementing unix filesystem semantics, but on unix, remove is just a rmdir if the target is a directory, and unlink for everything else.)
A file can have multiple links (see the ln command for how they are created), which means that the same file is known by multiple names. If you rm one of them, the others stick around and the file is not deleted. What happens when you remove the last link? Well, now you have a file with no name. But names are only one kind of reference to a file. There are at least 2 others: file descriptors and mmap regions. When the last reference to a file goes away, that's when the file is deleted.
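You can watch this happen in a few lines of C (a sketch; the path is arbitrary):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = open("/tmp/demo.txt", O_CREAT | O_RDWR, 0644);
    if (fd < 0) { perror("open"); return 1; }
    write(fd, "hello\n", 6);
    unlink("/tmp/demo.txt");          /* removes the name... */
    /* ...but not the file: this descriptor is still a live reference */
    char buf[16];
    lseek(fd, 0, SEEK_SET);
    ssize_t n = read(fd, buf, sizeof buf);
    printf("read %zd bytes after unlink\n", n);   /* prints: read 6 bytes after unlink */
    close(fd);                        /* last reference gone: now the file is deleted */
    return 0;
}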
Since references come in several forms, there are many kinds of events that can cause a file to be deleted. Here are some examples:
unlink (rm, etc.)
close file descriptor
dup2 (can implicitly close a file descriptor before replacing it with a copy of a different one)
exec (can cause file descriptors to be closed via close-on-exec flag)
munmap (unmap memory region)
mmap (if you create a new memory map at an address that's already mapped, the old mapping is unmapped)
process death (which closes all file descriptors and unmaps all memory mappings of the process)
normal exit
fatal signal generated by the kernel (^C, segfault)
fatal signal sent from another process (kill)
I won't call that a complete list. And I don't encourage anyone to try to build a complete list. Just know that rm is "remove name", not "remove file", and files go away as soon as they're not in use.
If you want to destroy the contents of a file immediately, truncate it. All processes already using it will find that its size has suddenly become 0. (This is destruction as far as the normal file access methods are concerned. To destroy it more thoroughly so that even someone with raw disk access can't read what used to be there, you need to overwrite it. There's a tool called shred for that.)
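A sketch of that in C (the path is arbitrary; the shell equivalent is ": > /tmp/demo.txt"):

#include <unistd.h>

int main(void) {
    /* every process that has the file open sees its size drop to 0 */
    return truncate("/tmp/demo.txt", 0) == 0 ? 0 : 1;
}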
I think your question has nothing to do with the difference between Windows and Linux. It's about how vi works.
When you use vi to edit a file, vi creates a .swp file, and the .swp file is what you are actually editing. So another user deleting the original file will not affect your editing.
And when you type :w in vi, vi uses the .swp file to overwrite the original file.

Why does syslog stop writing logs after a certain log is edited?

I am using CentOS 5.8.
I wonder why syslog stops writing logs after a certain log file is edited.
For example, after cutting a line from /var/log/messages, syslog doesn't write any new logs.
Only the old logs remain.
But if I delete the messages file and reboot the system, syslog works fine.
Is there any way to make syslogd keep writing new logs after a log file has been edited?
It depends on how exactly the file is edited.
Remember that syslogd keeps the file open for continuous writing. If your editor writes a new file with the old name after unlink()ing or rename()ing the old file, that old file remains in operation for syslogd. The new file, however, remains untouched.
syslogd can be told to start using the new file with the HUP signal.
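This is what stock syslog logrotate stanzas do; a sketch (the PID file path varies by distribution):

/var/log/messages {
    weekly
    rotate 4
    postrotate
        /bin/kill -HUP "$(cat /var/run/syslogd.pid 2>/dev/null)" 2>/dev/null || true
    endscript
}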
Well, that would depend on how syslogd works. I should mention that it's probably not a good idea to edit system log files, by the way :-)
You may have been caught out by one peculiarity of the way UNIX file systems can work.
If process A has a file open and is writing to it, process B can go and just delete the file (either with something like rm or, as seems likely here, in an edit session which deletes the old file and rewrites a new one).
Deletion of that original file does not destroy the data, it simply breaks the link between the file name (the directory entry) and that data.
A process that has the original file open can continue to write to it as long as it wants and, since there's no entry in the file system for it, you can't (easily) see it.
A new file with that file name may be brought into existence but process A will not be writing to it, it will be writing to the original. There will be no connection between the old and the new file.
You'll sometimes see this effect when you're running low on disk space and decide to delete a lot of files to clear it up somewhat. If processes have those files open, deleting them will not help you at all since the space is still being used (and quite possibly the usage is increasing).
You have to both delete the files and somehow convince those processes holding them open to relinquish them.
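On Linux you can spot such files because the /proc fd symlinks of deleted-but-open files are marked, e.g.:
ls -l /proc/*/fd 2>/dev/null | grep '(deleted)'
The space is not reclaimed until those descriptors are closed.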
If you edited /var/log/syslog you need to restart the syslog service afterwards, because syslogd needs to open the file handle again.

log4j fileappender doesn't switch to the new file when logrotate rotates the log file

Context:
I want to use log4j to write audit-related logs to a specific log file, let's say audit.log. I don't want to use the SyslogAppender (UDP-based) because I can't tolerate data loss. Also, I am using logrotate to rotate audit.log when the file reaches a certain size.
Problem:
What I am encountering is that when logrotate rotates audit.log to audit.log.1, log4j keeps writing to audit.log.1 rather than to the new audit.log.
Possible approaches:
I know I can use RollingFileAppender to do the rotation instead of logrotate, so that when RollingFileAppender rolls the file it switches to the new file without hassle. But the reason I can't use RollingFileAppender is that I want logrotate's postrotate feature to trigger some scripts after the rotation happens, which RollingFileAppender can't provide.
Another, more desperate option I can think of is to write a custom log4j appender myself that closes the old log file (audit.log.1) and opens the new one (audit.log) when it detects that the file has been rotated.
I have never used ExternallyRolledFileAppender, but would it be possible to use logrotate's postrotate to send the roll-over trigger to ExternallyRolledFileAppender, making log4j aware that the file has rotated so it starts writing to the new file?
Question:
Just wondering: has an appender like that already been invented/written? Or do I have other options to solve this?
Check out logrotate's copytruncate option; it might help your case:
copytruncate
    Truncate the original log file to zero size in place after creating a copy, instead of moving the old log file and optionally creating a new one. It can be used when some program cannot be told to close its logfile and thus might continue writing (appending) to the previous log file forever. Note that there is a very small time slice between copying the file and truncating it, so some logging data might be lost. When this option is used, the create option will have no effect, as the old log file stays in place.
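A sketch for the audit.log case, then, combining copytruncate with the postrotate hook you need (the path, size, and script name are placeholders):

/path/to/audit.log {
    size 100M
    rotate 10
    copytruncate
    postrotate
        /path/to/your-post-rotation-script.sh
    endscript
}

log4j keeps its file descriptor on audit.log the whole time, so no appender changes are needed; the trade-off is the small copy-to-truncate window the man page warns about.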

How can you tell what files are currently open by any user?

I am trying to write a script or a piece of code to archive files, but I do not want to archive anything that is currently open. I need a way to determine which files in a directory are open. I want to use either Perl or a shell script, but can try other languages if needed. It will be in a Linux environment and I do not have the option to use lsof. I have also had inconsistent results with fuser. Thanks for any help.
I am trying to take log files in a directory and move them to another directory. If the files are open, however, I do not want to do anything with them.
You are approaching the problem incorrectly. You wish to keep files from being modified underneath you while you are reading, and cannot do that without operating system support. The best that you can hope for in a multi-user system is to keep your archive metadata consistent.
For example, if you are creating an archive directory, make sure that the number of bytes stored in the archive matches what is in the source directory. You can checksum the file contents before and after reading from the filesystem, compare that with what you wrote to the archive, and perhaps flag the file as "inconsistent".
What are you trying to accomplish?
Added in response to comment:
Look at logrotate to steal ideas about how to handle this consistently, or just have it do the work for you. If you are concerned that renaming files will break processes that are currently writing to them, take a look at man 2 rename:
rename() renames a file, moving it between directories if required. Any other hard links to the file (as created using link(2)) are unaffected. Open file descriptors for oldpath are also unaffected.
If newpath already exists it will be atomically replaced (subject to a few conditions; see ERRORS below), so that there is no point at which another process attempting to access newpath will find it missing.
Try ls -l /proc/*/fd/* as root. Each entry is a symbolic link to the open file, so you can filter the output for the directory you care about.
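If you need this programmatically without lsof, here is a sketch in C that walks /proc the same way (the /var/log/ default is only an example; run it as root to see other users' processes):

#include <ctype.h>
#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv) {
    const char *prefix = argc > 1 ? argv[1] : "/var/log/";
    DIR *proc = opendir("/proc");
    if (!proc) { perror("opendir /proc"); return 1; }
    struct dirent *p;
    while ((p = readdir(proc)) != NULL) {
        if (!isdigit((unsigned char)p->d_name[0]))
            continue;                              /* numeric entries are PIDs */
        char fddir[PATH_MAX];
        snprintf(fddir, sizeof fddir, "/proc/%s/fd", p->d_name);
        DIR *fds = opendir(fddir);                 /* may fail without privileges */
        if (!fds)
            continue;
        struct dirent *f;
        while ((f = readdir(fds)) != NULL) {
            if (f->d_name[0] == '.')
                continue;
            char link[PATH_MAX], target[PATH_MAX];
            snprintf(link, sizeof link, "%s/%s", fddir, f->d_name);
            ssize_t n = readlink(link, target, sizeof target - 1);
            if (n < 0)
                continue;
            target[n] = '\0';
            if (strncmp(target, prefix, strlen(prefix)) == 0)
                printf("pid %s holds %s open\n", p->d_name, target);
        }
        closedir(fds);
    }
    closedir(proc);
    return 0;
}

Compile and run it with, say, cc -o openfiles openfiles.c && sudo ./openfiles /var/log/ (the file name is hypothetical). Note the check is inherently racy: a file can be opened or closed between the scan and your move.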
msw has answered the question correctly, but if you want to see the list of open files, the lsof command will give it to you.
