What happens when you delete shared memory files in /dev/shm using 'rm'? - linux

I used POSIX shared memory to communicate between two processes. While the two processes were sharing data, I used the 'rm' command to remove all the shared files mounted in /dev/shm. I expected some errors to occur, but everything kept working normally.
So I have a question as below:
What happens when I use the rm command to delete all the shared memory files in the /dev/shm directory?
I have googled but cannot find anywhere that discusses this situation.
Can anyone please explain it to me?
Thanks so much
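The behavior described can be reproduced with a plain file; a minimal sketch, where mktemp stands in for a file under /dev/shm and a held file descriptor plays the role of a process's open segment:

```shell
tmp=$(mktemp)               # stand-in for a file under /dev/shm
echo "shared data" > "$tmp"
exec 3< "$tmp"              # keep a descriptor open, like a process with the segment mapped
rm "$tmp"                   # the directory entry is gone...
cat <&3                     # ...but the data is still readable: prints "shared data"
exec 3<&-                   # closing the last handle lets the kernel free the storage
```

rm (like shm_unlink()) only removes the name; processes that already have the file open or mapped keep using it, and the storage is freed once the last reference goes away.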

Related

Protect file from system modifications

I am working on a linux computer which is locked down and used in kiosk mode to run only one application. This computer cannot be updated or modified by the user. When the computer crashes or freezes the OS rebuilds or modifies the ld-2.5.so file. This file needs to be locked down without allowing even the slightest change to it (there is an application resident which requires ld-2.5.so to remain unchanged and that is out of my control). Below are the methods I can think of to protect ld-2.5.so but wanted to run it by the experts to see if I am missing anything.
I modified the fstab to mount the EXT3 filesystem as EXT2 to disable journaling. Also set the DUMP and FSCK values to "0" to disable those processes.
Performed a "chattr +i ld-2.5.so" on the file but there are still system processes that can overwrite this protection.
I could attempt to trap the name of the processes which are hitting ld-2.5.so and prevent this.
Any ideas or hints would be greatly appreciated.
-Matt (CentOS 5.0.6)
chattr +i should be fine in most circumstances.
The ld-*.so files are under /usr/lib/ and /usr/lib64/. If /usr/ is a separate partition, you also might want to mount that partition read only on a kiosk system.
Do you have, by any chance, some automated updating/patching of said PC configured? ld-*.so is part of glibc and basically should only change if the glibc package is updated.

Track origin of file deletion during make

I have a CMake project I am working on, and when I run "make" then the rebuild of one of the targets is getting triggered regardless of whether I touch any of the files in the system. I have managed to discover that this is because one of the files on which the target depends is somehow getting deleted when I run "make", so the system registers that it needs to be rebuilt. I tried setting "chattr +i" on this file to make it immutable and indeed, with this flag set, the rebuilding of the target is not triggered, though there are also no errors from a deletion attempt. But I think I can be sure that this is the culprit.
So now my question. How can I figure out what script or makefile is actually deleting this file? It is a big project with quite a few scripts and sub-makefiles which potentially run at different times during the build, so I am having a hard time manually discovering what is doing this. Is there some nice trick I can pull from the filesystem side to help? I tried doing "strace make", but parsing the output I can't seem to find the filename appearing anywhere. I am no strace expert though, so perhaps the file is getting deleted via some identifier rather than the filename? Or I am not stracing spawned processes perhaps?
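One trick that usually works here is to trace deletions across all of make's child processes. A sketch (the `-f` flag follows forks; modern tools delete files via the `unlink`/`unlinkat` syscalls, which is one reason a plain `strace make` may not show an obvious `rm` of the filename; `name_of_deleted_file` below is a placeholder for your file):

```shell
# Trace make and every child it spawns, logging only deletion syscalls,
# then search the trace for the disappearing file's name.
strace -f -e trace=unlink,unlinkat -o /tmp/make-trace.log make
grep -n 'name_of_deleted_file' /tmp/make-trace.log
```

The trace file records the PID of each call, so once you find the deletion you can work backwards to the script or sub-make that issued it.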

Files "disappearing" from initramfs

On an embedded platform running Linux 2.6.36, I occasionally run into a problem where files do not appear in the root file system that ARE present in our initramfs cpio file.
I am building the initramfs from a cpio listing file (see gen_init_cpio.c), but also ran into the problem before when just using a full directory.
When I say I know the files are present in the cpio file, I mean if I extract usr/initramfs_data.cpio.gz the files are there.
It seems to be loosely related to the amount of content in the initramfs, but I haven't found the magic number of files and/or total storage size that causes files to start disappearing.
Is there an option in make menuconfig I'm missing that would fix this? A boot argument? Something else?
Any suggestions?
Update: To clarify, this is with a built-in ramdisk using CONFIG_INITRAMFS_SOURCE and it's compressed with gzip via setting CONFIG_INITRAMFS_COMPRESSION_GZIP. Also, this is for a mipsel-linux platform.
Update #2: I've added a printk to init/initramfs.c:clean_path and mysteriously, the previously "disappearing" files are now all there. I think this sorta seems to point to a kernel bug if attempting to log the behavior altered the behavior. I'll compare initramfs.c against a newer kernel tomorrow to see if that sheds any light on the matter.
Probably your image size is bigger than the default ramdisk size (4 MB, AFAIK). Check whether adding ramdisk_size=<value bigger than your image size> as a kernel parameter (before the root=... parameter) solves your problem. You can also try changing the kernel config value CONFIG_BLK_DEV_RAM_SIZE.
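For example, a kernel command line along these lines (the 16384 KiB value and the console device are illustrative, not taken from your setup; ramdisk_size is in KiB):

```
ramdisk_size=16384 root=/dev/ram0 console=ttyS0
```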

centos free space on disk not updating

I am new to Linux and working with a CentOS system.
Running df -H shows the disk is 82% full, that is, only 15 GB free.
I wanted some extra space, so using WinSCP I shift-deleted a 15 GB file
and executed df -H once again, but it still shows only 15 GB free. Where did
the space from the deleted file go?
Please help me find a solution to this.
In most Unix filesystems, if an open file is deleted, the OS removes its name right away but does not release the space until the file is closed. Why? Because the file is still usable by the process that opened it.
On the other side, Windows used to complain that it can't delete a file because it is in use; it seems that in later incarnations Explorer will pretend to delete the file.
Some applications are famous for bad behavior related to this. For example, I have had to deal with some versions of MySQL that do not properly close some files; over time I have found several GB of space wasted in /tmp.
You can use the lsof command to list open files (man lsof). If the problem is related to open files and you can afford a reboot, that is most likely the easiest way to fix it.
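If lsof is available, it can narrow this down directly; a sketch (the `+L1` option lists open files whose on-disk link count is zero, i.e. deleted but still held open):

```shell
# List files that are deleted but still open somewhere; the SIZE/OFF
# column shows roughly how much space each one is still pinning.
sudo lsof +L1
# Restrict the search to the filesystem that df reports as full:
sudo lsof +L1 /tmp
```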

rm not freeing diskspace [closed]

I've rm'ed a 2.5 GB log file, but it doesn't seem to have freed any space.
I did:
rm /opt/tomcat/logs/catalina.out
then this:
df -hT
and df reported my /opt mount still at 100% used.
Any suggestions?
Restart Tomcat. If the file is in use when you remove it, the space becomes available once the process holding it exits.
As others suggested, the file is probably still opened by other processes. To find out which ones, you can do
lsof /opt/tomcat/logs/catalina.out
which lists the processes that have it open. You will probably find Tomcat in that list.
Your Problem:
It's possible that a running program is still holding the file open.
Your Solution:
Per the other answers here, you can simply shut down Tomcat to stop it from holding on to the file.
If that is not an option, or if you simply want more details, check out this question: Find and remove large files that are open but have been deleted - it suggests some harsher ways to deal with it that may be more useful to your situation.
More Details:
The Linux/Unix filesystem considers open file handles to be additional "names" for a file. rm removes the "name" from the file as seen in the directory tree. Until the handles are closed, the file still has other "names" and so it still exists. The filesystem doesn't reap a file until it is completely unnamed.
It might seem a little odd, but doing it this way allows for useful things like hard links: a hard link is literally a second name for the same file.
This is why it is important to always call your language's equivalent of close() on a file handle when you are done with it. This notifies the OS that the file is no longer being used. Sometimes this can't be helped - which is likely the case with Tomcat. Refer to Bill Karwin's answer to read why.
Depending on the filesystem, this is usually implemented as a sort of reference count, so there may not be any real names involved. It can also get weird if things like stdin and stderr are redirected to a file or another byte stream (most commonly done with services).
This whole idea is closely related to the concept of 'inodes', so if you are the curious type, I'd recommend checking that out first.
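The "names" here are hard links, and the kernel tracks them with a per-inode link count; a quick sketch using GNU stat (`%h` prints the link count; the file names are throwaway):

```shell
# Two directory entries, one inode: the data lives until the last
# name is removed AND the last open handle is closed.
f=$(mktemp)
ln "$f" "$f.alias"      # create a second name for the same inode
stat -c %h "$f"         # prints 2 - two names
rm "$f"                 # remove one name...
stat -c %h "$f.alias"   # prints 1 - the file is still fully intact
rm "$f.alias"           # now the inode is truly freed
```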
Discussion
It doesn't work so well anymore, but you used to be able to update the entire OS, start up a new HTTP daemon using the new libraries, and finally close the old one when no more clients were being serviced by it (releasing the old handles). HTTP clients wouldn't even miss a beat.
Basically, you can completely wipe out the kernel and all the libraries "from underneath" running programs. Since the "name" still exists for the older copies, the files still exist on disk for those particular programs. Then it is just a matter of restarting all the services. While this is an advanced usage scenario, it is one reason some Unix systems have years of uptime on record.
Restarting Tomcat will release any hold Tomcat has on the file. However, to avoid restarting Tomcat (e.g. if this is a production environment and you don't want to bring the services down unnecessarily), you can usually just overwrite the file:
cp /dev/null /opt/tomcat/logs/catalina.out
Or even shorter and more direct:
> /opt/tomcat/logs/catalina.out
I use these methods all the time to clear log files for currently running server processes in the course of troubleshooting or disk clearing. This leaves the inode alone but clears the actual file data, whereas trying to delete the file often either doesn't work or at the very least confuses the running process' log writer.
As FerranB and Paul Tomblin have noted on this thread, the file is in use and the disk space won't be freed until the file is closed.
The problem is that you can't signal the Catalina process to close catalina.out, because the file handle isn't under control of the java process. It was opened by shell I/O redirection in catalina.sh when you started up Tomcat. Only by terminating the Catalina process can that file handle be closed.
There are two solutions to prevent this in the future:
Don't allow output from Tomcat apps to go into catalina.out. Instead use the swallowOutput property, and configure log channels for output. Logs managed by log4j can be rotated without restarting the Catalina process.
Modify catalina.sh to pipe output to cronolog instead of simply redirecting to catalina.out. That way cronolog will rotate logs for you.
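A sketch of the second approach, assuming cronolog is installed at the usual path (the exact redirection in catalina.sh varies by Tomcat version, so the elided command and log path here are illustrative):

```shell
# In catalina.sh, change the startup redirection from something like:
#   ... >> "$CATALINA_OUT" 2>&1 &
# to a pipe through cronolog, which opens a fresh file per period:
... 2>&1 | /usr/sbin/cronolog "$CATALINA_BASE"/logs/catalina.%Y-%m-%d.out &
```

Because cronolog closes each day's file when it rolls over, old logs can be deleted and their space is actually reclaimed.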
The best solution is using echo (as @ejoncas suggested):
$ echo '' > huge_file.log
This operation is quite safe and fast (it frees about 1 GB of data per second), especially when you are operating on a production server.
Don't simply remove this file using rm: you would first have to stop the process writing to it, otherwise the disk space won't be freed.
Refer to: http://siwei.me/blog/posts/how-to-deal-with-huge-log-file-in-production
UPDATE: the origin of my story
In 2013, when I was working for youku.com, one Saturday I found a core server was down; the reason: the disk was full (of log files).
So I simply ran rm log_file.log (without stopping the web app process) but found that: 1. no disk space was freed, and 2. the log file was no longer visible to me.
So I had to restart my web server (a Rails app), and the disk space was finally freed.
This was quite an important lesson for me. It taught me that echo '' > log_file.log is the correct way to free disk space if you don't want to stop the running process that is writing logs to the file.
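The difference is easy to see with a throwaway file; a minimal sketch, where mktemp stands in for the log and fd 3 for the running writer:

```shell
log=$(mktemp)
head -c 1048576 /dev/zero > "$log"   # simulate a 1 MB log file
exec 3>> "$log"                      # a "server" keeps it open for appending
: > "$log"                           # truncate in place; same inode, size drops to 0
stat -c %s "$log"                    # prints 0 - the space is reclaimed immediately
echo "more output" >&3               # the writer can keep appending unharmed
exec 3>&-
rm "$log"
```

Truncating keeps the inode the writer holds, so the reclaimed blocks are returned to the filesystem at once, whereas rm only detaches the name.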
If something still has it open, the file won't actually go away. You probably need to signal catalina somehow to close and re-open its log files.
If there is a second hard link to the file then it won't be deleted until that is removed as well.
Enter this command to check which deleted files are still occupying disk space:
$ sudo lsof | grep deleted
It will show the deleted files whose space has not yet been released.
Then kill the process by PID:
$ sudo kill <pid>
$ df -h
Check again; you should now have the space back.
If not, run the commands below to see which files are occupying the space:
# cd /
# du --threshold=<SIZE>
Pass any size; it will show the files above that threshold so you can find and delete the offenders.
Is the rm journaled/scheduled? Try a 'sync' command to force the write.
