CentOS free space on disk not updating - Linux

I am new to Linux and working with a CentOS system.
Running df -H shows the disk 82% full, with only 15 GB free.
I wanted some extra space, so using WinSCP I shift-deleted a 15 GB file
and ran df -H again, but it still shows only 15 GB free. Where did the space from the deleted file go?
Please help me find a solution to this.

In most Unix filesystems, if a file is open when it is deleted, the OS removes the name right away but does not release the space until the file is closed. Why? Because the file is still accessible to the process that opened it.
Windows, by contrast, used to complain that it can't delete a file because it is in use; in later incarnations Explorer seems to pretend to delete the file.
Some applications are famous for bad behavior in this regard. For example, I have to deal with some versions of MySQL that do not properly close certain files; over time I can find several GB of space wasted in /tmp.
You can use the lsof command to list open files (see man lsof). If the problem is caused by open files and you can afford a reboot, that is most likely the easiest fix.
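To see this behavior in action, here is a small sketch (assuming a Linux system with /proc; the file name and the tail process are just illustrations, with tail standing in for any program holding the file open):

```shell
# A deleted file's space is not freed while a process still holds it open.
dd if=/dev/zero of=/tmp/demo.dat bs=1M count=10 2>/dev/null
tail -f /tmp/demo.dat > /dev/null &    # stand-in for a process holding the file
HOLDER=$!
sleep 1                                # give tail time to open the file
rm /tmp/demo.dat                       # the name is gone...
ls -l /proc/$HOLDER/fd | grep deleted  # ...but the open handle remains, marked "(deleted)"
kill $HOLDER                           # once the handle closes, the space is reclaimed
```

Once the holder exits, df will show the space freed; until then the blocks stay allocated.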

Related

Too many open files in Linux; we are closing file descriptors

We are facing a "too many open files" error on a Linux server, and we have already increased ulimits to the maximum.
I am planning to close file descriptors using gdb, attaching to the PID in question and closing the fds there.
Will that solve the "too many open files" issue? We are also looking for a way to fix this in the application itself.
Can anyone confirm whether closing fds via gdb will solve the problem? Please advise.
Running lsof shows about 26k open files on the server.
It seems unlikely that you have 26K distinct .ttf font files on the system.
This suggests that you open the same .ttf font multiple times, or that you are leaking file descriptors.
You should definitely fix the FD leak (if there is one) -- no amount of manually closing FDs is going to let the server work -- it will simply run into the same problem after a while.
For the "open the same file many times" case, the usual solution is to introduce a font cache: before opening a file, check the cache and use the data from it if the font has already been opened. This may increase memory consumption, but it eliminates the "open many times" problem.
Memory consumption can be managed (if necessary) with an appropriate cache expiration policy.
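As a quick way to check for the "same file opened many times" case, a /proc-based sketch like this groups a process's open descriptors by target path (the PID is illustrative; lsof -p <pid> gives similar output, and repeated .ttf paths at the top would confirm the duplication):

```shell
# Count how many descriptors a process holds per target path (Linux /proc).
pid=$$   # replace with the server's PID
for fd in /proc/$pid/fd/*; do readlink "$fd"; done | sort | uniq -c | sort -rn | head
```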

Trouble getting the file count or file list of a large mounted drive

I have a shared drive with more than 2 million .wmv files, around 2 TB in total. I am accessing the drive by mounting it over the SMB protocol from my local Mac. When I run
$ ls -a | wc -l
to check the total file count, I get a different result every time: if one run reports X, the next reports Y, even though no one else is accessing the drive. Investigating further, I found that the output of ls itself differs between runs. This command should work; I have been using it for a decade. Am I doing something wrong, or does it fail on large volumes or network shares? I am sure there is no access or network issue during this activity. Any hint or workaround would be much appreciated.
I faced a similar issue when I tried to access a shared location with around 200K files. In my case the shared drive used the NTFS file system, and I believe there is a compatibility issue between the SMB protocol and NTFS. I finally mounted the shared drive using NFS instead of SMB, and I was able to get the correct file count on the mounted drive. The problem never occurred on Windows, where I have mounted far larger file sets many times. Hope this helps.
It is most likely because the file list is not immediately available to OS X from the network share. Apple's implementation of SMB is still a bit buggy, unfortunately.
You may try: defaults write com.apple.desktopservices DSDontWriteNetworkStores -bool TRUE
And see if it helps.
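If the goal is just a stable count, a find-based sketch may behave more predictably than ls over a network share (the mount point below is a placeholder; note also that ls -a counts the "." and ".." entries, inflating the result by two):

```shell
# Count regular files only; find enumerates entries without the
# column formatting and alias behavior that can make ls output vary.
find /Volumes/share -type f | wc -l
```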

Protect file from system modifications

I am working on a linux computer which is locked down and used in kiosk mode to run only one application. This computer cannot be updated or modified by the user. When the computer crashes or freezes the OS rebuilds or modifies the ld-2.5.so file. This file needs to be locked down without allowing even the slightest change to it (there is an application resident which requires ld-2.5.so to remain unchanged and that is out of my control). Below are the methods I can think of to protect ld-2.5.so but wanted to run it by the experts to see if I am missing anything.
1. I modified fstab to mount the ext3 filesystem as ext2 to disable journaling, and set the DUMP and FSCK values to 0 to disable those processes.
2. I performed chattr +i ld-2.5.so on the file, but there are still system processes that can override this protection.
3. I could attempt to trap the names of the processes which are hitting ld-2.5.so and prevent them.
Any ideas or hints would be greatly appreciated.
-Matt (CentOS 5.0.6)
chattr +i should be fine in most circumstances.
The ld-*.so files are under /usr/lib/ and /usr/lib64/. If /usr is a separate partition, you might also want to mount that partition read-only on a kiosk system.
Do you have, by any chance, some automated updating/patching of said PC configured? ld-*.so is part of glibc and basically should only change if the glibc package is updated.
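As a quick sanity check along these lines (assuming a system with findmnt from util-linux, which may not exist on a CentOS 5 era box; the chattr line is the command from the question and needs root):

```shell
# See whether /usr is its own mount point and how it is mounted
# (falls back to / when /usr is not a separate partition):
findmnt -no TARGET,OPTIONS /usr || findmnt -no TARGET,OPTIONS /
# Setting the immutable bit requires root (CAP_LINUX_IMMUTABLE):
# chattr +i /lib/ld-2.5.so && lsattr /lib/ld-2.5.so
```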

rsync hangs in the middle of a transfer at a fixed position

I am trying to use rsync to transfer some big files between servers.
For some reason, when a file is big enough (2 GB - 4 GB), rsync hangs in the middle at exactly the same position: the progress always stops at the same place, even when I retry.
If I remove the file from the destination server first, rsync works fine.
This is the command I used:
/usr/bin/rsync --delete -avz --progress --exclude-from=excludes.txt /path/to/src user@server:/path/to/dest
I have tried adding --delete-during and --delete-delay, with no luck.
The rsync version is 3.1.0, protocol version 31.
Any advice please? Thanks!
Eventually I solved the problem by removing the compression option: -z
I still don't know why that works.
I had the same problem (trying to rsync multiple files of up to 500 GiB each between my home NAS and a remote server).
In my case the solution (mentioned here) was to add the following to /etc/ssh/sshd_config on the server I was connecting to:
ClientAliveInterval 15
ClientAliveCountMax 240
ClientAliveInterval X sends a kind of probe message every X seconds to check whether the client is still alive (the default is to do nothing).
ClientAliveCountMax Y terminates the connection if there has been no reply after Y probes.
I guess the root cause of the problem is that in some cases the compression (and/or block diff) performed locally on the server takes so long that the SSH connection (created by rsync) is dropped while the work is still ongoing.
Another workaround (e.g. if sshd_config cannot be changed) might be to use rsync's --new-compress option and/or a lower compression level (e.g. rsync --new-compress --compress-level=1): in my case the new compression (and diff) algorithm is a lot faster than the classical one, so the SSH timeout might not occur the way it does with the default settings.
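For reference, a lighter-compression invocation might look like this (host and paths are placeholders; on a fast link you could also drop compression entirely, as the earlier answer did):

```shell
# Keep compression but at the cheapest level, so the sender spends
# less time compressing and the SSH session stays busy:
rsync -avz --progress --compress-level=1 /path/to/src/ user@server:/path/to/dest/
```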
The problem for me was that I thought I had plenty of disk space on a drive, but the partition was not using the whole disk and was half the size I expected.
So check the available space with lsblk and df -h, and make sure the disk you are writing to reports all the space available on the device.

rm not freeing diskspace [closed]

I've rm'ed a 2.5 GB log file, but it doesn't seem to have freed any space.
I did:
rm /opt/tomcat/logs/catalina.out
then this:
df -hT
and df reported my /opt mount still at 100% used.
Any suggestions?
Restart Tomcat: if the file is in use and you remove it, the space becomes available when the process holding it exits.
As others suggested, the file is probably still open in other processes. To find out which ones, you can run
lsof /opt/tomcat/logs/catalina.out
which lists those processes for you. You will probably find Tomcat in that list.
Your Problem:
It's possible that a running program is still holding on to the file.
Your Solution:
Per the other answers here, you can simply shut down Tomcat to stop it from holding on to the file.
If that is not an option, or if you simply want more details, check out this question: Find and remove large files that are open but have been deleted - it suggests some harsher ways to deal with the situation that may be more useful to you.
More Details:
On Linux/Unix filesystems, an open file handle effectively counts as another name for the file. rm removes the name seen in the directory tree, but until all handles are closed the file still has other "names" and so it still exists. The filesystem doesn't reap a file until it is completely unnamed.
It might seem a little odd, but doing it this way allows for useful things like hard links, which are essentially alternate names for the same file.
This is why it is important to always call your language's equivalent of close() on a file handle once you are done with it: it notifies the OS that the file is no longer being used. Sometimes this can't be helped - which is likely the case with Tomcat. Refer to Bill Karwin's answer to read why.
Depending on the filesystem, this is usually implemented as a sort of reference count, so there may not be any real names involved. It can also get weird if things like stdin and stderr are redirected to a file or another byte stream (most commonly done with services).
This whole idea is closely related to the concept of inodes, so if you are the curious type, I'd recommend reading up on those first.
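The "multiple names" idea can be made concrete with hard links (a throwaway sketch using files under /tmp):

```shell
# A file's data survives as long as any name (or open handle) refers
# to it; unlinking one name does not free the blocks.
echo "hello" > /tmp/original
ln /tmp/original /tmp/alias   # a second hard link: same inode, two names
rm /tmp/original              # one name removed...
cat /tmp/alias                # ...the data is still there: prints "hello"
stat -c %h /tmp/alias         # link count is back down to 1
rm /tmp/alias                 # last name gone: now the space is freed
```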
Discussion
It doesn't work so well anymore, but you used to be able to update the entire OS, start up a new HTTP daemon using the new libraries, and finally close the old one once it had no more clients to serve (releasing the old handles). HTTP clients wouldn't even miss a beat.
Basically, you can completely wipe out the kernel and all the libraries "from underneath" running programs. Since the "name" still exists for the older copies, the files still exist on disk and in memory for those particular programs; you then restart the services one by one. While this is an advanced usage scenario, it is one reason some Unix systems have years of uptime on record.
Restarting Tomcat will release any hold Tomcat has on the file. However, to avoid restarting Tomcat (e.g. if this is a production environment and you don't want to bring the services down unnecessarily), you can usually just overwrite the file:
cp /dev/null /opt/tomcat/logs/catalina.out
Or even shorter and more direct:
> /opt/tomcat/logs/catalina.out
I use these methods all the time to clear log files for currently running server processes in the course of troubleshooting or disk clearing. This leaves the inode alone but clears the actual file data, whereas trying to delete the file often either doesn't work or at the very least confuses the running process' log writer.
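A small sketch of the truncate-in-place behavior (the tail process is just a stand-in for a running server, and the file path is illustrative):

```shell
# Truncating in place frees the blocks immediately, even while a
# process keeps the file open.
dd if=/dev/zero of=/tmp/app.log bs=1M count=5 2>/dev/null
tail -f /tmp/app.log > /dev/null &  # stand-in for a running server process
WRITER=$!
sleep 1
> /tmp/app.log                      # truncate; the writer's descriptor stays valid
stat -c %s /tmp/app.log             # size is now 0
kill $WRITER
rm /tmp/app.log
```

One caveat worth knowing: a writer that does not open its log in append mode may keep writing at its old offset after truncation, leaving a sparse file.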
As FerranB and Paul Tomblin have noted on this thread, the file is in use and the disk space won't be freed until the file is closed.
The problem is that you can't signal the Catalina process to close catalina.out, because the file handle isn't under control of the java process. It was opened by shell I/O redirection in catalina.sh when you started up Tomcat. Only by terminating the Catalina process can that file handle be closed.
There are two solutions to prevent this in the future:
Don't allow output from Tomcat apps to go into catalina.out. Instead use the swallowOutput property, and configure log channels for output. Logs managed by log4j can be rotated without restarting the Catalina process.
Modify catalina.sh to pipe output to cronolog instead of simply redirecting to catalina.out. That way cronolog will rotate logs for you.
The best solution is using echo (as #ejoncas suggested):
$ echo '' > huge_file.log
This operation is quite safe and fast (it clears roughly 1 GB of data per second), especially when you are operating on a production server.
Don't simply remove the file using rm: you would first have to stop the process writing to it, otherwise the disk space won't be freed.
Refer to: http://siwei.me/blog/posts/how-to-deal-with-huge-log-file-in-production
UPDATE: the origin of my story
In 2013, when I was working for youku.com, one Saturday I found a core server down; the reason was that the disk was full of log files.
So I simply ran rm log_file.log without stopping the web app process, but found that (1) no disk space was freed and (2) the log file was no longer visible to me.
So I had to restart my web server (a Rails app), and the disk space was finally freed.
This was quite an important lesson for me. It taught me that echo '' > log_file.log is the correct way to free disk space when you don't want to stop the running process that is writing logs to the file.
If something still has it open, the file won't actually go away. You probably need to signal catalina somehow to close and re-open its log files.
If there is a second hard link to the file then it won't be deleted until that is removed as well.
Enter this command to check which deleted files are still occupying disk space:
$ sudo lsof | grep deleted
It will show the deleted files that are still holding space.
Then kill the process by PID (or name):
$ sudo kill <pid>
$ df -h
Check that the space has now been freed.
If not, run the command below to see which files are occupying the space:
# cd /
# du -h --threshold=<SIZE>
Pass any size (e.g. --threshold=1G); it will show which files are above that threshold so you can delete them.
Is the rm journaled/scheduled? Try a 'sync' command to force the write.
