I recently ran a report on my EC2 server and was told that it had run out of space. I deleted the CSV that was partially generated by my report (it was going to be a pretty sizable one), ran df -h and was surprised to get this output:
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.8G 7.0G 718M 91% /
devtmpfs 15G 100K 15G 1% /dev
tmpfs 15G 0 15G 0% /dev/shm
I was surprised not only by how little was available / how much space was used (I am on the /dev/xvda1 filesystem), but also to see two other filesystems.
To investigate what was taking so much space, I ran du -h in ~ and saw the list of all directories on the server. Their reported sizes in aggregate should not be even close to 7 GB... which is why I ask: what is taking up all that space?
The biggest directory by far was ~ itself at 165MB; all others were 30MB and below. My mental math adds that up to WAY less than 7 GB. (If I understand du -h correctly, all directories within ~ ought to be included within that 165MB... so I am very confused how 7 GB could be full.)
Anyone know what's going on here, or how I can clean up the space? Also, just out of curiosity, is there a way to utilize the devtmpfs/tmpfs filesystems from the same box? I am running Amazon Linux, with versions of Python and Ruby installed.
According to this answer, it seems as though it might be because of log files getting too large. Try running the command the OP mentioned in their answer to find all large files: sudo find / -type f -size +10M -exec ls -lh {} \;
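A variation on that command that may be easier to scan (assuming GNU find and sort, as on Amazon Linux) stays on the root filesystem and lists the matches smallest to largest:
sudo find / -xdev -type f -size +10M -exec ls -lh {} \; | sort -k 5 -h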
For me, the best option was to delete the overlay2 Docker folder and to completely reset Docker to a clean state. It cleared up more than 3GB in my case.
Important note: this will stop and remove your containers, so you will need to rebuild them.
In order to do that, first stop the Docker engine:
sudo systemctl stop docker
Prune and then delete the entire docker directory (not just the overlay2 folder):
docker system prune
sudo rm -rf /var/lib/docker
Restart docker:
sudo systemctl start docker
The engine will restart without any images, containers, volumes, user created networks, or swarm state.
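To confirm how much was actually reclaimed, a quick sanity check afterwards (standard Docker CLI and coreutils, nothing exotic assumed) is:
docker system df    # per-type disk usage as Docker sees it
df -h /             # overall free space on the root filesystem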
Additionally, you can remove snap with:
sudo apt autoremove --purge snapd
On a CentOS Linux box, when I run the following:
df -h
I get that vg_name-lv_root is at 100%.
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_name-lv_root 12G 12G 0 100% /
When I drill down to /dev/mapper, it looks like vg_name-lv_root is a soft link to ../dm-0.
However, I'm not able to get into vg_name-lv_root or the ../dm-0 directories.
I am able to run lsblk, vgs and lvs to view the volume, but cannot enter it or view the contents.
I've spent some time googling and searching Stack Overflow. How can I delete or even view what's in the directory /dev/mapper/vg_name-lv_root?
Many thanks in advance.
You're looking at the wrong column in the df output. That's a device, not a directory. In the Mounted on column, you see that the device is mounted on /, the root directory. It contains all of the files that aren't under any other mount point - your /bin, your /etc, your /lib, and depending on your setup, maybe your /usr, /tmp, /home... anything at the top level that you don't see listed separately in the output of df or mount.
To find out what's taking up space on that filesystem, you can run du from the root directory, using the -x option to prevent it from crossing into other filesystems.
cd /
du -x
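If plain du -x output is too noisy, a hedged variant (GNU coreutils assumed) shows only the twenty largest entries in human-readable form:
sudo du -xh / 2>/dev/null | sort -h | tail -n 20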
After setting up my Raspberry Pi, I made an image to make reverting to older software states easier. Recently I wanted to do that, so I saved the contents of my /home/pi folder, formatted the SD card and wrote the image onto it.
So far everything worked fine. Then I tried to simply delete the complete /home/pi folder and replace it with my previously saved folder from the old image. Now it seems like all files are there, but it doesn't boot correctly.
At some point it just stops booting. I can then use the terminal normally, but the desktop does not start.
So, how can I replace my home directory the right way so I don't do any damage to the system?
edit:
I just tried to do this again.
sudo cp -a /home/pi/fileserver/backup /home/backup
(I mounted a network drive in fileserver. Since the share is on Windows, I assume all permissions are already gone here.)
cp -a /home/pi/. /home/original
sudo umount /home/pi/fileserver
rm -r /home/pi/
mv /home/backup /home/pi
sudo chmod -R 755 /home/pi (So far everything still works)
sudo reboot
After the reboot it doesn't boot correctly anymore. When I wait long enough, I see errors from the X server.
That's quite a questionable approach to archiving the data. First of all, as you mentioned, Windows will remove the permission bits. Running chmod -R 755 afterwards has very bad consequences, because some programs require very specific access bits on certain files in order to work (SSH keys, for example). Not to mention that making everything executable is bad for security.
Considering your scenario, you may either
a) Back up everything into tar or zip archives - this way permissions will be intact
b) Make a virtual disk file which will be stored on the shared Windows drive and mounted to /home/pi
How to do scenario A:
cd /home/pi
tar cvpzf backup.tar.gz .
Copy backup.tar.gz to the shared drive.
To unpack:
cd /home/pi
tar xpvzf backup.tar.gz
Pros:
One-line backup
Takes a small amount of space
Cons:
Packing/unpacking takes time
How to do scenario B:
1) Create a new file to hold the virtual drive volume:
cd /mnt/YourNetworkDriveMountPoint
fallocate -l 500M HomePi.img
# or, if fallocate is not available, create the same file with dd:
dd if=/dev/zero of=HomePi.img bs=1M count=500
mkfs -t ext3 HomePi.img
2) Mount it to home dir
mount -t auto -o loop HomePi.img /home/pi/
The 500M means the virtual disk will be 500 megabytes in size.
This way your whole Pi home directory will be saved as a file on the Windows shared drive, but all the content will be in ext3, so all permissions are preserved.
I suggest, though, that you keep the current image file on the Pi device itself and the old versions on the shared drive. Just copy the files over if you need to switch, because otherwise, if all images are on the shared drive, read/write performance will be 100% dependent on network speed.
You can then easily make copies of this file and swap them instantly by unmounting the existing image and mounting a new one.
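A swap could then look like this sketch (the image file name is only illustrative, and the pi user should be logged out first):
sudo umount /home/pi
sudo mount -t auto -o loop /mnt/YourNetworkDriveMountPoint/HomePi-old.img /home/pi/    # HomePi-old.img is a hypothetical older image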
Pros:
Easy swap between backup versions
Completely transparent process
Cons:
If current image file is on shared drive, performance will be reduced
It will consume considerably more space because all 500 megs will be preallocated.
The pi user must be logged off during an image swap, for obvious reasons
Now, as for the issue with the desktop not being displayed, you need to check /var/log/Xorg.0.log for detailed messages. Likely this is caused by messed-up permissions. I would try to rename/remove your current Xorg settings and cache, which are located somewhere in /home/pi/.config/ (depends on what you're using - XFCE, Gnome, etc.), and let the X server recreate them. But again, before doing this please check Xorg.0.log for the exact messages - maybe there's another error. If you need any further help, please comment on this answer.
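As a concrete starting point, a hedged pair of checks (assuming the default pi user and stock Raspbian paths) is to pull the error lines out of the X log and make sure the home directory is still owned by pi after the copy:
grep '(EE)' /var/log/Xorg.0.log    # the X server marks fatal errors with (EE)
ls -ld /home/pi                    # the owner should be pi
sudo chown -R pi:pi /home/pi       # restore ownership if the copy changed it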
I accidentally formatted my NTFS Windows partition with mkfs.ext4.
I was able to recover it with testdisk, but it seems that the Windows partition was hibernated,
so whenever I tried to boot Windows it started repairing disk errors, which was taking too long,
so I ran chkdsk manually, which after some time started reporting "unreadable sector...",
which also took very long, so I shut it down.
In Kali Linux, whenever I try to mount it with "mount /dev/sda3 /mnt -t ntfs -r",
it mounts, but many of the folders are empty, including Windows, Program Files and Users.
I am new to Linux. Can you tell me the steps to recover my files and, if possible, Windows?
Thanks in advance.
Use sudo ntfsfix /dev/sdXY, where XY is the partition name (e.g. sda4); use GParted to find the partition name. Then mount it. It may help.
First check whether the partition is mounted; it may be mounted as read-only. Then issue the mount command with these options:
sudo mount -t ntfs-3g -o remove_hiberfile /dev/yourWindowsPartition /media/yourUser/WindowsPartitionName
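Afterwards you can verify that it really mounted read-write with something like this (util-linux's findmnt assumed; the mount point is the one used above):
findmnt /media/yourUser/WindowsPartitionName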
I have a disk drive where the inode usage is 100% (using df -i command).
However, even after deleting a substantial number of files, the usage remains at 100%.
What's the correct way to do it then?
How is it possible that a drive with lower disk space usage can have
higher inode usage than a drive with higher disk space usage?
If I zip up a lot of files, would that reduce the used inode count?
If you are very unlucky, you have used about 100% of all inodes and can't even create the script.
You can check this with df -ih.
Then this bash command may help you:
sudo find . -xdev -type f | cut -d "/" -f 2 | sort | uniq -c | sort -n
And yes, this will take time, but you can locate the directory with the most files.
It's quite easy for a disk to have a large number of inodes used even if the disk is not very full.
An inode is allocated to a file so, if you have gazillions of files, all 1 byte each, you'll run out of inodes long before you run out of disk.
It's also possible that deleting files will not reduce the inode count if the files have multiple hard links. As I said, inodes belong to the file, not the directory entry. If a file has two directory entries linked to it, deleting one will not free the inode.
Additionally, you can delete a directory entry but, if a running process still has the file open, the inode won't be freed.
My initial advice would be to delete all the files you can, then reboot the box to ensure no processes are left holding the files open.
If you do that and you still have a problem, let us know.
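If you would rather not reboot, a hedged way to spot files that are deleted but still held open (assuming lsof is installed) is its link-count filter:
sudo lsof +L1    # lists open files with a link count below 1, i.e. deleted but still held open by a process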
By the way, if you're looking for the directories that contain lots of files, this script may help:
#!/bin/bash
# count_em - count files in all subdirectories under current directory.
echo 'echo $(ls -a "$1" | wc -l) $1' >/tmp/count_em_$$
chmod 700 /tmp/count_em_$$
find . -mount -type d -print0 | xargs -0 -n1 /tmp/count_em_$$ | sort -n
rm -f /tmp/count_em_$$
My situation was that I was out of inodes and I had already deleted just about everything I could.
$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 942080 507361 11 100% /
I am on Ubuntu 12.04 LTS and could not remove the old Linux kernels, which took up about 400,000 inodes, because apt was broken due to a missing package. And I couldn't install the new package because I was out of inodes, so I was stuck.
I ended up deleting a few old Linux kernel headers by hand to free up about 10,000 inodes:
$ sudo rm -rf /usr/src/linux-headers-3.2.0-2*
This was enough to let me install the missing package and fix my apt:
$ sudo apt-get install linux-headers-3.2.0-76-generic-pae
and then remove the rest of the old Linux kernels with apt:
$ sudo apt-get autoremove
Things are much better now:
$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 942080 507361 434719 54% /
My solution:
Try to find if this is an inodes problem with:
df -ih
Try to find root folders with a large inode count:
for i in /*; do echo "$i"; find "$i" | wc -l; done
Try to find specific folders:
for i in /src/*; do echo "$i"; find "$i" | wc -l; done
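If there are many entries to compare, a hedged variant of the same loop (standard coreutils assumed) prints the counts first so they can be sorted:
for i in /*; do echo "$(find "$i" 2>/dev/null | wc -l) $i"; done | sort -n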
If these are Linux headers, try to remove the oldest with:
sudo apt-get autoremove linux-headers-3.13.0-24
Personally, I moved them to a mounted folder (because for me the last command failed) and installed the latest with:
sudo apt-get autoremove -f
This solved my problem.
I had the same problem; I fixed it by removing the PHP sessions directory:
rm -rf /var/lib/php/sessions/
It may be under /var/lib/php5 if you are using an older PHP version.
Recreate it with the following permissions:
mkdir /var/lib/php/sessions/ && chmod 1733 /var/lib/php/sessions/
The default permissions for this directory on Debian are drwx-wx-wt (1733).
We experienced this on a HostGator account (which places inode limits on all its hosting) following a spam attack. It left vast numbers of queue records in /root/.cpanel/comet. If this happens and you find you have no free inodes, you can run this cPanel utility through the shell:
/usr/local/cpanel/bin/purge_dead_comet_files
You can use rsync to delete a large number of files:
rsync -a --delete blanktest/ test/
Create a blanktest folder with 0 files in it, and the command will sync it over your test folder that contains the large number of files (I have deleted nearly 5M files using this method).
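A minimal end-to-end sketch of that trick (the target path is just a placeholder):
mkdir -p /tmp/blanktest                            # an empty directory to sync from
rsync -a --delete /tmp/blanktest/ /path/to/test/   # empties /path/to/test/ very quickly; path is a placeholder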
Thanks to http://www.slashroot.in/which-is-the-fastest-method-to-delete-files-in-linux
Late answer:
In my case, it was my session files under
/var/lib/php/sessions
that were using up inodes.
I was even unable to open my crontab or make a new directory, let alone trigger a deletion operation.
Since I use PHP, we have this guide, from which I copied the code from example 1 and set up a cronjob to execute that part of the code.
<?php
// Note: This script should be executed by the same user as the web server process.
// Need active session to initialize session data storage access.
session_start();
// Executes GC immediately
session_gc();
// Clean up session ID created by session_gc()
session_destroy();
?>
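The cronjob entry itself might look roughly like this (the script path is hypothetical; it should run as the web server user, and the schedule is just an example):
# run the PHP session garbage collection script once an hour (path is hypothetical)
0 * * * * php /path/to/session_gc.php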
If you're wondering how I managed to open my crontab, well, I deleted some sessions manually through the CLI.
Hope this helps!
First, get the inode usage:
df -i
The next step is to find those files. For that, we can use a small script that will list the directories and the number of files in them:
for i in /*; do echo "$i"; find "$i" | wc -l; done
From the output, you can see the directory which uses the largest number of files; then repeat this script for that directory, like below. Repeat it until you find the suspect directory:
for i in /home/*; do echo "$i"; find "$i" | wc -l; done
When you find the suspect directory with a large number of unwanted files, just delete the unwanted files in that directory to free up some inode space, with a command like the following:
rm -rf /home/bad_user/directory_with_lots_of_empty_files
You have successfully solved the problem. Check the inode usage now with the df -i command again; you can see the difference, like this:
df -i
eAccelerator could be causing the problem, since it compiles PHP into blocks... I've had this problem with an Amazon AWS server on a site with heavy load. Free up inodes by deleting the eAccelerator cache in /var/cache/eaccelerator if you continue to have issues.
rm -rf /var/cache/eaccelerator/*
(or wherever your cache directory is)
We faced a similar issue recently. If a process refers to a deleted file, the inode is not released, so you need to check lsof /, and killing/restarting the process will release the inodes.
Correct me if I am wrong here.
As mentioned before, a filesystem can run out of inodes if there are a lot of small files. I have provided some ways to find the directories that contain the most files here.
In one of the above answers it was suggested that sessions were the cause of running out of inodes, and in our case that is exactly what it was. To add to that answer, though, I would suggest checking the php.ini file and ensuring session.gc_probability = 1, session.gc_divisor = 1000 and session.gc_maxlifetime = 1440. In our case session.gc_probability was equal to 0, which caused this issue.
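In php.ini that amounts to the following three settings (the values are the ones from the answer above; defaults vary by distribution and PHP version):
session.gc_probability = 1
session.gc_divisor = 1000
session.gc_maxlifetime = 1440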
This article saved my day:
https://bewilderedoctothorpe.net/2018/12/21/out-of-inodes/
find . -maxdepth 1 -type d | grep -v '^\.$' | xargs -n 1 -I{} find {} -xdev -type f | cut -d "/" -f 2 | uniq -c | sort -n
On a Raspberry Pi I had a problem with the /var/cache/fontconfig directory containing a large number of files. Removing it took more than an hour. And of course rm -rf *.cache* raised an "Argument list too long" error. I used the one below instead:
find . -name '*.cache*' | xargs rm -f
You can also see this info with:
for i in /var/run/*;do echo -n "$i "; find $i| wc -l;done | column -t
For those who use Docker and end up here,
when df -i says 100% inode use,
just run docker rmi $(docker images -q)
It will leave your created containers (running or exited) alone but will remove all images that aren't referenced anymore, freeing a whole bunch of inodes; I went from 100% back to 18%!
It might also be worth mentioning that I use a lot of CI/CD, with a Docker runner set up on this machine.
It could be the /tmp folder (where all the temporary files are stored - yarn and npm script executions, for example, especially if you are starting a lot of Node scripts). So normally, you just have to reboot your device or server, and it will delete all the temporary files that you don't need. For me, I went from 100% usage to 23%!
Many answers to this one so far, and all of the above seem concrete. I think you'll be safe using stat as you go along but, depending on the OS, you may get some inode errors creeping up on you. So implementing your own stat call functionality using 64-bit values, to avoid any overflow issues, seems fairly compatible.
Run the sudo apt-get autoremove command.
In some cases it works: if unused header data from previous kernels exists, it will be cleaned up.
If you use Docker, remove all images. They use a lot of space...
Stop all containers
docker stop $(docker ps -a -q)
Delete all containers
docker rm $(docker ps -a -q)
Delete all images
docker rmi $(docker images -q)
Works for me.