df command says the disk is 100% used, but it actually isn't

I have an EC2 instance running the Jenkins service, and almost every day it stops working, saying the disk is full.
So I log into the instance to check, and df really does report 100% used:
$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 987M 60K 987M 1% /dev
tmpfs 997M 0 997M 0% /dev/shm
/dev/xvda1 32G 32G 0 100% /
So I use ncdu to check what is occupying the space, and it says only 8.6 GiB is used:
ncdu 1.10 ~ Use the arrow keys to navigate, press ? for help
--- / -------------------------------------------------------
3.7GiB [##########] /var
1.7GiB [#### ] /home
1.6GiB [#### ] /usr
1.0GiB [## ] swapfile
323.6MiB [ ] /opt
133.8MiB [ ] /lib
47.3MiB [ ] /boot
43.8MiB [ ] /root
23.1MiB [ ] /public
19.8MiB [ ] /lib64
12.3MiB [ ] /sbin
10.7MiB [ ] /etc
7.0MiB [ ] /bin
3.7MiB [ ] /tmp
60.0KiB [ ] /dev
e 16.0KiB [ ] /lost+found
16.0KiB [ ] /.gnupg
12.0KiB [ ] /run
e 4.0KiB [ ] /srv
e 4.0KiB [ ] /selinux
e 4.0KiB [ ] /mnt
e 4.0KiB [ ] /media
e 4.0KiB [ ] /local
e 4.0KiB [ ] /cgroup
. 0.0 B [ ] /proc
0.0 B [ ] /sys
0.0 B [ ] .autorelabel
0.0 B [ ] .autofsck
Total disk usage: 8.6GiB Apparent size: 8.6GiB Items: 379695
Then I reboot the instance, and usage drops back to only 28%:
$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 987M 60K 987M 1% /dev
tmpfs 997M 0 997M 0% /dev/shm
/dev/xvda1 32G 8.7G 23G 28% /
What can be causing this problem?
Additional data:
PSTREE
init─┬─acpid
├─agetty
├─amazon-ssm-agen───6*[{amazon-ssm-agen}]
├─atd
├─auditd───{auditd}
├─crond
├─dbus-daemon
├─2*[dhclient]
├─java─┬─sh───sudo───gulp --prod───9*[{gulp --prod}]
│ └─67*[{java}]
├─lvmetad
├─lvmpolld
├─6*[mingetty]
├─rngd
├─rpc.statd
├─rpcbind
├─rsyslogd───3*[{rsyslogd}]
├─2*[sendmail]
├─sshd───sshd───sshd───bash───pstree
├─supervisord───2*[php]
└─udevd───2*[udevd]
CRONTAB
* * * * * cd /home/ec2-user/laravel-prod && php artisan schedule:run >> /dev/null 2>&1
0 1 * * * rm /var/log/jenkins/jenkins.log
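A common explanation for df and ncdu disagreeing like this, offered here only as a guess rather than a confirmed diagnosis: a process is still holding deleted files open, so df keeps counting their blocks while ncdu can no longer see them. The nightly rm of /var/log/jenkins/jenkins.log in the crontab above would produce exactly that pattern if Jenkins keeps the log file open until the next reboot. Open-but-deleted files can be listed without rebooting:
$ sudo lsof +L1                # open files with a link count of 0, i.e. deleted but still held open
$ sudo lsof | grep deleted     # alternative if the +L1 filter is not supported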

Related

Xmobar DiskU does not get the root partition information in Slackware 15

I am trying to show the root partition usage in xmobar (with XMonad), but it is not working, and there is no verbose or error message.
I don't know whether the problem is the way Slackware mounts the root partition or the way xmobar works.
To explain the context:
The disk has three partitions: swap, / and /home
/dev/sda1 is the /
/dev/sda2 is the swap
/dev/sda3 is the /home
In Slackware, the system records /dev/sda1 as a virtual device called /dev/root, with its mount point set to /.
In xmobar (the .xmobarrc file), none of the options below work:
- Run DiskU [ ( "/", "<size>" ) ] [] 20
- Run DiskU [ ( "root", "<size>" ) ] [] 20
- Run DiskU [ ( "/dev/root", "<size>" ) ] [] 20
- Run DiskU [ ( "sda1", "<size>" ) ] [] 20
- Run DiskU [ ( "/dev/sda1", "<size>" ) ] [] 20
and calling
- Run DiskU [ ( "/", "<size>" ), ("/home", "<size>") ] [] 20
where "/home" is the "/dev/sda3" partition, works fine to get the information about "/home"
Reading Xmobar sources, i see that the list of the available partition is read from "/etc/mtab". In my case, the "/etc/mtab" have de list of partitions below:
/dev/root / ext4 rw,relatime 0 0
...
/dev/sda3 /home ext4 rw,relatime 0 0
but I can't get the DiskU function to work.
Any idea to solve this problem is welcome.
Thanks in advance!
With a mount table containing
/dev/sda1 / ext4 rw,relatime 0 0
# ...
/dev/sda3 /home ext4 rw,relatime 0 0
an xmobar configuration along these lines should actually work:
Config
    { -- ...
    , template = "... %disku% ..."
    -- ...
    , commands =
        [ -- ...
        , Run DiskU
            [ ( "/", "Root: <usedp> (<used>/<size>)")
            , ( "/home", "Home: <usedp> (<used>/<size>)")
            ]
            [] 20
        -- ...
        ]
    -- ...
    }
The solution:
Slackware automatically records /dev/sda1 as /dev/root in /etc/mtab, but /dev/root is a virtual device and does not exist in the /dev folder.
xmobar tries to access the device /dev/root and does not find it.
The simplest solution is to create a symbolic link for /dev/root on system startup.
In this case, in Slackware, edit the file /etc/rc.d/rc.local and add:
ln -s /dev/sda1 /dev/root
Problem solved!
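After the next boot (or after running the same command manually), the link can be verified with a quick sanity check, assuming the symlink was created as above:
$ ls -l /dev/root      # should show a symbolic link
$ readlink /dev/root   # should print /dev/sda1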

Errno 28: No space available on '/var/tmp/...'

I would like to install Algobox for my studies, but I cannot install it or even update my computer. I looked into it and saw that it was probably a space problem on one of my partitions.
I ran the command "df -h" to see what was on my partitions, and it displayed this:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7,8G 0 7,8G 0% /dev
tmpfs 7,8G 15M 7,8G 1% /dev/shm
tmpfs 7,8G 2,1M 7,8G 1% /run
/dev/nvme0n1p7 69G 66G 0 100% /
tmpfs 7,8G 40K 7,8G 1% /tmp
/dev/loop2 224M 224M 0 100% /var/lib/snapd/snap/code/108
/dev/loop0 999M 999M 0 100% /var/lib/snapd/snap/android-studio/123
/dev/loop1 143M 143M 0 100% /var/lib/snapd/snap/chromium/2105
/dev/loop4 115M 115M 0 100% /var/lib/snapd/snap/core/13741
/dev/nvme0n1p4 976M 247M 663M 28% /boot
/dev/loop5 63M 63M 0 100% /var/lib/snapd/snap/deezer-unofficial-player/11
/dev/nvme0n1p2 96M 46M 51M 48% /boot/efi
/dev/nvme0n1p8 114G 40G 69G 37% /home
/dev/loop7 56M 56M 0 100% /var/lib/snapd/snap/core18/2566
/dev/loop8 64M 64M 0 100% /var/lib/snapd/snap/core20/1623
/dev/loop10 347M 347M 0 100% /var/lib/snapd/snap/gnome-3-38-2004/115
/dev/loop11 176M 176M 0 100% /var/lib/snapd/snap/postman/133
/dev/loop17 151M 151M 0 100% /var/lib/snapd/snap/remmina/5379
/dev/loop18 82M 82M 0 100% /var/lib/snapd/snap/discord/143
/dev/loop19 165M 165M 0 100% /var/lib/snapd/snap/gnome-3-28-1804/161
/dev/loop21 48M 48M 0 100% /var/lib/snapd/snap/snapd/16778
/dev/loop22 92M 92M 0 100% /var/lib/snapd/snap/gtk-common-themes/1535
/dev/loop23 71M 71M 0 100% /var/lib/snapd/snap/core22/275
/dev/loop24 128K 128K 0 100% /var/lib/snapd/snap/bare/5
/dev/loop27 219M 219M 0 100% /var/lib/snapd/snap/gnome-3-34-1804/77
/dev/loop28 415M 415M 0 100% /var/lib/snapd/snap/gnome-42-2204/29
tmpfs 1,6G 88K 1,6G 1% /run/user/1000
The /var/tmp directory does not seem to be mounted separately from /, and it looks like you still have 3G free:
Filesystem Size Used Avail Use% Mounted on
/dev/nvme0n1p7 69G 66G 0 100% /
I would suggest running "df -i" to see the inode utilization in the IUse% column (it must be under 100%): if you have a lot of small files, each one uses one inode, and if you are using 100% of your inodes you will get the "No space left" error.
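A short sketch of that check (the paths are illustrative, and the find pipeline is only one possible way to locate inode-heavy directories):
$ df -i /
# if IUse% is at or near 100%, count files per directory to find the culprit:
$ sudo find / -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head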

Filter df -h to only show 'Mounted on' part

This is the normal output of df -h:
df -h
Filesystem Size Used Avail Capacity iused ifree %iused Mounted on
/dev/disk1s2s1 932Gi 14Gi 823Gi 2% 500637 4293577168 0% /
devfs 193Ki 193Ki 0Bi 100% 673 0 100% /dev
/dev/disk1s5 932Gi 3.0Gi 823Gi 1% 3 8628536760 0% /System/Volumes/VM
/dev/disk1s3 932Gi 367Mi 823Gi 1% 1816 8628536760 0% /System/Volumes/Preboot
/dev/disk1s6 932Gi 4.0Mi 823Gi 1% 20 8628536760 0% /System/Volumes/Update
/dev/disk1s1 932Gi 90Gi 823Gi 10% 789694 8628536760 0% /System/Volumes/Data
map auto_home 0Bi 0Bi 0Bi 100% 0 0 100% /System/Volumes/Data/home
I need to filter it to have:
/
/dev
/System/Volumes/VM
/System/Volumes/Preboot
/System/Volumes/Update
/System/Volumes/Data
/System/Volumes/Data/home
So basically I need only the "mounted on" column of the command df -h.
Any idea?
df --output=target
If you need mount targets, you can also look at findmnt. It has tons of formatting options; the list you want can be produced with
$ findmnt --real -o TARGET
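If the df at hand does not support --output (for example, the BSD df that ships with macOS, which produced the output above), a plain awk filter over the last column is a possible fallback; note it assumes mount points without embedded spaces:
$ df -h | awk 'NR > 1 {print $NF}'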

.glusterfs folder is too big whereas my regular data is smaller

I'm using GlusterFS 7.8 across 3 nodes. Recently we removed a bunch of data, approximately 170 GB, from the GlusterFS volume. Now our regular data sits at 1.5 GB, but there is a folder named .glusterfs in the GlusterFS volume that holds 177.9 GiB, and we are running out of disk space. What is it and how can I clean it?
$ gluster volume status vlys_vol:
Status of volume: vlys_vol
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 192.168.1.42:/mnt/kubernetes/vlys_vol 49152 0 Y 1919
Brick 192.168.1.10:/mnt/kubernetes/vlys_vol 49152 0 Y 6702
Brick 192.168.1.37:/mnt/kubernetes/vlys_vol 49152 0 Y 1054
Self-heal Daemon on localhost N/A N/A Y 1126
Self-heal Daemon on 192.168.1.10 N/A N/A Y 6714
Self-heal Daemon on ubuntu-vm1 N/A N/A Y 2021
Task Status of Volume vlys_vol
------------------------------------------------------------------------------
There are no active volume tasks
$ gluster volume status vlys_vol detail:
Status of volume: vlys_vol
------------------------------------------------------------------------------
Brick : Brick 192.168.1.42:/mnt/kubernetes/vlys_vol
TCP Port : 49152
RDMA Port : 0
Online : Y
Pid : 1919
File System : ext4
Device : /dev/vda2
Mount Options : rw,relatime,data=ordered
Inode Size : 256
Disk Space Free : 193.1GB
Total Disk Space : 617.6GB
Inode Count : 41156608
Free Inodes : 30755114
------------------------------------------------------------------------------
Brick : Brick 192.168.1.10:/mnt/kubernetes/vlys_vol
TCP Port : 49152
RDMA Port : 0
Online : Y
Pid : 6702
File System : ext4
Device : /dev/nvme0n1p2
Mount Options : rw,relatime,errors=remount-ro
Inode Size : 256
Disk Space Free : 220.7GB
Total Disk Space : 937.4GB
Inode Count : 62480384
Free Inodes : 58114459
------------------------------------------------------------------------------
Brick : Brick 192.168.1.37:/mnt/kubernetes/vlys_vol
TCP Port : 49152
RDMA Port : 0
Online : Y
Pid : 1054
File System : ext4
Device : /dev/vda2
Mount Options : rw,relatime
Inode Size : 256
Disk Space Free : 1.5TB
Total Disk Space : 2.0TB
Inode Count : 134217728
Free Inodes : 109614197
$ gluster peer status:
Number of Peers: 2
Hostname: ubuntu-vm1
Uuid: a4bc6a92-0505-4cbc-8811-e3c69714519b
State: Peer in Cluster (Connected)
Hostname: 192.168.1.37
Uuid: 01568855-feca-4e30-8ef3-8d626e0c8e6d
State: Peer in Cluster (Connected)
Here is the ncdu results:
--- /mnt/kubernetes/vlys_vol ------------------------------------------------------------
177,9 GiB [##########] /.glusterfs
1,5 GiB [ ] /vlys-test
706,8 MiB [ ] /vlys
40,0 KiB [ ] /data-redis-cluster-1
32,0 KiB [ ] /data-redis-cluster-4
32,0 KiB [ ] /data-redis-cluster-3
32,0 KiB [ ] /data-redis-cluster-2
32,0 KiB [ ] /data-redis-cluster-0
32,0 KiB [ ] /data-redis-cluster-5
24,0 KiB [ ] /vlys-test-sts-default
24,0 KiB [ ] /vlys-test-sts
e 8,0 KiB [ ] /data-redis-cluster-test-5
e 8,0 KiB [ ] /data-redis-cluster-test-4
e 8,0 KiB [ ] /data-redis-cluster-test-3
e 8,0 KiB [ ] /data-redis-cluster-test-2
e 8,0 KiB [ ] /data-redis-cluster-test-1
e 8,0 KiB [ ] /data-redis-cluster-test-0
--- /mnt/kubernetes/vlys_vol/.glusterfs -------------------------------------------------
/..
6,2 GiB [##########] /57
5,3 GiB [######## ] /22
4,6 GiB [####### ] /cd
4,5 GiB [####### ] /97
4,5 GiB [####### ] /e8
4,3 GiB [###### ] /a2
4,3 GiB [###### ] /c1
4,0 GiB [###### ] /88
3,9 GiB [###### ] /07
3,8 GiB [###### ] /66
3,7 GiB [##### ] /48
3,6 GiB [##### ] /8e
3,4 GiB [##### ] /15
3,4 GiB [##### ] /ee
3,3 GiB [##### ] /26
2,9 GiB [#### ] /aa
2,9 GiB [#### ] /52
2,9 GiB [#### ] /f3
(many more folders like this)
2,9 MiB [ ] /09
2,9 MiB [ ] /71
2,9 MiB [ ] /a4
2,8 MiB [ ] /4a
32,0 KiB [ ] /indices
e 20,0 KiB [ ] /unlink
12,0 KiB [ ] /changelogs
e 4,0 KiB [ ] /landfill
4,0 KiB [ ] health_check
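One thing that is often checked in this situation (a hedged sketch, not an official GlusterFS cleanup procedure; the brick path is taken from the output above): the .glusterfs directory holds GFID-named hard links to the real files on the brick, so regular files under it whose link count has dropped to 1 are usually orphans left behind when data was deleted directly on a brick instead of through a client mount. Listing them first, before deciding whether to delete anything, might look like this:
$ cd /mnt/kubernetes/vlys_vol/.glusterfs
# regular GFID files with no remaining hard link to real data, skipping gluster's own bookkeeping dirs
$ find . -type f -links 1 ! -path './indices/*' ! -path './changelogs/*' ! -path './unlink/*' ! -name health_check -ls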

AWS ECS volumes do not share any files

I have an EBS volume mounted on an AWS ECS cluster instance. The EBS volume is mounted under /data:
$ cat /etc/fstab
...
UUID=xxx /data ext4 defaults,nofail 0 2
$ ls -la /data
total 28
drwxr-xr-x 4 1000 1000 4096 May 14 06:11 .
dr-xr-xr-x 26 root root 4096 May 15 21:18 ..
drwxr-xr-x 4 root root 4096 May 14 06:11 .ethereum
drwx------ 2 1000 1000 16384 May 14 05:29 lost+found
Edit: Output of /proc/mounts
[ec2-user@xxx ~]$ cat /proc/mounts
proc /proc proc rw,relatime 0 0
sysfs /sys sysfs rw,relatime 0 0
/dev/xvda1 / ext4 rw,noatime,data=ordered 0 0
devtmpfs /dev devtmpfs rw,relatime,size=4078988k,nr_inodes=1019747,mode=755 0 0
devpts /dev/pts devpts rw,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /dev/shm tmpfs rw,relatime 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
cgroup /cgroup/blkio cgroup rw,relatime,blkio 0 0
cgroup /cgroup/cpu cgroup rw,relatime,cpu 0 0
cgroup /cgroup/cpuacct cgroup rw,relatime,cpuacct 0 0
cgroup /cgroup/cpuset cgroup rw,relatime,cpuset 0 0
cgroup /cgroup/devices cgroup rw,relatime,devices 0 0
cgroup /cgroup/freezer cgroup rw,relatime,freezer 0 0
cgroup /cgroup/hugetlb cgroup rw,relatime,hugetlb 0 0
cgroup /cgroup/memory cgroup rw,relatime,memory 0 0
cgroup /cgroup/perf_event cgroup rw,relatime,perf_event 0 0
/dev/xvdf /data ext4 rw,relatime,data=ordered 0 0
Now, I would like to mount /data/.ethereum as a Docker volume to /geth/.ethereum in my ECS task definition:
{
    ...
    "containerDefinitions": [
        {
            ...
            "volumesFrom": [],
            "mountPoints": [
                {
                    "containerPath": "/geth/.ethereum",
                    "sourceVolume": "ethereum_datadir",
                    "readOnly": null
                }
            ],
            ...
        }
    ],
    ...
    "volumes": [
        {
            "host": {
                "sourcePath": "/data/.ethereum"
            },
            "name": "ethereum_datadir"
        }
    ],
    ...
}
It appears that the volume is correctly mounted after running the task:
$ docker inspect -f '{{ json .Mounts }}' f5c36d9ea0d6 | python -m json.tool
[
    {
        "Destination": "/geth/.ethereum",
        "Mode": "",
        "Propagation": "rprivate",
        "RW": true,
        "Source": "/data/.ethereum"
    }
]
However, if I create a file inside the mount point from within the container, it does not show up on the host machine.
[ec2-user@xxx .ethereum]$ docker exec -it f5c36d9ea0d6 bash
root@f5c36d9ea0d6:/geth# cat "Hello World!" > /geth/.ethereum/hello_world.txt
cat: Hello World!: No such file or directory
root@f5c36d9ea0d6:/geth# echo "Hello World!" > /geth/.ethereum/hello_world.txt
root@f5c36d9ea0d6:/geth# cat /geth/.ethereum/hello_world.txt
Hello World!
root@f5c36d9ea0d6:/geth# exit
exit
[ec2-user@xxx ~]$ cat /data/.ethereum/hello_world.txt
cat: /data/.ethereum/hello_world.txt: No such file or directory
Somehow the file systems are not being shared. Any ideas?
Found the issue.
It seems that with Docker, any mount point (e.g., for EBS volumes) on the host instance has to be mounted before the Docker daemon starts; otherwise Docker writes the files into the instance's root file system without you even noticing.
I stopped Docker, unmounted the EBS volume, cleaned everything up, mounted the EBS volume again and started Docker afterwards. Now Docker seems to recognize the mount point and writes everything into my EBS volume as it should.
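A rough sketch of that sequence (the /dev/xvdf device and /data path come from the /proc/mounts output above; service names vary by distribution, so treat this as illustrative rather than exact):
$ sudo service docker stop        # or: sudo systemctl stop docker
$ sudo umount /data
# move aside anything Docker already wrote into /data on the root filesystem
$ sudo mount /dev/xvdf /data      # or: sudo mount -a, using the fstab entry
$ sudo service docker start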
