About the Linux file system

I just set up a Fedora 22 system on VMware with a 60 GB disk. When I run the df command, the system displays this:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/fedora-root 38440424 4140700 32324020 12% /
devtmpfs 2009804 0 2009804 0% /dev
tmpfs 2017796 92 2017704 1% /dev/shm
tmpfs 2017796 872 2016924 1% /run
tmpfs 2017796 0 2017796 0% /sys/fs/cgroup
tmpfs 2017796 532 2017264 1% /tmp
/dev/sda1 487652 79147 378809 18% /boot
/dev/mapper/fedora-home 18701036 49464 17678568 1% /home
What is the exact size of a 1K-block? Does /dev/mapper/fedora-root contain /dev/mapper/fedora-home?
I'm confused by the df command.
Thanks a lot.

You can see from the df output that /dev/mapper/fedora-home can currently be reached at /home which is its mount point. Because the mount point for /dev/mapper/fedora-root is at / farther up the directory tree, anything that /dev/mapper/fedora-root has in /home is not accessible by normal means until and unless /dev/mapper/fedora-home gets unmounted.
As David Schwartz noted, a 1K-block is a unit of one binary kilobyte, i.e. 1024 bytes. Historically df reported sizes in the traditional 512-byte disk block, and POSIX still specifies that as the default, which is why the -k option exists to force output in kilobytes. The GNU df used here already defaults to 1024-byte blocks, which is what the "1K-blocks" column header means.
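For example, to see the same numbers in explicit units (GNU coreutils df; the options below are standard, but check your system's man page):
df -k /      # sizes in 1024-byte (1K) blocks
df -h /      # human-readable sizes (K, M, G)
df -B 512 /  # force 512-byte blocks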

What is the exact size of each 1K-block?
1,024 bytes.
Does /dev/mapper/fedora-root contain /dev/mapper/fedora-home?
No. They're separate filesystems; that's why they appear on separate lines in the df output.
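If you want to confirm the layout yourself, the following commands will show it (lsblk and findmnt are widely available; the device names here are from this particular system):
lsblk            # lists sda and the fedora-root / fedora-home logical volumes
findmnt /        # shows which device backs the root filesystem
findmnt /home    # and which one is mounted at /home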

Linux partition not showing full size

I have a Linux system where the disk space shows as only 29 GB, but when I look at the partition with the parted print command it shows as a 64 GB partition. I'm not sure if the remaining disk space is unallocated, mounted in other folders, stuck in tmpfs, or how to add it to the primary partition. This is on Ubuntu 18.04. I would like the full 64 GB to be available at root. I appreciate any help!
When I run df -h, here are the results:
Filesystem Size Used Avail Use% Mounted on
udev 16G 0 16G 0% /dev
tmpfs 3.2G 1.2M 3.2G 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 29G 25G 2.7G 91% /
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/sda2 976M 81M 829M 9% /boot
/dev/sda1 511M 4.4M 507M 1% /boot/efi
tmpfs 3.2G 0 3.2G 0% /run/user/1000
Results of the parted print command, which shows a ~64 GB disk:
Model: ATA MSH-64 (scsi)
Disk /dev/sda: 63.4GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 538MB 537MB fat32 boot, esp
2 538MB 1612MB 1074MB ext4
3 1612MB 63.3GB 61.7GB
Results of vgs command:
VG #PV #LV #SN Attr VSize VFree
ubuntu-vg 1 1 0 wz--n- <57.50g <28.75g
Results of the lvs command:
(talos-env) pradmin@pradmin:~$ sudo lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
ubuntu-lv ubuntu-vg -wi-ao---- 28.75g
Depending on the installation, the root logical volume (LV) might only use part of the volume group (VG).
Try the commands vgs and lvs to get information about your current setup. I assume that vgs shows about 30G of free space. You can enlarge the root volume using lvresize. After this you need to grow the file system as well; how depends on the file system type you are using. If you use extX, you would run resize2fs.
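A minimal sketch of that sequence, assuming an ext4 root filesystem on the ubuntu-vg/ubuntu-lv volume shown above (double-check the names with vgs and lvs before running anything):
sudo lvresize -l +100%FREE /dev/ubuntu-vg/ubuntu-lv   # grow the LV into all free space in the VG
sudo resize2fs /dev/ubuntu-vg/ubuntu-lv               # then grow the ext4 filesystem to match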
Edit based on the edited question:
Yes, everything can be done while the disk is mounted and in use.
But you have to get the commands right yourself: a wrong command might destroy your system.
Please take your time to become comfortable with LVM before changing the system.
There are many good tutorials which might help you, e.g.:
http://ryandoyle.net/posts/expanding-a-lvm-partition-to-fill-remaining-drive-space/
The guidance from Andreas proved helpful. I managed to resize the logical volume to the full size of the partition using the following commands and sequence.
Resources that I found helpful:
https://www.redhat.com/sysadmin/resize-lvm-simple
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/storage_administration_guide/ext4grow
root:~# lvs
  LV        VG        Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  ubuntu-lv ubuntu-vg -wi-ao---- 28.75g
Here you can see that the logical volume doesn't fill the full size of the volume group:
root:~# vgs
  VG        #PV #LV #SN Attr   VSize   VFree
  ubuntu-vg   1   1   0 wz--n- <57.50g <28.75g
Extend the logical volume to use 100% of the free space; the device path is /dev/{VG from the lvs output}/{LV from the lvs output}:
root:~# lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
Size of logical volume ubuntu-vg/ubuntu-lv changed from 28.75 GiB (7360 extents) to <57.50 GiB (14719 extents).
Logical volume ubuntu-vg/ubuntu-lv successfully resized.
Checked disk space and saw that it hadn't changed yet
root:~# df
Filesystem 1K-blocks Used Available Use% Mounted on
udev 16390292 0 16390292 0% /dev
tmpfs 3284628 1164 3283464 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 29542388 25311328 2707348 91% /
tmpfs 16423128 0 16423128 0% /dev/shm
tmpfs 5120 0 5120 0% /run/lock
tmpfs 16423128 0 16423128 0% /sys/fs/cgroup
/dev/sda2 999320 82552 847956 9% /boot
/dev/sda1 523248 4492 518756 1% /boot/efi
tmpfs 3284624 0 3284624 0% /run/user/1000
Resize the file system to the full size of the logical volume, using the Filesystem name from the df command above. Note this is an ext4 filesystem; you may have to use a different command for a different filesystem.
root:~# resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
resize2fs 1.44.1 (24-Mar-2018)
Filesystem at /dev/mapper/ubuntu--vg-ubuntu--lv is mounted on /; on-line resizing required
old_desc_blocks = 4, new_desc_blocks = 8
The filesystem on /dev/mapper/ubuntu--vg-ubuntu--lv is now 15072256 (4k) blocks long.
root:~# df
Filesystem 1K-blocks Used Available Use% Mounted on
udev 16390292 0 16390292 0% /dev
tmpfs 3284628 1164 3283464 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 59211724 25319316 31128948 45% /
tmpfs 16423128 0 16423128 0% /dev/shm
tmpfs 5120 0 5120 0% /run/lock
tmpfs 16423128 0 16423128 0% /sys/fs/cgroup
/dev/sda2 999320 82552 847956 9% /boot
/dev/sda1 523248 4492 518756 1% /boot/efi
tmpfs 3284624 0 3284624 0% /run/user/1000
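For reference, and not part of the original walkthrough: if the root filesystem had been XFS rather than ext4, the last step would use xfs_growfs on the mount point instead of resize2fs:
root:~# xfs_growfs /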

Different sizes for /var/lib/docker

I don't actually know if this is more a classic Linux question or a Docker question, but:
On a VM where some of my Docker containers are running I noticed something strange. /var/lib/docker is its own 20 GB partition. When I look at the partition with df -h I see this:
eti-gwl1v-dockerapp1 root# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs 7.8G 815M 7.0G 11% /run
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/sda2 12G 3.2G 8.0G 29% /
/dev/sda7 3.9G 17M 3.7G 1% /tmp
/dev/sda5 7.8G 6.8G 649M 92% /var
/dev/sdb2 20G 47M 19G 1% /usr2
/dev/sdb1 20G 2.9G 16G 16% /var/lib/docker
So usage is at 16%. But when I now navigate to /var/lib and do a du -sch docker I see this:
eti-gwl1v-dockerapp1 root# cd /var/lib
eti-gwl1v-dockerapp1 root# du -sch docker
19G docker
19G total
eti-gwl1v-dockerapp1 root#
So the same directory/partition shows two different sizes? How can that be?
This is really a question for unix.stackexchange.com, but there is filesystem overhead that makes the partition larger than the total size of the individual files within it.
du and df show you two different metrics:
du shows you the (estimated) file space usage, i.e. the sum of all file sizes
df shows you the disk space usage, i.e. how much space on the disk is actually used
These are distinct values and can often diverge:
disk usage may be bigger than the mere sum of file sizes due to additional meta data: e.g. the disk usage of 1000 empty files (file size = 0) is >0 since their file names and permissions need to be stored
the space used by one or multiple files may be smaller than their reported file size due to:
holes in the file: blocks consisting only of null bytes are not actually written to disk (see sparse files)
automatic file system compression
deduplication through hard links or copy-on-write
Since docker uses the image layers as a means of deduplication the latter is most probably the cause of your observation - i.e. the sum of the files is much bigger because most of them are shared/deduplicated through hard links.
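A quick way to see the difference between reported file size and actual disk usage for yourself is a sparse file (illustrative commands, GNU coreutils):
truncate -s 1G sparse.img   # a 1 GiB file consisting entirely of a hole
ls -lh sparse.img           # reported file size: 1.0G
du -h sparse.img            # actual disk usage: 0
rm sparse.img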
du estimates filesystem usage by summing the sizes of all files in it. This does not deal well with overlay2: there will be many directories that contain the same files as other directories, just overlaid with additional layers, so du will show a very inflated number.
I have not tested this since my Docker daemon is not using overlay2, but using du -x to avoid going into overlays could give the right amount. However, this wouldn't work for other Docker drivers, like btrfs, for example.
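If you want to try that comparison yourself (untested here, as noted above; the merged overlay mounts under /var/lib/docker are separate filesystems, so -x keeps du from descending into them):
df -h /var/lib/docker       # space actually allocated on the partition
du -shx /var/lib/docker     # per-file sum, without crossing into mounted overlays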

Hadoop "No space left on device" error when there is space available

I have a cluster of 5 Linux machines: 3 data nodes and one master. Right now about 50% of HDFS storage is available on each data node. But when I run a MapReduce job, it fails with the following error:
2017-08-21 17:58:47,627 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for blk_6835454799524976171_3615612 bad datanode[0] 10.11.1.42:50010
2017-08-21 17:58:47,628 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block blk_6835454799524976171_3615612 in pipeline 10.11.1.42:50010, 10.11.1.43:50010: bad datanode 10.11.1.42:50010
2017-08-21 17:58:51,785 ERROR org.apache.hadoop.mapred.Child: Error in syncLogs: java.io.IOException: No space left on device
Meanwhile, df -h on each system gives the following information:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 5.9G 0 5.9G 0% /dev
tmpfs 5.9G 84K 5.9G 1% /dev/shm
tmpfs 5.9G 9.1M 5.9G 1% /run
tmpfs 5.9G 0 5.9G 0% /sys/fs/cgroup
/dev/mapper/centos-root 50G 6.8G 44G 14% /
/dev/sdb 1.8T 535G 1.2T 31% /mnt/11fd6fcc-1f87-4f1e-a53c-54cc7117759c
/dev/mapper/centos-home 412G 155G 59M 100% /home
/dev/sda1 494M 348M 147M 71% /boot
tmpfs 1.2G 16K 1.2G 1% /run/user/42
tmpfs 1.2G 0 1.2G 0% /run/user/1000
As is clear from the above, my sdb disk (an SSD) is only 31% used, but centos-home is 100% full. Why is Hadoop using the local file system for the MapReduce job when there is enough HDFS available? Where is the problem? I have searched Google and found many similar problems, but none covers my situation.
syncLogs does not use HDFS; it writes to hadoop.log.dir, so:
if you're using MapReduce, look for the value of hadoop.log.dir in /etc/hadoop/conf/taskcontroller.cfg.
If you're using YARN, look for the value of yarn.nodemanager.log-dirs in the yarn-site.xml.
One of these should point you to where you're writing your logs. Once you figure out which filesystem has the problem, you can free space from there.
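For example, something along these lines will show where the logs actually go (file paths taken from the answer above; adjust them to your installation):
grep hadoop.log.dir /etc/hadoop/conf/taskcontroller.cfg
grep -A 1 yarn.nodemanager.log-dirs /etc/hadoop/conf/yarn-site.xml
df -h <whatever directory the property points to>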
Another thing to remember is you could get "No space left on device" if you've exhausted your inodes on your disk. df -i would show this.
Please check how many inodes are used. If I understand it right, even when the disk is not full, once all inodes are gone the error is still the same: "no space left".
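For example:
df -i /     # inode totals, used and free for the root filesystem
df -i       # the same for all mounted filesystems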

Cannot login to owncloud. No space left on device

I am currently using the latest version of owncloud. Since the installation, I cannot log in anymore. A quick look at /var/log/apache2/error.log explains why:
WARNING: could not create relation-cache initialization file "global/pg_internal.init.7826": No space left on device
DETAIL: Continuing anyway, but there's something wrong.
WARNING: could not create relation-cache initialization file "base/17999/pg_internal.init.7826": No space left on device
DETAIL: Continuing anyway, but there's something wrong.
WARNING: could not create relation-cache initialization file "global/pg_internal.init.7827": No space left on device
DETAIL: Continuing anyway, but there's something wrong.
WARNING: could not create relation-cache initialization file "base/17999/pg_internal.init.7827": No space left on device
DETAIL: Continuing anyway, but there's something wrong.
WARNING: could not create relation-cache initialization file "global/pg_internal.init.7828": No space left on device
But I cannot figure out where I'm short on space. If I run df -h as root, everything seems OK to me:
:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 20G 20G 0 100% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 4.0K 3.9G 1% /dev/shm
tmpfs 3.9G 82M 3.8G 3% /run
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda2 898G 912M 851G 1% /home
tmpfs 788M 0 788M 0% /run/user/0
Except for the first line, whose meaning I don't really understand. I installed owncloud into /home/owncloud, so I'd expect everything to be OK.
Any idea?
Edit:
Results of findmnt:
~# findmnt /
TARGET SOURCE FSTYPE OPTIONS
/ /dev/sda1 ext4 rw,relatime,errors=remount-ro,data=ordered
~# findmnt /dev/sda1
TARGET SOURCE FSTYPE OPTIONS
/ /dev/sda1 ext4 rw,relatime,errors=remount-ro,data=ordered
~# findmnt /dev/sda2
TARGET SOURCE FSTYPE OPTIONS
/home /dev/sda2 ext4 rw,relatime,data=ordered
Often, these programs store their data under /var. In your case, you don't have a separate mount point for /var, so it's a directory on your root file system /. That filesystem is full, so the program is not working.
Before you attempt a resize or anything, I think you should find out what is hogging 20 GB. du / | sort -n should give you a rough idea of the guilty parties, or you can use a graphical tool like xdiskusage. Clean it up and you'll be good to go.
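A slightly more targeted variant, assuming GNU du and sort (it stays on the root filesystem and sorts human-readable sizes):
du -xh --max-depth=1 / 2>/dev/null | sort -h    # largest top-level directories listed last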
The other alternative is to look through the config files for owncloud and make it use your home directory to store its data. That way, it will work. But you should clean up your /. Various things will misbehave if you don't.
Maybe you are out of inodes: No space left on device – running out of Inodes.
Use df -i to check that. It happened to me as my backup used to have millions of small files. So there was space left but no inodes left.

How to increase ec2 instance root file system without EBS?

How do I increase the disk space of an instance without using EBS? The root file system is only showing 10 GB. Is there a way to create a bigger file system without EBS?
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 9.9G 3.3G 6.1G 35% /
tmpfs 874M 0 874M 0% /lib/init/rw
udev 874M 84K 874M 1% /dev
tmpfs 874M 0 874M 0% /dev/shm
/dev/sdb 335G 12G 307G 4% /mnt
As you can see in the output, a much bigger partition is mounted at /mnt. You can move some of the things on the root filesystem there by either remounting it at the appropriate location or adding symlinks. There is no other way to add more disk space if you don't want to resort to EBS or a network filesystem.
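A minimal sketch of the symlink approach (the directory name is purely an example; stop whatever uses it before moving it):
sudo mv /var/lib/example /mnt/example        # move the hypothetical large directory to the big partition
sudo ln -s /mnt/example /var/lib/example     # leave a symlink behind at the old path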
