/dev/mapper/RHELCSB-Home marked as full when it is not, even after verification - linux

I was trying to copy a 1.5GiB file from one location to another and was warned that my disk space was full, so I verified with df -h, which gave the following output:
Filesystem                Size  Used Avail Use% Mounted on
devtmpfs                   16G     0   16G   0% /dev
tmpfs                      16G  114M   16G   1% /dev/shm
tmpfs                      16G  2.0M   16G   1% /run
tmpfs                      16G     0   16G   0% /sys/fs/cgroup
/dev/mapper/RHELCSB-Root   50G   11G   40G  21% /
/dev/nvme0n1p2            3.0G  436M  2.6G  15% /boot
/dev/nvme0n1p1            200M   17M  184M   9% /boot/efi
/dev/mapper/RHELCSB-Home  100G  100G  438M 100% /home
tmpfs                     3.1G   88K  3.1G   1% /run/user/4204967
where /dev/mapper/RHELCSB-Home seemed to cause the issue. But when running sudo du -xsh /dev/mapper/RHELCSB-Home, I got the following result:
0 /dev/mapper/RHELCSB-Home
and the same thing for /dev/ and /dev/mapper/. After researching this issue, I suspected it might have been caused by undeleted log files in /var/log/, but the total size of the files there is nowhere near 100GiB. What could be causing my disk space to be full?
Additional context: I was running a local PostgreSQL database when this happened, but I can't see how that relates to my issue, as the Postgres log files are not taking up that much space either.

The issue was solved by deleting podman container volumes in ~/.local/share/containers/
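A note on the du result: /dev/mapper/RHELCSB-Home is a device node, not the mounted tree, so du reports 0 there; the usage has to be traced from the mount point (/home) instead. A minimal sketch of how the space could be located and reclaimed, assuming the podman volumes are disposable (podman system df and podman system prune are standard podman subcommands):

sudo du -xh --max-depth=2 /home | sort -h | tail -n 15   # find the biggest directories under /home
podman system df                                         # summarise podman's disk usage
podman system prune --all --volumes                      # remove unused images, containers and volumes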

Related

Drone sometimes fails with an error related to "No space left on device"

Sometimes we experience a drone pipeline failing due to a lack of disk space, even though there is plenty of space available.
drone@drone2:~$ df -H
Filesystem      Size  Used Avail Use% Mounted on
udev            8.4G     0  8.4G   0% /dev
tmpfs           1.7G  984k  1.7G   1% /run
/dev/vda2       138G   15G  118G  12% /
tmpfs           8.4G     0  8.4G   0% /dev/shm
tmpfs           5.3M     0  5.3M   0% /run/lock
tmpfs           8.4G     0  8.4G   0% /sys/fs/cgroup
overlay         138G   15G  118G  12% /var/lib/docker/overlay2/cf15a2d5e215d6624d9239d28c34be2aa4a856485a6ecafb16b38d1480531dfc/merged
overlay         138G   15G  118G  12% /var/lib/docker/overlay2/c4b441b59cba0ded4b82c94ab5f5658b7d8015d6e84bf8c91b86c2a2404f81b2/merged
tmpfs           1.7G     0  1.7G   0% /run/user/1000
We use Docker images for running the Drone infrastructure. The console command:
docker run \
--volume=/var/lib/drone:/data \
--publish=80:80 \
--publish=443:443 \
--restart=always \
--detach=true \
--name=drone \
<drone image>
My assumption is that this may be due to the Docker container's limitations, and that we need to configure it manually somehow.
Any suggestions on how to fix it?
We have managed to identify the problem.
Our cloud provider confirmed that our software was using the disk heavily, which triggered limits on disk operations.
They recommended that we either upgrade to a larger disk plan or optimise our disk usage.
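For what it's worth, a hedged way to observe this kind of provider-side throttling from inside the VM, assuming the sysstat package is installed:

iostat -x 5   # sustained %util near 100 with modest throughput suggests an IOPS cap, not a full disk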

Hadoop "No space left on device" error when there is space available

I have a 5-machine Linux cluster. There are 3 data nodes and one master. Right now, about 50% of HDFS storage is available on each data node. But when I run a MapReduce job, it fails with the following error:
2017-08-21 17:58:47,627 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for blk_6835454799524976171_3615612 bad datanode[0] 10.11.1.42:50010
2017-08-21 17:58:47,628 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block blk_6835454799524976171_3615612 in pipeline 10.11.1.42:50010, 10.11.1.43:50010: bad datanode 10.11.1.42:50010
2017-08-21 17:58:51,785 ERROR org.apache.hadoop.mapred.Child: Error in syncLogs: java.io.IOException: No space left on device
While on each system, df -h gives the following information:
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 5.9G     0  5.9G   0% /dev
tmpfs                    5.9G   84K  5.9G   1% /dev/shm
tmpfs                    5.9G  9.1M  5.9G   1% /run
tmpfs                    5.9G     0  5.9G   0% /sys/fs/cgroup
/dev/mapper/centos-root   50G  6.8G   44G  14% /
/dev/sdb                 1.8T  535G  1.2T  31% /mnt/11fd6fcc-1f87-4f1e-a53c-54cc7117759c
/dev/mapper/centos-home  412G  155G   59M 100% /home
/dev/sda1                494M  348M  147M  71% /boot
tmpfs                    1.2G   16K  1.2G   1% /run/user/42
tmpfs                    1.2G     0  1.2G   0% /run/user/1000
As is clear from the above, my sdb disk (SSD) is only 31% used, but centos-home is 100% full. Why is Hadoop using the local file system for a MapReduce job when there is enough HDFS available? Where is the problem? I have searched on Google and found many such problems, but none covers my situation.
syncLogs does not use HDFS; it writes to hadoop.log.dir, so
if you're using MapReduce, look for the value of hadoop.log.dir in /etc/hadoop/conf/taskcontroller.cfg.
If you're using YARN, look for the value of yarn.nodemanager.log-dirs in yarn-site.xml.
One of these should point you to where you're writing your logs. Once you figure out which filesystem has the problem, you can free space from there, as sketched below.
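A minimal sketch of that lookup; the config paths are the usual packaged-install locations and may differ on your cluster:

grep hadoop.log.dir /etc/hadoop/conf/taskcontroller.cfg               # MapReduce v1
grep -A1 'yarn.nodemanager.log-dirs' /etc/hadoop/conf/yarn-site.xml   # YARN
df -h /home                                       # check the filesystem backing the path found above (here /home is the full one)
sudo du -xh --max-depth=2 /home | sort -h | tail  # then find what is filling it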
Another thing to remember: you could also get "No space left on device" if you've exhausted the inodes on your disk. df -i would show this.
Please check how many inodes are used. If I understand it right, even if the disk itself is not full but all the inodes are gone, the error would still be the same: "no space left".
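A quick sketch of the inode check, plus one way to find directories holding unusually many files (a common cause of inode exhaustion):

df -i /home                                      # IUse% at 100% means no inodes left even if blocks remain
sudo find /home -xdev -type f | cut -d/ -f1-4 | sort | uniq -c | sort -n | tail   # file counts per directory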

Disk size for Azure VM on docker-machine

I am creating an Azure VM using docker-machine as follows.
docker-machine create --driver azure --azure-size Standard_DS2_v2 --azure-subscription-id #### --azure-location southeastasia --azure-image canonical:UbuntuServer:14.04.2-LTS:latest --azure-open-port 80 AwesomeMachine
following the instructions here. The Azure VM docs say the max. disk size for Standard_DS2_v2 is 100GB,
however when I log in to the machine (or create a container on it), the max available disk size I see is 30GB.
$ docker-machine ssh AwesomeMachine
docker-user@tf:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        29G  6.9G   21G  25% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            3.4G   12K  3.4G   1% /dev
tmpfs           698M  452K  697M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            3.5G  1.1M  3.5G   1% /run/shm
none            100M     0  100M   0% /run/user
none             64K     0   64K   0% /etc/network/interfaces.dynamic.d
/dev/sdb1        14G   35M   13G   1% /mnt
What is the meaning of max. disk size then? Also, what is this /dev/sdb1? Is it usable space?
My bad, I didn't look at the documentation carefully.
So when --azure-size is Standard_DS2_v2, the local SSD disk = 14 GB, which is /dev/sdb1, while
--azure-size Standard_D2_v2 gives you a local SSD disk of 100 GB.
Not deleting the question in case somebody else makes the same mistake.
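To address the second part of the question: on Azure VMs, /dev/sdb1 is the local temporary disk, mounted at /mnt by default. It is usable as scratch space, but its contents can be lost when the VM is deallocated or resized, so nothing persistent should live there. To confirm which device backs which mount:

lsblk -o NAME,SIZE,MOUNTPOINT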

Incorrect key file for table '/tmp/#sql_42cd_0.MYI'; try to repair it

I recently had a disk space issue: more than 97% of the disk was full, and I freed space by clearing the log files. The issue I am now having is an error in my API which says:
Incorrect key file for table '/tmp/#sql_42cd_0.MYI'; try to repair it. Below is the output of the df -h command I executed over SSH:
root@localhost:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda        24G   16G  6.4G  72% /
devtmpfs        994M  4.0K  994M   1% /dev
none            200M  180K  199M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            996M     0  996M   0% /run/shm
none            100M     0  100M   0% /run/user
overflow        1.0M     0  1.0M   0% /tmp
Other than this, I also tried running myisamchk -r profiles.MYI to repair the .MYI file, but nothing seems to be working.
Sorted the issue. It was related to an incorrect join in the query. What I do not understand is: if the issue was with the query, then why does it echo the error Incorrect key file for table '/tmp/#sql_42cd_0.MYI'; try to repair it?
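One likely explanation, consistent with the df output above: MySQL materialises large implicit temporary tables (such as those produced by a bad join) on disk under tmpdir, which defaults to /tmp, and here /tmp is a tiny 1.0M emergency "overflow" mount, so the temp table fails to write and MyISAM reports it as a corrupt key file. A sketch of how to check, assuming shell access to the database host:

mysql -e "SHOW VARIABLES LIKE 'tmpdir';"   # where MySQL writes implicit on-disk temp tables
df -h /tmp                                 # the 1.0M overflow mount shown above cannot hold them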

how to designate Cassandra data storage to certain file-system partition?

I use Cassandra to store my data, on CentOS.
The data always seems to be stored in the root partition, which is too small.
My file system partitions look like this:
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   50G   25G   26G  49% /
devtmpfs                 7.8G     0  7.8G   0% /dev
tmpfs                    7.8G     0  7.8G   0% /dev/shm
tmpfs                    7.8G   17M  7.8G   1% /run
tmpfs                    7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/sda2                494M  177M  318M  36% /boot
/dev/sda1                200M  9.8M  191M   5% /boot/efi
/dev/mapper/centos-home  873G   66G  807G   8% /home
tmpfs                    1.6G     0  1.6G   0% /run/user/1001
Obviously the root partition (50 GB) is much smaller than the home partition (873 GB).
Is there a way to change the setup to force data storage onto the /dev/mapper/centos-home partition?
I need to use the command "sudo service cassandra start" to start Cassandra; without sudo, my permissions don't allow me to start it.
Thanks!
Edit the $CASSANDRA_HOME/conf/cassandra.yaml file (sometimes it is located under /etc/cassandra instead, depending on how you installed Cassandra) and update the following properties:

hints_directory: /var/lib/cassandra/hints                 # put your own directory here (only available since Cassandra 3.x)
data_file_directories:                                    # put a list of directories here
    - /var/lib/cassandra/data
commitlog_directory: /var/lib/cassandra/commitlog         # put your own directory here
saved_caches_directory: /var/lib/cassandra/saved_caches   # put your own directory here
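If data already exists under /var/lib/cassandra, a minimal sketch of moving it onto the home partition; the /home/cassandra target is an arbitrary example, and the cassandra user/group is what package installs normally use:

sudo service cassandra stop
sudo mkdir -p /home/cassandra/data
sudo rsync -a /var/lib/cassandra/data/ /home/cassandra/data/   # copy preserving ownership and permissions
sudo chown -R cassandra:cassandra /home/cassandra
# point data_file_directories at /home/cassandra/data in cassandra.yaml, then:
sudo service cassandra start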
