Problem Statement:
I am getting the exception below:
org.apache.hadoop.hdfs.protocol.DSQuotaExceededException:
org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace
quota of /tmp is exceeded: quota=659706976665600 diskspace consumed=614400.1g
So I just wanted to know the current size of the /tmp directory, since that is what is triggering this exception. How can I see the free space in /tmp?
Update:
bash-3.00$ df -h /tmp
Filesystem size used avail capacity Mounted on
rpool/tmp 10G 2.2G 7.8G 22% /tmp
I am puzzled now as to why I am getting that exception, since the output above clearly shows that I have space available.
You can run (on SunOS):
# du -sh /tmp
to see how much it uses now, but you have already seen that.
To see the total, used and free space on the partition where /tmp resides, you can use:
# df -h /tmp
Note that filling up space is not the only thing that can prevent writing to a filesystem.
Running out of inodes is another common reason.
You can check that with
# df -i /tmp
for a in *; do du -ch "$a" | grep total; echo "$a"; done
Try this, but it takes time if the directories are several GB in size.
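If you only need per-directory sizes, a simpler alternative (assuming GNU du and sort, so that the -h options are available) is:
du -sh -- */ | sort -h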
Managing HDFS Space Quotas
It’s important to understand that in HDFS, there must be enough quota space to accommodate an entire block. If the user has, let’s say, 200MB free in their allocated quota, they can’t create a new file, regardless of the file size, if the HDFS block size happens to be 256MB. You can set the HDFS space quota for a user by executing the setSpaceQuota command. Here’s the syntax:
$ hdfs dfsadmin -setSpaceQuota <N> <dirname>...<dirname>
The space quota you set acts as the ceiling on the total size of all files in a directory. You can set the space quota in bytes (b), megabytes (m), gigabytes (g), terabytes (t) and even petabytes (by specifying p—yes, this is big data!). And here’s an example that shows how to set a user’s space quota to 60GB:
$ hdfs dfsadmin -setSpaceQuota 60G /user/alapati
You can set quotas on multiple directories at a time, as shown here:
$ hdfs dfsadmin -setSpaceQuota 10g /user/alapati /test/alapati
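To check how much of a space quota is already consumed on a directory (for example the /tmp directory from the question above), you can use the count command with the -q flag, which reports the space quota and the remaining space quota alongside the usage:
$ hdfs dfs -count -q /tmp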
For example, take the dir "/app/source".
There is a 100GB filesystem mounted on "/".
So when I use "df /app/source" the reported capacity is 100GB.
Then there is a dir "/app/source/video" with another 100GB filesystem mounted on it.
Is there any easy way to get the real capacity (200GB) of "/app/source"?
/app/source doesn't have a capacity of 200G, so you cannot expect to see it. Its "real" capacity is 100G, as the underlying disk capacity is 100G. If you think it has a capacity of 200G, then you expect to be able to store 200G of data in /app/source, but you cannot! You can store 100G in /app/source and 100G in /app/source/video.
Maybe you would really like to merge the capacity of both partitions; for that you could use LVM.
Trying to merge only the reported numbers, which you could do with a simple script or alias (see below), would give you misleading information, and you might then try to add files to a full partition.
If in the end you still need the added total, maybe something like this can help:
# df -h --total /app/source /app/source/video | grep total | awk -F" " '{print $2}'
I deployed one SeaweedFS master and one volume server:
/usr/bin/weed master -ip=10.110.200.149 -port=9333 -mdir=/weed/mdir
/usr/bin/weed volume -ip=10.110.200.149 -dir=/weed/vdir -port=8080 -mserver=10.110.200.149:9333 -max=7
After running for several weeks, it shows this error:
curl -X POST http://10.110.200.149:9333/dir/assign
{"error":"No free volumes left!"}
I changed the number of volumes from 7 to 50 (the max parameter), and that solved it. But then I checked the disk usage of SeaweedFS:
[root@node149 vdir]# ls
1.dat 1.idx 2.dat 2.idx 3.dat 3.idx 4.dat 4.idx 5.dat 5.idx 6.dat 6.idx 7.dat 7.idx
[root@node149 vdir]# du . -hs
14M .
[root@node149 vdir]#
It shows only 14M of disk space usage, so what is the real meaning of the number of volumes?
If you have uploaded any files into the FS, then the total space used by those files is what you are seeing as 14M.
If you were expecting the 50 volumes to immediately use 50 times the space allocated for each volume, that is not how it works: that space is only used once the volumes are filled with files.
Before you store any files in your SeaweedFS, the space used by all volumes will be a low number, and it increases as you add more files.
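If you want to see how many volumes exist and how full each one is, the master also exposes a status endpoint; a minimal check, assuming the master address from your setup, would be:
curl http://10.110.200.149:9333/dir/status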
On my VPS I am running Plesk 12.5 and CentOS 6.6.
I upgraded my VPS and got an extra 100G, but somehow I created a new partition instead of adding the space to an existing partition. I did this a year ago, and because space wasn't an issue at that time, I left it.
Now space is becoming an issue and I want to use the extra 100G. However, I have no clue how to merge the new partition into the partition Plesk uses.
Below you can see my filesystems:
[root@vps dumps]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_plesk-lv_root
47G 14G 31G 31% /
tmpfs 1,9G 8,0K 1,9G 1% /dev/shm
/dev/vda1 485M 169M 292M 37% /boot
/dev/mapper/vg_plesk-lv_tmp
504M 17M 462M 4% /tmp
/dev/vda3 99G 188M 94G 1% /mynewpartition
/dev/vda3 is the new partition that needs to be merged, I think with /dev/mapper/vg_plesk-lv_root. I'm not sure about that, because I don't have enough experience with this. But that is the one that fills up every time!
Looking at the details you provided I assume that the name of the volume-group on your server is vg_plesk. Also I can see that there is a device /dev/vda3 which you wish to merge with vg_plesk-lv_root.
In order to merge them you will have to extend your existing volume group vg_plesk.
First of all, unmount /dev/vda3:
umount /mynewpartition
Remove or comment out the entry for this particular device in /etc/fstab and save the file.
Initialize the new partition as an LVM physical volume and extend the existing volume group:
pvcreate /dev/vda3
vgextend vg_plesk /dev/vda3
Extend the desired LV, vg_plesk-lv_root, into the newly added free space:
lvextend -l +100%FREE /dev/mapper/vg_plesk-lv_root
Resize the filesystem on the extended LV:
resize2fs /dev/mapper/vg_plesk-lv_root
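You can then verify that the root filesystem has picked up the extra space:
df -h /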
Keep in mind that all data in /mynewpartition will be lost once /dev/vda3 is added to the volume group, so please keep a copy of this data if it is important.
You may also find this link useful.
Hope this helps.
Thanks.
If I know that the partition is, for example, /dev/sda1, how can I get the name of the disk (/dev/sda in this case) that contains the partition?
The output should be only the path to the disk (like '/dev/sda').
EDIT: It shouldn't use string manipulation.
You can use the shell's built-in string chopping:
$ d=/dev/sda1
$ echo ${d%%[0-9]*}
/dev/sda
$ d=/dev/sda11212
$ echo ${d%%[0-9]*}
/dev/sda
This only works for some disk names. If the disk name itself can contain digits (for example /dev/nvme0n1), it will chop everything from the first digit onward.
What is the exact specification to separate a disk name from a partition name?
You can use sed to get the disk. Because partition names are just the disk name with a number appended, it's easy to do:
echo "/dev/sda1" | sed 's/[0-9]*//g'
which produces the output /dev/sda
Another command you can use to obtain disk information is lsblk. Just typing it without args prints out all info pertaining to your disks and partitions.
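If string manipulation is to be avoided, lsblk can also report a partition's parent device directly through its PKNAME column (assuming a util-linux version recent enough to support it):
$ lsblk -no pkname /dev/sda1
sda
Prepend /dev/ if you need the full path, for example echo "/dev/$(lsblk -no pkname /dev/sda1)".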
I was using the instructions on https://matt.berther.io/2015/02/03/how-to-resize-aws-ec2-ebs-volumes/ and http://atodorov.org/blog/2014/02/07/aws-tip-shrinking-ebs-root-volume-size/ to move to an EBS volume with less disk space. In both cases, when I attached the shrunken EBS volume (as /dev/xvda or /dev/sda1, neither works) to an EC2 instance and started it, it stopped on its own with the message
State transition reason
Client.InstanceInitiatedShutdown: Instance initiated shutdown
Some more tinkering and I found that the new volume did not have a BIOS boot partition. So I used gdisk to make one and copied the MBR from the original volume (which works, and from which I can start instances) to the new volume. Now the instance does not terminate, but I am not able to ssh into the newly launched instance.
What might be the reason behind this? How can I get more information (from logs, the AWS Console, etc.) on why this is happening?
To shrink a GPT-partitioned boot EBS volume below the 8GB that standard images seem to use, you can do the following (a slight variation of the dd method from https://matt.berther.io/2015/02/03/how-to-resize-aws-ec2-ebs-volumes/ ):
The source disk is /dev/xvdf, the target is /dev/xvdg.
Shrink the source partition:
$ sudo e2fsck -f /dev/xvdf1
$ sudo resize2fs -M /dev/xvdf1
Will print something like
resize2fs 1.42.12 (29-Aug-2014)
Resizing the filesystem on /dev/xvdf1 to 257491 (4k) blocks.
The filesystem on /dev/xvdf1 is now 257491 (4k) blocks long.
I converted this to MB, i.e. 257491 * 4 / 1024 ~= 1006 MB
copy the above size + a bit more from device to device (!), not just partition to partition, because that way the copy includes both the partition table & the data in the boot partition
$ sudo dd if=/dev/xvdf of=/dev/xvdg bs=1M count=1100
now use gdisk to fix the GPT partition table on the new disk
$ sudo gdisk /dev/xvdg
You'll be greeted with roughly
GPT fdisk (gdisk) version 0.8.10
Warning! Disk size is smaller than the main header indicates! Loading
secondary header from the last sector of the disk! You should use 'v' to
verify disk integrity, and perhaps options on the experts' menu to repair
the disk.
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.
Warning! One or more CRCs don't match. You should repair the disk!
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: damaged
****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
Command (? for help):
The following is the keyboard input within gdisk. To fix the problems, the data partition that is present in the copied partition table needs to be resized to fit on the new disk. This means it needs to be recreated smaller, and its properties need to be set to match the old partition definition.
I didn't test whether it is really required to relocate the backup table to the actual end of the disk, but I did it anyway:
go to extra expert options: x
relocate backup data structures to the end of the disk: e
back to main menu: m
Now on to fixing the partition size.
print and note some properties of partition 1 (and other non-boot partitions if they exist):
i
1
Will show something like
Partition GUID code: 0FC63DAF-8483-4772-8E79-3D69D8477DE4 (Linux filesystem)
Partition unique GUID: DBA66894-D218-4D7E-A33E-A9EC9BF045DB
First sector: 4096 (at 2.0 MiB)
Last sector: 16777182 (at 8.0 GiB)
Partition size: 16773087 sectors (8.0 GiB)
Attribute flags: 0000000000000000
Partition name: 'Linux'
now delete
d
1
and recreate the partition
n
1
Enter the required parameters. All the defaults worked for me here (= press Enter); when in doubt, refer to the partition information from above:
First sector = 4096
Last sector = whatever is the actual end of the new disk - take the default here
type = 8300 (Linux)
The new partition's default name did not match the old one, so change it to the original one:
c
1
Linux (see Partition name from above)
Next thing to change is the partition's GUID
x
c
1
DBA66894-D218-4D7E-A33E-A9EC9BF045DB (see Partition unique GUID, not the partition guid code above that)
That should be it. Back to main menu & print state
m
i
1
Will now print
Partition GUID code: 0FC63DAF-8483-4772-8E79-3D69D8477DE4 (Linux filesystem)
Partition unique GUID: DBA66894-D218-4D7E-A33E-A9EC9BF045DB
First sector: 4096 (at 2.0 MiB)
Last sector: 8388574 (at 4.0 GiB)
Partition size: 8384479 sectors (4.0 GiB)
Attribute flags: 0000000000000000
Partition name: 'Linux'
The only change should be the Partition size.
write to disk and exit
w
y
grow the filesystem to fill the entire (smaller) disk. The first step shrunk it down to the minimal size it could fit in:
$ sudo resize2fs -p /dev/xvdg1
We're done. Detach volume & snapshot it.
Optional step. Choosing proper Kernel ID for the AMI.
If you are dealing with a PVM image and encounter the following mount error in the instance logs
Kernel panic - not syncing: VFS: Unable to mount root
and your instance doesn't pass its startup checks, you will probably need to perform this additional step.
The solution to this error is to choose the proper Kernel ID for your PVM image during image creation from your snapshot.
The full list of Kernel IDs (AKIs) can be obtained here.
Do choose the proper AKI for your image; they are restricted by region and architecture!
The problem was with the BIOS boot partition. I was able to solve this by first initializing an instance with a smaller EBS volume, then detaching the volume and attaching it to an instance which will be used to copy the contents from the larger volume to the smaller volume. That created a BIOS boot partition which actually works. Simply creating a new one and copying the boot partition does not work.
Now, following the steps outlined in either of the two links will help one shrink the root EBS volume.
Today, on Ubuntu, none of the other solutions here worked for me. However, I found one that does:
As a precaution: snapshot the large volume (backup).
CREATE an instance as IDENTICAL as possible to the one where the LARGE volume works well, BUT with a SMALLER volume (the desired size).
Detach this new volume, ATTACH the large volume (as /dev/sda1) and START the instance.
ATTACH the smaller new volume as /dev/sdf.
LOG IN to the new instance. Mount the smaller volume on /mnt:
sudo mount -t ext4 /dev/xvdf1 /mnt
DELETE everything on /mnt with sudo rm -rf /mnt. Ignore the WARNING/error, since /mnt itself can’t be deleted :)
Copy entire / to smaller volume: sudo cp -ax / /mnt
Exit from the instance and stop it in the AWS console.
Detach BOTH volumes. Now re-attach the smaller volume, IMPORTANT, as /dev/sda1.
Start the instance. LOG IN and confirm everything is OK with the smaller volume.
Delete the large volume, delete the large snapshot, and create a new snapshot of the smaller volume. END.
The above procedures are not complete; they are missing these steps (see the sketch after this list):
Copy disk UUID
Install grub boot loader
Copy label
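As a rough, illustrative sketch only (device names and placeholders are assumptions, not taken from the article): suppose the original root filesystem is /dev/xvda1 and the new, smaller root is /dev/xvdf1, mounted at /mnt. For an ext4 root, the missing steps could look roughly like this:
sudo blkid /dev/xvda1                            # note the UUID and LABEL of the original root
sudo tune2fs -U <uuid-from-blkid> /dev/xvdf1     # copy the UUID to the new filesystem
sudo e2label /dev/xvdf1 <label-from-blkid>       # copy the label
for d in /dev /proc /sys; do sudo mount --bind $d /mnt$d; done
sudo chroot /mnt grub-install /dev/xvdf          # reinstall the boot loader on the new disk
sudo chroot /mnt update-grub
Substitute your own device names, UUID and label.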
A more complete procedure can be found here:
https://medium.com/@m.yunan.helmy/decrease-the-size-of-ebs-volume-in-your-ec2-instance-ea326e951bce
This procedure is faster and simpler (no dd/resize2fs, only rsync).
Tested with newer NVMe AWS disks.
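For reference, a typical rsync invocation for copying a whole root filesystem to a new volume mounted at /mnt (illustrative only, not quoted from the linked article) looks roughly like:
sudo rsync -aAXv --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} / /mnt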
Post any questions if you need help.