On my VPS I am running Plesk 12.5 and CentOS 6.6.
I updated my VPS and got 100 GB extra, but somehow I created a new partition instead of adding the space to an existing partition. I did this a year ago, and because space wasn't an issue at that time, I left it.
Now space is becoming an issue and I want to use the extra 100 GB. However, I have no clue how to merge the new partition into the partition Plesk uses.
Below you see my filesystem:
[root@vps dumps]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_plesk-lv_root
47G 14G 31G 31% /
tmpfs 1,9G 8,0K 1,9G 1% /dev/shm
/dev/vda1 485M 169M 292M 37% /boot
/dev/mapper/vg_plesk-lv_tmp
504M 17M 462M 4% /tmp
/dev/vda3 99G 188M 94G 1% /mynewpartition
/dev/vda3 is the new partition that needs to be merged, I think with /dev/mapper/vg_plesk-lv_root. I'm not sure about that, because I don't have enough experience with this. But that one is the one that fills up every time!
Looking at the details you provided, I assume that the name of the volume group on your server is vg_plesk. I can also see that there is a device /dev/vda3 which you wish to merge with vg_plesk-lv_root.
In order to merge them you will have to extend your existing volume group vg_plesk.
First of all, unmount /dev/vda3:
umount /mynewpartition
Remove or comment the entry for this particular device in /etc/fstab and save the file.
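Depending on your LVM version, vgextend may refuse a device that has not yet been initialized as an LVM physical volume. If so, initialize it first (note this wipes the existing filesystem on /dev/vda3):
pvcreate /dev/vda3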
Extend existing volume group.
vgextend vg_plesk /dev/vda3
Extend the desired LV, vg_plesk-lv_root, using all of the newly added free space (the new PV is slightly smaller than 100G, so -L+100G would be refused):
lvextend -l +100%FREE /dev/mapper/vg_plesk-lv_root
Resize the extended LV
resize2fs /dev/mapper/vg_plesk-lv_root
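Afterwards you can confirm the new size of the root filesystem:
df -h /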
Keep in mind that all the data in /mynewpartition will be lost when /dev/vda3 is added to the volume group, so please keep a copy of this data if it is important.
You may also find this link useful.
Hope this helps.
Thanks.
Related
I have 2.5 TB of data in a 4 TB volume that needs to be copied to a 3 TB volume. The IOPS is high for both volumes due to their size, so technically the transfer speed should be fast. But since these are DB files, we need to maintain file integrity, permissions, timestamps and so on.
Everyone recommends rsync but also mentions that it is slow. Is there any other, faster method to copy while keeping the data integrity in check?
Also, is there any way to create an internal image of this volume (.img/.iso, etc.), upload it to S3/Google Drive, and download it onto the other volume? Just thinking through all possibilities to get this done faster.
Update: adding more info here. The volumes are attached to the same machine; 4 TB is the volume with the data, 3 TB is the new empty volume. This is essentially part of a volume-resize activity.
If an "exact" copy of the disk is acceptable, then you could:
Create a Snapshot of the Amazon EBS volume
Create a new Amazon EBS volume from the snapshot
Done!
Internally, the new volume "points to" the snapshot, so you don't have to wait for the data to be copied across. The first time a disk block is accessed, the block will be copied from the snapshot to the disk volume. This happens behind-the-scenes, so you won't even notice it. This means that the new volume is available very quickly.
However, please note that the new volume needs to be at least as big as the source volume.
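If you prefer to script this, a rough sketch with the AWS CLI (the IDs, availability zone and device name below are placeholders, not values from the question):
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "copy of source data volume"
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1a --volume-type gp2
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 --instance-id i-0123456789abcdef0 --device /dev/sdf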
To test the speed of copying, I did the following:
Launched a t2.large Amazon EC2 instance with:
A 4TB volume
A 3TB volume
On the 4TB volume: Generated 2.6TB of files, across 439 files:
$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 3.9G 416K 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/xvda1 8.0G 1.3G 6.8G 16% /
/dev/xvdb 3.9T 2.6T 1.2T 69% /v4t <--- Generated files here
/dev/xvdc 2.9T 89M 2.8T 1% /v3t <--- Target drive
tmpfs 798M 0 798M 0% /run/user/1000
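For what it's worth, a quick way to generate test files of roughly that total size (file names are illustrative, not what I actually used) is something like:
for i in $(seq 1 439); do sudo fallocate -l 6G /v4t/testfile_$i; done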
I then copied the data overnight, resulting in:
/dev/xvdc 2.9T 2.6T 231G 92% /v3t
Copy speed was reported as:
sent 2,762,338,236,045 bytes received 8,408 bytes 121,836,508.74 bytes/sec
total size is 2,761,663,971,512 speedup is 1.00
Unfortunately, my timer failed due to a disconnection, but based on the reported transfer rate (~122 MB/s) the 2.6 TB copied in roughly 6-7 hours. So, it seems that you can copy your 2.5 TB overnight rather than needing 5 days.
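For reference, a typical rsync invocation that preserves permissions, ownership, timestamps and hard links (the options here are illustrative, not necessarily the exact command used) is:
rsync -aHAX --numeric-ids --info=progress2 /v4t/ /v3t/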
Adding to John's answer: we used msrsync (https://github.com/jbd/msrsync).
It can run up to 16 rsync processes in parallel. Since this was a DB, the sheer number of files we had was huge (irrespective of size). It took around 2 days to copy the 2.5 TB of data.
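An invocation along those lines (paths, process count and rsync options are illustrative, not the exact command we ran) looks like:
msrsync -p 16 --progress --stats --rsync "-aHAX --numeric-ids" /v4t/ /v3t/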
Our DB tables were fine and replication worked, but we could still see differences in the amount of data used on the two volumes. The newer volume showed about 160 GB less used out of the 2.5 TB. This might be due to how blocks are handled on the different volumes, but we haven't seen any issue so far.
On all our Kafka machines (production machines), we see that there is no free space:
df -h /var/kafka
Filesystem Size Used Avail Use% Mounted on
/dev/sdb 11T 11T 2.3M 100% /var/kafka
and under /var/kafka/kafka-logs
we see all the topic folders (huge in size), for example:
117G hgpo.llo.prmt.processed-28
117G hgpo.llo.prmt.processed-29
117G hgpo.llo.prmt.processed-3
117G hgpo.llo.prmt.processed-30
117G hgpo.llo.prmt.processed-31
117G hgpo.llo.prmt.processed-32
What is the best approach to delete the topic(s) from the folder /var/kafka/kafka-logs,
and what are the exact steps to do so (e.g. stopping the service before deletion)?
Second important question:
What is the mechanism that is supposed to delete the topics automatically?
I was using the instructions at https://matt.berther.io/2015/02/03/how-to-resize-aws-ec2-ebs-volumes/ and http://atodorov.org/blog/2014/02/07/aws-tip-shrinking-ebs-root-volume-size/ to move to an EBS volume with less disk space. In both cases, when I attached the shrunken EBS volume (as /dev/xvda or /dev/sda1; neither works) to an EC2 instance and started it, it stopped on its own with the message
State transition reason
Client.InstanceInitiatedShutdown: Instance initiated shutdown
After some more tinkering I found that the new volume did not have a BIOS boot partition. So I used gdisk to make one and copied the MBR from the original volume (which works, and from which I can start instances) to the new volume. Now the instance does not terminate, but I am not able to SSH into the newly launched instance.
What might be the reason behind this? How can I get more information (from logs, the AWS console, etc.) on why this is happening?
To shrink a GPT-partitioned boot EBS volume below the 8 GB that standard images seem to use, you can do the following (a slight variation of the dd method from https://matt.berther.io/2015/02/03/how-to-resize-aws-ec2-ebs-volumes/):
The source disk is /dev/xvdf, the target is /dev/xvdg.
Shrink the source filesystem:
$ sudo e2fsck -f /dev/xvdf1
$ sudo resize2fs -M /dev/xvdf1
Will print something like
resize2fs 1.42.12 (29-Aug-2014)
Resizing the filesystem on /dev/xvdf1 to 257491 (4k) blocks.
The filesystem on /dev/xvdf1 is now 257491 (4k) blocks long.
I converted this to MB, i.e. 257491 * 4 / 1024 ~= 1006 MB
Copy the above size plus a bit more from device to device (!), not just partition to partition, because that includes both the partition table and the data in the boot partition:
$ sudo dd if=/dev/xvdf of=/dev/xvdg bs=1M count=1100
Now use gdisk to fix the GPT partition table on the new disk:
$ sudo gdisk /dev/xvdg
You'll be greeted with roughly
GPT fdisk (gdisk) version 0.8.10
Warning! Disk size is smaller than the main header indicates! Loading
secondary header from the last sector of the disk! You should use 'v' to
verify disk integrity, and perhaps options on the experts' menu to repair
the disk.
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.
Warning! One or more CRCs don't match. You should repair the disk!
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: damaged
****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
Command (? for help):
The following is the keyboard input within gdisk. To fix the problems, the data partition that is present in the copied partition table needs to be resized to fit on the new disk. This means it needs to be recreated smaller and its properties need to be set to match the old partition definition.
I didn't test whether relocating the backup table to the actual end of the disk is strictly required, but I did it anyway:
go to extra expert options: x
relocate backup data structures to the end of the disk: e
back to main menu: m
Now on to fixing the partition size.
print and note some properties of partition 1 (and other non-boot partitions if they exist):
i
1
Will show something like
Partition GUID code: 0FC63DAF-8483-4772-8E79-3D69D8477DE4 (Linux filesystem)
Partition unique GUID: DBA66894-D218-4D7E-A33E-A9EC9BF045DB
First sector: 4096 (at 2.0 MiB)
Last sector: 16777182 (at 8.0 GiB)
Partition size: 16773087 sectors (8.0 GiB)
Attribute flags: 0000000000000000
Partition name: 'Linux'
now delete
d
1
and recreate the partition
n
1
Enter the required parameters. All the defaults worked for me here (i.e. just press Enter); when in doubt, refer to the partition information from above:
First sector = 4096
Last sector = whatever is the actual end of the new disk - take the default here
type = 8300 (Linux)
The new partition's default name did not match the old one, so change it to the original one:
c
1
Linux (see Partition name from above)
The next thing to change is the partition's unique GUID:
x
c
1
DBA66894-D218-4D7E-A33E-A9EC9BF045DB (see Partition unique GUID, not the partition guid code above that)
That should be it. Back to main menu & print state
m
i
1
Will now print
Partition GUID code: 0FC63DAF-8483-4772-8E79-3D69D8477DE4 (Linux filesystem)
Partition unique GUID: DBA66894-D218-4D7E-A33E-A9EC9BF045DB
First sector: 4096 (at 2.0 MiB)
Last sector: 8388574 (at 4.0 GiB)
Partition size: 8384479 sectors (4.0 GiB)
Attribute flags: 0000000000000000
Partition name: 'Linux'
The only changes should be the last sector and the partition size.
write to disk and exit
w
y
Grow the filesystem to fill the entire (smaller) disk. The first step shrank it down to the minimal size it could fit:
$ sudo resize2fs -p /dev/xvdg1
We're done. Detach volume & snapshot it.
Optional step. Choosing proper Kernel ID for the AMI.
If you are dealing with a PVM (paravirtual) image and encounter the following mount error in the instance logs
Kernel panic - not syncing: VFS: Unable to mount root
and your instance doesn't pass the startup checks, you will probably need to perform this additional step.
The solution to this error is to choose the proper Kernel ID for your PVM image when creating the image from your snapshot.
The full list of Kernel IDs (AKIs) can be obtained here.
Make sure to choose the proper AKI for your image; AKIs are specific to regions and architectures!
The problem was with the BIOS boot partition. I was able to solve this by first initializing an instance with a smaller EBS volume, then detaching that volume and attaching it to the instance which will be used to copy the contents from the larger volume to the smaller volume. That created a BIOS boot partition which actually works. Simply creating a new one and copying the boot partition over does not work.
Now following the steps outlined in any of the two links will help one shrink the volume of root EBS.
Today, on Ubuntu, none of the other solutions here worked for me. However, I found one that does:
As a precaution: snapshot the large volume (backup).
CREATE an instance as IDENTICAL as possible to the one where the LARGE volume works well, BUT with a SMALLER volume (the desired size).
Detach this new volume and ATTACH the large volume (as /dev/sda1), then START the instance.
ATTACH the smaller new volume as /dev/sdf
LOG IN to the new instance. Mount the smaller volume on /mnt:
sudo mount -t ext4 /dev/xvdf1 /mnt
DELETE everything on /mnt with sudo rm -rf /mnt. Ignore the WARNING/error, since /mnt itself can't be deleted :)
Copy entire / to smaller volume: sudo cp -ax / /mnt
Exit from instance and Stop it in AWS console
Detach BOTH volumes. Now, re-attach the smaller volume, IMPORTANT, as /dev/sda1
Start instance. LOG IN instance and confirm everything is ok with smaller volume
Delete large volume, delete large snapshot, create a new snapshot of smaller volume. END.
The above procedures are not complete; they are missing steps (a rough sketch of them follows this list):
Copy disk UUID
Install grub boot loader
Copy label
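A rough sketch of those missing steps, assuming an ext4 root, the old root partition on /dev/xvdf1, and the new volume's partition /dev/xvdg1 mounted at /mnt (device names and mountpoint are assumptions, adjust to your setup):
Read the UUID and label of the old root filesystem:
$ sudo blkid /dev/xvdf1
Apply them to the new filesystem:
$ sudo tune2fs -U <UUID from blkid> /dev/xvdg1
$ sudo e2label /dev/xvdg1 <LABEL from blkid>
Reinstall the boot loader onto the new disk (GRUB 2 syntax; legacy GRUB or a chroot-based install may be needed depending on the image):
$ sudo grub-install --boot-directory=/mnt/boot /dev/xvdg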
A more complete procedure can be found here:
https://medium.com/@m.yunan.helmy/decrease-the-size-of-ebs-volume-in-your-ec2-instance-ea326e951bce
This procedure is faster and simpler (no dd/resize2fs, only rsync).
Tested with newer NVMe AWS disks.
Post any questions if you need help.
Problem: drop keyspace MyKeyspace; hangs.
Environment:
This is Ubuntu 12.04 64-bit in VirtualBox, running a single Cassandra instance (on a development machine).
Cassandra is 1.1.6:
myuser@myhost:~$ /usr/bin/nodetool -h localhost version
ReleaseVersion: 1.1.6
Plenty of free disk space:
myuser@myhost:~$ df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/myhost-root 100232772 3100308 92112816 4% /
udev 1016760 4 1016756 1% /dev
tmpfs 410340 268 410072 1% /run
none 5120 0 5120 0% /run/lock
none 1025844 0 1025844 0% /run/shm
/dev/sda1 233191 24999 195751 12% /boot
Machine is idle:
myuser@myhost:~$ uptime
21:24:50 up 3:46, 2 users, load average: 0.06, 0.04, 0.05
How I got there:
The machine was running another DB, all fine for a long time. Now I created a new keyspace MyKeyspace and ran a Java program to import data (using Titan graph, but that shouldn't matter). After a couple of thousand records were imported (a couple of MB only), the import program did not make progress anymore, and logged 6 times:
418455 [RetryService : myhost(192.168.1.241):9160] INFO com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor - Reactivating myhost
Then my program (titan graph actually) gave up with:
com.thinkaurelius.titan.diskstorage.TemporaryStorageException: Temporary failure in storage backend
During all this time I was connected to /usr/bin/cassandra-cli and could successfully execute show keyspaces;.
Then I decided to start over and drop the keyspace. That's where it hangs now, for hours. It doesn't respond to Ctrl-C either. Meanwhile I'm able to log in via ssh, connect with cassandra-cli, and run show keyspaces;. The keyspace is still there. Also, my Java app can access that data store, but only for reading: reads succeed, writes fail. It's just a timeout I get from the Titan graph library when writing:
com.thinkaurelius.titan.core.TitanException: ID renewal thread on partition [2] did not complete in time. [60007 ms]
Any commands I could run to see what's going on? Should I report a bug?
If you have auto_snapshot enabled in cassandra.yaml (it's enabled by default), then Cassandra will take a snapshot before dropping the keyspace. If you don't have JNA set up properly, this can sometimes cause problems, so I would check that first.
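To check whether it is enabled (the path below is the default package location; yours may differ):
$ grep auto_snapshot /etc/cassandra/cassandra.yaml
It is also worth looking at Cassandra's system.log from the last startup for a warning that JNA could not be loaded and that native methods are disabled.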
Problem statement:
I am getting the exception below:
org.apache.hadoop.hdfs.protocol.DSQuotaExceededException:
org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace
quota of /tmp is exceeded: quota=659706976665600 diskspace consumed=614400.1g
So I just wanted to know how much space the /tmp directory is currently using, since that seems to be why I am getting this exception. How can I see the free space in /tmp?
Update:
bash-3.00$ df -h /tmp
Filesystem size used avail capacity Mounted on
rpool/tmp 10G 2.2G 7.8G 22% /tmp
I am puzzled now as to why I am getting that exception, since it clearly shows above that I have space available.
You can run the following (for SunOS):
# du -sh /tmp
to see how much it uses now, but you already saw that.
To see how much total, free and used space is on the partition where /tmp resides you can use:
# df -h /tmp
Note that filling up space is not the only thing that can prevent writing to filesystem.
Running out of inodes is another popular reason.
You can check that with
# df -i /tmp
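Note that the DSQuotaExceededException in the question is about the HDFS space quota set on the /tmp directory in HDFS, not about the local /tmp filesystem, which is why df shows plenty of free space. One way to inspect the HDFS quota and usage (assuming a reasonably recent Hadoop client):
# hadoop fs -count -q /tmp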
for a in *; do du -ch "${a}" | grep total; echo "${a}"; done
Try this, but it takes time if the directories are gigabytes in size.
Managing HDFS Space Quotas
It's important to understand that in HDFS, there must be enough quota space to accommodate an entire block. If the user has, let's say, 200 MB free in their allocated quota, they can't create a new file, regardless of the file size, if the HDFS block size happens to be 256 MB. You can set the HDFS space quota for a user by executing the setSpaceQuota command. Here's the syntax:
$ hdfs dfsadmin -setSpaceQuota <N> <dirname>...<dirname>
The space quota you set acts as the ceiling on the total size of all files in a directory. You can set the space quota in bytes (b), megabytes (m), gigabytes (g), terabytes (t) and even petabytes (by specifying p—yes, this is big data!). And here’s an example that shows how to set a user’s space quota to 60GB:
$ hdfs dfsadmin -setSpaceQuota 60G /user/alapati
You can set quotas on multiple directories at a time, as shown here:
$ hdfs dfsadmin -setSpaceQuota 10g /user/alapati /test/alapati
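If you ever need to remove a quota again, there is a corresponding clear command:
$ hdfs dfsadmin -clrSpaceQuota /user/alapati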