Is there a way to create a snapshot of a logical volume (lv1) that resides in volume group vgA inside a different volume group (say vgB)?
I have my root logical volume in volume group vgA on the SSD, and I want to take the snapshot on a second volume group vgB that sits on the mechanical hard disk, so I tried to execute
lvcreate -L 10G -s -n vgB/rootSnapshot vgA/rootVolume
and some other variants, but had no luck.
The snapshot volume must reside in the same VG as lv1.
For your situation, you may want to consider creating one VG (vgA) that spans two PVs (pv1 on the SSD, pv2 on the mechanical hard disk). Then you can create lv1 on pv1 and lvsnap on pv2.
lvcreate -L 100G -n lv1 vgA /dev/pv1
lvcreate -L 10G -s -n lvsnap /dev/vgA/lv1 /dev/pv2
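The VG-spanning setup itself would look roughly like this (a sketch only; /dev/sda1 and /dev/sdb1 stand in for the SSD and HDD partitions):

```shell
pvcreate /dev/sda1 /dev/sdb1      # initialize both partitions as physical volumes
vgcreate vgA /dev/sda1 /dev/sdb1  # one volume group spanning the SSD and the HDD
```

The trailing PV arguments in the lvcreate commands then pin each LV's extents to the device you want.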
I only want to say that limiting the snapshot to the same volume group as its original LV really degrades the idea of a "logical" volume.
For example, I use two hard drives behind a RAID card to form a RAID1 disk, manage all its physical space with volume group VG_SYS, and create my system volume and install my OS within it. Then I use another two drives to form a RAID0 disk and build a VG_DATA volume group on it, planning to use it as storage for unimportant data and snapshots.
However, I can't create a snapshot volume in VG_DATA due to this limitation of LVM. Of course I can extend VG_SYS onto my RAID0 drives and dedicate the PVs from the RAID0 drives to my snapshot volume. But that would blur my intention of separating logical volumes into an important system volume group (redundancy guaranteed by RAID1) and an unimportant, quickly updated data volume group (RAID0 to increase I/O efficiency). Snapshots are meant to be updated and recycled very quickly, so they don't need any redundancy. If a snapshot happens to break, you just rebuild another one -- it's unlikely both your original volume and the snapshot break at the same time.
It's not possible with LVM; specifically, lvcreate does not support it. However, it is possible if you use the device mapper directly (via dmsetup).
See here:
https://www.kernel.org/doc/html/latest/admin-guide/device-mapper/snapshot.html#how-snapshot-is-used-by-lvm2
https://www.man7.org/linux/man-pages/man8/dmsetup.8.html
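Very roughly, following the kernel document above, you could place the copy-on-write store in vgB and wire the snapshot up by hand. This is only a hedged sketch using this question's volume names; note that after creating the snapshot-origin mapping, all I/O to the origin must go through the new /dev/mapper/rootOrigin device, not the original path:

```shell
# a COW volume on the slow disk to hold the snapshot deltas
lvcreate -L 10G -n rootSnapCow vgB

# origin size in 512-byte sectors
SIZE=$(blockdev --getsz /dev/vgA/rootVolume)

# remap the origin so writes get copied out to the snapshot first
echo "0 $SIZE snapshot-origin /dev/vgA/rootVolume" | dmsetup create rootOrigin

# the snapshot itself: <origin> <COW device> P(ersistent) <chunk size in sectors>
echo "0 $SIZE snapshot /dev/vgA/rootVolume /dev/vgB/rootSnapCow P 8" | dmsetup create rootSnapshot
```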
Related
I have a ~250 TB XFS filesystem, distributed across several disks (PVs) via LVM.
I've moved most of the data to another server. The remaining data (~60 TB) would easily fit on just one PV.
I would like to decommission all but one disk in my VG. The trouble is my LV holds an XFS filesystem, and shrinking XFS is unsupported. So no matter how "empty" the filesystem is, I can't use pvmove to take extents off a PV, because it's still "used" by free space in XFS, and thus I can't vgreduce it.
All the tutorials on how to do this, e.g. https://yallalabs.com/linux/how-to-reduce-shrink-the-size-of-a-lvm-partition-formatted-with-xfs-filesystem/ , boil down to "back up your data, reformat, restore".
Is that truly the only option?
I tried this with XFS a long time back. I don't have the commands handy, but the steps were:
Take a backup of the current XFS filesystem
Remove the LV
Create a new LV with the required size
Restore the XFS backup
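The steps above might look like this with xfsdump/xfsrestore (a sketch only; the VG/LV names and paths are placeholders, and the dump target must live outside the filesystem being recreated):

```shell
xfsdump -l 0 -f /backup/data.xfsdump /mnt/data   # level-0 dump of the mounted filesystem
umount /mnt/data
lvremove /dev/vgdata/datalv                      # drop the old LV
lvcreate -L 60T -n datalv vgdata                 # recreate it at the required size
mkfs.xfs /dev/vgdata/datalv
mount /dev/vgdata/datalv /mnt/data
xfsrestore -f /backup/data.xfsdump /mnt/data     # restore the dump into the new FS
```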
I am using Ubuntu 14.04 as my dom0 for the Xen hypervisor installed on my server. During the Ubuntu installation I used the "set up LVM" option. Now I see that I have 3 partitions sda1, sda2 and sda5, of which sda5 is set up as an LVM physical volume. I have a few questions regarding that:
Why is the physical volume full? I have not installed anything on my server except Xen.
Of the two logical volumes, root and swap_1, which one can be expanded or shrunk?
How do I create a new logical volume when the 2 logical volumes appear to have taken up all the space (magically)?
I want to install 4 VMs. Do I need a separate LV for each VM?
Here are the screenshots of the system:
Why is the physical volume full? I have not installed anything on my server except Xen.
I see that when installing the Ubuntu 14.04 server you allocated 144.1G from the ubuntu-vg volume group for the dom0 root partition, but also allocated 128G for swap space, assigning all the available space in the volume group (which is why it says "allocatable (but full)").
Are you sure you intended to have such a large swap?
Of the two logical volumes, root and swap_1, which one can be expanded or shrunk?
As things stand, neither of the volumes can be expanded without shrinking the other. I suggest you reduce the swap to a more suitable size, 8G for instance.
To do that, first make sure to stop using the swap space:
root@ubuntu:~# swapoff /dev/mapper/ubuntu--vg-swap_1
then use LVM2 tools to resize the logical volume:
root@ubuntu:~# lvreduce -L 8G /dev/mapper/ubuntu--vg-swap_1
(more examples of shrinking logical volumes at https://www.rootusers.com/lvm-resize-how-to-decrease-an-lvm-partition/)
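Note that lvreduce leaves the old swap signature behind, so you should recreate and re-enable it afterwards (and update /etc/fstab or the initramfs if they reference the swap UUID, since mkswap generates a new one):

```shell
root@ubuntu:~# mkswap /dev/mapper/ubuntu--vg-swap_1   # write a fresh swap signature
root@ubuntu:~# swapon /dev/mapper/ubuntu--vg-swap_1   # re-enable swap
```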
How do I create a new logical volume when the 2 logical volumes appear to have taken up all the space (magically)?
After you do the previous step, you should have ~120G available in volume group ubuntu-vg, which you can use to create new logical volumes.
I want to install 4 VMs. Do i need a separate LV for each VM ?
It depends on what you find more flexible:
You can create xen domains with storage based on images, which are standard files inside dom0 root filesystem (in this case you may want to expand the root volume).
or,
You can create xen domains with storage based on LVM volumes.
More info on this, and examples, at https://www.howtoforge.com/using-xen-with-lvm-based-vms-instead-of-image-based-vms-debian-etch
Please check the below description:
Red Hat Enterprise Linux uses a naming scheme that is file-based, with file names in the form of /dev/xxyN.
Where,
xx:
The first two letters of the partition name indicate the type of device on which the partition resides, usually sd.
y:
This letter indicates which device the partition is on. For example, /dev/sda for the first hard disk, /dev/sdb for the second, and so on.
N:
The final number denotes the partition. The first four (primary or extended) partitions are numbered 1 through 4. Logical partitions start at 5. So, for example, /dev/sda3 is the third primary or extended partition on the first hard disk, and /dev/sdb6 is the second logical partition on the second hard disk.
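Just as an illustration, the naming scheme above can be decomposed mechanically with shell parameter expansion (the parse_partition function is made up for this example):

```shell
# Split a /dev/xxyN name into device type, disk letter, and partition number.
parse_partition() {
    local name=${1#/dev/}      # strip the /dev/ prefix, e.g. "sdb6"
    local type=${name:0:2}     # "sd" - type of device
    local disk=${name:2:1}     # "b"  - which disk the partition is on
    local part=${name:3}       # "6"  - partition number
    echo "type=$type disk=$disk partition=$part"
}

parse_partition /dev/sdb6   # prints: type=sd disk=b partition=6
```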
In Red Hat Enterprise Linux each partition is used to form part of the storage necessary to support a single set of files and directories. Mounting a partition makes its storage available starting at the specified directory (known as a mount point).
For example, if partition /dev/sda5 is mounted on /usr/, that would mean that all files and directories under /usr/ physically reside on /dev/sda5. So the file /usr/share/doc/FAQ/txt/Linux-FAQ would be stored on /dev/sda5, while the file /etc/gdm/custom.conf would not. It is also possible that one or more directories below /usr/ would be mount points for other partitions. For instance, a partition (say, /dev/sda7) could be mounted on /usr/local/, meaning that /usr/local/man/whatis would then reside on /dev/sda7 rather than /dev/sda5.
Generally speaking, the disk space for the /dev partition depends on the number and size of the partitions (both primary and logical) to be used by the operating system. However, there is no one right answer to this question; it depends on your needs and requirements.
My question is: is there any effect on the initial partition size (say, we gave 32 GB to the /dev partition while installing the RHEL OS) if we add more hard disk capacity (say, hundreds of GBs) to the /dev partition?
You don't create partitions for /dev. It's in memory, and managed fully automatically by the kernel. /dev exists to expose kernel objects such as devices to userspace, it is transient and doesn't require backing storage on disk.
If you run ls -l /dev/sda1, you will see that the first letter in the permission block is b, for block device. This is a special file that, if stored on disk, would only hold two special numbers (called major and minor, usually stored together with the file permissions). When you try to open this special file, the kernel sees that it is a block device and looks up the major and minor numbers to find the matching driver that actually provides the data. Your read/write/ioctl calls are then redirected to this driver.
I'm mostly a database guy, but I have a Debian Wheezy server with 4 hard disks. It was set up using one disk a while back; that was all that was needed. Now I need more space, and the thing that is throwing me off, I think, is the UUID disk stuff.
Anyway:
/mnt# lsblk -io KNAME,TYPE,SIZE,MODEL
KNAME TYPE SIZE MODEL
sdb disk 232.9G Hitachi HDP72502
sdc disk 232.9G Hitachi HDP72502
sda disk 232.9G Hitachi HDP72502
sda1 part 223.4G
sda2 part 1K
sda5 part 9.5G
sdd disk 232.9G Hitachi HDP72502
sr0 rom 1024M DVD A DS8A1P
Root is mounted on sda; sdb, sdc and sdd are unused, unformatted etc. I just need some more space, so I have created /mnt/ext_b/ and so on for b, c, d.
mount shows:
/dev/disk/by-uuid/1b1e97e4-3c04-4e50-8e06-b16752778717 on / type ext4 (rw,relatime,errors=remount-ro,user_xattr,barrier=1,data=ordered)
which is correct. I want to mount the others just for space; how do I get their UUIDs?
/mnt# blkid
/dev/sda5: UUID="f70ad0b2-a9d0-430a-829c-d2e37245fd71" TYPE="swap"
/dev/sda1: UUID="1b1e97e4-3c04-4e50-8e06-b16752778717" TYPE="ext4"
How do I get UUIDs so I can put formatted filesystems on the disks?
/mnt# mkfs.ext4 /dev/sdb1
mke2fs 1.42.5 (29-Jul-2012)
Could not stat /dev/sdb1 --- No such file or directory
thanks in advance.
matt
It seems that those additional disks haven't been touched since they were connected to the server and don't even have partitions yet. In general, adding extra disk space in Linux is done in the following steps:
Attach the new disk to the server
Create a partition table on it
Add one or more partitions to the disk
Format the partition with the FS of your choice
Mount the partition at the mount point of your choice
Make the mount persistent by adding an appropriate line to /etc/fstab
If you have multiple disks, you may consider creating a hardware RAID array (if you have a RAID controller) or a software RAID array using the mdadm tool. Either way you'll get a larger single disk (its size depends on the RAID level you choose), for which you then continue from step 2 onward. It's worth mentioning that there is another way to get more usable space out of multiple disks than a single disk provides: the Logical Volume Manager, or LVM. It's more sophisticated than mdadm and allows you to create FS snapshots and add extra disk space to a volume without creating additional mount points.
Whichever you choose, you'll need to create a partition table on the new disk/LVM volume/mdadm array. Here you need to make another choice: what type of partition table to use, MBR or GPT. Check the Partitioning HOWTO for more details, but in general I'd recommend GPT for large non-bootable disks.
The same HOWTO will tell you how to create partition(s) on the selected disk. At this point you'll have devices like /dev/sdb1, etc.
Then you can go to step 4, the one you already tried:
# mkfs.ext4 /dev/sdb1
That should succeed now, and you'll be able to get the UUID of the new FS with blkid. Add the obtained UUID to your /etc/fstab file and mount the newly created FS at its mount point.
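As a rough sketch of steps 2 through 6 for /dev/sdb (the mount point is this question's; the UUID placeholder must be replaced with the value blkid prints, and these commands destroy the disk's contents, so double-check the target device):

```shell
parted -s /dev/sdb mklabel gpt                             # step 2: new GPT partition table
parted -s -a optimal /dev/sdb mkpart primary ext4 0% 100%  # step 3: one partition spanning the disk
mkfs.ext4 /dev/sdb1                                        # step 4: create the filesystem
mkdir -p /mnt/ext_b
mount /dev/sdb1 /mnt/ext_b                                 # step 5: mount it
blkid /dev/sdb1                                            # read the new UUID...
echo 'UUID=<uuid-from-blkid> /mnt/ext_b ext4 defaults 0 2' >> /etc/fstab   # step 6: persist it
```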
It seems to me that you must create partitions on the disks.
Think about how you want to store your data. You have similar disks; should they be in a RAID? If so, what type of RAID should it be?
You can create partitions with fdisk or one of its alternatives (gparted, cfdisk and so on).
There is a lot of information on the internet and in the manuals.
Maybe you need LVM? Some people say it may slow down your database, but it gives you the opportunity to take snapshots.
After creating partitions you can create a filesystem and mount it.
Usually people recommend XFS or ext4 for databases.
And don't forget to set the right mount flags on your filesystem.
noatime,nodiratime and barrier=0 will improve performance, but with barrier=0 you can lose your data in some cases. In the case of ext4, look at the data option (in your case data=ordered may be appropriate).
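For example, the resulting /etc/fstab line for ext4 might look like this (the UUID and mount point are placeholders, and whether to add barrier=0 depends on how much data-loss risk you can tolerate):

```
UUID=<your-fs-uuid>  /srv/db  ext4  noatime,nodiratime,data=ordered  0 2
```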
UPD: maybe this question belongs on Super User or in the Unix section?
You are supposed to create a partition with a utility like fdisk, cfdisk, gparted or partitionmanager before you can format it.
Based on http://www.centos.org/docs/5/html/5.2/Cluster_Logical_Volume_Manager/create_linear_volumes.html :
"When you create a logical volume, the logical volume is carved from a volume group using the free extents on the physical volumes that make up the volume group. Normally logical volumes use up any space available on the underlying physical volumes on a next-free basis. Modifying the logical volume frees and reallocates space in the physical volumes."
I'm in doubt whether, if I create a logical volume on a physical volume that has existing data, the lvcreate command will delete that data, based on the statement "Modifying the logical volume frees and reallocates space in the physical volumes".
I'm trying to recover my logical volume and mounting it to the server. I have my whole problem stated in another question (http://stackoverflow.com/questions/13356555/how-to-mount-logical-volume).
Hope you guys will help me.
Any help would be appreciated.
Thanks.
LVM is a logical layer that sits on top of physical block devices.
It allows you to, basically, extend your volume across multiple physical disks.
It's a particular way of partitioning the block devices, so yes, it takes over how those disks are managed at a higher level than physical volumes.
lvcreate will disconnect your data on the physical volumes from the logical layer if you are not just doing a resize (I'd make sure you have a backup).
If you've lost control of your LV, i.e. destroyed a table entry, you can still forcibly mount the volume read-only and possibly recover the contents. You may also be able to repair the volume in question.
If you have lost contact with your data, you can also fully image each disk off to scratch media as files, to restore any data lost in your array or logical volume.
This way, you can run multiple passes or repair attempts.
To do this you MUST stop the LV and mdadm, and have the drives unmounted.
Let me know if you need a further explanation of how to image off the drives.
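For reference, imaging a member disk off to scratch storage can be as simple as the following (a sketch; /dev/sdX stands for the disk and /scratch must have room for a full copy):

```shell
# copy the whole disk to a file, padding unreadable sectors so offsets stay aligned
dd if=/dev/sdX of=/scratch/sdX.img bs=4M conv=noerror,sync status=progress
```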