Based on http://www.centos.org/docs/5/html/5.2/Cluster_Logical_Volume_Manager/create_linear_volumes.html
"When you create a logical volume, the logical volume is carved from a volume group using the free extents on the physical volumes that make up the volume group. Normally logical volumes use up any space available on the underlying physical volumes on a next-free basis. Modifying the logical volume frees and reallocates space in the physical volumes."
I'm in doubt whether, if I create a logical volume on a physical volume that has existing data, the lvcreate command will delete that data, based on this statement: "Modifying the logical volume frees and reallocates space in the physical volumes".
I'm trying to recover my logical volume and mount it on the server. My whole problem is stated in another question (http://stackoverflow.com/questions/13356555/how-to-mount-logical-volume).
Hope you guys will help me.
Any help would be appreciated.
Thanks.
LVM is a logical layer that sits on top of physical block devices.
It allows you to extend a volume across multiple physical disks.
It's a particular way of partitioning the block devices, so yes, it takes over how those disks are managed at a higher level than physical volumes.
lvcreate will disconnect your data on the physical volumes from the logical layer if you are not just doing a resize (I'd be careful and make sure you have a backup).
If you've lost control of your LV, i.e. destroyed a table entry, you can still forcibly mount the volume read-only and possibly recover the contents. You may also be able to repair the volume in question.
If you have lost contact with your data, you can also image each disk off in full to scratch media, as files, to restore any data lost in your array or logical volume.
That way you can run multiple passes or repair attempts.
To do this you MUST stop the LV and mdadm and have the drives unmounted on the system.
Let me know if you need a further explanation of how to image off the drives.
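A rough sketch of that imaging step, assuming /dev/sdb is one of the member disks and /mnt/scratch is separate scratch storage (run as root, with the LV/array stopped and nothing mounted):
# copy the raw disk to a file, continuing past read errors and padding unreadable sectors
dd if=/dev/sdb of=/mnt/scratch/sdb.img bs=1M conv=noerror,sync
You can then examine or loop-mount the image files without touching the original drives again.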
I've been playing around with disk IO on flash drives, HDDs, and SSDs by opening /dev/sd* paths in Linux the way I would any other file.
I understand that the memory controller on the disk can hide true block order (via a mapping) from the OS.
This boils down to these questions:
Are the blocks in /dev/sd* in the order perceived by the OS, or in the order perceived by the disk's memory controller?
Does the order of blocks in /dev/sd* differ between POSIX OSes?
Can these properties change on an NT or Cygwin system?
Is this property different among Flash, HDD, and SSD?
Can a write occur to a specific index in an opened /dev/sd* path, or is this determined by the memory controller?
Thanks in advance!
If you use the device nodes for entire disks (/dev/sda, /dev/sdb, and so on), then the file offsets for the block device correspond to logical block addresses and will be portable across systems (assuming that the disk sector size is supported). This is independent of the storage technology.
However, the names of the device nodes are different from system to system.
If you use sub-devices (partitions), this is not necessarily the case because interpretation of and support for partition tables varies considerably.
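For the whole-disk nodes, a quick way to see the offset/LBA correspondence, assuming a 512-byte logical sector size (check it with blockdev --getss) and /dev/sdb as an example device:
# read logical block address 2048 from the whole-disk node; the byte offset into the device is bs * skip
dd if=/dev/sdb bs=512 skip=2048 count=1 2>/dev/null | xxd | head
The same offset arithmetic applies whether the device is a flash drive, an HDD, or an SSD; any remapping the drive's controller does internally is not visible at this layer.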
I am using Ubuntu 14.04 as the dom0 for the Xen hypervisor installed on my server. During the Ubuntu installation I used the "set up LVM" option. Now I see that I have 3 partitions, sda1, sda2 and sda5, of which sda5 is set up as an LVM physical volume. I have a few questions regarding that:
Why is the physical volume full? I have not installed anything on my server except Xen.
Of the two logical volumes, root and swap_1, which one can be expanded or shrunk?
How do I create a new logical volume when the 2 logical volumes appear to have taken up all the space (magically)?
I want to install 4 VMs. Do I need a separate LV for each VM?
Here are the screenshots of the system:
Why is the physical volume full? I have not installed anything on my server except Xen.
I see that when installing the Ubuntu 14.04 server you allocated 144.1G from the ubuntu-vg volume group for the dom0 root partition, but also allocated 128G for swap space, assigning all the available space in the volume group (which is why it says "allocatable (but full)").
Are you sure you intended to have such a large swap?
Of the two logical volumes, root and swap_1, which one can be expanded or shrunk?
In the current situation neither volume can be expanded without shrinking the other. I suggest you reduce the swap to a more suitable size, 8G for instance.
To do that, first make sure to stop using the swap space:
root@ubuntu:~# swapoff /dev/mapper/ubuntu--vg-swap_1
then use LVM2 tools to resize the logical volume:
root@ubuntu:~# lvreduce -L 8G /dev/mapper/ubuntu--vg-swap_1
(more examples of shrinking logical volumes at https://www.rootusers.com/lvm-resize-how-to-decrease-an-lvm-partition/)
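After shrinking, the swap area has to be re-initialised and switched back on before it can be used again. A sketch using the same LV path as above; if /etc/fstab references the swap by UUID rather than by the /dev/mapper path, also update it with the new UUID that mkswap prints:
root@ubuntu:~# mkswap /dev/mapper/ubuntu--vg-swap_1
root@ubuntu:~# swapon /dev/mapper/ubuntu--vg-swap_1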
How do I create a new logical volume when the 2 logical volumes appear to have taken up all the space (magically)?
After you do the previous step, you should have ~120G available in volume group ubuntu-vg to carve new logical volumes from.
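For example (the LV name and size below are just placeholders; adjust them to what each VM needs):
root@ubuntu:~# vgs ubuntu-vg
root@ubuntu:~# lvcreate -L 40G -n vm1-disk ubuntu-vg
The first command shows the free space now available in the volume group; the second carves a new 40G logical volume out of it.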
I want to install 4 VMs. Do I need a separate LV for each VM?
It depends on what you find more flexible:
You can create Xen domains with storage based on images, which are standard files inside the dom0 root filesystem (in this case you may want to expand the root volume),
or
You can create Xen domains with storage based on LVM volumes.
More info on this and examples in https://www.howtoforge.com/using-xen-with-lvm-based-vms-instead-of-image-based-vms-debian-etch
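For reference, the difference boils down to a single line in the domU configuration. A sketch (the image path and LV name are examples, not something taken from your system):
disk = [ 'file:/var/lib/xen/images/vm1.img,xvda,w' ]   # file-backed image inside the dom0 root filesystem
disk = [ 'phy:/dev/ubuntu-vg/vm1-disk,xvda,w' ]        # or LVM-backed, pointing at a logical volume like the one created above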
I'm mostly a database guy, but I have a Debian Wheezy server with 4 hard disks. It was set up using one disk a while back; that one was all that was needed. Now I need more space, and the thing that is throwing me off, I think, is the UUID disk stuff.
anyway:
/mnt# lsblk -io KNAME,TYPE,SIZE,MODEL
KNAME TYPE SIZE MODEL
sdb disk 232.9G Hitachi HDP72502
sdc disk 232.9G Hitachi HDP72502
sda disk 232.9G Hitachi HDP72502
sda1 part 223.4G
sda2 part 1K
sda5 part 9.5G
sdd disk 232.9G Hitachi HDP72502
sr0 rom 1024M DVD A DS8A1P
Root is mounted on sda. sdb, c and d are unused, unformatted, etc. I just need some more space, so I have created /mnt/ext_b/ and so on for b, c, d.
mount shows:
/dev/disk/by-uuid/1b1e97e4-3c04-4e50-8e06-b16752778717 on / type ext4 (rw,relatime,errors=remount-ro,user_xattr,barrier=1,data=ordered)
which is correct. I want to mount the others just for space; how do I get their UUIDs?
/mnt# blkid
/dev/sda5: UUID="f70ad0b2-a9d0-430a-829c-d2e37245fd71" TYPE="swap"
/dev/sda1: UUID="1b1e97e4-3c04-4e50-8e06-b16752778717" TYPE="ext4"
How do I get the UUIDs so I can put formatted filesystems on the disks?
/mnt# mkfs.ext4 /dev/sdb1
mke2fs 1.42.5 (29-Jul-2012)
Could not stat /dev/sdb1 --- No such file or directory
Thanks in advance.
Matt
It seems that those additional disks haven't been touched since they were connected to the server and don't even have partitions yet. In general, adding extra disk space in Linux can be done in the following steps:
Attach new disk to the server
Create partition table on it
Add one or more partitions to the disk
Format the partition with the FS of your choice
Mount this partition at the mount point of your choice
Make the mount persistent by adding an appropriate line to /etc/fstab
If you have multiple disks, you may consider creating a hardware RAID disk (if you have a RAID controller) or a software RAID using the mdadm tool. Either way you'll get a larger single disk (its size will depend on the RAID level you choose), for which you'll need to go to step 2 and onward. It's worth mentioning that there is another way to get more usable space than a single disk provides from multiple disks, called the Logical Volume Manager, or LVM. It's more sophisticated than mdadm and allows you to create FS snapshots and to add extra disk space to a volume without the need to create additional mount points.
Whatever you choose, you'll need to create a partition table on the new disk/LVM volume/mdadm disk. Here you need to make another choice: what type of partition table to use, MBR or GPT. Check the Partitioning HOWTO for more details, but in general I'd recommend GPT for large non-bootable disks.
The same HOWTO will tell you how to create partition(s) on the selected disk. At this point you'll get devices like /dev/sdb1, etc.
Then you can go to step 4, the one you already tried:
# mkfs.ext4 /dev/sdb1
That should succeed now, and you'll be able to get the UUID of the new FS with blkid. Add the obtained UUID to your /etc/fstab file and mount the newly created FS at its mount point.
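Putting the steps together for one of the empty disks from the question (sdb; repeat for sdc and sdd, and adjust the mount point to the /mnt/ext_b style directories you already created), a rough GPT/ext4 sketch:
# create a GPT partition table and one partition spanning the whole disk
parted /dev/sdb mklabel gpt
parted -a optimal /dev/sdb mkpart primary ext4 0% 100%
# format the new partition and read back its UUID
mkfs.ext4 /dev/sdb1
blkid /dev/sdb1
# add a line like the following to /etc/fstab, using the UUID printed by blkid, then mount it
# UUID=<uuid-from-blkid>  /mnt/ext_b  ext4  defaults  0  2
mount /mnt/ext_b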
It seems to me that you must create partitions on the disks first.
Think about how you want to store your data. You have identical disks: should this be a RAID array? If so, what RAID level should it be?
You can create partitions with fdisk or one of its alternatives (gparted, cfdisk, and so on).
There is a lot of information on the internet and in the manuals.
Maybe you need LVM? Some people say it may slow down your database, but it gives you the ability to take snapshots.
After creating partitions you can create a filesystem and mount it.
People usually recommend XFS or ext4 for databases.
And don't forget to set the right mount flags for your filesystem.
noatime, nodiratime and barrier=0 will improve performance, but with barrier=0 you can lose your data in some cases. In the case of ext4, also look at the data option (in your case you may be able to set data=ordered).
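As an illustration only (the UUID is a placeholder and the mount point is taken from the question), these options go in the fourth field of the /etc/fstab entry; append barrier=0 only if you accept the risk described above:
UUID=<your-fs-uuid>  /mnt/ext_b  ext4  noatime,nodiratime,data=ordered  0  2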
UPD: maybe this question should be in the Super User or Unix & Linux section?
You are supposed to create a partition with a utility like fdisk, cfdisk, gparted or partitionmanager before you can format it.
Is there a way to create a snapshot of a logical volume (lv1) that resides in volume group vgA inside a different volume group (say vgB)?
I have my root logical volume in volume group vgA on the SSD, and I want to put a snapshot of that volume on the second volume group vgB, which sits on the mechanical hard disk, so I tried to execute
lvcreate -L 10G -s -n vgB/rootSnapshot vgA/rootVolume
and some other variants, but had no luck.
The snapshot volume must reside in the same VG as lv1.
For your situation, you may want to consider creating one VG (vgA) that spans over two PVs (pv1 for SSD, and pv2 for mechanical hard disk). Then you can create lv1 on pv1 and lvsnap on pv2.
lvcreate -L 100G -n lv1 vgA /dev/pv1
lvcreate -L 10G -s -n lvsnap /dev/vgA/lv1 /dev/pv2
I only want to say that limiting the snapshot to the same volume group as its original LV really degrades the idea of a "logical" volume.
For example, I use two hard drives with a RAID card to form a RAID1 disk, manage all of its physical space with volume group VG_SYS, and create my system volume and install my OS within it. Then I use another two drives to form a RAID0 disk and build a VG_DATA volume group on it, planning to use it as storage for unimportant data and snapshots.
However, I can't create the snapshot volume in VG_DATA due to this limitation of LVM. Of course I can extend VG_SYS onto my RAID0 drive and dedicate the PVs from the RAID0 drive to my snapshot volume. But that would blur my intention of separating logical volumes into an important system volume group (redundancy guaranteed by RAID1) and an unimportant, quickly updated data volume group (RAID0 to increase I/O efficiency). Snapshots are meant to be updated and recycled very quickly, so they don't need any redundancy. If a snapshot happens to break, you just rebuild another one -- it's unlikely that both your original volume and the snapshot break at the same time.
It's not possible with LVM; specifically, lvcreate does not support it. However, it is possible if you use the device mapper directly (via dmsetup).
See here:
https://www.kernel.org/doc/html/latest/admin-guide/device-mapper/snapshot.html#how-snapshot-is-used-by-lvm2
https://www.man7.org/linux/man-pages/man8/dmsetup.8.html
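A very rough sketch of what that looks like with dmsetup, based on the snapshot and snapshot-origin targets described in the kernel document above. The paths are placeholders: /dev/vgB/cow would be an LV you create in vgB to hold the copy-on-write data, and this simple form only applies while the origin is not already in use (snapshotting a live root volume additionally requires suspending and reloading the origin's existing dm table):
# size of the origin volume, in 512-byte sectors
SECTORS=$(blockdev --getsz /dev/vgA/rootVolume)
# route writes to the origin through a snapshot-origin target
dmsetup create root-origin --table "0 $SECTORS snapshot-origin /dev/vgA/rootVolume"
# expose the snapshot, keeping its copy-on-write data on the LV from vgB (P = persistent, 8 = chunk size in sectors)
dmsetup create root-snap --table "0 $SECTORS snapshot /dev/vgA/rootVolume /dev/vgB/cow P 8"
Keep in mind that LVM knows nothing about tables created this way, so you have to manage and eventually remove them yourself with dmsetup.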
As btrfs shares data between snapshots (copy-on-write), some files in successive snapshots may be mapped to the same physical extents. When I want to delete a snapshot to free some disk space, some files on disk may not actually be freed.
So is there any method to find out how much disk space will really be freed before the snapshot is deleted?
I guess you can't. A snapshot doesn't know how much data it owns exclusively until it runs a full scan over all of its blocks/extents.
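For what it's worth, enabling btrfs quota groups makes the filesystem perform exactly that kind of accounting; after the scan, the "excl" column is roughly what deleting a snapshot would give back. A sketch (the mount point is an example, and the initial scan can take a while on a large filesystem):
btrfs quota enable /mnt/data
btrfs qgroup show /mnt/data    # the 'excl' column per subvolume/snapshot approximates the reclaimable space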