Created a snapshot.
Deleted a huge file.
The delta is still 0 in zfs list for the past three snapshots (the snapshots are not using any more space).
Shouldn't the delta, or used space, be the size of the deleted file? I know ZFS is copy-on-write, but I'm confused as to why I can't roll back the /usr/home/xxxx child dataset.
# ls /home/xxxxx/testing12345.txt
/home/xxxxx/testing12345.txt
# ls -alh /home/xxxxx/testing12345.txt
-rw-r--r-- 1 root xxxxx 254M Aug 28 00:06 /home/xxxxx/testing12345.txt
# zfs list -rt snapshot tank1/usr/home/xxxxx
NAME USED AVAIL REFER MOUNTPOINT
tank1/usr/home/xxxxx@myRecursiveSnapshot 291M - 804M -
tank1/usr/home/xxxxx@devEnv 71K - 1.39G -
tank1/usr/home/xxxxx@xfce 0 - 1.39G -
tank1/usr/home/xxxxx@testhome 0 - 1.39G -
tank1/usr/home/xxxxx@testagain 1K - 1.39G -
tank1/usr/home/xxxxx@27082015 0 - 1.39G -
tank1/usr/home/xxxxx@270820150 0 - 1.39G -
tank1/usr/home/xxxxx@2708201501 0 - 1.39G -
#
# zfs snapshot -r tank1@28082015
# zfs list -rt snapshot tank1/usr/home/xxxxx
NAME USED AVAIL REFER MOUNTPOINT
tank1/usr/home/xxxxx@myRecursiveSnapshot 291M - 804M -
tank1/usr/home/xxxxx@devEnv 71K - 1.39G -
tank1/usr/home/xxxxx@xfce 0 - 1.39G -
tank1/usr/home/xxxxx@testhome 0 - 1.39G -
tank1/usr/home/xxxxx@testagain 1K - 1.39G -
tank1/usr/home/xxxxx@27082015 0 - 1.39G -
tank1/usr/home/xxxxx@270820150 0 - 1.39G -
tank1/usr/home/xxxxx@2708201501 0 - 1.39G -
tank1/usr/home/xxxxx@28082015 0 - 1.39G -
# rm /home/xxxxx/testing12345.txt
# zfs list -rt snapshot tank1/usr/home/xxxxx
NAME USED AVAIL REFER MOUNTPOINT
tank1/usr/home/xxxxx@myRecursiveSnapshot 291M - 804M -
tank1/usr/home/xxxxx@devEnv 71K - 1.39G -
tank1/usr/home/xxxxx@xfce 0 - 1.39G -
tank1/usr/home/xxxxx@testhome 0 - 1.39G -
tank1/usr/home/xxxxx@testagain 1K - 1.39G -
tank1/usr/home/xxxxx@27082015 0 - 1.39G -
tank1/usr/home/xxxxx@270820150 0 - 1.39G -
tank1/usr/home/xxxxx@2708201501 0 - 1.39G -
tank1/usr/home/xxxxx@28082015 0 - 1.39G -
#
I've tried rolling back the /usr, /usr/home, and /usr/home/xxxx datasets using various snapshots. I've read the FreeBSD forums and the Handbook, and I've also tried rolling back just tank1@[snapshot name], all to no effect.
Something odd: when I change files in /usr/home/xxxxx, the files in the hidden .zfs/snapshot/[snapshot name]/usr/home/xxxxx directory change as well.
Use this command to see the space used by all snapshots of a dataset; the relevant property you want is usedsnap (usedbysnapshots):
zfs list -o name,used,avail,refer,creation,usedds,usedsnap,origin,compression,compressratio,refcompressratio,mounted,atime,lused
I've added a few more properties since I use compression on my ZFS pools.
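As for why each snapshot's USED column stays at 0 after the rm: a snapshot's USED only counts blocks unique to that snapshot, and blocks still referenced by several snapshots are rolled up into the dataset's usedbysnapshots instead. You can check that directly, for example (a sketch using the dataset from the question):
zfs get usedbysnapshots,usedbydataset,written tank1/usr/home/xxxxx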
ZFS snapshot directories (.zfs/snapshot) are read-only, by the way.
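If all you need is the one deleted file back, you may not need a rollback at all; you can simply copy it out of the read-only snapshot directory (a sketch, assuming tank1/usr/home/xxxxx is mounted on /usr/home/xxxxx and the file existed in the 28082015 snapshot):
cp /usr/home/xxxxx/.zfs/snapshot/28082015/testing12345.txt /home/xxxxx/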
You said you cannot roll back? If that's the case, specify -r or -R, and possibly -f if you have clones. Sample:
zfs rollback -r poolname/dataset@oldersnapshot
zfs rollback -R poolname/dataset@oldersnapshot
Read the manual before issuing zfs rollback:
-r
Destroy any snapshots and bookmarks more recent than the one specified.
-R
Recursively destroy any more recent snapshots and bookmarks, as well as any clones of those snapshots.
-f
Used with the -R option to force an unmount of any clone file systems that are to be destroyed.
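For the dataset in the question, the rollback has to target the child dataset that actually holds the file, and the snapshot is referenced with @. A sketch based on the names shown above (add -r only if snapshots newer than 28082015 exist on that dataset):
# roll the home dataset back to the snapshot taken just before the rm
zfs rollback tank1/usr/home/xxxxx@28082015
# the deleted file should be back
ls -alh /home/xxxxx/testing12345.txt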
My configuration
I'm trying to deploy a virtual machine in Azure using an Oracle Linux image (version 8.4, generation 2). We change the mount point of the Azure temporary disk (ephemeral0) to /mnt/resource. In addition, I create a swap file on the temporary disk. I'm using the following custom cloud-init script during deployment:
#cloud-config
datasource_list: [ Azure ]
mounts:
  - [ ephemeral0, /mnt/resource, auto, "defaults,nofail,x-systemd.requires=cloud-init.service" ]
mount_default_fields: [ None, None, "auto", "defaults,nofail", "0", "2" ]
swap:
  filename: /mnt/resource/swap.img
  size: "auto"           # or size in bytes
  max_size: 17179869184  # 16 GB
On the first boot, right after VM creation, everything (ephemeral0 and swap) works as expected.
Looking at /var/log/cloud-init.log, I can see the following entries:
[root@test01 ~]# cat /var/log/cloud-init.log | grep swap
2021-08-16 08:19:29,814 - cc_mounts.py[DEBUG]: Attempting to determine the real name of swap
2021-08-16 08:19:29,814 - cc_mounts.py[DEBUG]: changed default device swap => None
2021-08-16 08:19:29,814 - cc_mounts.py[DEBUG]: Ignoring nonexistent default named mount swap
2021-08-16 08:19:29,815 - cc_mounts.py[DEBUG]: suggest 4096.0 MB swap for 7672.03125 MB memory with '17018.25390625 MB' disk given max=None [max=4254.5634765625 MB]'
2021-08-16 08:19:29,815 - cc_mounts.py[DEBUG]: Creating swapfile in '/mnt/resource/swap.img' on fstype 'xfs' using 'fallocate'
2021-08-16 08:19:29,815 - subp.py[DEBUG]: Running command ['fallocate', '-l', '4096M', '/mnt/resource/swap.img'] with allowed return codes [0] (shell=False, capture=True)
2021-08-16 08:19:29,849 - subp.py[DEBUG]: Running command ['mkswap', '/mnt/resource/swap.img'] with allowed return codes [0] (shell=False, capture=True)
2021-08-16 08:19:29,887 - util.py[DEBUG]: Setting up swap file took 0.072 seconds
2021-08-16 08:19:29,893 - cc_mounts.py[DEBUG]: Changes to fstab: ['+ /dev/disk/cloud/azure_resource-part1 /mnt/resource auto defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig 0 2', '+ /mnt/resource/swap.img none swap sw,comment=cloudconfig 0 0']
2021-08-16 08:19:29,893 - subp.py[DEBUG]: Running command ['swapon', '-a'] with allowed return codes [0] (shell=False, capture=True)
2021-08-16 08:19:29,929 - cc_mounts.py[DEBUG]: Activate mounts: PASS:swapon -a
As you can see, cloud-init suggests 4096 MB as the swap size.
In /etc/fstab the following entries are added:
/dev/disk/cloud/azure_resource-part1 /mnt/resource auto defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig 0 2
/mnt/resource/swap.img none swap sw,comment=cloudconfig 0 0
Also, swapon -s shows that swap is configured correctly:
Filename Type Size Used Priority
/mnt/resource/swap.img file 4194300 0 -2
The Problem
Now, if I deallocate the virtual machine and start it again, the temporary disk is deleted and recreated as expected. It is mounted again under /mnt/resource, but the swap file is no longer created. /var/log/cloud-init.log states:
2021-08-16 08:29:33,331 - cc_mounts.py[DEBUG]: Attempting to determine the real name of swap
2021-08-16 08:29:33,331 - cc_mounts.py[DEBUG]: changed default device swap => None
2021-08-16 08:29:33,331 - cc_mounts.py[DEBUG]: Ignoring nonexistent default named mount swap
2021-08-16 08:29:33,331 - util.py[DEBUG]: Reading from /proc/swaps (quiet=False)
2021-08-16 08:29:33,331 - util.py[DEBUG]: Read 37 bytes from /proc/swaps
2021-08-16 08:29:33,331 - cc_mounts.py[DEBUG]: swap file /mnt/resource/swap.img exists, but not in /proc/swaps
2021-08-16 08:29:33,332 - cc_mounts.py[DEBUG]: suggest 0.0 MB swap for 7672.03125 MB memory with '12889.515625 MB' disk given max=None [max=3222.37890625 MB]'
2021-08-16 08:29:33,332 - cc_mounts.py[DEBUG]: Not creating swap: suggested size was 0
2021-08-16 08:29:33,337 - cc_mounts.py[DEBUG]: Changes to fstab: ['- /mnt/resource/swap.img none swap sw,comment=cloudconfig 0 0']
From my understanding, the cc_mounts module of cloud-init suggests a swap size of 0 MB because it determines that the temporary disk only has about 12 GB of space left. This seems wrong, since (a) the disk is empty after deallocation and (b) df -h shows it has about 15 GB available:
[root@test01 ~]# df -h /mnt/resource/
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 16G 45M 15G 1% /mnt/resource
Am I missing something here? Can anybody explain why cloud-init behaves like this, and how to create a swap file properly on every boot?
• Your problem is caused by a misconfiguration that makes both the Azure Linux Agent (waagent) and cloud-init try to configure the swap file. When cloud-init is responsible for provisioning, the swap file must be configured by cloud-init, and only one agent (either cloud-init or waagent) should handle provisioning. The issue can be intermittent because of the timing of when the waagent daemons start.
• You can fix this problem by disabling disk formatting and swap configuration in the waagent configuration file, /etc/waagent.conf, and ensuring that the Azure Linux Agent is not mounting the ephemeral disk, as this should be handled by cloud-init. For this purpose, set the parameters as below:
# vi /etc/waagent.conf

# Mount point for the resource disk
ResourceDisk.MountPoint=/mnt
# Create and use swapfile on resource disk
ResourceDisk.EnableSwap=n
# Size of the swapfile
ResourceDisk.SwapSizeMB=0
• Restart the Azure Linux Agent (see the command below) and ensure that the VM is configured to create the swap file through cloud-init. Then add the script shown after that to /var/lib/cloud/scripts/per-boot and make it executable with chmod +x create_swapfile.sh.
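Restarting the agent might look like this (an assumption: on RHEL-family distributions such as Oracle Linux the systemd unit is typically named waagent; on Ubuntu it is walinuxagent):
systemctl restart waagent
The per-boot script itself: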
#!/bin/sh
# Recreate and enable the swap file on the ephemeral disk if it is missing.
if [ ! -f /mnt/swapfile ]; then
    fallocate --length 2GiB /mnt/swapfile
    chmod 600 /mnt/swapfile
    mkswap /mnt/swapfile
    swapon /mnt/swapfile
    swapon -a
else
    swapon /mnt/swapfile
fi
• Once done, stop and start the VM and check that swap is enabled (see the verification example below). Also, compare the logs from /var/log/waagent.log and /var/log/cloud-init.log for the reboot timeframe. To avoid this situation completely, deploy the VM with the swap configuration in custom data during provisioning.
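A quick way to verify after the restart (these are standard util-linux/procps tools, nothing specific to this setup):
swapon --show                          # should list the swap file
free -h                                # the Swap: line should show the configured size
grep -i swap /var/log/cloud-init.log   # confirm cc_mounts created the swap file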
Please see the documentation below for more information:
https://learn.microsoft.com/en-us/troubleshoot/azure/virtual-machines/swap-file-not-recreated-linux-vm-restart
https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/update-linux-agent
Thank you,
1. At the beginning:
mount | grep home
/dev/sdb1 on /home type xfs (rw,relatime,attr2,inode64,noquota)
2. Try to modify it:
mount -o remount,rw,relatime,attr2,inode64,prjquota /dev/sdb1 /home
3. Check it again:
mount | grep home
/dev/sdb1 on /home type xfs (rw,relatime,attr2,inode64,noquota)
It doesn't work.
cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Aug 9 15:24:43 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=8f1038a3-6c31-4ce1-a9ef-3d7325e10bef / ext4 defaults 1 1
UUID=c687eab8-3ddd-4756-b91e-ad562b522f7c /boot ext4 defaults 1 2
UUID=7ae72a46-1407-49e6-8669-95bb9e592794 /home xfs rw,relatime,attr2,inode64,prjquota 0 0
UUID=3ccea12f-25d0-437b-9c4b-6ad6a9bd724c /tmp xfs defaults 0 0
UUID=b8ab4016-49bd-4f48-9620-5bda76f4d8b1 /var/log xfs defaults 0 0
UUID=8b9a7ada-3f02-4ee5-8010-ad32a5d7461e swap swap defaults 0 0
I can modify /etc/fstab and then restart the machine to make it work. But is there any way to change the quota configuration without rebooting?
Quotas
XFS quotas are not a remountable option. The -o quota option must be specified on the initial mount for quotas to be in effect.
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/ch06s09.html
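Since the quota option is only honoured on the initial mount, a plain unmount and mount of /home (rather than -o remount) should pick it up without a reboot, provided nothing is holding the filesystem open. A sketch:
umount /home
mount /home                      # re-reads /etc/fstab, now with prjquota
mount | grep home                # should show prjquota instead of noquota
xfs_quota -x -c 'state' /home    # confirm project quota accounting is on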
By the way, if you need to enable quotas on the root partition, /etc/fstab does not help; you have to tweak the kernel boot options instead.
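A minimal sketch of that on a GRUB2 system, assuming an XFS root filesystem, project quotas, and a RHEL-style GRUB layout (use uquota/gquota as appropriate):
# in /etc/default/grub, append the quota flag to the existing kernel command line
GRUB_CMDLINE_LINUX="... rootflags=pquota"
# regenerate the grub configuration and reboot
grub2-mkconfig -o /boot/grub2/grub.cfg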
lsblk provides output in this format:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
sda 8:0 0 300G 0 disk
sda1 8:1 0 500M 0 part /boot
sda2 8:2 0 299.5G 0 part
vg_data1-lv_root (dm-0) 253:0 0 50G 0 lvm /
vg_data2-lv_swap (dm-1) 253:1 0 7.7G 0 lvm [SWAP]
vg_data3-LogVol04 (dm-2) 253:2 0 46.5G 0 lvm
vg_data4-LogVol03 (dm-3) 253:3 0 97.7G 0 lvm /map1
vg_data5-LogVol02 (dm-4) 253:4 0 97.7G 0 lvm /map2
sdb 8:16 0 50G 0 disk
For a mounted volume, say /map1, how do I directly get the physical volume associated with it? Is there any direct command to fetch that information?
There is no direct command to show that information for a mount. You can run
lvdisplay -m
which will show which physical volumes are currently being used by the logical volume.
Remember, though, that there is no such thing as a direct association between a logical volume and a physical volume. Logical volumes are associated with volume groups, and volume groups have a pool of physical volumes over which they can distribute any logical volume. If you always want to know that a given LV is on a given PV, you have to restrict the VG to having only that one PV, which rather misses the point. You can use pvmove to push extents off a PV (sometimes useful for maintenance), but you can't stop new extents from being created on it when logical volumes are extended or created.
As to why there is no such potentially useful command...
LVM is not ZFS. ZFS is a complete storage and filesystem management system, managing both storage (at several levels of abstraction) and the mounting of filesystems. LVM, in contrast, is just one layer of the Linux Virtual File System. It provides a layer of abstraction on top of physical storage devices and makes no assumption about how the logical volumes are used.
Leaving the grep/awk/cut/whatever to you, this will show which PVs each LV actually uses:
lvs -o +devices
You'll get a separate line for each PV used by a given LV, so if an LV has extents on three PVs you will see three lines for that LV. The PV device node path is followed by the starting extent (I think) of the data on that PV in parentheses.
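For the mount point in the question, that could be narrowed down to a single LV with findmnt, for example (a sketch; findmnt -no SOURCE just resolves the mount point to its device node):
lv=$(findmnt -no SOURCE /map1)                  # e.g. /dev/mapper/vg_data4-LogVol03
lvs --noheadings -o lv_name,vg_name,devices "$lv"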
I need to emphasize that there is no direct relation between a mount point (logical volume) and a physical volume in LVM. This is one of its design goals.
However, you can traverse the associations between the logical volume, the volume group, and the physical volumes assigned to that group. This only tells you that the data is stored on one of those physical volumes, not where exactly.
I couldn't find a command that produces the output directly, but you can put something together using mount, lvdisplay, vgdisplay, and awk/sed:
mp=/mnt
vgdisplay -v $(lvdisplay $(mount | awk -v mp="$mp" '$3==mp{print $1}') | awk '/VG Name/{print $3}')
I'm using the shell variable mp to pass the mount point to the command; set it separately (as above) so the command substitution can see it. You need to execute the command as root or using sudo.
For my test-scenario it outputs:
...
--- Volume group ---
VG Name vg1
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 2
VG Access read/write
VG Status resizable
...
VG Size 992.00 MiB
PE Size 4.00 MiB
Total PE 248
Alloc PE / Size 125 / 500.00 MiB
Free PE / Size 123 / 492.00 MiB
VG UUID VfOdHF-UR1K-91Wk-DP4h-zl3A-4UUk-iB90N7
--- Logical volume ---
LV Path /dev/vg1/testlv
LV Name testlv
VG Name vg1
LV UUID P0rgsf-qPcw-diji-YUxx-HvZV-LOe0-Iq0TQz
...
Block device 252:0
--- Physical volumes ---
PV Name /dev/loop0
PV UUID Qwijfr-pxt3-qcQW-jl8q-Q6Uj-em1f-AVXd1L
PV Status allocatable
Total PE / Free PE 124 / 0
PV Name /dev/loop1
PV UUID sWFfXp-lpHv-eoUI-KZhj-gC06-jfwE-pe0oU2
PV Status allocatable
Total PE / Free PE 124 / 123
If you only want to display the physical volumes, you might pipe the result of the above command to sed:
above command | sed -n '/--- Physical volumes ---/,$p'
# find the device backing the mount point; if it is an LVM (device-mapper) device,
# list the physical volumes it uses, otherwise just print the device itself
dev=$(df /map1 | tail -n 1 | awk '{print $1}')
echo "$dev" | grep -q '^/dev/mapper' && lvdisplay -m "$dev" 2>/dev/null | awk '/Physical volume/{print $3}' || echo "$dev"
I have a Linux VPX on Xen which is not creating any core dump when a panic occurs.
Which part of the Linux code contains the crash dump creation program, and how can I debug this?
Please check the server's vmcore (kdump) configuration. Kindly follow the steps below.
1. /etc/kdump.conf will have the lines mentioned below:
-----------------------------snip-----------------------------
ext4 UUID=6287df75-b1d9-466b-9d1d-e05e6d044b7a
path /var/crash/vmcore
-----------------------------snip-----------------------------
2. /etc/fstab will have the UUID and filesystem data:
-----------------------------snip-----------------------------
# /etc/fstab
# Created by anaconda on Wed May 25 16:10:52 2011
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=117b7a8d-0a8b-4fc8-b82b-f3cfda2a02df / ext4 defaults 1 1
UUID=e696757d-0321-4922-8327-3937380d332a /boot ext4 defaults 1 2
UUID=6287df75-b1d9-466b-9d1d-e05e6d044b7a /data ext4 defaults 1 2
UUID=d0dc1c92-efdc-454f-a337-dd1cbe24d93d /prd ext4 defaults 1 2
UUID=c8420cde-a816-41b7-93dc-3084f3a7ce21 swap swap defaults 0 0
#/dev/dm-0 /data1 ext4 defaults 0 0
#/dev/mapper/mpathe /data1 ext4 defaults 0 0
/dev/mapper/mpathgp1 /data2 ext4 noatime,data=writeback,errors=remount-ro 0 0
LABEL=/DATA1 /data1 ext4 noatime,data=writeback,errors=remount-ro 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
-----------------------------snip-----------------------------
3. With the above configuration, the vmcore will be generated under /data/var/crash/vmcore.
Note: the vmcore can be larger than 10 GB, so configure a path with enough free space.
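To confirm that kdump is actually armed on the VM, a few generic checks like these can help (assumes a systemd-based distribution with kexec-tools installed; the service name kdump is an assumption for RHEL-like systems):
grep -o 'crashkernel=[^ ]*' /proc/cmdline   # memory reserved for the crash kernel at boot?
cat /sys/kernel/kexec_crash_loaded          # 1 means a crash kernel is loaded
systemctl status kdump                      # kdump service enabled and running?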
Regards,
Jain
I am configuring a Linux server with ACLs (Access Control Lists). It is not allowing me to perform a setfacl operation on one of the directories, /xfiles. I am able to run setfacl on other directories such as /tmp and /op/applocal/. I am getting the following error:
root@asifdl01devv # setfacl -m user:eqtrd:rw-,user:feedmgr:r--,user::---,group::r--,mask:rw-,other:--- /xfiles/change1/testfile
setfacl: /xfiles/change1/testfile: Operation not supported
I have defined my /etc/fstab as:
/dev/ROOTVG/rootlv / ext3 defaults 1 1
/dev/ROOTVG/varlv /var ext3 defaults 1 2
/dev/ROOTVG/optlv /opt ext3 defaults 1 2
/dev/ROOTVG/crashlv /var/crash ext3 defaults 1 2
/dev/ROOTVG/tmplv /tmp ext3 defaults 1 2
LABEL=/boot /boot ext3 defaults 1 2
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/dev/ROOTVG/swaplv swap swap defaults 0 0
/dev/APPVG/home /home ext3 defaults 1 2
/dev/APPVG/archives /archives ext3 defaults 1 2
/dev/APPVG/test /test ext3 defaults 1 2
/dev/APPVG/oracle /opt/oracle ext3 defaults 1 2
/dev/APPVG/ifeeds /xfiles ext3 defaults 1 2
I have a Solaris server where the vfstab is defined as:
cat vfstab
#
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/vx/dsk/bootdg/swapvol - - swap - no -
swap - /tmp tmpfs - yes size=1024m
/dev/vx/dsk/bootdg/rootvol /dev/vx/rdsk/bootdg/rootvol / ufs 1 no logging
/dev/vx/dsk/bootdg/var /dev/vx/rdsk/bootdg/var /var ufs 1 no logging
/dev/vx/dsk/bootdg/home /dev/vx/rdsk/bootdg/home /home ufs 2 yes logging
/dev/vx/dsk/APP/test /dev/vx/rdsk/APP/test /test vxfs 3 yes -
/dev/vx/dsk/APP/archives /dev/vx/rdsk/APP/archives /archives vxfs 3 yes -
/dev/vx/dsk/APP/oracle /dev/vx/rdsk/APP/oracle /opt/oracle vxfs 3 yes -
/dev/vx/dsk/APP/xfiles /dev/vx/rdsk/APP/xfiles /xfiles vxfs 3 yes -
I am not able to find out the issue. Any help would be appreciated.
Your fstab is hard to read, but you probably need to turn on the acl option for ext3 on your /xfiles (ifeeds) partition:
/dev/APPVG/ifeeds /xfiles ext3 defaults,acl 1 2
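After editing /etc/fstab you shouldn't need a reboot; ext3 accepts acl as a remount option, so something like this should work (a sketch, reusing the failing command from the question):
mount -o remount,acl /xfiles
mount | grep xfiles                                   # options should now include acl
setfacl -m user:eqtrd:rw- /xfiles/change1/testfile    # retry the failing command
getfacl /xfiles/change1/testfile                      # confirm the ACL was applied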