I had multiple datasets and deleted them all with a series of commands.
At first the "root" dataset showed about 100 GB used, then 50 GB, then 20 GB... and then it got "stuck" at 535M.
OS: FreeBSD 11.0
I tried to google it, but found nothing. There are no visible files in the mountpoint /zm. Any insights?
zfs list -t all -o space -r zm_ssd512
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
zm_ssd512 461G 535M 0 96K 0 535M
zfs get all zm_ssd512
NAME PROPERTY VALUE SOURCE
zm_ssd512 type filesystem -
zm_ssd512 creation Wed Jun 21 12:02 2017 -
zm_ssd512 used 535M -
zm_ssd512 available 461G -
zm_ssd512 referenced 96K -
zm_ssd512 compressratio 1.00x -
zm_ssd512 mounted no -
zm_ssd512 quota none default
zm_ssd512 reservation none default
zm_ssd512 recordsize 128K default
zm_ssd512 mountpoint /zm local
zm_ssd512 sharenfs off default
zm_ssd512 checksum on default
zm_ssd512 compression lz4 local
zm_ssd512 atime on default
zm_ssd512 devices on default
zm_ssd512 exec on default
zm_ssd512 setuid on default
zm_ssd512 readonly off default
zm_ssd512 jailed off default
zm_ssd512 snapdir hidden default
zm_ssd512 aclmode discard default
zm_ssd512 aclinherit restricted default
zm_ssd512 canmount on default
zm_ssd512 xattr on default
zm_ssd512 copies 1 default
zm_ssd512 version 5 -
zm_ssd512 utf8only off -
zm_ssd512 normalization none -
zm_ssd512 casesensitivity sensitive -
zm_ssd512 vscan off default
zm_ssd512 nbmand off default
zm_ssd512 sharesmb off default
zm_ssd512 refquota none default
zm_ssd512 refreservation none default
zm_ssd512 primarycache all default
zm_ssd512 secondarycache all default
zm_ssd512 usedbysnapshots 0 -
zm_ssd512 usedbydataset 96K -
zm_ssd512 usedbychildren 535M -
zm_ssd512 usedbyrefreservation 0 -
zm_ssd512 logbias latency default
zm_ssd512 dedup off default
zm_ssd512 mlslabel -
zm_ssd512 sync standard default
zm_ssd512 refcompressratio 1.00x -
zm_ssd512 written 96K -
zm_ssd512 logicalused 178M -
zm_ssd512 logicalreferenced 35K -
zm_ssd512 volmode default default
zm_ssd512 filesystem_limit none default
zm_ssd512 snapshot_limit none default
zm_ssd512 filesystem_count none default
zm_ssd512 snapshot_count none default
zm_ssd512 redundant_metadata all default
Update: zdb -bb gives this line (among others), so now I need to find out what an "SPA space map" is.
44.2K 183M 178M 535M 12.1K 1.02 99.88 SPA space map
ZFS space maps are internal data structures that describe the free and allocated space in a pool. As you delete data, the space maps should also shrink, but there are limits to how much they can shrink, because all of ZFS's metadata needs to be accounted for in the space maps as well.
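If you want to see where that space is sitting, zdb can report on the space maps directly. A minimal, read-only sketch, assuming the pool name from above (exact zdb flags can vary between releases, see zdb(8)):
# Per-metaslab report, including the space map object backing each metaslab
zdb -m zm_ssd512
# Repeat the flag for more detail about each space map
zdb -mm zm_ssd512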
I am having issues with Podman running out of space when importing. This is happening on a RHEL 8 VM that has been deployed for our group. We do have an 80 GB /docker partition available, but I am missing some Podman configuration that tells it to use /docker.
Can you help me identify what's missing?
Here is part of my /etc/containers/storage.conf:
[storage]
# Default Storage Driver, Must be set for proper operation.
driver = "overlay"
# Temporary storage location
runroot = "/docker/temp"
# Primary Read/Write location of container storage
# When changing the graphroot location on an SELINUX system, you must
# ensure the labeling matches the default locations labels with the
# following commands:
# semanage fcontext -a -e /var/lib/containers/storage /NEWSTORAGEPATH
# restorecon -R -v /NEWSTORAGEPATH
# graphroot = "/var/lib/containers/storage"
graphroot = "/docker"
We are running SELinux, so I did run these commands:
semanage fcontext -a -e /var/lib/containers/storage /docker
restorecon -R -v /docker
and restarted the podman service. However, when I run
podman import docker.tar
We receive the error:
Getting image source signatures
Copying blob 848eb673668a [=>------------------------------------] 1.8GiB / 41.3GiB
Error: writing blob: storing blob to file "/var/tmp/storage2140624383/1": write /var/tmp/storage2140624383/1: no space left on device
df -H shows:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 84K 3.9G 1% /dev/shm
tmpfs 3.9G 9.3M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/mapper/rhel_rhel86--svr-root 38G 7.2G 31G 20% /
/dev/mapper/rhel_rhel86--svr-tmp 4.7G 66M 4.6G 2% /tmp
/dev/mapper/rhel_rhel86--svr-home 43G 1.4G 42G 4% /home
/dev/sda2 495M 276M 220M 56% /boot
/dev/sdb1 79G 42G 33G 56% /docker
/dev/sda1 500M 5.9M 494M 2% /boot/efi
/dev/mapper/rhel_rhel86--svr-var 33G 1.6G 32G 5% /var
/dev/mapper/rhel_rhel86--svr-var_log 4.7G 109M 4.6G 3% /var/log
/dev/mapper/rhel_rhel86--svr-var_tmp 1.9G 47M 1.9G 3% /var/tmp
/dev/mapper/rhel_rhel86--svr-var_log_audit 9.4G 132M 9.2G 2% /var/log/audit
tmpfs 785M 8.0K 785M 1% /run/user/42
tmpfs 785M 0 785M 0% /run/user/1000
Do you guys know what I'm missing to tell Podman to use /docker instead of /var/tmp/storage2140624383 ?
################################################
Edited December 29:
I was able to change the tmpdir to /docker. However, upon import of this 54GB docker.tar file, it is still telling me I am running out of space. We were able to import a small .tar (around 800MB) successfully, so we know podman is working.
$ podman import docker.tar
Getting image source signatures
Copying blob b45265b317a7 done
Error: writing blob: adding layer with blob "sha256:b45265b317a7897670ff015b177bac7b9d5037b3cfb490d3567da959c7e2cf70": Error processing tar file(exit status 1): write /a65be6ac39ddadfec332b73d772c49d5f1b4fffbe7a3a419d00fd58fcb4bb752/layer.tar: no space left on device
This might be a pretty easy one:
Copying blob 848eb673668a [=>------------------------------------] 1.8GiB / 41.3GiB
vs
/dev/mapper/rhel_rhel86--svr-var_tmp 1.9G 47M 1.9G 3% /var/tmp
As you can see, the image will not fit into the temporary storage directory.
This is somewhat explained in the docs, which state that you can adjust this by changing the TMPDIR environment variable.
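A minimal sketch of that workaround, assuming /docker is writable and using /docker/tmp purely as an example scratch location:
# Create a scratch directory on the large partition
mkdir -p /docker/tmp
# Point Podman's temporary blob storage there for this one command
TMPDIR=/docker/tmp podman import docker.tar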
I'm having an issue executing the following bash commands with Paramiko:
def format_disk(self, device, size, dformat, mount, name):
    # Create the PV, VG and LV, format the LV, create the mountpoint,
    # and add an fstab entry, all as a single remote command string.
    stdin_, stdout_, stderr_ = self.client.exec_command(
        f"pvcreate {device};"
        f"vgcreate {name}-vg {device};"
        f"lvcreate -L {size} --name {name}-lv {name}-vg;"
        f"mkfs.{dformat} /dev/{name}-vg/{name}-lv;"
        f"mkdir {mount};"
        f"echo '/dev/{name}-vg/{name}-lv {mount} {dformat} defaults 0 0' >> /etc/fstab")
    print(f"mkfs.{dformat} /dev/{name}-vg/{name}-lv;")
The print statement outputs: mkfs.ext4 /dev/first_try-vg/first_try-lv; If I copy and paste this exact command on the server, there are no errors and it formats the disk as expected.
Troubleshooting steps
Server before running python script:
ls: cannot access /first_try: No such file or directory
[root@localhost ~]# vgs
[root@localhost ~]# lvs
[root@localhost ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Thu Feb 25 07:32:51 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=38b7e96a-71e5-4089-a348-bd23828f9dc8 / xfs defaults 0 0
UUID=72fd2a6a-85db-4596-9fc2-6604d0d865a3 /boot xfs defaults 0 0
Server after running python script:
[root@localhost ~]# ls /first_try/
[root@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
first_try-vg 1 1 0 wz--n- <20.00g <15.00g
[root@localhost ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
first_try-lv first_try-vg -wi-a----- 5.00g
[root@localhost ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Thu Feb 25 07:32:51 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=38b7e96a-71e5-4089-a348-bd23828f9dc8 / xfs defaults 0 0
UUID=72fd2a6a-85db-4596-9fc2-6604d0d865a3 /boot xfs defaults 0 0
/dev/first_try-vg/first_try-lv /first_try ext4 defaults 0 0
[root@localhost ~]# mount -a
mount: wrong fs type, bad option, bad superblock on /dev/mapper/first_try--vg-first_try--lv,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
The error from mount -a indicates that the disk is not formatted.
If I format the disk manually and run mount -a it works.
Example:
[root@localhost ~]# mkfs.ext4 /dev/first_try-vg/first_try-lv
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
327680 inodes, 1310720 blocks
65536 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1342177280
40 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
[root@localhost ~]# mount -a
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 18G 4.7G 14G 27% /
devtmpfs 471M 0 471M 0% /dev
tmpfs 487M 0 487M 0% /dev/shm
tmpfs 487M 8.4M 478M 2% /run
tmpfs 487M 0 487M 0% /sys/fs/cgroup
/dev/sda1 297M 147M 151M 50% /boot
tmpfs 98M 12K 98M 1% /run/user/42
tmpfs 98M 0 98M 0% /run/user/0
/dev/mapper/first_try--vg-first_try--lv 4.8G 20M 4.6G 1% /first_try
Paramiko could not handle the output from mkfs. I changed the command to use the -q (quiet) flag and was able to get the script to run successfully.
New command: mkfs -q -t {dformat} /dev/{name}-vg/{name}-lv
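For illustration only, here is roughly the command string the method now sends over SSH, with the placeholders filled in from the troubleshooting session above (the /dev/sdb device name is an assumed example):
# Assumed values: device=/dev/sdb, size=5G, dformat=ext4, mount=/first_try, name=first_try
pvcreate /dev/sdb; vgcreate first_try-vg /dev/sdb; \
lvcreate -L 5G --name first_try-lv first_try-vg; \
mkfs -q -t ext4 /dev/first_try-vg/first_try-lv; \
mkdir /first_try; \
echo '/dev/first_try-vg/first_try-lv /first_try ext4 defaults 0 0' >> /etc/fstab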
Here's the output of my df -kh
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 50G 34G 17G 67% /
devtmpfs 2.5G 0 2.5G 0% /dev
tmpfs 2.5G 140K 2.5G 1% /dev/shm
tmpfs 2.5G 8.9M 2.5G 1% /run
tmpfs 2.5G 0 2.5G 0% /sys/fs/cgroup
/dev/sda1 497M 133M 365M 27% /boot
I'd like to increase the size of / by 194 GB.
I ran the command lvextend -L +194G /dev/mapper/centos-root
I got the message that the filesystem has been resized. I rebooted the system. I'm expecting to see / at 244 GB; however, it isn't.
VOLUME GROUP DETAILS
[root@localhost mapper]# vgdisplay
--- Volume group ---
VG Name centos
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 6
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 249.51 GiB
PE Size 4.00 MiB
Total PE 63874
Alloc PE / Size 63648 / 248.62 GiB
Free PE / Size 226 / 904.00 MiB
VG UUID icZPDf-z0cO-5qMl-Gbtr-XisU-6ptl-cpG3dz
LOGICAL VOLUME DETAILS
[root@localhost mapper]# lvdisplay
--- Logical volume ---
LV Path /dev/centos/swap
LV Name swap
VG Name centos
LV UUID MzofJC-7I6W-XcM9-xwrT-Ns86-LdYt-OYltON
LV Write Access read/write
LV Creation host, time localhost, 2015-07-02 09:04:52 +1200
LV Status available
# open 2
LV Size 4.62 GiB
Current LE 1184
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:1
--- Logical volume ---
LV Path /dev/centos/root
LV Name root
VG Name centos
LV UUID 1J50kj-hcBC-T5rY-y6LV-0xEI-ZVId-qBfgQl
LV Write Access read/write
LV Creation host, time localhost, 2015-07-02 09:04:53 +1200
LV Status available
# open 1
LV Size 244.00 GiB
Current LE 62464
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:0
/etc/fstab OUTPUT
#
# /etc/fstab
# Created by anaconda on Thu Jul 2 09:04:53 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=c4f605aa-56b0-4bde-ae0b-ddf6f0e4a983 /boot xfs defaults 0 0
#/dev/mapper/centos-home /home xfs defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
How do I extend /dev/mapper/centos-root? Any help?
Instead of "lvextend -L +194G /dev/mapper/centos-root", use "lvextend -L +194G -r /dev/mapper/centos-root".
The -r option changes the logical volume and the associated file system sizes in only one step. In addition, this works for ext4 and xfs.
Also, don't use "df", use "lsblk", the display is much clearer.
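A minimal sketch of that one-step resize for this case (run as root; with -r, lvextend grows the filesystem for you via fsadm):
# Extend the LV by 194G and grow the filesystem (ext4 or XFS) in the same step
lvextend -L +194G -r /dev/mapper/centos-root
# Verify the result
lsblk
df -h /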
Ah, I hadn't come across this link - https://themacwrangler.wordpress.com/2015/01/16/re-sizing-partitions-in-centos7/. Very helpful.
I had to grow the filesystem too, which I hadn't done. Good learning.
I'm copying the contents of the page at the above link here in case the content becomes unavailable in the future.
Re-sizing partitions in Centos7
By default, a CentOS 7 install creates a couple of partitions, one for root and one for home, usually something like this:
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 50G 2.7G 48G 6% /
devtmpfs 239M 0 239M 0% /dev
tmpfs 246M 0 246M 0% /dev/shm
tmpfs 246M 29M 217M 12% /run
tmpfs 246M 0 246M 0% /sys/fs/cgroup
/dev/mapper/centos-home 29G 33M 29G 1% /home
/dev/sda1 497M 96M 401M 20% /boot
Note that the root partition is 50 GB and the home partition is 29 GB.
Often I find myself wanting or needing to remove the centos-home partition and expand the centos-root partition.
It is a pretty straightforward exercise, but one whose steps I often forget.
So here they are:
• First backup any data that might exist in /home so you can restore it later.
• Unmount the centos-home partition.
# umount /home/
• Next show the logical volumes.
# lvdisplay
--- Logical volume ---
LV Path /dev/centos/swap
LV Name swap
VG Name centos
LV UUID azXfXa-YPiG-Bx9x-NfIO-eswN-iHVw-YsXpYe
LV Write Access read/write
LV Creation host, time localhost, 2014-12-01 13:02:00 +1100
LV Status available
# open 2
LV Size 1.03 GiB
Current LE 264
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
--- Logical volume ---
LV Path /dev/centos/home
LV Name home
VG Name centos
LV UUID LtYf7h-h1kx-p7OR-ZdN8-2Xo8-KXYT-uC2Roa
LV Write Access read/write
LV Creation host, time localhost, 2014-12-01 13:02:01 +1100
LV Status available
# open 0
LV Size 28.48 GiB
Current LE 7290
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
--- Logical volume ---
LV Path /dev/centos/root
LV Name root
VG Name centos
LV UUID DjnSO6-gsbN-g83Q-VhfC-u3Ft-8DqY-sPMx35
LV Write Access read/write
LV Creation host, time localhost, 2014-12-01 13:02:03 +1100
LV Status available
# open 1
LV Size 50.00 GiB
Current LE 12800
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
• Now remove the logical volume for centos-home.
# lvremove /dev/centos/home
Do you really want to remove active logical volume home? [y/n]: y
Logical volume "home" successfully removed
• You should now have the free space available in VFree when you have a look using vgs.
# vgs
VG #PV #LV #SN Attr VSize VFree
centos 1 2 0 wz--n- 79.51g 28.48g
• Now resize the centos-root partition.
# lvresize -L +28.47GB /dev/mapper/centos-root
Rounding size to boundary between physical extents: 28.47 GiB
Extending logical volume root to 78.47 GiB
Logical volume root successfully resized
Note that I expanded the partition by slightly less than the available space, 28.47 GB instead of 28.48 GB; this is just to make sure you avoid hitting an insufficient free space error. (An extent-based alternative that avoids this arithmetic is shown in the aside at the end of these steps.)
• Grow the filesystem to use all the free space (XFS here; the ext4 equivalent is noted in the aside at the end of these steps).
# xfs_growfs /dev/mapper/centos-root
• Confirm your new partition size.
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 79G 2.7G 76G 4% /
devtmpfs 239M 0 239M 0% /dev
tmpfs 246M 0 246M 0% /dev/shm
tmpfs 246M 29M 217M 12% /run
tmpfs 246M 0 246M 0% /sys/fs/cgroup
/dev/sda1 497M 96M 401M 20% /boot
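Two asides on the steps above (my additions, not part of the original blog post), sketched with the same volume names:
# Extent-based alternative to the explicit "-L +28.47GB": use every remaining free extent in the VG
lvresize -l +100%FREE /dev/mapper/centos-root
# The grow step above uses xfs_growfs because the filesystem is XFS; for ext4 the equivalent would be:
# resize2fs /dev/mapper/centos-root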
After some problems with Docker on my dedicated Debian server (the provider supplies an OS image missing some features Docker needs, so yesterday I recompiled the Linux kernel and enabled the required features, following instructions from a blog), I was happy to finally have Docker working.
Then I tried to create an image... and I got an error.
$ docker run -d -t -i phusion/baseimage /sbin/my_init -- bash -l
Unable to find image 'phusion/baseimage:latest' locally
Pulling repository phusion/baseimage
5a14c1498ff4: Download complete
511136ea3c5a: Download complete
53f858aaaf03: Download complete
837339b91538: Download complete
615c102e2290: Download complete
b39b81afc8ca: Download complete
8254ff58b098: Download complete
ec5f59360a64: Download complete
2ce4ac388730: Download complete
2eccda511755: Download complete
Status: Downloaded newer image for phusion/baseimage:latest
0bd93f0053140645a930a3411972d8ea9a35385ac9fafd94012c9841562beea8
FATA[0039] Error response from daemon: Cannot start container 0bd93f0053140645a930a3411972d8ea9a35385ac9fafd94012c9841562beea8: [8] System error: write /sys/fs/cgroup/docker/0bd93f0053140645a930a3411972d8ea9a35385ac9fafd94012c9841562beea8/cgroup.procs: no space left on device
More information:
$ docker info
Containers: 3
Images: 12
Storage Driver: devicemapper
Pool Name: docker-8:1-275423-pool
Pool Blocksize: 65.54 kB
Backing Filesystem: extfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 814.4 MB
Data Space Total: 107.4 GB
Data Space Available: 12.22 GB
Metadata Space Used: 1.413 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.146 GB
Udev Sync Supported: false
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.82-git (2013-10-04)
Execution Driver: native-0.2
Kernel Version: 3.19.0-xxxx-std-ipv6-64
Operating System: Debian GNU/Linux 8 (jessie)
CPUs: 4
Total Memory: 7.691 GiB
Name: ns3289160.ip-5-135-180.eu
ID: JK54:ZD2Q:F75Q:MBD6:7MPA:NGL6:75EP:MLAN:UYVU:QIPI:BTDP:YA2Z
System:
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 10M 0 10M 0% /dev
tmpfs 788M 456K 788M 1% /run
/dev/sda1 20G 7.8G 11G 43% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1.7G 4.0K 1.7G 1% /dev/shm
/dev/sda2 898G 11G 842G 2% /home
Edit: command du -sk /var
# du -sk /var
3927624 /var
Edit: command fdisk -l
# fdisk -l
Disk /dev/loop0: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop1: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x00060a5c
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 4096 40962047 40957952 19.5G 83 Linux
/dev/sda2 40962048 1952471039 1911508992 911.5G 83 Linux
/dev/sda3 1952471040 1953517567 1046528 511M 82 Linux swap / Solaris
Disk /dev/mapper/docker-8:1-275423-pool: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
You should not remove cgroup support in Docker; otherwise you may get a warning like WARNING: Your kernel does not support memory swappiness capabilities, memory swappiness discarded. when you run a container.
A simple command should do the trick (using tee so the write itself runs with elevated privileges; a plain "sudo echo 1 > file" would perform the redirection as the unprivileged user):
echo 1 | sudo tee /sys/fs/cgroup/docker/cgroup.clone_children
If it still does not work, run the commands below and restart the Docker service:
echo 0 | sudo tee /sys/fs/cgroup/docker/cpuset.mems
echo 0 | sudo tee /sys/fs/cgroup/docker/cpuset.cpus
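As a general sanity check (not from the original answer), you can confirm which cgroup controllers the kernel provides and what is currently mounted before writing to those files:
# Controllers compiled into the running kernel
cat /proc/cgroups
# Currently mounted cgroup hierarchies
mount | grep cgroup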
I had installed Docker via docker-lxc from the Debian repos, following a tutorial. I tried another solution (with success): I updated my /etc/apt/sources.list from jessie to sid, removed docker-lxc with a purge, and installed docker.io.
The error changed; it became something like mkdir -p /sys/... can't create dir : access denied
So I found a comment on a blog and tried its solution, which was to comment out this line previously added by the tutorial:
## file /etc/fstab
# cgroup /sys/fs/cgroup cgroup defaults 0 0
and reboot the server.
# Install the libcgroup userspace tools
yum install -y libcgroup libcgroup-devel libcgroup-tools
# Clear and rebuild the cgroup configuration
cgclear
service cgconfig restart
# Mount a cgroup hierarchy
mount -t cgroup none /cgroup
# Add the following line to /etc/fstab to make it persistent
vi /etc/fstab
cgroup /sys/fs/cgroup cgroup defaults 0 0
I have a Linux VPX on Xen which is not creating any core dump when a panic occurs.
Which part of the Linux code contains the crash dump creation program, and how can I debug this?
Please check the server's vmcore (kdump) configuration. Kindly follow the steps below.
1. /etc/kdump.conf will have the lines shown below:
-----------------------------snip-----------------------------
ext4 UUID=6287df75-b1d9-466b-9d1d-e05e6d044b7a
path /var/crash/vmcore
-----------------------------snip-----------------------------
2. /etc/fstab will have the corresponding UUID and filesystem data:
-----------------------------snip-----------------------------
# /etc/fstab
# Created by anaconda on Wed May 25 16:10:52 2011
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=117b7a8d-0a8b-4fc8-b82b-f3cfda2a02df / ext4 defaults 1 1
UUID=e696757d-0321-4922-8327-3937380d332a /boot ext4 defaults 1 2
UUID=6287df75-b1d9-466b-9d1d-e05e6d044b7a /data ext4 defaults 1 2
UUID=d0dc1c92-efdc-454f-a337-dd1cbe24d93d /prd ext4 defaults 1 2
UUID=c8420cde-a816-41b7-93dc-3084f3a7ce21 swap swap defaults 0 0
#/dev/dm-0 /data1 ext4 defaults 00
#/dev/mapper/mpathe /data1 ext4 defaults 00
/dev/mapper/mpathgp1 /data2 ext4 noatime,data=writeback,errors=remount-ro 0 0
LABEL=/DATA1 /data1 ext4 noatime,data=writeback,errors=remount-ro 00
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
-----------------------------snip-----------------------------
3. With the above configuration, the vmcore will be generated under the /data/var/crash/vmcore path.
Note: the vmcore can be more than 10 GB, so configure a path that has enough free space.
Regards,
Jain
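As a final aside (not part of the steps above), a common way to verify the whole kdump path end to end, assuming a systemd-based distribution and accepting that the test deliberately crashes the machine:
# Make sure the crash kernel is loaded and the service is active
systemctl enable --now kdump
systemctl status kdump
# Destructive test: force a kernel panic and check for a new vmcore afterwards
echo 1 > /proc/sys/kernel/sysrq
echo c > /proc/sysrq-trigger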