Command to find out what is mounted in a folder - Linux

I have a folder on my server that mounts volumes from a FreeNAS box via the iSCSI protocol. I need to mount these same volumes on another server, but I can't figure out how they were mounted because the names in FreeNAS and the folder names are different.
 
Are there any commands I can use to see how they were mounted? The df command gives the following output:
/dev/sde  1008G  605G  352G   64%  /mnt/folder1
/dev/sda  1008G  150G  808G   16%  /mnt/folder2
/dev/sdf  4.0T   4.0T     0  100%  /mnt/folder3
But this is not useful, since I can't tell which volumes these mounts reference.
I'm using Debian GNU/Linux 8.9 (jessie) and FreeNAS 9.10.2.

As we discussed in the comments to the original question, the /dev/sdX entries are block devices presented over the iSCSI protocol. To manage those you would normally use the iscsiadm command.
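To see which iSCSI target backs each /dev/sdX, two quick checks (a sketch; the target IQN in the sample comment is hypothetical):

```shell
# Print all iSCSI sessions, including which /dev/sdX each LUN is attached as
iscsiadm -m session -P 3 | grep -E 'Target:|Attached scsi disk'

# The by-path symlinks also encode portal, target IQN and LUN for each disk
ls -l /dev/disk/by-path/ | grep iscsi
# e.g. ip-10.0.0.5:3260-iscsi-iqn.2005-10.org.freenas.ctl:target1-lun-0 -> ../../sde
```

Once you know which target corresponds to each /mnt/folderN, you can log in to the same targets from the new server with iscsiadm -m node --login.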

Related

Unable to access files created in Linux from windows share

We have mounted an NFS share that lives on a Windows server from a Linux environment, something like:
10.0.0.10:/shared_1 nfs4 50G 19G 32G 38% /mnt
If I create a file from the Linux box and try to access it through the Windows share (\\<windows ip>\shared_1), I get the error attached.
Please help me figure out whether I need to change anything on the Windows or Linux side.
I am not really an OS guy. Please help.

How to fix problem with zfs mount after upgrade to 12.0-RELEASE?

So I had to upgrade my system from 11.1 to 12.0, and now the system does not boot. It stops on the error: Trying mount root zfs - Error 2 unknown filesystem.
And I no longer have the old kernel that worked well.
So how do I fix the mount problem?
I had tried to boot with the old kernel, but after one of the freebsd-update upgrade attempts only the new kernel was left.
Expected: no problems after the upgrade.
Actual: cannot boot the system - Error 2, unknown filesystem.
P.S.
I found that the /boot/kernel folder does not contain the opensolaris.ko module.
How do I copy this module to the /boot partition of the system from a LiveCD (the file exists on the LiveCD)?
Assuming you have a FreeBSD USB stick ready, you can import the pool in the live environment and then mount individual datasets manually.
Assuming "zroot" is your pool name:
# mount -urw /
# zpool import -fR /mnt zroot
# zfs mount zroot/ROOT/default
# zfs mount -a        (in case you want all datasets mounted)
# cd /mnt
Now do whatever you want.
You can also roll back to the last working snapshot (if there is one).
If your system is encrypted, you need to decrypt it first.
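With the pool imported as above, the missing opensolaris.ko from the P.S. can be copied across before rebooting (a sketch; paths assume the default layout and the "zroot" pool name):

```shell
# From the live environment: the live media's module tree is at /boot/kernel,
# the imported system's tree is under /mnt/boot/kernel
cp /boot/kernel/opensolaris.ko /mnt/boot/kernel/

# Detach the pool cleanly, then reboot into the installed system
zpool export zroot
reboot
```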

Solutions to resize root partition on live mounted system

I'm writing a Chef recipe to automate setting up software RAID 1 on an existing system. The basic procedure is:
1. Clear the partition table on the new disk (/dev/sdb)
2. Add new partitions and set them to RAID using parted (sdb1 for /boot and sdb2 with LVM for /)
3. Create a degraded RAID with /dev/sdb using mdadm --create ... missing
4. pvcreate /dev/md1 && vgextend VolGroup /dev/md1
5. pvmove /dev/sda2 /dev/md1
6. vgreduce VolGroup /dev/sda2 && pvremove /dev/sda2
...
...
I'm stuck on no. 5. With 2 disks of the same size I always get an error:
Insufficient free space: 10114 extents needed, but only 10106 available
Unable to allocate mirror extents for pvmove0.
Failed to convert pvmove LV to mirrored
I think it's because mdadm --create adds metadata to the disk, so it ends up with slightly fewer physical extents.
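One way to confirm that suspicion (a sketch, using the device names from the question) is to compare the extent counts of the old PV and the md-backed one:

```shell
# Compare total and free physical extents on both physical volumes;
# the md device will show slightly fewer PEs because of the mdadm metadata
pvdisplay /dev/sda2 /dev/md1 | grep -E 'PV Name|Total PE|Free PE'
```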
To remedy the issue, one would normally reboot the system off a live distro and:
e2fsck -f /dev/VolGroup/lv_root
lvreduce -L -0.5G --resizefs ...
pvresize --setphysicalvolumesize ...G /dev/sda2
etc etc
reboot
and continue with step no. 5 above.
I can't do that with Chef, as it can't handle rebooting into a live distro and continuing where it left off. I understand this obviously wouldn't be idempotent.
So my requirements are to be able to lvreduce (somehow) on the live system without using a live distro cd.
Anyone out there have any ideas on how this can be accomplished?
Maybe?:
Mount a remote filesystem as root and remount current root elsewhere
Remount the root filesystem as read-only (but I don't know how that's possible, as you can't unmount the live system in the first place).
Or another solution to somehow reboot into a live distro, script the resize, reboot back, and continue the Chef run (not sure if this is even possible).
Ideas?
I'm quite unsure Chef is the right tool here.
Not a definitive solution, but what I would do in this case:
Create a live system with Chef and the cookbook on it
Boot into it
Run Chef as chef-solo with the recipe doing the work (which should work, as the physical disks are unmounted at first)
The best approach would be to write cookbooks that can rebuild the target boxes from scratch. Once that is done, you can reinstall the target entirely, with the correct partitioning done at system install time, and let Chef rebuild your application stack afterwards.
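The chef-solo step above might look like this (the solo.rb path and the recipe name are hypothetical):

```shell
# On the booted live system, with the cookbook shipped inside the live image
chef-solo -c /etc/chef/solo.rb -o 'recipe[raid_migration]'
```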

What is the /dev/mapper/vg_root-lv_root directory? Why does my Docker rely on it?

df lists:
Filesystem                    1K-blocks       Used  Available Use% Mounted on
/dev/mapper/vg_root-lv_root    38929872   36450548     503672  99% /
tmpfs                           4025936          0    4025936   0% /dev/shm
/dev/sda1                         53871      39618     201146  17% /boot
xxx.in:/vol/software/arch*    653053184  408978688  244074496  63% /usr/software
xxxx:/users003/gopir         3435973888 1024638080 2411335808  30% /u/gopir
I am not able to format it. In this case my /dev/mapper/vg* volume is full, but I have space on other filesystems. How can I make use of that space? Why does Docker depend on this volume rather than the others? Because of this I get write errors. I referred to this but it doesn't help me. And what is this directory?
With the command below I managed to find which folder this device maps to:
tree -L 1 /dev/mapper/vg_os-lv_root
After that I was able to move the folder to a different drive with more space; in my case it was /var/lib/docker.
You might have to install the tree package (yum install tree) if it is not available by default.
Out of the box, Docker has a design weakness: it defaults to the aufs storage driver. The better alternative is the devicemapper driver configured on top of LVM.
https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver/#write-examples
In your case it seems you already have device mapper in place, so I suggest you create an extra loop-lvm configuration and add it to your existing volume group based on the documentation above.
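On recent Docker versions, pointing the daemon at devicemapper is done in /etc/docker/daemon.json (a sketch; the thin-pool name docker-thinpool is an example and must match an LVM thin pool you have created per the linked guide):

```shell
# Write the storage-driver configuration, then restart the daemon
cat > /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "devicemapper",
  "storage-opts": ["dm.thinpooldev=/dev/mapper/docker-thinpool"]
}
EOF
service docker restart
```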

Mount Netapp NFS share permanently on RHEL 6.4

I am trying to mount a volume on a RHEL 6.4 virtual machine permanently.
My fstab entry is:
172.17.4.228:/bp_nfs_test1 /mnt1 nfs rsize=8192,wsize=8192,intr
And I mounted the volume as:
mount 172.17.4.228:/bp_nfs_test1 /mnt1
When I run df -h I can see the volume and able to access it properly.
But when I reboot the VM, the mount is gone and I am not able to access it anymore, even though the entry in /etc/fstab is present.
I have to manually mount the volume again (mount -a); only then am I able to see the volume in df -h and access it.
Any help is appreciated
Mount processing on boot happens very early, so your network won't be online yet, which prevents the NFS share from being mounted. You'll need to enable netfs, which manages network file shares and runs after the network is up. Your desired process is:
Standard mounts processed.
NFS share is skipped during initial mounts (by adding _netdev to options).
After network is online, netfs will process network file systems like nfs and bring them online.
To prevent the mounter from attempting to mount your NFS share before network services are available, add _netdev to your options:
172.17.4.228:/bp_nfs_test1 /mnt1 nfs rsize=8192,wsize=8192,intr,_netdev
Enable netfs:
chkconfig netfs on
Alternatively, you could configure the share through /etc/auto.master and have it mounted when it is accessed.
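A minimal autofs sketch of that alternative (the map file name and timeout are examples; with this layout the share appears at /mnt/mnt1 on first access):

```shell
# /etc/auto.master -- mounts under /mnt are managed by the map file below
/mnt  /etc/auto.nfs  --timeout=600

# /etc/auto.nfs -- key "mnt1" mounts the share at /mnt/mnt1 on access
mnt1  -rsize=8192,wsize=8192,intr  172.17.4.228:/bp_nfs_test1
```

Enable it on RHEL 6 with chkconfig autofs on && service autofs start.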
