How to mount using ceph-fuse and specify an IP in /etc/fstab - Linux

My server configuration is as follows:
A ceph cluster server(10.1.1.138)
B ceph cluster server(10.1.1.54)
C ceph client (10.1.1.238)
I can mount successfully using the following ceph-fuse command:
sudo ceph-fuse -k /etc/ceph/ceph.client.admin.keyring -m 10.1.1.138:6789 /mnt/mycephfs/
But I don't know how to mount with /etc/fstab
The following setting fails:
sudo vim /etc/fstab
10.1.1.138:/ /mnt/mycephfs fuse.ceph name=admin,secretfile=/home/ec2-user/admin.secret,noatime 0 2
sudo mount -a
-> A syntax error occurred.
Mounting with the kernel driver instead of ceph-fuse works:
sudo vim /etc/fstab
10.1.1.138:/ /mnt/mycephfs ceph name=admin,secretfile=/home/ec2-user/admin.secret,noatime 0 2
sudo mount -a
-> success
I can't find how to specify the IP even in the official tutorial:
http://docs.ceph.com/docs/kraken/cephfs/fstab/
I don't know why there is no way to specify the IP of each cluster server in the official tutorial.
If it can be mounted without specifying an IP, I would like to know how that works.
Am I misunderstanding something?
Please let me know if there is anything that might be a hint.
Thank you for reading my question.
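As far as I can tell, the reason the tutorial never shows an IP for the fuse mount is that ceph-fuse reads the monitor addresses from the mon_host line in /etc/ceph/ceph.conf on the client, so the IP lives in ceph.conf rather than in /etc/fstab. The kernel-client options name= and secretfile= are not understood by the fuse helper, which may be why mount -a reports a syntax error. A fuse.ceph entry in the style of the linked (kraken-era) docs would look something like the line below; treat it as a sketch, assuming /etc/ceph/ceph.conf and the admin keyring are present on the client:
id=admin,conf=/etc/ceph/ceph.conf  /mnt/mycephfs  fuse.ceph  defaults,_netdev  0 0
With that in place, sudo mount -a invokes ceph-fuse, which connects to whichever monitors mon_host lists (e.g. 10.1.1.138 and 10.1.1.54).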

Related

glusterfs error when delete directory - Transport endpoint is not connected

I have mounted glusterfs on my CentOS 8 server, but the strange thing is that I can create a directory, yet when I try to delete it I get the error Transport endpoint is not connected.
Here is my mount point:
$ mount | grep gluster
10.10.217.21:gluster_vol2/voyager on /mnt/glusterfs type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
I have created a dir:
$ mkdir foo
$ rmdir foo
rmdir: failed to remove 'foo': Transport endpoint is not connected
But the strange thing is that I can create a file and successfully delete it. I have verified basic things like the firewall etc. and all looks good. (I have no control over the Gluster storage, so what else can I do from the client side to debug?)
Looks like the problem is in the network.
The way to solve this is to remount the mountpoint:
sudo umount /mnt/glusterfs
sudo mount /mnt/glusterfs
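If the plain umount itself fails with the same Transport endpoint is not connected error (a stale FUSE mount), a lazy unmount usually clears it first; a sketch, assuming the same mount point:
sudo umount -l /mnt/glusterfs
sudo mount /mnt/glusterfs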

Azure File Share - Mount

I created an Azure File Share on my Storage Account v2. Under the Connect label I copied the command lines to mount the File Share with SMB 3.0.
I didn't achieve my goal. Error received: Mount error(115): Operation now in progress
The Azure link was no help: https://learn.microsoft.com/en-us/azure/storage/files/storage-troubleshoot-linux-file-connection-problems#mount-error115-operation-now-in-progress-when-you-mount-azure-files-by-using-smb-30
I have a Debian 10, freshly updated (yesterday). I also tried with a docker image ubuntu:18.04, but the result didn't change, so I guess there is more to it than my own errors or mistakes.
The error is returned by the last instruction:
$> mount -t cifs //MY_ACCOUNT.file.core.windows.net/MY_FILE_SHARE /mnt/customfolder -o vers=3.0,credentials=/etc/smbcredentials/MY_CREDENTIALS,dir_mode=0777,file_mode=0777,serverino
My attempts:
I tried to change the SMB version from 3.0 to 3.11 ---> NOTHING
I tried to use username and password instead of credentials ---> NOTHING
Using smbclient -I IP -p 445 -e -m SMB3 -U MY_USERNAME \\\\MY_ACCOUNT.file.core.windows.net\\MY_FILE_SHARE ----> NOTHING
Thanks for help.
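For what it's worth, mount error(115) with Azure Files usually means the TCP connection to port 445 never completes (outbound 445 blocked by an ISP, firewall, or NSG), which is what the linked troubleshooting page describes. A quick client-side check, assuming nc (netcat) is installed and substituting your real storage account name:
nc -zvw3 MY_ACCOUNT.file.core.windows.net 445
If that does not report the port as open, the mount will keep failing with error 115 no matter which SMB version is used.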

NFS mount using CHEF on LINUX | permissions of directory not getting changed

I am trying to do an NFS mount using Chef, and I have mounted it successfully. Please find the code below.
# Execute mount
node['chef_book']['mount_path'].each do |path_name|
  mount "/#{path_name['local']}" do
    device "10.34.56.1:/data"
    fstype 'nfs'
    options 'rw'
    retries 3
    retry_delay 30
    action %i[mount enable]
  end
end
I am able to mount successfully and make an entry in the fstab file. But after mounting, the user:group for the mount changes to root:root, which I was not expecting.
I want to use myuser:mygroup as owner:group. I tried changing it with the chown command but I get a permission denied error.
I would appreciate some guidance.
As mentioned in the comment, this is not something Chef controls per se. After the mount, the folder will be owned by whatever the NFS server says. You can try to chmod the folder after mounting, but whether that is allowed depends on your NFS configuration.
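If you do have access to the export side, a minimal sketch of what usually works, assuming /data is the exported directory on the NFS server (the paths and export line below are illustrative, not your actual config):
# on the NFS server (10.34.56.1)
sudo chown myuser:mygroup /data
# /etc/exports entry controlling whether client root is squashed (illustrative)
/data 10.0.0.0/24(rw,sync,no_root_squash)
Note that the client will only show myuser:mygroup if the uid/gid numbers match between client and server (or are mapped via idmapd), and a client-side chown as root is refused whenever the server squashes root, which would explain the permission denied error.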

Gluster puppet mount not working

I am trying to mount a gluster volume called storage-test on my webserver with the manifest syntax below:
gluster::mount { '/glusterfs':
  ensure    => present,
  volume    => 'storage:/storage-test',
  options   => 'defaults',
  transport => 'tcp',
  atboot    => true,
  dump      => 0,
  pass      => 0,
}
When I run puppet agent -t, everything is processed fine with no issues. I then check /etc/fstab and I see this entry: storage:/storage-test /glusterfs glusterfs defaults,transport=tcp 0 0. But when I type mount I don't see any entry for the mount defined above, and when I type df -h I don't see any entry there either. After checking all of this, I reboot the webserver. After the webserver comes back up, when I type mount I do see the mount point, and the same applies to df -h: the mount entry is there as well. After about a minute, when I perform the same checks, the mount entry that was showing after the reboot is no longer in mount or df -h.
I have also tried setting ensure => mounted, but when I run Puppet it says the status has changed from unmounted to mounted and then it just hangs, to the point where I have to reboot the server to recover; even after that, when I type df -h it hangs again. What am I doing wrong? Why isn't the volume mounting? Any help to resolve this would be greatly appreciated.
I am running glusterfs 3.12.3 for both server and client. The module version is the latest.
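One way to narrow this down is to take Puppet out of the picture and run the same mount by hand, then check the FUSE client log (by default under /var/log/glusterfs/, named after the mount point, e.g. glusterfs.log for /glusterfs); a sketch using the same values as the fstab entry:
sudo mount -t glusterfs -o defaults,transport=tcp storage:/storage-test /glusterfs
sudo tail -n 50 /var/log/glusterfs/glusterfs.log
If the manual mount also disappears or hangs, the problem is on the Gluster side rather than in the Puppet module.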

check status of ZFS pool on Linux host with Icinga monitoring system

I have a server which is used for backup storage. It's running ZFS on Linux, configured with a RAID z2 data pool and shared via Samba.
I need to monitor the ZFS filesystem to at least be able to see how much space is available.
I thought a simple check_disk plugin would do the job.
I'm able to execute the command from the Icinga server CLI:
sudo -u nagios /usr/lib/nagios/plugins/check_nrpe -H <hostname> -c check_disk -a 10% 20% /data/backups
DISK OK - free space: /data/backups 4596722 MB (30% inode=99%);| /data/backups=10355313MB;13456832;11961628;0;14952036
But the GUI shows the following error:
DISK CRITICAL - /data/backups is not accessible: No such file or directory
It works under the check_mk monitoring system, but we are migrating from check_mk right now.
I don't have any problems with checking other filesystems (root, boot) in Icinga on this machine.
I would appreciate any advice.
Thanks
This line is in /etc/icinga/objects/linux.cfg on the server:
check_command check_nrpe_1arg!check_backup
This line is in /etc/nagios/nrpe.cfg on the client:
command[check_backup]=/usr/lib64/nagios/plugins/check_disk -w $ARG1$ -c $ARG2$ -p $ARG3$
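For reference, when check_backup is invoked with the same three arguments the CLI test passes (10% 20% /data/backups), that definition expands on the client to:
/usr/lib64/nagios/plugins/check_disk -w 10% -c 20% -p /data/backups
Comparing what the GUI-side check_nrpe_1arg command actually passes as arguments against this line is a good first step, since as commonly defined it forwards only the command name and no -a arguments.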
