How to use Extended File Attributes on NFS? - linux

I have an NFS server and an NFS client system.
The client mounts a directory exported by the server.
I want to change attributes of the server directory's files from the client, through the mounted directory, using extended file attributes (xattrs).
When I try to set an attribute from the client side, it fails like this:
root@ubuntu:/mnt/nfs/var/nfs# setfattr -n user.comment -v "some comment" test.txt
setfattr: test.txt: Permission denied
My question is:
Is it possible to use Extended File Attributes via NFS?
If so, how can I do this?
UPDATE:
Server side:
My /etc/exports file has:
/var/nfs 192.168.56.123(rw,sync,no_subtree_check)
Client side:
root@ubuntu:/# mount -t nfs
192.168.56.130:/var/nfs on /mnt/nfs/var/nfs type nfs (rw,vers=4,addr=192.168.56.130,clientaddr=192.168.56.123)
thank you...

You can use fuse_xattrs (a FUSE filesystem layer) to emulate extended attributes (xattrs) on NFS shares. Basically you have to:
mount the NFS share, e.g. on /mnt/shared_data
mount the fuse_xattrs layer on top of it:
$ fuse_xattrs /mnt/shared_data /mnt/shared_data_with_xattrs
Now all the files on /mnt/shared_data can be accessed on /mnt/shared_data_with_xattrs with xattr support. The extended attributes are not stored on the server's filesystem as real extended attributes; they are stored in sidecar files.
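For example, once both mounts are in place, setting and reading an attribute through the layered mount point should work (a minimal sketch; test.txt is the file from the question):
$ setfattr -n user.comment -v "some comment" /mnt/shared_data_with_xattrs/test.txt
$ getfattr -n user.comment /mnt/shared_data_with_xattrs/test.txt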
Sadly this is only a work-around.
disclaimer: I'm the author of fuse_xattrs.

(This question is old, but I came across it when looking for this functionality, and the older answers no longer represent the current state.)
As others have mentioned, there is no support for extended attributes in NFS. However, there is significant interest in it, to the extent that there is a proposed standard (RFC 8276).

All that is needed is Linux kernel version 5.9 or newer on both the server and client, then mount with NFS version 4.2 or newer. Support for extended attributes is enabled automatically when both server and client support nfs 4.2.
I have kernel version 5.15.16 on both my server and client with nfs-utils-2.5.4-r3, and it is working for me:
NFS Server /etc/exports
/ 192.168.0.42(rw,subtree_check,no_root_squash)
NFS Client /etc/fstab
192.168.0.42:/ /mnt/slowpc nfs noatime,nodiratime,noauto,hard,rsize=1048576,wsize=1048576,timeo=60,retrans=60 0 0
NFS Client
# mount | grep /mnt/slowpc
192.168.0.42:/ on /mnt/slowpc type nfs4 (rw,noatime,nodiratime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=60,retrans=60,sec=sys,local_lock=none)
# cd /mnt/slowpc/tmp
# touch file
# printf bar | attr -s foo file
Attribute "foo" set to a 3 byte value for file:
bar
# attr -l file
Attribute "foo" has a 3 byte value for file
NFS Server
# attr -l /tmp/file
Attribute "foo" has a 3 byte value for /tmp/file
At https://lwn.net/Articles/799185/ it is mentioned that the new mount option user_xattr is required. However the current nfs utilities do not support that option. Fortunately user_xattr is enabled automatically when possible.
# mount -o user_xattr /mnt/test
mount.nfs: an incorrect mount option was specified
# tail -n 1 /var/log/messages
Jan 30 02:51:08 utl01 kernel: nfs: Unknown parameter 'user_xattr'

Extended attributes are not supported by NFS. There is no handler for user attributes in the NFS kernel module. For more information, read the RFC for NFSv4.

The NFS code in Linux 5.9 finally added support for user extended attributes (user xattrs).
The NFS server updates for Linux 5.9 have support for user-extended attributes on NFS. This is the functionality outlined via IETF's RFC 8276 for handling of file-system extended attributes in NFSv4. "This feature allows extended attributes (hereinafter also referred to as xattrs) to be interrogated and manipulated using NFSv4 clients. Xattrs are provided by a file system to associate opaque metadata, not interpreted by the file system, with files and directories. Such support is present in many modern local file systems. New file attributes are provided to allow clients to query the server for xattr support, with that support consisting of new operations to get and set xattrs on file system objects."
Source: https://www.phoronix.com/scan.php?page=news_item&px=Linux-5.9-NFS-Server-User-Xattr

Related

Failure of rsync of multi-user directory with sshfs fuse mount

I use rsync for automatic periodic syncing of the home folder (root user) in a linux server that is used by several people. A service that users need is the possibility of mounting remote directories through sshfs. However, when there is an sshfs mount, rsync fails giving the following messages
rsync: readlink_stat("/home/???/???") failed: Permission denied (13)
IO error encountered -- skipping file deletion
...
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1183) [sender=3.1.1]
Because of this error, the automated sync does not work as expected, in particular due to skipping the file deletion and a non-zero exit code. The sync is only necessary for the file system where home is mounted, so the wanted behavior is that the sshfs mounts be ignored. The -x / --one-file-system rsync option does not resolve it.
This problem is clearly explained in https://www.agwa.name/blog/post/how_fuse_can_break_rsync_backups . The follow-up article (https://www.agwa.name/blog/post/easily_running_fuse_in_an_isolated_mount_namespace) proposes a solution, though not an acceptable one, because fuse mounts would only be visible to the process that created the mount.
I am looking for a solution that does not affect sshfs usability and is transparent for the users.
The problem is that FUSE denies stat access to other users, including root. Rsync requires stat access on all source files and directories specified. But when an rsync process owned by another user stats a FUSE mount-point, FUSE denies that process access to the mount-point's attributes, causing rsync to throw the "Permission denied" error shown above. Mauricio Villega's solution works by telling rsync to skip the FUSE mount-points listed by the mount command. Here is another version of Villega's solution that specifies a whitelist of filesystem types using the findmnt command. I chose ext3 and ext4, but you may add other types as needed.
#!/bin/bash
# Which paths to rsync (note the lack of trailing slash tells rsync to preserve source path name at destination).
SOURCES=(
/home
)
# Which filesystem types are supported.
FSTYPES=(
ext3
ext4
)
# Rsync each source.
for SOURCE in "${SOURCES[@]}"; do
# Build exclusion list (a string of "--exclude=PATH" options).
excludedPaths=$(findmnt --invert --list --noheadings --output TARGET --types $(IFS=',';echo "${FSTYPES[*]}"))
printf -v exclusionList -- "--exclude=%s " ${excludedPaths[@]}
# Rsync.
rsync --archive ${exclusionList[@]} --hard-links --delete --inplace --one-file-system ${SOURCE} /backup
done
Note that it builds the exclusion list inside the loop to address a fundamental problem with this solution. That problem is due to rsync'ing from a live system where a user could create new FUSE mount-points while rsync is running. The exclusion list needs to be updated frequently enough to include new FUSE mount-points. You may divide the home directory further by each username by modifying the SOURCES array as shown.
SOURCES=(
/home/user1
/home/user2
)
If you are using LVM, an alternative solution is rsync from an LVM snapshot. An LVM snapshot provides a simple (e.g., no FUSE mount-points) and frozen view of the logical volume it is linked to. The downside is that you must reserve space for the LVM snapshot's copy-on-write (COW) activity. It is crucial that you discard the LVM snapshot after you are done with it; otherwise the LVM snapshot will continue to grow in size as modifications are made. Here is a sample script that uses LVM snapshots. Note that it does not need to build an exclusion list for rsync.
# Create and mount LVM snapshot.
lvcreate --extents 100%FREE --snapshot --name snapRoot /dev/vgSystem/lvRoot
mount -o ro /dev/mapper/snapRoot /root/mnt # Note that only root has access to this mount-point.
# Rsync each source.
for SOURCE in "${SOURCES[@]}"; do
rsync --archive --hard-links --delete --inplace --one-file-system /root/mnt/${SOURCE} /backup
done
# Discard LVM snapshot.
umount /root/mnt
lvremove vgSystem/snapRoot
References:
"How FUSE Can Break Rsync Backups"
This error does not appear if the fuse mount-points are excluded from the rsync command. Since this is an automated sync, the mount command can be used to obtain all fuse mount-points. The output of the mount command may differ depending on the system, but on Debian Jessie sshfs mounts appear as USER@HOST:MOUNTED_DIR on /path/to/mount/point type fuse.sshfs (rw,...). A simple way to automate the exclusion of fuse mounts in bash+sed is the following:
SOURCE="/home/"
FUSEEXCLUDE=( $( mount |
sed -rn "
/ type fuse/ {
s|^[^ ]+ on ([^ ]+) type fuse.+|\1|;
/^${SOURCE//\//\\\/}.+/ {
s|^${SOURCE//\//\\\/}| --exclude |;
p;
}
}" ) )
rsync $OPTIONS "${FUSEEXCLUDE[@]}" "$SOURCE" "$TARGET"

mount_root not working / not found - linux openwrt

I've updated my openwrt firmware using the web interface. Now the web interface is unreachable.
I lost my root password, so I started my router (WR1043ND) in failsafe mode, but the mount_root command is not working:
$ mount_root
/bin/ash: mount_root: not found
Any clue? I can't find any solution in the docs or online.
You can mount the jffs2 partition manually. This partition contains your configuration, so once you mount it, you will be able to reset the root password.
Use this command: mount -t jffs2 /dev/mtdblock3 /mnt/. Please note that the mtdblock number may vary between routers. If there is nothing in the /mnt dir after issuing this command, try another mtdblock number.
Then go to the /mnt dir and remove the etc/shadow and etc/passwd files there to reset the root password.
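A minimal sketch of the whole sequence, following the steps above (the mtdblock number is an assumption; adjust it for your router):
# mount -t jffs2 /dev/mtdblock3 /mnt
# rm /mnt/etc/shadow /mnt/etc/passwd
# umount /mnt
# reboot
Because the jffs2 partition is an overlay over the read-only firmware image, deleting those files reverts them to their defaults, so after the reboot you can log in as root and set a new password with passwd.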

Directory remapping between users or processes on Linux?

For example, I want to redirect the directory /data per user.
When user1 accesses /data, it actually accesses /data1.
When user2 accesses /data, he actually accesses /data2.
What technology should I use? cgroups? unionfs? Something else? I'm sorry, I'm a newbie.
More advanced: redirection between processes.
process1 accesses /data1 as /data,
process2 accesses /data2 as /data.
How can I do that?
There are Linux mount namespaces that can do what you want. You would create a new namespace and, inside it, mount /data as a bind mount of the real /data1 or /data2.
However, this is kind of tricky to do right now, as far as I know, and needs tooling that most Linux distros may not ship.
Most Unix software uses environment variables to find its data directories. With something like this, you'd put
export JACKSPROGRAMDATA=/data1
in the user's $HOME/.profile (or .bash_profile), and jacksprogram would call getenv("JACKSPROGRAMDATA") to read the value.
In Linux, you can use bind mounts to map a directory or file to another path, and per-process mount namespaces to do it for a specific process.
Bind mounts are implemented via the -o bind option of mount. A mount namespace can be employed e.g. using the unshare tool, which is part of the util-linux package.
See examples in this answer.
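A minimal sketch of that approach (run as root; /data1 and /data are the paths from the question; the second command runs inside the shell started by the first):
# unshare --mount --propagation private /bin/bash
# mount --bind /data1 /data
Every process started from that inner shell now sees /data1 when it opens /data, while the rest of the system is unaffected.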
Mount namespaces allow you to set up a different view of the filesystem that is private to all processes run within that namespace. You can then use mount --bind within that namespace to map directories.
For example, on user login you can create a namespace dedicated to that user. Within that namespace, you can use mount --bind to mount the directory /opt/data/$USER on top of /data. You can then run the user's shell in that namespace. For that shell and any other process started within it, any read or write in /data/ will end up reading and writing from /opt/data/$USER instead.
To automate the setup, you can use the pam_namespace pam module. A configuration file /etc/security/namespace.conf similar to this:
/data /opt/data/$USER level root,adm
could be all you need to make this work.
Alternatively, you could use a utility like faketree to do this interactively from the shell or in your CI/CD pipelines:
faketree --mount /opt/data/$USER:/data -- /bin/bash
(does not require root, uses namespaces)
You can read more about faketree in the main repository for the tool or in this blog post.

FreeBSD Jail and SSH - /dev/tty: No such file or directory

When I try to connect through SSH from inside the jail I get this error:
# ssh test@test.com
...
debug1: read_passphrase: can not open /dev/tty: No such file or directory
Host key verification failed.
Outside the jail everything works properly. Any ideas?
Steps to reproduce:
# jls
JID IP Address Hostname Path
1 10.10.3.1 demo.example.com /jails/demo
# jexec 1 tcsh
(inside jail:)
# ssh test@test.com
Does your jail root have a populated /dev filesystem through a devfs mount? It looks like it doesn't right now.
Important note: You should be able to use devfs rules to limit the devices visible to jailed processes. In particular, access to raw disk device nodes is a bad idea. The jail(8) manpage describes this in the following paragraph:
It is important that only appropriate device nodes in devfs be exposed to a jail; access to disk devices in the jail may permit processes in the jail to bypass the jail sandboxing by modifying files outside of the jail. See devfs(8) for information on how to use devfs rules to limit access to entries in the per-jail devfs. A simple devfs ruleset for jails is available as ruleset #4 in /etc/defaults/devfs.rules.
You should be able to mount devfs under /jails/demo/dev and apply the recommended jail device rules by running as root the following commands:
# mkdir /jails/demo/dev
# mount -t devfs devfs /jails/demo/dev
# devfs -m /jails/demo/dev rule -s 4 applyset
Of course, you can also write a custom ruleset in /etc/defaults/devfs.rules, even a special devfs ruleset that only applies to a specific jail.
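A minimal sketch of what such a custom ruleset could look like (the ruleset name and number are illustrative; the included $devfsrules_* sets are the stock ones defined in /etc/defaults/devfs.rules):
[devfsrules_jail_demo=100]
add include $devfsrules_hide_all
add include $devfsrules_unhide_basic
add include $devfsrules_unhide_login
You would then apply it with devfs -m /jails/demo/dev rule -s 100 applyset, as above.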
For more details see also the manpages for jail(8), devfs(8), and devfs.rules(5).
You may also experience this if you've entered the jail via the jail command. If you start up the jail and SSH into it, you should have better luck.
The devfs filesystem is probably not mounted in your jail. Many things will fail, not just ssh.
To mount a properly-filtered devfs automatically, your best bet is to use rc.conf variables:
jail_enable=YES
jail_list="JAILNAME"
jail_devfs_enable=YES
jail_JAILNAME_rootdir='/jails/demo'
jail_JAILNAME_hostname="demo"
Then you can start/stop it using "/etc/rc.d/jail start demo" and "/etc/rc.d/jail stop demo".

How to register FUSE filesystem type with mount(8) and fstab?

I've written a small FUSE-based filesystem, and now the only part missing is that I want to register it with fstab(5) to auto-mount it on system startup and/or mount it manually with just mount /srv/virtual-db. How can I achieve this?
I know, I can just run /usr/bin/vdbfs.py /srv/virtual-db from some init script, but that's not exactly pretty.
I'm sorry because this may be not exactly a programming question, but it's highly related, as the packaging and deployment is still the programmer's job.
In general, one "registers" a new mount filesystem type by creating an executable mount.fstype.
$ ln -s /usr/bin/vdbfs.py /usr/sbin/mount.vdbfs
If vdbfs.py takes mount-ish arguments (i.e. dev path [-o opts]), then mount -t vdbfs and using vdbfs as the 3rd field in fstab will work. If it doesn't, you can create a wrapper which does take arguments of that form and maps them to whatever your vdbfs.py takes.
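If you need such a wrapper, a minimal sketch could look like this (how vdbfs.py actually expects its arguments is an assumption here; adapt the last line accordingly):
#!/bin/sh
# /usr/sbin/mount.vdbfs: mount(8) invokes helpers as
#   mount.<type> <dev> <mountpoint> [-o <opts>]
dev="$1"; mountpoint="$2"; shift 2
# Forward everything to the FUSE program.
exec /usr/bin/vdbfs.py "$dev" "$mountpoint" "$@"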
FUSE should also install a mount.fuse executable; mount.fuse 'vdbfs.py#dev' path -o opts will go on and call vdbfs.py dev path -o opts. In that case, you can use fuse as your filesystem type and prefix your device with vdbfs.py#.
So to clarify ephemient's answer, there are two options:
Edit /etc/fstab like this:
# <file system> <mount point> <type> <options> <dump> <pass>
# ...
vdbfs.py#<dev> /srv/virtual-db fuse user,<other-opts> 0 0
Or,
Create an executable prefixed with "mount." (ensuring it can be used
with mount-like options):
$ ln -s /usr/bin/vdbfs.py /usr/sbin/mount.vdbfs
And edit /etc/fstab like this:
# <file system> <mount point> <type> <options> <dump> <pass>
# ...
<dev> /srv/virtual-db vdbfs.py user,<other-opts> 0 0
With regard to auto-mounting at startup and manually mounting with mount, the user and noauto options are relevant and fully supported by FUSE itself, so you don't have to implement them yourself. The user option lets a non-privileged user who is a member of the "fuse" group mount your filesystem with the mount command, and noauto directs your filesystem not to mount automatically at startup. If you don't specify noauto, it will mount automatically.
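For example, combining both options in the fstab line from the first option above (a sketch; <dev> stays whatever your filesystem expects as its device argument):
vdbfs.py#<dev> /srv/virtual-db fuse user,noauto 0 0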
To clarify @patryk.beza's comment on the accepted answer, the correct way to mount a FUSE file system is by setting the file system type to fuse.<subtype>.
For example, to mount an s3fs-fuse implementation, which does not provide a specific /sbin/mount.* wrapper and uses normally the s3fs user command to mount S3 buckets, one can use this command as root:
mount -t fuse.s3fs bucket-name /path/to/dir -o <some,options>
or this line in /etc/fstab:
bucket-name /path/to/dir fuse.s3fs <some,options> 0 0
or this SystemD mount unit (for example, /etc/systemd/system/path-to-dir.mount):
[Unit]
Description=S3 Storage
After=network.target
[Mount]
What=bucket-name
Where=/path/to/dir
Type=fuse.s3fs
Options=<some,options>
[Install]
WantedBy=multi-user.target
How this works: mount recognizes the concept of "filesystem subtypes" when the type is formatted with a period (i.e. <type>.<subtype>), so that a type with the format fuse.someimpl is recognized to be the responsibility of the FUSE mount helper /sbin/mount.fuse. The FUSE mount helper then resolves the someimpl part to the FUSE implementation, in the same way as the # format is used in the original answer (I think this is just a path search for a program named <subtype>, but I'm not 100% sure about it).
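For example, these two fstab lines should be equivalent ways of reaching the same FUSE helper (sshfs is used for illustration; options are omitted):
user@example.com:/ /mnt fuse.sshfs noauto 0 0
sshfs#user@example.com:/ /mnt fuse noauto 0 0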
You could just use the fuse filesystem type. The following works on my system:
smbnetfs /media/netbios fuse defaults,allow_other 0 0
Another example:
sshfs#user@example.com:/ /mnt fuse user,noauto 0 0
After researching a lot, I found this solution to mount a FUSE filesystem using an fstab entry. I was using FUSE to mount an S3 bucket on a local Linux machine.
.passwd-s3fs: contains the credentials to access your AWS account: 1] Secret Key and 2] Access Key.
uid: user ID. You can type the Linux command id to get your uid.
Syntax:
s3fs#<Bucket_Name> <Mounted_Direcotry_Path> fuse _netdev,allow_other,passwd_file=/home/ubuntu/.passwd-s3fs,use_cache=/tmp,umask=002,uid=<User_Id> 0 0
Example:
s3fs#myawsbucket /home/ubuntu/s3bucket/mys3bucket fuse _netdev,allow_other,passwd_file=/home/ubuntu/.passwd-s3fs,use_cache=/tmp,umask=002,uid=1000 0 0
To mount it, you need to run the following command:
mount -a
To check whether your bucket is mounted properly, use the following command, which shows all mount points:
df -h
