Linux: simulating/masking user ownership upon mount of 'external' partitions? [closed] - linux

This is my problem: I have a partition on my Ubuntu system, let's call it myhdrive, which is not automounted upon boot (so I use the Disk Mounter applet or Nautilus to mount it manually). When it is mounted, listing this partition looks like this in Ubuntu:
$ ls -la /media/myhdrive/
total 5743740
drwxr-xr-x 8 myusername myusername 4096 2011-07-21 08:19 .
drwxr-xr-x 4 root root 4096 2011-07-21 04:13 ..
-rw-r--r-- 1 myusername myusername 98520 2011-07-21 08:19 file1.txt
-rw-r--r-- 1 myusername myusername 3463 2011-07-21 08:19 file2.txt
Now, let's say I shutdown the Ubuntu OS - and boot, let's say, OpenSUSE from a USB key on the same machine. The myhdrive partition will again not be automounted, and then I have to mount it manually (again from the file manager there). The thing is, when mounted under OpenSUSE, the same drive displays the following listing:
$ ls -la /media/myhdrive/
total 5743740
drwxr-xr-x 8 1000 1000 4096 2011-07-21 08:19 .
drwxr-xr-x 4 0 0 4096 2011-07-21 04:13 ..
-rw-r--r-- 1 1000 1000 98520 2011-07-21 08:19 file1.txt
-rw-r--r-- 1 1000 1000 3463 2011-07-21 08:19 file2.txt
Obviously, myusername has a uid of 1000 on the Ubuntu system, where it is recognized - while the same username does not exist on the OpenSUSE system, so the uid is not resolved to a username.
The problem is, of course, that I cannot write to myhdrive from OpenSUSE by default - I'd first have to chown the entire partition - and then, when I get back to the Ubuntu system, I'd have to chown it back again.
It's quite clear to me that this will not be possible using the GUI tools - but is there a method or a command line switch, such that I can "fake ownership": mount this partition in such a way, that the real uid of 1000 is interpreted as 'the currently logged-in user' in the USB-booted case (including that, when writes are made to the partition by 'the currently logged-in user', they are recorded under the uid of 1000)?
Thanks in advance for any answers,
Cheers!

No. Either keep the user/group databases (/etc/passwd and /etc/group) in sync between the systems, or use an external authentication server.
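A hedged sketch of the first option: on the temporarily booted system, create a user whose numeric uid/gid match the owner recorded on the partition (1000 in the listings above; the username is only illustrative). Files then resolve to that user, and anything it writes is stored as uid 1000, which Ubuntu will recognize again. A FUSE overlay such as bindfs can achieve a similar ownership-mapping effect without touching the user database.
# Run as root on the OpenSUSE live system; the numeric ids are what matter
groupadd -g 1000 myusername
useradd -u 1000 -g 1000 -m myusername
# Work on the partition as that user (e.g. via su), and new files are
# created with uid/gid 1000, exactly as the Ubuntu install expects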

Related

What does the -s option show? And why does it change with -h? [closed]

What does the first column show when using the -s option with the ls command?
$ ls -als
41 -rw-r--r-- 1 user user 165287 Jul 10 11:18 '.tutorial.term'
1 lrwxrwxrwx 1 user user 18 Jul 1 08:40 .bash_profile -> /home/user/.bashrc
3 -rw-r--r-- 1 user user 2355 Jul 1 08:40 .bashrc
Does it show the number of blocks used for that file, or the size of the blocks used for that file?
If I add the -h option to the mix, which prints sizes in a human-readable format, why does the first column change too? And why does the value differ from that in the 6th column, which represents the actual size of the file?
$ ls -alsh
41K -rw-r--r-- 1 user user 163K Jul 10 12:34 '.tutorial.term'
512 lrwxrwxrwx 1 user user 18 Jul 1 08:40 .bash_profile -> /home/user/.bashrc
2.5K -rw-r--r-- 1 user user 2.3K Jul 1 08:40 .bashrc
As the ls man page says, -s prints the allocated size of each file, in blocks.
The size of a file and the space it occupies on your hard drive are rarely the same. Disk space is allocated in blocks. If a file is smaller than a block, an entire block is still allocated to it, because the file system doesn't have a smaller unit of real estate to use. (reference)
Also, when you use the -h option, both the allocated size and the file's content size are printed in human-readable units. The allocated size can differ from the file size because the file's content often does not use all of the allocated blocks.
If you want to know why ls -l and ls -s give different sizes, read this answer. Basically, -l reports the actual size of the file's contents, while -s reports the space allocated for it in the filesystem. -h makes all sizes human-readable, including the ones printed by -s and -l.
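A small sketch showing the difference between the allocated size (the first column printed by -s) and the content size, assuming GNU coreutils; the filename is just an example.
$ echo hello > tiny.txt
$ ls -ls tiny.txt
# first column: allocated size (in 1K units by default), e.g. 4 for a 6-byte file
$ stat -c 'content: %s bytes, allocated: %b blocks of %B bytes' tiny.txt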

What is the difference between 'ls -lh' and 'ls -l --si'? [closed]

I have executed both commands, but the sizes differ between the two outputs.
ls -lh
total 147M
-rw------- 1 root root 3.4K Sep 30 14:58 anaconda-ks.cfg
-rw-r--r-- 1 root root 247 Sep 30 14:58 install.post.log
-rw-r--r-- 1 root root 54 Sep 30 14:58 install.postnochroot.log
-rw-r--r-- 1 root root 147M Sep 30 14:58 jdk-7u79-linux-x64.gz
ls -l --si
total 154M
-rw------- 1 root root 3.5k Sep 30 14:58 anaconda-ks.cfg
-rw-r--r-- 1 root root 247 Sep 30 14:58 install.post.log
-rw-r--r-- 1 root root 54 Sep 30 14:58 install.postnochroot.log
-rw-r--r-- 1 root root 154M Sep 30 14:58 jdk-7u79-linux-x64.gz
If you check the manpage for ls with the command man ls, you will see the following:
-l use a long listing format
-h, --human-readable
with -l and/or -s, print human readable sizes (e.g., 1K 234M 2G)
-i, --inode
print the index number of each file
-s, --size
print the allocated size of each file, in blocks
So you see, each parameter just defines what information is printed and how. The size difference you see comes from the units: -h (--human-readable) uses powers of 1024, so 1K = 1024 bytes, while --si uses powers of 1000, so 1k = 1000 bytes. The same byte count therefore appears as 147M with -h and as 154M with --si (and 3.4K becomes 3.5k). The -s option is something else again: it prints the allocated size of each file in blocks on your disk, which depends on the block size of your filesystem, so a file whose content is 3.4K may still occupy 4K on disk because it has to fill up whole blocks.
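A hedged illustration of that unit difference; the filename is the one from the question and the exact byte count in the comments is only illustrative.
$ ls -l jdk-7u79-linux-x64.gz        # exact size in bytes, e.g. 154181584
$ ls -lh jdk-7u79-linux-x64.gz       # binary units: 154181584 / 1024^2 ≈ 147M
$ ls -l --si jdk-7u79-linux-x64.gz   # SI units:     154181584 / 1000^2 ≈ 154M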

Programmatically create a btrfs file system whose root directory has a specific owner

Background
I have a test script that creates and destroys file systems on the fly, used in a suite of performance tests.
To avoid running the script as root, I have a disk device /dev/testdisk that is owned by a specific user testuser, along with a suitable entry in /etc/fstab:
$ ls -l /dev/testdisk
crw-rw---- 1 testuser testuser 21, 1 Jun 25 12:34 /dev/testdisk
$ grep testdisk /etc/fstab
/dev/testdisk /mnt/testdisk auto noauto,user,rw 0 0
This allows the disk to be mounted and unmounted by a normal user.
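For illustration, with that fstab entry in place the unprivileged user can mount and unmount the device directly (a minimal sketch using the paths from the question):
$ mount /mnt/testdisk     # or: mount /dev/testdisk -- no sudo needed thanks to the 'user' option
$ umount /mnt/testdisk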
Question
I'd like my script (which runs as testuser) to programmatically create a btrfs file system on /dev/testdisk such that the root directory is owned by testuser:
$ mount /dev/testdisk /mnt/testdisk
$ ls -la /mnt/testdisk
total 24
drwxr-xr-x 3 testuser testuser 4096 Jun 25 15:15 .
drwxr-xr-x 3 root root 4096 Jun 23 17:41 ..
drwx------ 2 root root 16384 Jun 25 15:15 lost+found
Can this be done without running the script as root, and without resorting to privilege escalation (use of sudo) within the script?
Comparison to other file systems
With ext{2,3,4} it's possible to create a filesystem whose root directory is owned by the current user, with the following command:
mkfs.ext{2,3,4} -F -E root_owner /dev/testdisk
Workarounds I'd like to avoid (if possible)
I'm aware that I can use the btrfs-convert tool to convert an existing (possibly empty) ext{2,3,4} file system to btrfs format. I could use this workaround in my script (by first creating an ext4 filesystem and then immediately converting it to btrfs) but I'd rather avoid it if there's a way to create the btrfs file system directly.
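For reference, a minimal sketch of that workaround, assuming it behaves as described (device path as above; whether the root directory's ownership survives the conversion should be verified):
$ mkfs.ext4 -F -E root_owner /dev/testdisk   # root directory owned by the invoking user
$ btrfs-convert /dev/testdisk                # convert the ext4 filesystem in place to btrfs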

Can't expose a fuse based volume to a Docker container

I'm trying to provide my Docker container with an encrypted-filesystem volume for internal use.
The idea is that the container will write to the volume as usual, but in fact the host will be encrypting the data before writing it to the filesystem.
I'm trying to use EncFS - it works well on the host, e.g:
encfs /encrypted /visible
I can write files to /visible, and those get encrypted.
However, when trying to run a container with /visible as the volume, e.g.:
docker run -i -t --privileged -v /visible:/myvolume imagename bash
I do get a volume in the container, but it's on the original /encrypted folder, not going through the EncFS. If I unmount the EncFS from /visible, I can see the files written by the container. Needless to say /encrypted is empty.
Is there a way to have docker mount the volume through EncFS, and not write directly to the folder?
In contrast, docker works fine when I use an NFS mount as a volume. It writes to the network device, and not to the local folder on which I mounted the device.
Thanks
I am unable to duplicate your problem locally. If I try to expose an encfs filesystem as a Docker volume, I get an error trying to start the container:
FATA[0003] Error response from daemon: Cannot start container <cid>:
setup mount namespace stat /visible: permission denied
So it's possible you have something different going on. In any case, this is what solved my problem:
By default, FUSE only permits the user who mounted a filesystem to have access to that filesystem. When you are running a Docker container, that container is initially running as root.
You can use the allow_root or allow_other mount options when you mount the FUSE filesystem. For example:
$ encfs -o allow_root /encrypted /other
Here, allow_root will permit the root user to have access to the mountpoint, while allow_other will permit anyone to have access to the mountpoint (provided that the Unix permissions on the directory allow them access).
If I mount my encfs filesystem using allow_root, I can then expose that filesystem as a Docker volume, and the contents of that filesystem are correctly visible from inside the container.
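Putting it together, a hedged end-to-end sketch using the paths and image name from the question:
$ encfs -o allow_root /encrypted /visible    # allow root (and thus the container) through FUSE
$ docker run -i -t -v /visible:/myvolume imagename bash
# writes to /myvolume inside the container now pass through EncFS
# and land encrypted under /encrypted on the host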
This is definitely because you started the docker daemon before the host mounted the mountpoint. In this case the inode for the directory name is still pointing at the hosts local disk:
ls -i /mounts/
1048579 s3-data-mnt
then if you mount using a fuse daemon like s3fs:
/usr/local/bin/s3fs -o rw -o allow_other -o iam_role=ecsInstanceRole /mounts/s3-data-mnt
ls -i
1 s3-data-mnt
My guess is that docker does some bootstrap caching of the directory names to inodes (someone who has more knowledge of this than I do can fill in this blank).
Your comment is correct. If you simply restart docker after the mounting has finished, your volume will be correctly shared from the host to your containers. (Or you can simply delay starting docker until after all your mounts have finished mounting.)
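A minimal sketch of that ordering on a systemd-based host (the s3fs command is the one from this answer):
$ /usr/local/bin/s3fs -o rw -o allow_other -o iam_role=ecsInstanceRole /mounts/s3-data-mnt
$ sudo systemctl restart docker    # or: sudo service docker restart on older init systems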
What is interesting (but makes complete sense to me now) is that upon exiting the container and un-mounting the mountpoint on the host, all of my writes from within the container to the shared volume magically appeared (they were being stored at the inode on the host machine's local disk):
[root@host s3-data-mnt]# echo foo > bar
[root@host s3-data-mnt]# ls /mounts/s3-data-mnt
total 6
1 drwxrwxrwx 1 root root 0 Jan 1 1970 .
4 dr-xr-xr-x 28 root root 4096 Sep 16 17:06 ..
1 -rw-r--r-- 1 root root 4 Sep 16 17:11 bar
[root@host s3-data-mnt]# docker run -ti -v /mounts/s3-data-mnt:/s3-data busybox /bin/bash
root@5592454f9f4d:/mounts/s3-data# ls -als
total 8
4 drwxr-xr-x 3 root root 4096 Sep 16 16:05 .
4 drwxr-xr-x 12 root root 4096 Sep 16 16:45 ..
root@5592454f9f4d:/s3-data# echo baz > beef
root@5592454f9f4d:/s3-data# ls -als
total 9
4 drwxr-xr-x 3 root root 4096 Sep 16 16:05 .
4 drwxr-xr-x 12 root root 4096 Sep 16 16:45 ..
1 -rw-r--r-- 1 root root 4 Sep 16 17:11 beef
root@5592454f9f4d:/s3-data# exit
exit
[root@host s3-data-mnt]# ls /mounts/s3-data-mnt
total 6
1 drwxrwxrwx 1 root root 0 Jan 1 1970 .
4 dr-xr-xr-x 28 root root 4096 Sep 16 17:06 ..
1 -rw-r--r-- 1 root root 4 Sep 16 17:11 bar
[root@host /]# umount -l s3-data-mnt
[root@host /]# ls -als
[root@ip-10-0-3-233 /]# ls -als /s3-stn-jira-data-mnt/
total 8
4 drwxr-xr-x 2 root root 4096 Sep 16 17:28 .
4 dr-xr-xr-x 28 root root 4096 Sep 16 17:06 ..
1 -rw-r--r-- 1 root root 4 Sep 16 17:11 bar
You might be able to work around this by wrapping the mount call in nsenter to mount it in the same Linux mount namespace as the docker daemon, eg.
nsenter -t "$PID_OF_DOCKER_DAEMON" encfs ...
The question is whether this approach will survive a daemon restart itself. ;-)
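A slightly fuller sketch of that idea; the PID lookup and the -m (mount namespace) flag are assumptions on my part:
$ DOCKER_PID=$(pidof dockerd)    # the daemon binary may be 'dockerd' or 'docker' depending on version
$ sudo nsenter -t "$DOCKER_PID" -m encfs -o allow_root /encrypted /visible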

linux permissions on aws : basic [closed]

I am new to Linux and I am having a problem with permissions. Quite a long time ago I created an AWS EC2 instance from scratch, using step-by-step tutorials scattered over the web. I managed to upload an HTML website there, link the domain to it, etc.
Now, after six months, I am connecting to the EC2 instance again using a MobaXterm SSH or SFTP session, and I can't upload new files or rename old files. I am using the regular ec2-user which, from what I understand, is quite a privileged user, nearly as privileged as root.
I connect successfully with the old key that I created, and I can get to the desired directory. But I simply can't upload new files or replace old ones, because I get a permission denied error. I don't know why, or how to fix it.
Last login: Fri Apr 25 13:18:26 2014 from 85.232.210.97
       __|  __|_  )
       _|  (     /   Amazon Linux AMI
      ___|\___|___|
https://aws.amazon.com/amazon-linux-ami/2014.03-release-notes/
[ec2-user@ip-172-31-47-208 ~]$ cd ./var/www/html/
-bash: cd: ./var/www/html/: No such file or directory
[ec2-user@ip-172-31-47-208 ~]$ cd .
[ec2-user@ip-172-31-47-208 ~]$ cd ..
[ec2-user@ip-172-31-47-208 home]$ cd ..
[ec2-user@ip-172-31-47-208 /]$ cd var/www/html/
[ec2-user@ip-172-31-47-208 html]$ mv index.html index_old.html
mv: cannot move ‘index.html’ to ‘index_old.html’: Permission denied
[ec2-user@ip-172-31-47-208 html]$ ls -l
total 164
drwxrwxr-x 2 ec2-user ec2-user 4096 Mar 27 16:03 css
-rw-rw-r-- 1 ec2-user ec2-user 5686 Mar 25 08:34 favicon.ico
drwxrwxr-x 2 ec2-user ec2-user 4096 Mar 27 16:04 font
drwxrwxr-x 14 ec2-user ec2-user 4096 Mar 27 16:18 images
-rwxrwxrwx 1 ec2-user ec2-user 48675 Apr 25 13:41 index.html
drwxrwxr-x 4 ec2-user ec2-user 4096 Mar 27 16:19 js
drwxrwxr-x 3 ec2-user ec2-user 4096 Mar 27 16:20 nbproject
drwxrwxrwx 2 ec2-user ec2-user 4096 Apr 25 13:30 old
drwxrwxr-x 3 ec2-user ec2-user 4096 Mar 27 16:20 php
-rw-rw-r-- 1 ec2-user ec2-user 41041 Sep 17 2013 PIE.htc
drwxrwxr-x 24 ec2-user ec2-user 4096 Mar 27 16:22 skins
-rw-rw-r-- 1 ec2-user ec2-user 30951 Mar 26 19:07 style.css
[ec2-user@ip-172-31-47-208 html]$
Can you guide me? What to check? Where to start and continue to dig to sort the issue?
I used WinSCP and SFTP also to manage file uploads easily but the permission issue remains unchanged.
Thank you
In order to add or remove files to/from a directory, you need to have write permission on the directory in question, which is /var/www/html in your case. (I originally wrote just a comment, but thinking again, there is only one reason why you see what you are seeing.) Use ls -ld /var/www/html to have a look at the permissions on the directory itself. It should probably belong to root:ec2-user, which in turn means it should likely be chmod 775 (owner and group have read/write/execute permission, others may not write).
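A hedged sketch of how that check and fix might look (the ownership and mode here are assumptions; adjust to what ls -ld actually reports):
$ ls -ld /var/www/html                      # inspect the directory's owner, group and mode
$ sudo chown root:ec2-user /var/www/html    # let the ec2-user group own the directory...
$ sudo chmod 775 /var/www/html              # ...and give that group write access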
