Normal user touching a file in /var/run failed - linux

I have a program called HelloWorld belonging to user test.
HelloWorld creates a file HelloWorld.pid in /var/run to ensure only a single instance runs.
I used the following command to try to give test access to /var/run:
usermod -a -G root test
However, when I run the program, it still fails.
Could someone help me?

What are the permissions on /var/run? On my system, /var/run is rwxr-xr-x, which means only the user root can write to it. The permissions do not allow write access by members of the root group.
The normal way of handling this is by creating a subdirectory of /var/run that is owned by the user under which you'll be running your service. E.g.,
sudo mkdir /var/run/helloworld
sudo chown myusername /var/run/helloworld
Note that /var/run is often an ephemeral filesystem that disappears when your system reboots. If you would like your target directory to be created automatically when the system boots you can do that using the systemd tmpfiles service.
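For instance, a tmpfiles.d snippet (the filename is illustrative) would recreate the directory with the right owner on every boot:

```
# /etc/tmpfiles.d/helloworld.conf (hypothetical filename)
# Type  Path             Mode  User  Group  Age
d       /run/helloworld  0755  test  test   -
```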

Some Linux systems store per-user runtime files in /var/run/user/UID/.
In this case you can create your pid file in /var/run/user/$(id -u test)/HelloWorld.pid.
Alternatively just use /tmp.
You may want to use the user's name as a prefix to the pid filename to avoid collision with other users, for instance /tmp/test-HelloWorld.pid.
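A minimal sketch of that single-instance check (the stale-pid handling is an illustrative addition, not from the original program):

```shell
#!/bin/sh
# Per-user pid file in /tmp; the user-name prefix avoids collisions
# between different users running the same program.
PIDFILE="/tmp/$(id -un)-HelloWorld.pid"

# If a pid file exists and that process is still alive, refuse to start.
if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    echo "HelloWorld is already running (pid $(cat "$PIDFILE"))" >&2
    exit 1
fi

# Record our own pid.
echo $$ > "$PIDFILE"
```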

Related

Vulnerability when running docker as non-root?

I'm kind of fighting with privileges (no troll) for my Docker project: I'm trying to let a user inside Docker read/write a volume shared with the host, while the host user can also read/write in that directory alongside the Docker user.
In my case, neither the Docker user nor the host user should be root. That means that, on the shared volume, the user running Docker shouldn't be able to reach files in the volume that don't belong to him. However, I discovered that running a volume as a user without root privileges does not protect root's files.
Example
For instance, in the following situation
A directory with two files, one owned by root and one not; the user's name is user, he has no root privileges, but he is part of the docker group:
C:/.../directory:
-rwxr-x--x root file1
-rwxr-x--x user file2
The user runs docker through the command:
docker run -v /c/.../directory:/volume:rw -e USER_ID=$(id -u) -e GROUP_ID=$(id -g)
And the Docker entrypoint is the following script.sh:
#!/bin/bash
usermod -u ${USER_ID} dockeruser ;
groupmod -g ${GROUP_ID} dockeruser ;
chown dockeruser:dockeruser -R /volume ;
exit;
The permissions are then changed on the host's directory, even for root's file, which I shouldn't have been able to write to:
C:/.../directory:
-rwxr-x--x user file1
-rwxr-x--x user file2
Is it normal that a user who isn't root can do anything with files that don't belong to him?
I'm quite a beginner, so I don't know if it's a misleading vulnerability (because we force the user to be neither root nor sudo but in fact it doesn't change anything), or if I'm just getting it wrong ^^, so feel free to tell me if this isn't the way I should handle it.
Regards,
Waldo

mount cifs too long due to chown for each file

I need to run an application on a VM, where I can do my setup in a script that runs as root when the machine is built.
In this script I would like to mount a Windows filesystem, so I am using CIFS.
So I am writing the following in the fstab:
//win/dir /my/dir cifs noserverino,ro,uid=1002,gid=1002,credentials=/root/.secret 0 0
After this, still in the same script, I try to mount it:
mount /my/dir
That results in two lines of output for each file:
chown: changing ownership of `/my/dir/afile': Read-only file system
Because I have a lot of files, this takes forever...
With the same fstab I have asked an admin to manually mount the same directory :
sudo mount /my/dir
-> this is very quick with NO extra output.
I assume the difference in behavior is due to the fact that the script is run as root.
Any idea how to avoid the issue while keeping the script running as root (this is not under my control)?
Cheers.
Renaud

Linux Sudo users disable change directory to /

We are using sudo users with a limited set of commands to execute, each assigned the default home directory /home/sudouser, but if such a sudo user runs the command cd / it changes the directory to the root directory /. This behaviour is totally insecure for us.
We need it such that if the sudo user enters cd / or cd, it changes to their home directory /home/sudouser.
Please let us know how we can implement this.
Don't ever try to restrict a sudo user to only a directory or a command; a sudo user can, by definition, do what he wants.
In your case, having a script that assigns the home directory is, I think, a better idea. To solve the trouble of permissions, look at the suid bit: http://www.linuxnix.com/suid-set-suid-linuxunix/
For example: create an sh file with the permissions "-rwsr--r--" that is owned by root, with a group accessible to the user whom you want to run the script.
Then, in that file, create a simple script that changes the default directory, taking, say, two parameters (username and directory).
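A minimal sketch of such a script (the names are illustrative; the real version would call usermod directly and therefore needs the elevated permissions described above, so here the command is only echoed):

```shell
#!/bin/sh
# set_home USERNAME DIRECTORY - assign DIRECTORY as USERNAME's home.
set_home() {
    user="$1"
    dir="$2"
    if [ -z "$user" ] || [ -z "$dir" ]; then
        echo "usage: set_home USERNAME DIRECTORY" >&2
        return 1
    fi
    # The real command would be: usermod -d "$dir" "$user"
    # Echoed here so the sketch can run without root.
    echo "usermod -d $dir $user"
}

set_home sudouser /home/sudouser
```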

Directory remapping between users or processes on Linux?

For example, I want to redirect the directory /data between users.
When user1 accesses /data, he actually accesses /data1.
When user2 accesses /data, he actually accesses /data2.
What technology should I use? cgroups? unionfs? others? I'm sorry I'm a newbie.
More advanced: redirection between processes.
process1 accesses /data1 as /data,
process2 accesses /data2 as /data.
How can I do that?
There are Linux filesystem namespaces that can do what you want. You would create a new namespace and mount /data inside it as a bind mount to the real /data1 or /data2.
However, this is somewhat tricky to do right now, as far as I know, and needs tooling that most Linux distros may not provide.
Most Unix software uses environment variables to find its data directories. For something like this, you'd have
export JACKSPROGRAMDATA=/data1
in the user's $HOME/.profile (or .bash_profile), and jacksprogram would use getenv("JACKSPROGRAMDATA") to read the value.
On Linux, you can use bind mounts to map a directory or file to another path, and per-process mount namespaces to do it for a specific process.
Bind mounts are done with the -o bind option of mount. A mount namespace can be set up, e.g., using the unshare tool, which is part of the util-linux package.
See examples in this answer.
Mount namespaces allow you to set up a different view of the filesystem, private to all processes run within that namespace. You can then use mount --bind within that namespace to map directories.
For example, on user login you can create a namespace dedicated to that user. Within that namespace, you can use mount --bind to mount the directory /opt/data/$USER on top of /data. You can then run the user's shell in that namespace. For that shell and any other process started within it, any read or write in /data will actually read from and write to /opt/data/$USER instead.
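A runnable sketch of this idea using unshare (the paths are illustrative; --map-root-user avoids needing real root, but requires unprivileged user namespaces to be enabled on the system):

```shell
# Create a private mount namespace in which /tmp/data is a bind
# mount of /tmp/data1; processes outside the namespace are unaffected.
mkdir -p /tmp/data1 /tmp/data
echo hello > /tmp/data1/file

unshare --mount --map-root-user sh -c '
    mount --bind /tmp/data1 /tmp/data
    cat /tmp/data/file
'
```

Inside the namespace the cat reads /tmp/data1/file through the /tmp/data path; outside it, /tmp/data stays empty.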
To automate the setup, you can use the pam_namespace pam module. A configuration file /etc/security/namespace.conf similar to this:
/data /opt/data/$USER level root,adm
could be all you need to make this work.
Alternatively, you could use a utility like faketree to do this interactively from the shell or in your CI/CD pipelines:
faketree --mount /opt/data/$USER:/data -- /bin/bash
(does not require root, uses namespaces)
You can read more about faketree in the main repository for the tool or in this blog post.

Sshfs as regular user through fstab

I'd like to mount a remote directory through sshfs on my Debian machine, say at /work. So I added my user to the fuse group and ran:
sshfs user@remote.machine.net:/remote/dir /work
and everything works fine. However it would be very nice to have the directory mounted on boot. So I tried the /etc/fstab entry given below:
sshfs#user@remote.machine.net:/remote/dir /work fuse user,_netdev,reconnect,uid=1000,gid=1000,idmap=user 0 0
sshfs asks for the password and mounts almost correctly. Almost, because my regular user has no access to the mounted directory; when I run ls -la /, I get:
d????????? ? ? ? ? ? work
How can I get it with the right permissions through fstab?
Using the option allow_other in /etc/fstab allows users other than the one doing the actual mounting to access the mounted filesystem. When you boot your system and mount your sshfs, it's done by user root instead of your regular user. When you add allow_other, users other than root can access the mount point. File permissions under the mount point still stay the same as they used to be, so if you have a directory with a 0700 mask there, it's not accessible by anyone else but root and the owner.
So, instead of
sshfs#user@remote.machine.net:/remote/dir /work fuse user,_netdev,reconnect,uid=1000,gid=1000,idmap=user 0 0
use
sshfs#user@remote.machine.net:/remote/dir /work fuse user,_netdev,reconnect,uid=1000,gid=1000,idmap=user,allow_other 0 0
This did the trick for me at least. I did not test this by booting the system, but instead just issued the mount command as root, then tried to access the mounted sshfs as a regular user.
Also, to complement the previous answer:
You should prefer the [user]@[host] syntax over the sshfs#[user]@[host] one.
Make sure you allow non-root users to specify the allow_other mount option in /etc/fuse.conf.
Make sure you use each sshfs mount at least once manually as root, so the host's signature is added to the .ssh/known_hosts file:
$ sudo sshfs [user]@[host]:[remote_path] [local_path] -o allow_other,IdentityFile=[path_to_id_rsa]
REF: https://wiki.archlinux.org/index.php/SSHFS
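The /etc/fuse.conf change mentioned above is a single uncommented line:

```
# /etc/fuse.conf
user_allow_other
```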
Also, complementing the accepted answer: the user on the target machine needs a valid login shell, e.g. on the target machine: sudo chsh username -> /bin/bash.
I had a user whose shell was /bin/false, and this caused problems.
