Symbolic links to folders whose parent directory has no execute permission - linux

I am trying to create a soft link from one directory to another. I have read and execute permissions on the directory I am trying to access, but I do NOT have execute permission on its parent directory.
Is there a way to create a soft link to my desired directory without being given execute permission on the parent directory?
Below is the code I used:
ln -s /home/dir1/dir2/desired_directory symbolic_link_name
The link just shows up in red with a grey background.
Thank you.

Although this is not possible with symlinks, you could do it with mount --bind. Note that if the whole point is to circumvent security, then this is probably a very bad idea.
Your command would be
mount --bind /home/dir1/dir2/desired_directory mount_dir
There are a few issues to be aware of:
The target directory mount_dir must exist beforehand (the same as for any mount point)
Root access is required to execute the mount command
The created "link" will not persist after a reboot unless a corresponding line is added to /etc/fstab (see the sketch after this list)
If the origin directory contains mounted file systems, these will not be transferred to the target. The mount points will appear as empty directories.
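For reference, a minimal sketch of what such an /etc/fstab entry could look like, assuming the mount point is /home/user/mount_dir (adjust both paths to your setup):

/home/dir1/dir2/desired_directory  /home/user/mount_dir  none  bind  0  0

After adding the line, running mount -a as root (or rebooting) should re-create the bind mount.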
Using mount --bind may be considered bad practice because most programs are not aware that the "link" is not a standard directory. For instance, it allows the creation of loops in the directory tree, which can make any tree-parsing application (think ls -R) enter a possibly infinite loop.
It may be hazardous when combined with recursive delete operations. See for instance Yet another warning about mount --bind and rm -rf.

Symbolic links are not a way to circumvent permissions set on their targets. No, there is no way to do what you want. If it was possible it would be a serious security issue.

Related

What does [rsync: failed to set times on "/."] really mean?

rsync: failed to set times on "/." (in XXXXXXXXXXX): Operation not permitted (1)
./
sent 483,746 bytes received 2,706 bytes 324,301.33 bytes/sec
total size is 161,339,379,726 speedup is 331,665.57
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1207) [sender=3.1.3]
ERROR: The synchronisation failed.
Done ...
I got this weird "/." and I don't understand what this path is or what it refers to.
Since it is error 23, I can confirm that the other files were transferred correctly. They all have different permissions and groups, which are indeed compatible.
Also I do not want to use --omit-dir-times.
So how do I fix this? Where is or what is "/."?
how do I fix this?
Possibly run the command as root, or as the owner of the directory in question.
Where is or what is "/."?
This path seems to be absolute; it is the file system root directory, /.
From wikipedia path (computing):
Two dots ("..") point upwards in the hierarchy, to indicate the parent directory; one dot (".") represents the current directory itself.
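A hedged way to reproduce this situation (all paths below are made up for the example): make the destination directory group-writable for you but owned by someone else, so rsync can copy files into it yet cannot change the directory's own timestamp.

mkdir -p /tmp/src && touch /tmp/src/file
sudo install -d -o root -g "$(id -gn)" -m 775 /tmp/dst   # owned by root, writable via your group
rsync -a /tmp/src/ /tmp/dst/
# the file copies fine, but expect a warning similar to:
# rsync: failed to set times on "/tmp/dst/.": Operation not permitted (1)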
The destination directory does not allow modification of the directory itself.
e.g.
rsync: [generator] failed to set times on "/mnt/tmp/.": Operation not permitted
I was mounting an ISO or UDF image through a loop device to /mnt/tmp. The permissions on that directory were not permissive enough, and should be changed prior to mounting the filesystem on /mnt/tmp.
Although it is just a warning, it still interferes with interpreting the command's return value on exit.
Change the directory permissions prior to mounting or working with the directory, or add yourself to a group with the appropriate group permissions.
chmod -R a+rwX /mnt/tmp
I have experienced the same problem: in my case the cause was a bad extended filesystem attribute setting.
Specifically, I had set "chattr +i /path/to/somedir" by mistake, and therefore made somedir (and its content) unwritable. To fix it, use "chattr -i /path/to/somedir".
It may not be so obvious that attributes are the problem. The permissions may look fine (say rwx), but attributes overrule those regular Unix permissions.
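If you suspect the same issue, a quick way to check is lsattr (the path below is just a placeholder):

lsattr -d /path/to/somedir        # an 'i' in the output means the immutable flag is set
sudo chattr -i /path/to/somedir   # clear it (usually requires root)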
I found that I had not reconfigured my repository to use ssh. The error listed is what I got. I then changed my .git/config file to use ssh: and it cleared up.

How to hide a device/volume after mounting on Linux

I wrote a little program in which I mount an encrypted volume with VeraCrypt after the user enters the password, and show its content to the user in a specific way inside my program. Everything works fine, but I want to prevent the volume from being shown in Nautilus.
The following command mounts the volume:
"veracrypt -m ro /path/to/file/file -p" + pw
Veracrypt help command shows:
--fs-options=OPTIONS
Filesystem mount options. The OPTIONS argument is passed to mount(8)
command with option -o when a filesystem on a VeraCrypt volume is mounted.
This option is not available on some platforms.
But I am not able to find an option for the Linux mount command that will do the job. Is there one? What can I do?
A simple workaround may be to mount the volume in a directory whose name begins with a dot (.). Such directories are not visible in most file managers unless they are told to show hidden files/folders.
Of course, obscuring the directory isn't perfect and is no substitute for setting proper access controls via chmod.
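For example, something along these lines might work, reusing the command from the question and an assumed mount point under the user's home directory (untested sketch; option placement may vary between VeraCrypt versions):

mkdir -p ~/.hidden_mounts/vault
veracrypt -m ro -p "$pw" /path/to/file/file ~/.hidden_mounts/vault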

Prevent a Unix domain socket file in the filesystem from being deleted while socket is bound

Is it possible on Linux or MacOSX to prevent a Unix domain socket file (e.g. in /tmp) that is currently bound from being deleted? I want a mode 0777 socket that users can connect to but that users cannot delete while the daemon is running.
Right now a normal user can 'rm' the socket, preventing anyone else from accessing it until the daemon is restarted. Seems like it should be 'busy' if it's bound.
You could make a new subdirectory and set read only permissions on the directory after you make the socket:
mkdir /tmp/blah
cd /tmp/blah
# do stuff to create /tmp/blah/socket
chmod 555 /tmp/blah
rm /tmp/blah/socket
rm: cannot remove /tmp/blah/socket: Permission denied
(or the equivalent to that from C / your language of choice)
It depends entirely on the directory that contains the socket. /tmp is somewhat special in that it has the "sticky bit" set on the directory (if you execute ls -ld /tmp you will see the permissions field is usually drwxrwxrwt or, more usefully, mode 1777). That sticky bit (the t at the end) is important when set on a directory. Quoting man chmod:
The restricted deletion flag or sticky bit is a single bit, whose interpretation depends on the file type. For directories, it prevents unprivileged users from removing or renaming a file in the directory unless they own the file or the directory; this is called the restricted deletion flag for the directory, and is commonly found on world-writable directories like /tmp. For regular files on some older systems, the bit saves the program's text image on the swap device so it will load more quickly when run; this is called the sticky bit.
This is exactly what you want - file-system level protection against a user removing the file. It is also 100% portable to all modern UNIX-like environments.
So, if you are creating your endpoint in /tmp you already have the protections you want. If you want to create the endpoint elsewhere, for example /opt/sockets, simply chmod 1777 /opt/sockets. The last part of the "trick" to getting the protections you want is to ensure that the root user is the actual owner of the endpoint. If the endpoint is owned by user fred then fred will always be able to delete the endpoint, which may well be a desirable thing. But if not, simply chown root:root /path/to/endpoint.
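Putting that together, a minimal sketch of a dedicated socket directory (the path and socket name are made up):

sudo mkdir -p /opt/sockets
sudo chmod 1777 /opt/sockets   # world-writable, but sticky: only the owner can delete entries
# the daemon, running as root, then binds /opt/sockets/mydaemon.sock and chmods it 0777;
# ordinary users can connect to the socket but cannot rm it.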

Linux folder permissions

At my office, we have a network directory structure like this:
/jobs/2004/3999-job_name/...
/jobs/2004/4000-job_name/...
The issue is that employees rename the "4000-job_name" folders (which in turn breaks other things that rely on the name being consistent with a database).
How can I stop users from renaming the parent folder while still allowing them full control of that folder's contents?
Please keep in mind that this is a Samba share that Windows users will be accessing.
I think you want to do this:
chmod a=rx /jobs #chdir and lsdir allowed, modifying not
chmod a=rwx /jobs/* #allow everything to everyone in the subdirectories
Since the directories /jobs/* are in fact entries in /jobs, their names cannot be changed without write permission on /jobs. With the commands above, everyone is allowed to do anything inside the subdirectories of /jobs/.
Also, be sure to set the permissions of new directories to rwx as you add them.
(edit by Bill K to fix the examples--the solution was correct but he misread the question due to the strange coloring SO added)
The question has already been answered, so I'm just gonna make a brief remark: in your question, you use the terms "folder" and "directory" interchangeably. Those two are very different, and in my experience 99% of all problems with Unix permissions have to do with confusing the two. Remember: Unix has directories, not folders.
EDIT: a folder is two pieces of cardboard glued together, that contain files. So, a folder is a container, it actually physically contains the files it holds. So, obviously a file can only be in one container at a time. To rename a file, you not only need access to the folder, you also need access to the file. Same to delete a file.
A directory, OTOH, is itself a file. [This is, in fact, exactly how directories were implemented in older Unix filesystems: just regular files with a special flag, you could even open them up in an editor and change them.] It contains a list of mappings from name to location (think phone directory, or a large warehouse). [In Unix, these mappings are called links or hardlinks.] Since the directory only contains the names of the files, not the files themselves, the same file can be present in multiple directories under different names. To change the name of a file (or more precisely to change a name of a file, since it can have more than one), you only need write access to the directory, not the file. Same to delete a file. Well, actually, you can't delete a file, you can only delete an entry in the directory – there could still be other entries in other directories pointing to that file. [That's why the syscall/library function to delete a file is called unlink and not delete: because you just remove the link, not the file itself; the file gets automatically "garbage collected" if there are no more links pointing to it.]
That's why I believe the folder metaphor for Unix directories is wrong, and even dangerous. The number one security question on one of the Unix mailinglists I'm on, is "Why can A delete B's files, even though he doesn't have write access to them?" and the answer is, he only needs write access to the directory. So, because of the folder metaphor, people think that their files are safe, even if they are not. With the directory metaphor, it would be much easier to explain what's going on: if I want to delete you from my phonebook, I don't have to hunt you down and kill you, I just need a pencil!
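To make the directory-entry point concrete, here is a small shell illustration of how a file outlives the removal of one of its names (the file names are arbitrary):

echo hello > original.txt
ln original.txt other_name.txt   # a second directory entry (hard link) for the same file
rm original.txt                  # removes one entry, not the file's data
cat other_name.txt               # still prints "hello"; the data goes away only when the last link does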
If you make the parent directory--/jobs/2004/--non-writable for the users, they won't be able to rename that folder.
I did the following experiment on my own machine to illustrate the point:
ndogg@seriallain:/tmp$ sudo mkdir jobs
ndogg@seriallain:/tmp$ sudo mkdir jobs/2004
ndogg@seriallain:/tmp$ sudo mkdir jobs/2004/3999-job_name/
ndogg@seriallain:/tmp$ cd jobs/2004/
ndogg@seriallain:/tmp/jobs/2004$ sudo chown ndogg.ndogg 3999-job_name/
ndogg@seriallain:/tmp/jobs/2004$ ls -alh
total 12K
drwxr-xr-x 3 root root 4.0K 2009-03-13 18:23 .
drwxr-xr-x 3 root root 4.0K 2009-03-13 18:23 ..
drwxr-xr-x 2 ndogg ndogg 4.0K 2009-03-13 18:23 3999-job_name
ndogg@seriallain:/tmp/jobs/2004$ touch 3999-job_name/foo
ndogg@seriallain:/tmp/jobs/2004$ mv 3999-job_name/ blah
mv: cannot move `3999-job_name/' to `blah': Permission denied

Setting default permissions for newly created files and sub-directories under a directory in Linux?

I have a bunch of long-running scripts and applications that are storing output results in a directory shared amongst a few users. I would like a way to make sure that every file and directory created under this shared directory automatically has u=rwx,g=rwx,o=r permissions.
I know that I could use umask 006 at the head of my various scripts, but I don't like that approach as many users write their own scripts and may forget to set the umask themselves.
I really just want the filesystem to set newly created files and directories with a certain permission if it is in a certain folder. Is this at all possible?
Update: I think it can be done with POSIX ACLs, using the Default ACL functionality, but it's all a bit over my head at the moment. If anybody can explain how to use Default ACLs it would probably answer this question nicely.
To get the right ownership, you can set the setgid bit on the directory with
chmod g+rwxs dirname
This will ensure that files created in the directory are owned by the group. You should then make sure everyone runs with umask 002 or 007 or something of that nature---this is why Debian and many other linux systems are configured with per-user groups by default.
I don't know of a way to force the permissions you want if the user's umask is too strong.
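As a sketch of that setup (the group and path names here are assumptions):

sudo mkdir -p /srv/shared
sudo chgrp team /srv/shared   # 'team' stands for whatever group your users share
sudo chmod 2775 /srv/shared   # the leading 2 is setgid: new files inherit the directory's group
umask 002                     # in each user's shell, so new files come out group-writable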
Here's how to do it using default ACLs, at least under Linux.
First, you might need to enable ACL support on your filesystem. If you are using ext4 then it is already enabled. Other filesystems (e.g., ext3) need to be mounted with the acl option. In that case, add the option to your /etc/fstab. For example, if the directory is located on your root filesystem:
/dev/mapper/qz-root / ext3 errors=remount-ro,acl 0 1
Then remount it:
mount -oremount /
Now, use the following command to set the default ACL:
setfacl -dm u::rwx,g::rwx,o::r /shared/directory
All new files in /shared/directory should now get the desired permissions. Of course, it also depends on the application creating the file. For example, most files won't be executable by anyone from the start (depending on the mode argument to the open(2) or creat(2) call), just like when using umask. Some utilities like cp, tar, and rsync will try to preserve the permissions of the source file(s) which will mask out your default ACL if the source file was not group-writable.
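To verify that the default ACL took effect, you can inspect the directory and a freshly created file (the path matches the example above):

getfacl /shared/directory           # should list default:user::rwx, default:group::rwx, default:other::r--
touch /shared/directory/newfile
getfacl /shared/directory/newfile   # with only base default entries, a new file comes out e.g. rw-rw-r--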
Hope this helps!
It's ugly, but you can use the setfacl command to achieve exactly what you want.
On a Solaris machine, I have a file that contains the acls for users and groups. Unfortunately, you have to list all of the users (at least I couldn't find a way to make this work otherwise):
user::rwx
user:user_a:rwx
user:user_b:rwx
...
group::rwx
mask:rwx
other:r-x
default:user:user_a:rwx
default:user:user_b:rwx
....
default:group::rwx
default:user::rwx
default:mask:rwx
default:other:r-x
Name the file acl.lst and fill in your real user names instead of user_X.
You can now set those acls on your directory by issuing the following command:
setfacl -f acl.lst /your/dir/here
In your shell script (or .bashrc) you may use something like:
umask 022
umask is a command that determines the settings of a mask that controls how file permissions are set for newly created files.
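For example, the effect of that umask on newly created files and directories looks like this:

umask 022
touch newfile && mkdir newdir
ls -ld newfile newdir   # typically -rw-r--r-- for the file (666 & ~022) and drwxr-xr-x for the directory (777 & ~022)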
I don't think this will do entirely what you want, but I just wanted to throw it out there since I hadn't seen it in the other answers.
I know you can create directories with permissions in a one-liner using the -m option:
mkdir -m755 mydir
and you can also use the install command:
sudo install -C -m 755 -o owner -g group /src_dir/src_file /dst_file
