Make all files under a directory read-only without changing permissions? - linux

I'm a beginner in Linux. I have a question about the filesystem: is it possible to make all files under a directory read-only without changing their permissions?

No, it is not possible at the file level. The write permission is what grants the ability to write to a file, so you need to change it to make a file read-only for a specific user or group of users.
You'd probably like to read up on Unix file permissions when you have some spare time.
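For reference, the permission change this answer describes would look like this (a minimal illustration; /path/to/dir is a placeholder):

    # Recursively remove write permission for everyone (this does change permissions)
    chmod -R a-w /path/to/dir

    # Later, restore write permission for the owner
    chmod -R u+w /path/to/dir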

If you have suitable privileges, and if the filesystem is not critical to the machine (do not try this with "/", your root filesystem), you can remount the filesystem read-only. The details differ slightly from one system to another, but a useful discussion is found here:
https://askubuntu.com/questions/296331/how-to-mount-a-hard-disk-as-read-only-from-the-terminal
that applies to Linux and BSDs.
Because this does not actually modify the files, you can undo it by remounting the filesystem read/write, i.e., using "rw" where the "ro" option was used. For specific information, read the manpages for mount and fstab on the system you are using.
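As a minimal sketch of the remount approach (assuming /mnt/data is the mount point of the filesystem in question):

    # Remount an already-mounted filesystem read-only
    # (this fails with "mount point is busy" if files on it are open for writing)
    sudo mount -o remount,ro /mnt/data

    # Undo: remount it read/write
    sudo mount -o remount,rw /mnt/data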

Related

Make files read-only unless accessed by symlink path

Say I have a folder tree like:
root/
    ro/
    symlink-to-ro/
my question is two-fold:
(a) is there a way to make all files in the ro directory read-only, but if the files are accessed by way of the symlink, make them writable?
(b) the reverse of (a): is there a way to make the files writable only if they are accessed directly?
This is just for *nix/macOS.
No. Permissions are assigned to inodes, not to directory entries, so the same set of permissions is checked regardless of the path you used to access the file.
EDIT: Scratch that. I just remembered there is a way: while files and folders don't carry per-path permissions, mounts can be set to be read-only. If you were on Linux, a read-only bind mount would be exactly what you are looking for. AFAIK OS X can't do that, but you can fake it with an NFS mount (not as nice).
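A minimal sketch of the bind-mount approach on Linux, assuming /data is the real, writable directory and /data-ro is where you want the read-only view (older kernels need the two-step remount shown here):

    # Bind the directory to a second path, then mark that view read-only
    sudo mount --bind /data /data-ro
    sudo mount -o remount,ro,bind /data-ro

Writes through /data still work; writes through /data-ro are refused with a read-only filesystem error.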

Set default level of access for files within a directory

I know that I can set the access level of a directory using chmod, but I need to specify a default level of access for every new file that is ever created in a directory, until the end of time.
Is there some way to accomplish this? chmod'ing every single file every time it gets generated in this directory isn't practical in a production environment; I need all files created in this directory to default to 777.
Perhaps a little OT for StackOverflow.
Couple of options really, depending on what filesystem you've got.
Some filesystems support ACLs, including default ACLs that newly created files inherit (see the sketch below). http://linux.about.com/library/cmd/blcmdl1_setfacl.htm
Standard Unix won't allow you to force users to create files with mode 777, but you can set the setgid bit on a directory, so that all files created in that directory inherit the directory's group. If your default umask includes group write, that may do the trick.
On some filesystems, you can use inotify to detect changes and trigger a binary (like chmod).
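As a sketch of the ACL option above, assuming /srv/shared is the directory in question and the filesystem is mounted with ACL support:

    # Set a default ACL: new files and subdirectories inherit these entries
    setfacl -d -m u::rwx,g::rwx,o::rwx /srv/shared

    # Verify both the access and the default ACL
    getfacl /srv/shared

Note that a default ACL only sets an upper bound: a process that creates a file requesting mode 0666 still won't get the execute bits.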

What is the proper place to put named pipes on Linux?

I've got a few processes that talk to each other through named pipes. Currently, I'm creating all my pipes locally, and keeping the applications in the same working directory. At some point, it's assumed that these programs can (and will) be run from different directories. I need to create these pipes I'm using in a known location, so all of the different applications will be able to find the pipes they need.
I'm new to working on Linux and am not familiar with the filesystem structure. In Windows, I'd use something like the AppData folder to keep these pipes. I'm not sure what the equivalent is in Linux.
The /tmp directory looks like it could work nicely. I've read in a few places that it's cleared on system shutdown (and that's fine, I have no problem re-creating the pipes when I start back up), but I've also seen people say they're losing files while the system is up, as if it's cleaned periodically, which I don't want to happen while my applications are using those pipes!
Is there a place better suited for application-specific storage? Or would /tmp be the place to keep these (since they are, after all, temporary)?
I've seen SaltStack use /var/run. The only problem is that you need root access to write into that directory, but let's say you are going to run your process as a system daemon. SaltStack creates /var/run/salt at installation time and changes the owner to salt, so that later on it can be used without root privileges.
I also checked the Filesystem Hierarchy Standard, and although it isn't strictly binding, it says of /var/run:
System programs that maintain transient UNIX-domain sockets must place them in this directory.
Since named pipes are something very similar, I would go the same way.
On newer Linux distros with systemd, /run/user/<userid> (created by pam_systemd during login if it doesn't already exist) can be used for opening sockets and putting .pid files there, instead of /var/run where only root has write access. Also note that /var/run is a symlink to /run, so /var/run/user/<userid> can be used as well. For more info, check out this thread. The idea is that system daemons should have a /var/run/<daemon name>/ directory created during installation, with proper permissions, and put their sockets/pid files in there, while daemons run by the user (such as pulseaudio) should use /run/user/<userid>/. Other options are /tmp and /var/tmp.
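As a minimal sketch, creating a named pipe in the per-user runtime directory on a systemd system (myapp.fifo is a hypothetical name):

    # Prefer XDG_RUNTIME_DIR, which systemd sets to /run/user/<uid> at login
    PIPE_DIR="${XDG_RUNTIME_DIR:-/run/user/$(id -u)}"
    mkfifo "$PIPE_DIR/myapp.fifo"

    # Any process running as the same user can now find the pipe at a known path
    echo hello > "$PIPE_DIR/myapp.fifo" &
    cat "$PIPE_DIR/myapp.fifo"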

Maintain file and folder permissions inside archives

I am packaging and distributing a program I made for Windows, Linux and Mac. I plan to put the files and folders in zip archives.
If I set the correct folder and file permissions and then compress them into zip archives and redistribute them, will those permissions be maintained when the user extracts them on Linux or Mac systems? Or do they have to set the permissions themselves?
The zip format does not reliably preserve Unix file permissions; it depends on the tools used to create and extract the archive.
tar archives will preserve file permissions on Linux and OS X. I have no idea what happens on Windows. If you can test things out on Windows and it works, this is probably your best bet. It probably depends on what tool people use to unpack the archives.
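A minimal sketch of the tar approach (myprogram/ is a placeholder directory):

    # Permissions are recorded in the archive automatically
    tar -czf myprogram.tar.gz myprogram/

    # Extract with -p so permissions are restored exactly (ignoring the umask);
    # run as root to also restore ownership
    tar -xpzf myprogram.tar.gz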
Another option would be to create an installer, although there are few non-commercial options for creating cross-platform installers. Wikipedia has a list.
An installer is your best option here.
Let me explain why in a bit more detail.
Windows has these permissions:
Modify
Read & Execute
Read
Write
These are assigned to groups or usernames.
Unix-based systems have:
Read
Write
Execute
These can be assigned to owner, group, and others.
As you can see, it's difficult to map permissions from one system to the other, since the filesystems handle them differently.
However, some zip utilities like Info-ZIP support Unix filesystem features such as user and group IDs, file permissions, and symbolic links. Info-ZIP also supports NTFS filesystem permissions and will attempt to translate between NTFS and Unix permissions when extracting files. This can result in potentially unintended combinations, e.g. .exe files being created on NTFS volumes with executable permission denied.*
If you are planning on distributing your program, an installer is indeed your best solution.
*From Wikipedia: Zip (file format)

Is there any way to chroot a linux filemanager?

Just wondering, for an idea: would it be possible for a file manager like xfe, rox, or nautilus to run (at launch) inside a chroot, i.e. unable to navigate outside a given directory tree?
I would be interested if anyone has an idea of how to do this; it's for a cybercafé where I don't want people to access other directories.
(I'm looking for a solution other than Linux filesystem permissions.)
Your file manager will need to see and access some of the special files you are trying to hide (such as the contents of /proc and /dev) in order to work properly.
So yes, you can run a file manager in a chroot, but you need to put (a minimal version of) /dev and /proc inside the chroot for it to work.
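A minimal sketch of such a setup, assuming /srv/jail is the jail directory and that the file manager binary and its shared libraries have already been copied into it (xfe is just an example):

    # Provide minimal /dev and /proc inside the jail
    sudo mkdir -p /srv/jail/dev /srv/jail/proc
    sudo mount --bind /dev /srv/jail/dev
    sudo mount -t proc proc /srv/jail/proc

    # Run the file manager with the jail as its root
    # (an X11 application will also need the X socket, e.g. a bind mount of /tmp/.X11-unix)
    sudo chroot /srv/jail /usr/bin/xfe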
I would either hack the source of the file manager to hide what you want, or go all the way and run the file manager in a virtual machine so that no damage can be done by the end user to real computing resources. qemu/kvm is excellent for that.
What's wrong with using permissions? Generate a temp user on login and give them write access only to their home directory. Anyone who would try to hack your system is not going to have trouble getting around whatever roadblocks you have in place; they'd probably start by firing up an xterm anyway. Besides, security through obscurity isn't.
