Linux: changing file ownership without a copy?

I have a REST server whose purpose is to organize files generated by various users. To keep things simple, both the server and the users have access to a shared network filesystem.
The workflow is as follows: the user generates the file in a temp folder, then notifies the server, which moves the file to a place of its own and stores some metadata in a database. The server should then own the files and take care of their deletion as needed.
My problem is the following: since the files can be quite big, I'd like to avoid a costly copy and instead simply move the files from the temp folder to their final destination. However, moving the files doesn't change their ownership: a rename on the same filesystem preserves the original owner, and the server, as an unprivileged process, cannot chown files it does not own.
Is there a way around this, without (1) copying the file or (2) running the server as root?
EDIT: a couple of clarifications:
The file to be moved can be a directory with a hierarchy of files.
It would be nice to have the server own the files in the final location to restrict access to other users.

If you create a separate user just to handle the chown, you can give that user the CAP_CHOWN capability, and you can have a single executable owned by that user that has the setuid bit set on it (so it executes as that user).
For security, this executable should do as little as possible, with as many checks as possible.
It should do the chown for the server user after the server user does the move. It should live in a directory that is not writable by other users; it can check that it is happy with all the attributes of the files it is asked to chown (current owner, location, etc.); it can have the server user hard-coded (so nobody else can use it); and so on.
This will probably have to be a small C program, since most systems ignore the setuid bit on interpreted scripts. You can find several small example programs on the web that do exactly this kind of restricted chown.
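For instance, the deployment could look like this (all names here are hypothetical: a dedicated user "chownd" and helper source chown-helper.c; note that on modern Linux, a file capability is an alternative way to grant just CAP_CHOWN without setuid):
cc -O2 -Wall -o chown-helper chown-helper.c
sudo install -D -o chownd -g chownd -m 4755 chown-helper /usr/local/libexec/chown-helper
# alternatively, grant only the one capability instead of the setuid bit:
# sudo setcap cap_chown+ep /usr/local/libexec/chown-helper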

You should use a user group for all users and the server. Make the temp directory owned by that group and set it group-writable and sgid.
chown :groupname /path/to/temp
chmod 2770 /path/to/temp   # rwxrws---: group-writable, sgid so new files inherit the group
Then the server, as a member of the group, can read the files and move them out without a copy. Of course this means users can write to each other's files, but I guess this is not a concern because they stay there a very short time?
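A quick walk-through under this setup (paths are hypothetical; the file keeps the creating user's uid, but group access is enough to move it):
# as a user, with a suitable umask (e.g. 007) the file comes out group-writable:
touch /path/to/temp/result.dat          # -rw-rw---- user groupname
# as the server (a member of groupname): a same-filesystem rename, not a copy
mv /path/to/temp/result.dat /srv/store/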

Related

Set default level of access for files within a directory

I know that I can set the access level of a directory using chmod, but I need to specify a default level of access for every new file that is ever created in a directory, until the end of time.
Is there some way to accomplish this? chmod'ing every single file every time one is generated in this directory isn't practical in a production environment; I need all files created in this directory to default to 777.
Perhaps a little OT for StackOverflow.
Couple of options really, depending on what filesystem you've got.
Some filesystems support ACLs (see the setfacl sketch below). http://linux.about.com/library/cmd/blcmdl1_setfacl.htm
Standard Unix won't let you force a 777 creation mode on users, but you can set the setgid bit on a directory, so that all files created in that directory are owned by the directory's group. If your default umask includes group write, that may do the trick.
On some filesystems, you can use inotify to detect changes and trigger a binary (like chmod).
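To illustrate the first (ACL) option, with a hypothetical directory: a default ACL set on the directory is inherited by every file created in it afterwards. One caveat: a new file never gets more permissions than the creating program asked for (usually 0666), so plain files will typically end up rw-rw-rw- rather than 777.
setfacl -d -m u::rwx,g::rwx,o::rwx /srv/shared
getfacl /srv/shared   # the "default:" entries now apply to new files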

Linux users', specifically Apache, permissions settings [Linux noob :]

I have user:nobody and group:nogroup set for apache in httpd.conf.
Since I also use my own user to manage files on ssh through Samba, I would like to have access to the www folder for read/write, and also allow apache to read these files.
Some folders should have apache's write permissions.
Should I leave apache as nobody|nogroup?
I was thinking I should set my own user under a group called say "webadmins" and set apache a new user called say "apache" under the same group. Then allow the group to read all files, but only my user will have write access. Whenever apache needs write permission inside a folder, I would grant that manually. Is this a fair enough approach or am I missing something?
Thanks!
Usually any daemon will need to access a number of resources.
It is therefore good practice to run each daemon under a dedicated user:group, rather than nobody:nogroup.
Traditionally (e.g. on Debian systems) apache runs as www-data:www-data.
Finally, user permissions take precedence over group permissions (which in turn take precedence over other permissions).
This means that a directory where the owning user lacks write permission but the user's group can write is effectively read-only for that user (but not for other members of the group).
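A sketch of the layout proposed in the question, with hypothetical names (admin user "alice", apache user "apache", web root /var/www):
groupadd webadmins
usermod -aG webadmins alice          # your ssh/Samba user
usermod -aG webadmins apache         # the web server's user
chown -R alice:webadmins /var/www
chmod -R u=rwX,g=rX,o= /var/www      # you write, the group only reads
chmod g+w /var/www/some-upload-dir   # grant write to apache only where needed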

What is the proper place to put named pipes on Linux?

I've got a few processes that talk to each other through named pipes. Currently, I'm creating all my pipes locally, and keeping the applications in the same working directory. At some point, it's assumed that these programs can (and will) be run from different directories. I need to create these pipes I'm using in a known location, so all of the different applications will be able to find the pipes they need.
I'm new to working on Linux and am not familiar with the filesystem structure. In Windows, I'd use something like the AppData folder to keep these pipes. I'm not sure what the equivalent is in Linux.
The /tmp directory looks like it could work nicely. I've read in a few places that it's cleared on system shutdown (and that's fine, I have no problem re-creating the pipes when I start back up), but I've also seen people say they're losing files while the system is up, as if it's cleaned periodically, which I don't want to happen while my applications are using those pipes!
Is there a place more suited for application specific stores? Or would /tmp be the place that I'd want to keep these (since they are after all, temporary.)?
I've seen SaltStack using /var/run. The only problem is that you need root access to write into that directory, but let's say that you are going to run your process as a system daemon. SaltStack creates /var/run/salt at the installation time and changes the owner to salt so that later on it can be used without root privileges.
I also checked the Filesystem Hierarchy Standard, and even though it's not strictly binding, even it says:
System programs that maintain transient UNIX-domain sockets must place them in this directory.
Since named pipes are something very similar, I would go the same way.
On newer Linux distros with systemd, /run/user/<userid> (created by pam_systemd during login if it doesn't already exist) can be used for sockets and .pid files instead of /var/run, where only root has access. Also note that /var/run is a symlink to /run, so /var/run/user/<userid> can be used as well. For more info check out this thread. The idea is that system daemons should have a /var/run/<daemon name>/ directory created during installation with proper permissions and put their sockets/pid files there, while daemons run by a user (such as pulseaudio) should use /run/user/<userid>/. Another option is /tmp or /var/tmp.
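For a per-user process, that could look like the following ("myapp" is a hypothetical name; $XDG_RUNTIME_DIR points at /run/user/<userid> on systemd logins):
dir="${XDG_RUNTIME_DIR:-/tmp}/myapp"
mkdir -p "$dir"
[ -p "$dir/control.fifo" ] || mkfifo -m 600 "$dir/control.fifo"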

How to have multiple websites access a common directory

I have multiple websites on a dedicated server running under Linux/Apache. The sites need to access common data from a directory named 'DATA' under the doc root. I cannot replicate this directory for every site. I would like to put this under a common directory (say /DATA) and provide a symbolic link to this directory from the doc root for each of the sites.
www/DATA -> /DATA
Is there a better way of doing this?
If I put this common directory (/DATA) directly under the Linux root directory, can there be problems from a Linux standpoint, given that the directory can grow to several gigabytes and the subdirectories under /DATA will need to have write permissions?
Thanks
Use Alias along with the Directory directive. This will allow each site to access the directory via a URL path.
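Something like this in each site's configuration (Apache 2.4 syntax; on 2.2 you'd use Order allow,deny / Allow from all instead of Require):
Alias "/DATA" "/DATA"
<Directory "/DATA">
    Require all granted
</Directory>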
I'm not sure what exactly it means that you'll have scripts accessing the directory to provide data. Executing shell scripts to read and produce data is a different story entirely, and you probably want to avoid it if that's what you're doing. Application pages could be included in the data directory and use a relative path to get to the data; then all sites get the same scripts and data.
I don't know what your data is, but I'd probably opt to put it in a database. Think about how you have to update multiple machines if you have to scale your app. Maybe the data you have is simple and a DB is overkill.

CHMOD and the security for the directories on my server

I have a folder on my server on which I have changed the permissions to 777 (read, write and execute all) to allow users to upload their pictures.
So I want to know, what are the security risks involved in this?
I have implemented code to restrict what file formats can be uploaded, but what would happen if someone were to find the location of the directory? Could this pose any threat to my server?
Can they start uploading any files they desire?
Thanks.
When users are uploading files to your server through a web form and some PHP script, the disk access on the server happens with the user id the web server is running under (usually nobody, www-data, apache, _httpd or even root).
Note here, that this single user id is used, regardless of which user uploads the file.
As long as there are no local users accessing the system by other means (ssh, for example), setting the upload directory's permissions to 0777 makes little difference: apart from somebody exploiting a security vulnerability elsewhere in your system, there's no one those permissions apply to anyway, and such an attacker would probably just use /tmp.
It is always good practice to set only those permissions on a file or directory that are actually needed. In this case that means probably something like:
drwxrws--- 5 www-data www-data 4096 Nov 17 16:44 upload/
I'm assuming that other local users besides the web server will want to access those files, like the sysadmin or a web designer. Add those users to the group your web server runs under and they don't need sudo or root privileges to access that directory. Also, the +s means that new files and directories in upload/ will automatically be owned by the same group.
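The listing above corresponds to something like the following (the account name "alice" is an example):
chown www-data:www-data upload/
chmod 2770 upload/           # the leading 2 is the sgid bit shown as "s" above
usermod -aG www-data alice   # a designer/sysadmin account that needs access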
As to your last question: just because an attacker knows where the directory is, doesn't mean he can magically make files appear there. There still has to be some sort of service running that accepts files and stores them there... so no, setting the permissions to 0777 doesn't directly make it any less safe.
Still, there are several more dimensions to "safety" and "security" that you cannot address with file permissions in this whole setup:
uploaders can still overwrite each other's files because they all work with the same user id
somebody can upload a malicious PHP script to the upload directory and run it from there, possibly exploit other vulnerabilities on your system and gain root access
somebody can use your server to distribute child porn
somebody could run a phishing site from your server after uploading a lookalike of paypal.com
...and there are probably more. Some of those problems you may have addressed in your upload script, but then again, an understanding of unix file permissions and where they apply usually comes right at the beginning of learning about security, which suggests you are probably not yet ready to tackle all of the possible problems.
Have your code looked at by somebody!
By what means are these users uploading their pictures? If it's over the web, then you only need to give the web server or the CGI script user access to the folder.
The biggest danger here is that users can overwrite or delete other users' files. Nobody without access to this folder will be able to write to it (unless you have some kind of guest/anonymous user).
If you need a directory that everyone can create files in, what you want is to mimic the permissions of the /tmp directory.
$ chown root:root dir; chmod 1777 dir   # 1777 = rwxrwxrwt, i.e. 777 plus the sticky bit
This way any user can create a file, but they cannot delete files owned by other users.
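A quick demonstration with two hypothetical users:
sudo -u alice touch dir/a.txt
sudo -u bob rm -f dir/a.txt   # fails: the sticky bit forbids deleting others' files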
Contrary to what others have said, the executable bit on a directory in unix systems means you can traverse it: make it your current directory (cd to it) or reach the files inside by name. It has nothing to do with executing (execution of a directory is meaningless). If you remove the executable bit, nobody will be able to cd into it or open anything inside it.
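You can see this for yourself as an ordinary user:
mkdir demo && touch demo/file && chmod a-x demo
cd demo          # fails: Permission denied
cat demo/file    # fails too; the x bit is what lets you traverse into demo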
If nothing else, I would remove the executable permission on the uploaded files for all users (if not for owner and group as well). Otherwise, someone could upload a file that looks like a picture but is really an executable, which might cause no end of damage.
Possibly remove the read and write permissions for all users as well and restrict it to just owner and group, unless you need anonymous access.
You do not want the executable bit on. As far as *nix goes, the executable bit means the file can actually be run. So, for example, PHP scripts can be uploaded as type JPEG, and then someone can run that script if they know the location and it's within the web directory.
