How to have multiple websites access a common directory - Linux

I have multiple websites on a dedicated server running under Linux/Apache. The sites need to access common data from a directory named 'DATA' under the doc root. I cannot replicate this directory for every site. I would like to put this under a common directory (say /DATA) and provide a symbolic link to this directory from the doc root for each of the sites.
www/DATA -> /DATA
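(Concretely, that setup would be created with something like the following; the doc-root paths are placeholders, and Apache needs Options FollowSymLinks on the doc roots for the links to be served.)
# shared directory, one symlink per site
mkdir -p /DATA
ln -s /DATA /var/www/site1/www/DATA
ln -s /DATA /var/www/site2/www/DATA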
Is there a better way of doing this?
If I put this common directory (/DATA) directly under the Linux root directory, can that cause problems from a Linux standpoint, given that the directory can grow to several gigabytes and the subdirectories under /DATA will need write permissions?
Thanks

Use Alias along with the Directory directive. This will allow each site to access the directory via a URL path.
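A rough sketch of what that looks like in each site's httpd configuration (the filesystem path is a placeholder; Require all granted is the Apache 2.4 syntax, older versions use Order/Allow instead):
# map the /DATA URL path of every site to one shared directory
Alias /DATA "/srv/DATA"
<Directory "/srv/DATA">
    Require all granted
</Directory>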
I'm not sure exactly what it means that you'll have scripts accessing the directory to provide data. Executing shell scripts to read and produce data is a different story entirely, and you probably want to avoid that if it's what you're doing. Application pages could be included in the data directory and use a relative path to get to the data; then all sites get the same scripts and data.
I don't know what your data is, but I'd probably opt to put it in a database. Think about how you have to update multiple machines if you have to scale your app. Maybe the data you have is simple and a DB is overkill.

Related

How to perform multithreading with PowerShell?

How can we use multithreading in PowerShell?
My query is below:
I have a root folder which has various base folders under it.
I want to sync the folders from one server to another over SFTP.
I want all the folders under the root to be synced to a folder on the other server using multithreading, so that the transfer becomes faster.
I am using WinSCP.net's SynchronizeDirectories to sync, but it's quite slow.
Please suggest a better way if anyone can.
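One possible approach (a sketch, not a drop-in answer): start one background job per top-level folder, each with its own WinSCP session, since a single session transfers serially. Host name, credentials, fingerprint, assembly path and folder paths below are all placeholders.
$root = "C:\data\root"                                   # placeholder local root
$jobs = Get-ChildItem $root -Directory | ForEach-Object {
    Start-Job -ArgumentList $_.FullName, $_.Name -ScriptBlock {
        param($localPath, $name)
        # each job is a separate process, so it loads the assembly itself
        Add-Type -Path "C:\Program Files (x86)\WinSCP\WinSCPnet.dll"
        $options = New-Object WinSCP.SessionOptions -Property @{
            Protocol = [WinSCP.Protocol]::Sftp
            HostName = "server.example.com"              # placeholder
            UserName = "user"                            # placeholder
            Password = "password"                        # placeholder
            SshHostKeyFingerprint = "ssh-rsa 2048 ..."   # placeholder
        }
        $session = New-Object WinSCP.Session
        try {
            $session.Open($options)
            # push this folder's changes to the matching remote folder
            $session.SynchronizeDirectories(
                [WinSCP.SynchronizationMode]::Remote,
                $localPath, "/remote/root/$name", $False).Check()
        }
        finally {
            $session.Dispose()
        }
    }
}
$jobs | Wait-Job | Receive-Job
Note that the speedup depends on the bottleneck: if the link itself is saturated, parallel sessions won't help much.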

Linux: changing file ownership without a copy?

I have a REST server whose purpose is to organize files generated by various users. To keep things simple, both the server and the users have access to a shared network filesystem.
The workflow is as follows: the user generates the file in a temp folder, then notifies the server, which puts the file in a place of its own and stores some metadata in a database. The server should then own the files and take care of their deletion as needed.
My problem is the following: since the files can be quite big, I'd like to avoid a costly copy and instead simply move the files from the temp folder to their final destination. However, moving the files prevents the server from changing their ownership (see here for example).
Is there a way around this, without 1) copying the file or 2) running the server as root?
EDIT: a couple of clarifications:
The file to be moved can be a directory with a hierarchy of files
It would be nice to have the server own the files in the final location to restrict access to other users.
If you create a separate user just to handle the chown, you can give that user the CAP_CHOWN capability, and you can have a single executable owned by that user that has the setuid bit set on it (so it executes as that user).
For security, this executable should do as little as possible, with as many checks as possible.
It should do the chown for the server user after the server user does the move. It should live in a directory that is not writable by other users; it can run checks to ensure that it is happy with all the attributes of the files it is asked to chown (current owner, location, etc.), it can have the server user hard-coded (so nobody else can use it), and so on.
This will probably have to be a small C program, since most systems don't let you use setuid with scripts. You can find several small example programs on the web that do chown -- one is here
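A minimal sketch of such a helper, with hypothetical values for the server's UID and the only directory it will touch. The compiled binary would be owned by the dedicated user, marked setuid (chmod u+s), and granted CAP_CHOWN (for example with setcap cap_chown+ep); a real version would also need to handle directory hierarchies, which this sketch refuses.
/* chown_helper.c -- sketch of a minimal setuid chown helper. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/stat.h>

#define SERVER_UID     1001                /* hypothetical server UID */
#define ALLOWED_PREFIX "/srv/incoming/"    /* hypothetical location   */

int main(int argc, char *argv[])
{
    struct stat st;
    int fd;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    /* Only the server user may invoke the helper. */
    if (getuid() != SERVER_UID) {
        fprintf(stderr, "not authorized\n");
        return 1;
    }
    /* Only files inside the agreed-upon directory, no path tricks. */
    if (strncmp(argv[1], ALLOWED_PREFIX, strlen(ALLOWED_PREFIX)) != 0 ||
        strstr(argv[1], "..") != NULL) {
        fprintf(stderr, "path not allowed\n");
        return 1;
    }
    /* Open without following symlinks, then check it is a regular file;
       operating on the fd avoids a race between the check and the chown. */
    fd = open(argv[1], O_RDONLY | O_NOFOLLOW);
    if (fd < 0 || fstat(fd, &st) != 0 || !S_ISREG(st.st_mode)) {
        fprintf(stderr, "refusing %s\n", argv[1]);
        return 1;
    }
    /* Hand the file over to the server user; leave the group alone. */
    if (fchown(fd, SERVER_UID, (gid_t)-1) != 0) {
        perror("fchown");
        return 1;
    }
    close(fd);
    return 0;
}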
You should use a user group for all users and the server. Make the temp directory owned by that group and set it group-writable and sgid.
chown :groupname /path/to/temp
chmod g+s /path/to/temp
chmod 770 /path/to/temp
Then the server can adopt ownership of the file easily. Of course this means users can write other users' files, but I guess this is not a concern because they stay there a very short time?

Set default level of access for files within a directory

I know that I can set the access level of a directory using chmod, but I need to specify a default level of access for every new file that is ever created in a directory, until the end of time.
Is there some way to accomplish this? chmod'ing every single file every time it gets generated in this directory isn't practical in a production environment; I need all files created in this directory to default to 777.
Perhaps a little OT for StackOverflow.
Couple of options really, depending on what filesystem you've got.
Some filesystems support ACLs. http://linux.about.com/library/cmd/blcmdl1_setfacl.htm
Standard Unix won't allow you to force users to create files with mode 777, but you can set the setgid bit on a directory, so that all files created in that directory are owned by that directory's group. If your default umask includes group write, that may do the trick.
On some filesystems, you can use inotify to detect changes and trigger a binary (like chmod).
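Sketches of all three approaches (the path and group name are placeholders; default ACLs need a filesystem mounted with ACL support, and inotifywait comes from the inotify-tools package):
# 1. Default ACL: new files under /data/incoming inherit rwx for group webteam
setfacl -d -m g:webteam:rwx /data/incoming
getfacl /data/incoming        # verify the default: entries

# 2. Setgid directory: new files belong to the directory's group
chgrp webteam /data/incoming
chmod g+s /data/incoming      # combine with a umask of 002 for group write

# 3. inotify watcher: chmod each file as it appears (closest to "default 777")
inotifywait -m -e create --format '%w%f' /data/incoming |
while read f; do chmod 777 "$f"; done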

Recommended places and practices to store files generated from a Tomcat application?

This relates to Tomcat + Spring + Linux. I am wondering what a good practice and place to store files would be. My idea is to put everything on the filesystem and keep track of the files in the DB. My doubt is WHERE? I could put everything in the webapp directory, but that way some well-meaning colleague, or even I, could forget about it and erase everything during a clean+deploy. The other idea is to use a folder elsewhere in the filesystem... but on Linux, which one would be standard for this? On top of that, there is the permission problem: I assume Tomcat runs as the tomcat user, so it can't create folders around the filesystem at will. I'd have to create the folder myself as root and then change the owner. There is nothing wrong with this, but I'd like to automate the process so that no intervention is needed. Any hints?
The Filesystem Hierarchy Standard defines standard paths for different kinds of files. You don't make it absolutely clear what kind of files you're storing and how they're used, but either of
/srv/yourappname
/var/lib/yourappname
would be appropriate.
As for the privileges, you'll have to create the directories with the proper ownership and permissions during installation; if that's impossible, settle for the webapps directory.
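For example, assuming Tomcat runs as the tomcat user and the app name is the placeholder yourappname, a one-time install step (run as root, perhaps from a package post-install script) could be:
# create the data directory with the right owner and mode in one step
install -d -o tomcat -g tomcat -m 750 /var/lib/yourappname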

CHMOD and the security for the directories on my server

I have a folder on my server whose permissions I have changed to 777 (read, write and execute for everyone) to allow users to upload their pictures.
So I want to know, what are the security risks involved in this?
I have implemented code to restrict what file formats can be uploaded, but what would happen if someone were to find the location of the directory? Can this pose any threat to my server?
Can they start uploading any files they desire?
Thanks.
When users are uploading files to your server through a web form and some PHP script, the disk access on the server happens with the user id the web server is running under (usually nobody, www-data, apache, _httpd or even root).
Note here, that this single user id is used, regardless of which user uploads the file.
As long as there are no local users accessing the system by other means (ssh, for example), setting the upload directory's permissions to 0777 makes little difference -- apart from somebody exploiting a security vulnerability somewhere else in your system, there is no one those permissions apply to anyway, and such an attacker would probably just use /tmp.
It is always good practice to set only those permissions on a file or directory that are actually needed. In this case that means probably something like:
drwxrws--- 5 www-data www-data 4096 Nov 17 16:44 upload/
I'm assuming that other local users besides the web server will want to access those files, like the sysadmin or a web designer. Add those users to the group your web server runs under and they don't need sudo or root privileges to access that directory. Also, the +s means that new files and directories in upload/ will automatically be owned by the same group.
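That layout could be set up roughly like this (the user and group names are placeholders for whatever your web server runs as):
# directory owned by the web server's user and group
chown www-data:www-data upload
# 2770 = rwxrws---: full access for owner and group, setgid, nothing for others
chmod 2770 upload
# let a designer in by adding them to the web server's group
usermod -aG www-data designer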
As to your last question: just because an attacker knows where the directory is, doesn't mean he can magically make files appear there. There still has to be some sort of service running that accepts files and stores them there... so no, setting the permissions to 0777 doesn't directly make it any less safe.
Still, there are several more dimensions to "safety" and "security" that you cannot address with file permissions in this whole setup:
uploaders can still overwrite each other's files because they all work with the same user id
somebody can upload a malicious PHP script to the upload directory and run it from there, possibly exploit other vulnerabilities on your system and gain root access
somebody can use your server to distribute child porn
somebody could run a phishing site from your server after uploading a lookalike of paypal.com
...and there are probably more. Some of those problems you may have addressed in your upload script, but then again, understanding Unix file permissions and where they apply usually comes at the very beginning of learning about security issues, which suggests you are probably not yet ready to tackle all of the possible problems.
Have your code looked at by somebody!
By what means are these users uploading their pictures? If it's over the web, then you only need to give the web server or the CGI script user access to the folder.
The biggest danger here is that users can overwrite or delete other users' files. Nobody without access to this folder will be able to write to it (unless you have some kind of guest/anonymous user).
If you need a directory that everyone can create files in, what you want is to mimic the permissions of the /tmp directory.
$ chown root:root dir; chmod 777 dir; chmod +t dir;
This way any user can create a file, but they cannot delete files owned by other users.
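For reference, chmod 1777 dir sets the same bits in one step, and the sticky bit shows up as a trailing t when you list the directory, something like:
$ ls -ld dir
drwxrwxrwt 2 root root 4096 Nov 17 16:44 dir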
Contrary to what others have said, the executable bit on a directory in unix systems means you can make that directory your current directory (cd to it). It has nothing to do with executing (execution of a directory is meaningless). If you remove the executable bit, nobody will be able to 'cd' to it.
If nothing else, I would remove the executable permissions for all users (if not owner and group as well). With execute enabled, someone could upload a file that looks like a picture but is really an executable, which might cause no end of damage.
Possibly remove the read and write permissions for all users as well and restrict it to just owner and group, unless you need anonymous access.
You do not want the executable bit on. As far as *nix goes, the executable bit means you can actually run the file. So, for example, PHP scripts can be uploaded as type JPEG, and then someone can run that script if they know the location and it's within the web directory.
