Subversion creating revision directories with too-strict permissions - linux

This morning, I tried to commit a revision to Subversion and found that all of a sudden I did not have permission to do so.
Can't move '/svn/db/txn-protorevs/21000-ga9.rev' to '/svn/db/revs/21/21001':
Permission Denied
Looking at the revs directory, I noticed that somebody had committed the 21000th revision, and the group write permission for the new directory is missing for some reason.
drwxrwsr-x 2 svn svn 24K 2008-10-27 10:04 19
drwxrwsr-x 2 svn svn 24K 2008-12-18 07:13 20
drwxr-sr-x 2 jeff svn 4.0K 2008-12-18 11:18 21
Setting the group write permission on that directory allows me to commit, so I'm good for another 1000 revisions. But why does this happen, and what can I change to make sure it doesn't happen again?

If you have more than one developer accessing the repository through the file:// protocol, you may want to look into setting up a Subversion server (using svnserve or Apache). With that solution, the server itself is responsible for all access and permissions on the repository files, and you won't run into this problem.
From the SVN Book:
Do not be seduced by the simple idea of having all of your users access a repository directly via file:// URLs. Even if the repository is readily available to everyone via a network share, this is a bad idea. It removes any layers of protection between the users and the repository: users can accidentally (or intentionally) corrupt the repository database, it becomes hard to take the repository offline for inspection or upgrade, and it can lead to a mess of file permission problems (see the section called “Supporting Multiple Repository Access Methods”). Note that this is also one of the reasons we warn against accessing repositories via svn+ssh:// URLs—from a security standpoint, it's effectively the same as local users accessing via file://, and it can entail all the same problems if the administrator isn't careful.

The best way to solve this problem is to access the repository through a server.
If you don't mind unencrypted communication (which seems to be the case since you're using file://), svnserve is very easy to set up:
svnserve -d -r /svn
See this reference for help in setting it up and configuring authentication.
The bummer is that you'll have to set up each user's authentication separately.
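For reference, per-user accounts for svnserve live in the repository's conf directory. A minimal sketch, assuming the repository is at /svn (the realm, user names and passwords below are only placeholders):

# /svn/conf/svnserve.conf
[general]
anon-access = none
auth-access = write
password-db = passwd

# /svn/conf/passwd
[users]
jeff = some-password
anna = another-password

Clients would then use svn checkout svn://yourserver/ instead of file:///svn.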
To hook into your OS's authentication you'd need to set up an Apache SVN server, which is a little more complicated; see these general instructions. You can find specific instructions for your OS with some googling.
Finally, if you want the fastest route to keeping the group write permission from being stripped while still using file://, just have everyone set a proper umask (002) in their shell startup, or use svn through a wrapper script that sets it:
#!/bin/bash
# svnwrapper.sh -- run svn with a group-friendly umask
umask 002
exec /usr/bin/env svn "$@"
Be sure that this umask isn't a security problem in your environment.

The most likely cause is the one Greg described: someone is accessing the repository through the file:// protocol with an overly restrictive umask.
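You can see the effect for yourself: the group write bit on anything you create is controlled by your umask. A quick illustration (directory names are arbitrary):
$ umask          # show the current mask; 0022 is a common default
$ umask 022; mkdir foo; ls -ld foo    # group write is stripped: drwxr-xr-x
$ umask 002; mkdir bar; ls -ld bar    # group write survives:    drwxrwxr-x
With the repository's directories setgid (the s in your listing), the group is inherited correctly either way; it's only the group write bit that the 022 umask removes.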

Related

Mercurial ACL prevents pull

Please help me with understanding the mechanics behind Mercurial in combination with ACL.
Our team uses Mercurial as a versioning system. The setup is very simple: two developers (one on Linux, one on Windows) and a remote repo (Linux). Every time the Windows user W checks in modifications and the Linux user L wants to pull afterwards, the following error messages (depending on the altered file(s)) are displayed:
pulling from ssh://user@domain.com
searching for changes
adding changesets
transaction abort!
rollback completed
abort: stream ended unexpectedly (got 0 bytes, expected 4)
remote: abort: Permission denied: /repopath/.hg/store/data/paper/tmp.txt.i
This is because file access is handled by Linux ACLs. After the ACL permissions are corrected with the setfacl command, everything runs smoothly and L is able to pull. Even if W clones the repo with correct permissions, the (new/modified) files in the .hg directory get the wrong (default) permissions. The parent folder of the repo has the correct permissions set, so I don't know where those permissions are inherited from.
Has anybody faced similar problems? Thank you in advance!
On my Linux box I had to set the setgid bit on the group permissions. Essentially, when you create a directory that will become a Mercurial repository you need to use chmod g+s <repoDirectory>. That forces anything created under that directory to inherit the directory's group, regardless of what the creating user's defaults are. I was using standard Unix groups instead of ACLs, so I'm not sure how this will work out for you.
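As a rough sketch of setting up a shared repository directory this way (the group name and path are just examples, and every committer still needs a cooperative umask):
$ sudo groupadd devs
$ sudo chgrp -R devs /srv/hg/project
$ sudo chmod -R g+rwX /srv/hg/project                      # group read/write; x only on dirs
$ sudo find /srv/hg/project -type d -exec chmod g+s {} +   # new files inherit the devs group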
When creating new files inside .hg/store, Mercurial copies the (classic) file permissions from the .hg/store directory itself and uses the user/group of the writing user, unless overridden with something like the setgid group bit (as @Eric Y mentioned). When modifying one of those files it retains the existing ownership and permissions if your user's umask allows it.
To my knowledge Mercurial does no special handling of file-system-level ACLs -- almost no tool does. That is why the ACL system also includes inheritance rules: directories have their own ACLs and also default ACLs that are inherited by newly created objects inside that directory. Maybe you need to look into setting the repository's default ACL in addition to setting its ACL.
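If you do stay with ACLs, that inheritance is configured with default ACL entries on the directories; a sketch along these lines (the user names and path are placeholders):
$ setfacl -R -m u:W:rwX,u:L:rwX /repopath       # effective ACLs on what exists now
$ setfacl -R -d -m u:W:rwX,u:L:rwX /repopath    # default ACLs, inherited by new files
$ getfacl /repopath/.hg/store                   # should now list default: entries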
That said, are you sure you really want to be using ACLs? If you're already using them and familiar with them, that's great, but if you broke them out just to get two-user access working in Mercurial then you're better off using a dedicated group (like developers) and the setgid group bit, or using a single shared ssh account dev@unixhost with separate ssh private keys for each developer (see the SharedSSH page in the Mercurial wiki for an example).
ACLs are very powerful, but seldom necessary.
Note to other readers: Nothing we're saying in this question has anything to do with Mercurial's ACL Extension -- that's within Mercurial and this is file system ACL level stuff.
There are only releases for Debian-based systems (like Ubuntu), but you should check out mercurial-server. It handles access control for Mercurial repos in a flexible manner, but outside of filesystem ACLs.

Subversion with Apache and permissions issues

I have set up an SVN repository for use with Apache 2 via the svnadmin create command and appropriate vhost configuration. I found that, in order to use the repository correctly, it must be owned by the wwwrun user (or the www group) or chmodded to 777.
I would like to ask if it's possible to explicitly tell Apache to impersonate another user when serving requests to a certain path (from vhost.conf), as the suphp extension does, so that I don't have to mess with permissions each time I create a repository.
Thank you in advance
To impersonate another user, Apache would need to have elevated privileges - this would defeat the point of running Apache with limited rights (as user wwwrun in your example) in the first place. Therefore, pick one of the following:
Run apache as root (dangerous, since a compromised apache will compromise your entire system)
Make wwwrun member of the svnrepo group that you give access to your repository to
Create a suid binary and a corresponding apache module to allow apache to impersonate (very complicated, easy to mess up - that's how suphp does it)
Change the permissions of the repository itself to allow everybody, wwwrun, or the www group.
Quite frankly, I don't see the problem you're having with the second or last option. Why can't you allow wwwrun to access your svn repository?
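For the second option, the shell work is roughly this, assuming the repository was created under /srv/svn/repo and Apache runs in the www group (adjust names for your distribution):
$ sudo chgrp -R www /srv/svn/repo
$ sudo chmod -R g+rwX /srv/svn/repo
$ sudo find /srv/svn/repo -type d -exec chmod g+s {} +   # new files stay in the www group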

Is setting the SUID/SGID bit on the SVN binary a security risk?

I would like to use a callback feature of an SVN repository (Unfuddle) to ping a URL on my server whenever a commit has been made. I have a PHP script accepting the message and attempting to call a shell script to execute an 'svn update'.
The problem I'm facing is that Apache is running under user 'www-data' and does not have access to the local working copy: '.svn/lock' permission denied. I have read all about setting SUID/SGID on shell scripts and how most *NIX OSes simply don't support it because of the security risks involved.
However I can set the SUID/SGID bit on the SVN binary file located at /usr/bin/svn. This alleviates the problem by allowing any user to issue SVN commands on any repository; not the most ideal...
My question is what's the most logical/sane/secure way to implement this type of setup and if I did leave the bits set on the svn binary does that open up a major security risk that I'm failing to realize?
Sorry for the long-winded post; this is my first question and I wanted to be thorough.
Thanks
There are two types of solution to this kind of problem: polling or event-driven.
An example of a polling solution would be to have a cronjob running on your server updating every N minutes. This would probably be the easiest to maintain if it works for you. You would sidestep the whole permissions issue by running the cron from the correct account.
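A minimal crontab entry for the polling approach, run as the user that owns the working copy (the path and interval are only examples):
# crontab -e, as the working-copy owner
*/5 * * * * /usr/bin/svn update -q /var/www/mysite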
The solution you covered is an event-driven one. Event-driven solutions are typically less resource-intensive, but can be harder to set up. Another example of an event-driven solution would be to have www-data belong to an svn group. Set the SGID bit and chgrp the repository directory to the svn group. This should allow anyone in that group to check in/out.
If you need to limit it to updating, you can escalate privileges or change user temporarily. You can use ssh single-purpose keys (aka command keys) to ssh in as the user with the correct privileges. The single-purpose key can then be used to do the update.
Another way to escalate privileges would be to use sudo -u [user] [command]. Update the /etc/sudoers file to allow www-data to escalate/change user to one that can perform the update.
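A sketch of that sudoers rule, assuming the working copy belongs to a hypothetical user called deploy (always edit with visudo):
# /etc/sudoers
www-data ALL=(deploy) NOPASSWD: /usr/bin/svn update /var/www/mysite
The PHP hook can then run sudo -u deploy /usr/bin/svn update /var/www/mysite and nothing else.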
Either way I would NOT use SUID/SGID scripts.
As CodeRich already said, you can set up a cron job to frequently update the working copy (that's also the solution I would use).
Setting svn SUID/SGID is bad, because svn can then write files anywhere in the file system (think of a publicly accessible repository containing a passwd and shadow file, checked out into your /etc). You could use a little suid wrapper program (which is SUID to your user account, not root) that chdirs into your working copy and executes svn with the correct parameters there. You can look at ikiwiki, which does this when it is used as a CGI.
Another way is to change the permissions of your working copy, so that the www-data user can create and write files there.
Change the permissions on your working copy so that Apache can write to it. You have a directory you want to change, so you need permissions to do so. It's that simple :)
The problem you then face is that any Apache user (or hacked page) can write all over your working copy, which is not a good thing. So you need to allow only a part of the system to write to it, and the best way to do that is to run your PHP script as the user who already owns the working copy.
That's easily achieved by running the PHP script as a CGI or FastCGI process. You can specify the user to run it as - it doesn't have to be www-data at all. Though it takes a bit more setting up, you get about as good a combination of event-driven updates and security as you're likely to get.
Here's a quick explanation of phpSuexec that does this.

How should I completely mirror a server using rsync, without using the root password?

I use rsync to mirror a remote server. It is said that using a root password with rsync is dangerous, so I created a special rsync user. It seems to work fine, but it cannot copy some files because of file permissions. I want to mirror whole directories for backup, and I guess this cannot be done without the root password - I mean, if root does not give permissions on specific files, no other account can read them. Are there other solutions, and why shouldn't I use the root account with rsync (I only do one-way copying, which does not affect the source)?
If you want the whole server, then yes, you need root. However, instead of "pulling" (where you have a cron job on your local server that does "rsync remote local"), can you possibly do it by "pushing" (where you have a cron job on the remote server that does "rsync local remote")? In this case, you won't need to configure the remote server to accept inbound root connections.
One option is to use an ssh login as root, but with ssh pubkey authentication instead of a password. In general, pubkeys are the way to go if you want to automate this later.
You'll want to look into the PermitRootLogin sshd_config setting, in particular the without-password setting or, if you want to get even more sophisticated and (probably) secure, the forced-commands-only setting.
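The usual shape of this - and roughly what the first link below walks through - is a dedicated key whose authorized_keys entry forces a validation command, so the root login can only ever run rsync. A sketch, with the key, script name and paths as placeholders:
# /root/.ssh/authorized_keys on the server being mirrored
command="/root/validate-rsync.sh",no-pty,no-port-forwarding ssh-rsa AAAA... backup@client

# /root/validate-rsync.sh
#!/bin/sh
case "$SSH_ORIGINAL_COMMAND" in
  "rsync --server"*) $SSH_ORIGINAL_COMMAND ;;
  *) echo "rejected" >&2; exit 1 ;;
esac

# /etc/ssh/sshd_config
PermitRootLogin forced-commands-only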
Some useful links:
http://troy.jdmz.net/rsync/index.html
http://www.debian-administration.org/articles/209

CHMOD and the security for the directories on my server

I have a folder on my server on which I have changed the permissions to 777 (read, write and execute all) to allow users to upload their pictures.
So I want to know, what are the security risks involved in this?
I have implemented code to restrict what file formats can be uploaded, but what would happen if someone was to find the location of the directory, can this pose any threat to my server?
Can they start uploading any files they desire?
Thanks.
When users are uploading files to your server through a web form and some PHP script, the disk access on the server happens with the user id the web server is running under (usually nobody, www-data, apache, _httpd or even root).
Note here, that this single user id is used, regardless of which user uploads the file.
As long as there are no local users accessing the system by other means (ssh, for example), setting the upload directory's permissions to 0777 does not make much of a difference -- apart from somebody exploiting a security vulnerability somewhere else in your system, there is no one those permissions apply to anyway, and such an attacker would probably just use /tmp.
It is always good practice to set only those permissions on a file or directory that are actually needed. In this case that means probably something like:
drwxrws--- 5 www-data www-data 4096 Nov 17 16:44 upload/
I'm assuming that other local users besides the web server will want to access those files, like the sysadmin or a web designer. Add those users to the group your web server runs under and they don't need sudo or root privileges to access that directory. Also, the +s means that new files and directories in upload/ will automatically be owned by the same group.
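Getting to that state is something like the following, assuming the web server runs as www-data and 'designer' is another local account that should have access:
$ sudo chown www-data:www-data upload/
$ sudo chmod 2770 upload/              # drwxrws--- : group-writable plus setgid
$ sudo usermod -aG www-data designer   # let the designer's account into the group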
As to your last question: just because an attacker knows where the directory is, doesn't mean he can magically make files appear there. There still has to be some sort of service running that accepts files and stores them there... so no, setting the permissions to 0777 doesn't directly make it any less safe.
Still, there are several more dimensions to "safety" and "security" that you cannot address with file permissions in this whole setup:
uploaders can still overwrite each other's files because they all work with the same user id
somebody can upload a malicious PHP script to the upload directory and run it from there, possibly exploit other vulnerabilities on your system and gain root access
somebody can use your server to distribute child porn
somebody could run a phishing site from your server after uploading a lookalike of paypal.com
...and there are probably more. Some of those problems you may have addressed in your upload script, but then again, understanding unix file permissions and where they apply usually comes waaaay at the beginning of learning about security issues, which suggests that you are probably not ready yet to tackle all of the possible problems.
Have your code looked at by somebody!
By what means are these users uploading their pictures? If it's over the web, then you only need to give the web server or the CGI script user access to the folder.
The biggest danger here is that users can overwrite or delete other users' files. Nobody without access to this folder will be able to write to it (unless you have some kind of guest/anonymous user).
If you need a directory that everyone can create files in, what you want is to mimic the permissions of the /tmp directory.
$ chown root:root dir; chmod 777 dir; chmod +t dir;
This way any user can create a file, but they cannot delete files owned by other users.
Contrary to what others have said, the executable bit on a directory in unix systems means you can make that directory your current directory (cd to it). It has nothing to do with executing (execution of a directory is meaningless). If you remove the executable bit, nobody will be able to 'cd' to it.
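You can check this yourself with a scratch directory:
$ mkdir scratch; touch scratch/pic.jpg
$ chmod 666 scratch       # read/write for everyone, execute removed
$ cd scratch              # fails with Permission denied
$ cat scratch/pic.jpg     # also fails -- without x you cannot traverse the directory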
If nothing else, I would remove the executable permission for all users (if not for owner and group as well). With execute enabled, someone could upload a file that looks like a picture but is really an executable, which might cause no end of damage.
Possibly remove the read and write permissions for all users as well and restrict it to just owner and group, unless you need anonymous access.
You do not want the executable bit on. As far as *nix goes, the executable bit means you can actually run the file. So, for example, php scripts can be uploaded as type JPEG, and then someone can run that script if they know the location and it's within the web directory.
