We're using Vlad the Deployer to deploy Rails apps to our production and test servers. All our servers are Ubuntu servers.
We have a problem related to Linux permissions.
Vlad uses ssh to put files on any server, be it production or test. There are several people at my company, and each of us has a different account on each server.
On the other hand, the way our Apache server is configured, it uses the "owner" of a website directory when reading files in that directory.
As a result, the user who makes the first deployment becomes the "owner" of the site; other users can't deploy, because once the owner of the modified files changes, Apache can no longer read them.
Normally this isn't much of an issue, but now holidays are approaching and we'd like to solve this as cleanly as possible - for example, we'd like to avoid sharing passwords/ssh keys.
Ideally I would need one Vlad task that does something to the permissions of the deployed files so they can be fully modified by other users. I don't know enough about Unix commands to do this.
I would do it with group permissions.
Have the web root be /var/www/your-app/current.
/var/www/your-app/ should be group-writable by the group that everyone who deploys belongs to.
Set up the deploy scripts so that they write to a directory called /var/www/your-app/<timestamp>, where <timestamp> is the current timestamp.
/var/www/your-app/current is a symlink, and once you have successfully copied all files to the new directory you update the target of the symlink so that it points to the directory you just created.
This way everyone can deploy, and you can see who deployed what version.
This also makes the deploy atomic, so nothing will break if you lose your network connection in the middle of the deploy.
Since you don't delete the old release directories, you can easily roll back to a "last good" state if you happen to introduce a bug.
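As a rough sketch, something like the following (paths, the rsync step, and the use of GNU mv -T are my assumptions; Capistrano-style tools such as vlad do essentially the same thing for you):

#!/bin/sh
# Timestamped release directory plus an atomic symlink swap.
APP=/var/www/your-app                      # group-writable by your deploy group
RELEASE="$APP/$(date +%Y%m%d%H%M%S)"       # e.g. /var/www/your-app/20240101120000

mkdir "$RELEASE"
rsync -a ./build/ "$RELEASE/"              # copy the new version into place

# Create the new symlink beside the old one and rename it over "current";
# rename(2) is atomic, so visitors always see a complete release.
ln -s "$RELEASE" "$APP/current.new"
mv -T "$APP/current.new" "$APP/current"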
Why don't you make all the files publicly readable? In the ~/.bashrc of each user put the line
umask o=r
http://en.wikipedia.org/wiki/Umask
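A minimal sketch of that setup, assuming bash is the login shell:

# Append to each user's ~/.bashrc so files they create stay world-readable
echo 'umask o=r' >> ~/.bashrc

# In a new shell, verify which permissions newly created files may receive
umask -S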
BTW, I have never heard of such an Apache option; are you saying that when Apache reads a file from /home/USER it runs with the UID of USER instead of "nobody" or "apache"? That sounds wonky.
I've been fighting with it for a couple months now and I've only found a couple ways to do it:
Use a single shared account for all the users deploying to the server (boo!)
Use different accounts, but chown everything to a common user account (www-data, rails, or similar) before performing account-dependent tasks (such as the svn update). This might work, but I haven't tested it.
Use access control lists. Someone has hinted to me that this might be the right solution, but I don't have the knowledge or time to work out how to set it up properly.
For now, we are sticking with a single user per project and chowning everything manually when needed. It's a bit of a pain, but it works.
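For options 2 and 3 above, a rough sketch of what the shared-group and ACL setups could look like (the group and path names are examples; setfacl comes from the acl package on Ubuntu):

# Shared group plus setgid directories, so new files inherit the group
sudo addgroup deployers
sudo chgrp -R deployers /var/www/your-app
sudo chmod -R g+rwX /var/www/your-app
sudo find /var/www/your-app -type d -exec chmod g+s {} +

# ACLs: grant the group on existing files, and as a default for files created later
sudo setfacl -R -m g:deployers:rwX /var/www/your-app
sudo setfacl -R -d -m g:deployers:rwX /var/www/your-app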
I am trying to set up a web application to work in IIS.
Among other things, I have created an "application" node pointing to my directory with binaries.
That application node uses pass-through-authentication, and it uses an AppPool for which my current user is set as the identity.
For some reason, IIS thinks it cannot access those files, as evidenced by the failing "Test Connection" check.
The user name + password combination is definitely correct, as IIS checks the validity of the credentials already upon input.
Note that this is far from the first time this has happened: I have set up this web application on many, many machines over the past five years, and yet every single time a new developer joins the team, or we have to set up a new machine, we struggle with these access-rights issues for hours or even days.
In the end (just as I have started doing in this case), all kinds of users (<machine name>\User, <machine name>\Benutzer, "Everyone", authenticated users, administrators, the anonymous user, IIS_IUSRS, ...) end up being granted full access to all files on the disk. Usually, at some point (after so much trying, configuring, and switching back and forth that no one knows what actually solved the issue), the problem is gone.
What is a more systematic and minimal approach to troubleshooting (or, better yet, avoiding) this issue when setting up a web application in IIS?
For file access issues, Process Monitor works well.
Set the filters to "Process Name is w3wp.exe" and "Result is not SUCCESS".
Add the "User" column. Then you can see at which step, and as which user, the file access fails.
I want to automate testing of my users' source code files by letting them upload C++, Python, Lisp, Scala, etc. files to my Linux machine, where a service will find them in a folder and then compile/run them to verify that they are correct. This server contains no important information about any of my users, so there's no database or anything for someone to hack. But I'm no security expert, so I'm still worried about a user somehow finding a way to run arbitrary commands with root privileges (basically I don't have any idea what sorts of things can go wrong). Is there a safe way to do this?
They will. If you give someone the power to compile, it is very hard to keep them from escalating to root. You say the server is not important to you, but what if someone sends an email from that server, or alters some script, in order to obtain information about your home machine or another server you use?
At the very least you need to strongly separate yourself from them. I would suggest Linux containers (https://linuxcontainers.org/); they are popular these days. But be careful: this is the kind of service that is always dangerous, no matter how much you protect yourself.
Read up on the chroot command in Linux.
This way you can give every running user program a separate, isolated environment.
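A rough sketch of the chroot approach (debootstrap, the jail path, and the user name are examples I chose; a chroot alone is not a hard security boundary, so still run the code as an unprivileged user and add resource limits):

# Build a minimal Debian/Ubuntu root filesystem to run submissions in
sudo debootstrap stable /srv/jail http://deb.debian.org/debian

# Create an unprivileged user inside the jail
sudo chroot /srv/jail /usr/sbin/useradd -m runner

# Copy a submission in and run it as that user inside the jail
# (install the needed compilers/interpreters inside the jail first)
sudo cp submission.py /srv/jail/home/runner/
sudo chroot /srv/jail /bin/su - runner -c 'python3 /home/runner/submission.py'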
You should under no circumstances allow a user to run code on your server with root privileges. A user could then just run rm -rf / and it would delete everything on your server.
I suggest you create a new local user/group with very limited permissions, e.g. one that can only access a single folder. When you run the code on your server, you run it in that folder, and the user cannot access anything else. After the code has finished, you delete the contents of the folder. You should also test this rigorously to check that they really can't destroy or manipulate anything.
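A sketch of that idea, assuming the user name, paths, and time limit below (this reduces the damage a submission can do, but it is not a complete sandbox):

# One-time setup: a locked-down account that owns only the work directory
sudo useradd --system --shell /usr/sbin/nologin sandbox
sudo mkdir -p /srv/grader/work
sudo chown sandbox:sandbox /srv/grader/work
sudo chmod 700 /srv/grader/work

# Per submission: copy it in, run it as that user with a time limit, then wipe the directory
sudo install -o sandbox -g sandbox -m 600 submission.cpp /srv/grader/work/
sudo -u sandbox timeout 30s sh -c 'cd /srv/grader/work && g++ submission.cpp -o prog && ./prog'
sudo find /srv/grader/work -mindepth 1 -delete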
If you're running on FreeBSD you could also look at Jails, which are a sort of lightweight virtualization that confines a user or program to its sandbox.
I'm trying to reimplement an existing server service in Node.JS. That service can be compared to a classic FTP server: authenticated users can read/create/modify files, but restricted to the permissions given to the matching system user name.
I'm pretty sure I can't have Node.JS run as root and switch users using seteuid() or the like, since that would break concurrency.
Instead, can I let my Node.JS process run as root and check permissions manually when accessing files? I'm thinking of some system call along the lines of "could user X create a file in directory Y?"
Otherwise, could I solve this by using user groups? Note that the service must be able to delete/modify a file created by the real system user, who can't be expected to set a special group just so that the service can access the file.
Running node as root sounds dangerous, but I assume there aren't many options left for you. Most FTP servers run as root too, for the same reason. It does mean, though, that you need to pay serious attention to the security of the code you are going to run.
Now to the question:
You are asking whether you can reimplement the Unix permissions control in node.js. Yes, you can, but you should not! There is almost a 100% chance you will leave holes or miss edge cases that the Unix core has already taken care of.
Instead, use process.setuid(id) as you mentioned. It will not defeat concurrency, but you will need to think in terms of parallel processes rather than async now. That is extra work, but it will spare you the headache of reinventing Unix security.
Alternatively, if all of the operations you want to carry out on the filesystem involve shell commands, then you can simply modify them to follow this pattern:
runuser -l userNameHere -c 'command'
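To answer the "could user X create a file in directory Y?" part with the same pattern, a hedged sketch (user and path are examples; runuser needs root, and sudo -u is the rough equivalent from a non-root shell):

# Exit status 0 means the user can create files there
# (creating a file needs write + execute permission on the directory)
runuser -l alice -c 'test -w /srv/ftp/shared && test -x /srv/ftp/shared' && echo writable

# Perform the actual filesystem operation as that user as well
runuser -l alice -c 'rm /srv/ftp/shared/old-report.txt'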
I'm developing on my local machine (apache2, php, mysql). When I want to upload files to my live server (nginx, mysql, php5-fpm), I first back up my www folder, export the databases, scp everything to my server (which is tedious, because it's protected with opiekey), log in, copy the files from my home directory on the server to my www directory, and if I'm lucky and the file permissions and everything else work out, I can view the changes online. If I'm unlucky, I have to research what went wrong.
Today I changed only one file, and had to go through the entire process just for that one file. You can imagine how annoying that is. Is there a faster way to do this? A way to automate it all? Maybe something like "commit" in SVN, and off you go?
How do you guys handle these types of things?
PS: I'm very, very new to all this, so bear with me! For example, I'm always copying files into my home directory on the server, because scp doesn't seem to be able to copy them directly into the /var/www folder?!
There are many utilities which will do that for you. If you know Python, try Fabric. If you know Ruby, you may prefer Capistrano. They allow you to script both local and remote operations.
If you have a whole farm of servers to take care of, those two might not work at the scale you want. For more than about 10 servers, have a look at Chef or Puppet to manage your servers completely.
Whether you deploy from a local checkout, packaged source (my preferred solution), a remote repository, or something entirely different is up to you. Whatever works for you is fine. Just make sure your deployments are reproducible (that is, you can always say "5 minutes ago it wasn't broken; I want to have now what I had 5 minutes ago"). Whatever way of versioning you use is better than no versioning (tagged releases are probably the most comfortable).
I think the "SVN" approach is very close to what you really want. You make a cron job that runs "svn update" every few minutes (or hg pull -u if you're using Mercurial; similar with Git). Another option is to use Dropbox (we use it for our web servers sometimes); this one is very easy to set up and share with non-developers (like UI designers)...
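A rough sketch of that cron job (the path, interval, and log location are examples; add it with crontab -e as the user that owns the working copy in the web root):

# Pull the latest revision every 5 minutes; use hg pull -u or git pull instead if applicable
*/5 * * * * cd /var/www/mysite && svn update -q >> "$HOME/autodeploy.log" 2>&1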
rsync will send only the changes between your local machine and the remote machine. It would be an alternative to scp. You can look into how to set it up to do what you need.
You can't copy to /var/www because the credentials you're using to log in for the copy session don't have write access to /var/www. Assuming you have root access, change the group (chgrp) on /var/www (or, better yet, a subdirectory) to your group, and change the permissions to give your group write access (chmod g+w).
rsync is fairly lightweight, so it should be simple to get going.
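A sketch of both suggestions, with example host, path, and group names:

# One-time, on the server as root: let your own account's group write to the web root
sudo chgrp -R developers /var/www/mysite
sudo chmod -R g+w /var/www/mysite

# From your local machine: push only the files that changed, straight into /var/www
# (--delete removes remote files you deleted locally; omit it if unsure)
rsync -avz --delete ./www/ user@example.com:/var/www/mysite/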
This relates to Tomcat + Spring + Linux. I am wondering what a good practice and place to store files would be. My idea is to put everything on the filesystem and then keep track of the files in the DB. My question is: where? I could put everything in the webapp directory, but that way some well-meaning colleague, or even I, could forget about that and erase everything during a clean + deploy.

The other idea is to use a folder elsewhere in the filesystem... but on Linux, which one would be standard for this? On top of that, there is the permission problem: I assume that Tomcat runs as the tomcat user, so it can't create folders around the filesystem at will. I'd have to create the folder myself as root and then change the owner... There is nothing wrong with this, but I'd like to automate the process so that no intervention is needed. Any hints?
The Filesystem Hierarchy Standard defines standard paths for different kinds of files. You don't make it absolutely clear what kind of files you're storing and how they're used, but either of
/srv/yourappname
/var/lib/yourappname
would be appropriate.
As for the privileges, you'll have to create the directories with the proper ownership and permissions during installation; if that's impossible, settle for the webapps directory.
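A one-line sketch of that installation step (the path and the tomcat user/group name are examples and may differ, e.g. tomcat9 on some Ubuntu versions):

# Run once with root privileges from the app's install/deploy script
sudo install -d -o tomcat -g tomcat -m 750 /var/lib/yourappname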