Linux: Uploading files to a live server - How to automate the process?

I'm developing on my local machine (apache2, php, mysql). When I want to upload files to my live server (nginx, mysql, php5-fpm), I first back up my www folder, extract the databases, scp everything to my server (which is tedious, because it's protected with opiekey), log myself in, copy the files from my home directory on the server to my www directory, and if I'm lucky and the file permissions and everything else work out, I can view the changes online. If I'm unlucky, I'll have to research what went wrong.
Today, I changed only one file, and had to go through the entire process just for this file. You can imagine how annoying that is. Is there a faster way to do this? A way to automate it all? Maybe something like "commit" in SVN and off you fly?
How do you guys handle these types of things?
PS: I'm very very new to all this, so bear with me! For example I'm always copying files into my home directory on the server, because scp cannot seem to copy them directly into the /var/www folder?!

There are many utilities that will do this for you. If you know Python, try Fabric. If you know Ruby, you may prefer Capistrano. They allow you to script both local and remote operations.
If you have a farm of servers to take care of, those two might not work at the scale you want. For more than 10 servers, have a look at Chef or Puppet to manage your servers completely.
Whether you deploy from a local checkout, packaged source (my preferred solution), a remote repository, or something entirely different is up to you. Whatever works for you is OK. Just make sure your deployments are reproducible (that is, you can always say "5 minutes ago it wasn't broken; I want to have now what I had 5 minutes ago"). Whatever way of versioning you use is better than no versioning (tagged releases are probably the most comfortable).
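For example, a minimal sketch of a tagged, reproducible deployment if you happen to use Git (the version number and remote name are made up):
# tag the exact revision you are about to deploy and publish the tag
git tag -a v1.4.2 -m "release 1.4.2"
git push origin v1.4.2
# on the server, check out exactly that tag
git fetch && git checkout v1.4.2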

I think the "SVN" approach is very close to what you really want. You make a cron job that will run "svn update" every few minutes (or hg pull -u if using Mercurial, similar with Git). Another option is to use Dropbox (we use it for our web servers sometimes) - this one is very easy to set up and share with non-developers (like UI designers)...
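A rough sketch of the cron approach, assuming the live checkout lives in /var/www and the cron job runs as the user that owns it (both are assumptions):
# crontab -e, then add a line like this to pull changes every 5 minutes
*/5 * * * * cd /var/www && svn update -q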

rsync will send only the changes between your local machine and the remote machine. It would be an alternative to scp. You can look into how to set it up to do what you need.
You can't copy to /var/www because the credentials you're using to log in for the copy session don't have write access to /var/www. Assuming you have root access, change the group on /var/www (or better yet, a subdirectory) to your group (chgrp) and change the permissions to allow your group write access (chmod g+w).
rsync is fairly lightweight, so it should be simple to get going.
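A rough sketch of both pieces (group name, user, host, and paths are assumptions):
# one-time, on the server as root: let your group write to the web root
chgrp -R webdevs /var/www
chmod -R g+w /var/www
# from your local machine: push only the changed files over ssh
rsync -avz ./www/ user@example.com:/var/www/
# add --delete only if you also want remote files removed when they vanish locally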

Related

Updating a website through SSH

I'm only partially familiar with shell and my command line, but I understand the usage of * when uploading and downloading files.
My question is this: if I have updated multiple files within my website's directory on my local device, is there some simple way to re-upload everything with the put command, updating every existing file and adding any files that weren't there before?
I'd imagine that I'd have to somehow
put */ (to put all of the directories)
put * (to put all of the files)
and change permissions accordingly
It may also be in my best interests to first clear the directory so I have a true update, but then there's the problem of resetting all the permissions for every file and directory. I would think it would work in a similar manner, but I've had problems with it and I do not understand the use of the -r recursive option.
Basically, this kind of functionality is exactly what the rsync tool provides. And that tool can also be used in a "secure shell way", as outlined in this tutorial.
As an alternative, you could also look into sshfs. That is a utility that allows you to "mount" a remote file system (using ssh) in your local system. It would then be completely transparent to rsync that it is syncing a local and a remote file system; for rsync, you would just be syncing two directories!
Long story short: don't even think about implementing such "sync" code yourself. Yes, rsync itself requires some studying; like many Unix tools it is extremely powerful, so you have to be diligent when using it. But the thing is: this is a robust, well-tested tool. The time required to learn it will pay off pretty quickly.
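A rough sketch of both options (host, user, and paths are placeholders):
# rsync over ssh: -a preserves permissions and timestamps, -z compresses
rsync -avz -e ssh ./mysite/ user@example.com:/var/www/mysite/
# or mount the remote directory locally with sshfs and treat it like a local copy
mkdir -p /mnt/mysite
sshfs user@example.com:/var/www/mysite /mnt/mysite
rsync -av ./mysite/ /mnt/mysite/
fusermount -u /mnt/mysite   # unmount when done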

git: can I issue commands from two computers mounted to the same file system

I hope I can explain this in a simple way ...
The files I am adding to git are on a Linux server. I access these files from various computers, depending on where I am. Sometimes it is with a Windows machine, with a drive mapped to a network drive. Sometimes I ssh into the server.
I created my git repository while working on the Windows machine with a network drive mapped to the appropriate file system; let's call it W:. I was in W:\ when I created the repository.
When I ssh into the server, the directory would be something like: /home/mydir/WORKING_DIR/
Can I now, while in my ssh session, issue git commands to update the repository on the Linux machine?
This is not an answer, but it is too long for the comments.
I'm getting to the end of my tether with git. It has now completely messed up everything. Trying to google for a solution is really fruitless. Nothing is specific enough and then when you do try something that might be relevant it just totally screws things up further.
I tried changing the path in the config file manually. But I really didn't know what to change it to. If it should be relative, then relative to what?
I tried a couple of things and ended up with /home/myname/myworkingdir/
However, now it deleted my files again and set me back to some unknown state. Fortunately I backed my files up beforehand. So I tried to copy them back into place and add them again. I get "fatal: 'myfilename and path in here' is beyond a symbolic link". I have no idea what that is supposed to mean.
git status just shows more things to be deleted.
There are probably situations where this works without any issue (e.g. git status) and others where git assumes exclusive access (e.g. attempting to commit the same change simultaneously from two computers which both have access to the same working directory).
Wanting to ask this seems like a symptom of misunderstanding the Git model, anyway. You'll be much better off with a separate working directory on each computer (or even multiple check-outs on the same computer). Git was designed for distributed, detached operation - go with that, and you'll be fine.
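One common way to get there is a bare repository on the server that every machine pushes to and pulls from. A minimal sketch (host and paths are made up):
# on the server, one time: a bare repository as the shared copy
git init --bare /home/mydir/project.git
# on each computer (Windows or Linux): an independent clone over ssh
git clone ssh://user@server/home/mydir/project.git
# work and commit locally, then exchange changes
git pull --rebase
git push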

How to let users run arbitrary source code on my server

I want to automate testing of my users' source code files by letting them upload C++, Python, Lisp, Scala, etc. files to my Linux machine, where a service will find them in a folder and then compile/run them to verify that they are correct. This server contains no important information about any of my users, so there's no database or anything for someone to hack. But I'm no security expert, so I'm still worried about a user somehow finding a way to run arbitrary commands with root privileges (basically I don't have any idea what sorts of things can go wrong). Is there a safe way to do this?
They will. If you give someone the power to compile, it is very hard to keep them from escalating to root. You say that server is not important to you, but what if someone sends email from that server, or alters some script, to obtain information about your home machine or another server you use?
At the very least you need to strongly separate yourself from them. I would suggest Linux containers (https://linuxcontainers.org/); they are trendy these days. But be careful: this is the kind of service that is always dangerous, no matter how much you protect yourself.
Read more about the chroot command in Linux.
This way you can provide every running user program with a separate, isolated environment.
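A very rough sketch of the idea (the paths and user name are made up, and a real chroot tree also needs the interpreter or compiler plus its libraries copied into it):
# run the submission inside /srv/sandbox as an unprivileged user, with a time limit
sudo timeout 10 chroot --userspec=sandbox:sandbox /srv/sandbox /usr/bin/python3 /submission/run.py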
You should under no circumstances allow a user to run code on your server with root privileges. A user could then just run rm -rf / and it would delete everything on your server.
I suggest you make a new local user / group that has very limited permissions, e.g. can only access one folder. So when you run the code on your server, you run it in that folder, and the user cannot access anything else. After the code has finished, you delete the contents of the folder. You should also test this rigorously to check that they really can't destroy / manipulate anything.
If you're running on FreeBSD you could also look at Jails, which is sort of a lightweight form of virtualization that limits a user / program to that sandbox.
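A rough sketch of the limited local user approach described above, on Linux (user name and paths are assumptions):
# one-time setup: an unprivileged account with no login shell and a single writable folder
sudo useradd --system --shell /usr/sbin/nologin sandboxuser
sudo install -d -o sandboxuser -g sandboxuser -m 700 /srv/submissions
# run each submission as that user with a time limit, then wipe the folder
sudo -u sandboxuser timeout 10 python3 /srv/submissions/run.py
sudo rm -rf /srv/submissions/*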

FTP folders mounted locally used for SVN repository

I would like to create a SVN repository remotely using FTP protocol.
Is it advisable to do the following steps
mount the FTP directory as local with curlftpfs
create a repository as if it is local with svnadmin create
use it like in everyday life?
Do you know any issue with that approach?
RESULT AFTER MY ATTEMPT
I did make an attempt, but I got an error that looks like a timeout. The real problem is that this approach is too slow. Copying the repository each time, or a simple script to back up the folder, looks more feasible.
It is a dangerous approach; however, if you are working alone (as in "single user"), it would work. The biggest problems are:
You cannot provide exclusive locking mechanisms over the network
All users will have direct access to the repository's internal files; if somebody deletes a file in revs, your repository is damaged beyond repair
You should set up Apache with
SVNAutoversioning on
Then you could mount your repo URL as a WebDAV folder. Each change to these files will result in a single commit, without the need for a working copy.
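As a sketch, assuming mod_dav_svn is loaded and the repository lives at /var/svn/repo (both are assumptions):
<Location /svn>
    DAV svn
    SVNPath /var/svn/repo
    SVNAutoversioning on
</Location>
Clients can then mount http://yourserver/svn as a WebDAV folder, and every save turns into a commit.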

vlad the deployer - deploying with different users?

We're using vlad the deployer for deploying rails apps to production and test servers. All our servers are Ubuntu servers.
We have a problem related with linux permissions.
Vlad uses ssh to put files on any server, be it production or test. My company has several people, and each one has a different account on each server.
On the other hand, the way our Apache server is configured, it uses the "owner" of a website directory for reading files on that directory.
As a result, the user that makes the first deployment becomes the "owner" of the site; other users can't make deployments - Apache will not be able to read the modified files, since the owner has changed.
Normally this isn't much of an issue, but now holidays are approaching and we'd like to solve this as cleanly as possible - for example, we'd like to avoid sharing passwords/ssh keys.
Ideally I would need one vlad task that does something to the permissions of the deployed files so they could be completely modified by other users. I don't know enough about unix commands in order to do this.
I would do it with group permissions.
have the web root be /var/www/your-app/current
/var/www/your-app/ should be group writable by the group that all persons doing deploys belong to.
set up the deploy scripts so that they write to a directory called /var/www/your-app/<timestamp> where timestamp is the current timestamp.
/var/www/your-app/current is a symlink, and when you have successfully copied all files to the new directory you update the target of the symlink so that it points to the directory you just created.
This way everyone can deploy, and you can see who deployed what version.
This also makes the deploy atomic, so nothing will break if you lose your network connection in the middle of the deploy.
Since you won't delete the old directories, you can easily roll back to a "last good" state if you manage to introduce some bug.
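A rough sketch of that flow as a deploy step (paths are placeholders):
# create a new timestamped release directory and copy the build into it
RELEASE=/var/www/your-app/$(date +%Y%m%d%H%M%S)
mkdir -p "$RELEASE"
rsync -a ./build/ "$RELEASE/"
chmod -R g+w "$RELEASE"
# switch the "current" symlink to the new release; mv -T makes the swap effectively atomic
ln -sfn "$RELEASE" /var/www/your-app/current.tmp
mv -T /var/www/your-app/current.tmp /var/www/your-app/current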
Why don't you make all the files publicly readable? In the ~/.bashrc of each user put the line
umask o=r
http://en.wikipedia.org/wiki/Umask
BTW I have never heard of such an Apache option; are you saying when Apache reads a file from /home/USER it runs with the UID of USER, instead of "nobody" or "apache"? That sounds wonky.
I've been fighting with it for a couple months now and I've only found a couple ways to do it:
Use a single shared account for all the users deploying to the server (boo!)
Use different accounts, but chown to a common user account (www-data, rails, or similar) before performing account-dependent tasks (such as the svn update). This might work, but I haven't tested it.
Use access control lists (sketched below). Someone has hinted to me that this might be the right solution; however, I don't have the knowledge or time to make this work properly.
For now, we are just continuing using one single user per project, and chowning everything manually when needed. It's a bit of a pain, but it works.
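For the ACL option, a rough sketch (group name and path are assumptions; the filesystem must support ACLs):
# give the deployers group write access to the existing files and directories
setfacl -R -m g:deployers:rwX /var/www/your-app
# and set a default ACL on the directories so newly created files inherit the same access
find /var/www/your-app -type d -exec setfacl -d -m g:deployers:rwX {} +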
