I am tasked with monitoring the changes made to the source files of a website. I am not developing the website, just watching it. I am a firm believer in using version control, and am a fan of git, but the developer who is actually maintaining the site is not, and I have decided it is better to let him continue to work however he wants (don't ask). I do not want to have to give him any instructions whatsoever (except possibly telling him that I am adding files or directories that he can ignore).
I consider myself an intermediate-level user of git, so I want to run this by an expert or two.
I am thinking I can install git on the (Linux) server, and then ask for status, and do commits, via SSH. Will this work without jeopardizing the normal operation of the web server?
Yes, using Git on the server should not interfere with the normal operation of the web server (as mentioned in the comments, doing this on a production server is dodgy, but I'll leave that to one side).
Note that using Git normally creates a .git directory at the root of whatever you're tracking. If that root is your web server's document root, consider whether the contents of the .git directory could be exposed to outside visitors (depending on your server setup, this may or may not be a concern).
If you want to keep the .git directory somewhere outside your working tree, see the GIT_DIR environment variable (or the equivalent --git-dir option).
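For example, a minimal sketch of keeping the repository outside the web root (all paths here are placeholders):

# keep the repository itself outside the served directory
export GIT_DIR=/home/monitor/site.git
export GIT_WORK_TREE=/var/www/html
git init                                   # creates /home/monitor/site.git
cd "$GIT_WORK_TREE"
git add -A && git commit -m "initial snapshot"
# later, over SSH, to see and record what the developer changed:
git status
git add -A && git commit -m "snapshot $(date +%F)"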
Related
I have set up a development web server using VMware and Debian. It's all set up fine, but I have a problem.
I need to be able to work with the files on the server, or a copy of them, but it's important that both sets of files stay in sync. For example, if I'm working on index.php in my text editor, I don't want to have to upload it with FTP after every change, and I don't want to manually keep track of which files I've edited.
Any ideas on how I can achieve this?
Besides version control, you can achieve this with sshfs. It essentially mounts a remote directory on your local system.
More info:
http://en.wikipedia.org/wiki/SSHFS
https://www.digitalocean.com/community/tutorials/how-to-use-sshfs-to-mount-remote-file-systems-over-ssh
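For example, a minimal sketch (hostname and paths are placeholders):

# mount the server's web root locally over SSH
mkdir -p ~/devserver
sshfs user@devserver:/var/www ~/devserver
# ...edit ~/devserver/index.php directly in your editor...
# unmount when you are done
fusermount -u ~/devserver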
After much searching, I felt the best solution for my case was to use lsyncd to upload files to the development server any time a change is made.
Although I use Git, setting up a Git server and having to commit and push every time I make a change isn't what I want to be doing. Using lsyncd, I'm still able to use Git on my local machine to keep track of the project.
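For reference, a minimal sketch of the kind of invocation I mean (host and paths are placeholders); lsyncd watches the local tree and pushes each change over rsync/SSH:

# watch the local project and mirror every change to the dev server
lsyncd -nodaemon -rsyncssh /home/me/project devserver /var/www/project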
Is there any way to tell Git to stop copying file group and owner settings? My situation is as follows:
I am developing on a home server where I need to work under my own user's permissions (not root) in order to develop in Eclipse (Eclipse complains if files are owned by root, as it cannot work with them).
Once I am done, I use Git to synchronize with a remote server running Red Hat, where file and folder owners and groups are server specific. However, when I synchronize, my home server's ownership comes along as well, and Apache on the remote server throws errors because it cannot read the files, so I have to reset the ownership myself after every commit that adds or changes files.
Any thoughts how to change my workflow?
P.S.: I am using Linux/Debian on the home server.
Check the answer here: How do you deal with file ownership in git?
You're not doing anything wrong; this is just basic Git behavior. You can change the permissions locally to what they need to be on the server and make a new commit, or you can create a script to fix all of the ownerships/permissions on the server when you do your sync.
If you are using a git push to push the changes to your server via a git repository on the server, you can create a post-receive hook to call this script.
http://git-scm.com/book/en/Customizing-Git-Git-Hooks
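For example, a rough sketch of such a post-receive hook (the paths, user, and group are assumptions about your server, and whoever runs it needs permission to chown):

#!/bin/sh
# hooks/post-receive in the bare repository on the server
GIT_WORK_TREE=/var/www/site git checkout -f
# reset ownership and permissions so Apache can read everything
chown -R www-data:www-data /var/www/site
find /var/www/site -type d -exec chmod 755 {} +
find /var/www/site -type f -exec chmod 644 {} +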
One other thing did occur to me. A lot of Linux distros set the default umask to 0077 or 0007. Since I'm the only one using my laptop, I changed mine to 0002, since it just makes many things easier (plus my home directory is still 700). With that umask, new directories come out rwxrwxr-x (775) and new regular files rw-rw-r-- (664). Changing your umask would keep you from needing to think about setting the permissions later.
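If you want to try that, a minimal sketch (which profile file to use depends on your shell):

# make the relaxed umask permanent for your user
echo "umask 0002" >> ~/.profile
# after the next login, new directories are 775 (rwxrwxr-x)
# and new regular files are 664 (rw-rw-r--)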
I found the answer to my question after a while and thought I would post it just for the record.
User and group ownership is not stored in the repository at all, so it cannot be shared through it. Git only records a file's mode, and in practice just whether or not it is executable (644 vs. 755).
Files that Git creates or updates on checkout take the user and group ownership of whoever is running the Git commands.
I want to move only the website files changed since the published revision to a hosting account using SSH or FTP. The hosting account is Linux based but does not have any version control installed, so I can't simply do an update there, and the solution must run on the local development machines.
I'm essentially trying to do what http://www.deployhq.com/ does, but for free. I want to publish changes without having to re-upload everything or manually choose the files to move. I'm open to simply using a bash script that compares versions and copies each file (how? not that great with bash) since we'll be using Linux for development, but something with a web interface would be nice.
Thanks in advance for the help!
This seems more like a job for rsync than one for hg, given that the target doesn't have hg installed.
Something like so:
rsync -avz /path/to/local/files/ remote_host:/remote/path/
This would transfer all files recursively (-a implies -r) from .../local/files/ and place them in /remote/path/. The -a flag preserves file attributes and -z compresses the transfer.
rsync takes care of only transferring files that have changed. Be sure to watch for trailing slashes when specifying source paths; they matter (see the rsync man page).
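If the source tree is under version control, you may also want to exclude the metadata and do a dry run first; a sketch (paths and host are placeholders):

# dry run: show what would be transferred, excluding VCS metadata
rsync -avzn --exclude='.hg' --exclude='.git' /path/to/local/files/ remote_host:/remote/path/
# repeat without -n to actually transfer; add --delete only if you also
# want remote files removed when they no longer exist locally
rsync -avz --exclude='.hg' --exclude='.git' /path/to/local/files/ remote_host:/remote/path/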
I have a web dev. client using a shared host that doesn't allow shell access, and thus no access to SVN, Git, etc. I've tried to convince him to move to one of the many cheap options that allow it, but he won't do it. If I use version control on my staging server, are there any tools that will allow me to replicate the changes to production via ftp? Locally I have both mac & windows, the staging server is linux, so something that works on any of those platforms....
Using your Linux staging server you could keep a separate checked out copy that you use specifically for that host and then use a utility to mirror that directory with the host server.
LFTP is useful for this kind of thing. It's available for most Linux distributions and includes a 'mirror' function:
Mirror specified source directory to local target directory. If target directory ends with a slash, the source base name is appended to target directory name. Source and/or target can be URLs pointing to directories.
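For example, a one-shot push from a checked-out copy to the host might look like this (user, host, and paths are placeholders):

lftp -u ftpuser -e "mirror -R --only-newer /srv/checkout/ /public_html/; quit" ftp.example.com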
Some kind of FTP mirror software is what you need. I haven't tested it, but a quick search gave me this Java application. You could run that over your up-to-date checked-out repository.
A good tool for keeping an SVN repo and an FTP copy in sync is svn2web. May I suggest creating a separate branch for the production copy and merging into that branch whenever you upload to the production server.
You probably need to write a batch file or script (see the sketch after this list) that can:
Export the SVN repository
Upload the exported files to your Linux server via FTP
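A rough sketch of such a script (repository URL, credentials, and paths are all placeholders):

#!/bin/sh
# 1. export a clean copy of the repository (no .svn directories)
svn export --force https://svn.example.com/repo/trunk /tmp/site-export
# 2. mirror the export to the host over FTP
lftp -u ftpuser -e "mirror -R --only-newer /tmp/site-export/ /public_html/; quit" ftp.example.com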
Short of finding or implementing some FUSE-based copy-on-write file system that supports immutable versions, I'd just find another (more developer-friendly) host. As far as I know, no FTP server supports this natively, nor can I think of any elegant means of putting it in place with script hackery.
I could be wrong.
This question (and answer) really helped me just now as I implemented version control via gitolite on a separate server and lftp.
Here’s what I did:
Set up gitolite on my Ubuntu staging server
Created a base repo (e.g. foo.git) on the staging server
Cloned foo.git into a working directory on the staging server
Cloned foo.git into a working directory on my local development machine
Developed locally
Pushed changes to the foo.git repo on the staging server
On the staging server, logged into the working directory and pulled in changes from foo.git
lftp-ed into the shared host (like you mention above)
Once in the shared host, ran:
mirror -R --only-newer --delete --parallel=10 /source/directory/ /target/directory
Notes on the mirror command options:
-R - pushes the source/directory to the target/directory (without it, mirror pulls from the target down to the source; -R reverses the direction).
--only-newer - without this option, even if you only changed one file, the mirror command sends all the files in the source directory over to the target directory. With this option, only the changed (newer) files are transferred over the wire.
--delete - deletes files that are no longer in the source directory but are still in the target directory. One of my pushes involved deleting expired assets; without this option, the same files would have stayed put on my shared host after executing the mirror command.
--parallel=10 - transfers 10 files at once (instead of 1 by default). This made the process much faster.
While this is what worked for me, I'm sure there are ways to improve on it. I was grateful for this question and thought I'd share my experience.
Rsync will do this, although it runs over SSH (or its own daemon) rather than a plain FTP connection. You probably already have it installed if you're on a Unix-like system.
My home Perforce server died. I set up a new one.
The project I set it up to support died in the planning phase. The contents of the depot at that point were some prototype code and we never got to setting up a disaster recovery plan.
The dev machines still have the existing code on them. As much as possible, I'd like the change of servers to be transparent to the developers: use the same depots and the same directories, just change the name of the server to connect to and get back to work.
What do I need to do in order to make this happen?
I assume you don't have access to the Perforce depot files from your dead server? I also assume you know that you will lose all your history.
If that's the case, all you need to do is set up the new server, create a user and client with the same client root path as your original client spec was using on your dev machine, and check all the files into Perforce. Pretty simple really...
You may also need to rebind any SCM bindings you have in tools like Visual Studio, but that's about it.
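A rough sketch of those steps as Perforce commands (server address, client name, and depot path are placeholders):

# point the dev machine at the new server
export P4PORT=newserver:1666
p4 client my_client              # recreate the client spec with the same Root as before
cd /path/to/existing/code
p4 reconcile //depot/...         # opens every local file for add, since the new depot is empty
                                 # (on older servers without reconcile, use: p4 add ...)
p4 submit -d "Re-populate depot after server loss"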
What Shane suggested will populate the depot with one person's version of the files. But if you have another user who also has a copy then you'll need a couple of extra steps.
Firstly, just set one machine up as suggested by Shane.
You now need to get the second user set up. If you are confident that the version of the code user 2 has exactly matches what you put in the new server, then just create a client spec (probably same name as used before), and then sync using the "Force" flag. This will overwrite all the files on user 2's machine, and - more importantly - ensure Perforce knows which versions you really have.
However, if you are in any doubt as to any differences in code, then do not do the initial sync from the second user's machine. Instead, set up the client spec, then use the "Reconcile offline work" option - from P4V select the workspace, then it's a right click option. Then just walk through the subsequent dialog to sort out what you need.
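In command form, the two cases might look roughly like this for user 2 (the depot path is a placeholder):

# if user 2's files definitely match what is now in the depot:
p4 sync -f //depot/...      # force sync: overwrite local files and record them as had
# if there may be local differences, don't overwrite; record the depot state first:
p4 sync -k //depot/...      # update the have list without touching local files
p4 reconcile //depot/...    # then open any files that differ for add/edit/delete and review them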
Finally, if you want a very quick & dirty backup system for your server, I've posted some notes on my blog here - should take you just a couple of minutes to set up.